WorldWideScience

Sample records for lempel-ziv incremental parsing

  1. On the Approximation Ratio of Lempel-Ziv Parsing

    DEFF Research Database (Denmark)

    Gagie, Travis; Navarro, Gonzalo; Prezza, Nicola

    2018-01-01

    … in the text. Since computing b is NP-complete, a popular gold standard is z, the number of phrases in the Lempel-Ziv parse of the text, where phrases can be copied only from the left. While z can be computed in linear time, almost nothing has been known for decades about its approximation ratio with respect …
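
    For concreteness, the quantity z counted in this record comes from the greedy left-to-right parse in which each phrase is copied from an earlier position. Below is a minimal Python sketch (quadratic and purely illustrative; the paper notes z is computable in linear time), following the common convention that a character with no earlier occurrence forms a phrase by itself:

```python
def lz_phrase_count(text: str) -> int:
    # Greedy left-to-right Lempel-Ziv parse: each phrase is the longest
    # prefix of the unparsed suffix that also occurs starting at an
    # earlier position ("copied from the left"); a character with no
    # earlier occurrence forms a phrase by itself. Returns z, the number
    # of phrases. O(n^2) for clarity only.
    n, i, z = len(text), 0, 0
    while i < n:
        longest = 0
        for j in range(i):                       # earlier start positions
            length = 0
            while i + length < n and text[j + length] == text[i + length]:
                length += 1                      # source may overlap phrase
            longest = max(longest, length)
        i += longest if longest > 0 else 1       # literal for a new symbol
        z += 1
    return z

print(lz_phrase_count("abababab"))  # 3 phrases: 'a', 'b', 'ababab'
```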

  2. Lempel-Ziv Compression in a Sliding Window

    DEFF Research Database (Denmark)

    Bille, Philip; Cording, Patrick Hagge; Fischer, Johannes

    2017-01-01

    We present new algorithms for the sliding window Lempel-Ziv (LZ77) problem and the approximate rightmost LZ77 parsing problem. Our main result is a new and surprisingly simple algorithm that computes the sliding window LZ77 parse in O(w) space and either O(n) expected time or O(n log log w + z log … To achieve this result, we combine a simple modification and augmentation of the suffix tree with periodicity properties of sliding windows. We also apply this new technique to obtain an algorithm for the approximate rightmost LZ77 problem that uses O(n(log z + log log n)) time and O(n) space and produces a (1 + ϵ …

  3. Picture data compression coder using subband/transform coding with a Lempel-Ziv-based coder

    Science.gov (United States)

    Glover, Daniel R. (Inventor)

    1995-01-01

    Digital data coders/decoders are used extensively in video transmission. A digitally encoded video signal is separated into subbands. Separating the video into subbands allows transmission at low data rates. Once the data is separated into these subbands it can be coded and then decoded by statistical coders such as the Lempel-Ziv based coder.

  4. Lempel-Ziv complexity analysis of one dimensional cellular automata.

    Science.gov (United States)

    Estevez-Rams, E; Lora-Serrano, R; Nunes, C A J; Aragón-Fernández, B

    2015-12-01

    The Lempel-Ziv complexity measure has been used to estimate the entropy density of a string. It is defined as the number of factors in a production factorization of the string. In this contribution, we show that its use can be extended, by using the normalized information distance, to study the spatiotemporal evolution of random initial configurations under cellular automata rules. In particular, the transfer of information between time-consecutive configurations is studied, as well as the sensitivity to perturbed initial conditions. The behavior of the cellular automata rules can be grouped into different classes, but no single grouping captures the whole nature of the involved rules. The analysis carried out is particularly appropriate for studying the computational processing capabilities of cellular automata rules.
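
    To make the definition concrete, here is a minimal sketch of the production-factorization (LZ76) count and the normalized complexity commonly used as an entropy-density estimate; the log2 normalization assumes a binary alphabet (use log base |alphabet| otherwise):

```python
from math import log2

def lz76_factors(s: str) -> int:
    # Number of factors in the Lempel-Ziv (1976) production factorization:
    # each factor is the shortest prefix of the remaining text that cannot
    # be copied from what precedes it (the copy source may overlap the
    # factor, hence the search window s[:i + l - 1]).
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

def entropy_density(s: str) -> float:
    # Normalized complexity c(n) * log2(n) / n, the usual entropy-density
    # estimate for binary strings.
    return lz76_factors(s) * log2(len(s)) / len(s)

print(lz76_factors("0001101001000101"))  # 6 factors: 0|001|10|100|1000|101
```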

  5. Time-space trade-offs for lempel-ziv compressed indexing

    DEFF Research Database (Denmark)

    Bille, Philip; Ettienne, Mikko Berggren; Gørtz, Inge Li

    2017-01-01

    Given a string S, the compressed indexing problem is to preprocess S into a compressed representation that supports fast substring queries. The goal is to use little space relative to the compressed size of S while supporting fast queries. We present a compressed index based on the Lempel-Ziv 1977 compression scheme. Let n and z denote the size of the input string and the compressed LZ77 string, respectively. We obtain the following time-space trade-offs. Given a pattern string P of length m, we can solve the problem in (i) O(m + occ lg lg n) time using O(z lg(n/z) lg lg z) space, or (ii) O(m(1 … best space bound, but has a leading term in the query time of O(m(1 + lg^ϵ z / lg(n/z))). However, for any polynomial compression ratio, i.e., z = O(n^(1-δ)) for constant δ > 0, this becomes O(m). Our index also supports extraction of any substring of length ℓ in O(ℓ + lg(n/z)) time. Technically, our …

  6. A short note on the paper of Liu et al. (2012). A relative Lempel-Ziv complexity: Application to comparing biological sequences. Chemical Physics Letters, volume 530, 19 March 2012, pages 107-112

    Science.gov (United States)

    Arit, Turkan; Keskin, Burak; Firuzan, Esin; Cavas, Cagin Kandemir; Liu, Liwei; Cavas, Levent

    2018-04-01

    The report entitled "L. Liu, D. Li, F. Bai, A relative Lempel-Ziv complexity: Application to comparing biological sequences, Chem. Phys. Lett. 530 (2012) 107-112" reports on the powerful construction of phylogenetic trees based on the Lempel-Ziv algorithm. However, the method explained in that paper does not give promising results on a data set of the invasive Caulerpa taxifolia in the Mediterranean Sea. In this short note, phylogenetic trees are obtained by the method proposed in the aforementioned paper.

  7. Correlation between detrended fluctuation analysis and the Lempel-Ziv complexity in nonlinear time series analysis

    International Nuclear Information System (INIS)

    Tang You-Fu; Liu Shu-Lin; Jiang Rui-Hong; Liu Ying-Hui

    2013-01-01

    We study the correlation between detrended fluctuation analysis (DFA) and the Lempel-Ziv complexity (LZC) in nonlinear time series analysis in this paper. Typical dynamic systems including a logistic map and a Duffing model are investigated. Moreover, the influence of Gaussian random noise on both the DFA and LZC is analyzed. The results show a high correlation between the DFA and LZC, which can quantify the non-stationarity and the nonlinearity of the time series, respectively. With the enhancement of the random component, the exponent α and the normalized complexity index C show increasing trends. In addition, C is found to be more sensitive to the fluctuation in the nonlinear time series than α. Finally, the correlation between the DFA and LZC is applied to the extraction of vibration signals for a reciprocating compressor gas valve, and an effective fault diagnosis result is obtained.

  8. Nonlinear complexity of random visibility graph and Lempel-Ziv on multitype range-intensity interacting financial dynamics

    Science.gov (United States)

    Zhang, Yali; Wang, Jun

    2017-09-01

    In an attempt to investigate the nonlinear complex evolution of financial dynamics, a new financial price model, the multitype range-intensity contact (MRIC) financial model, is developed based on the multitype range-intensity interacting contact system, in which the interaction and transmission of different types of investment attitudes in a stock market are simulated by virus spreading. Two new random visibility graph (VG) based analyses and Lempel-Ziv complexity (LZC) are applied to study the complex behaviors of return time series and the corresponding random sorted series. The VG method is based on complex network theory, and the LZC is a non-parametric measure of complexity reflecting the rate of new pattern generation of a series. In this work, real stock market indices are comparatively studied with the simulation data of the proposed model. The numerical empirical study shows similar complexity behaviors between the model and the real markets, confirming that the financial model is reasonable to some extent.

  9. Watermark Compression in Medical Image Watermarking Using Lempel-Ziv-Welch (LZW) Lossless Compression Technique.

    Science.gov (United States)

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohd; Ali, Mushtaq

    2016-04-01

    In teleradiology, image contents may be altered due to noisy communication channels and hacker manipulation. Medical image data is very sensitive and cannot tolerate any illegal change. Illegally changed image-based analysis could result in a wrong medical decision. Digital watermarking can be used to authenticate images and to detect as well as recover illegal changes made to teleradiology images. Watermarking of medical images with heavy payload watermarks causes image perceptual degradation, which directly affects medical diagnosis. To maintain the perceptual and diagnostic quality of the image during watermarking, the watermark should be losslessly compressed. This paper focuses on watermarking of ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks. Lossless compression reduces the watermark payload without data loss. In this research work, the watermark is the combination of a defined region of interest (ROI) and an image watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods based on compression ratio. LZW was found to perform better and was used for lossless watermark compression in ultrasound medical image watermarking. Tabulated results show the reduction in watermark bits and image watermarking with effective tamper detection and lossless recovery.
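
    The LZW scheme applied to the watermark can be sketched in a few lines. This is the textbook byte-oriented algorithm, not the authors' implementation, and the payload below is a made-up stand-in for the ROI-plus-key watermark:

```python
def lzw_compress(data: bytes) -> list[int]:
    # Textbook LZW: the dictionary starts with all 256 single bytes and
    # grows by one entry (current match + next byte) per emitted code.
    dictionary = {bytes([b]): b for b in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                          # keep extending the match
        else:
            out.append(dictionary[w])
            dictionary[wc] = len(dictionary)
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

def lzw_decompress(codes: list[int]) -> bytes:
    # Inverse: rebuild the same dictionary while decoding, handling the
    # one code that can refer to an entry not yet completed (w + w[0]).
    dictionary = {b: bytes([b]) for b in range(256)}
    w = dictionary[codes[0]]
    out = [w]
    for code in codes[1:]:
        entry = dictionary[code] if code in dictionary else w + w[:1]
        out.append(entry)
        dictionary[len(dictionary)] = w + entry[:1]
        w = entry
    return b"".join(out)

payload = b"ROI-mask-bytes " * 50        # hypothetical watermark payload
codes = lzw_compress(payload)
assert lzw_decompress(codes) == payload  # lossless round trip
print(len(payload), "bytes ->", len(codes), "codes")
```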

  10. Incremental Learning of Context Free Grammars by Parsing-Based Rule Generation and Rule Set Search

    Science.gov (United States)

    Nakamura, Katsuhiko; Hoshina, Akemi

    This paper discusses recent improvements and extensions to the Synapse system for inductive inference of context-free grammars (CFGs) from sample strings. Synapse uses incremental learning, rule generation based on bottom-up parsing, and search over rule sets. The form of production rules in the previous system is extended from Revised Chomsky Normal Form A→βγ, where each of β and γ is either a terminal or a nonterminal symbol, to Extended Chomsky Normal Form, which also includes A→B. From the result of bottom-up parsing, a rule generation mechanism synthesizes the minimum production rules required for parsing positive samples. Instead of the inductive CYK algorithm of the previous version of Synapse, the improved version uses a novel rule generation method, called "bridging," which bridges the missing part of the derivation tree for a positive string. The improved version also employs a novel search strategy, called serial search, in addition to minimum rule set search. The synthesis of grammars by serial search is faster than minimum set search in most cases. On the other hand, the size of the generated CFGs is generally larger than that found by minimum set search, and the system can find no appropriate grammar for some CFLs by serial search. The paper shows experimental results of incremental learning of several fundamental CFGs and compares the methods of rule generation and search strategies.
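
    For context on the bottom-up parsing step, here is the classic CYK recognizer for a grammar in Chomsky normal form; a minimal sketch rather than Synapse's rule-generating parser (note that the paper's Extended Chomsky Normal Form also allows unit rules A→B, which plain CYK as written here does not handle):

```python
from itertools import product

def cyk_recognize(words, terminal_rules, binary_rules, start="S"):
    # CYK bottom-up recognition for a CNF grammar.
    #   terminal_rules: {terminal: {A, ...}}  for rules A -> terminal
    #   binary_rules:   {(B, C): {A, ...}}    for rules A -> B C
    n = len(words)
    # table[i][j] = nonterminals deriving words[i : i + j + 1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][0] = set(terminal_rules.get(w, ()))
    for span in range(2, n + 1):                 # substring length
        for i in range(n - span + 1):            # substring start
            for split in range(1, span):         # length of left part
                for B, C in product(table[i][split - 1],
                                    table[i + split][span - split - 1]):
                    table[i][span - 1] |= binary_rules.get((B, C), set())
    return start in table[0][n - 1]

# Toy grammar: S -> A B, A -> 'a', B -> 'b'
print(cyk_recognize(list("ab"), {"a": {"A"}, "b": {"B"}},
                    {("A", "B"): {"S"}}))        # True
```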

  11. Dependency Parsing

    CERN Document Server

    Kubler, Sandra; Nivre, Joakim

    2009-01-01

    Dependency-based methods for syntactic parsing have become increasingly popular in natural language processing in recent years. This book gives a thorough introduction to the methods that are most widely used today. After an introduction to dependency grammar and dependency parsing, followed by a formal characterization of the dependency parsing problem, the book surveys the three major classes of parsing models that are in current use: transition-based, graph-based, and grammar-based models. It continues with a chapter on evaluation and one on the comparison of different methods, and it close …

  12. Ziv-aflibercept in metastatic colorectal cancer

    Directory of Open Access Journals (Sweden)

    Patel A

    2013-12-01

    Anuj Patel, Weijing Sun, Division of Hematology-Oncology, University of Pittsburgh Cancer Institute, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA. Abstract: The combination of cytotoxic chemotherapy and antiangiogenic agents has become a conventional treatment option for patients with metastatic colorectal cancer. Ziv-aflibercept is a fusion protein which acts as a decoy receptor for vascular endothelial growth factor (VEGF)-A, VEGF-B, and placental growth factor (PlGF); it was approved in combination with 5-fluorouracil, leucovorin, and irinotecan (FOLFIRI) for the treatment of patients with metastatic colorectal cancer that is resistant to or has progressed after an oxaliplatin-containing fluoropyrimidine-based regimen. Herein we review the role of tumor angiogenesis as the rationale for antiangiogenic therapy, the clinical data associated with ziv-aflibercept, and its current role as a treatment option compared to other antiangiogenic agents, such as bevacizumab and regorafenib. Keywords: aflibercept, angiogenesis, colorectal cancer

  13. Evaluation of Efficient XML Interchange (EXI) for Large Datasets and as an Alternative to Binary JSON Encodings

    Science.gov (United States)

    2015-03-01

    … Lempel-Ziv 1977 Algorithm; LZMA, Lempel-Ziv Markov-chain Algorithm; MB, Megabyte; NCW, Network-Centric Warfare; NoSQL, Not only Structured Query Language; NOW, Network … Not only Structured Query Language (NoSQL) database (MongoDB Documentation Project, 2014). Its stated design goals are to be lightweight or compact, quickly traversed by …

  14. Memory-Based Shallow Parsing

    NARCIS (Netherlands)

    Tjong Kim Sang, E.F.

    2002-01-01

    We present memory-based learning approaches to shallow parsing and apply these to five tasks: base noun phrase identification, arbitrary base phrase recognition, clause detection, noun phrase parsing and full parsing. We use feature selection techniques and system combination methods for improving …

  15. LZ-Compressed String Dictionaries

    OpenAIRE

    Arz, Julian; Fischer, Johannes

    2013-01-01

    We show how to compress string dictionaries using the Lempel-Ziv (LZ78) data compression algorithm. Our approach is validated experimentally on dictionaries of up to 1.5 GB of uncompressed text. We achieve compression ratios often outperforming the existing alternatives, especially on dictionaries containing many repeated substrings. Our query times remain competitive.
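
    For background, the LZ78 parse underlying these compressed dictionaries factors the text into (previous-phrase id, next character) pairs; a minimal sketch, not the paper's engineered trie-based structure:

```python
def lz78_parse(text: str) -> list[tuple[int, str]]:
    # LZ78 factorization: each output pair is (id of the longest already
    # seen phrase that prefixes the rest, next character). Phrase ids
    # index a growing dictionary; id 0 is the empty phrase.
    phrases = {"": 0}
    out, w = [], ""
    for ch in text:
        if w + ch in phrases:
            w += ch                              # extend current phrase
        else:
            out.append((phrases[w], ch))
            phrases[w + ch] = len(phrases)       # new dictionary entry
            w = ""
    if w:                                        # flush a pending match
        out.append((phrases[w[:-1]], w[-1]))
    return out

print(lz78_parse("ababab"))  # [(0, 'a'), (0, 'b'), (1, 'b'), (1, 'b')]
```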

  16. Unifying LL and LR parsing

    NARCIS (Netherlands)

    W.H.L.M. Pijls (Wim)

    1993-01-01

    In parsing theory, LL parsing and LR parsing are regarded as two distinct methods. In this paper the relation between these methods is clarified. As shown in the literature on parsing theory, for every context-free grammar a so-called non-deterministic LR(0) automaton can be constructed.

  17. Spatial-Aided Low-Delay Wyner-Ziv Video Coding

    Directory of Open Access Journals (Sweden)

    Bo Wu

    2009-01-01

    In distributed video coding, the side information (SI) quality plays an important role in Wyner-Ziv (WZ) frame coding. Usually, SI is generated at the decoder by motion-compensated interpolation (MCI) from the past and future key frames, under the assumption that the motion trajectory between adjacent frames is translational with constant velocity. However, this assumption is not always true, and thus the coding efficiency of WZ coding is often unsatisfactory for video with high and/or irregular motion. This situation becomes more serious in low-delay applications, since only motion-compensated extrapolation (MCE) can be applied to yield SI. In this paper, a spatial-aided Wyner-Ziv video coding (SA-WZVC) scheme for low-delay applications is proposed. In SA-WZVC, at the encoder, each WZ frame is coded as in existing common Wyner-Ziv video coding schemes, and meanwhile the auxiliary information is also coded with low-complexity DPCM. At the decoder, for WZ frame decoding, the auxiliary information is decoded first, and then SI is generated with the help of this auxiliary information by spatial-aided motion-compensated extrapolation (SA-MCE). Theoretical analysis proves that when a good tradeoff between auxiliary information coding and WZ frame coding is achieved, SA-WZVC achieves better rate-distortion performance than conventional MCE-based WZVC without auxiliary information. Experimental results also demonstrate that SA-WZVC can efficiently improve the coding performance of WZVC in low-delay applications.

  18. Successful single treatment with ziv-aflibercept for existing corneal neovascularization following ocular chemical insult in the rabbit model.

    Science.gov (United States)

    Gore, Ariel; Horwitz, Vered; Cohen, Maayan; Gutman, Hila; Cohen, Liat; Gez, Rellie; Kadar, Tamar; Dachir, Shlomit

    2018-03-13

    To evaluate the efficacy of ziv-aflibercept as a treatment for established corneal neovascularization (NV) and to compare its efficacy to that of bevacizumab following ocular chemical insult with sulfur mustard (SM) in the rabbit model. Chemical SM burn was induced in the right eye of NZW rabbits by vapor exposure. Ziv-aflibercept (2 mg) was applied once to neovascularized eyes by subconjunctival injection, while subconjunctival bevacizumab (5 mg) was administered twice a week for 3 weeks. Non-treated exposed eyes served as a control. Clinical follow-up, employing a slit-lamp microscope, was performed up to 12 weeks following exposure, and digital photographs of the cornea were taken for measurement of blood vessel length using image analysis software. Eyes were taken for histological evaluation 2, 4 and 8 weeks following treatment for general morphology and for visualization of NV, using H&E and Masson trichrome stainings, while conjunctival goblet cell density was determined by PAS staining. Corneal NV developed, starting as early as two weeks after exposure. A single subconjunctival treatment of ziv-aflibercept at 4 weeks post exposure significantly reduced the extent of existing NV already one week following injection, an effect which lasted for at least 8 weeks following treatment, while NV in the non-treated exposed eyes continued to advance. The extensive reduction in corneal NV in the ziv-aflibercept-treated group was confirmed by histological evaluation. Multiple bevacizumab treatments showed a benefit in NV reduction, but to a lesser extent compared to the ziv-aflibercept treatment. Finally, ziv-aflibercept increased the density of conjunctival goblet cells as compared to the exposed non-treated group. A single subconjunctival ziv-aflibercept treatment presented a highly efficient long-term therapeutic benefit in reducing existing corneal NV following ocular sulfur mustard exposure. These findings show the robust anti-angiogenic efficacy of ziv-aflibercept.

  19. Bit-coded regular expression parsing

    DEFF Research Database (Denmark)

    Nielsen, Lasse; Henglein, Fritz

    2011-01-01

    … the DFA-based parsing algorithm due to Dubé and Feeley to emit the bits of the bit representation without explicitly materializing the parse tree itself. We furthermore show that Frisch and Cardelli's greedy regular expression parsing algorithm can be straightforwardly modified to produce bit codings …

  20. Memory-Based Shallow Parsing

    OpenAIRE

    Sang, Erik F. Tjong Kim

    2002-01-01

    We present memory-based learning approaches to shallow parsing and apply these to five tasks: base noun phrase identification, arbitrary base phrase recognition, clause detection, noun phrase parsing and full parsing. We use feature selection techniques and system combination methods for improving the performance of the memory-based learner. Our approach is evaluated on standard data sets and the results are compared with that of other systems. This reveals that our approach works well for ba...

  1. Mutiple LDPC Decoding using Bitplane Correlation for Transform Domain Wyner-Ziv Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Huang, Xin; Forchhammer, Søren

    2011-01-01

    Distributed video coding (DVC) is an emerging video coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. This paper considers a Low Density Parity Check (LDPC) based Transform Domain Wyner-Ziv (TDWZ) video codec. To improve the LDPC coding performance in the context of TDWZ, this paper proposes a Wyner-Ziv video codec using bitplane correlation through multiple parallel LDPC decoding. The proposed scheme utilizes inter-bitplane correlation to enhance the bitplane decoding performance. Experimental results …

  2. Corrections of the NIST Statistical Test Suite for Randomness

    OpenAIRE

    Kim, Song-Ju; Umeno, Ken; Hasegawa, Akio

    2004-01-01

    It is well known that the NIST statistical test suite was used for the evaluation of AES candidate algorithms. We have found that the test settings of the Discrete Fourier Transform test and the Lempel-Ziv test in this suite are wrong. We give four corrections of mistakes in the test settings. This suggests that a re-evaluation of the test results is needed.

  3. Two models of minimalist, incremental syntactic analysis.

    Science.gov (United States)

    Stabler, Edward P

    2013-07-01

    Minimalist grammars (MGs) and multiple context-free grammars (MCFGs) are weakly equivalent in the sense that they define the same languages, a large mildly context-sensitive class that properly includes context-free languages. But in addition, for each MG, there is an MCFG which is strongly equivalent in the sense that it defines the same language with isomorphic derivations. However, the structure-building rules of MGs but not MCFGs are defined in a way that generalizes across categories. Consequently, MGs can be exponentially more succinct than their MCFG equivalents, and this difference shows in parsing models too. An incremental, top-down beam parser for MGs is defined here, sound and complete for all MGs, and hence also capable of parsing all MCFG languages. But since the parser represents its grammar transparently, the relative succinctness of MGs is again evident. Although the determinants of MG structure are narrowly and discretely defined, probabilistic influences from a much broader domain can influence even the earliest analytic steps, allowing frequency and context effects to come early and from almost anywhere, as expected in incremental models. Copyright © 2013 Cognitive Science Society, Inc.

  4. Video Scene Parsing with Predictive Feature Learning

    OpenAIRE

    Jin, Xiaojie; Li, Xin; Xiao, Huaxin; Shen, Xiaohui; Lin, Zhe; Yang, Jimei; Chen, Yunpeng; Dong, Jian; Liu, Luoqi; Jie, Zequn; Feng, Jiashi; Yan, Shuicheng

    2016-01-01

    In this work, we address the challenging video scene parsing problem by developing effective representation learning methods given limited parsing annotations. In particular, we contribute two novel methods that constitute a unified parsing framework. (1) Predictive feature learning from nearly unlimited unlabeled video data. Different from existing methods learning features from single-frame parsing, we learn spatiotemporal discriminative features by enforcing a parsing network to …

  5. Faster, Practical GLL Parsing

    NARCIS (Netherlands)

    A. Afroozeh (Ali); A. Izmaylova (Anastasia)

    2015-01-01

    Generalized LL (GLL) parsing is an extension of recursive-descent (RD) parsing that supports all context-free grammars in cubic time and space. GLL parsers have the direct relationship with the grammar that RD parsers have, and therefore, compared to GLR, are easier to understand, debug, …

  6. Dependency Parsing with Transformed Feature

    Directory of Open Access Journals (Sweden)

    Fuxiang Wu

    2017-01-01

    Dependency parsing is an important subtask of natural language processing. In this paper, we propose an embedding feature transforming method for graph-based parsing, transform-based parsing, which directly utilizes the inner similarity of the features to extract information from all feature strings, including the un-indexed strings, and alleviates the feature sparseness problem. The model transforms the extracted features into transformed features by applying a feature weight matrix, which consists of similarities between the feature strings. Since the matrix is usually rank-deficient because of similar feature strings, it can influence the strength of constraints. However, it is proven that the duplicate transformed features do not degrade the optimization algorithm: the margin infused relaxed algorithm. Moreover, this problem can be alleviated by reducing the number of nearest transformed features of a feature. In addition, to further improve parsing accuracy, a fusion parser is introduced to integrate transformed and original features. Our experiments verify that both the transform-based and the fusion parser improve parsing accuracy compared to the corresponding feature-based parser.

  7. Three-month outcome of ziv-aflibercept for exudative age-related macular degeneration.

    Science.gov (United States)

    Mansour, Ahmad M; Chhablani, Jay; Antonios, Rafic S; Yogi, Rohit; Younis, Muhammad H; Dakroub, Rola; Chahine, Hasan

    2016-12-01

    In vitro and in vivo studies did not detect toxicity to retinal pigment epithelium cells using intravitreal ziv-aflibercept. Our purpose is to ascertain the 3-month safety and efficacy in wet age-related macular degeneration (AMD) treated with intravitreal ziv-aflibercept. Prospectively, consecutive patients with wet AMD underwent ziv-aflibercept intravitreal injection (1.25 mg/0.05 mL) from March 2015 to November 2015. Monitoring of best-corrected visual acuity, intraocular inflammation and cataract progression, together with spectral domain optical coherence tomography, was carried out at baseline, day 1, 1 week, 1 month, 2 months and 3 months after injection. 30 eyes were treated (22 Caucasians, 8 Indians; 16 men, 14 women; 14 right eyes and 16 left eyes), with a mean age of 74.3 years, comprising 11 treatment-naïve and 19 previously treated cases. Best-corrected visual acuity improved from logMAR 1.08 at baseline to 0.74 at 1 week, 0.72 at 1 month, 0.67 at 2 months and 0.71 at 3 months (p<0.001 for all time periods). Central macular thickness in microns decreased from 332.8 to 302.0 at 1 week, 244.8 at 1 month, 229.0 at 2 months and 208.2 at 3 months (p<0.001 for all time periods). There were no signs of intraocular inflammation, change in lens status or increase in intraocular pressure throughout the study. Off-label use of ziv-aflibercept improves visual acuity without detectable ocular toxicity and offers a cheaper alternative to the same molecule aflibercept, especially in low/middle-income countries and in countries where aflibercept (Eylea) is not available. NCT02486484.

  8. Contextual Semantic Parsing using Crowdsourced Spatial Descriptions

    OpenAIRE

    Dukes, Kais

    2014-01-01

    We describe a contextual parser for the Robot Commands Treebank, a new crowdsourced resource. In contrast to previous semantic parsers that select the most-probable parse, we consider the different problem of parsing using additional situational context to disambiguate between different readings of a sentence. We show that multiple semantic analyses can be searched using dynamic programming via interaction with a spatial planner, to guide the parsing process. We are able to parse sentences in...

  9. Studi Kompresi Data dengan Metode Arithmetic Coding

    OpenAIRE

    Santoso, Petrus

    2001-01-01

    There are a great many data compression methods available today. Most of these methods can be grouped into one of two large families: statistical based and dictionary based. An example of dictionary-based coding is Lempel-Ziv-Welch, and examples of statistical-based coding are Huffman Coding and Arithmetic Coding, the most recent algorithm. This paper reviews the principles of Arithmetic Coding and its advantages compared …
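
    As a quick illustration of the principle this paper reviews, the sketch below narrows a subinterval of [0, 1) once per symbol. It is a floating-point toy with made-up example probabilities; real coders replace the float arithmetic with integer renormalization:

```python
def arithmetic_interval(message, probs):
    # Arithmetic coding demo: each symbol narrows [low, high) in
    # proportion to its probability; any number inside the final
    # interval identifies the whole message.
    cum, acc = {}, 0.0
    for sym, p in probs.items():          # build cumulative distribution
        cum[sym] = (acc, acc + p)
        acc += p
    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        c_lo, c_hi = cum[sym]
        low, high = low + span * c_lo, low + span * c_hi
    return low, high

# "aab" with P(a)=0.8, P(b)=0.2 ends in an interval of width
# 0.8 * 0.8 * 0.2 = 0.128, i.e. about -log2(0.128) = 2.97 bits.
print(arithmetic_interval("aab", {"a": 0.8, "b": 0.2}))  # (0.512, 0.64)
```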

  10. Intravitreal injection of ziv-aflibercept in the treatment of choroidal and retinal vascular diseases.

    Science.gov (United States)

    HodjatJalali, Kamran; Mehravaran, Shiva; Faghihi, Hooshang; Hashemi, Hassan; Kazemi, Pegah; Rastad, Hadith

    2017-09-01

    To investigate the short-term outcomes after intravitreal injection of ziv-aflibercept in the treatment of choroidal and retinal vascular diseases. Thirty-four eyes of 29 patients with age-related macular degeneration (AMD), diabetic retinopathy, and retinal vein occlusion (RVO) received a single-dose intravitreal injection of 0.05 ml ziv-aflibercept (1.25 mg). Visual acuity, spectral domain optical coherence tomography (SD-OCT) activity, and possible side effects were assessed before and at 1 week and 1 month after the intervention. At 1 month after treatment, mean central macular thickness (CMT) significantly decreased from 531.09 μm to 339.5 μm (P < 0.001), and no signs of side effects were observed in any subject. All patients responded to treatment in terms of reduction in CMT. The improvement in visual acuity was statistically non-significant. Our findings suggest that a single-dose intravitreal injection of ziv-aflibercept may have acceptable relative safety and efficacy in the treatment of patients with intraocular vascular disease. The trial was registered in the Iranian Registry of Clinical Trials (IRCT2015081723651N1).

  11. Lempel–Ziv Data Compression on Parallel and Distributed Systems

    Directory of Open Access Journals (Sweden)

    Sergio De Agostino

    2011-09-01

    We present a survey of results concerning Lempel–Ziv data compression on parallel and distributed systems, starting from the theoretical approach to parallel time complexity and concluding with the practical goal of designing distributed algorithms with low communication cost. Storer's extension for image compression is also discussed.

  12. From LZ77 to the run-length encoded burrows-wheeler transform, and back

    DEFF Research Database (Denmark)

    Policriti, Alberto; Prezza, Nicola

    2017-01-01

    The Lempel-Ziv factorization (LZ77) and the Run-Length encoded Burrows-Wheeler Transform (RLBWT) are two important tools in text compression and indexing, their sizes z and r being closely related to the amount of text self-repetitiveness. In this paper we consider the problem of converting the two representations into each other within a working space proportional to the input and the output. Let n be the text length. We show that RLBWT can be converted to LZ77 in O(n log r) time and O(r) words of working space. Conversely, we provide an algorithm to convert LZ77 to RLBWT in O(n(log r + log z)) time and O(r + z) words of working space. Note that r and z can be constant if the text is highly repetitive, and our algorithms can operate with (up to) exponentially less space than naive solutions based on full decompression.
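
    To make the measure r concrete: it is simply the number of equal-letter runs in the BWT. A naive sketch (a quadratic rotation sort, nothing like the paper's space-efficient algorithms), assuming the sentinel '\x01' does not occur in the text:

```python
def bwt_runs(text: str) -> int:
    # r = number of equal-letter runs in the Burrows-Wheeler transform.
    # Naive construction: sort all rotations of text + sentinel and read
    # off the last column. Illustration only.
    s = text + "\x01"                    # assumed-unique terminator
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    bwt = "".join(rot[-1] for rot in rotations)
    return 1 + sum(bwt[k] != bwt[k - 1] for k in range(1, len(bwt)))

print(bwt_runs("aaaa"))         # 2: highly repetitive text gives tiny r
print(bwt_runs("abracadabra"))  # less repetitive text gives larger r
```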

  13. Application development with Parse using iOS SDK

    CERN Document Server

    Birani, Bhanu

    2013-01-01

    A practical guide featuring step-by-step instructions showing you how to use Parse iOS and handle your data on the cloud. If you are a developer who wants to build your applications instantly using Parse iOS as a back end for application development, this book is ideal for you. It will help you understand Parse, featuring examples to help you get familiar with the concepts of Parse iOS.

  14. Two-pass greedy regular expression parsing

    DEFF Research Database (Denmark)

    Grathwohl, Niels Bjørn Bugge; Henglein, Fritz; Nielsen, Lasse

    2013-01-01

    We present new algorithms for producing greedy parses for regular expressions (REs) in a semi-streaming fashion. Our lean-log algorithm executes in time O(mn) for REs of size m and input strings of size n and outputs a compact bit-coded parse tree representation. It improves on previous algorithms by: operating in only 2 passes; using only O(m) words of random-access memory (independent of n); requiring only kn bits of sequentially written and read log storage, where k … and not requiring it to be stored at all. Previous RE parsing algorithms do not scale linearly with input size, or require substantially more log storage and employ 3 passes where the first consists of reversing the input, or do not or are not known to produce a greedy parse. The performance of our unoptimized C …

  15. Chinese Unknown Word Recognition for PCFG-LA Parsing

    Directory of Open Access Journals (Sweden)

    Qiuping Huang

    2014-01-01

    This paper investigates the recognition of unknown words in Chinese parsing. Two methods are proposed to handle this problem. One is the modification of a character-based model: we model the emission probability of an unknown word using the first and last characters in the word, aiming to reduce the POS tag ambiguities of unknown words and improve parsing performance. In addition, a novel method using graph-based semi-supervised learning (SSL) is proposed to improve the syntactic parsing of unknown words. Its goal is to discover additional lexical knowledge from a large amount of unlabeled data to help syntactic parsing. The method propagates lexical emission probabilities to unknown words by building similarity graphs over the words of labeled and unlabeled data. The derived distributions are incorporated into the parsing process. The proposed methods are effective in dealing with unknown words and improve parsing. Empirical results on the Penn Chinese Treebank and the TCT Treebank reveal their effectiveness.

  16. Design of methodology for incremental compiler construction

    Directory of Open Access Journals (Sweden)

    Pavel Haluza

    2011-01-01

    The paper deals with possibilities for incremental compiler construction. It presents compiler construction possibilities both for languages with a fixed set of lexical units and for languages with a variable set of lexical units. The methodology design for incremental compiler construction is based on the known algorithms for standard compiler construction and is derived for both groups of languages. The group of languages with a fixed set of lexical units contains languages where each lexical unit has a constant meaning, e.g., common programming languages. For this group of languages the paper addresses the problem of incremental semantic analysis, which is based on incremental parsing. In the group of languages with a variable set of lexical units (e.g., the professional typographic system TeX), it is possible to change the meaning of each character in the input file arbitrarily at any time during processing. The change takes effect immediately and its validity can be somehow limited or is given by the end of the input. For this group of languages the paper addresses the case when macros are used to temporarily change the category of arbitrary characters.

  17. Image portion identification methods, image parsing methods, image parsing systems, and articles of manufacture

    Science.gov (United States)

    Lassahn, Gordon D.; Lancaster, Gregory D.; Apel, William A.; Thompson, Vicki S.

    2013-01-08

    Image portion identification methods, image parsing methods, image parsing systems, and articles of manufacture are described. According to one embodiment, an image portion identification method includes accessing data regarding an image depicting a plurality of biological substrates corresponding to at least one biological sample and indicating presence of at least one biological indicator within the biological sample and, using processing circuitry, automatically identifying a portion of the image depicting one of the biological substrates but not others of the biological substrates.

  18. Transform domain Wyner-Ziv video coding with refinement of noise residue and side information

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2010-01-01

    Distributed Video Coding (DVC) is a video coding paradigm which mainly exploits the source statistics at the decoder based on the availability of side information at the decoder. This paper considers feedback channel based Transform Domain Wyner-Ziv (TDWZ) DVC. The coding efficiency of TDWZ video coding does not match that of conventional video coding yet, mainly due to the quality of side information and inaccurate noise estimation. In this context, a novel TDWZ video decoder with noise residue refinement (NRR) and side information refinement (SIR) is proposed. The proposed refinement schemes successively update the estimated noise residue for noise modeling and the side information frame quality during decoding. Experimental results show that the proposed decoder can improve the Rate-Distortion (RD) performance of a state-of-the-art Wyner-Ziv video codec for the set of test sequences.

  19. From LL-regular to LL(1) grammars: Transformations, covers and parsing

    NARCIS (Netherlands)

    Nijholt, Antinus

    1982-01-01

    In this paper it is shown that it is possible to transform any LL-regular grammar G into an LL(1) grammar G' in such a way that parsing G' is as good as parsing G. That is, a parse of a sentence of grammar G can be obtained with a simple string homomorphism from the parse of a corresponding sentence …

  20. Context-free parsing with connectionist networks

    Science.gov (United States)

    Fanty, M. A.

    1986-08-01

    This paper presents a simple algorithm which converts any context-free grammar into a connectionist network which parses strings (of arbitrary but fixed maximum length) in the language defined by that grammar. The network is fast, O(n), and deterministic. It consists of binary units which compute a simple function of their input. When the grammar is put in Chomsky normal form, O(n^3) units are needed to parse inputs of length up to n.

  1. On Parsing CHILDES

    OpenAIRE

    Laakso, Aarre

    2005-01-01

    Research on child language acquisition would benefit from the availability of a large body of syntactically parsed utterances between parents and children. We consider the problem of generating such a "treebank" from the CHILDES corpus, which currently contains primarily orthographically transcribed speech tagged for lexical category.

  2. Dual decomposition for parsing with non-projective head automata

    OpenAIRE

    Koo, Terry; Rush, Alexander Matthew; Collins, Michael; Jaakkola, Tommi S.; Sontag, David Alexander

    2010-01-01

    This paper introduces algorithms for non-projective parsing based on dual decomposition. We focus on parsing algorithms for non-projective head automata, a generalization of head-automata models to non-projective structures. The dual decomposition algorithms are simple and efficient, relying on standard dynamic programming and minimum spanning tree algorithms. They provably solve an LP relaxation of the non-projective parsing problem. Empirically the LP relaxation is very often tight: for man...

  3. Safety profiles of anti-VEGF drugs: bevacizumab, ranibizumab, aflibercept and ziv-aflibercept on human retinal pigment epithelium cells in culture

    Science.gov (United States)

    Malik, Deepika; Tarek, Mohamed; Caceres del Carpio, Javier; Ramirez, Claudio; Boyer, David; Kenney, M Cristina; Kuppermann, Baruch D

    2014-01-01

    Purpose: To compare the safety profiles of anti-vascular endothelial growth factor (VEGF) drugs ranibizumab, bevacizumab, aflibercept and ziv-aflibercept on retinal pigment epithelium cells in culture. Methods: Human retinal pigment epithelium cells (ARPE-19) were exposed for 24 h to the four anti-VEGF drugs at 1/2×, 1×, 2× and 10× clinical concentrations. A cell viability and a mitochondrial membrane potential assay were performed to evaluate early apoptotic changes and the rate of overall cell death. Results: Cell viability decreased at 10× concentrations of bevacizumab (82.38%, p=0.0001), aflibercept (82.68%, p=0.0002) and ziv-aflibercept (77.25%, p<0.0001), but not at lower concentrations. However, no changes in cell viability were seen in ranibizumab-treated cells at any concentration, including 10×. Mitochondrial membrane potential was slightly decreased in 10× ranibizumab-treated cells (89.61%, p=0.0006) and in 2× and 10× aflibercept-treated cells (88.76%, 81.46%; p<0.01, respectively). A larger reduction in mitochondrial membrane potential was seen at 1×, 2× and 10× concentrations of bevacizumab (86.53%, 74.38%, 66.67%; p<0.01) and ziv-aflibercept (73.50%, 64.83% and 49.65%; p<0.01), suggestive of early apoptosis at lower doses, including the clinical doses. Conclusions: At clinical doses, neither ranibizumab nor aflibercept produced evidence of mitochondrial toxicity or cell death. However, bevacizumab and ziv-aflibercept showed mild mitochondrial toxicity at clinically relevant doses. PMID:24836865

  4. Interacting price model and fluctuation behavior analysis from Lempel–Ziv complexity and multi-scale weighted-permutation entropy

    Energy Technology Data Exchange (ETDEWEB)

    Li, Rui, E-mail: lirui1401@bjtu.edu.cn; Wang, Jun

    2016-01-08

    A financial price model is developed based on the voter interacting system in this work. The Lempel–Ziv complexity is introduced to analyze the complex behaviors of the stock market. Some stock market stylized facts, including fat tails, absence of autocorrelation and volatility clustering, are first investigated for the proposed price model. Then the complexity of fluctuation behaviors of the real stock markets and the proposed price model is explored mainly by Lempel–Ziv complexity (LZC) analysis and multi-scale weighted-permutation entropy (MWPE) analysis. A series of LZC analyses of the returns and the absolute returns of daily closing prices and moving average prices are performed. Moreover, the complexity of the returns, the absolute returns and their corresponding intrinsic mode functions (IMFs) derived from the empirical mode decomposition (EMD) is also investigated with MWPE. The numerical empirical study shows similar statistical and complex behaviors between the proposed price model and the real stock markets, which shows that the proposed model is feasible to some extent. - Highlights: • A financial price dynamical model is developed based on the voter interacting system. • Lempel–Ziv complexity is first applied to investigate the stock market dynamical system. • MWPE is employed to explore the complex fluctuation behaviors of the stock market. • Empirical results show the feasibility of the proposed financial model.

  5. Interacting price model and fluctuation behavior analysis from Lempel–Ziv complexity and multi-scale weighted-permutation entropy

    International Nuclear Information System (INIS)

    Li, Rui; Wang, Jun

    2016-01-01

    A financial price model is developed based on the voter interacting system in this work. The Lempel–Ziv complexity is introduced to analyze the complex behaviors of the stock market. Some stock market stylized facts, including fat tails, absence of autocorrelation and volatility clustering, are first investigated for the proposed price model. Then the complexity of fluctuation behaviors of the real stock markets and the proposed price model is explored mainly by Lempel–Ziv complexity (LZC) analysis and multi-scale weighted-permutation entropy (MWPE) analysis. A series of LZC analyses of the returns and the absolute returns of daily closing prices and moving average prices are performed. Moreover, the complexity of the returns, the absolute returns and their corresponding intrinsic mode functions (IMFs) derived from the empirical mode decomposition (EMD) is also investigated with MWPE. The numerical empirical study shows similar statistical and complex behaviors between the proposed price model and the real stock markets, which shows that the proposed model is feasible to some extent. - Highlights: • A financial price dynamical model is developed based on the voter interacting system. • Lempel–Ziv complexity is first applied to investigate the stock market dynamical system. • MWPE is employed to explore the complex fluctuation behaviors of the stock market. • Empirical results show the feasibility of the proposed financial model.

  6. A structural SVM approach for reference parsing.

    Science.gov (United States)

    Zhang, Xiaoli; Zou, Jie; Le, Daniel X; Thoma, George R

    2011-06-09

    Automated extraction of bibliographic data, such as article titles, author names, abstracts, and references, is essential to the affordable creation of large citation databases. References, typically appearing at the end of journal articles, can also provide valuable information for extracting other bibliographic data. Therefore, parsing individual references to extract author, title, journal, year, etc. is sometimes a necessary preprocessing step in building citation-indexing systems. The regular structure of references enables us to consider reference parsing a sequence learning problem and to study structural Support Vector Machines (structural SVM), a newly developed structured learning algorithm, for parsing references. In this study, we implemented structural SVM and used two types of contextual features to compare structural SVM with conventional SVM. Both methods achieve above 98% token classification accuracy and above 95% overall chunk-level accuracy for reference parsing. We also compared SVM and structural SVM to Conditional Random Fields (CRF). The experimental results show that structural SVM and CRF achieve similar accuracies at the token and chunk levels. When only basic observation features are used for each token, structural SVM achieves higher performance than SVM since it utilizes the contextual label features. However, when the contextual observation features from neighboring tokens are combined, SVM performance improves greatly and is close to that of structural SVM after adding second-order contextual observation features. The comparison of these two methods with CRF using the same set of binary features shows that both structural SVM and CRF perform better than SVM, indicating their stronger sequence learning ability in reference parsing.

  7. Faster Scannerless GLR parsing

    NARCIS (Netherlands)

    J.J. Vinju (Jurgen); G.R. Economopoulos (Giorgos Robert); P. Klint (Paul)

    2008-01-01

    Analysis and renovation of large software portfolios requires syntax analysis of multiple, usually embedded, languages and this is beyond the capabilities of many standard parsing techniques. The traditional separation between lexer and parser falls short due to the limitations of …

  8. Faster scannerless GLR parsing

    NARCIS (Netherlands)

    G.R. Economopoulos (Giorgos Robert); P. Klint (Paul); J.J. Vinju (Jurgen); O. de Moor; M.I. Schwartzbach

    2009-01-01

    Analysis and renovation of large software portfolios requires syntax analysis of multiple, usually embedded, languages and this is beyond the capabilities of many standard parsing techniques. The traditional separation between lexer and parser falls short due to the limitations of …

  9. Probabilistic lexical generalization for French dependency parsing

    OpenAIRE

    Henestroza Anguiano , Enrique; Candito , Marie

    2012-01-01

    This paper investigates the impact on French dependency parsing of lexical generalization methods beyond lemmatization and morphological analysis. A distributional thesaurus is created from a large text corpus and used for distributional clustering and WordNet automatic sense ranking. The standard approach for lexical generalization in parsing is to map a word to a single generalized class, either replacing the word with the class or adding a new feature for the class. …

  10. Faster scannerless GLR parsing

    NARCIS (Netherlands)

    Economopoulos, G.R.; Klint, P.; Vinju, J.J.; Moor, de O.; Schwartzbach, M.I.

    2009-01-01

    Analysis and renovation of large software portfolios requires syntax analysis of multiple, usually embedded, languages and this is beyond the capabilities of many standard parsing techniques. The traditional separation between lexer and parser falls short due to the limitations of tokenization based …

  11. Error Parsing: An alternative method of implementing social judgment theory

    OpenAIRE

    Crystal C. Hall; Daniel M. Oppenheimer

    2015-01-01

    We present a novel method of judgment analysis called Error Parsing, based upon an alternative method of implementing Social Judgment Theory (SJT). SJT and Error Parsing both posit the same three components of error in human judgment: error due to noise, error due to cue weighting, and error due to inconsistency. In that sense, the broad theory and framework are the same. However, SJT and Error Parsing were developed to answer different questions, and thus use different m...

  12. CLINICAL AND ELECTROPHYSIOLOGICAL EVALUATION AFTER INTRAVITREAL ZIV-AFLIBERCEPT FOR EXUDATIVE AGE-RELATED MACULAR DEGENERATION.

    Science.gov (United States)

    de Oliveira Dias, João Rafael; de Andrade, Gabriel Costa; Kniggendorf, Vinicius Ferreira; Novais, Eduardo Amorim; Maia, André; Meyer, Carsten; Watanabe, Sung Eun Song; Farah, Michel Eid; Rodrigues, Eduardo Büchele

    2017-08-01

    To evaluate the 6-month safety and efficacy of ziv-aflibercept intravitreal injections for treating exudative age-related macular degeneration. Fifteen patients with unilateral exudative age-related macular degeneration were enrolled. The best-corrected visual acuity was measured and spectral domain optical coherence tomography was performed at baseline and monthly. Full-field electroretinography and multifocal electroretinography were obtained at baseline and 4, 13, and 26 weeks after the first injection. All patients received three monthly intravitreal injections of ziv-aflibercept (1.25 mg) followed by as-needed treatment. Between baseline and 26 weeks, the mean best-corrected visual acuity improved (P = 0.00408) from 0.93 ± 0.4 (20/200) to 0.82 ± 0.5 (20/160) logarithm of the minimum angle of resolution; the central retinal thickness decreased significantly (P = 0.0007) from 490.3 ± 155.1 microns to 327.9 ± 101.5 microns; the mean total macular volume decreased significantly (P …); macular responses within the first central 15° showed significantly (P …) … macular volume from baseline to 26 weeks. No retinal toxicity on full-field electroretinography or adverse events occurred during the follow-up period.

  13. Parallel iterative decoding of transform domain Wyner-Ziv video using cross bitplane correlation

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Huang, Xin; Forchhammer, Søren

    2011-01-01

    In recent years, Transform Domain Wyner-Ziv (TDWZ) video coding has been proposed as an efficient Distributed Video Coding (DVC) solution, which fully or partly exploits the source statistics at the decoder to reduce the computational burden at the encoder. In this paper, a parallel iterative LDPC decoding scheme is proposed to improve the coding efficiency of TDWZ video codecs. The proposed parallel iterative LDPC decoding scheme is able to utilize cross bitplane correlation during decoding, by iteratively refining the soft input, updating a modeled noise distribution and thereafter enhancing …

  14. Data compression and genomes: a two-dimensional life domain map.

    Science.gov (United States)

    Menconi, Giulia; Benci, Vieri; Buiatti, Marcello

    2008-07-21

    We define the complexity of DNA sequences as the information content per nucleotide, calculated by means of a Lempel-Ziv data compression algorithm. It is possible to use the statistics of the complexity values of the functional regions of different complete genomes to distinguish among genomes of the different domains of life (Archaea, Bacteria and Eukarya). We focus on the distribution function of the complexity of non-coding regions. We show that the three domains may be plotted in separate regions within the two-dimensional space whose axes are the skewness coefficient and the kurtosis coefficient of the aforementioned distribution. Preliminary results on 15 genomes are presented.
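
    The "information content per nucleotide" measure can be approximated with any LZ-family compressor; below is a minimal sketch using zlib's DEFLATE (LZ77 plus Huffman coding) as a stand-in, since the record does not specify which Lempel-Ziv variant the authors used:

```python
import random
import zlib

def bits_per_symbol(seq: str) -> float:
    # Estimate information content per nucleotide as compressed size in
    # bits over sequence length. DEFLATE is an LZ77-family compressor, so
    # the result upper-bounds the entropy density; stream-header overhead
    # makes very short sequences look more complex than they are.
    raw = seq.encode("ascii")
    return 8 * len(zlib.compress(raw, 9)) / len(raw)

print(bits_per_symbol("ACGT" * 5000))   # periodic sequence: close to 0 bits
random_seq = "".join(random.choice("ACGT") for _ in range(20000))
print(bits_per_symbol(random_seq))      # i.i.d. uniform: close to 2 bits
```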

  15. Integrating high dimensional bi-directional parsing models for gene mention tagging.

    Science.gov (United States)

    Hsu, Chun-Nan; Chang, Yu-Ming; Kuo, Cheng-Ju; Lin, Yu-Shi; Huang, Han-Shen; Chung, I-Fang

    2008-07-01

    Tagging gene and gene product mentions in scientific text is an important initial step of literature mining. In this article, we describe in detail our gene mention tagger, which participated in the BioCreative 2 challenge, and analyze what contributes to its good performance. Our tagger is based on the conditional random fields (CRF) model, the most prevalent method for the gene mention tagging task in BioCreative 2. Our tagger is interesting because it accomplished the highest F-scores among CRF-based methods and second overall. Moreover, we obtained our results by mostly applying open source packages, making it easy to duplicate our results. We first describe in detail how we developed our CRF-based tagger. We designed a very high dimensional feature set that includes most of the information that may be relevant. We trained bi-directional CRF models with the same set of features, one applying forward parsing and the other backward, and integrated the two models based on the output scores and dictionary filtering. One of the most prominent factors that contributes to the good performance of our tagger is the integration of an additional backward parsing model. However, from the definition of CRF, it appears that a CRF model is symmetric and bi-directional parsing models should produce the same results. We show that due to different feature settings, a CRF model can be asymmetric and that the feature setting for our tagger in BioCreative 2 not only produces different results but also gives backward parsing models a slight but constant advantage over the forward parsing model. To fully explore the potential of integrating bi-directional parsing models, we applied different asymmetric feature settings to generate many bi-directional parsing models and integrated them based on the output scores. Experimental results show that this integrated model can achieve an even higher F-score based solely on the training corpus for gene mention tagging. Data sets, programs and an on-line service of our gene …

  16. Toward the Soundness of Sense Structure Definitions in Thesaurus-Dictionaries. Parsing Problems and Solutions

    Directory of Open Access Journals (Sweden)

    Neculai Curteanu

    2012-10-01

    In this paper we point out some difficult problems of thesaurus-dictionary entry parsing, relying on the parsing technology of SCD (Segmentation-Cohesion-Dependency) configurations, successfully applied to six of the largest thesauri: Romanian (2), French, German (2), and Russian. Challenging problems: (a) intricate and/or recursive structures of the lexicographic segments met in the entries of certain thesauri; (b) cyclicity (recursive calls) of some sense marker classes on marker sequences; (c) establishing the hypergraph-driven dependencies between all the atomic and non-atomic sense definitions. The classical approach to solving these parsing problems is hard mainly because of the depth-first search of sense definitions and markers, the substantial complexity of entries, and the dynamic sense tree construction embodied within these parsers. SCD-based parsing solutions: (a) The SCD parsing method is a procedural tool, completely formal-grammar free, handling the recursive structure of the lexicographic segments by procedural non-recursive calls performed on the SCD parsing configurations of the entry structure. (b) For dealing with cyclicity (recursive calls) between secondary sense markers and the sense enumeration markers, we proposed the Enumeration Closing Condition, sometimes coupled with New_Paragraphs typographic markers transformed into numeral sense enumeration. (c) These problems, their lexicographic modeling and parsing solutions are addressed both to dictionary parser programmers, to experience the SCD-based parsing method, and to lexicographers and thesauri designers, for tailoring balanced lexical-semantic granularities and sounder sense tree definitions of dictionary entries.

  17. On Collocations and Their Interaction with Parsing and Translation

    Directory of Open Access Journals (Sweden)

    Violeta Seretan

    2013-10-01

    Full Text Available We address the problem of automatically processing collocations—a subclass of multi-word expressions characterized by a high degree of morphosyntactic flexibility—in the context of two major applications, namely, syntactic parsing and machine translation. We show that parsing and collocation identification are processes that are interrelated and that benefit from each other, inasmuch as syntactic information is crucial for acquiring collocations from corpora and, vice versa, collocational information can be used to improve parsing performance. Similarly, we focus on the interrelation between collocations and machine translation, highlighting the use of translation information for multilingual collocation identification, as well as the use of collocational knowledge for improving translation. We give a panorama of the existing relevant work, and we parallel the literature surveys with our own experiments involving a symbolic parser and a rule-based translation system. The results show a significant improvement over approaches in which the corresponding tasks are decoupled.

  18. A Semantic Constraint on Syntactic Parsing.

    Science.gov (United States)

    Crain, Stephen; Coker, Pamela L.

    This research examines how semantic information influences syntactic parsing decisions during sentence processing. In the first experiment, subjects were presented lexical strings having syntactically identical surface structures but with two possible underlying structures: "The children taught by the Berlitz method," and "The…

  19. Parse Journal #2: Introduction

    OpenAIRE

    Bowman, Jason; Malik, Suhail; Phillips, Andrea

    2015-01-01

    As a periodical concerned with the critical potential of artistic research, this edition of the PARSE journal mobilises the multiple perspectives of artists, thinkers, critics and curators on the problematics, discontents and possibilities of private capital as an unregulated yet assumptive producer of art’s value, including its integration with state-funding. We have put emphasis on how this conditioning of art’s production, circulation, reception and sale can be put to task. In particular, ...

  20. Nursing care to people with hypertension based on Parse's theory

    Directory of Open Access Journals (Sweden)

    Fabíola Vládia Freire da Silva

    2013-03-01

    Full Text Available This study proposes nursing care, based on Parse's principles, for people with hypertension seen in the Family Health Strategy. It is a descriptive, qualitative study, carried out from March to May 2011 with fourteen nurses in the municipality of Itapajé, Ceará, Brazil. Semi-structured interviews were used to collect the information, and discourse analysis of the subjects was used for the analysis. Three categories based on Parse's principles emerged: Multidimensionality of meanings - the nurse leads the person to report meanings; Synchronization of rhythms - the nurse helps to identify harmony and disharmony; Mobilization of transcendence - the nurse guides the plan of change. The discourses approached Parse's theory when the nurses reported seeking humanized care, with family participation, appreciation of autonomy, and the use of health education with individual guidance. The implementation of nursing care based on Parse's theory for people with hypertension proved feasible.

  1. The psychosis-like effects of Δ(9)-tetrahydrocannabinol are associated with increased cortical noise in healthy humans.

    Science.gov (United States)

    Cortes-Briones, Jose A; Cahill, John D; Skosnik, Patrick D; Mathalon, Daniel H; Williams, Ashley; Sewell, R Andrew; Roach, Brian J; Ford, Judith M; Ranganathan, Mohini; D'Souza, Deepak Cyril

    2015-12-01

    Drugs that induce psychosis may do so by increasing the level of task-irrelevant random neural activity, or neural noise. Increased levels of neural noise have been demonstrated in psychotic disorders. We tested the hypothesis that neural noise could also be involved in the psychotomimetic effects of delta-9-tetrahydrocannabinol (Δ(9)-THC), the principal active constituent of cannabis. Neural noise was indexed by measuring the level of randomness in the electroencephalogram during the prestimulus baseline period of an oddball task using Lempel-Ziv complexity, a nonlinear measure of signal randomness. The acute, dose-related effects of Δ(9)-THC on Lempel-Ziv complexity and signal power were studied in humans (n = 24) who completed 3 test days during which they received intravenous Δ(9)-THC (placebo, .015 and .03 mg/kg) in a double-blind, randomized, crossover, and counterbalanced design. Δ(9)-THC increased neural noise in a dose-related manner. Furthermore, there was a strong positive relationship between neural noise and the psychosis-like positive and disorganization symptoms induced by Δ(9)-THC, which was independent of total signal power. In contrast, there was no relationship between noise and negative-like symptoms. In addition, Δ(9)-THC reduced total signal power during both active drug conditions compared with placebo, but no relationship was detected between signal power and psychosis-like symptoms. At doses that produced psychosis-like effects, Δ(9)-THC increased neural noise in humans in a dose-dependent manner. Furthermore, increases in neural noise were related to increases in Δ(9)-THC-induced psychosis-like symptoms but not negative-like symptoms. These findings suggest that increases in neural noise may contribute to the psychotomimetic effects of Δ(9)-THC.
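
    As a rough illustration of the noise index used here, Lempel-Ziv complexity of an EEG segment is commonly computed by binarizing the signal around its median and counting LZ76 phrases; the normalization below is one standard choice (the study's exact preprocessing is not reproduced).

```python
import numpy as np

def lz76_complexity(s):
    """Count LZ76 phrases in a symbol string (exhaustive-history parsing)."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # extend the phrase while its body can still be copied from the past
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

def eeg_lz(signal):
    """Median-binarize, then normalize the phrase count (LZc)."""
    med = np.median(signal)
    binary = "".join("1" if v > med else "0" for v in signal)
    n = len(binary)
    return lz76_complexity(binary) * np.log2(n) / n

print(eeg_lz(np.random.randn(1000)))  # white noise gives LZc near 1
```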

  2. Neural Markers of Responsiveness to the Environment in Human Sleep

    DEFF Research Database (Denmark)

    Andrillon, Thomas; Poulsen, Andreas Trier; Hansen, Lars Kai

    2016-01-01

    by Lempel-Ziv complexity (LZc), a measure shown to track arousal in sleep and anesthesia. Neural activity related to the semantic content of stimuli was conserved in light non-rapid eye movement (NREM) sleep. However, these processes were suppressed in deep NREM sleep and, importantly, also in REM sleep...... could be related to modulation in sleep depth. In REM sleep, however, this relationship was reversed. We therefore propose that, in REM sleep, endogenously generated processes compete with the processing of external input. Sleep can thus be seen as a self-regulated process in which external information can...... be processed in lighter stages but suppressed in deeper stages. Last, our results suggest drastically different gating mechanisms in NREM and REM sleep....

  3. Telugu dependency parsing using different statistical parsers

    Directory of Open Access Journals (Sweden)

    B. Venkata Seshu Kumari

    2017-01-01

    Full Text Available In this paper we explore different statistical dependency parsers for parsing Telugu. We consider five popular dependency parsers, namely MaltParser, MSTParser, TurboParser, ZPar and Easy-First Parser. We experiment with different parser and feature settings and show the impact of different settings. We also provide a detailed analysis of the performance of all the parsers on major dependency labels. We report our results on the test data of the Telugu dependency treebank provided in the ICON 2010 tools contest on Indian languages dependency parsing. We obtain state-of-the-art performance of 91.8% in unlabeled attachment score and 70.0% in labeled attachment score. To the best of our knowledge, ours is the only work which explores all five popular dependency parsers and compares their performance under different feature settings for Telugu.

  4. Single-View 3D Scene Reconstruction and Parsing by Attribute Grammar.

    Science.gov (United States)

    Liu, Xiaobai; Zhao, Yibiao; Zhu, Song-Chun

    2018-03-01

    In this paper, we present an attribute grammar for solving two coupled tasks: i) parsing a 2D image into semantic regions; and ii) recovering the 3D scene structures of all regions. The proposed grammar consists of a set of production rules, each describing a kind of spatial relation between planar surfaces in 3D scenes. These production rules are used to decompose an input image into a hierarchical parse graph representation where each graph node indicates a planar surface or a composite surface. Different from other stochastic image grammars, the proposed grammar augments each graph node with a set of attribute variables to depict scene-level global geometry, e.g., camera focal length, or local geometry, e.g., surface normals and contact lines between surfaces. These geometric attributes impose constraints between a node and its offspring in the parse graph. Under a probabilistic framework, we develop a Markov Chain Monte Carlo method to construct a parse graph that jointly optimizes 2D image recognition and 3D scene reconstruction. We evaluated our method on both public benchmarks and newly collected datasets. Experiments demonstrate that the proposed method is capable of achieving state-of-the-art scene reconstruction from a single image.

  5. Improved virtual channel noise model for transform domain Wyner-Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2009-01-01

    Distributed video coding (DVC) has been proposed as a new video coding paradigm to deal with lossy source coding using side information to exploit the statistics at the decoder to reduce computational demands at the encoder. A virtual channel noise model is utilized at the decoder to estimate...... the noise distribution between the side information frame and the original frame. This is one of the most important aspects influencing the coding performance of DVC. Noise models with different granularity have been proposed. In this paper, an improved noise model for transform domain Wyner-Ziv video...... coding is proposed, which utilizes cross-band correlation to estimate the Laplacian parameters more accurately. Experimental results show that the proposed noise model can improve the rate-distortion (RD) performance....
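
    In such models the Laplacian parameter is fitted per frequency band to the residual between the side information and the original (or its estimate); a minimal sketch, assuming the standard maximum-likelihood relation for a zero-mean Laplacian f(x) = (alpha/2) exp(-alpha|x|), i.e. alpha = 1/mean(|x|), follows (the cross-band refinement proposed in the paper is not reproduced).

```python
import numpy as np

def laplacian_alpha(residual):
    """ML estimate of the Laplacian rate for zero-mean residuals:
    alpha = 1 / mean(|x|)  (equivalently sqrt(2)/sigma)."""
    return 1.0 / np.mean(np.abs(residual))

def per_band_alphas(side_info_dct, original_dct):
    """One alpha per DCT band (bands along the first axis), as in band-level
    noise models for transform-domain Wyner-Ziv coding; inputs are
    hypothetical arrays of shape (n_bands, n_coefficients)."""
    residual = side_info_dct - original_dct
    return np.array([laplacian_alpha(band) for band in residual])

# Toy check: recover alpha from synthetic Laplacian noise (scale b = 1/alpha)
rng = np.random.default_rng(0)
noise = rng.laplace(scale=0.5, size=(4, 10000))      # true alpha = 2 per band
print(per_band_alphas(noise, np.zeros_like(noise)))  # approximately [2 2 2 2]
```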

  6. Recursive Neural Networks Based on PSO for Image Parsing

    Directory of Open Access Journals (Sweden)

    Guo-Rong Cai

    2013-01-01

    Full Text Available This paper presents an image parsing algorithm which is based on Particle Swarm Optimization (PSO) and Recursive Neural Networks (RNNs). The state-of-the-art RNN-based parsing strategy uses L-BFGS over the complete data to learn the parameters. However, this could cause problems due to the non-differentiable objective function. In order to solve this problem, the PSO algorithm has been employed to tune the weights of the RNN so as to minimize the objective. Experimental results obtained on the Stanford background dataset show that our PSO-based training algorithm outperforms traditional RNN, Pixel CRF, region-based energy, simultaneous MRF, and superpixel MRF.
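
    Since PSO is the core of the proposed training scheme, a generic, self-contained PSO minimizer is sketched below with a toy objective (the actual RNN scoring objective from the paper is not reproduced; hyperparameters are illustrative defaults).

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0)):
    rng = np.random.default_rng(0)
    x = rng.uniform(*bounds, size=(n_particles, dim))   # positions (weights)
    v = np.zeros_like(x)                                # velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()                # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # standard velocity update: inertia + cognitive + social terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Toy usage: minimize a non-differentiable objective
best_w, best_f = pso_minimize(lambda p: np.sum(np.abs(p - 0.5)), dim=10)
print(best_f)
```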

  7. Parsing polarization squeezing into Fock layers

    DEFF Research Database (Denmark)

    Mueller, Christian R.; Madsen, Lars Skovgaard; Klimov, Andrei B.

    2016-01-01

    photon number do the methods coincide; when the photon number is indefinite, we parse the state in Fock layers, finding that substantially higher squeezing can be observed in some of the single layers. By capitalizing on the properties of the Husimi Q function, we map this notion onto the Poincaré space......, providing a full account of the measured squeezing....

  8. Sequence distance via parsing complexity: Heartbeat signals

    International Nuclear Information System (INIS)

    Degli Esposti, M.; Farinelli, C.; Menconi, G.

    2009-01-01

    We compare and discuss the use of different symbolic codings of electrocardiogram (ECG) signals in order to distinguish healthy patients from hospitalized ones. To this aim, we recall a parsing-based similarity distance and compare the performances of several methods of classification of data.
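
    A compression-based distance of the kind referred to here can be illustrated with the normalized compression distance (NCD), using an off-the-shelf LZ-family compressor as the complexity proxy; this is a generic sketch, not necessarily the exact distance used in the paper.

```python
import zlib

def C(x: bytes) -> int:
    """Compressed size as a computable stand-in for complexity."""
    return len(zlib.compress(x, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: small when x and y share structure."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Symbolically coded heartbeat (RR-interval) sequences: similar codings
# should yield a smaller distance than unrelated ones.
s1 = b"ABABABABABAB" * 20
s2 = b"ABABABABABAA" * 20
s3 = b"QZXWVQZXWVQZ" * 20
print(ncd(s1, s2), ncd(s1, s3))
```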

  9. YakYak: Parsing with Logical Side Constraints

    DEFF Research Database (Denmark)

    Hansen, Niels Damgaard; Klarlund, Nils; Schwartzbach, Michael Ignatieff

    2000-01-01

    Yak, which extends Yacc with first-order logic for specifying constraints that are regular tree languages. Concise formulas about the parse tree replace explicit programming, and they are turned into canonical attribute grammars through tree automata calculations. YakYak is implemented as a preprocessor

  10. ParseCNV integrative copy number variation association software with quality tracking.

    Science.gov (United States)

    Glessner, Joseph T; Li, Jin; Hakonarson, Hakon

    2013-03-01

    A number of copy number variation (CNV) calling algorithms exist; however, comprehensive software tools for CNV association studies are lacking. We describe ParseCNV, unique software that takes CNV calls and creates probe-based statistics for CNV occurrence in both case-control designs and in family-based studies, addressing both de novo and inheritance events, which are then summarized based on CNV regions (CNVRs). CNVRs are defined in a dynamic manner to allow for complex CNV overlap while maintaining precise association regions. Using this approach, we avoid the failure-to-converge and non-monotonic curve-fitting weaknesses of programs such as CNVtools and CNVassoc, and although Plink is easy to use, it only provides combined CNV-state probe-based statistics, not state-specific CNVRs. Existing CNV association methods do not provide any quality tracking information to filter confident associations, a key issue which is fully addressed by ParseCNV. In addition, uncertainty in the CNV calls underlying CNV associations is evaluated to verify significant results, including CNV overlap profiles, genomic context, the number of probes supporting the CNV, and single-probe intensities. When optimal quality control parameters are followed using ParseCNV, 90% of CNVs validate by polymerase chain reaction, an often problematic stage because of inadequate review of significant associations. ParseCNV is freely available at http://parsecnv.sourceforge.net.

  11. A Proposal for Cardiac Arrhythmia Classification using Complexity Measures

    Directory of Open Access Journals (Sweden)

    AROTARITEI, D.

    2017-08-01

    Full Text Available Cardiovascular diseases are among the major health problems of humanity, and therefore arrhythmia detection and classification have drawn increased attention worldwide. The presence of randomness in discrete time series, like those arising in electrophysiology, is firmly connected with computational complexity measures. This connection can be used, for instance, in the analysis of RR-intervals of the electrocardiographic (ECG) signal, coded as a binary string, to detect and classify arrhythmia. Our approach uses three algorithms (Lempel-Ziv, Sample Entropy and T-Code) to compute the information complexity, and a classification tree to detect 13 types of arrhythmia, with encouraging results. To overcome the computational effort required for the complexity calculus, a cloud computing solution with executable code deployment is also proposed.
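
    Of the three measures mentioned, Sample Entropy has a particularly compact standard definition; a direct O(N^2) sketch with the usual parameter choices (m = 2, r = 0.2 times the standard deviation) is shown below.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r): -log of the ratio of (m+1)-template matches to
    m-template matches under Chebyshev distance, self-matches excluded."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    if r is None:
        r = 0.2 * x.std()
    def matches(mm):
        # N - m templates of length mm, so both counts use the same vectors
        templ = np.array([x[i:i + mm] for i in range(N - m)])
        c = 0
        for i in range(len(templ) - 1):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += int(np.sum(d <= r))
        return c
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A and B else float("inf")

print(sample_entropy(np.random.randn(500)))  # high for white noise
```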

  12. Finding EL+ justifications using the Earley parsing algorithm

    CSIR Research Space (South Africa)

    Nortje, R

    2009-12-01

    Full Text Available into a reachability-preserving context-free grammar (CFG). The well-known Earley algorithm for parsing strings, given some CFG, is then applied to the problem of extracting minimal reachability-based axiom sets for subsumption entailments. The author has...
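
    For reference, the well-known Earley algorithm on which this work builds can be sketched as a compact chart-based recognizer; this is a generic textbook version, not the authors' EL+ reduction, and grammars with empty productions need extra care.

```python
# Generic Earley recognizer (no empty productions assumed).
def earley_recognize(tokens, grammar, start="S"):
    # grammar: nonterminal -> list of right-hand sides (tuples of symbols)
    n = len(tokens)
    chart = [set() for _ in range(n + 1)]   # items: (head, body, dot, origin)
    chart[0] = {(start, body, 0, 0) for body in grammar[start]}
    for i in range(n + 1):
        agenda = list(chart[i])
        while agenda:
            head, body, dot, origin = agenda.pop()
            if dot < len(body):
                sym = body[dot]
                if sym in grammar:                 # predictor
                    for rhs in grammar[sym]:
                        item = (sym, rhs, 0, i)
                        if item not in chart[i]:
                            chart[i].add(item); agenda.append(item)
                elif i < n and tokens[i] == sym:   # scanner
                    chart[i + 1].add((head, body, dot + 1, origin))
            else:                                  # completer
                for h2, b2, d2, o2 in list(chart[origin]):
                    if d2 < len(b2) and b2[d2] == head:
                        item = (h2, b2, d2 + 1, o2)
                        if item not in chart[i]:
                            chart[i].add(item); agenda.append(item)
    return any(h == start and d == len(b) and o == 0
               for h, b, d, o in chart[n])

G = {"S": [("NP", "VP")], "NP": [("det", "n")], "VP": [("v", "NP")]}
print(earley_recognize(["det", "n", "v", "det", "n"], G))  # True
```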

  13. Parsing Universal Dependencies without training

    DEFF Research Database (Denmark)

    Martínez Alonso, Héctor; Agic, Zeljko; Plank, Barbara

    2017-01-01

    We present UDP, the first training-free parser for Universal Dependencies (UD). Our algorithm is based on PageRank and a small set of specific dependency head rules. UDP features two-step decoding to guarantee that function words are attached as leaf nodes. The parser requires no training......, and it is competitive with a delexicalized transfer system. UDP offers a linguistically sound unsupervised alternative to cross-lingual parsing for UD. The parser has very few parameters and is distinctly robust to domain change across languages....

  14. Parsing with subdomain instance weighting from raw corpora

    NARCIS (Netherlands)

    Plank, B.; Sima'an, K.

    2008-01-01

    The treebanks that are used for training statistical parsers consist of hand-parsed sentences from a single source/domain like newspaper text. However, newspaper text concerns different subdomains of language use (e.g. finance, sports, politics, music), which implies that the statistics gathered by

  16. Perceiving Event Dynamics and Parsing Hollywood Films

    Science.gov (United States)

    Cutting, James E.; Brunick, Kaitlin L.; Candan, Ayse

    2012-01-01

    We selected 24 Hollywood movies released from 1940 through 2010 to serve as a film corpus. Eight viewers, three per film, parsed them into events, which are best termed subscenes. While watching a film a second time, viewers scrolled through frames and recorded the frame number where each event began. Viewers agreed about 90% of the time. We then…

  17. Time-Driven Effects on Parsing during Reading

    Science.gov (United States)

    Roll, Mikael; Lindgren, Magnus; Alter, Kai; Horne, Merle

    2012-01-01

    The phonological trace of perceived words starts fading away in short-term memory after a few seconds. Spoken utterances are usually 2-3 s long, possibly to allow the listener to parse the words into coherent prosodic phrases while they still have a clear representation. Results from this brain potential study suggest that even during silent…

  18. "gnparser": a powerful parser for scientific names based on Parsing Expression Grammar.

    Science.gov (United States)

    Mozzherin, Dmitry Y; Myltsev, Alexander A; Patterson, David J

    2017-05-26

    Scientific names in biology act as universal links. They allow us to cross-reference information about organisms globally. However, variations in the spelling of scientific names greatly diminish their ability to interconnect data. Such variations may include abbreviations, annotations, misspellings, etc. Authorship is a part of a scientific name and may also differ significantly. To match all possible variations of a name we need to divide it into its elements and classify each element according to its role. We refer to this as 'parsing' the name. Parsing categorizes a name's elements into those that are stable and those that are prone to change. Names are matched first by combining them according to their stable elements. Matches are then refined by examining their varying elements. This two-stage process dramatically improves the number and quality of matches. It is especially useful for automatic data exchange within the context of "Big Data" in biology. We introduce the Global Names Parser (gnparser). It is a tool for parsing scientific names, written in Scala (a language for the Java Virtual Machine) and based on a Parsing Expression Grammar. The parser can be applied to scientific names of any complexity. It assigns a semantic meaning (such as genus name, species epithet, rank, year of publication, authorship, annotations, etc.) to all elements of a name. It is able to work with nested structures as in the names of hybrids. gnparser performs with ≈99% accuracy and processes 30 million name-strings/hour per CPU thread. The gnparser library is compatible with Scala, Java, R, Jython, and JRuby. The parser can be used as a command-line application, as a socket server, as a web-app or as a RESTful HTTP service. It is released under an open-source MIT license. Global Names Parser (gnparser) is a fast, high-precision tool for biodiversity informaticians and biologists working with large numbers of scientific names. It can replace expensive and error
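
    gnparser's actual Parsing Expression Grammar is far richer, but the idea of assigning semantic roles to a name's elements can be illustrated with a deliberately simplified pattern; the field names below are hypothetical and cover only simple "Genus epithet Author, year" shapes.

```python
import re

# Hypothetical, heavily simplified; real scientific names need a full PEG.
NAME = re.compile(
    r"^(?P<genus>[A-Z][a-z]+)\s+"
    r"(?P<epithet>[a-z][a-z-]+)"
    r"(?:\s+\((?P<basionym_author>[^)]+)\))?"
    r"(?:\s+(?P<author>[A-Z][^,]*?))?"
    r"(?:,\s*(?P<year>\d{4}))?$"
)

def parse_name(s):
    """Return the recognized elements of a name, keyed by semantic role."""
    m = NAME.match(s.strip())
    return {k: v for k, v in m.groupdict().items() if v} if m else None

print(parse_name("Homo sapiens Linnaeus, 1758"))
# {'genus': 'Homo', 'epithet': 'sapiens', 'author': 'Linnaeus', 'year': '1758'}
```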

  19. The analysis of the possibles through Parse's theory on a person with Alzheimer

    Directory of Open Access Journals (Sweden)

    Virtudes Rodero-Sánchez

    2006-11-01

    Full Text Available When a person comes to the health care system with a health problem, they will often be asked to change some of their habits and lifestyle. This demand usually takes the form of a compromise-pact established between the person and the professional. We have observed that, despite the professional's effort to frame this pact in realistic objectives, frustration too often follows, especially in settings of chronic illness. Parse's theory offers us a different way to approach change. In Parse's theory, The Human Becoming, the possibles are the expression of power, understood as a unique way of transformation that consists in advancing with the hopes, desires and projects of the person. We propose, first, an analysis of the elements of what Parse calls her third principle, co-transcendence with the possibles; second, the analysis of the possibles from this frame of reference through a narrative; and, finally, nursing practice.

  20. Nonlinear complexity behaviors of agent-based 3D Potts financial dynamics with random environments

    Science.gov (United States)

    Xing, Yani; Wang, Jun

    2018-02-01

    A new microscopic 3D Potts interaction financial price model is established in this work to investigate the nonlinear complexity behaviors of stock markets. The 3D Potts model, which extends the 2D Potts model to three dimensions, is a cubic-lattice model that describes the interaction behavior among agents. In order to explore the complexity of real financial markets and of the 3D Potts financial model, a new random coarse-grained Lempel-Ziv complexity is proposed and applied to certain series, such as the price returns, the price volatilities, and the random-time d-returns. Then the composite multiscale entropy (CMSE) method is applied to the intrinsic mode functions (IMFs) and the corresponding shuffled data to study the complexity behaviors. The empirical results indicate that the 3D financial model is feasible.

  1. Fetching and Parsing Data from the Web with OpenRefine

    Directory of Open Access Journals (Sweden)

    Evan Peter Williamson

    2017-08-01

    Full Text Available OpenRefine is a powerful tool for exploring, cleaning, and transforming data. An earlier Programming Historian lesson, “Cleaning Data with OpenRefine”, introduced the basic functionality of Refine to efficiently discover and correct inconsistency in a data set. Building on those essential data wrangling skills, this lesson focuses on Refine’s ability to fetch URLs and parse web content. Examples introduce some of the advanced features to transform and enhance a data set, including: fetching URLs using Refine; constructing URL queries to retrieve information from a simple web API; parsing HTML and JSON responses to extract relevant data; using array functions to manipulate string values; and using Jython to extend Refine’s functionality. It will be helpful to have basic familiarity with OpenRefine, HTML, and programming concepts such as variables and loops to complete this lesson.
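
    The same fetch-and-parse workflow can be prototyped outside Refine; below is a minimal Python sketch against a hypothetical JSON API (the URL, query parameters, and response fields are placeholders, not a real service).

```python
import json
import urllib.parse
import urllib.request

def fetch_json(base_url, **params):
    """Construct a URL query, fetch it, and parse the JSON response."""
    url = base_url + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Hypothetical API endpoint and fields, for illustration only.
data = fetch_json("https://api.example.org/records", q="newspapers", rows=5)
titles = [item.get("title", "") for item in data.get("items", [])]
print(titles)
```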

  2. Advanced programming concepts in a course on grammars and parsing

    NARCIS (Netherlands)

    Jeuring, J.T.; Swierstra, S.D.

    1999-01-01

    One of the important goals of the Computer Science curriculum at Utrecht University is to familiarize students with abstract programming concepts such as, for example, partial evaluation and deforestation. A course on grammars and parsing offers excellent possibilities for exemplifying and

  3. Adaptive Noise Model for Transform Domain Wyner-Ziv Video using Clustering of DCT Blocks

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Huang, Xin; Forchhammer, Søren

    2011-01-01

    The noise model is one of the most important aspects influencing the coding performance of Distributed Video Coding. This paper proposes a novel noise model for Transform Domain Wyner-Ziv (TDWZ) video coding by using clustering of DCT blocks. The clustering algorithm takes advantage of the residual information of all frequency bands, iteratively classifies blocks into different categories and estimates the noise parameter in each category. The experimental results show that the coding performance of the proposed cluster level noise model is competitive with state-of-the-art coefficient level noise modelling. Furthermore, the proposed cluster level noise model is adaptively combined with a coefficient level noise model in this paper to robustly improve the coding performance of the TDWZ video codec by up to 1.24 dB (by the Bjøntegaard metric) compared to the DISCOVER TDWZ video codec.

  4. Fast Parsing using Pruning and Grammar Specialization

    OpenAIRE

    Rayner, Manny; Carter, David

    1996-01-01

    We show how a general grammar may be automatically adapted for fast parsing of utterances from a specific domain by means of constituent pruning and grammar specialization based on explanation-based learning. These methods together give an order of magnitude increase in speed, and the coverage loss entailed by grammar specialization is reduced to approximately half that reported in previous work. Experiments described here suggest that the loss of coverage has been reduced to the point where ...

  5. Is human sentence parsing serial or parallel? Evidence from event-related brain potentials.

    Science.gov (United States)

    Hopf, Jens-Max; Bader, Markus; Meng, Michael; Bayer, Josef

    2003-01-01

    In this ERP study we investigate the processes that occur in syntactically ambiguous German sentences at the point of disambiguation. Whereas most psycholinguistic theories agree on the view that processing difficulties arise when parsing preferences are disconfirmed (so-called garden-path effects), important differences exist with respect to theoretical assumptions about the parser's recovery from a misparse. A key distinction can be made between parsers that compute all alternative syntactic structures in parallel (parallel parsers) and parsers that compute only a single preferred analysis (serial parsers). To distinguish empirically between parallel and serial parsing models, we compare ERP responses to garden-path sentences with ERP responses to truly ungrammatical sentences. Garden-path sentences contain a temporary and ultimately curable ungrammaticality, whereas truly ungrammatical sentences remain so permanently--a difference which gives rise to different predictions in the two classes of parsing architectures. At the disambiguating word, ERPs in both sentence types show negative shifts of similar onset latency, amplitude, and scalp distribution in an initial time window between 300 and 500 ms. In a following time window (500-700 ms), the negative shift to garden-path sentences disappears at right central parietal sites, while it continues in permanently ungrammatical sentences. These data are taken as evidence for a strictly serial parser. The absence of a difference in the early time window indicates that temporary and permanent ungrammaticalities trigger the same kind of parsing responses. Later differences can be related to successful reanalysis in garden-path but not in ungrammatical sentences.

  6. Locating and parsing bibliographic references in HTML medical articles.

    Science.gov (United States)

    Zou, Jie; Le, Daniel; Thoma, George R

    2010-06-01

    The set of references that typically appear toward the end of journal articles is sometimes, though not always, a field in bibliographic (citation) databases. But even if references do not constitute such a field, they can be useful as a preprocessing step in the automated extraction of other bibliographic data from articles, as well as in computer-assisted indexing of articles. Automation in data extraction and indexing to minimize human labor is key to the affordable creation and maintenance of large bibliographic databases. Extracting the components of references, such as author names, article title, journal name, publication date and other entities, is therefore a valuable and sometimes necessary task. This paper describes a two-step process using statistical machine learning algorithms, to first locate the references in HTML medical articles and then to parse them. Reference locating identifies the reference section in an article and then decomposes it into individual references. We formulate this step as a two-class classification problem based on text and geometric features. An evaluation conducted on 500 articles drawn from 100 medical journals achieves near-perfect precision and recall rates for locating references. Reference parsing identifies the components of each reference. For this second step, we implement and compare two algorithms. One relies on sequence statistics and trains a Conditional Random Field. The other focuses on local feature statistics and trains a Support Vector Machine to classify each individual word, followed by a search algorithm that systematically corrects low confidence labels if the label sequence violates a set of predefined rules. The overall performance of these two reference-parsing algorithms is about the same: above 99% accuracy at the word level, and over 97% accuracy at the chunk level.

  7. Unraveling chaotic attractors by complex networks and measurements of stock market complexity.

    Science.gov (United States)

    Cao, Hongduo; Li, Ying

    2014-03-01

    We present a novel method for measuring the complexity of a time series by unraveling a chaotic attractor modeled on complex networks. The complexity index R, which can potentially be exploited for prediction, has a similar meaning to the Kolmogorov complexity (calculated from the Lempel-Ziv complexity), and is an appropriate measure of a series' complexity. The proposed method is used to research the complexity of the world's major capital markets. None of these markets are completely random, and they have different degrees of complexity, both over the entire length of their time series and at finer levels of detail. However, developing markets differ significantly from mature markets. Specifically, the complexity of mature stock markets is stronger and more stable over time, whereas developing markets exhibit relatively low and unstable complexity over certain time periods, implying a stronger long-term price memory process.

  8. Multiscale multifractal DCCA and complexity behaviors of return intervals for Potts price model

    Science.gov (United States)

    Wang, Jie; Wang, Jun; Stanley, H. Eugene

    2018-02-01

    To investigate the characteristics of extreme events in financial markets and the corresponding return intervals among these events, we use a Potts dynamic system to construct a random financial time series model of the attitudes of market traders. We use multiscale multifractal detrended cross-correlation analysis (MM-DCCA) and Lempel-Ziv complexity (LZC) to perform numerical research on the return intervals for two major Chinese stock market indices and for the proposed model. The new MM-DCCA method is based on the Hurst surface and provides more interpretable cross-correlations of the dynamic mechanism between different return-interval series. We scale the LZC method with different exponents to illustrate the complexity of return intervals at different scales. Empirical studies indicate that the proposed return intervals from the Potts system and the real stock market indices hold similar statistical properties.

  9. A simple DOP model for constituency parsing of Italian sentences

    NARCIS (Netherlands)

    Sangati, F.

    2009-01-01

    We present a simplified Data-Oriented Parsing (DOP) formalism for learning the constituency structure of Italian sentences. In our approach we try to simplify the original DOP methodology by constraining the number and type of fragments we extract from the training corpus. We provide some examples

  10. Fuzzy context-free languages - Part 2: Recognition and parsing algorithms

    NARCIS (Netherlands)

    Asveld, P.R.J.

    2005-01-01

    In a companion paper [P.R.J. Asveld, Fuzzy context-free languages -- Part 1: Generalized fuzzy context-free grammars, Theoret. Comp. Sci. (2005)] we used fuzzy context-free grammars in order to model grammatical errors resulting in erroneous inputs for robust recognizing and parsing algorithms for

  11. Fuzzy Context- Free Languages. Part 2: Recognition and Parsing Algorithms

    NARCIS (Netherlands)

    Asveld, P.R.J.

    2000-01-01

    In a companion paper \\cite{Asv:FCF1} we used fuzzy context-free grammars in order to model grammatical errors resulting in erroneous inputs for robust recognizing and parsing algorithms for fuzzy context-free languages. In particular, this approach enables us to distinguish between small errors

  12. Chomsky-Schützenberger parsing for weighted multiple context-free languages

    Directory of Open Access Journals (Sweden)

    Tobias Denkinger

    2017-07-01

    Full Text Available We prove a Chomsky-Schützenberger representation theorem for multiple context-free languages weighted over complete commutative strong bimonoids. Using this representation we devise a parsing algorithm for a restricted form of those devices.

  13. Generative re-ranking model for dependency parsing of Italian sentences

    NARCIS (Netherlands)

    Sangati, F.

    2009-01-01

    We present a general framework for dependency parsing of Italian sentences based on a combination of discriminative and generative models. We use a state-of-the-art discriminative model to obtain a k-best list of candidate structures for the test sentences, and use the generative model to compute

  14. Introduction to special issue on machine learning approaches to shallow parsing

    NARCIS (Netherlands)

    Hammerton, J; Osborne, M; Armstrong, S; Daelemans, W

    2002-01-01

    This article introduces the problem of partial or shallow parsing (assigning partial syntactic structure to sentences) and explains why it is an important natural language processing (NLP) task. The complexity of the task makes Machine Learning an attractive option in comparison to the handcrafting

  15. Pippi — Painless parsing, post-processing and plotting of posterior and likelihood samples

    Science.gov (United States)

    Scott, Pat

    2012-11-01

    Interpreting samples from likelihood or posterior probability density functions is rarely as straightforward as it seems it should be. Producing publication-quality graphics of these distributions is often similarly painful. In this short note I describe pippi, a simple, publicly available package for parsing and post-processing such samples, as well as generating high-quality PDF graphics of the results. Pippi is easily and extensively configurable and customisable, both in its options for parsing and post-processing samples, and in the visual aspects of the figures it produces. I illustrate some of these using an existing supersymmetric global fit, performed in the context of a gamma-ray search for dark matter. Pippi can be downloaded and followed at http://github.com/patscott/pippi.

  16. Automated vocabulary discovery for geo-parsing online epidemic intelligence.

    Science.gov (United States)

    Keller, Mikaela; Freifeld, Clark C; Brownstein, John S

    2009-11-24

    Automated surveillance of the Internet provides a timely and sensitive method for alerting on global emerging infectious disease threats. HealthMap is part of a new generation of online systems designed to monitor and visualize, on a real-time basis, disease outbreak alerts as reported by online news media and public health sources. HealthMap is of specific interest for national and international public health organizations and international travelers. A particular task that makes such surveillance useful is the automated discovery of the geographic references contained in the retrieved outbreak alerts. This task is sometimes referred to as "geo-parsing". A typical approach to geo-parsing would demand an expensive training corpus of alerts manually tagged by a human. Given that human readers perform this kind of task by using both their lexical and contextual knowledge, we developed an approach which relies on a relatively small expert-built gazetteer, thus limiting the need for human input, but focuses on learning the context in which geographic references appear. We show in a set of experiments that this approach exhibits a substantial capacity to discover geographic locations outside of its initial lexicon. The results of this analysis provide a framework for future automated global surveillance efforts that reduce manual input and improve timeliness of reporting.
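
    A toy version of the gazetteer-plus-context idea is sketched below (hypothetical seed gazetteer and scoring; HealthMap's actual learning component is more sophisticated): known place names are matched directly, and unknown capitalized tokens are accepted when they occur in contexts previously seen around gazetteer hits.

```python
import re
from collections import Counter

GAZETTEER = {"Kenya", "Mumbai", "Jakarta"}   # small expert-built seed
context_counts = Counter()                   # learned "location context" words

def train_contexts(texts):
    """Count words adjacent to known locations as location contexts."""
    for text in texts:
        tokens = re.findall(r"\w+", text)
        for i, tok in enumerate(tokens):
            if tok in GAZETTEER:
                if i > 0:
                    context_counts[("L", tokens[i - 1].lower())] += 1
                if i + 1 < len(tokens):
                    context_counts[("R", tokens[i + 1].lower())] += 1

def geo_candidates(text, min_evidence=1):
    """Gazetteer hits, plus capitalized tokens in learned location contexts."""
    tokens = re.findall(r"\w+", text)
    hits = set()
    for i, tok in enumerate(tokens):
        if tok in GAZETTEER:
            hits.add(tok)
        elif tok[0].isupper():
            left = context_counts[("L", tokens[i - 1].lower())] if i > 0 else 0
            right = (context_counts[("R", tokens[i + 1].lower())]
                     if i + 1 < len(tokens) else 0)
            if left + right >= min_evidence:
                hits.add(tok)
    return hits

train_contexts(["Cholera outbreak reported in Kenya today",
                "Dengue cases reported in Jakarta this week"])
print(geo_candidates("Measles outbreak reported in Freetown"))  # {'Freetown'}
```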

  17. Direct migration motion estimation and mode decision to decoder for a low-complexity decoder Wyner-Ziv video coding

    Science.gov (United States)

    Lei, Ted Chih-Wei; Tseng, Fan-Shuo

    2017-07-01

    This paper addresses the problem of the high computational complexity of decoding in traditional Wyner-Ziv video coding (WZVC). The key focus is the migration of two traditionally computationally complex encoder algorithms, namely motion estimation and mode decision. In order to reduce the computational burden in this process, the proposed architecture adopts the partial boundary matching algorithm and four flexible types of block mode decision at the decoder. This approach does away with the need for motion estimation and mode decision at the encoder. The experimental results show that the proposed padding-block-based WZVC not only decreases decoder complexity to approximately one hundredth that of the state-of-the-art DISCOVER decoding but also outperforms the DISCOVER codec by up to 3 to 4 dB.

  18. Parsing and Quantification of Raw Orbitrap Mass Spectrometer Data Using RawQuant.

    Science.gov (United States)

    Kovalchik, Kevin A; Moggridge, Sophie; Chen, David D Y; Morin, Gregg B; Hughes, Christopher S

    2018-06-01

    Effective analysis of protein samples by mass spectrometry (MS) requires careful selection and optimization of a range of experimental parameters. As the output from the primary detection device, the "raw" MS data file can be used to gauge the success of a given sample analysis. However, the closed-source nature of the standard raw MS file can complicate effective parsing of the data contained within. To ease and increase the range of analyses possible, the RawQuant tool was developed to enable parsing of raw MS files derived from Thermo Orbitrap instruments to yield meta and scan data in an openly readable text format. RawQuant can be commanded to export user-friendly files containing MS1, MS2, and MS3 metadata as well as matrices of quantification values based on isobaric tagging approaches. In this study, the utility of RawQuant is demonstrated in several scenarios: (1) reanalysis of shotgun proteomics data for the identification of the human proteome, (2) reanalysis of experiments utilizing isobaric tagging for whole-proteome quantification, and (3) analysis of a novel bacterial proteome and synthetic peptide mixture for assessing quantification accuracy when using isobaric tags. Together, these analyses successfully demonstrate RawQuant for the efficient parsing and quantification of data from raw Thermo Orbitrap MS files acquired in a range of common proteomics experiments. In addition, the individual analyses using RawQuant highlight parametric considerations in the different experimental sets and suggest targetable areas to improve depth of coverage in identification-focused studies and quantification accuracy when using isobaric tags.

  19. Neural Semantic Parsing by Character-based Translation: Experiments with Abstract Meaning Representations

    NARCIS (Netherlands)

    van Noord, Rik; Bos, Johannes

    2017-01-01

    We evaluate the character-level translation method for neural semantic parsing on a large corpus of sentences annotated with Abstract Meaning Representations (AMRs). Using a sequence-to-sequence model, and some trivial preprocessing and postprocessing of AMRs, we obtain a baseline accuracy of 53.1

  20. Characteristics analysis of acupuncture electroencephalograph based on mutual information Lempel-Ziv complexity

    International Nuclear Information System (INIS)

    Luo Xi-Liu; Wang Jiang; Deng Bin; Wei Xi-Le; Bian Hong-Rui; Han Chun-Xiao

    2012-01-01

    As a convenient approach to the characterization of cerebral cortex electrical information, electroencephalograph (EEG) has potential clinical application in monitoring the effects of acupuncture. In this paper, a method composed of the mutual information method and the Lempel-Ziv complexity method (MILZC) is proposed to investigate the effects of acupuncture on the complexity of information exchanges between different brain regions based on EEGs. In the experiments, eight subjects are manually acupunctured at the 'Zusanli' acupuncture point (ST-36) with different frequencies (i.e., 50, 100, 150, and 200 times/min) and the EEGs are recorded simultaneously. First, MILZC values are compared in general. Then average brain connections are used to quantify the effectiveness of acupuncture under the above four frequencies. Finally, significance index P values are used to study the spatiality of the acupuncture effect on the brain. Three main findings are obtained: (i) MILZC values increase during acupuncture; (ii) manual acupunctures (MAs) at 100 times/min and 150 times/min are more effective than at 50 times/min and 200 times/min; (iii) contralateral hemisphere activation is more prominent than the ipsilateral hemisphere's. All these findings suggest that acupuncture contributes to an increase in the complexity of brain information exchange and that the MILZC method can successfully describe these changes.

  1. A hierarchical methodology for urban facade parsing from TLS point clouds

    Science.gov (United States)

    Li, Zhuqiang; Zhang, Liqiang; Mathiopoulos, P. Takis; Liu, Fangyu; Zhang, Liang; Li, Shuaipeng; Liu, Hao

    2017-01-01

    The effective and automated parsing of building facades from terrestrial laser scanning (TLS) point clouds of urban environments is an important research topic in the GIS and remote sensing fields. It is also challenging because of the complexity and great variety of the available 3D building facade layouts, as well as the noise and missing data in the input TLS point clouds. In this paper, we introduce a novel methodology for the accurate and computationally efficient parsing of urban building facades from TLS point clouds. The main novelty of the proposed methodology is that it is a systematic and hierarchical approach that considers, in an adaptive way, the semantic and underlying structures of the urban facades for segmentation and subsequent accurate modeling. Firstly, the available input point cloud is decomposed into depth planes based on a data-driven method; such layer decomposition enables similarity detection in each depth plane layer. Secondly, the labeling of the facade elements is performed using the SVM classifier in combination with our proposed BieS-ScSPM algorithm. The labeling outcome is then augmented with weak architectural knowledge. Thirdly, least-squares fitted normalized gray accumulative curves are applied to detect regular structures, and a binarization dilation extraction algorithm is used to partition facade elements. A dynamic line-by-line division is further applied to extract the boundaries of the elements. The 3D geometrical facade models are then reconstructed by optimizing facade elements across depth plane layers. We have evaluated the performance of the proposed method using several TLS facade datasets. Qualitative and quantitative performance comparisons with several other state-of-the-art methods dealing with the same facade parsing problem have demonstrated its superior performance and effectiveness in improving segmentation accuracy.

  2. Extending TF1: Argument parsing, function composition, and vectorization

    CERN Document Server

    Tsang Mang Kin, Arthur Leonard

    2017-01-01

    In this project, we extend the functionality of the TF1 function class in ROOT. We add argument parsing, making it possible to freely pass variables and parameters into pre-defined and user-defined functions. We also introduce a syntax to use certain compositions of functions, namely normalized sums and convolutions, directly in TF1. Finally, we introduce some simple vectorization functionality in TF1 and demonstrate the potential to speed up parallelizable computations.

  3. Towards a Robuster Interpretive Parsing: learning from overt forms in Optimality Theory

    NARCIS (Netherlands)

    Biró, T.

    2013-01-01

    The input data to grammar learning algorithms often consist of overt forms that do not contain full structural descriptions. This lack of information may contribute to the failure of learning. Past work on Optimality Theory introduced Robust Interpretive Parsing (RIP) as a partial solution to this

  4. Tackling Error Propagation through Reinforcement Learning: A Case of Greedy Dependency Parsing

    NARCIS (Netherlands)

    Le, M.N.; Fokkens, A.S.

    Error propagation is a common problem in NLP. Reinforcement learning explores erroneous states during training and can therefore be more robust when mistakes are made early in a process. In this paper, we apply reinforcement learning to greedy dependency parsing which is known to suffer from error

  5. Process of 3D wireless decentralized sensor deployment using parsing crossover scheme

    Directory of Open Access Journals (Sweden)

    Albert H.R. Ko

    2015-07-01

    Full Text Available A Wireless Sensor Network (WSN) usually consists of numerous wireless devices deployed in a region of interest, each able to collect and process environmental information and communicate with neighboring devices. It can thus be regarded as a Multi-Agent System for territorial security, where individual agents cooperate with each other to avoid duplication of effort and to exploit other agents' capacities. The problem of sensor deployment becomes non-trivial when we consider environmental factors, such as terrain elevations. Due to the fact that all sensors are homogeneous, the chromosomes that encode sensor positions are actually interchangeable, and conventional crossover schemes such as uniform crossover would cause some redundancy as well as over-concentration in certain specific geographical areas. We propose a Parsing Crossover Scheme that intends to reduce redundancy and ease geographical concentration patterns in an effort to facilitate the search. The proposed parsing crossover method demonstrates better performance than uniform crossover under different terrain irregularities.

  6. Automated vocabulary discovery for geo-parsing online epidemic intelligence

    Directory of Open Access Journals (Sweden)

    Freifeld Clark C

    2009-11-01

    Full Text Available Abstract Background Automated surveillance of the Internet provides a timely and sensitive method for alerting on global emerging infectious disease threats. HealthMap is part of a new generation of online systems designed to monitor and visualize, on a real-time basis, disease outbreak alerts as reported by online news media and public health sources. HealthMap is of specific interest for national and international public health organizations and international travelers. A particular task that makes such surveillance useful is the automated discovery of the geographic references contained in the retrieved outbreak alerts. This task is sometimes referred to as "geo-parsing". A typical approach to geo-parsing would demand an expensive training corpus of alerts manually tagged by a human. Results Given that human readers perform this kind of task by using both their lexical and contextual knowledge, we developed an approach which relies on a relatively small expert-built gazetteer, thus limiting the need for human input, but focuses on learning the context in which geographic references appear. We show in a set of experiments that this approach exhibits a substantial capacity to discover geographic locations outside of its initial lexicon. Conclusion The results of this analysis provide a framework for future automated global surveillance efforts that reduce manual input and improve timeliness of reporting.

  7. Characterisation of the Effects of Sleep Deprivation on the Electroencephalogram Using Permutation Lempel–Ziv Complexity, a Non-Linear Analysis Tool

    Directory of Open Access Journals (Sweden)

    Pinar Deniz Tosun

    2017-12-01

    Full Text Available Specific patterns of brain activity during sleep and waking are recorded in the electroencephalogram (EEG). Time-frequency analysis methods have been widely used to analyse the EEG and have identified characteristic oscillations for each vigilance state (VS), i.e., wakefulness, rapid-eye-movement (REM) and non-rapid-eye-movement (NREM) sleep. However, other aspects such as changes of patterns associated with brain dynamics may not be captured unless a non-linear analysis method is used. In this pilot study, Permutation Lempel-Ziv complexity (PLZC), a novel symbolic dynamics analysis method, was used to characterise the changes in the EEG in sleep and wakefulness during baseline and recovery from sleep deprivation (SD). The results obtained with PLZC were contrasted with a related non-linear method, Lempel-Ziv complexity (LZC). Both measure the emergence of new patterns. However, LZC is dependent on the absolute amplitude of the EEG, while PLZC is only dependent on the relative amplitude due to the symbolisation procedure and is thus more resistant to noise. We showed that PLZC discriminates activated brain states associated with wakefulness and REM sleep, which both displayed higher complexity, compared to NREM sleep. Additionally, significantly lower PLZC values were measured in NREM sleep during the recovery period following SD compared to baseline, suggesting a reduced emergence of new activity patterns in the EEG. These findings were validated using PLZC on surrogate data. By contrast, LZC merely reflected changes in the spectral composition of the EEG. Overall, this study implies that PLZC is a robust non-linear complexity measure, which is not dependent on amplitude variations in the signal, and which may be useful for further assessing EEG alterations induced by environmental or pharmacological manipulations.
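
    PLZC, as described, applies Lempel-Ziv phrase counting to a sequence of ordinal (permutation) patterns rather than amplitude-thresholded symbols, which is what makes it amplitude-independent; a compact sketch assuming order-3 patterns and one common normalization follows.

```python
import math
from itertools import permutations
import numpy as np

def ordinal_symbols(x, order=3):
    """Map each length-`order` window to the index of its rank pattern."""
    x = np.asarray(x, dtype=float)
    pats = {p: k for k, p in enumerate(permutations(range(order)))}
    return [pats[tuple(int(v) for v in np.argsort(x[i:i + order]))]
            for i in range(len(x) - order + 1)]

def _occurs(hay, needle):
    m = len(needle)
    return any(hay[j:j + m] == needle for j in range(len(hay) - m + 1))

def lz_phrases(seq):
    """LZ76-style phrase count over an arbitrary symbol alphabet."""
    s, i, c = tuple(seq), 0, 0
    while i < len(s):
        l = 1
        while i + l <= len(s) and _occurs(s[:i + l - 1], s[i:i + l]):
            l += 1
        c += 1
        i += l
    return c

def plzc(x, order=3):
    sym = ordinal_symbols(x, order)
    n = len(sym)
    # one common normalization: phrases * log_k(n) / n, alphabet size k = order!
    return lz_phrases(sym) * math.log(n, math.factorial(order)) / n

print(plzc(np.random.randn(2000)))    # near 1 for white noise
print(plzc(np.sin(np.arange(2000))))  # lower for a regular signal
```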

  8. Creating Parsing Lexicons from Semantic Lexicons Automatically and Its Applications

    National Research Council Canada - National Science Library

    Ayan, Necip F; Dorr, Bonnie

    2002-01-01

    ...). We also present the effects of using such a lexicon on the parser performance. The advantage of automating the process is that the same technique can be applied directly to lexicons we have for other languages, for example, Arabic, Chinese, and Spanish. The results indicate that our method will help us generate parsing lexicons which can be used by a broad-coverage parser that runs on different languages.

  9. Cryptanalysis of a family of 1D unimodal maps

    Science.gov (United States)

    Md Said, Mohamad Rushdan; Hina, Aliyu Danladi; Banerjee, Santo

    2017-07-01

    In this paper, we propose a topologically conjugate map equivalent to the well-known logistic map. This constructed map is defined on the integer domain [0, 2^n) with a view to being used as a random number generator (RNG) based on an integer domain, as is required in classical cryptography. The maps were found to have a one-to-one correspondence between points in their respective defining intervals at n-bit precision. The dynamics of the proposed map are similar to those of the logistic map in terms of the Lyapunov exponents as a function of the control parameter. This similarity between the curves indicates topological conjugacy between the maps. With a view to application in cryptography as a pseudo-random number generator (PRNG), the complexity of the constructed map as a source of randomness is determined using both the permutation entropy (PE) and the Lempel-Ziv (LZ-76) complexity measures, and the results are compared with numerical simulations.
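
    Of the two randomness measures used, permutation entropy has a particularly short definition: the Shannon entropy of the ordinal-pattern distribution, normalized by log(order!). A sketch, applied to a logistic-map orbit in the fully chaotic regime (r = 4), is shown below.

```python
import math
from collections import Counter

def permutation_entropy(x, order=3):
    """Normalized Shannon entropy of ordinal patterns (1.0 = fully random)."""
    patterns = Counter(
        tuple(sorted(range(order), key=lambda k: x[i + k]))
        for i in range(len(x) - order + 1)
    )
    total = sum(patterns.values())
    H = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return H / math.log(math.factorial(order))

# Logistic map x_{n+1} = r * x_n * (1 - x_n) at r = 4 (fully chaotic)
x, orbit = 0.3, []
for _ in range(5000):
    x = 4.0 * x * (1.0 - x)
    orbit.append(x)
print(permutation_entropy(orbit, order=3))  # < 1: some patterns are forbidden
```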

  10. Lossless compression for 3D PET

    International Nuclear Information System (INIS)

    Macq, B.; Sibomana, M.; Coppens, A.; Bol, A.; Michel, C.

    1994-01-01

    A new adaptive scheme is proposed for the lossless compression of positron emission tomography (PET) sinogram data. The algorithm uses an adaptive differential pulse code modulator (ADPCM) followed by a universal variable length coder (UVLC). Contrasting with Lempel-Ziv (LZ), which operates on a whole sinogram, UVLC operates very efficiently on short data blocks. This is a major advantage for real-time implementation. The algorithm is adaptive and codes data after some on-line estimations of the statistics inside each block. Its efficiency is tested when coding dynamic and static scans from two PET scanners, and it asymptotically reaches the entropy limit for long frames. For very short 3D frames, the new algorithm is twice as efficient as LZ. Since an ASIC implementing a similar UVLC scheme is available today, it should be able to sustain PET data lossless compression and decompression at a rate of 27 MBytes/sec. This algorithm is consequently a good candidate for the next generation of lossless compression engines

  11. Exploring Human Activity Patterns Using Taxicab Static Points

    Directory of Open Access Journals (Sweden)

    Bin Jiang

    2012-06-01

    Full Text Available This paper explores the patterns of human activities within a geographical space by adopting taxicab static points, which refer to locations with zero speed along the tracking trajectory. We report the findings from both aggregated and individual perspectives. Results at the aggregated level indicate the following: (1) Human activities exhibit an obvious regularity in time; for example, there is a burst of activity during weekend nights and a lull during the week. (2) They show a remarkable spatial drifting pattern, which strengthens our understanding of the activities in any given place. (3) Activities are heterogeneous in space irrespective of their drifting with time. These aggregated results not only help in city planning, but also facilitate traffic control and management. On the other hand, investigations at the individual level suggest that (4) activities witnessed by one taxicab have a different temporal regularity from those witnessed by another, and (5) each regularity implies a high level of predictability with low entropy when applying the Lempel-Ziv algorithm.
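
    The entropy estimate mentioned in (5) is commonly obtained with a Lempel-Ziv-based estimator, which approximates the entropy rate from the average length of the shortest substring starting at each position that has not appeared before; the naive O(n^2) sketch below is a standard variant, and whether this exact form was used here is an assumption.

```python
import math
import random

def _occurs(hay, needle):
    m = len(needle)
    return any(hay[j:j + m] == needle for j in range(len(hay) - m + 1))

def lz_entropy_rate(seq):
    """Lempel-Ziv entropy-rate estimate (bits/symbol):
    S ~ n * log2(n) / sum(Lambda_i), where Lambda_i is the length of the
    shortest substring starting at i that does not occur in seq[:i]."""
    s = list(seq)
    n = len(s)
    lambdas = 0
    for i in range(n):
        l = 1
        while i + l <= n and _occurs(s[:i], s[i:i + l]):
            l += 1
        lambdas += l
    return n * math.log2(n) / lambdas

# Low entropy for a regular daily routine, higher for a random one:
print(lz_entropy_rate("HWHWHWHWHWHW" * 10))
print(lz_entropy_rate([random.choice("HWGS") for _ in range(120)]))
```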

  12. A Tandem Semantic Interpreter for Incremental Parse Selection

    Science.gov (United States)

    1990-09-28

syntactic role to semantic role. An example from Fillmore [10] is the sentence I copied that letter, which can be uttered when pointing either to...person. We want the word fiddle to have the sort predicate violin as its lexical interpretation, however, not thing. Thus, for Ross went for his fiddle...to receive an interpretation, a sort hierarchy is needed to establish that all violins are things. A well-structured sort hierarchy allows newly added

  13. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    Science.gov (United States)

    Kwak, Nojun

    2016-05-20

Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centering step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can be used directly in any incremental method to implement its kernel version. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are applied to problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
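To make the geometry concrete, here is a rough numpy sketch of the two-step flavor: batch coordinates come from factorizing the uncentered kernel matrix, and a new sample is embedded through kernel evaluations against the old data, leaving old coordinates untouched. The pseudoinverse embedding below is the generic kernel-coordinate (Nyström-style) construction, assumed for illustration; it is not the paper's exact INPT algorithm.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between row-sample arrays A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))           # initial training samples

# Batch step: factor K = Y^T Y, so Y = Lambda^(1/2) U^T (no centering)
K = rbf(X, X)
lam, U = np.linalg.eigh(K)
keep = lam > 1e-10
Y = np.sqrt(lam[keep])[:, None] * U[:, keep].T    # coords of old samples (columns)

# Incremental step: embed a new sample without touching Y
x_new = rng.normal(size=(1, 3))
k_new = rbf(X, x_new)                              # kernel vector vs. old data
y_new = (U[:, keep] / np.sqrt(lam[keep])).T @ k_new   # pseudoinverse map
print(Y.shape, y_new.shape)
```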

  14. Compressing DNA sequence databases with coil

    Directory of Open Access Journals (Sweden)

    Hendy Michael D

    2008-05-01

Background: Publicly available DNA sequence databases such as GenBank are large and growing at an exponential rate. The sheer volume of data presents serious storage and data communication problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression, an approach that rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results: We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression; the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion: coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
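The gzip baseline the abstract compares against is easy to reproduce. A minimal sketch with Python's zlib (which implements DEFLATE, the same LZ77-plus-Huffman scheme gzip uses); the file name is a hypothetical placeholder:

```python
import zlib

with open("est_sequences.fasta", "rb") as fh:   # hypothetical database flat file
    raw = fh.read()

packed = zlib.compress(raw, level=9)            # DEFLATE, as used by gzip
print(f"ratio: {len(raw) / len(packed):.2f}x "
      f"({len(raw)} -> {len(packed)} bytes)")
```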

  15. Learning for Semantic Parsing with Kernels under Various Forms of Supervision

    Science.gov (United States)

    2007-08-01

natural language sentences to their formal executable meaning representations. This is a challenging problem and is critical for developing computing...sentences are semantically tractable. This indicates that Geoquery is a more challenging domain for semantic parsing than ATIS. In the past, there have been a...Combining parsers. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/VLC-99), pp. 187–194

  16. Using machine learning to parse breast pathology reports.

    Science.gov (United States)

    Yala, Adam; Barzilay, Regina; Salama, Laura; Griffin, Molly; Sollender, Grace; Bardia, Aditya; Lehman, Constance; Buckley, Julliette M; Coopey, Suzanne B; Polubriaginof, Fernanda; Garber, Judy E; Smith, Barbara L; Gadd, Michele A; Specht, Michelle C; Gudewicz, Thomas M; Guidi, Anthony J; Taghian, Alphonse; Hughes, Kevin S

    2017-01-01

Extracting information from electronic medical records is a time-consuming and expensive process when done manually. Rule-based and machine learning techniques are two approaches to solving this problem. In this study, we trained a machine learning model on pathology reports to extract pertinent tumor characteristics, which enabled us to create a large database of attribute-searchable pathology reports. This database can be used to identify cohorts of patients with characteristics of interest. We collected a total of 91,505 breast pathology reports from three Partners hospitals (Massachusetts General Hospital, Brigham and Women's Hospital, and Newton-Wellesley Hospital) covering the period from 1978 to 2016. We trained our system with annotations from two datasets, consisting of 6295 and 10,841 manually annotated reports. The system extracts 20 separate categories of information, including atypia types and various tumor characteristics such as receptors. We also report a learning-curve analysis showing how much annotation the model needs to perform reasonably well. The model accuracy was tested on 500 reports that did not overlap with the training set. The model achieved an accuracy of 90% for correctly parsing all carcinoma and atypia categories for a given patient; the average accuracy for individual categories was 97%. Using this classifier, we created a database of 91,505 parsed pathology reports. Our learning-curve analysis shows that the model can achieve reasonable results even when trained on few annotations. We developed a user-friendly interface to the database that allows physicians to easily identify patients with target characteristics and export the matching cohort. This model has the potential to reduce the effort required for analyzing large amounts of data from medical records, and to minimize the cost and time required to glean scientific insight from these data.
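The abstract does not name the model; as a hedged stand-in for the general recipe (fit a supervised classifier on annotated report text, one category at a time), here is a minimal scikit-learn sketch. The report snippets and the single binary label are invented placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# toy stand-ins for annotated report snippets and one binary category
reports = [
    "ductal carcinoma in situ, low grade, margins negative",
    "benign breast tissue with usual ductal hyperplasia",
    "invasive ductal carcinoma with DCIS component",
    "fibroadenoma, no atypia identified",
]
labels = [1, 0, 1, 0]   # 1 = carcinoma category present (illustrative)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(reports, labels)
print(clf.predict(["extensive dcis, margins involved"]))
```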

  17. KEGGParser: parsing and editing KEGG pathway maps in Matlab.

    Science.gov (United States)

    Arakelyan, Arsen; Nersisyan, Lilit

    2013-02-15

The KEGG pathway database is a collection of manually drawn pathway maps accompanied by KGML-format files intended for use in automatic analysis. KGML files, however, do not contain the information required to completely reproduce all the events indicated in the static image of a pathway map. Several parsers and editors of KEGG pathways exist for processing KGML files. We introduce KEGGParser, a MATLAB-based tool for KEGG pathway parsing, semi-automatic fixing, editing, visualization and analysis in the MATLAB environment. It also works with Scilab. The source code is available at http://www.mathworks.com/matlabcentral/fileexchange/37561.
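KGML files are ordinary XML, so the core extraction step can be done with standard tooling. The sketch below pulls the entry and relation elements defined by the KGML specification; the file name is hypothetical, and this is not KEGGParser's own code (which is MATLAB).

```python
import xml.etree.ElementTree as ET

root = ET.parse("hsa04010.xml").getroot()   # a KGML pathway file

# KGML nodes: <entry id=... name=... type=...>
nodes = {e.get("id"): (e.get("name"), e.get("type"))
         for e in root.iter("entry")}
# KGML edges: <relation entry1=... entry2=... type=...>
edges = [(r.get("entry1"), r.get("entry2"), r.get("type"))
         for r in root.iter("relation")]

print(len(nodes), "entries,", len(edges), "relations")
```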

18. (In)variability in the Samoan syntax/prosody interface and consequences for syntactic parsing

    Directory of Open Access Journals (Sweden)

    Kristine M. Yu

    2017-10-01

While it has long been clear that prosody should be part of the grammar influencing the action of the syntactic parser, how to bring prosody into computational models of syntactic parsing has remained unclear. The challenge is that prosodic information in the speech signal is the result of the interaction of a multitude of conditioning factors. From this output, how can we factor out the contribution of syntax to conditioning prosodic events? And if we are able to do that factorization and define a production model from the syntactic grammar to a prosodified utterance, how can we then define a comprehension model based on that production model? In this case study of the Samoan morphosyntax-prosody interface, we show how to factor out the influence of syntax on prosody in empirical work and confirm that there is invariable morphosyntactic conditioning of high edge tones. We then show how this invariability can be precisely characterized and used by a parsing model that factors the various influences of morphosyntax on tonal events. We expect that models of this kind can be extended to more comprehensive perspectives on Samoan and to languages where the syntax/prosody coupling is more complex.

  19. FastaValidator: an open-source Java library to parse and validate FASTA formatted sequences.

    Science.gov (United States)

    Waldmann, Jost; Gerken, Jan; Hankeln, Wolfgang; Schweer, Timmy; Glöckner, Frank Oliver

    2014-06-14

Advances in sequencing technologies challenge the efficient importing and validation of FASTA-formatted sequence data, which is still a prerequisite for most bioinformatic tools and pipelines. Comparative analysis of commonly used Bio* frameworks (BioPerl, BioJava and Biopython) shows that their scalability and accuracy are hampered. FastaValidator represents a platform-independent, standardized, lightweight software library written in the Java programming language. It targets computer scientists and bioinformaticians writing software that needs to parse large amounts of sequence data quickly and accurately. For end users, FastaValidator includes an interactive out-of-the-box validation of FASTA-formatted files, as well as a non-interactive mode designed for high-throughput validation in software pipelines. The accuracy and performance of the FastaValidator library qualify it for large data sets such as those commonly produced by massively parallel (NGS) technologies. It offers scientists a fast, accurate and standardized method for parsing and validating FASTA-formatted sequence data.
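The record grammar being validated is simple: a header line starting with '>' followed by one or more sequence lines over a fixed alphabet. A minimal Python sketch of such a check, restricted to nucleotide data and unrelated to FastaValidator's actual Java implementation:

```python
import re

SEQ_LINE = re.compile(r"^[ACGTNacgtn]+$")   # nucleotide alphabet; widen for proteins

def validate_fasta(path: str) -> bool:
    """Raise ValueError on the first malformed line; True if the file is valid."""
    have_header, have_seq = False, False
    with open(path) as fh:
        for lineno, line in enumerate(fh, 1):
            line = line.rstrip("\n")
            if not line:
                continue
            if line.startswith(">"):
                if have_header and not have_seq:
                    raise ValueError(f"line {lineno}: header without sequence")
                have_header, have_seq = True, False
            elif not have_header or not SEQ_LINE.match(line):
                raise ValueError(f"line {lineno}: invalid sequence line")
            else:
                have_seq = True
    return have_header and have_seq
```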

  20. Parsing the Dictionary of Modern Literary Russian Language with the Method of SCD Configurations. The Lexicographic Modeling

    Directory of Open Access Journals (Sweden)

    Neculai Curteanu

    2012-05-01

This paper extends the experience of parsing five other, sensibly different, Romanian, French, and German large dictionaries to DMLRL (Dictionary of Modern Literary Russian Language) [18], using the optimal and portable parsing method of SCD (Segmentation-Cohesion-Dependency) configurations [7], [11], [15]. The purpose of the present paper is to elaborate the lexicographic modeling of DMLRL, which necessarily precedes the sense-tree parsing of dictionary entries. The following three SCD configurations are described: the first separates the lexicographic segments in a DMLRL entry; the second concentrates on the SCD marker classes and their hypergraph hierarchy for DMLRL primary and secondary senses; and the third carries the same modeling process down to the atomic sense definitions and their examples-to-definitions. The dependency hypergraph of the third SCD configuration, interconnected with that of the second, is specified completely at the atomic sense level for the first time, exceeding the SCD configuration modeling for the other five dictionaries [15], [14]. Numerous examples from DMLRL and a comparison with the DLR-DAR Romanian thesaurus-dictionary support the proposed DMLRL lexicographic modeling.

  1. Parsing a cognitive task: a characterization of the mind's bottleneck.

    Directory of Open Access Journals (Sweden)

    Mariano Sigman

    2005-02-01

Parsing a mental operation into components, characterizing the parallel or serial nature of this flow, and understanding what each process ultimately contributes to response time are fundamental questions in cognitive neuroscience. Here we show how a simple theoretical model leads to an extended set of predictions concerning the distribution of response time and its alteration by simultaneous performance of another task. The model provides a synthesis of psychological refractory period and random-walk models of response time. It merely assumes that a task consists of three consecutive stages (perception, decision based on noisy integration of evidence, and response) and that the perceptual and motor stages can operate simultaneously with stages of another task, while the central decision process constitutes a bottleneck. We designed a number-comparison task that provided a thorough test of the model by allowing independent variations in number notation, numerical distance, response complexity, and temporal asynchrony relative to an interfering probe task of tone discrimination. The results revealed a parsing of the comparison task in which each variable affects only one stage. Numerical distance affects the integration process, which is the only step that cannot proceed in parallel and makes a major contribution to response time variability. The other stages, mapping the numeral to an internal quantity and executing the motor response, can be carried out in parallel with another task; changing the duration of these processes has no significant effect on the variance.

  2. PyParse: a semiautomated system for scoring spoken recall data.

    Science.gov (United States)

    Solway, Alec; Geller, Aaron S; Sederberg, Per B; Kahana, Michael J

    2010-02-01

    Studies of human memory often generate data on the sequence and timing of recalled items, but scoring such data using conventional methods is difficult or impossible. We describe a Python-based semiautomated system that greatly simplifies this task. This software, called PyParse, can easily be used in conjunction with many common experiment authoring systems. Scored data is output in a simple ASCII format and can be accessed with the programming language of choice, allowing for the identification of features such as correct responses, prior-list intrusions, extra-list intrusions, and repetitions.

  3. Motion based parsing for video from observational psychology

    Science.gov (United States)

    Kokaram, Anil; Doyle, Erika; Lennon, Daire; Joyeux, Laurent; Fuller, Ray

    2006-01-01

In psychology it is common to conduct studies involving the observation of humans undertaking some task. The sessions are typically recorded on video and used for subjective visual analysis. The subjective analysis is tedious and time-consuming, not only because much useless video material is recorded but also because subjective measures of human behaviour are not necessarily repeatable. This paper presents tools using content-based video analysis that allow automated parsing of video from one such study involving dyslexia. The tools rely on implicit measures of human motion that can be generalised to other applications in the domain of human observation. Results comparing quantitative assessment of human motion with subjective assessment are also presented, illustrating that the system is a useful scientific tool.

  4. Parsing partial molar volumes of small molecules: a molecular dynamics study.

    Science.gov (United States)

    Patel, Nisha; Dubins, David N; Pomès, Régis; Chalikian, Tigran V

    2011-04-28

We used molecular dynamics (MD) simulations in conjunction with the Kirkwood-Buff theory to compute the partial molar volumes of a number of small solutes of various chemical natures. We repeated our computations using modified pair potentials, first in the absence of the Coulombic term and, second, in the absence of both the Coulombic and the attractive Lennard-Jones terms. Comparison of our results with experimental data and with the volumetric results of Monte Carlo simulations with hard-sphere potentials and scaled-particle-theory-based computations led us to conclude that, for small solutes, the partial molar volume computed with the Lennard-Jones potential in the absence of the Coulombic term nearly coincides with the cavity volume. On the other hand, MD simulations carried out with pair interaction potentials containing only the repulsive Lennard-Jones term produce unrealistically large partial molar volumes of solutes, close to their excluded volumes. Our simulation results are in good agreement with the reported schemes for parsing partial molar volume data on small solutes. In particular, our determined interaction volumes and the thickness of the thermal volume for individual compounds are in good agreement with empirical estimates. This work is the first computational study that supports and lends credence to the practical algorithms for parsing partial molar volume data that are currently in use for molecular interpretations of volumetric data.

5. Analysis of the Azari Language Based on Parsing Using Link Grammar

    Directory of Open Access Journals (Sweden)

    Maryam Arabzadeh

    2014-09-01

There are different classes of theories for the natural language syntactic parsing problem and for creating the related grammars. This paper presents a syntactic grammar developed in the link grammar formalism for Turkish, an agglutinative language. In the link grammar formalism, the words of a sentence are linked with each other depending on their syntactic roles. Turkish has complex derivational and inflectional morphology, and derivational and inflectional morphemes play important syntactic roles in sentences. In order to develop a link grammar for Turkish, the lexical parts in the morphological representations of Turkish words are removed, and links are created depending on the part-of-speech tags and inflectional morphemes in words. Furthermore, a derived word is separated at the derivational boundaries in order to treat each derivational morpheme as a special distinct word and allow it to be linked with the rest of the sentence. The derivational morphemes of a word are also linked with each other with special links to indicate that they are parts of the same word. The adapted link grammar formalism for Turkish provides flexibility for linkage construction, and similar methods can be used for other languages with complex morphology. Finally, using the Delphi programming language, the link grammar for the Azeri language was developed and implemented, and then evaluated and tested on 250 randomly selected sentences. For 84.31% of the sentences, the result set of the parser contains the correct parse.

  6. Attribute And-Or Grammar for Joint Parsing of Human Pose, Parts and Attributes.

    Science.gov (United States)

    Park, Seyoung; Nie, Xiaohan; Zhu, Song-Chun

    2017-07-25

This paper presents an attribute and-or grammar (A-AOG) model for jointly inferring human body pose and human attributes in a parse graph, with attributes augmented to nodes in the hierarchical representation. In contrast to other popular methods in the current literature that train separate classifiers for poses and individual attributes, our method explicitly represents the decomposition and articulation of body parts and accounts for the correlations between poses and attributes. The A-AOG model is an amalgamation of three traditional grammar formulations: (i) a phrase structure grammar representing the hierarchical decomposition of the human body from whole to parts; (ii) a dependency grammar modeling the geometric articulation by a kinematic graph of the body pose; and (iii) an attribute grammar accounting for the compatibility relations between different parts in the hierarchy so that their appearances follow a consistent style. The parse graph outputs human detection, pose estimation, and attribute prediction simultaneously, which is intuitive and interpretable. We conduct experiments on two tasks on two datasets, and the experimental results demonstrate the advantage of joint modeling in comparison with computing poses and attributes independently. Furthermore, our model obtains better performance over existing methods for both the pose estimation and attribute prediction tasks.

  7. Tackling Error Propagation through Reinforcement Learning: A Case of Greedy Dependency Parsing

    OpenAIRE

    Le, Minh; Fokkens, Antske

    2017-01-01

    Error propagation is a common problem in NLP. Reinforcement learning explores erroneous states during training and can therefore be more robust when mistakes are made early in a process. In this paper, we apply reinforcement learning to greedy dependency parsing which is known to suffer from error propagation. Reinforcement learning improves accuracy of both labeled and unlabeled dependencies of the Stanford Neural Dependency Parser, a high performance greedy parser, while maintaining its eff...

  8. Machine learning to parse breast pathology reports in Chinese.

    Science.gov (United States)

    Tang, Rong; Ouyang, Lizhi; Li, Clara; He, Yue; Griffin, Molly; Taghian, Alphonse; Smith, Barbara; Yala, Adam; Barzilay, Regina; Hughes, Kevin

    2018-01-29

Large structured databases of pathology findings are valuable in deriving new clinical insights. However, they are labor-intensive to create and generally require manual annotation. There has been some work in the bioinformatics community to support automating this work via machine learning in English. Our contribution is to provide an automated approach to constructing such structured databases in Chinese, and to set the stage for extraction from other languages. We collected 2104 de-identified Chinese benign and malignant breast pathology reports from Hunan Cancer Hospital. Physicians with native Chinese proficiency reviewed the reports and annotated a variety of binary and numerical pathologic entities. After excluding 78 cases with a bilateral lesion in the same report, 1216 cases were used as a training set for the algorithm, which was then refined on 405 development cases. The natural language processing algorithm was tested on the remaining 405 cases to evaluate the machine learning outcome. The model was used to extract 13 binary entities and 8 numerical entities. When compared to physicians with native Chinese proficiency, the model showed a per-entity accuracy from 91% to 100% for all common diagnoses on the test set. The overall accuracy was 98% for binary entities and 95% for numerical entities. In a per-report evaluation of binary entities with more than 100 training cases, 85% of all testing reports were completely correct and 11% had an error in 1 out of 22 entities. We have demonstrated that Chinese breast pathology reports can be automatically parsed into structured data using standard machine learning approaches. The results of our study demonstrate that techniques effective in parsing English reports can be scaled to other languages.

9. Mobile Backend as a Service: the pros and cons of Parse

    OpenAIRE

    Nguyen, Phu

    2016-01-01

    Using a pre-built backend for an application is an affordable and swift approach to prototyping new application ideas. Mobile Backend as a Service (MBaaS) is the term for pre-built backend systems that developers can use. However, it is advisable to understand the pros and the cons of an MBaaS before deciding to use it. The aim of the thesis was to determine the advantages and disadvantages of using Parse, a provider of mobile backend as a service, in application development. Parse’s defin...

  10. Increased spontaneous MEG signal diversity for psychoactive doses of ketamine, LSD and psilocybin

    Science.gov (United States)

    Schartner, Michael M.; Carhart-Harris, Robin L.; Barrett, Adam B.; Seth, Anil K.; Muthukumaraswamy, Suresh D.

    2017-04-01

What is the level of consciousness of the psychedelic state? Empirically, measures of neural signal diversity such as entropy and Lempel-Ziv (LZ) complexity score higher for wakeful rest than for states with a lower level of consciousness, such as propofol-induced anesthesia. Here we compute these measures for spontaneous magnetoencephalographic (MEG) signals from humans during altered states of consciousness induced by three psychedelic substances: psilocybin, ketamine and LSD. For all three, we find reliably higher spontaneous signal diversity, even when controlling for spectral changes. This increase is most pronounced for the single-channel LZ complexity measure, and hence for temporal, as opposed to spatial, signal diversity. We also uncover selective correlations between changes in signal diversity and phenomenological reports of the intensity of the psychedelic experience. This is the first time that these measures have been applied to the psychedelic state and, crucially, the first time they have yielded values exceeding those of normal waking consciousness. These findings suggest that the sustained occurrence of psychedelic phenomenology constitutes an elevated level of consciousness, as measured by neural signal diversity.

  11. A method for rapid similarity analysis of RNA secondary structures

    Directory of Open Access Journals (Sweden)

    Liu Na

    2006-11-01

Background: Owing to the rapid expansion of RNA structure databases in recent years, efficient methods for structure comparison are in demand for function prediction and evolutionary analysis. Usually, the similarity of RNA secondary structures is evaluated based on tree models and dynamic programming algorithms. We present here a new method for the similarity analysis of RNA secondary structures. Results: Three sets of real data have been used as input for the example applications: Set I includes structures from 5S rRNAs; Set II includes secondary structures from RNase P and RNase MRP; Set III includes structures from 16S rRNAs. Reasonable phylogenetic trees are derived for these three sets of data using our method. Moreover, our program runs faster than some existing ones. Conclusion: The well-known Lempel-Ziv algorithm can efficiently extract the information on repeated patterns encoded in RNA secondary structures, making our method an alternative for analyzing the similarity of RNA secondary structures. This method will also be useful to researchers who are interested in evolutionary analysis.
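A quick way to get the flavor of compression-based structure comparison is the normalized compression distance. The sketch below uses zlib as a stand-in compressor (the paper's method builds directly on Lempel-Ziv pattern extraction, not on zlib) and invented dot-bracket structure strings.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance with zlib as the compressor."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# RNA secondary structures in dot-bracket notation (invented examples)
s1 = b"(((..(((...))).)))..((..))"
s2 = b"(((..(((..))).)))...((..))"
s3 = b"..((((....))))...(((...)))"
print(ncd(s1, s2), ncd(s1, s3))   # s1 should come out closer to s2 than to s3
```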

  12. Electroencephalogram complexity analysis in children with attention-deficit/hyperactivity disorder during a visual cognitive task.

    Science.gov (United States)

    Zarafshan, Hadi; Khaleghi, Ali; Mohammadi, Mohammad Reza; Moeini, Mahdi; Malmir, Nastaran

    2016-01-01

The aim of this study was to investigate electroencephalogram (EEG) dynamics using complexity analysis in children with attention-deficit/hyperactivity disorder (ADHD) compared with healthy control children performing a cognitive task. Thirty 7-12-year-old children meeting Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) criteria for ADHD and 30 healthy control children underwent an EEG evaluation during a cognitive task, and Lempel-Ziv complexity (LZC) values were computed. There were no significant differences between the ADHD and control groups in age or gender. The mean LZC of the ADHD children was significantly larger than that of the healthy children over the right anterior and right posterior regions during cognitive performance. In the ADHD group, the complexity of the right hemisphere was higher than that of the left hemisphere, whereas in the control group the complexity of the left hemisphere was higher than that of the right. Although fronto-striatal dysfunction is considered conclusive evidence for the pathophysiology of ADHD, our mental arithmetic task has provided evidence of structural and functional changes in the posterior regions and probably the cerebellum in ADHD.

  13. Modeling and complexity of stochastic interacting Lévy type financial price dynamics

    Science.gov (United States)

    Wang, Yiduan; Zheng, Shenzhou; Zhang, Wei; Wang, Jun; Wang, Guochao

    2018-06-01

In an attempt to reproduce and investigate the nonlinear dynamics of security markets, a novel nonlinear random interacting price dynamics, considered as a Lévy-type process, is developed and investigated by combining lattice-oriented percolation and Potts dynamics, which model the intrinsic random fluctuation and the fluctuation caused by the spread of the investors' trading attitudes, respectively. To better understand the fluctuation complexity properties of the proposed model, complexity analyses of the random logarithmic price return and the corresponding volatility series are performed, including power-law distribution, Lempel-Ziv complexity and fractional sample entropy. In order to verify the rationality of the proposed model, corresponding studies of actual security market datasets are also carried out for comparison. The empirical results reveal that this financial price model can reproduce some important complexity features of actual security markets to some extent. The complexity of returns decreases as the parameters γ1 and β increase; furthermore, the volatility series exhibit lower complexity than the return series.

  14. New approach of financial volatility duration dynamics by stochastic finite-range interacting voter system.

    Science.gov (United States)

    Wang, Guochao; Wang, Jun

    2017-01-01

We investigate the fluctuation behaviors of financial volatility duration dynamics. A new concept of volatility two-component range intensity (VTRI) is developed, which combines the maximal variation range of volatility intensity and the shortest passage time of duration, and can quantify investment risk in financial markets. In an attempt to study and describe the nonlinear complex properties of VTRI, a random agent-based financial price model is developed based on the finite-range interacting biased voter system. The autocorrelation behaviors and the power-law scaling behaviors of the return time series and the VTRI series are investigated. Then, the complexity of the VTRI series of the real markets and the proposed model is analyzed with fuzzy entropy (FuzzyEn) and Lempel-Ziv complexity. In this process, we apply cross-fuzzy entropy (C-FuzzyEn) to study the asynchrony of pairs of VTRI series. The empirical results reveal that the proposed model exhibits complex behaviors similar to those of the actual markets and indicate that the proposed stock VTRI series analysis and the financial model are meaningful and feasible to some extent.

  15. Generative complexity of Gray-Scott model

    Science.gov (United States)

    Adamatzky, Andrew

    2018-03-01

In the Gray-Scott reaction-diffusion system, one reactant is constantly fed into the system while another is reproduced by consuming the supplied reactant and is also converted to an inert product. The rate of feeding one reactant into the system and the rate of removing the other from the system determine the configurations of concentration profiles: stripes, spots, waves. We calculate the generative complexity (a morphological complexity of concentration profiles grown from a point-wise perturbation of the medium) of the Gray-Scott system for a range of feeding and removal rates. The morphological complexity is evaluated using Shannon entropy, Simpson diversity, an approximation of Lempel-Ziv complexity, and expressivity (Shannon entropy divided by space-filling). We analyse the behaviour of the systems with the highest values of generative morphological complexity and show that the Gray-Scott systems expressing the highest levels of complexity are composed of wave-fragments (similar to wave-fragments in sub-excitable media) and travelling localisations (similar to quasi-dissipative solitons and gliders in Conway's Game of Life).
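Two of the listed measures reduce to simple functions of a histogram of local configurations. A sketch for a discretized concentration field follows; counting 3x3 blocks as the "categories" is an illustrative choice, not necessarily the paper's exact discretization.

```python
import math
from collections import Counter

def pattern_stats(grid):
    """Shannon entropy and Simpson diversity of 3x3 block configurations."""
    h, w = len(grid), len(grid[0])
    blocks = Counter(
        tuple(grid[i + di][j + dj] for di in range(3) for dj in range(3))
        for i in range(h - 2) for j in range(w - 2)
    )
    total = sum(blocks.values())
    ps = [c / total for c in blocks.values()]
    shannon = -sum(p * math.log2(p) for p in ps)
    simpson = 1.0 - sum(p * p for p in ps)   # probability two blocks differ
    return shannon, simpson

# toy binary concentration profile (1 = high concentration of the reactant)
grid = [[1 if (i * j) % 5 < 2 else 0 for j in range(20)] for i in range(20)]
print(pattern_stats(grid))
```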

  16. Freeing Space for NASA: Incorporating a Lossless Compression Algorithm into NASA's FOSS System

    Science.gov (United States)

    Fiechtner, Kaitlyn; Parker, Allen

    2011-01-01

NASA's Fiber Optic Strain Sensing (FOSS) system can gather and store up to 1,536,000 bytes (1.46 megabytes) per second. Since the FOSS system typically acquires hours, or even days, of data, it can gather hundreds of gigabytes for a given test event. To store such large quantities of data more effectively, NASA is modifying a Lempel-Ziv-Oberhumer (LZO) lossless data compression program to compress data as it is being acquired in real time. After proving that the algorithm is capable of compressing the data from the FOSS system, the LZO program will be modified and incorporated into the FOSS system. Implementing an LZO compression algorithm will instantly free up memory space without compromising any of the data obtained. With the memory space made available, the FOSS system can be used more efficiently on test specimens, such as unmanned aerial vehicles (UAVs) that can be in flight for days. By integrating the compression algorithm, the FOSS system can continue gathering data, even on longer flights.
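The integration pattern being described, compressing each acquisition block in real time before writing it out, looks roughly like the sketch below. zlib stands in for LZO here (LZO has a similar block-oriented interface but trades ratio for speed), and the block size, data source, and file name are placeholders.

```python
import os
import zlib

BLOCK = 1_536_000          # one second of FOSS data, per the abstract

def acquire_block() -> bytes:
    """Placeholder for the real-time data source (repetitive, so compressible)."""
    return os.urandom(BLOCK // 100) * 100

with open("foss_run.zbin", "wb") as out:
    for _ in range(5):                             # five seconds of acquisition
        block = acquire_block()
        packed = zlib.compress(block, level=1)     # fast level favors throughput
        out.write(len(packed).to_bytes(4, "big"))  # length prefix for decompression
        out.write(packed)
```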

  17. Lossless compression for 3D PET

    International Nuclear Information System (INIS)

    Macq, B.; Sibomana, M.; Coppens, A.; Bol, A.; Michel, C.; Baker, K.; Jones, B.

    1994-01-01

A new adaptive scheme is proposed for the lossless compression of positron emission tomography (PET) sinogram data. The algorithm uses an adaptive differential pulse code modulator (ADPCM) followed by a universal variable-length coder (UVLC). In contrast with Lempel-Ziv (LZ), which operates on a whole sinogram, UVLC operates very efficiently on short data blocks, a major advantage for real-time implementation. The algorithm is adaptive and codes data after some on-line estimation of the statistics inside each block. Its efficiency is tested when coding dynamic and static scans from two PET scanners, and it asymptotically reaches the entropy limit for long frames. For very short 3D frames, the new algorithm is twice as efficient as LZ. Since an application-specific integrated circuit (ASIC) implementing a similar UVLC scheme is available today, it should be able to sustain PET data lossless compression and decompression at a rate of 27 MBytes/sec. This algorithm is consequently a good candidate for the next generation of lossless compression engines.

  18. Automatic evaluation of intrapartum fetal heart rate recordings: a comprehensive analysis of useful features.

    Science.gov (United States)

    Chudáček, V; Spilka, J; Janků, P; Koucký, M; Lhotská, L; Huptych, M

    2011-08-01

Cardiotocography is the monitoring of fetal heart rate (FHR) and uterine contractions (TOCO), used routinely since the 1960s by obstetricians to detect fetal hypoxia. The evaluation of the FHR in clinical settings is based on macroscopic morphological features and has so far managed to avoid adopting any achievements from the HRV research field. In this work, most of the features utilized for FHR characterization, including FIGO, HRV, nonlinear, wavelet, and time- and frequency-domain features, are investigated and assessed based on their statistical significance in the task of distinguishing the FHR into three FIGO classes. We assess the features on a large data set (552 records) and, unlike other published papers, we use a three-class expert evaluation of the records instead of pH values. We conclude the paper by presenting the best uncorrelated features and their individual ranks of importance according to a meta-analysis of three different ranking methods. The number of accelerations and decelerations, the interval index, as well as Lempel-Ziv complexity and Higuchi's fractal dimension are among the top five features.

  19. Assessment of features for automatic CTG analysis based on expert annotation.

    Science.gov (United States)

    Chudácek, Vacláv; Spilka, Jirí; Lhotská, Lenka; Janku, Petr; Koucký, Michal; Huptych, Michal; Bursa, Miroslav

    2011-01-01

Cardiotocography (CTG) is the monitoring of fetal heart rate (FHR) and uterine contractions (TOCO), used routinely by obstetricians since the 1960s to detect fetal hypoxia. The evaluation of the FHR in clinical settings is based on macroscopic morphological features and has so far managed to avoid adopting any achievements from the HRV research field. In this work, most of the features ever used for FHR characterization, including FIGO, HRV, nonlinear, wavelet, and time- and frequency-domain features, are investigated and assessed based on their statistical significance in the task of distinguishing the FHR into three FIGO classes. Annotation derived from a panel of experts, instead of the commonly utilized pH values, was used for evaluation of the features on a large data set (552 records). We conclude the paper by presenting the best uncorrelated features and their individual ranks of importance according to a meta-analysis of three different ranking methods. The number of accelerations and decelerations, the interval index, as well as Lempel-Ziv complexity and Higuchi's fractal dimension are among the top five features.

  20. Sleep Disrupts High-Level Speech Parsing Despite Significant Basic Auditory Processing.

    Science.gov (United States)

    Makov, Shiri; Sharon, Omer; Ding, Nai; Ben-Shachar, Michal; Nir, Yuval; Zion Golumbic, Elana

    2017-08-09

The extent to which the sleeping brain processes sensory information remains unclear. This is particularly true for continuous and complex stimuli such as speech, in which information is organized into hierarchically embedded structures. Recently, novel metrics for assessing the neural representation of continuous speech have been developed using noninvasive brain recordings, but they have thus far been tested only during wakefulness. Here we investigated, for the first time, the sleeping brain's capacity to process continuous speech at different hierarchical levels using a newly developed Concurrent Hierarchical Tracking (CHT) approach that allows monitoring of the neural representation and processing depth of continuous speech online. Speech sequences were compiled with syllables, words, phrases, and sentences occurring at fixed time intervals such that different linguistic levels correspond to distinct frequencies. This enabled us to distinguish their neural signatures in brain activity. We compared the neural tracking of intelligible versus unintelligible (scrambled and foreign) speech across states of wakefulness and sleep using high-density EEG in humans. We found that neural tracking of stimulus acoustics was comparable across wakefulness and sleep and similar across all conditions regardless of speech intelligibility. In contrast, neural tracking of higher-order linguistic constructs (words, phrases, and sentences) was observed only for intelligible speech during wakefulness and could not be detected at all during non-rapid eye movement or rapid eye movement sleep. These results suggest that, whereas low-level auditory processing is relatively preserved during sleep, higher-level hierarchical linguistic parsing is severely disrupted, thereby revealing the capacity and limits of language processing during sleep. SIGNIFICANCE STATEMENT Despite the persistence of some sensory processing during sleep, it is unclear whether high-level cognitive processes such as speech
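The frequency-tagging logic (each linguistic level recurs at its own fixed rate, so tracking of that level shows up as a spectral peak at the corresponding frequency) is easy to demonstrate on synthetic data. The rates, sampling parameters, and toy signal below are all illustrative, not the study's.

```python
import numpy as np

# If syllables occur at 4 Hz and words span two syllables, word-level
# parsing appears as a 2 Hz peak in the response spectrum.
fs, dur = 100, 60                          # sampling rate (Hz), duration (s)
t = np.arange(fs * dur) / fs
rng = np.random.default_rng(0)
eeg = (np.sin(2 * np.pi * 4 * t)           # syllable-rate response
       + 0.5 * np.sin(2 * np.pi * 2 * t)   # word-rate response (intelligible only)
       + rng.normal(size=t.size))          # background noise

spec = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)
for f0 in (2.0, 4.0):
    print(f"{f0} Hz power: {spec[np.argmin(abs(freqs - f0))]:.1f}")
```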

  1. Cross-Lingual Dependency Parsing with Late Decoding for Truly Low-Resource Languages

    OpenAIRE

    Schlichtkrull, Michael Sejr; Søgaard, Anders

    2017-01-01

    In cross-lingual dependency annotation projection, information is often lost during transfer because of early decoding. We present an end-to-end graph-based neural network dependency parser that can be trained to reproduce matrices of edge scores, which can be directly projected across word alignments. We show that our approach to cross-lingual dependency parsing is not only simpler, but also achieves an absolute improvement of 2.25% averaged across 10 languages compared to the previous state...

  2. Parsing Heterogeneous Striatal Activity

    Directory of Open Access Journals (Sweden)

    Kae Nakamura

    2017-05-01

The striatum is an input channel of the basal ganglia and is well known to be involved in reward-based decision making and learning. At the macroscopic level, the striatum has been postulated to contain parallel functional modules, each of which includes neurons that perform similar computations to support the selection of appropriate actions for different task contexts. At the single-neuron level, however, recent studies in monkeys and rodents have revealed heterogeneity in neuronal activity even within restricted modules of the striatum. Looking for generality in the complex striatal activity patterns, here we briefly survey several types of striatal activity, focusing on their usefulness for mediating behaviors. In particular, we focus on two types of behavioral tasks: reward-based tasks that use salient sensory cues and manipulate the outcomes associated with the cues; and perceptual decision tasks that manipulate the quality of noisy sensory cues and associate all correct decisions with the same outcome. Guided by previous insights on the modular organization and general selection-related functions of the basal ganglia, we relate striatal activity patterns in these tasks to two types of computations: implementation of selection and evaluation. We suggest that parsing with the selection/evaluation categories encourages a focus on the functional commonalities revealed by studies with different animal models and behavioral tasks, instead of a focus on aspects of striatal activity that may be specific to a particular task setting. We then highlight several questions in the selection-evaluation framework for future exploration.

3. The interaction of parsing rules and argument-predicate constructions: implications for the structure of the Grammaticon in FunGramKB

    Directory of Open Access Journals (Sweden)

    María del Carmen Fumero Pérez

    2017-07-01

The Functional Grammar Knowledge Base (FunGramKB; Periñán-Pascual and Arcas-Túnez 2010) is a multipurpose lexico-conceptual knowledge base designed to be used in different Natural Language Processing (NLP) tasks. It is complemented with ARTEMIS (Automatically Representing Text Meaning via an Interlingua-based System), a parsing device linguistically grounded in Role and Reference Grammar (RRG) that transduces natural language fragments into their corresponding grammatical and semantic structures. This paper unveils the different phases involved in its parsing routine, paying special attention to the treatment of argumental constructions. As an illustrative case, we follow all the steps necessary to effectively parse a For-Benefactive structure within ARTEMIS. This methodology reveals the necessity of distinguishing between Kernel constructs and L1-constructions, since the latter involve a modification of the lexical template of the verb. Our definition of L1-constructions leads to a reorganization of the catalogue of FunGramKB L1-constructions, formerly based on Levin's (1993) alternations. Accordingly, a rearrangement of the internal configuration of the L1-Constructicon within the Grammaticon is proposed.

  4. Perform wordcount Map-Reduce Job in Single Node Apache Hadoop cluster and compress data using Lempel-Ziv-Oberhumer (LZO) algorithm

    OpenAIRE

    Mirajkar, Nandan; Bhujbal, Sandeep; Deshmukh, Aaradhana

    2013-01-01

Applications like Yahoo, Facebook, and Twitter have huge amounts of data that must be stored and retrieved on client access. This huge data storage requires a huge database, leading to an increase in physical storage, and becomes complex to analyze as required for business growth. The storage requirement can be reduced, and distributed processing of huge data achieved, using Apache Hadoop, which uses the map-reduce algorithm and combines repeating data so that the entire data is stored in a reduced format. The paper ...
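The job itself is the canonical word count. One way to express it in Python is through Hadoop Streaming, sketched below; the LZO compression is configured on the Hadoop side (e.g., via the io.compression.codecs property) and is not visible in the mapper/reducer code. This is an illustrative sketch, not the paper's setup.

```python
# mapper.py: emit one (word, 1) pair per token
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")
```

```python
# reducer.py: input arrives sorted by key, so counts can be summed per run
import sys

current, count = None, 0
for line in sys.stdin:
    word, n = line.rstrip("\n").rsplit("\t", 1)
    if word != current:
        if current is not None:
            print(f"{current}\t{count}")
        current, count = word, 0
    count += int(n)
if current is not None:
    print(f"{current}\t{count}")
```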

  5. Deep Incremental Boosting

    OpenAIRE

    Mosca, Alan; Magoulas, George D

    2017-01-01

    This paper introduces Deep Incremental Boosting, a new technique derived from AdaBoost, specifically adapted to work with Deep Learning methods, that reduces the required training time and improves generalisation. We draw inspiration from Transfer of Learning approaches to reduce the start-up time to training each incremental Ensemble member. We show a set of experiments that outlines some preliminary results on some common Deep Learning datasets and discuss the potential improvements Deep In...

  6. High-content image informatics of the structural nuclear protein NuMA parses trajectories for stem/progenitor cell lineages and oncogenic transformation

    International Nuclear Information System (INIS)

    Vega, Sebastián L.; Liu, Er; Arvind, Varun; Bushman, Jared; Sung, Hak-Joon; Becker, Matthew L.; Lelièvre, Sophie; Kohn, Joachim; Vidi, Pierre-Alexandre; Moghe, Prabhas V.

    2017-01-01

Stem and progenitor cells that exhibit significant regenerative potential and critical roles in cancer initiation and progression remain difficult to characterize. Cell fates are determined by reciprocal signaling between the cell microenvironment and the nucleus; hence parameters derived from nuclear remodeling are ideal candidates for stem/progenitor cell characterization. Here we applied high-content, single-cell analysis of nuclear shape and organization to examine stem and progenitor cells destined for distinct differentiation endpoints, yet indistinguishable by conventional methods. Nuclear descriptors defined through image informatics classified mesenchymal stem cells poised for either adipogenic or osteogenic differentiation, as well as oligodendrocyte precursors isolated from different regions of the brain and destined for distinct astrocyte subtypes. Nuclear descriptors also revealed early changes in stem cells after chemical oncogenesis, allowing the identification of a class of cancer-mitigating biomaterials. To capture the metrology of nuclear changes, we developed a simple and quantitative “imaging-derived” parsing index, which reflects the dynamic evolution of the high-dimensional space of nuclear organizational features. A comparative analysis of parsing outcomes via either nuclear shape or textural metrics of the nuclear structural protein NuMA indicates that nuclear shape alone is a weak phenotypic predictor. In contrast, variations in the NuMA organization parsed emergent cell phenotypes and discerned emergent stages of stem cell transformation, supporting a prognosticating role for this protein in the outcomes of nuclear functions. Highlights: • High-content analysis of nuclear shape and organization classifies stem and progenitor cells poised for distinct lineages. • Early oncogenic changes in mesenchymal stem cells (MSCs) are also detected with nuclear descriptors. • A new class of cancer-mitigating biomaterials was identified based on image

  7. High-content image informatics of the structural nuclear protein NuMA parses trajectories for stem/progenitor cell lineages and oncogenic transformation

    Energy Technology Data Exchange (ETDEWEB)

    Vega, Sebastián L. [Department of Chemical and Biochemical Engineering, Rutgers University, Piscataway, NJ (United States); Liu, Er; Arvind, Varun [Department of Biomedical Engineering, Rutgers University, Piscataway, NJ (United States); Bushman, Jared [Department of Chemistry and Chemical Biology, New Jersey Center for Biomaterials, Piscataway, NJ (United States); School of Pharmacy, University of Wyoming, Laramie, WY (United States); Sung, Hak-Joon [Department of Chemistry and Chemical Biology, New Jersey Center for Biomaterials, Piscataway, NJ (United States); Department of Biomedical Engineering, Vanderbilt University, Nashville, TN (United States); Becker, Matthew L. [Department of Polymer Science and Engineering, University of Akron, Akron, OH (United States); Lelièvre, Sophie [Department of Basic Medical Sciences, Purdue University, West Lafayette, IN (United States); Kohn, Joachim [Department of Chemistry and Chemical Biology, New Jersey Center for Biomaterials, Piscataway, NJ (United States); Vidi, Pierre-Alexandre, E-mail: pvidi@wakehealth.edu [Department of Cancer Biology, Wake Forest School of Medicine, Winston-Salem, NC (United States); Moghe, Prabhas V., E-mail: moghe@rutgers.edu [Department of Chemical and Biochemical Engineering, Rutgers University, Piscataway, NJ (United States); Department of Biomedical Engineering, Rutgers University, Piscataway, NJ (United States)

    2017-02-01

Stem and progenitor cells that exhibit significant regenerative potential and critical roles in cancer initiation and progression remain difficult to characterize. Cell fates are determined by reciprocal signaling between the cell microenvironment and the nucleus; hence parameters derived from nuclear remodeling are ideal candidates for stem/progenitor cell characterization. Here we applied high-content, single-cell analysis of nuclear shape and organization to examine stem and progenitor cells destined for distinct differentiation endpoints, yet indistinguishable by conventional methods. Nuclear descriptors defined through image informatics classified mesenchymal stem cells poised for either adipogenic or osteogenic differentiation, as well as oligodendrocyte precursors isolated from different regions of the brain and destined for distinct astrocyte subtypes. Nuclear descriptors also revealed early changes in stem cells after chemical oncogenesis, allowing the identification of a class of cancer-mitigating biomaterials. To capture the metrology of nuclear changes, we developed a simple and quantitative “imaging-derived” parsing index, which reflects the dynamic evolution of the high-dimensional space of nuclear organizational features. A comparative analysis of parsing outcomes via either nuclear shape or textural metrics of the nuclear structural protein NuMA indicates that nuclear shape alone is a weak phenotypic predictor. In contrast, variations in the NuMA organization parsed emergent cell phenotypes and discerned emergent stages of stem cell transformation, supporting a prognosticating role for this protein in the outcomes of nuclear functions. Highlights: • High-content analysis of nuclear shape and organization classifies stem and progenitor cells poised for distinct lineages. • Early oncogenic changes in mesenchymal stem cells (MSCs) are also detected with nuclear descriptors. • A new class of cancer-mitigating biomaterials was identified based on image

  8. On excursion increments in heartbeat dynamics

    International Nuclear Information System (INIS)

    Guzmán-Vargas, L.; Reyes-Ramírez, I.; Hernández-Pérez, R.

    2013-01-01

We study the correlation properties of excursion increments of heartbeat time series from healthy subjects and heart failure patients. We construct the excursion time based on the original heartbeat time series, representing the time employed by the walker to return to the local mean value. Next, detrended fluctuation analysis and the fractal dimension method are applied to the magnitude and sign of the increments between successive excursions for the two groups. Our results show that, for the magnitude series of excursion increments, both groups display long-range correlations with similar correlation exponents, indicating that large (small) increments (decrements) are more likely to be followed by large (small) increments (decrements). For the sign sequences and for both groups, we find that increments are short-range anti-correlated, which is more noticeable under heart failure conditions.
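Of the two methods applied, detrended fluctuation analysis follows a fixed recipe: integrate the series, window it, remove a local linear trend, and read the scaling exponent off a log-log fit. A compact numpy sketch, with illustrative scale choices:

```python
import numpy as np

def dfa_exponent(x, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis; returns the scaling exponent alpha."""
    profile = np.cumsum(x - np.mean(x))          # integrated (profile) series
    flucts = []
    for s in scales:
        n_win = len(profile) // s
        f2 = []
        for w in range(n_win):
            seg = profile[w * s:(w + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
            f2.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

# white noise should give alpha close to 0.5
print(dfa_exponent(np.random.default_rng(1).normal(size=4096)))
```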

  9. The Parsing Syllable Envelopes Test for Assessment of Amplitude Modulation Discrimination Skills in Children: Development, Normative Data, and Test-Retest Reliability Studies.

    Science.gov (United States)

    Cameron, Sharon; Chong-White, Nicky; Mealings, Kiri; Beechey, Tim; Dillon, Harvey; Young, Taegan

    2018-02-01

Intensity peaks and valleys in the acoustic signal are salient cues to syllable structure, which is accepted to be a crucial early step in phonological processing. As such, the ability to detect low-rate (envelope) modulations in signal amplitude is essential to parse an incoming speech signal into smaller phonological units. The Parsing Syllable Envelopes (ParSE) test was developed to quantify the ability of children to recognize syllable boundaries using an amplitude-modulation detection paradigm. The envelope of a 750-msec steady-state /a/ vowel is modulated into two or three pseudo-syllables using notches with modulation depths varying between 0% and 100% along an 11-step continuum. In an adaptive three-alternative forced-choice procedure, the participant identified whether one, two, or three pseudo-syllables were heard. This article describes the development of the ParSE stimuli and test protocols and the collection of normative and test-retest reliability data. Eleven adults (aged 23 yr 10 mo to 50 yr 9 mo, mean 32 yr 10 mo) and 134 typically developing primary-school children (aged 6 yr 0 mo to 12 yr 4 mo, mean 9 yr 3 mo) participated; there were 73 males and 72 females. Data were collected using a touchscreen computer. Psychometric functions (PFs) were automatically fit to individual data by the ParSE software. Performance was related to the modulation depth at which syllables can be detected with 88% accuracy (referred to as the upper boundary of the uncertainty region [UBUR]). A shallower PF slope reflects a greater level of uncertainty. Age effects were determined based on raw scores, and z-scores were calculated to account for the effect of age on performance. Outliers, and individual data for which the confidence interval of the UBUR exceeded a maximum allowable value, were removed. Nonparametric tests were used as the data were skewed toward negative performance. Across participants, the performance criterion (UBUR) was met at a median modulation depth of 42%. The effect of age on the UBUR was

  10. Relative clauses as a benchmark for Minimalist parsing

    Directory of Open Access Journals (Sweden)

    Thomas Graf

    2017-07-01

Minimalist grammars have been used recently in a series of papers to explain well-known contrasts in human sentence processing in terms of subtle structural differences. These proposals combine a top-down parser with complexity metrics that relate parsing difficulty to memory usage. So far, though, there has been no large-scale exploration of the space of viable metrics. Building on this earlier work, we compare the ability of 1600 metrics to derive several processing effects observed with relative clauses, many of which have proven difficult to unify. We show that among those 1600 candidates, a few metrics (and only a few) can provide a unified account of all these contrasts. This is a welcome result for two reasons: first, it provides a novel account of extensively studied psycholinguistic data; second, it significantly limits the number of viable metrics that may be applied to other phenomena, thus reducing theoretical indeterminacy.

  11. Legislative Bargaining and Incremental Budgeting

    OpenAIRE

    Dhammika Dharmapala

    2002-01-01

The notion of 'incrementalism', formulated by Aaron Wildavsky in the 1960s, has been extremely influential in the public budgeting literature. In essence, it entails the claim that legislators engaged in budgetary policymaking accept past allocations and decide only on the allocation of increments to revenue. Wildavsky explained incrementalism with reference to the cognitive limitations of lawmakers and their desire to reduce conflict. This paper uses a legislative bargaining framework to u...

  12. Electroencephalography signatures of attention-deficit/hyperactivity disorder: clinical utility.

    Science.gov (United States)

    Alba, Guzmán; Pereda, Ernesto; Mañas, Soledad; Méndez, Leopoldo D; González, Almudena; González, Julián J

    2015-01-01

This work reviews the techniques for extracting different measures from the electroencephalogram (EEG), and the most important results obtained with them, that can be clinically useful for studying subjects with attention-deficit/hyperactivity disorder (ADHD). First, we discuss briefly and in simple terms the EEG analysis and processing techniques most used in the context of ADHD. We review techniques that analyze individual EEG channels (univariate measures) as well as those that study the statistical interdependence between different EEG channels (multivariate measures), the so-called functional brain connectivity. Among the former, we review the classical indices of absolute and relative spectral power and estimations of the complexity of the channels, such as approximate entropy and Lempel-Ziv complexity. Among the latter, we focus on the magnitude squared coherence and on different measures based on the concept of generalized synchronization and its estimation in state space. Second, from a historical point of view, we present the most important results achieved with these techniques and their clinical utility (sensitivity, specificity, and accuracy) in diagnosing ADHD. Finally, we propose future research lines based on these results.
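Among the multivariate measures mentioned, magnitude squared coherence is the most standard and is available directly in scipy. A small sketch on simulated two-channel data; the sampling rate and the shared 10 Hz component are illustrative.

```python
import numpy as np
from scipy.signal import coherence

fs = 256                                   # sampling rate in Hz
rng = np.random.default_rng(0)
t = np.arange(30 * fs) / fs
shared = np.sin(2 * np.pi * 10 * t)        # common 10 Hz alpha-band component
ch1 = shared + 0.8 * rng.normal(size=t.size)
ch2 = shared + 0.8 * rng.normal(size=t.size)

f, cxy = coherence(ch1, ch2, fs=fs, nperseg=512)
print(f"coherence at 10 Hz: {cxy[np.argmin(abs(f - 10))]:.2f}")
```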

  13. Unmanned Maritime Systems Incremental Acquisition Approach

    Science.gov (United States)

    2016-12-01

    MBA professional report by Thomas Driscoll, Lieutenant ... Approved for public release; distribution is unlimited. The purpose of this MBA report is to explore and understand the issues ...

  14. FEM Simulation of Incremental Shear

    International Nuclear Information System (INIS)

    Rosochowski, Andrzej; Olejnik, Lech

    2007-01-01

    A popular way of producing ultrafine grained metals on a laboratory scale is severe plastic deformation. This paper introduces a new severe plastic deformation process of incremental shear. A finite element method simulation is carried out for various tool geometries and process kinematics. It has been established that for the successful realisation of the process the inner radius of the channel as well as the feeding increment should be approximately 30% of the billet thickness. The angle at which the reciprocating die works the material can be 30 deg. When compared to equal channel angular pressing, incremental shear shows basic similarities in the mode of material flow and a few technological advantages which make it an attractive alternative to the known severe plastic deformation processes. The most promising characteristic of incremental shear is the possibility of processing very long billets in a continuous way, which makes the process more industrially relevant.

  15. High-content image informatics of the structural nuclear protein NuMA parses trajectories for stem/progenitor cell lineages and oncogenic transformation.

    Science.gov (United States)

    Vega, Sebastián L; Liu, Er; Arvind, Varun; Bushman, Jared; Sung, Hak-Joon; Becker, Matthew L; Lelièvre, Sophie; Kohn, Joachim; Vidi, Pierre-Alexandre; Moghe, Prabhas V

    2017-02-01

    Stem and progenitor cells that exhibit significant regenerative potential and critical roles in cancer initiation and progression remain difficult to characterize. Cell fates are determined by reciprocal signaling between the cell microenvironment and the nucleus; hence parameters derived from nuclear remodeling are ideal candidates for stem/progenitor cell characterization. Here we applied high-content, single cell analysis of nuclear shape and organization to examine stem and progenitor cells destined to distinct differentiation endpoints, yet indistinguishable by conventional methods. Nuclear descriptors defined through image informatics classified mesenchymal stem cells poised to either adipogenic or osteogenic differentiation, and oligodendrocyte precursors isolated from different regions of the brain and destined to distinct astrocyte subtypes. Nuclear descriptors also revealed early changes in stem cells after chemical oncogenesis, allowing the identification of a class of cancer-mitigating biomaterials. To capture the metrology of nuclear changes, we developed a simple and quantitative "imaging-derived" parsing index, which reflects the dynamic evolution of the high-dimensional space of nuclear organizational features. A comparative analysis of parsing outcomes via either nuclear shape or textural metrics of the nuclear structural protein NuMA indicates that nuclear shape alone is a weak phenotypic predictor. In contrast, variations in the NuMA organization parsed emergent cell phenotypes and discerned emergent stages of stem cell transformation, supporting a prognosticating role for this protein in the outcomes of nuclear functions. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Guide to LIBXSIF, a Library for Parsing the Extended Standard Input Format of Accelerator Beamlines (LCC-0060)

    International Nuclear Information System (INIS)

    Tenenbaum, P

    2003-01-01

    We describe LIBXSIF, a standalone library for parsing the Extended Standard Input Format of accelerator beamlines. Included in the description are: documentation of user commands; full description of permitted accelerator elements and their attributes; the construction of beamline lists; the mechanics of adding LIBXSIF to an existing program; and "under the hood" details for users who wish to modify the library or are merely morbidly curious

  17. Parsing in a Dynamical System: An Attractor-Based Account of the Interaction of Lexical and Structural Constraints in Sentence Processing.

    Science.gov (United States)

    Tabor, Whitney; And Others

    1997-01-01

    Proposes a dynamical systems approach to parsing in which syntactic hypotheses are associated with attractors in a metric space. The experiments discussed documented various contingent frequency effects that cut across traditional linguistic grains, each of which was predicted by the dynamical systems model. (47 references) (Author/CK)

  18. On conditional scalar increment and joint velocity-scalar increment statistics

    International Nuclear Information System (INIS)

    Zhang Hengbin; Wang Danhong; Tong Chenning

    2004-01-01

    Conditional velocity and scalar increment statistics are usually studied in the context of Kolmogorov's refined similarity hypotheses and are considered universal (quasi-Gaussian) for inertial-range separations. In such analyses the locally averaged energy and scalar dissipation rates are used as conditioning variables. Recent studies have shown that certain local turbulence structures can be captured when the local scalar variance (φ²)_r and the local kinetic energy k_r are used as the conditioning variables. We study the conditional increments using these conditioning variables, which also provide the local turbulence scales. Experimental data obtained in the fully developed region of an axisymmetric turbulent jet are used to compute the statistics. The conditional scalar increment probability density function (PDF) conditional on (φ²)_r is found to be close to Gaussian for (φ²)_r small compared with its mean, and is sub-Gaussian and bimodal for large (φ²)_r, and therefore is not universal. We find that the different shapes of the conditional PDFs are related to the instantaneous degree of non-equilibrium (production larger than dissipation) of the local scalar. There is further evidence of this from the conditional PDF conditional on both (φ²)_r and χ_r, which is largely a function of (φ²)_r/χ_r, a measure of the degree of non-equilibrium. The velocity-scalar increment joint PDF is close to joint Gaussian and quad-modal for equilibrium and non-equilibrium local velocity and scalar, respectively. The latter shape is associated with a combination of the ramp-cliff and plane strain structures. Kolmogorov's refined similarity hypotheses also predict a dependence of the conditional PDF on the degree of non-equilibrium. Therefore, the quasi-Gaussian (joint) PDF, previously observed in the context of Kolmogorov's refined similarity hypotheses, is only one of the conditional PDF shapes of inertial range turbulence. The present study suggests that ...

  19. BAIK– PROGRAMMING LANGUAGE BASED ON INDONESIAN LEXICAL PARSING FOR MULTITIER WEB DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Haris Hasanudin

    2012-05-01

    Full Text Available Business software development with global teams is increasing rapidly, and the programming language as a development tool plays an important role in global web development. A truly user-friendly programming language should be written in the local language of programmers whose native language is not English. This paper presents our design of the BAIK (Bahasa Anak Indonesia untuk Komputer) scripting language, whose syntax is modeled on Bahasa Indonesia, for multitier web development. We propose an implementation of an Indonesian parsing engine and a binary search tree structure for memory allocation of variables, and compose language features that support basic object-oriented programming, the Common Gateway Interface, HTML style manipulation and database connection. Our goal is to build a real programming language with a simple structural design for web development using Indonesian lexical words.

  20. Parsing (malicious) pleasures: Schadenfreude and gloating at others’ adversity

    Directory of Open Access Journals (Sweden)

    Colin Wayne Leach

    2015-02-01

    Full Text Available We offer the first empirical comparison of the pleasure in seeing (i.e., schadenfreude) and in causing (i.e., gloating) others’ adversity. In Study 1, we asked participants to recall and report on an (individual or group) episode of pleasure that conformed to our formal definition of schadenfreude, gloating, pride, or joy, without reference to an emotion word. Schadenfreude and gloating were distinct in the situational features of the episode, participants’ appraisals of it, and their expressions of pleasure (e.g., smiling, boasting). In Study 2, we had participants imagine being in an (individual or group) emotion episode designed to fit our conceptualization of schadenfreude or gloating. Individual and group versions of the emotions did not differ much in either study. However, the two pleasures differed greatly in their situational features, appraisals, experience, and expression. This parsing of the particular pleasures of schadenfreude and gloating brings nuance to the study of (malicious) pleasure, which tends to be less finely conceptualized and examined than displeasure despite its importance to social relations.

  1. Mechanisms for interaction: Syntax as procedures for online interactive meaning building.

    Science.gov (United States)

    Kempson, Ruth; Chatzikyriakidis, Stergios; Cann, Ronnie

    2016-01-01

    We argue that to reflect participant interactivity in conversational dialogue, the Christiansen & Chater (C&C) perspective needs a formal grammar framework capturing word-by-word incrementality, as in Dynamic Syntax, in which syntax is the incremental building of semantic representations reflecting real-time parsing dynamics. We demonstrate that, with such a formulation, syntactic, semantic, and morpho-syntactic dependencies are all analysable as grounded in their potential for interaction.

  2. 48 CFR 3432.771 - Provision for incremental funding.

    Science.gov (United States)

    2010-10-01

    48 CFR 3432.771, Federal Acquisition Regulations System, Department of Education: directs use of the provision titled Incremental Funding in a solicitation if a cost-reimbursement contract using incremental funding is ...

  3. GFFview: A Web Server for Parsing and Visualizing Annotation Information of Eukaryotic Genome.

    Science.gov (United States)

    Deng, Feilong; Chen, Shi-Yi; Wu, Zhou-Lin; Hu, Yongsong; Jia, Xianbo; Lai, Song-Jia

    2017-10-01

    Owing to the wide application of RNA sequencing (RNA-seq) technology, more and more eukaryotic genomes have been extensively annotated, covering, for example, gene structure, alternative splicing, and noncoding loci. Genome annotation information is prevalently stored as plain text in the General Feature Format (GFF), which can be hundreds or thousands of Mb in size. Manipulating GFF files is therefore a challenge for biologists without bioinformatics skills. In this study, we provide a web server (GFFview) for parsing the annotation information of eukaryotic genomes and then generating statistical descriptions of six indices for visualization. GFFview is very useful for investigating the quality and differences of de novo assembled transcriptomes in RNA-seq studies.
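
    The GFF format itself is simple enough that the parsing task the server automates can be sketched in a few lines. The Python snippet below tallies feature types from the standard nine tab-separated GFF3 columns; it is an illustrative stand-in for one of GFFview's indices, not the server's actual code, and the file name is hypothetical.

        from collections import Counter

        def summarize_gff(path: str) -> Counter:
            """Tally feature types (column 3) of a GFF3 file, skipping comments."""
            counts = Counter()
            with open(path) as fh:
                for line in fh:
                    if line.startswith('#') or not line.strip():
                        continue
                    cols = line.rstrip('\n').split('\t')
                    if len(cols) == 9:        # seqid, source, type, start, end,
                        counts[cols[2]] += 1  # score, strand, phase, attributes
            return counts

        # Hypothetical usage:
        # print(summarize_gff('annotation.gff3').most_common(10))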

  4. Increment memory module for spectrometric data recording

    International Nuclear Information System (INIS)

    Zhuchkov, A.A.; Myagkikh, A.I.

    1988-01-01

    An incremental memory unit designed to record differential energy spectra of nuclear radiation is described. Using a ROM as the incrementing device has made it possible to reduce the number of elements and to simplify information readout from the unit. The memory is organized as 2048 channels of 12 bits. The device connects directly to the bus of microprocessor systems similar to the KR 580. The maximal incrementation time is 3 μs. It is possible to use this unit in multichannel counting mode

  5. Small Diameter Bomb Increment II (SDB II)

    Science.gov (United States)

    2015-12-01

    Selected Acquisition Report (SAR), RCS: DD-A&T(Q&A)823-439, as of the FY 2017 President's Budget. DoD Component: Air Force; Joint Participants: Department of the Navy. Mission and Description: Small Diameter Bomb Increment II (SDB II) is a joint interest United States Air Force (USAF) and Department of the Navy ...

  6. Planning Through Incrementalism

    Science.gov (United States)

    Lasserre, Ph.

    1974-01-01

    An incremental model of decisionmaking is discussed and compared with the Comprehensive Rational Approach. A model of reconciliation between the two approaches is proposed, and examples are given in the field of economic development and educational planning. (Author/DN)

  7. Lossless medical image compression with a hybrid coder

    Science.gov (United States)

    Way, Jing-Dar; Cheng, Po-Yuen

    1998-10-01

    The volume of medical image data is expected to increase dramatically in the next decade due to the large use of radiological images for medical diagnosis. The economics of distributing medical images dictate that data compression is essential. While lossy image compression exists, medical images must be recorded and transmitted losslessly before they reach users, to avoid wrong diagnoses due to lost image data. Therefore, a low-complexity, high-performance lossless compression scheme that can approach the theoretical bound and operate in near real-time is needed. In this paper, we propose a hybrid image coder to compress digitized medical images without any data loss. The hybrid coder consists of two key components: an embedded wavelet coder and a lossless run-length coder. In this system, the medical image is first compressed with the lossy wavelet coder, and the residual image between the original and the compressed one is further compressed with the run-length coder. Several optimization schemes have been used in these coders to increase the coding performance. It is shown that the proposed algorithm achieves a higher compression ratio than run-length and entropy coders such as arithmetic, Huffman and Lempel-Ziv coders.
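
    The lossy-plus-residual structure of the coder can be sketched compactly. In the toy Python below, coarse quantization stands in for the paper's embedded wavelet coder and a bare run-length encoder stands in for its optimized lossless stage; only the structure (lossy stage, residual, lossless stage, exact reconstruction) mirrors the described system.

        import numpy as np

        def rle(values):
            """Run-length encode a 1-D sequence as (value, run_length) pairs."""
            out, prev, run = [], values[0], 1
            for v in values[1:]:
                if v == prev:
                    run += 1
                else:
                    out.append((int(prev), run))
                    prev, run = v, 1
            out.append((int(prev), run))
            return out

        rng = np.random.default_rng(1)
        image = rng.integers(0, 256, size=64).astype(np.int16)  # stand-in scanline
        lossy = (image // 8) * 8       # stand-in lossy stage: coarse quantization
        residual = image - lossy       # small-amplitude residual (here 0..7)
        encoded = rle(residual)        # lossless stage applied to the residual
        decoded = np.array([v for v, r in encoded for _ in range(r)], dtype=np.int16)
        assert np.array_equal(lossy + decoded, image)  # reconstruction is exact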

  8. Empirical and Statistical Evaluation of the Effectiveness of Four Lossless Data Compression Algorithms

    Directory of Open Access Journals (Sweden)

    N. A. Azeez

    2017-04-01

    Full Text Available Data compression is the process of reducing the size of a file to effectively reduce storage space and communication cost. The evolution of technology and the digital age have led to an unparalleled usage of digital files in the current decade. This usage has resulted in an increase in the amount of data being transmitted via various channels of data communication, which has prompted the need to look into the current lossless data compression algorithms to check their level of effectiveness so as to maximally reduce the bandwidth requirement in communication and transfer of data. Four lossless data compression algorithms were selected for implementation: the Lempel-Ziv-Welch algorithm, the Shannon-Fano algorithm, the adaptive Huffman algorithm and run-length encoding. The choice of these algorithms was based on their similarities, particularly in application areas. Their levels of efficiency and effectiveness were evaluated using a set of predefined performance evaluation metrics, namely compression ratio, compression factor, compression time, saving percentage, entropy and code efficiency. The algorithms were implemented in the NetBeans Integrated Development Environment using Java as the programming language. Through the statistical analysis performed using Boxplot and ANOVA and the comparison made on the four algorithms ...
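
    The evaluation metrics named here are simple ratios of input and output sizes. The Python sketch below computes them with zlib (a DEFLATE codec combining LZ77 and Huffman coding) standing in for the study's Java implementations; note that some authors define compression ratio as the inverse of the convention used here.

        import zlib

        def compression_metrics(data: bytes, codec=zlib.compress) -> dict:
            """Size-based metrics for one codec run over one input."""
            compressed = codec(data)
            ratio = len(compressed) / len(data)    # compression ratio
            factor = len(data) / len(compressed)   # compression factor
            saving = 1.0 - ratio                   # saving percentage
            return {'ratio': ratio, 'factor': factor, 'saving %': 100 * saving}

        print(compression_metrics(b'abracadabra ' * 400))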

  9. Three perspectives on complexity: entropy, compression, subsymmetry

    Science.gov (United States)

    Nagaraj, Nithin; Balasubramanian, Karthi

    2017-12-01

    There is no single universally accepted definition of 'Complexity'. There are several perspectives on complexity and on what constitutes complex behaviour or complex systems, as opposed to regular, predictable behaviour and simple systems. In this paper, we explore the following perspectives on complexity: effort-to-describe (Shannon entropy H, Lempel-Ziv complexity LZ), effort-to-compress (ETC complexity) and degree-of-order (Subsymmetry or SubSym). While Shannon entropy and LZ are very popular and widely used, ETC is a relatively new complexity measure. In this paper, we also propose a novel normalized complexity measure SubSym based on the existing idea of counting the number of subsymmetries or palindromes within a sequence. We compare the performance of these complexity measures on the following tasks: (A) characterizing the complexity of short binary sequences of lengths 4 to 16, (B) distinguishing periodic and chaotic time series from the 1D logistic map and the 2D Hénon map, (C) analyzing the complexity of stochastic time series generated from 2-state Markov chains, and (D) distinguishing between tonic and irregular spiking patterns generated from the 'Adaptive exponential integrate-and-fire' neuron model. Our study reveals that each perspective has its own advantages and uniqueness while also having an overlap with each other.
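
    The subsymmetry count behind SubSym can be illustrated directly. The Python sketch below counts palindromic substrings of length at least two by expanding around each center and normalizes by the count for a constant string of the same length; the length cutoff and the normalization are assumptions made here for illustration, not necessarily the paper's exact definition.

        def subsym_count(s: str) -> int:
            """Count palindromic substrings (subsymmetries) of length >= 2."""
            n, count = len(s), 0
            for center in range(2 * n - 1):
                l = center // 2
                r = l + center % 2
                if l == r:            # odd lengths: skip the trivial length-1 core
                    l, r = l - 1, r + 1
                while l >= 0 and r < n and s[l] == s[r]:
                    count += 1
                    l, r = l - 1, r + 1
            return count

        def subsym_normalized(s: str) -> float:
            """Normalize by the maximum count, attained by a constant string."""
            return subsym_count(s) / subsym_count('0' * len(s))

        print(subsym_normalized('00000000'))  # 1.0: maximally ordered
        print(subsym_normalized('01101001'))  # lower: fewer subsymmetries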

  10. FDTD Stability: Critical Time Increment

    OpenAIRE

    Z. Skvor; L. Pauk

    2003-01-01

    A new approach suitable for determination of the maximal stable time increment for the Finite-Difference Time-Domain (FDTD) algorithm in common curvilinear coordinates, for general mesh shapes and certain types of boundaries is presented. The maximal time increment corresponds to a characteristic value of a Helmholtz equation that is solved by a finite-difference (FD) method. If this method uses exactly the same discretization as the given FDTD method (same mesh, boundary conditions, order of ...

  11. Incremental Visualizer for Visible Objects

    DEFF Research Database (Denmark)

    Bukauskas, Linas; Bøhlen, Michael Hanspeter

    This paper discusses the integration of a database back-end and a visualizer front-end into one tightly coupled system. The main aim, which we achieve, is to shorten the data pipeline from database to visualization by using incremental data extraction of visible objects in fly-through scenarios. We also argue that passing only relevant data from the database will substantially reduce the overall load of the visualization system. We propose the system Incremental Visualizer for Visible Objects (IVVO), which considers visible objects and enables incremental visualization along the observer movement path. IVVO is a novel solution which allows data to be visualized and loaded on the fly from the database and which regards visibilities of objects. We run a set of experiments to show that IVVO is feasible in terms of I/O operations and CPU load. We consider the example of data which uses ...

  12. “Less of the Heroine than the Woman”: Parsing Gender in the British Novel

    Directory of Open Access Journals (Sweden)

    Susan Carlile

    2017-06-01

    Full Text Available This essay offers two methods that will help students resist the temptation to judge eighteenth-century novels by twenty-first-century standards. These methods prompt students to parse the question of whether female protagonists in novels, in this case Daniel Defoe’s Roxana (1724), Samuel Johnson’s Rasselas (1759), and Charlotte Lennox’s Sophia (1762), are portrayed as perfect models or as complex humans. The first method asks them to engage with definitions of the term “heroine,” and the second method uses word clouds to extend their thinking about the complexity of embodying a mid-eighteenth-century female identity.

  13. Pseudocode Interpreter (Pseudocode Integrated Development Environment with Lexical Analyzer and Syntax Analyzer using Recursive Descent Parsing Algorithm

    Directory of Open Access Journals (Sweden)

    Christian Lester D. Gimeno

    2017-11-01

    Full Text Available This research study focused on the development of software that helps students design, write, validate and run their pseudocode in a semi-Integrated Development Environment (IDE) instead of manually writing it on a piece of paper. Specifically, the study aimed to develop a lexical analyzer (lexer), a syntax analyzer (parser) using a recursive descent parsing algorithm, and an interpreter. The lexical analyzer reads the pseudocode source as a sequence of symbols or characters grouped into lexemes. The lexemes are analyzed by the lexer, which matches patterns for valid tokens and passes them to the syntax analyzer. The syntax analyzer takes those valid tokens and builds meaningful commands, using the recursive descent parsing algorithm, in the form of an abstract syntax tree. The generation of the abstract syntax tree is based on the grammar rules created by the researcher, expressed in Extended Backus-Naur Form. The interpreter takes the generated abstract syntax tree and evaluates it to produce the pseudocode output. The software was evaluated using white-box testing by several ICT professionals and black-box testing by several computer science students, based on the International Organization for Standardization (ISO) 9126 software quality standards. The overall results of the evaluation, both white-box and black-box, were described as "excellent in terms of functionality, reliability, usability, efficiency, maintainability and portability".
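
    The lexer-parser-interpreter pipeline described above can be miniaturized. The Python sketch below applies recursive descent, one function per grammar rule, to a toy arithmetic grammar stated in EBNF; it mirrors the structure of the described system, not its pseudocode language.

        import re

        # Toy grammar in EBNF:
        #   expr   = term   { ("+" | "-") term } ;
        #   term   = factor { ("*" | "/") factor } ;
        #   factor = NUMBER | "(" expr ")" ;
        TOKEN = re.compile(r'\s*(?:(\d+\.?\d*)|(.))')

        def tokenize(src):
            """Lexer: split source text into number and operator tokens."""
            return [num or op for num, op in TOKEN.findall(src)]

        class Parser:
            def __init__(self, tokens):
                self.toks, self.pos = tokens, 0

            def peek(self):
                return self.toks[self.pos] if self.pos < len(self.toks) else None

            def eat(self, expected=None):
                tok = self.peek()
                if expected is not None and tok != expected:
                    raise SyntaxError(f'expected {expected!r}, got {tok!r}')
                self.pos += 1
                return tok

            def expr(self):                  # one method per grammar rule
                value = self.term()
                while self.peek() in ('+', '-'):
                    op = self.eat()
                    value = value + self.term() if op == '+' else value - self.term()
                return value

            def term(self):
                value = self.factor()
                while self.peek() in ('*', '/'):
                    op = self.eat()
                    value = value * self.factor() if op == '*' else value / self.factor()
                return value

            def factor(self):
                if self.peek() == '(':
                    self.eat('(')
                    value = self.expr()
                    self.eat(')')
                    return value
                return float(self.eat())

        print(Parser(tokenize('2 * (3 + 4) - 5')).expr())  # 9.0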

  14. Power variation for Gaussian processes with stationary increments

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Corcuera, J.M.; Podolskij, Mark

    2009-01-01

    We develop the asymptotic theory for the realised power variation of the processes X=•G, where G is a Gaussian process with stationary increments. More specifically, under some mild assumptions on the variance function of the increments of G and certain regularity conditions on the path of the process, ... a chaos representation.

  15. Incremental passivity and output regulation for switched nonlinear systems

    Science.gov (United States)

    Pang, Hongbo; Zhao, Jun

    2017-10-01

    This paper studies incremental passivity and global output regulation for switched nonlinear systems, whose subsystems are not required to be incrementally passive. A concept of incremental passivity for switched systems is put forward. First, a switched system is rendered incrementally passive by the design of a state-dependent switching law. Second, the feedback incremental passification is achieved by the design of a state-dependent switching law and a set of state feedback controllers. Finally, we show that once the incremental passivity for switched nonlinear systems is assured, the output regulation problem is solved by the design of global nonlinear regulator controllers comprising two components: the steady-state control and the linear output feedback stabilising controllers, even though the problem for none of subsystems is solvable. Two examples are presented to illustrate the effectiveness of the proposed approach.

  16. Theory of Single Point Incremental Forming

    DEFF Research Database (Denmark)

    Martins, P.A.F.; Bay, Niels; Skjødt, Martin

    2008-01-01

    This paper presents a closed-form theoretical analysis modelling the fundamentals of single point incremental forming and explaining the experimental and numerical results available in the literature for the past couple of years. The model is based on membrane analysis with bi-directional in-plane contact friction and is focused on the extreme modes of deformation that are likely to be found in single point incremental forming processes. The overall investigation is supported by experimental work performed by the authors and data retrieved from the literature.

  17. Efficiency of Oral Incremental Rehearsal versus Written Incremental Rehearsal on Students' Rate, Retention, and Generalization of Spelling Words

    Science.gov (United States)

    Garcia, Dru; Joseph, Laurice M.; Alber-Morgan, Sheila; Konrad, Moira

    2014-01-01

    The purpose of this study was to examine the efficiency of an incremental rehearsal oral versus an incremental rehearsal written procedure on a sample of primary grade children's weekly spelling performance. Participants included five second and one first grader who were in need of help with their spelling according to their teachers. An…

  18. Performance Evaluation of Incremental K-means Clustering Algorithm

    OpenAIRE

    Chakraborty, Sanjay; Nagwani, N. K.

    2014-01-01

    The incremental K-means clustering algorithm has already been proposed and analysed in paper [Chakraborty and Nagwani, 2011]. It is a very innovative approach which is applicable in periodically incremental environment and dealing with a bulk of updates. In this paper the performance evaluation is done for this incremental K-means clustering algorithm using air pollution database. This paper also describes the comparison on the performance evaluations between existing K-means clustering and i...

  19. The effect of recognizability on figure-ground processing: does it affect parsing or only figure selection?

    Science.gov (United States)

    Navon, David

    2011-03-01

    Though figure-ground assignment has been shown to be probably affected by recognizability, it appears sensible that object recognition must follow at least the earlier process of figure-ground segregation. To examine whether or not rudimentary object recognition could, counterintuitively, start even before the completion of the stage of parsing in which figure-ground segregation is done, participants were asked to respond, in a go/no-go fashion, whenever any of 16 alternative connected patterns (that constituted familiar stimuli in the upright orientation) appeared. The white figure of the to-be-attended stimulus (target or foil) could be segregated from the white ambient ground only by means of a frame surrounding it. Such a frame was absent until the onset of target display. Then, to manipulate organizational quality, the greyness of the frame was either gradually increased from zero (in Experiment 1) or changed abruptly to a stationary level whose greyness was varied between trials (in Experiments 2 and 3). Stimulus recognizability was manipulated by orientation angle. In all three experiments the effect of recognizability was found to be considerably larger when organizational quality was minimal due to an extremely faint frame. This result is argued to be incompatible with any version of a serial thesis suggesting that processing aimed at object recognition starts only with a good enough level of organizational quality. The experiments rather provide some support to the claim, termed here "early interaction hypothesis", positing interaction between early recognition processing and preassignment parsing processes.

  20. Quantum independent increment processes

    CERN Document Server

    Franz, Uwe

    2005-01-01

    This volume is the first of two volumes containing the revised and completed notes of lectures given at the school "Quantum Independent Increment Processes: Structure and Applications to Physics". This school was held at the Alfried-Krupp-Wissenschaftskolleg in Greifswald during the period March 9-22, 2003, and supported by the Volkswagen Foundation. The school gave an introduction to current research on quantum independent increment processes aimed at graduate students and non-specialists working in classical and quantum probability, operator algebras, and mathematical physics. The present first volume contains the following lectures: "Lévy Processes in Euclidean Spaces and Groups" by David Applebaum, "Locally Compact Quantum Groups" by Johan Kustermans, "Quantum Stochastic Analysis" by J. Martin Lindsay, and "Dilations, Cocycles and Product Systems" by B.V. Rajarama Bhat.

  1. Incremental short daily home hemodialysis: a case series.

    Science.gov (United States)

    Toth-Manikowski, Stephanie M; Mullangi, Surekha; Hwang, Seungyoung; Shafi, Tariq

    2017-07-05

    Patients starting dialysis often have substantial residual kidney function. Incremental hemodialysis provides a hemodialysis prescription that supplements patients' residual kidney function while maintaining total (residual + dialysis) urea clearance (standard Kt/Vurea) targets. We describe our experience with incremental hemodialysis in patients using NxStage System One for home hemodialysis. From 2011 to 2015, we initiated 5 incident hemodialysis patients on an incremental home hemodialysis regimen. The biochemical parameters of all patients remained stable on the incremental hemodialysis regimen and they consistently achieved standard Kt/Vurea targets. Of the two patients with follow-up >6 months, residual kidney function was preserved for ≥2 years. Importantly, the patients were able to transition to home hemodialysis without automatically requiring 5 sessions per week at the outset and gradually increased the number of treatments and/or dialysate volume as the residual kidney function declined. An incremental home hemodialysis regimen can be safely prescribed and may improve acceptability of home hemodialysis. Reducing hemodialysis frequency by even one treatment per week can reduce the number of fistula or graft cannulations or catheter connections by >100 per year, an important consideration for patient well-being, access longevity, and access-related infections. The incremental hemodialysis approach, supported by national guidelines, can be considered for all home hemodialysis patients with residual kidney function.

  2. Quantum independent increment processes

    CERN Document Server

    Franz, Uwe

    2006-01-01

    This is the second of two volumes containing the revised and completed notes of lectures given at the school "Quantum Independent Increment Processes: Structure and Applications to Physics". This school was held at the Alfried-Krupp-Wissenschaftskolleg in Greifswald in March, 2003, and supported by the Volkswagen Foundation. The school gave an introduction to current research on quantum independent increment processes aimed at graduate students and non-specialists working in classical and quantum probability, operator algebras, and mathematical physics. The present second volume contains the following lectures: "Random Walks on Finite Quantum Groups" by Uwe Franz and Rolf Gohm, "Quantum Markov Processes and Applications in Physics" by Burkhard Kümmerer, Classical and Free Infinite Divisibility and Lévy Processes" by Ole E. Barndorff-Nielsen, Steen Thorbjornsen, and "Lévy Processes on Quantum Groups and Dual Groups" by Uwe Franz.

  3. Growth increments in teeth of Diictodon (Therapsida

    Directory of Open Access Journals (Sweden)

    J. Francis Thackeray

    1991-09-01

    Full Text Available Growth increments circa 0.02 mm in width have been observed in sectioned tusks of Diictodon from the Late Permian lower Beaufort succession of the South African Karoo, dated between about 260 and 245 million years ago. Mean growth increments show a decline from relatively high values in the Tropidostoma/Endothiodon Assemblage Zone, to lower values in the Aulacephalodon/Cistecephalus zone, declining still further in the Dicynodon lacerticeps/Whaitsia zone at the end of the Permian. These changes coincide with gradual changes in carbon isotope ratios measured from Diictodon tooth apatite. It is suggested that the decline in growth increments is related to environmental changes associated with a decline in primary production, which contributed to the decline in abundance and ultimate extinction of Diictodon.

  4. Incremental Integrity Checking: Limitations and Possibilities

    DEFF Research Database (Denmark)

    Christiansen, Henning; Martinenghi, Davide

    2005-01-01

    Integrity checking is an essential means for the preservation of the intended semantics of a deductive database. Incrementality is the only feasible approach to checking and can be obtained with respect to given update patterns by exploiting query optimization techniques. By reducing the problem to query containment, we show that no procedure exists that always returns the best incremental test (aka simplification of integrity constraints) under any reasonable criterion measuring the checking effort. In spite of this theoretical limitation, we develop an effective procedure ...

  5. 21 CFR 874.1070 - Short increment sensitivity index (SISI) adapter.

    Science.gov (United States)

    2010-04-01

    21 CFR 874.1070, Food and Drug Administration, Department of Health and Human Services: Short increment sensitivity index (SISI) adapter. (a) Identification. A short increment sensitivity index (SISI) ...

  6. Thermo-msf-parser: an open source Java library to parse and visualize Thermo Proteome Discoverer msf files.

    Science.gov (United States)

    Colaert, Niklaas; Barsnes, Harald; Vaudel, Marc; Helsens, Kenny; Timmerman, Evy; Sickmann, Albert; Gevaert, Kris; Martens, Lennart

    2011-08-05

    The Thermo Proteome Discoverer program integrates both peptide identification and quantification into a single workflow for peptide-centric proteomics. Furthermore, its close integration with Thermo mass spectrometers has made it increasingly popular in the field. Here, we present a Java library to parse the msf files that constitute the output of Proteome Discoverer. The parser is also implemented as a graphical user interface allowing convenient access to the information found in the msf files, and in Rover, a program to analyze and validate quantitative proteomics information. All code, binaries, and documentation are freely available at http://thermo-msf-parser.googlecode.com.

  7. A Syntactic-Semantic Approach to Incremental Verification

    OpenAIRE

    Bianculli, Domenico; Filieri, Antonio; Ghezzi, Carlo; Mandrioli, Dino

    2013-01-01

    Software verification of evolving systems is challenging mainstream methodologies and tools. Formal verification techniques often conflict with the time constraints imposed by change management practices for evolving systems. Since changes in these systems are often local to restricted parts, an incremental verification approach could be beneficial. This paper introduces SiDECAR, a general framework for the definition of verification procedures, which are made incremental by the framework...

  8. Grammar-Based Specification and Parsing of Binary File Formats

    Directory of Open Access Journals (Sweden)

    William Underwood

    2012-03-01

    Full Text Available The capability to validate and view or play binary file formats, as well as to convert binary file formats to standard or current file formats, is critically important to the preservation of digital data and records. This paper describes the extension of context-free grammars from strings to binary files. Binary files are arrays of data types, such as long and short integers, floating-point numbers and pointers, as well as characters. The concept of an attribute grammar is extended to these context-free array grammars. This attribute grammar has been used to define a number of chunk-based and directory-based binary file formats. A parser generator has been used with some of these grammars to generate syntax checkers (recognizers) for validating binary file formats. Among the potential benefits of an attribute grammar-based approach to specification and parsing of binary file formats is that attribute grammars not only support format validation, but also support generation of error messages during validation of format, validation of semantic constraints, attribute value extraction (characterization), generation of viewers or players for file formats, and conversion to current or standard file formats. The significance of these results is that, with these extensions to core computer science concepts, traditional parser/compiler technologies can potentially be used as part of a general, cost-effective curation strategy for binary file formats.
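
    A hand-written reader for one chunk-based production conveys what such generated parsers do. The Python sketch below parses a RIFF-style chunk sequence (4-byte tag, 4-byte little-endian length, payload); the grammar comment and the two-chunk example are illustrative assumptions, not a format defined in the paper.

        import io
        import struct

        def read_chunks(stream):
            """Parse a chunk sequence, the analogue of one array-grammar rule:
                chunk = tag:char[4]  length:uint32le  payload:byte[length] ;
            """
            while header := stream.read(8):
                if len(header) < 8:
                    raise ValueError('truncated chunk header')
                tag, length = struct.unpack('<4sI', header)
                payload = stream.read(length)
                if len(payload) < length:
                    raise ValueError(f'chunk {tag!r}: payload truncated')
                yield tag.decode('ascii'), payload

        # Hypothetical two-chunk file:
        blob = (struct.pack('<4sI', b'HDR ', 4) + b'\x01\x00\x00\x00'
                + struct.pack('<4sI', b'DATA', 3) + b'abc')
        for tag, payload in read_chunks(io.BytesIO(blob)):
            print(tag, payload)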

  9. Shredder: GPU-Accelerated Incremental Storage and Computation

    OpenAIRE

    Bhatotia, Pramod; Rodrigues, Rodrigo; Verma, Akshat

    2012-01-01

    Redundancy elimination using data deduplication and incremental data processing has emerged as an important technique to minimize storage and computation requirements in data center computing. In this paper, we present the design, implementation and evaluation of Shredder, a high performance content-based chunking framework for supporting incremental storage and computation systems. Shredder exploits the massively parallel processing power of GPUs to overcome the CPU bottlenecks of content-ba...

  10. A Theoretical Basis for Entropy-Scaling Effects in Human Mobility Patterns.

    Science.gov (United States)

    Osgood, Nathaniel D; Paul, Tuhin; Stanley, Kevin G; Qian, Weicheng

    2016-01-01

    Characterizing how people move through space has been an important component of many disciplines. With the advent of automated data collection through GPS and other location sensing systems, researchers have the opportunity to examine human mobility at spatio-temporal resolution heretofore impossible. However, the copious and complex data collected through these logging systems can be difficult for humans to fully exploit, leading many researchers to propose novel metrics for encapsulating movement patterns in succinct and useful ways. A particularly salient proposed metric is the mobility entropy rate of the string representing the sequence of locations visited by an individual. However, mobility entropy rate is not scale invariant: entropy rate calculations based on measurements of the same trajectory at varying spatial or temporal granularity do not yield the same value, limiting the utility of mobility entropy rate as a metric by confounding inter-experimental comparisons. In this paper, we derive a scaling relationship for mobility entropy rate of non-repeating straight line paths from the definition of Lempel-Ziv compression. We show that the resulting formulation predicts the scaling behavior of simulated mobility traces, and provides an upper bound on mobility entropy rate under certain assumptions. We further show that this formulation has a maximum value for a particular sampling rate, implying that optimal sampling rates for particular movement patterns exist.
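
    One widely used form of the estimator follows directly from the Lempel-Ziv parsing idea mentioned in the abstract. In the Python sketch below the entropy rate is approximated as n·log2(n) divided by the summed match lengths; variants of this estimator differ in their details, so this illustrates the family of estimators rather than the authors' exact formulation.

        import math

        def lz_entropy_rate(s: str) -> float:
            """Estimate the entropy rate (bits per symbol) of a location string:
            Lambda_i is the length of the shortest substring starting at i that
            does not appear anywhere in s[:i]."""
            n, total = len(s), 0
            for i in range(n):
                k = 1
                while i + k <= n and s[i:i + k] in s[:i]:
                    k += 1
                total += k
            return n * math.log2(n) / total

        # A repetitive trajectory scores low; an irregular one scores higher.
        print(lz_entropy_rate('ababababababab' * 8))
        print(lz_entropy_rate('abcadbcadcbdacbdacabdbcd' * 5))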

  11. Simulation and comparison of perturb and observe and incremental ...

    Indian Academy of Sciences (India)

    Perturb and Observe (P&O) algorithm and Incremental Conductance algorithm ... Keywords: solar array; insolation; MPPT; modelling; P&O; incremental conductance.
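
    The incremental conductance rule itself is compact: at the maximum power point dP/dV = 0, equivalently dI/dV = -I/V, and the sign of the comparison tells the tracker which way to move the operating voltage. The Python sketch below is a textbook single-step update with an assumed fixed step size, not the paper's simulation model.

        def inc_cond_step(v, i, v_prev, i_prev, v_ref, step=0.1):
            """One update of incremental-conductance MPPT (assumes v > 0)."""
            dv, di = v - v_prev, i - i_prev
            if dv == 0:
                if di > 0:              # irradiance increased: move right
                    v_ref += step
                elif di < 0:
                    v_ref -= step
            else:
                if di / dv > -i / v:    # dP/dV > 0: left of the MPP
                    v_ref += step
                elif di / dv < -i / v:  # dP/dV < 0: right of the MPP
                    v_ref -= step
            return v_ref                # unchanged exactly at the MPP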

  12. Efficient incremental relaying

    KAUST Repository

    Fareed, Muhammad Mehboob

    2013-07-01

    We propose a novel relaying scheme which improves the spectral efficiency of cooperative diversity systems by utilizing limited feedback from destination. Our scheme capitalizes on the fact that relaying is only required when direct transmission suffers deep fading. We calculate the packet error rate for the proposed efficient incremental relaying scheme with both amplify and forward and decode and forward relaying. Numerical results are also presented to verify their analytical counterparts. © 2013 IEEE.

  13. Entity versus incremental theories predict older adults' memory performance.

    Science.gov (United States)

    Plaks, Jason E; Chasteen, Alison L

    2013-12-01

    The authors examined whether older adults' implicit theories regarding the modifiability of memory in particular (Studies 1 and 3) and abilities in general (Study 2) would predict memory performance. In Study 1, individual differences in older adults' endorsement of the "entity theory" (a belief that one's ability is fixed) or "incremental theory" (a belief that one's ability is malleable) of memory were measured using a version of the Implicit Theories Measure (Dweck, 1999). Memory performance was assessed with a free-recall task. Results indicated that the higher the endorsement of the incremental theory, the better the free recall. In Study 2, older and younger adults' theories were measured using a more general version of the Implicit Theories Measure that focused on the modifiability of abilities in general. Again, for older adults, the higher the incremental endorsement, the better the free recall. Moreover, as predicted, implicit theories did not predict younger adults' memory performance. In Study 3, participants read mock news articles reporting evidence in favor of either the entity or incremental theory. Those in the incremental condition outperformed those in the entity condition on reading span and free-recall tasks. These effects were mediated by pretask worry such that, for those in the entity condition, higher worry was associated with lower performance. Taken together, these studies suggest that variation in entity versus incremental endorsement represents a key predictor of older adults' memory performance. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  14. Lifetime costs of lung transplantation : Estimation of incremental costs

    NARCIS (Netherlands)

    VanEnckevort, PJ; Koopmanschap, MA; Tenvergert, EM; VanderBij, W; Rutten, FFH

    1997-01-01

    Despite an expanding number of centres which provide lung transplantation, information about the incremental costs of lung transplantation is scarce. From 1991 until 1995, a technology assessment was performed in The Netherlands which provided information about the incremental costs of lung transplantation.

  15. Defense Agencies Initiative Increment 2 (DAI Inc 2)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report. In an ADM dated September 23, 2013, the MDA established Increment 2 as a MAIS program to include budget formulation; grants financial ...

  16. Biometrics Enabling Capability Increment 1 (BEC Inc 1)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report. Date Assigned: July 15, 2015. Multimodal biometrics submissions include iris, face, palm and finger prints from biometrics collection devices, which will support the Warfighter in ...

  17. 76 FR 53763 - Immigration Benefits Business Transformation, Increment I

    Science.gov (United States)

    2011-08-29

    Immigration Benefits Business Transformation, Increment I; Final Rule. AGENCY: U.S. Citizenship and Immigration Services, DHS. USCIS is engaged in an enterprise-wide transformation effort to implement new business processes and to ...

  18. Shakedown analysis by finite element incremental procedures

    International Nuclear Information System (INIS)

    Borkowski, A.; Kleiber, M.

    1979-01-01

    It is a common occurrence in many practical problems that external loads are variable and the exact time-dependent history of loading is unknown. Instead, the load is characterized by a given loading domain: a convex polyhedron in the n-dimensional space of load parameters. The problem is then to check whether or not a structure shakes down, i.e. responds elastically after a few elasto-plastic cycles, to a variable loading as defined above. Such a check can be performed by an incremental procedure. One should reproduce incrementally a simple cyclic process which consists of proportional load paths that connect the origin of the load space with the corners of the loading domain. It was proved that if a structure shakes down to such a loading history, then it is able to adapt itself to an arbitrary load path contained in the loading domain. The main advantage of such an approach is the possibility to use existing incremental finite-element computer codes. (orig.)

  19. Automatic incrementalization of Prolog based static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan

    2007-01-01

    Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic programs and evaluated using incremental tabled evaluation, a technique for efficiently updating memo tables in response to changes in facts and rules. The approach has been implemented and integrated into the Eclipse IDE. Our measurements show that this technique is effective for automatically ...

  20. Mission Planning System Increment 5 (MPS Inc 5)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report for Mission Planning System Increment 5 (MPS Inc 5). Date Assigned: May 19, 2014.

  1. MRI: Modular reasoning about interference in incremental programming

    OpenAIRE

    Oliveira, Bruno C. D. S; Schrijvers, Tom; Cook, William R

    2012-01-01

    Incremental Programming (IP) is a programming style in which new program components are defined as increments of other components. Examples of IP mechanisms include: Object-oriented programming (OOP) inheritance, aspect-oriented programming (AOP) advice and feature-oriented programming (FOP). A characteristic of IP mechanisms is that, while individual components can be independently defined, the composition of components makes those components become tightly coupled, sh...

  2. Incremental short daily home hemodialysis: a case series

    OpenAIRE

    Toth-Manikowski, Stephanie M.; Mullangi, Surekha; Hwang, Seungyoung; Shafi, Tariq

    2017-01-01

    Background: Patients starting dialysis often have substantial residual kidney function. Incremental hemodialysis provides a hemodialysis prescription that supplements patients' residual kidney function while maintaining total (residual + dialysis) urea clearance (standard Kt/Vurea) targets. We describe our experience with incremental hemodialysis in patients using NxStage System One for home hemodialysis. Case presentation: From 2011 to 2015, we initiated 5 incident hemodialysis patients on an ...

  3. Volatilities, Traded Volumes, and Price Increments in Derivative Securities

    Science.gov (United States)

    Kim, Kyungsik; Lim, Gyuchang; Kim, Soo Yong; Scalas, Enrico

    2007-03-01

    We apply detrended fluctuation analysis (DFA) to the statistics of Korean treasury bond (KTB) futures, from which the logarithmic increments, volatilities, and traded volumes are estimated over a specific time lag. In our case, the logarithmic increment of futures prices has no long-memory property, while the volatility and the traded volume exhibit long-memory properties. To determine whether the volatility clustering is due to inherent higher-order correlation not detected by applying the DFA directly to the logarithmic increments of the KTB futures, we shuffle the original tick data of futures prices and generate a geometric Brownian random walk with the same mean and standard deviation. Comparison of the three tick data sets shows that the higher-order correlation inherent in the logarithmic increments produces the volatility clustering. In particular, the result of the DFA on volatilities and traded volumes may support the hypothesis of price changes.
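
    DFA itself is a short algorithm: integrate the mean-removed series, detrend it in windows of increasing size, and read the scaling exponent off a log-log fit of fluctuation against window size. The Python sketch below is a minimal first-order DFA for illustration; the scale choices are arbitrary and not those of the study.

        import numpy as np

        def dfa(x, scales=(8, 16, 32, 64, 128), order=1):
            """Return the DFA scaling exponent alpha of series x."""
            y = np.cumsum(x - np.mean(x))           # integrated profile
            fluctuations = []
            for s in scales:
                windows = len(y) // s
                f2 = []
                for w in range(windows):
                    seg = y[w * s:(w + 1) * s]
                    t = np.arange(s)
                    trend = np.polyval(np.polyfit(t, seg, order), t)
                    f2.append(np.mean((seg - trend) ** 2))
                fluctuations.append(np.sqrt(np.mean(f2)))
            # slope of log F(s) versus log s
            return np.polyfit(np.log(scales), np.log(fluctuations), 1)[0]

        rng = np.random.default_rng(2)
        print(dfa(rng.normal(size=4096)))   # white noise: alpha near 0.5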

  4. Space-time quantitative source apportionment of soil heavy metal concentration increments.

    Science.gov (United States)

    Yang, Yong; Christakos, George; Guo, Mingwu; Xiao, Lu; Huang, Wei

    2017-04-01

    Assessing the space-time trends and detecting the sources of heavy metal accumulation in soils have important consequences for the prevention and treatment of soil heavy metal pollution. In this study, we collected soil samples in the eastern part of the Qingshan district, Wuhan city, Hubei Province, China, during the period 2010-2014. The Cd, Cu, Pb and Zn concentrations in soils exhibited a significant accumulation during 2010-2014. The spatiotemporal kriging technique, based on a quantitative characterization of soil heavy metal concentration variations in terms of non-separable variogram models, was employed to estimate the spatiotemporal soil heavy metal distribution in the study region. Our findings showed that the Cd, Cu, and Zn concentrations have an obvious incremental tendency from the southwestern to the central part of the study region, whereas the Pb concentrations exhibited an obvious tendency from the northern part to the central part of the region. Spatial overlay analysis was then used to obtain absolute and relative concentration increments of adjacent 1- or 5-year periods during 2010-2014. The spatial distribution of soil heavy metal concentration increments showed that the larger increments occurred in the center of the study region. Lastly, principal component analysis combined with the multiple linear regression method was employed to quantify the source apportionment of the soil heavy metal concentration increments in the region. Our results led to the conclusion that the sources of soil heavy metal concentration increments should be ascribed to industry, agriculture and traffic; in particular, 82.5% of the soil heavy metal concentration increment during 2010-2014 was ascribed to industrial/agricultural activity sources. In summary, spatiotemporal kriging and spatial overlay analysis were used to obtain the spatial distribution of heavy metal concentration increments in soils, and PCA-MLR was used to quantify their source apportionment. Copyright © 2017 ...

  5. Support vector machine incremental learning triggered by wrongly predicted samples

    Science.gov (United States)

    Tang, Ting-long; Guan, Qiu; Wu, Yi-rong

    2018-05-01

    According to the classic Karush-Kuhn-Tucker (KKT) theorem, at every step of incremental support vector machine (SVM) learning, a newly added sample that violates the KKT conditions will become a new support vector (SV) and migrate old samples between the SV set and the non-support-vector (NSV) set, and at the same time the learning model should be updated based on the SVs. However, it is not exactly clear at this moment which of the old samples will change between SVs and NSVs. Additionally, the learning model will be unnecessarily updated, which will not greatly increase its accuracy but will decrease the training speed. Therefore, how to choose the new SVs from the old sets during the incremental stages, and when to process incremental steps, will greatly influence the accuracy and efficiency of incremental SVM learning. In this work, a new algorithm is proposed to select candidate SVs and to use wrongly predicted samples to trigger the incremental processing. Experimental results show that the proposed algorithm can achieve good performance with high efficiency, high speed and good accuracy.
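
    The trigger logic can be sketched even without the paper's exact update equations. In the Python sketch below, a wrongly predicted streamed sample triggers an update, and full retraining on the accumulated set stands in for a true incremental SVM update; treating every misclassified sample as a KKT violator is a simplification made here for brevity.

        from sklearn.svm import SVC

        def error_triggered_svm(stream, x_init, y_init):
            """Update the model only when a new sample is predicted wrongly."""
            X, y = list(x_init), list(y_init)
            clf = SVC(kernel='rbf').fit(X, y)
            for xi, yi in stream:
                if clf.predict([xi])[0] != yi:   # wrong prediction: trigger update
                    X.append(xi)
                    y.append(yi)
                    clf = SVC(kernel='rbf').fit(X, y)  # stand-in for incremental step
            return clf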

  6. Average-case analysis of incremental topological ordering

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Friedrich, Tobias

    2010-01-01

    Many applications like pointer analysis and incremental compilation require maintaining a topological ordering of the nodes of a directed acyclic graph (DAG) under dynamic updates. All known algorithms for this problem are either only analyzed for worst-case insertion sequences or only evaluated experimentally on random DAGs. We present the first average-case analysis of incremental topological ordering algorithms. We prove an expected runtime of ... under insertion of the edges of a complete DAG in a random order for the algorithms of Alpern et al. (1990) [4], Katriel and Bodlaender (2006) [18], and Pearce ...
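
    For context, the algorithms being analyzed maintain the order by local repair. The Python sketch below follows the Pearce-Kelly idea in a simplified form: on inserting an edge u -> v that violates the current order, only the nodes whose positions lie between v and u are reordered, and a cycle is reported if u is reachable from v inside that window. It is a compact illustration, not any of the analyzed implementations.

        def insert_edge(adj, order, u, v):
            """Insert u -> v into DAG `adj` and repair `order` (node -> position)."""
            adj[u].add(v)
            if order[u] < order[v]:
                return                            # order still consistent
            lo, hi = order[v], order[u]
            window = {n for n, p in order.items() if lo <= p <= hi}
            seen, stack = set(), [v]              # nodes reachable from v in window
            while stack:
                n = stack.pop()
                if n in seen:
                    continue
                seen.add(n)
                stack.extend(w for w in adj[n] if w in window and w not in seen)
            if u in seen:
                raise ValueError('edge creates a cycle')
            affected = sorted(window, key=order.get)
            slots = [order[n] for n in affected]
            # everything not reachable from v keeps its relative order and moves
            # first; v and its descendants follow, so u now precedes v's block
            reordered = [n for n in affected if n not in seen] + \
                        [n for n in affected if n in seen]
            for n, p in zip(reordered, slots):
                order[n] = p

        adj = {1: set(), 2: set(), 3: set()}
        order = {1: 0, 2: 1, 3: 2}
        insert_edge(adj, order, 3, 1)             # forces 1 after 3
        print(sorted(order, key=order.get))       # a valid order, e.g. [2, 3, 1]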

  7. Motion-Induced Blindness Using Increments and Decrements of Luminance

    Directory of Open Access Journals (Sweden)

    Stine Wm Wren

    2017-10-01

    Full Text Available Motion-induced blindness describes the disappearance of stationary elements of a scene when other, perhaps non-overlapping, elements of the scene are in motion. We measured psychophysically the effects of increment (200.0 cd/m²) and decrement (15.0 cd/m²) targets and masks presented on a grey background (108.0 cd/m²), tapping into putative ON- and OFF-channels, on the rate of target disappearance. We presented two-frame motion, which has coherent motion energy, and dynamic Glass patterns and dynamic anti-Glass patterns, which do not have coherent motion energy. Using the method of constant stimuli, participants viewed stimuli of varying durations (3.1 s, 4.6 s, 7.0 s, 11 s, or 16 s) in a given trial and then indicated whether or not the targets vanished during that trial. Psychometric function midpoints were used to define the absolute threshold mask duration for the disappearance of the target. 95% confidence intervals for threshold disappearance times were estimated using a bootstrap technique for each of the participants across two experiments. Decrement masks were more effective than increment masks with increment targets. Increment targets were easier to mask than decrement targets. Distinct mask pattern types had no effect, suggesting that perceived coherence contributes to the effectiveness of the mask. The ON/OFF dichotomy clearly carries its influence to the level of perceived motion coherence. Further, the asymmetry in the effects of increment and decrement masks on increment and decrement targets might lead one to speculate that they reflect the ‘importance’ of detecting decrements in the environment.

  8. Logistics Modernization Program Increment 2 (LMP Inc 2)

    Science.gov (United States)

    2016-03-01

    Sections 3 and 4 of the LMP Increment 2 Business Case, ADM, key functional requirements, Critical Design Review (CDR) Reports, and Economic ... from the 2013 version of the LMP Increment 2 Economic Analysis and replace it with references to the Economic Analysis that will be completed ... of (inbound/outbound) IDOCs into the system. LMP must be able to successfully process 95% of (inbound/outbound) IDOCs into the system. Will meet ...

  9. Incremental Tensor Principal Component Analysis for Handwritten Digit Recognition

    Directory of Open Access Journals (Sweden)

    Chang Liu

    2014-01-01

    To overcome the shortcomings of traditional dimensionality reduction algorithms, an incremental tensor principal component analysis (ITPCA) algorithm based on an updated-SVD technique is proposed in this paper. The paper proves the relationship between PCA, 2DPCA, MPCA and the graph embedding framework theoretically, and derives the incremental learning procedures for adding a single sample and for adding multiple samples in detail. Experiments on handwritten digit recognition have demonstrated that ITPCA achieves better recognition performance than vector-based principal component analysis (PCA), incremental principal component analysis (IPCA) and multilinear principal component analysis (MPCA) algorithms. At the same time, ITPCA also has lower time and space complexity.
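
    While the tensor-based update itself is involved, the vector-based incremental baseline (IPCA) that the paper compares against runs in a few lines. The sketch below uses scikit-learn's IncrementalPCA, which is likewise built on an incrementally updated SVD; the data shapes are illustrative.

        import numpy as np
        from sklearn.decomposition import IncrementalPCA

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 64))      # e.g. flattened 8x8 digit images

        ipca = IncrementalPCA(n_components=16)
        for batch in np.array_split(X, 10):  # samples arrive batch by batch
            ipca.partial_fit(batch)          # update the subspace incrementally

        Z = ipca.transform(X)                # 16-dim features for a classifier
        print(Z.shape, round(ipca.explained_variance_ratio_.sum(), 3))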

  10. Validation of the periodicity of growth increment deposition in ...

    African Journals Online (AJOL)

    Validation of the periodicity of growth increment deposition in otoliths from the larval and early juvenile stages of two cyprinids from the Orange–Vaal river ... Linear regression models were fitted to the known age post-fertilisation and the age estimated using increment counts to test the correspondence between the two for ...

  11. Creating Helical Tool Paths for Single Point Incremental Forming

    DEFF Research Database (Denmark)

    Skjødt, Martin; Hancock, Michael H.; Bay, Niels

    2007-01-01

    Single point incremental forming (SPIF) is a relatively new sheet forming process. A sheet is clamped in a rig and formed incrementally using a rotating single point tool in the form of a rod with a spherical end. The process is often performed on a CNC milling machine and the tool movement...

  12. Single point incremental forming: Formability of PC sheets

    Science.gov (United States)

    Formisano, A.; Boccarusso, L.; Carrino, L.; Lambiase, F.; Minutolo, F. Memola Capece

    2018-05-01

    Recent research on Single Point Incremental Forming of polymers has only begun to explore the possibility of expanding the materials capability window of this flexible forming process beyond metals, by demonstrating the workability of thermoplastic polymers at room temperature. Given the different behaviour of polymers compared to metals, several aspects need to be examined in more depth to better understand how these materials respond when incrementally formed. The aim of this work is therefore to investigate the formability of incrementally formed polycarbonate thin sheets. To this end, an experimental investigation at room temperature was conducted involving formability tests; cone and pyramid frusta with varying wall angles were manufactured by processing polycarbonate sheets of different thicknesses with tools of different diameters, in order to draw conclusions on the formability of polymer sheets through the evaluation of the forming angles and the observation of the failure mechanisms.

  13. Martingales, nonstationary increments, and the efficient market hypothesis

    Science.gov (United States)

    McCauley, Joseph L.; Bassler, Kevin E.; Gunaratne, Gemunu H.

    2008-06-01

    We discuss the deep connection between nonstationary increments, martingales, and the efficient market hypothesis for stochastic processes x(t) with arbitrary diffusion coefficients D(x,t). We explain why a test for a martingale is generally a test for uncorrelated increments. We explain why martingales look Markovian at the level of both simple averages and 2-point correlations. But while a Markovian market has no memory to exploit and cannot be beaten systematically, a martingale admits memory that might be exploitable in higher order correlations. We also use the analysis of this paper to correct a misstatement of the ‘fair game’ condition in terms of serial correlations in Fama’s paper on the EMH. We emphasize that the use of the log increment as a variable in data analysis generates spurious fat tails and spurious Hurst exponents.
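
    In standard notation, the martingale ('fair game') condition discussed here, and the uncorrelated-increments property that follows from it by iterated conditioning, read:

        \mathbb{E}\left[ x(t) \mid x(\tau),\ \tau \le s \right] = x(s), \qquad t > s ,

        \mathbb{E}\left[ \big(x(t)-x(s)\big)\big(x(s)-x(s')\big) \right] = 0, \qquad t > s > s' .

    The second line follows by conditioning the factor x(t)-x(s) on the history up to time s; this is why a test for a martingale is, in practice, a test for uncorrelated increments, while higher-order correlations remain unconstrained and, as the abstract notes, may carry exploitable memory.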

  14. Incremental Learning for Place Recognition in Dynamic Environments

    OpenAIRE

    Luo, Jie; Pronobis, Andrzej; Caputo, Barbara; Jensfelt, Patric

    2007-01-01

    Vision-based place recognition is a desirable feature for an autonomous mobile system. In order to work in realistic scenarios, visual recognition algorithms should be adaptive, i.e. should be able to learn from experience and adapt continuously to changes in the environment. This paper presents a discriminative incremental learning approach to place recognition. We use a recently introduced version of the incremental SVM, which ...

  15. On the instability increments of a stationary pinch

    International Nuclear Information System (INIS)

    Bud'ko, A.B.

    1989-01-01

    The stability of a stationary pinch against helical modes is studied numerically. It is shown that, when the plasma pressure falls off rapidly towards the pinch boundary, for example for an isothermal diffusion pinch with a Gaussian density distribution, instabilities with m=0 modes grow most quickly. Instability increments are calculated. A simple analytical expression is obtained for the maximum growth increment of the sausage instability for self-similar Gaussian profiles.

  16. Incremental Trust in Grid Computing

    DEFF Research Database (Denmark)

    Brinkløv, Michael Hvalsøe; Sharp, Robin

    2007-01-01

    This paper describes a comparative simulation study of some incremental trust and reputation algorithms for handling behavioural trust in large distributed systems. Two types of reputation algorithm (based on discrete and Bayesian evaluation of ratings) and two ways of combining direct trust and ...... of Grid computing systems....

  17. Convergent systems vs. incremental stability

    NARCIS (Netherlands)

    Rüffer, B.S.; Wouw, van de N.; Mueller, M.

    2013-01-01

    Two similar stability notions are considered; one is the long established notion of convergent systems, the other is the younger notion of incremental stability. Both notions require that any two solutions of a system converge to each other. Yet these stability concepts are different, in the sense

  18. Power calculation of linear and angular incremental encoders

    Science.gov (United States)

    Prokofev, Aleksandr V.; Timofeev, Aleksandr N.; Mednikov, Sergey V.; Sycheva, Elena A.

    2016-04-01

    Automation technology is constantly expanding its role in improving the efficiency of manufacturing and testing processes in all branches of industry. More than ever before, the mechanical movements of linear slides, rotary tables, robot arms, actuators, etc. are numerically controlled. Linear and angular incremental photoelectric encoders measure mechanical motion and transmit the measured values back to the control unit. The capabilities of these systems are undergoing continual development in terms of their resolution, accuracy and reliability, their measuring ranges, and maximum speeds. This article discusses a method for the power calculation of linear and angular incremental photoelectric encoders, used to find the optimum parameters for their components, such as light emitters, photodetectors, linear and angular scales, and other optical components. It analyzes methods and devices that permit high resolutions in the order of 0.001 mm or 0.001°, as well as large measuring lengths of over 100 mm. In linear and angular incremental photoelectric encoders, the optical beam is usually formed by a condenser lens; as it passes through the measuring unit, its intensity changes depending on the movement of a scanning head or measuring raster. The transmitted light beam is then converted into an electrical signal by the photodetector block for processing in the electronics. The starting point of the power calculation is therefore the required value of the optical signal at the input of the photodetector block that can be reliably recorded and processed in the electronic unit of linear and angular incremental optoelectronic encoders.

  19. Making context explicit for explanation and incremental knowledge acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Brezillon, P. [Univ. Paris (France)]

    1996-12-31

    Intelligent systems may be improved by making context explicit in problem solving. This is a lesson drawn from a study of the reasons why a number of knowledge-based systems (KBSs) failed. We discuss the value of making context explicit in explanation generation and incremental knowledge acquisition, two important aspects of intelligent systems that aim to cooperate with users. We show how context can be used to explain better and to acquire knowledge incrementally. The advantages of using context in explanation and incremental knowledge acquisition are discussed through SEPIT, an expert system for supporting diagnosis and explanation through simulation of power plants. We point out how the limitations of such systems may be overcome by making context explicit.

  20. SmilesDrawer: Parsing and Drawing SMILES-Encoded Molecular Structures Using Client-Side JavaScript.

    Science.gov (United States)

    Probst, Daniel; Reymond, Jean-Louis

    2018-01-22

    Here we present SmilesDrawer, a dependency-free JavaScript component capable of both parsing and drawing SMILES-encoded molecular structures client-side, developed to be easily integrated into web projects and to display organic molecules in large numbers and fast succession. SmilesDrawer can draw structurally and stereochemically complex structures such as maitotoxin and C60 without using templates, yet has an exceptionally small computational footprint and low memory usage without the requirement for loading images or any other form of client-server communication, making it easy to integrate even in secure (intranet, firewalled) or offline applications. These features allow the rendering of thousands of molecular structure drawings on a single web page within seconds on a wide range of hardware supporting modern browsers. The source code as well as the most recent build of SmilesDrawer is available on GitHub (http://doc.gdb.tools/smilesDrawer/). Both yarn and npm packages are also available.

  1. A Python package for parsing, validating, mapping and formatting sequence variants using HGVS nomenclature.

    Science.gov (United States)

    Hart, Reece K; Rico, Rudolph; Hare, Emily; Garcia, John; Westbrook, Jody; Fusaro, Vincent A

    2015-01-15

    Biological sequence variants are commonly represented in scientific literature, clinical reports and databases of variation using the mutation nomenclature guidelines endorsed by the Human Genome Variation Society (HGVS). Despite the widespread use of the standard, no freely available and comprehensive programming library previously existed. Here we report an open-source and easy-to-use Python library that facilitates the parsing, manipulation, formatting and validation of variants according to the HGVS specification. The current implementation focuses on the subset of the HGVS recommendations that precisely describe sequence-level variation relevant to the application of high-throughput sequencing to clinical diagnostics. The package is released under the Apache 2.0 open-source license. Source code, documentation and issue tracking are available at http://bitbucket.org/hgvs/hgvs/. Python packages are available at PyPI (https://pypi.python.org/pypi/hgvs). Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
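
    As a brief usage illustration (a sketch: the accession and variant below are examples, and parsing alone needs no reference-sequence database or network access):

        import hgvs.parser

        hp = hgvs.parser.Parser()
        var = hp.parse_hgvs_variant("NM_000551.3:c.292T>C")  # example variant
        # the parsed object exposes accession, sequence type and position/edit
        print(var.ac, var.type, var.posedit)                 # NM_000551.3 c 292T>C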

  2. Three routes forward for biofuels: Incremental, leapfrog, and transitional

    International Nuclear Information System (INIS)

    Morrison, Geoff M.; Witcover, Julie; Parker, Nathan C.; Fulton, Lew

    2016-01-01

    This paper examines three technology routes for lowering the carbon intensity of biofuels: (1) a leapfrog route that focuses on major technological breakthroughs in lignocellulosic pathways at new, stand-alone biorefineries; (2) an incremental route in which improvements are made to existing U.S. corn ethanol and soybean biodiesel biorefineries; and (3) a transitional route in which biotechnology firms gain experience growing, handling, or chemically converting lignocellulosic biomass in a lower-risk fashion than leapfrog biorefineries by leveraging existing capital stock. We find the incremental route is likely to involve the largest production volumes and greenhouse gas benefits until at least the mid-2020s, but transitional and leapfrog biofuels together have far greater long-term potential. We estimate that the Renewable Fuel Standard, California's Low Carbon Fuel Standard, and federal tax credits provided an incentive of roughly $1.5–2.5 per gallon of leapfrog biofuel between 2012 and 2015, but that regulatory elements in these policies mostly incentivize lower-risk incremental investments. Adjustments in policy may be necessary to bring a greater focus on transitional technologies that provide targeted learning and cost reduction opportunities for leapfrog biofuels. - Highlights: • Three technological pathways are compared that lower carbon intensity of biofuels. • Incremental changes lead to faster greenhouse gas reductions. • Leapfrog changes lead to greatest long-term potential. • Two main biofuel policies (RFS and LCFS) are largely incremental in nature. • Transitional biofuels offer medium-risk, medium reward pathway.

  3. Incrementality in naming and reading complex numerals: Evidence from eyetracking

    NARCIS (Netherlands)

    Korvorst, M.H.W.; Roelofs, A.P.A.; Levelt, W.J.M.

    2006-01-01

    Individuals speak incrementally when they interleave planning and articulation. Eyetracking, along with the measurement of speech onset latencies, can be used to gain more insight into the degree of incrementality adopted by speakers. In the current article, two eyetracking experiments are reported

  4. Atmospheric response to Saharan dust deduced from ECMWF reanalysis increments

    Science.gov (United States)

    Kishcha, P.; Alpert, P.; Barkan, J.; Kirchner, I.; Machenhauer, B.

    2003-04-01

    This study focuses on the atmospheric temperature response to dust deduced from a new source of data - the European Reanalysis (ERA) increments. These increments are the systematic errors of global climate models, generated in the reanalysis procedure. The model errors result not only from the lack of desert dust but also from a complex combination of many kinds of model errors. Over the Sahara desert the missing dust radiative effect is believed to be a predominant model defect which should significantly affect the increments. This dust effect was examined by considering the correlation between the increments and remotely sensed dust. Comparisons were made between April temporal variations of the ERA analysis increments and the variations of the Total Ozone Mapping Spectrometer aerosol index (AI) between 1979 and 1993. A distinctive structure was identified in the distribution of correlation, composed of three nested areas: a positive correlation area (PCA) with high positive correlation (> 0.5), an area of low correlation, and a negative correlation area (NCA) with high negative correlation (< -0.5). Further analysis based on data from the European Centre for Medium-Range Weather Forecasts (ECMWF) suggests that the PCA (NCA) corresponds mainly to anticyclonic (cyclonic) flow, negative (positive) vorticity, and downward (upward) airflow. These facts indicate an interaction between dust-forced heating/cooling and atmospheric circulation. The April correlation results are supported by the analysis of the vertical distribution of dust concentration, derived from the 24-hour dust prediction system at Tel Aviv University (website: http://earth.nasa.proj.ac.il/dust/current/). For other months the analysis is more complicated, because humidity increases substantially as the ITCZ progresses northward, with a significant impact on the increments.

  5. OXYGEN UPTAKE KINETICS DURING INCREMENTAL- AND DECREMENTAL-RAMP CYCLE ERGOMETRY

    Directory of Open Access Journals (Sweden)

    Fadıl Özyener

    2011-09-01

    The pulmonary oxygen uptake (VO2) response to incremental-ramp cycle ergometry typically demonstrates lagged-linear first-order kinetics with a slope of ~10-11 ml·min-1·W-1, both above and below the lactate threshold (θL), i.e. there is no discernible VO2 slow component (or "excess" VO2) above θL. We were interested in determining whether a reverse ramp profile would yield the same response dynamics. Ten healthy males performed a maximum incremental ramp (15-30 W·min-1, depending on fitness). On another day, the work rate (WR) was increased abruptly to the incremental maximum and then decremented at the same rate of 15-30 W·min-1 (step-decremental ramp). Five subjects also performed a sub-maximal ramp-decremental test from 90% of θL. VO2 was determined breath-by-breath from continuous monitoring of respired volumes (turbine) and gas concentrations (mass spectrometer). The incremental-ramp VO2-WR slope was 10.3 ± 0.7 ml·min-1·W-1, whereas that of the descending limb of the decremental ramp was 14.2 ± 1.1 ml·min-1·W-1 (p < 0.005). The sub-maximal decremental-ramp slope, however, was only 9.8 ± 0.9 ml·min-1·W-1: not significantly different from that of the incremental ramp. This suggests that the VO2 response in the supra-θL domain of incremental-ramp exercise manifests not actual, but pseudo, first-order kinetics.

  6. Finance for incremental housing: current status and prospects for expansion

    NARCIS (Netherlands)

    Ferguson, B.; Smets, P.G.S.M.

    2010-01-01

    Appropriate finance can greatly increase the speed and lower the cost of incremental housing - the process used by much of the low/moderate-income majority of most developing countries to acquire shelter. Informal finance continues to dominate the funding of incremental housing. However, new sources

  7. One Step at a Time: SBM as an Incremental Process.

    Science.gov (United States)

    Conrad, Mark

    1995-01-01

    Discusses incremental SBM budgeting and answers questions regarding resource equity, bookkeeping requirements, accountability, decision-making processes, and purchasing. Approaching site-based management as an incremental process recognizes that every school system engages in some level of site-based decisions. Implementation can be gradual and…

  8. History Matters: Incremental Ontology Reasoning Using Modules

    Science.gov (United States)

    Cuenca Grau, Bernardo; Halaschek-Wiener, Christian; Kazakov, Yevgeny

    The development of ontologies involves continuous but relatively small modifications. Existing ontology reasoners, however, do not take advantage of the similarities between different versions of an ontology. In this paper, we propose a technique for incremental reasoning—that is, reasoning that reuses information obtained from previous versions of an ontology—based on the notion of a module. Our technique does not depend on a particular reasoning calculus and thus can be used in combination with any reasoner. We have applied our results to incremental classification of OWL DL ontologies and found significant improvement over regular classification time on a set of real-world ontologies.

  9. 76 FR 73475 - Immigration Benefits Business Transformation, Increment I; Correction

    Science.gov (United States)

    2011-11-29

    ... Benefits Business Transformation, Increment I, 76 FR 53764 (Aug. 29, 2011). The final rule removed form... [CIS No. 2481-09; Docket No. USCIS-2009-0022] RIN 1615-AB83 Immigration Benefits Business Transformation, Increment I; Correction AGENCY: U.S. Citizenship and Immigration Services, DHS. ACTION: Final...

  10. Short-term load forecasting with increment regression tree

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jingfei; Stenzel, Juergen [Darmstadt University of Technology, Darmstadt 64283 (Germany)]

    2006-06-15

    This paper presents a new regression tree method for short-term load forecasting. Both increment and non-increment trees are built according to the historical data, providing the data space partition and input variable selection. A support vector machine is employed on the samples of the regression tree nodes for further fine regression. The results of the different tree nodes are integrated through a weighted average method to obtain the comprehensive forecasting result. The effectiveness of the proposed method is demonstrated through its application to an actual system. (author)
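
    A minimal sketch of this tree-plus-SVM idea (a plain illustration with assumed names and settings, not the paper's exact increment-tree construction or weighting scheme): a regression tree partitions the input space, and a support vector regressor refines the prediction within each leaf.

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor
        from sklearn.svm import SVR

        def fit_tree_svr(X, y, max_leaf_nodes=8):
            # the tree provides the data-space partition ...
            tree = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes).fit(X, y)
            leaves = tree.apply(X)
            # ... and one SVR per leaf provides the fine regression
            experts = {leaf: SVR(C=10.0).fit(X[leaves == leaf], y[leaves == leaf])
                       for leaf in np.unique(leaves)}
            return tree, experts

        def predict_tree_svr(tree, experts, X):
            leaves = tree.apply(X)
            return np.array([experts[leaf].predict(x.reshape(1, -1))[0]
                             for leaf, x in zip(leaves, X)])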

  11. A comparison between swallowing sounds and vibrations in patients with dysphagia

    Science.gov (United States)

    Movahedi, Faezeh; Kurosu, Atsuko; Coyle, James L.; Perera, Subashan

    2017-01-01

    Cervical auscultation refers to the observation and analysis of sounds or vibrations captured during swallowing using either a stethoscope or acoustic/vibratory detectors. Microphones and accelerometers have recently become two common sensors used in modern cervical auscultation methods. There are open questions about whether swallowing signals recorded by these two sensors provide unique or complementary information about swallowing function, or whether they present interchangeable information. The aim of this study is to present a broad comparison of swallowing signals recorded by a microphone and a tri-axial accelerometer from 72 patients (mean age 63.94 ± 12.58 years, 42 male, 30 female), who underwent videofluoroscopic examination. The participants swallowed one or more boluses of thickened liquids of different consistencies, including thin liquids, nectar-thick liquids, and pudding. A comfortable self-selected volume from a cup or a volume controlled by the examiner from a 5 ml spoon was given to the participants. A comprehensive set of features was extracted in the time, information-theoretic, and frequency domains from each of the 881 swallows presented in this study. The swallowing sounds exhibited significantly higher frequency content and kurtosis values than the swallowing vibrations. In addition, the Lempel-Ziv complexity was lower for swallowing sounds than for swallowing vibrations. In conclusion, the information provided by microphones and accelerometers about swallowing function is unique, and these two transducers are not interchangeable. Consequently, the selection of the transducer is a vital step in future studies. PMID:28495001
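
    The Lempel-Ziv complexity used as a feature here (and in several records below) is typically computed by binarizing the signal and counting the phrases of its LZ76 parsing. A minimal sketch, with median-threshold binarization as an assumed preprocessing step:

        import numpy as np

        def lz76_phrase_count(bits):
            """Number of distinct phrases in the LZ76 parsing of a 0/1 sequence."""
            s = "".join(map(str, bits))
            i, c, n = 0, 0, len(s)
            while i < n:
                l = 1
                # extend the phrase while it already occurs earlier in the text
                while i + l <= n and s[i:i + l] in s[:i + l - 1]:
                    l += 1
                c += 1
                i += l
            return c

        def normalized_lz(signal):
            bits = (np.asarray(signal) > np.median(signal)).astype(int)
            n = len(bits)
            return lz76_phrase_count(bits) * np.log2(n) / n  # ~1 for random input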

  12. The Time Course of Incremental Word Processing during Chinese Reading

    Science.gov (United States)

    Zhou, Junyi; Ma, Guojie; Li, Xingshan; Taft, Marcus

    2018-01-01

    In the current study, we report two eye movement experiments investigating how Chinese readers process incremental words during reading. These are words where some of the component characters constitute another word (an embedded word). In two experiments, eye movements were monitored while the participants read sentences with incremental words…

  13. Switch-mode High Voltage Drivers for Dielectric Electro Active Polymer (DEAP) Incremental Actuators

    DEFF Research Database (Denmark)

    Thummala, Prasanth

    voltage DC-DC converters for driving the DEAP based incremental actuators. The DEAP incremental actuator technology has the potential to be used in various industries, e.g., automotive, space and medicine. The DEAP incremental actuator consists of three electrically isolated and mechanically connected...

  14. Efficient Incremental Checkpointing of Java Programs

    DEFF Research Database (Denmark)

    Lawall, Julia Laetitia; Muller, Gilles

    2000-01-01

    This paper investigates the optimization of language-level checkpointing of Java programs. First, we describe how to systematically associate incremental checkpoints with Java classes. While being safe, the genericness of this solution induces substantial execution overhead. Second, to solve...

  15. Single-point incremental forming and formability-failure diagrams

    DEFF Research Database (Denmark)

    Silva, M.B.; Skjødt, Martin; Atkins, A.G.

    2008-01-01

    In a recent work [1], the authors constructed a closed-form analytical model that is capable of dealing with the fundamentals of single point incremental forming and explaining the experimental and numerical results published in the literature over the past couple of years. The model is based...... of deformation that are commonly found in general single point incremental forming processes; and (ii) to investigate the formability limits of SPIF in terms of ductile damage mechanics and the question of whether necking does, or does not, precede fracture. Experimentation by the authors together with data...

  16. Complex Incremental Product Innovation in Established Service Firms: A Micro Institutional Perspective

    OpenAIRE

    Vermeulen, Patrick; Bosch, Frans; Volberda, Henk

    2007-01-01

    textabstractMany product innovation studies have described key determinants that should lead to successful incremental product innovation. Despite numerous studies suggesting how incremental product innovation should be successfully undertaken, many firms still struggle with this type of innovation. In this paper, we use an institutional perspective to investigate why established firms in the financial services industry struggle with their complex incremental product innovation efforts. We ar...

  17. Incremental Innovation and Competitive Pressure in the Presence of Discrete Innovation

    DEFF Research Database (Denmark)

    Ghosh, Arghya; Kato, Takao; Morita, Hodaka

    2017-01-01

    Technical progress consists of improvements made upon the existing technology (incremental innovation) and innovative activities aiming at entirely new technology (discrete innovation). Incremental innovation is often of limited relevance to the new technology invented by successful discrete...

  18. Calculation of the increment reduction in spruce stands by charcoal smoke

    Energy Technology Data Exchange (ETDEWEB)

    Guede, J

    1954-01-01

    Chronic damage to spruce trees by charcoal smoke, often hardly noticeable from outward appearance but causing marked reductions of wood increment, can be determined by means of increment-core analysis. Sulfur dioxide causes the stomata of the needles to close, which restricts the circulation of water; assimilation and wood increment are reduced as a result. The cores are taken from uninjured trees of the dominant class, since the growth trend of such trees is disturbed only by atmospheric influences and disturbances in the circulation of water. The decrease of increment of a stand can be judged from the trend of basal-area growth of sample trees. Two methods are applied. In the first, the difference between the mean total increment before and after the damage is calculated from the yield table, deriving the site quality classes from the basal-area growth of dominant stems; this is possible by using the mean diameter of each age class and the frequency curve of basal area for each site class. In the second, the reduction of basal-area increment of the sample trees is measured directly. The total reduction of a stand can then be judged from the share of the dominant stem class in the total current basal-area growth of a sound stand and from the reduction percentage of the sample trees.

  19. Do otolith increments allow correct inferences about age and growth of coral reef fishes?

    Science.gov (United States)

    Booth, D. J.

    2014-03-01

    Otolith increment structure is widely used to estimate age and growth of marine fishes. Here, I test the accuracy of the long-term otolith increment analysis of the lemon damselfish Pomacentrus moluccensis to describe age and growth characteristics. I compare the number of putative annual otolith increments (as a proxy for actual age) and widths of these increments (as proxies for somatic growth) with actual tagged fish-length data, based on a 6-year dataset, the longest time course for a coral reef fish. Estimated age from otoliths corresponded closely with actual age in all cases, confirming annual increment formation. However, otolith increment widths were poor proxies for actual growth in length [linear regression r2 = 0.44-0.90, n = 6 fish] and were clearly of limited value in estimating annual growth. Up to 60 % of the annual growth variation was missed using otolith increments, suggesting the long-term back calculations of otolith growth characteristics of reef fish populations should be interpreted with caution.

  20. The Dark Side of Malleability: Incremental Theory Promotes Immoral Behaviors.

    Science.gov (United States)

    Huang, Niwen; Zuo, Shijiang; Wang, Fang; Cai, Pan; Wang, Fengxiang

    2017-01-01

    Implicit theories drastically affect an individual's processing of social information, decision making, and action. The present research focuses on whether individuals who hold the implicit belief that people's moral character is fixed (entity theorists) and individuals who hold the implicit belief that people's moral character is malleable (incremental theorists) make different choices when facing a moral decision. Incremental theorists are less likely to make the fundamental attribution error (FAE), rarely make moral judgment based on traits and show more tolerance to immorality, relative to entity theorists, which might decrease the possibility of undermining the self-image when they engage in immoral behaviors, and thus we posit that incremental beliefs facilitate immorality. Four studies were conducted to explore the effect of these two types of implicit theories on immoral intention or practice. The association between implicit theories and immoral behavior was preliminarily examined from the observer perspective in Study 1, and the results showed that people tended to associate immoral behaviors (including everyday immoral intention and environmental destruction) with an incremental theorist rather than an entity theorist. Then, the relationship was further replicated from the actor perspective in Studies 2-4. In Study 2, implicit theories, which were measured, positively predicted the degree of discrimination against carriers of the hepatitis B virus. In Study 3, implicit theories were primed through reading articles, and the participants in the incremental condition showed more cheating than those in the entity condition. In Study 4, implicit theories were primed through a new manipulation, and the participants in the unstable condition (primed incremental theory) showed more discrimination than those in the other three conditions. Taken together, the results of our four studies were consistent with our hypotheses.

  1. The Dark Side of Malleability: Incremental Theory Promotes Immoral Behaviors

    Directory of Open Access Journals (Sweden)

    Niwen Huang

    2017-08-01

    Implicit theories drastically affect an individual’s processing of social information, decision making, and action. The present research focuses on whether individuals who hold the implicit belief that people’s moral character is fixed (entity theorists) and individuals who hold the implicit belief that people’s moral character is malleable (incremental theorists) make different choices when facing a moral decision. Incremental theorists are less likely to make the fundamental attribution error (FAE), rarely make moral judgment based on traits and show more tolerance to immorality, relative to entity theorists, which might decrease the possibility of undermining the self-image when they engage in immoral behaviors, and thus we posit that incremental beliefs facilitate immorality. Four studies were conducted to explore the effect of these two types of implicit theories on immoral intention or practice. The association between implicit theories and immoral behavior was preliminarily examined from the observer perspective in Study 1, and the results showed that people tended to associate immoral behaviors (including everyday immoral intention and environmental destruction) with an incremental theorist rather than an entity theorist. Then, the relationship was further replicated from the actor perspective in Studies 2–4. In Study 2, implicit theories, which were measured, positively predicted the degree of discrimination against carriers of the hepatitis B virus. In Study 3, implicit theories were primed through reading articles, and the participants in the incremental condition showed more cheating than those in the entity condition. In Study 4, implicit theories were primed through a new manipulation, and the participants in the unstable condition (primed incremental theory) showed more discrimination than those in the other three conditions. Taken together, the results of our four studies were consistent with our hypotheses.

  2. Dental caries increments and related factors in children with type 1 diabetes mellitus.

    Science.gov (United States)

    Siudikiene, J; Machiulskiene, V; Nyvad, B; Tenovuo, J; Nedzelskiene, I

    2008-01-01

    The aim of this study was to analyse possible associations between caries increments and selected caries determinants in children with type 1 diabetes mellitus and their age- and sex-matched non-diabetic controls, over 2 years. A total of 63 (10-15 years old) diabetic and non-diabetic pairs were examined for dental caries, oral hygiene and salivary factors. Salivary flow rates, buffer effect, concentrations of mutans streptococci, lactobacilli, yeasts, total IgA and IgG, protein, albumin, amylase and glucose were analysed. Means of 2-year decayed/missing/filled surface (DMFS) increments were similar in diabetics and their controls. Over the study period, both unstimulated and stimulated salivary flow rates remained significantly lower in diabetic children compared to controls. No differences were observed in the counts of lactobacilli, mutans streptococci or yeast growth during follow-up, whereas salivary IgA, protein and glucose concentrations were higher in diabetics than in controls throughout the 2-year period. Multivariable linear regression analysis showed that children with higher 2-year DMFS increments were older at baseline and had higher salivary glucose concentrations than children with lower 2-year DMFS increments. Likewise, higher 2-year DMFS increments in diabetics versus controls were associated with greater increments in salivary glucose concentrations in diabetics. Higher increments in active caries lesions in diabetics versus controls were associated with greater increments of dental plaque and greater increments of salivary albumin. Our results suggest that, in addition to dental plaque as a common caries risk factor, diabetes-induced changes in salivary glucose and albumin concentrations are indicative of caries development among diabetics. Copyright 2008 S. Karger AG, Basel.

  3. Enabling Incremental Query Re-Optimization.

    Science.gov (United States)

    Liu, Mengmeng; Ives, Zachary G; Loo, Boon Thau

    2016-01-01

    As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations.
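
    The paper's implementation is declarative, but the core idea can be seen in a toy procedural form: memoize the dynamic-programming plan table and, when a runtime cost estimate changes, invalidate and recompute only the entries it touches. Everything below (including the naive cost model) is an illustrative toy, not the authors' datalog-based system.

        from itertools import combinations

        def rows(S, card):
            p = 1                      # naive size estimate for joining S
            for r in S:
                p *= card[r]
            return p

        def best_plan(S, card, memo):
            """Cheapest join tree for the frozenset S (Selinger-style DP)."""
            if S not in memo:
                if len(S) == 1:
                    (r,) = S
                    memo[S] = (card[r], r)
                else:
                    options = []
                    for k in range(1, len(S)):
                        for left in map(frozenset, combinations(S, k)):
                            lc, lt = best_plan(left, card, memo)
                            rc, rt = best_plan(S - left, card, memo)
                            options.append((lc + rc + rows(S, card), (lt, rt)))
                    memo[S] = min(options, key=lambda o: o[0])
            return memo[S]

        def update_cardinality(r, new_card, card, memo):
            """New statistics arrive: re-plan incrementally by dropping only
            the memoized subplans that involve relation r."""
            card[r] = new_card
            for S in [s for s in memo if r in s]:
                del memo[S]

        card, memo = {"A": 1000, "B": 10, "C": 100}, {}
        print(best_plan(frozenset(card), card, memo))
        update_cardinality("B", 100000, card, memo)    # observed at runtime
        print(best_plan(frozenset(card), card, memo))  # subplans without B reused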

  4. Context-dependent incremental timing cells in the primate hippocampus.

    Science.gov (United States)

    Sakon, John J; Naya, Yuji; Wirth, Sylvia; Suzuki, Wendy A

    2014-12-23

    We examined timing-related signals in primate hippocampal cells as animals performed an object-place (OP) associative learning task. We found hippocampal cells with firing rates that incrementally increased or decreased across the memory delay interval of the task, which we refer to as incremental timing cells (ITCs). Three distinct categories of ITCs were identified. Agnostic ITCs did not distinguish between different trial types. The remaining two categories of cells signaled time and trial context together: One category of cells tracked time depending on the behavioral action required for a correct response (i.e., early vs. late release), whereas the other category of cells tracked time only for those trials cued with a specific OP combination. The context-sensitive ITCs were observed more often during sessions where behavioral learning was observed and exhibited reduced incremental firing on incorrect trials. Thus, single primate hippocampal cells signal information about trial timing, which can be linked with trial type/context in a learning-dependent manner.

  5. Statistics of wind direction and its increments

    International Nuclear Information System (INIS)

    Doorn, Eric van; Dhruva, Brindesh; Sreenivasan, Katepalli R.; Cassella, Victor

    2000-01-01

    We study some elementary statistics of wind direction fluctuations in the atmosphere for a wide range of time scales (10^-4 sec to 1 h), and in both vertical and horizontal planes. In the plane parallel to the ground surface, the direction time series consists of two parts: a constant drift due to large weather systems moving with the mean wind speed, and fluctuations about this drift. The statistics of the direction fluctuations show a rough similarity to Brownian motion but depend, in detail, on the wind speed. This dependence manifests itself quite clearly in the statistics of wind-direction increments over various intervals of time. These increments are intermittent during periods of low wind speeds but Gaussian-like during periods of high wind speeds. (c) 2000 American Institute of Physics

  6. Incremental Beliefs of Ability, Achievement Emotions and Learning of Singapore Students

    Science.gov (United States)

    Luo, Wenshu; Lee, Kerry; Ng, Pak Tee; Ong, Joanne Xiao Wei

    2014-01-01

    This study investigated the relationships of students' incremental beliefs of math ability to their achievement emotions, classroom engagement and math achievement. A sample of 273 secondary students in Singapore were administered measures of incremental beliefs of math ability, math enjoyment, pride, boredom and anxiety, as well as math classroom…

  7. The Cognitive Underpinnings of Incremental Rehearsal

    Science.gov (United States)

    Varma, Sashank; Schleisman, Katrina B.

    2014-01-01

    Incremental rehearsal (IR) is a flashcard technique that has been developed and evaluated by school psychologists. We discuss potential learning and memory effects from cognitive psychology that may explain the observed superiority of IR over other flashcard techniques. First, we propose that IR is a form of "spaced practice" that…

  8. The Dark Side of Malleability: Incremental Theory Promotes Immoral Behaviors

    OpenAIRE

    Huang, Niwen; Zuo, Shijiang; Wang, Fang; Cai, Pan; Wang, Fengxiang

    2017-01-01

    Implicit theories drastically affect an individual’s processing of social information, decision making, and action. The present research focuses on whether individuals who hold the implicit belief that people’s moral character is fixed (entity theorists) and individuals who hold the implicit belief that people’s moral character is malleable (incremental theorists) make different choices when facing a moral decision. Incremental theorists are less likely to make the fundamental attribution err...

  9. Organization Strategy and Structural Differences for Radical Versus Incremental Innovation

    OpenAIRE

    John E. Ettlie; William P. Bridges; Robert D. O'Keefe

    1984-01-01

    The purpose of this study was to test a model of the organizational innovation process that suggests that the strategy-structure causal sequence is differentiated by radical versus incremental innovation. That is, unique strategy and structure will be required for radical innovation, especially process adoption, while more traditional strategy and structure arrangements tend to support new product introduction and incremental process adoption. This differentiated theory is strongly supported ...

  10. Incremental deformation: A literature review

    Directory of Open Access Journals (Sweden)

    Nasulea Daniel

    2017-01-01

    Nowadays customer requirements are permanently changing and, accordingly, the tendency in modern industry is to implement flexible manufacturing processes. In recent decades metal forming has gained the attention of researchers and considerable changes have occurred. Because the conventional metal forming processes are expensive and time-consuming in terms of design and manufacturing preparation for small numbers of parts, manufacturers and researchers have become interested in flexible processes. One of the most investigated flexible processes in metal forming is incremental sheet forming (ISF). ISF is an advanced flexible manufacturing process which allows complex 3D products to be manufactured without expensive dedicated tools. In most cases an ISF process needs only the following: a simple tool, a fixing device for the sheet metal blank, and a universal CNC machine. Using this process, axisymmetric parts can be manufactured, usually on a CNC lathe, but also complex asymmetric parts using CNC milling machines, robots or dedicated equipment. This paper aims to present the current status of incremental sheet forming technologies in terms of process parameters and their influences, wall thickness distribution, springback effect, formability, surface quality and the current main research directions.

  11. Complex Incremental Product Innovation in Established Service Firms: A Micro Institutional Perspective

    NARCIS (Netherlands)

    P.A.M. Vermeulen (Patrick); F.A.J. van den Bosch (Frans); H.W. Volberda (Henk)

    2006-01-01

    textabstractMany product innovation studies have described key determinants that should lead to successful incremental product innovation. Despite numerous studies suggesting how incremental product innovation should be successfully undertaken, many firms still struggle with this type of innovation.

  12. Complex Incremental Product Innovation in Established Service Firms: A Micro Institutional Perspective

    NARCIS (Netherlands)

    P.A.M. Vermeulen (Patrick); F.A.J. van den Bosch (Frans); H.W. Volberda (Henk)

    2007-01-01

    textabstractMany product innovation studies have described key determinants that should lead to successful incremental product innovation. Despite numerous studies suggesting how incremental product innovation should be successfully undertaken, many firms still struggle with this type of innovation.

  13. Teraflop-scale Incremental Machine Learning

    OpenAIRE

    Özkural, Eray

    2011-01-01

    We propose a long-term memory design for artificial general intelligence based on Solomonoff's incremental machine learning methods. We use R5RS Scheme and its standard library with a few omissions as the reference machine. We introduce a Levin Search variant based on Stochastic Context Free Grammar together with four synergistic update algorithms that use the same grammar as a guiding probability distribution of programs. The update algorithms include adjusting production probabilities, re-u...

  14. A Self-Organizing Incremental Neural Network based on local distribution learning.

    Science.gov (United States)

    Xing, Youlu; Shi, Xiaofeng; Shen, Furao; Zhou, Ke; Zhao, Jinxi

    2016-12-01

    In this paper, we propose an unsupervised incremental learning neural network based on local distribution learning, which is called Local Distribution Self-Organizing Incremental Neural Network (LD-SOINN). The LD-SOINN combines the advantages of incremental learning and matrix learning. It can automatically discover suitable nodes to fit the learning data in an incremental way without a priori knowledge such as the structure of the network. The nodes of the network store rich local information regarding the learning data. The adaptive vigilance parameter guarantees that LD-SOINN is able to add new nodes for new knowledge automatically and the number of nodes will not grow unlimitedly. While the learning process continues, nodes that are close to each other and have similar principal components are merged to obtain a concise local representation, which we call a relaxation data representation. A denoising process based on density is designed to reduce the influence of noise. Experiments show that the LD-SOINN performs well on both artificial and real-world data. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Incremental Reconstruction of Urban Environments by Edge-Points Delaunay Triangulation

    OpenAIRE

    Romanoni, Andrea; Matteucci, Matteo

    2016-01-01

    Urban reconstruction from a video captured by a surveying vehicle constitutes a core module of automated mapping. When computational power represents a limited resource and, a detailed map is not the primary goal, the reconstruction can be performed incrementally, from a monocular video, carving a 3D Delaunay triangulation of sparse points; this allows online incremental mapping for tasks such as traversability analysis or obstacle avoidance. To exploit the sharp edges of urban landscape, we ...

  16. Quasi-static incremental behavior of granular materials: Elastic-plastic coupling and micro-scale dissipation

    Science.gov (United States)

    Kuhn, Matthew R.; Daouadji, Ali

    2018-05-01

    The paper addresses a common assumption of elastoplastic modeling: that the recoverable, elastic strain increment is unaffected by alterations of the elastic moduli that accompany loading. This assumption is found to be false for a granular material, and discrete element (DEM) simulations demonstrate that granular materials are coupled materials at both micro- and macro-scales. Elasto-plastic coupling at the macro-scale is placed in the context of the thermomechanics framework of Tomasz Hueckel and Hans Ziegler, in which the elastic moduli are altered by irreversible processes during loading. This complex behavior is explored for multi-directional loading probes that follow an initial monotonic loading. An advanced DEM model is used in the study, with non-convex non-spherical particles and two different contact models: a conventional linear-frictional model and an exact implementation of the Hertz-like Cattaneo-Mindlin model. Orthotropic true-triaxial probes were used in the study (i.e., no direct shear strain), with tiny strain increments of 2×10^-6. At the micro-scale, contact movements were monitored during small increments of loading and load-reversal, and results show that these movements are not reversed by a reversal of strain direction, and some contacts that were sliding during a loading increment continue to slide during reversal. The probes show that the coupled part of a strain increment, the difference between the recoverable (elastic) increment and its reversible part, must be considered when partitioning strain increments into elastic and plastic parts. Small increments of irreversible (and plastic) strain and contact slipping and frictional dissipation occur for all directions of loading, and an elastic domain, if it exists at all, is smaller than the strain increment used in the simulations.

  17. Automatic incrementalization of Prolog based static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan

    2007-01-01

    Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic prog...

  18. A new recursive incremental algorithm for building minimal acyclic deterministic finite automata

    NARCIS (Netherlands)

    Watson, B.W.; Martin-Vide, C.; Mitrana, V.

    2003-01-01

    This chapter presents a new algorithm for incrementally building minimal acyclic deterministic finite automata. Such minimal automata are a compact representation of a finite set of words (e.g. in a spell checker). The incremental aspect of such algorithms (where the intermediate automaton is
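
    The classical non-recursive construction this chapter builds on, for words supplied in sorted order (Daciuk et al., 2000), fits in a short sketch; the recursive variant presented in the chapter removes the sortedness requirement. Names below are illustrative.

        class Node:
            _next_id = 0
            def __init__(self):
                self.id, Node._next_id = Node._next_id, Node._next_id + 1
                self.final, self.edges = False, {}
            def signature(self):
                # hashable key identifying the node's right language
                return (self.final,
                        tuple((c, n.id) for c, n in sorted(self.edges.items())))

        class IncrementalDAWG:
            def __init__(self):
                self.root, self.register = Node(), {}
                self.unchecked, self.prev = [], ""  # suffix not yet minimized

            def insert(self, word):
                assert word > self.prev, "words must arrive in sorted order"
                # length of the common prefix with the previous word
                cp = 0
                while cp < min(len(word), len(self.prev)) and word[cp] == self.prev[cp]:
                    cp += 1
                self._minimize(cp)                  # freeze the old suffix
                node = self.unchecked[-1][2] if self.unchecked else self.root
                for c in word[cp:]:
                    nxt = Node()
                    node.edges[c] = nxt
                    self.unchecked.append((node, c, nxt))
                    node = nxt
                node.final, self.prev = True, word

            def finish(self):
                self._minimize(0)

            def _minimize(self, down_to):
                while len(self.unchecked) > down_to:
                    parent, c, child = self.unchecked.pop()
                    twin = self.register.get(child.signature())
                    if twin is not None:
                        parent.edges[c] = twin      # merge equivalent states
                    else:
                        self.register[child.signature()] = child

            def lookup(self, word):
                node = self.root
                for c in word:
                    if c not in node.edges:
                        return False
                    node = node.edges[c]
                return node.final

        d = IncrementalDAWG()
        for w in ["cat", "cats", "fact", "facts"]:  # sorted input
            d.insert(w)
        d.finish()
        print(d.lookup("facts"), d.lookup("fat"))   # True False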

  19. Atmospheric response to Saharan dust deduced from ECMWF reanalysis (ERA) temperature increments

    Science.gov (United States)

    Kishcha, P.; Alpert, P.; Barkan, J.; Kirchner, I.; Machenhauer, B.

    2003-09-01

    This study focuses on the atmospheric temperature response to dust deduced from a new source of data - the European Reanalysis (ERA) increments. These increments are the systematic errors of global climate models, generated in the reanalysis procedure. The model errors result not only from the lack of desert dust but also from a complex combination of many kinds of model errors. Over the Sahara desert the lack of the dust radiative effect is believed to be a predominant model defect which should significantly affect the increments. This dust effect was examined by considering the correlation between the increments and remotely sensed dust. Comparisons were made between April temporal variations of the ERA analysis increments and the variations of the Total Ozone Mapping Spectrometer aerosol index (AI) between 1979 and 1993. A distinctive structure was identified in the distribution of correlation, composed of three nested areas: a positive correlation area (PCA) with high positive correlation (> 0.5), an area of low correlation, and a negative correlation area (NCA) with high negative correlation (< -0.5). Further analysis based on data from the European Centre for Medium-Range Weather Forecasts (ECMWF) suggests that the PCA (NCA) corresponds mainly to anticyclonic (cyclonic) flow, negative (positive) vorticity and downward (upward) airflow. These findings are associated with the interaction between dust-forced heating/cooling and atmospheric circulation. This paper contributes to a better understanding of dust radiative processes missed in the model.

  20. Global Combat Support System - Army Increment 2 (GCSS-A Inc 2)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report: Global Combat Support System - Army Increment 2 (GCSS-A Inc 2). Defense Acquisition ... Secretary of Defense; PB - President's Budget; RDT&E - Research, Development, Test, and Evaluation; SAE - Service Acquisition Executive; TBD - To Be ... Date Assigned: ... Program Information: Program Name: Global Combat Support System - Army Increment 2 (GCSS-A Inc 2); DoD Component: Army; Responsible ...

  1. The balanced scorecard: an incremental approach model to health care management.

    Science.gov (United States)

    Pineno, Charles J

    2002-01-01

    The balanced scorecard represents a technique used in strategic management to translate an organization's mission and strategy into a comprehensive set of performance measures that provide the framework for implementation of strategic management. This article develops an incremental approach for decision making by formulating a specific balanced scorecard model with an index of nonfinancial as well as financial measures. The incremental approach to costs, including profit contribution analysis and probabilities, allows decisionmakers to assess, for example, how their desire to meet different health care needs will cause changes in service design. This incremental approach to the balanced scorecard may prove to be useful in evaluating the existence of causality relationships between different objective and subjective measures to be included within the balanced scorecard.

  2. Volatilities, traded volumes, and the hypothesis of price increments in derivative securities

    Science.gov (United States)

    Lim, Gyuchang; Kim, SooYong; Scalas, Enrico; Kim, Kyungsik

    2007-08-01

    A detrended fluctuation analysis (DFA) is applied to the statistics of Korean treasury bond (KTB) futures, from which the logarithmic increments, volatilities, and traded volumes are estimated over a specific time lag. In this study, the logarithmic increment of futures prices shows no long-memory property, while the volatility and the traded volume do exhibit long memory. To determine whether the volatility clustering is due to an inherent higher-order correlation not detected by the direct application of the DFA to the logarithmic increments of KTB futures, the original tick data of futures prices are shuffled and a geometric Brownian random walk with the same mean and standard deviation is generated. A comparison of the three tick data sets shows that the higher-order correlation inherent in the logarithmic increments leads to volatility clustering. In particular, the result of the DFA on volatilities and traded volumes supports the hypothesis of price changes.
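
    A minimal sketch of DFA itself, assuming first-order (linear) detrending and an input series much longer than the largest scale; the returned exponent alpha is ~0.5 for uncorrelated increments and >0.5 in the presence of persistent long memory.

        import numpy as np

        def dfa_exponent(x, scales=(4, 8, 16, 32, 64, 128)):
            y = np.cumsum(np.asarray(x) - np.mean(x))    # integrated profile
            F = []
            for s in scales:
                n = len(y) // s
                segments = y[:n * s].reshape(n, s)
                t = np.arange(s)
                # detrend each window by a least-squares line, keep the RMS
                ms = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
                      for seg in segments]
                F.append(np.sqrt(np.mean(ms)))
            # scaling exponent: slope of log F(s) versus log s
            alpha, _ = np.polyfit(np.log(scales), np.log(F), 1)
            return alpha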

  3. Software designs of image processing tasks with incremental refinement of computation.

    Science.gov (United States)

    Anastasia, Davide; Andreopoulos, Yiannis

    2010-08-01

    Software realizations of computationally-demanding image processing tasks (e.g., image transforms and convolution) do not currently provide graceful degradation when their clock-cycle budgets are reduced, e.g., when delay deadlines are imposed in a multitasking environment to meet throughput requirements. This is an important obstacle in the quest for full utilization of modern programmable platforms' capabilities, since worst-case considerations must be in place for reasonable quality of results. In this paper, we propose (and make available online) platform-independent software designs performing bitplane-based computation combined with an incremental packing framework in order to realize block transforms, 2-D convolution and frame-by-frame block matching. The proposed framework realizes incremental computation: progressive processing of input-source increments improves the output quality monotonically. Comparisons with the equivalent nonincremental software realization of each algorithm reveal that, for the same precision of the result, the proposed approach can lead to comparable or faster execution, while it can be arbitrarily terminated and provide the result up to the computed precision. Application examples with region-of-interest based incremental computation, task scheduling per frame, and energy-distortion scalability verify that our proposal provides significant performance scalability with graceful degradation.
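
    The bitplane idea admits a compact illustration (a sketch assuming 8-bit input and SciPy's convolve2d, not the paper's released designs): processing input bitplanes from most to least significant yields a usable partial result after every step, so the computation can be terminated whenever the cycle budget runs out.

        import numpy as np
        from scipy.signal import convolve2d

        def incremental_conv(image_u8, kernel):
            """Yield successively refined results of convolve2d(image, kernel)."""
            out_shape = (image_u8.shape[0] + kernel.shape[0] - 1,
                         image_u8.shape[1] + kernel.shape[1] - 1)
            partial = np.zeros(out_shape)
            for b in range(7, -1, -1):              # MSB -> LSB
                plane = (image_u8 >> b) & 1         # one input bitplane
                partial = partial + (1 << b) * convolve2d(plane, kernel)
                yield partial                       # monotonically improving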

  4. Time-Dependent Behavior of Microvascular Blood Flow and Oxygenation: A Predictor of Functional Outcomes.

    Science.gov (United States)

    Kuliga, Katarzyna Z; Gush, Rodney; Clough, Geraldine F; Chipperfield, Andrew John

    2018-05-01

    This study investigates the time-dependent behavior and algorithmic complexity of low-frequency periodic oscillations in blood flux (BF) and oxygenation signals from the microvasculature. Microvascular BF and oxygenation (OXY: oxyHb, deoxyHb, totalHb, and SO2%) were recorded from 15 healthy young adult males using combined laser Doppler fluximetry and white light spectroscopy, with local skin temperature clamped to 33 °C and during local thermal hyperaemia (LTH) at 43 °C. The power spectral density of the BF and OXY signals was evaluated within the frequency range 0.0095-1.6 Hz. Signal complexity was determined using the Lempel-Ziv (LZ) algorithm. The fold increase during LTH was 15.6 (10.3, 22.8) in BF and 4.8 (3.5, 5.9) in OxyHb (median, range). All BF and OXY signals exhibited multiple oscillatory components, with clear differences in signal power distribution across frequency bands at 33 and 43 °C. A significant reduction in the intrinsic variability and complexity of the microvascular signals during LTH was found, with the mean LZ complexity of BF and OxyHb falling by 25% and 49%, respectively. These results provide corroboration that in human skin, microvascular blood flow and oxygenation are influenced by multiple time-varying oscillators that adapt to local influences and become more predictable during increased haemodynamic flow. Recent evidence strongly suggests that the inability of microvascular networks to adapt to an imposed stressor is symptomatic of disease risk, which might be assessed from BF and OXY using the combination of signal analysis techniques described here.

  5. Nonlinear Recurrent Dynamics and Long-Term Nonstationarities in EEG Alpha Cortical Activity: Implications for Choosing Adequate Segment Length in Nonlinear EEG Analyses.

    Science.gov (United States)

    Cerquera, Alexander; Vollebregt, Madelon A; Arns, Martijn

    2018-03-01

    Nonlinear analysis of EEG recordings allows detection of characteristics that would probably be neglected by linear methods. This study aimed to determine a suitable epoch length for nonlinear analysis of EEG data based on its recurrence rate in EEG alpha activity (electrodes Fz, Oz, and Pz) from 28 healthy subjects and 64 subjects with major depressive disorder. Two nonlinear metrics, Lempel-Ziv complexity and the scaling index, were applied in sliding windows of 20 seconds shifted every 1 second and in nonoverlapping windows of 1 minute. In addition, linear spectral analysis was carried out for comparison with the nonlinear results. The analysis with sliding windows showed that the cortical dynamics underlying alpha activity had a recurrence period of around 40 seconds in both groups. In the analysis with nonoverlapping windows, long-term nonstationarities entailed changes over time in the nonlinear dynamics that became significantly different between epochs across time, which was not detected with the linear spectral analysis. The findings suggest that epoch lengths shorter than 40 seconds neglect information in EEG nonlinear studies. In turn, linear analysis did not detect characteristics of long-term nonstationarities in the EEG alpha waves of either control subjects or patients with major depressive disorder. We recommend that the application of nonlinear metrics to EEG time series, particularly of alpha activity, be carried out with epochs of around 60 seconds. In addition, this study aimed to demonstrate that long-term nonlinearities are inherent to cortical brain dynamics regardless of the presence or absence of a mental disorder.

  6. Electroencephalography signatures of attention-deficit/hyperactivity disorder: clinical utility

    Directory of Open Access Journals (Sweden)

    Alba G

    2015-10-01

    Guzmán Alba,1 Ernesto Pereda,2 Soledad Mañas,3 Leopoldo D Méndez,3 Almudena González,1 Julián J González1 1Physiology Unit, Health Sciences Faculty (S Medicine), 2Department of Industrial Engineering, School of Engineering and Technology, University of La Laguna, 3Clinical Neurophysiology Unit, University Hospital La Candelaria, Tenerife, Spain Abstract: The techniques and the most important results on the use of electroencephalography (EEG) to extract different measures are reviewed in this work, which can be clinically useful to study subjects with attention-deficit/hyperactivity disorder (ADHD). First, we discuss briefly and in simple terms the EEG analysis and processing techniques most used in the context of ADHD. We review techniques that both analyze individual EEG channels (univariate measures) and study the statistical interdependence between different EEG channels (multivariate measures), the so-called functional brain connectivity. Among the former, we review the classical indices of absolute and relative spectral power and estimations of the complexity of the channels, such as the approximate entropy and the Lempel-Ziv complexity. Among the latter, we focus on the magnitude squared coherence and on different measures based on the concept of generalized synchronization and its estimation in the state space. Second, from a historical point of view, we present the most important results achieved with these techniques and their clinical utility (sensitivity, specificity, and accuracy) to diagnose ADHD. Finally, we propose future research lines based on these results. Keywords: EEG, ADHD, power spectrum, functional connectivity, clinical assessment

  7. THE EFFECTS OF BANKRUPTCY ON THE PREDICTABILITY OF PRICE FORMATION PROCESSES ON WARSAW’S STOCK MARKET

    Directory of Open Access Journals (Sweden)

    Paweł Fiedor

    2016-07-01

    In this study we investigate how bankruptcy affects the market behaviour of stock prices on the Warsaw Stock Exchange. As the behaviour of prices can be seen in a myriad of ways, we investigate a particular aspect of this behaviour, namely the predictability of the price formation processes. We approximate their predictability as the structural complexity of logarithmic returns. This method of analysing the predictability of price formation processes using information theory follows closely the mathematical definition of predictability, and is equal to the degree to which redundancy is present in the time series describing stock returns. We use Shannon's entropy rate (approximating the Kolmogorov-Sinai entropy) to measure this redundancy, and estimate it using the Lempel-Ziv algorithm, computing it with a running-window approach over the entire price history of 50 companies listed on the Warsaw market which have gone bankrupt in the last few years. This enables us not only to compare the differences between the predictability of price formation processes before and after filing for bankruptcy, but also to compare the changes in predictability over time, as well as across different categories of companies and bankruptcies. There exists a large body of research analysing the efficiency of the whole market and the predictability of price changes at large, but only a few detailed studies analysing the influence of external stimuli on the efficiency of price formation processes. This study fills this gap in the knowledge of financial markets and their response to extreme external events.
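
    A sketch of the running-window estimator described above (my construction following the abstract, not the authors' code): returns are binarized and the LZ76 factor count c(n) is normalized as c(n) log2(n) / n, which converges to the entropy rate for ergodic sources; lower values mean more redundancy, i.e. more predictable price formation.

    ```python
    import numpy as np

    def lz76(s: str) -> int:
        i, c, n = 0, 0, len(s)
        while i < n:
            l = 1
            while i + l <= n and s[i:i + l] in s[:i + l - 1]:
                l += 1
            c, i = c + 1, i + l
        return c

    def entropy_rate_series(returns, window=500, step=100):
        """Normalized LZ estimate per running window, in bits per symbol."""
        bits = ''.join('1' if r > np.median(returns) else '0' for r in returns)
        return [lz76(bits[t:t + window]) * np.log2(window) / window
                for t in range(0, len(bits) - window + 1, step)]

    # An i.i.d. series should estimate close to 1 bit per symbol.
    print(entropy_rate_series(np.random.default_rng(2).normal(size=2000))[:3])
    ```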

  8. Use of a machine learning algorithm to classify expertise: analysis of hand motion patterns during a simulated surgical task.

    Science.gov (United States)

    Watson, Robert A

    2014-08-01

    To test the hypothesis that machine learning algorithms increase the predictive power to classify surgical expertise using surgeons' hand motion patterns. In 2012 at the University of North Carolina at Chapel Hill, 14 surgical attendings and 10 first- and second-year surgical residents each performed two bench model venous anastomoses. During the simulated tasks, the participants wore an inertial measurement unit on the dorsum of their dominant (right) hand to capture their hand motion patterns. The pattern from each bench model task performed was preprocessed into a symbolic time series and labeled as expert (attending) or novice (resident). The labeled hand motion patterns were processed and used to train a Support Vector Machine (SVM) classification algorithm. The trained algorithm was then tested for discriminative/predictive power against unlabeled (blinded) hand motion patterns from tasks not used in the training. The Lempel-Ziv (LZ) complexity metric was also measured from each hand motion pattern, with an optimal threshold calculated to separately classify the patterns. The LZ metric classified unlabeled (blinded) hand motion patterns into expert and novice groups with an accuracy of 70% (sensitivity 64%, specificity 80%). The SVM algorithm had an accuracy of 83% (sensitivity 86%, specificity 80%). The results confirmed the hypothesis. The SVM algorithm increased the predictive power to classify blinded surgical hand motion patterns into expert versus novice groups. With further development, the system used in this study could become a viable tool for low-cost, objective assessment of procedural proficiency in a competency-based curriculum.
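
    A hedged sketch of the comparison just described (mock features, not the authors' data or pipeline): a kernel SVM trained on features of the symbolized motion series, evaluated by cross-validation; the LZ-threshold classifier would instead cut on a single complexity value per pattern.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(48, 16))      # 48 trials x 16 motion features (mock data)
    y = rng.integers(0, 2, size=48)    # 1 = expert (attending), 0 = novice

    clf = SVC(kernel="rbf", C=1.0)     # kernel SVM classifier
    acc = cross_val_score(clf, X, y, cv=4).mean()
    print(f"cross-validated accuracy: {acc:.2f}")   # near chance on random data
    ```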

  9. Nonlinear analysis of EEGs of patients with major depression during different emotional states.

    Science.gov (United States)

    Akdemir Akar, Saime; Kara, Sadık; Agambayev, Sümeyra; Bilgiç, Vedat

    2015-12-01

    Although patients with major depressive disorder (MDD) have dysfunctions in cognitive behaviors and the regulation of emotions, the underlying brain dynamics of the pathophysiology are unclear. Therefore, nonlinear techniques can be used to understand the dynamic behavior of the EEG signals of MDD patients. To investigate and clarify the dynamics of MDD patients' brains during different emotional states, EEG recordings were analyzed using nonlinear techniques. The purpose of the present study was to assess whether there are different EEG complexities that discriminate between MDD patients and healthy controls during emotional processing. Therefore, nonlinear parameters, such as the Katz fractal dimension (KFD), Higuchi fractal dimension (HFD), Shannon entropy (ShEn), Lempel-Ziv complexity (LZC) and Kolmogorov complexity (KC), were computed from the EEG signals of the two groups under different experimental states: noise (negative emotional content) and music (positive emotional content) periods. First, higher complexity values were generated by MDD patients relative to controls, with significant differences obtained in the frontal and parietal scalp locations using the KFD. The patients' emotional bias was demonstrated by their higher brain complexities during the noise period than during the music stimulus. Additionally, we found that the KFD, HFD and LZC values were more sensitive in discriminating between patients and controls than the ShEn and KC measures, according to the results of ANOVA and ROC calculations. It can be concluded that nonlinear analysis may be a useful and discriminative tool in investigating the neuro-dynamic properties of the brain in patients with MDD during emotional stimulation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Evidence combination for incremental decision-making processes

    NARCIS (Netherlands)

    Berrada, Ghita; van Keulen, Maurice; de Keijzer, Ander

    The establishment of a medical diagnosis is an incremental process highly fraught with uncertainty. At each step of this painstaking process, it may be beneficial to be able to quantify the uncertainty linked to the diagnosis and steadily update the uncertainty estimation using available sources of

  11. Incremental Nonnegative Matrix Factorization for Face Recognition

    Directory of Open Access Journals (Sweden)

    Wen-Sheng Chen

    2008-01-01

    Nonnegative matrix factorization (NMF) is a promising approach for local feature extraction in face recognition tasks. However, there are two major drawbacks in almost all existing NMF-based methods. One shortcoming is that the computational cost is expensive for large matrix decomposition. The other is that repetitive learning must be conducted whenever the training samples or classes are updated. To overcome these two limitations, this paper proposes a novel incremental nonnegative matrix factorization (INMF) for face representation and recognition. The proposed INMF approach is based on a novel constraint criterion and our previous block strategy. It thus has some good properties, such as low computational complexity and a sparse coefficient matrix. Also, the coefficient column vectors between different classes are orthogonal. In particular, it can be applied to incremental learning. Two face databases, namely the FERET and CMU PIE face databases, are selected for evaluation. Compared with PCA and some state-of-the-art NMF-based methods, our INMF approach gives the best performance.
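
    For background, a minimal sketch of the batch NMF baseline that INMF extends (classical Lee-Seung multiplicative updates; the incremental variant itself is not reproduced here): V is approximated as WH with all factors nonnegative.

    ```python
    import numpy as np

    def nmf(V, r, iters=200, eps=1e-9):
        """Batch NMF via multiplicative updates for the Frobenius objective."""
        m, n = V.shape
        rng = np.random.default_rng(0)
        W, H = rng.random((m, r)), rng.random((r, n))
        for _ in range(iters):
            H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficients
            W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis images
        return W, H

    V = np.abs(np.random.default_rng(1).normal(size=(100, 40)))  # mock face data
    W, H = nmf(V, r=10)
    print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))          # relative error
    ```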

  12. Apparatus for electrical-assisted incremental forming and process thereof

    Science.gov (United States)

    Roth, John; Cao, Jian

    2018-04-24

    A process and apparatus for forming a sheet metal component using an electric current passing through the component. The process can include providing an incremental forming machine, the machine having at least one arcuate-tipped tool and at least one electrode spaced a predetermined distance from the arcuate-tipped tool. The machine is operable to perform a plurality of incremental deformations on the sheet metal component using the arcuate-tipped tool. The machine is also operable to apply a direct electric current through the electrode into the sheet metal component at the predetermined distance from the arcuate-tipped tool while the machine is forming the sheet metal component.

  13. Existing School Buildings: Incremental Seismic Retrofit Opportunities.

    Science.gov (United States)

    Federal Emergency Management Agency, Washington, DC.

    The intent of this document is to provide technical guidance to school district facility managers for linking specific incremental seismic retrofit opportunities to specific maintenance and capital improvement projects. The linkages are based on logical affinities, such as technical fit, location of the work within the building, cost saving…

  14. Demonstration/Validation of Incremental Sampling at Two Diverse Military Ranges and Development of an Incremental Sampling Tool

    Science.gov (United States)

    2010-06-01

    Sampling (MIS)? • Technique of combining many increments of soil from a number of points within an exposure area • Developed by Enviro Stat (Trademarked)...Demonstrating a reliable soil sampling strategy to accurately characterize contaminant concentrations in spatially extreme and heterogeneous...into a set of decision (exposure) units • One or several discrete or small-scale composite soil samples collected to represent each decision unit

  15. Ethical leadership: meta-analytic evidence of criterion-related and incremental validity.

    Science.gov (United States)

    Ng, Thomas W H; Feldman, Daniel C

    2015-05-01

    This study examines the criterion-related and incremental validity of ethical leadership (EL) with meta-analytic data. Across 101 samples published over the last 15 years (N = 29,620), we observed that EL demonstrated acceptable criterion-related validity with variables that tap followers' job attitudes, job performance, and evaluations of their leaders. Further, followers' trust in the leader mediated the relationships of EL with job attitudes and performance. In terms of incremental validity, we found that EL significantly, albeit weakly in some cases, predicted task performance, citizenship behavior, and counterproductive work behavior, even after controlling for the effects of such variables as transformational leadership, use of contingent rewards, management by exception, interactional fairness, and destructive leadership. The article concludes with a discussion of ways to strengthen the incremental validity of EL. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  16. Parsing pyrogenic polycyclic aromatic hydrocarbons: forensic chemistry, receptor models, and source control policy.

    Science.gov (United States)

    O'Reilly, Kirk T; Pietari, Jaana; Boehm, Paul D

    2014-04-01

    A realistic understanding of contaminant sources is required to set appropriate control policy. Forensic chemical methods can be powerful tools in source characterization and identification, but they require a multiple-lines-of-evidence approach. Atmospheric receptor models, such as the US Environmental Protection Agency (USEPA)'s chemical mass balance (CMB), are increasingly being used to evaluate sources of pyrogenic polycyclic aromatic hydrocarbons (PAHs) in sediments. This paper describes the assumptions underlying receptor models and discusses challenges in complying with these assumptions in practice. Given the variability within, and the similarity among, pyrogenic PAH source types, model outputs are sensitive to specific inputs, and parsing among some source types may not be possible. Although still useful for identifying potential sources, the technical specialist applying these methods must describe both the results and their inherent uncertainties in a way that is understandable to nontechnical policy makers. The authors present an example case study concerning an investigation of a class of parking-lot sealers as a significant source of PAHs in urban sediment. Principal component analysis is used to evaluate published CMB model inputs and outputs. Targeted analyses of 2 areas where bans have been implemented are included. The results do not support the claim that parking-lot sealers are a significant source of PAHs in urban sediments. © 2013 SETAC.

  17. Incremental projection approach of regularization for inverse problems

    Energy Technology Data Exchange (ETDEWEB)

    Souopgui, Innocent, E-mail: innocent.souopgui@usm.edu [The University of Southern Mississippi, Department of Marine Science (United States); Ngodock, Hans E., E-mail: hans.ngodock@nrlssc.navy.mil [Naval Research Laboratory (United States); Vidard, Arthur, E-mail: arthur.vidard@imag.fr; Le Dimet, François-Xavier, E-mail: ledimet@imag.fr [Laboratoire Jean Kuntzmann (France)

    2016-10-15

    This paper presents an alternative approach to the regularized least squares solution of ill-posed inverse problems. Instead of solving a minimization problem with an objective function composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in place of the regularization. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method compared with regularization methods. For the presented test case, the incremental projection method uses 7 times less computation time than the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.
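
    A schematic sketch of the idea (my construction under stated assumptions, not the authors' method): each gradient iterate is projected onto a convex set of "regularized" candidates, with a placeholder box projector standing in for the problem-specific regularization subspace.

    ```python
    import numpy as np

    def projected_gradient(grad, project, x0, step=0.1, iters=200):
        """Gradient step followed by projection onto the convex candidate set."""
        x = x0
        for _ in range(iters):
            x = project(x - step * grad(x))
        return x

    # Example: least squares for A x = b, projecting iterates onto the box [0, 1]^2.
    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    x = projected_gradient(lambda x: A.T @ (A @ x - b),
                           lambda x: np.clip(x, 0.0, 1.0),
                           np.zeros(2))
    print(x)
    ```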

  18. Incremental Frequent Subgraph Mining on Large Evolving Graphs

    KAUST Repository

    Abdelhamid, Ehab; Canim, Mustafa; Sadoghi, Mohammad; Bhatta, Bishwaranjan; Chang, Yuan-Chi; Kalnis, Panos

    2017-01-01

    , such as social networks, utilize large evolving graphs. Mining these graphs using existing techniques is infeasible, due to the high computational cost. In this paper, we propose IncGM+, a fast incremental approach for continuous frequent subgraph mining problem

  19. Parallel Algorithm for Incremental Betweenness Centrality on Large Graphs

    KAUST Repository

    Jamour, Fuad Tarek

    2017-10-17

    Betweenness centrality quantifies the importance of nodes in a graph in many applications, including network analysis, community detection and identification of influential users. Typically, graphs in such applications evolve over time. Thus, the computation of betweenness centrality should be performed incrementally. This is challenging because updating even a single edge may trigger the computation of all-pairs shortest paths in the entire graph. Existing approaches cannot scale to large graphs: they either require excessive memory (i.e., quadratic in the size of the input graph) or perform unnecessary computations, rendering them prohibitively slow. We propose iCentral, a novel incremental algorithm for computing betweenness centrality in evolving graphs. We decompose the graph into biconnected components and prove that processing can be localized within the affected components. iCentral is the first algorithm to support incremental betweenness centrality computation within a graph component. This is done efficiently, in linear space; consequently, iCentral scales to large graphs. We demonstrate with real datasets that the serial implementation of iCentral is up to 3.7 times faster than existing serial methods. Our parallel implementation, which scales to large graphs, is an order of magnitude faster than the state-of-the-art parallel algorithm, while using an order of magnitude less computational resources.
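
    A minimal sketch (not the iCentral algorithm itself) of the two ingredients it builds on, using networkx as an assumed dependency: Brandes-style betweenness centrality and the biconnected-component decomposition that localizes where an edge update can matter.

    ```python
    import networkx as nx

    G = nx.karate_club_graph()
    bc_before = nx.betweenness_centrality(G)     # Brandes' algorithm

    # Insert an edge and locate the biconnected component containing it;
    # iCentral proves recomputation can be localized to this component.
    u, v = 0, 9
    G.add_edge(u, v)
    affected = next(c for c in nx.biconnected_components(G) if u in c and v in c)
    print(f"affected component: {len(affected)} of {G.number_of_nodes()} nodes")

    # Non-incremental baseline: recompute everything from scratch.
    bc_after = nx.betweenness_centrality(G)
    ```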

  20. Estimation of incremental reactivities for multiple-day scenarios: an application to ethane and dimethoxymethane

    Science.gov (United States)

    Stockwell, William R.; Geiger, Harald; Becker, Karl H.

    Single-day scenarios are used to calculate incremental reactivities by definition (Carter, J. Air Waste Management Assoc. 44 (1994) 881-899), but even unreactive organic compounds may have a non-negligible effect on ozone concentrations if multiple-day scenarios are considered. The concentrations of unreactive compounds and their products may build up over a multiple-day period, and the oxidation products may be highly reactive or highly unreactive, affecting the overall incremental reactivity of the organic compound. We have developed a method for calculating incremental reactivities over multiple days based on a standard scenario for polluted European conditions. This method was used to estimate maximum incremental reactivities (MIR) and maximum ozone incremental reactivities (MOIR) for ethane and dimethoxymethane for scenarios ranging from 1 to 6 days. It was found that the incremental reactivities increased as the length of the simulation period increased. The MIR of ethane increased faster than the value for dimethoxymethane as the scenarios became longer. The MOIRs of ethane and dimethoxymethane increased, but the change was more modest for scenarios longer than 3 days. The MOIRs of both volatile organic compounds were equal within the uncertainties of their chemical mechanisms by the 5-day scenario. These results show that dimethoxymethane has an ozone-forming potential on a per-mass basis that is only somewhat greater than that of ethane if multiple-day scenarios are considered.
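
    For reference, the incremental reactivity referenced above is defined, per scenario, as the sensitivity of peak ozone to the emissions of the test compound (notation mine, following Carter's framework):

    ```latex
    \mathrm{IR}_i \;=\; \lim_{\Delta E_i \to 0} \frac{\Delta [\mathrm{O}_3]}{\Delta E_i}
    \quad \text{(grams of } \mathrm{O}_3 \text{ per gram of VOC } i\text{)}
    ```

    MIR evaluates this quantity under NOx conditions that maximize the reactivity itself, while MOIR uses the NOx level that maximizes the ozone yield; the abstract's point is that the scenario length enters this finite difference and changes both measures.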

  1. [Spatiotemporal variation of Populus euphratica's radial increment at lower reaches of Tarim River after ecological water transfer].

    Science.gov (United States)

    An, Hong-Yan; Xu, Hai-Liang; Ye, Mao; Yu, Pu-Ji; Gong, Jun-Jun

    2011-01-01

    Taking the Populus euphratica at the lower reaches of the Tarim River as the test object, and using dendrohydrology methods, this paper studied the spatiotemporal variation of P. euphratica's branch radial increment after ecological water transfer. There was a significant difference in the mean radial increment before and after the ecological water transfer. The radial increment after the eco-water transfer was increased by 125%, compared with that before the water transfer. During the period of ecological water transfer, the radial increment increased with increasing water transfer quantity, and there was a positive correlation between the annual radial increment and the total water transfer quantity (R² = 0.394), suggesting that the radial increment of P. euphratica could be taken as a performance indicator of ecological water transfer. After the ecological water transfer, the radial increment changed greatly with the distance to the river, i.e., it decreased significantly with increasing distance to the river (P = 0.007). P. euphratica's branch radial increment also differed with stream segment (P = 0.017), i.e., the closer to the head-water point (Daxihaizi Reservoir), the greater the branch radial increment. It was considered that the limited effect of the current ecological water transfer could scarcely change the continually deteriorating situation of the lower reaches of the Tarim River.

  2. [Incremental cost effectiveness of multifocal cataract surgery].

    Science.gov (United States)

    Pagel, N; Dick, H B; Krummenauer, F

    2007-02-01

    Supplementation of cataract patients with multifocal intraocular lenses involves an additional financial investment when compared to the corresponding monofocal supplementation, which usually is not funded by German health care insurers. In the context of recent resource allocation discussions, however, the cost effectiveness of multifocal cataract surgery could become an important rationale. Therefore an evidence-based estimation of its cost effectiveness was carried out. Three independent meta-analyses were implemented to estimate the gain in uncorrected near visual acuity and best corrected visual acuity (vision lines) as well as the predictability (fraction of patients without need for reading aids) of multifocal supplementation. Study reports published between 1995 and 2004 (English or German language) were screened for appropriate key words. Meta effects in visual gain and predictability were estimated by means and standard deviations of the reported effect measures. Cost data were estimated from German DRG rates and individual lens costs; the cost effectiveness of multifocal cataract surgery was then computed in terms of its marginal cost effectiveness ratio (MCER) for each clinical benefit endpoint; the incremental costs of multifocal versus monofocal cataract surgery were further estimated by means of their respective incremental cost effectiveness ratio (ICER). An independent meta-analysis estimated the complication profiles to be expected after monofocal and multifocal cataract surgery in order to evaluate expectable complication-associated additional costs of both procedures; the marginal and incremental cost effectiveness estimates were adjusted accordingly. A sensitivity analysis comprised cost variations of ± 10% and utility variations alongside the 95% confidence intervals of the meta effect estimates. Total direct costs from the health care insurer's perspective were estimated at 3363 euro, associated with a visual meta benefit in best corrected visual
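
    For readers unfamiliar with the two ratios, the standard definitions behind the abstract's MCER and ICER are (notation mine):

    ```latex
    \mathrm{MCER} \;=\; \frac{C_{\text{multifocal}}}{E_{\text{multifocal}}},
    \qquad
    \mathrm{ICER} \;=\; \frac{C_{\text{multifocal}} - C_{\text{monofocal}}}
                             {E_{\text{multifocal}} - E_{\text{monofocal}}}
    ```

    with C the total direct costs and E the clinical benefit endpoint (e.g., vision lines gained); the ICER prices each additional unit of benefit obtained by choosing the multifocal over the monofocal procedure.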

  3. Lactate and ammonia concentration in blood and sweat during incremental cycle ergometer exercise

    NARCIS (Netherlands)

    Ament, W; Huizenga, [No Value; Mook, GA; Gips, CH; Verkerke, GJ

    It is known that the concentrations of ammonia and lactate in blood increase during incremental exercise. Sweat also contains lactate and ammonia. The aim of the present study was to investigate the physiological response of lactate and ammonia in plasma and sweat during a stepwise incremental cycle

  4. Potential stocks and increments of woody biomass in the European Union under different management and climate scenarios.

    Science.gov (United States)

    Kindermann, Georg E; Schörghuber, Stefan; Linkosalo, Tapio; Sanchez, Anabel; Rammer, Werner; Seidl, Rupert; Lexer, Manfred J

    2013-02-01

    Forests play an important role in the global carbon flow. They can store carbon and can also provide wood, which can substitute for other materials. In the EU27 the standing biomass is steadily increasing. Increments and harvests seem to have reached a plateau between 2005 and 2010. One reason for this plateau is that the forests are getting older. High ages have the advantage of typically high carbon concentrations and the disadvantage of decreasing increment rates. It was investigated how biomass stocks, harvests and increments will develop under different climate scenarios and two management scenarios, where one forces forests to store high biomass amounts and the other aims at high increment rates and much harvested wood. A management which maximises standing biomass will raise the stem wood carbon stocks from 30 tC/ha to 50 tC/ha until 2100. A management which maximises increments will lower the stock to 20 tC/ha until 2100. The estimates for the climate scenarios A1b, B1 and E1 differ, but the management target has a much larger effect than the climate scenario. By maximising increments, the harvests are 0.4 tC/ha/year higher than under the management which maximises standing biomass. The increments until 2040 are close together, but around 2100 the increments when maximising standing biomass are approximately 50% lower than those when maximising increments. Cold regions will benefit from the climate changes in these scenarios by showing higher increments. The results of this study suggest that forest management should maximise increments, not stocks, to be more efficient in the sense of climate change mitigation. This is true especially for regions which already have high carbon stocks in forests, which is the case in many regions in Europe. During the time span 2010-2100 the forests of the EU27 will absorb an additional 1750 million tC if they are managed to maximise increments compared

  5. PCA/INCREMENT MEMORY interface for analog processors on-line with PC-XT/AT IBM

    International Nuclear Information System (INIS)

    Biri, S.; Buttsev, V.S.; Molnar, J.; Samojlov, V.N.

    1989-01-01

    The functional and operational descriptions of the PCA/INCREMENT MEMORY interface are discussed. The unit solves the following: connection between the analogue signal processor and the PC, and nuclear spectrum acquisition of up to 2²⁴-1 counts/channel using an increment or decrement method, with data read/write from or to memory via the PC data bus during spectrum acquisition. The dual-ported memory organization is 4096 × 24 bit; the increment cycle time at a 4.77 MHz system clock frequency is 1.05 μs. 6 refs.; 2 figs

  6. Playing by the rules? Analysing incremental urban developments

    NARCIS (Netherlands)

    Karnenbeek, van Lilian; Janssen-Jansen, Leonie

    2018-01-01

    Current urban developments are often considered outdated and static, and the argument follows that they should become more adaptive. In this paper, we argue that existing urban developments are already adaptive and incremental. Given this flexibility in urban development, understanding changes in the

  7. A program for the numerical control of a pulse increment system

    Energy Technology Data Exchange (ETDEWEB)

    Gray, D.C.

    1963-08-21

    This report will describe the important features of the development of magnetic tapes for the numerical control of a pulse-increment system consisting of a modified Gorton lathe and its associated control unit developed by L. E. Foley of Equipment Development Service, Engineering Services, General Electric Co., Schenectady, N.Y. Included is a description of CUPID (Control and Utilization of Pulse Increment Devices), a FORTRAN program for the design of these tapes on the IBM 7090 computer, and instructions for its operation.

  8. BMI and BMI SDS in childhood: annual increments and conditional change

    OpenAIRE

    Brannsether-Ellingsen, Bente; Eide, Geir Egil; Roelants, Mathieu; Bjerknes, Robert; Juliusson, Petur Benedikt

    2016-01-01

    Background: Early detection of abnormal weight gain in childhood may be important for preventive purposes. It is still debated which annual changes in BMI should warrant attention. Aim: To analyse 1-year increments of Body Mass Index (BMI) and standardised BMI (BMI SDS) in childhood and explore conditional change in BMI SDS as an alternative method to evaluate 1-year changes in BMI. Subjects and methods: The distributions of 1-year increments of BMI (kg/m²) and BMI SDS are summarised by...

  9. Decoupled Simulation Method For Incremental Sheet Metal Forming

    International Nuclear Information System (INIS)

    Sebastiani, G.; Brosius, A.; Tekkaya, A. E.; Homberg, W.; Kleiner, M.

    2007-01-01

    Within the scope of this article, a decoupling algorithm to reduce computing time in finite element analyses of incremental forming processes is investigated. Based on the given position of the small forming zone, the presented algorithm aims at separating a finite element model into an elastic and an elasto-plastic deformation zone. By including the elastic response of the structure by means of model simplifications, the costly iteration in the elasto-plastic zone can be restricted to the small forming zone and to a few supporting elements in order to reduce computation time. Since the forming zone moves along the specimen, an update of both the forming zone (with its elastic boundary) and the supporting structure is needed after several increments. The paper discusses the algorithmic implementation of the approach and introduces several strategies to implement the denoted elastic boundary condition at the boundary of the plastic forming zone.

  10. Automobile sheet metal part production with incremental sheet forming

    Directory of Open Access Journals (Sweden)

    İsmail DURGUN

    2016-02-01

    Nowadays, the effects of global warming are increasing drastically, leading to increased interest in energy efficiency and sustainable production methods. As a result, national and international project platforms, OEMs (Original Equipment Manufacturers) and SMEs (Small and Mid-size Manufacturers) perform many studies or improve existing methodologies within the scope of advanced manufacturing techniques. In this study, the advanced manufacturing and sustainable production method "Incremental Sheet Metal Forming (ISF)" was used for the sheet metal forming process. A vehicle fender was manufactured with and without a die by using different toolpath strategies and die sets. At the end of the study, the results were investigated under the influence of the method and parameters used. Keywords: incremental sheet metal forming, metal forming

  11. Conservation of wildlife populations: factoring in incremental disturbance.

    Science.gov (United States)

    Stewart, Abbie; Komers, Petr E

    2017-06-01

    Progressive anthropogenic disturbance can alter ecosystem organization, potentially causing shifts from one stable state to another. This potential for ecosystem shifts must be considered when establishing targets and objectives for conservation. We ask whether a predator-prey system's response to incremental anthropogenic disturbance might shift along a disturbance gradient and, if it does, whether any disturbance thresholds are evident for this system. Development of linear corridors in forested areas increases wolf predation effectiveness, while a high density of development provides a safe haven for their prey. If wolves limit moose population growth, then wolves and moose should respond inversely to land cover disturbance. Using general linear model analysis, we test how the rate of change in moose (Alces alces) density and wolf (Canis lupus) harvest density are influenced by the rate of change in land cover and the proportion of land cover disturbed within a 300,000 km² area in the boreal forest of Alberta, Canada. Using logistic regression, we test how the direction of change in moose density is influenced by measures of land cover change. In response to incremental land cover disturbance, moose declines occurred where 43% of land cover was disturbed and wolf density declined. Wolves and moose appeared to respond inversely to incremental disturbance, with the balance between moose decline and wolf increase shifting at about 43% of land cover disturbed. Conservation decisions require quantification of disturbance rates and their relationships to predator-prey systems because ecosystem responses to anthropogenic disturbance shift across disturbance gradients.

  12. Numerical simulation of pseudoelastic shape memory alloys using the large time increment method

    Science.gov (United States)

    Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad

    2017-04-01

    The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process presents little or no hardening. In these situations, a slight stress variation in a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially for the entire loading path. The achieved solution must satisfy the conditions of static and kinematic admissibility and consistency simultaneously after several iterations. A 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation with the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted with those obtained using step-by-step incremental integration.

  13. Incremental net social benefit associated with using nuclear-fueled power plants

    International Nuclear Information System (INIS)

    Maoz, I.

    1976-12-01

    The incremental net social benefit (INSB) resulting from nuclear-fueled, rather than coal-fired, electric power generation is assessed. The INSB is defined as the difference between the 'incremental social benefit' (ISB), caused by the cheaper technology of electric power generation, and the 'incremental social cost' (ISC), associated with the increased power production induced by the cheaper technology. Section 2 focuses on the theoretical and empirical problems associated with the assessment of the long-run price elasticity of the demand for electricity, and the theoretical-econometric considerations that lead to reasonable estimates of the price elasticities of demand among those provided by recent empirical studies. Section 3 covers the theoretical and empirical difficulties associated with the construction of the long-run social marginal cost curves (LRSMC) of electricity. Sections 4 and 5 discuss the assessment methodology and provide numerical examples for the calculation of the INSB resulting from nuclear-fueled power generation

  14. On the validity of the incremental approach to estimate the impact of cities on air quality

    Science.gov (United States)

    Thunis, Philippe

    2018-01-01

    The question of how much cities are the sources of their own air pollution is not only theoretical, as it is critical to the design of effective strategies for urban air quality planning. In this work, we assess the validity of the commonly used incremental approach to estimate the likely impact of cities on their air pollution. With the incremental approach, the city impact (i.e. the concentration change generated by the city emissions) is estimated as the concentration difference between a rural background and an urban background location, also known as the urban increment. We show that the city impact is in reality made up of the urban increment and two additional components, and consequently two assumptions need to be fulfilled for the urban increment to be representative of the urban impact. The first assumption is that the rural background location is not influenced by emissions from within the city, whereas the second requires that background concentration levels, obtained with zero city emissions, are equal at both locations. Because the urban impact is not measurable, the SHERPA modelling approach, based on a full air quality modelling system, is used in this work to assess the validity of these assumptions for some European cities. Results indicate that for PM2.5 these two assumptions are far from being fulfilled for many large or medium cities. For cities of this type, urban increments largely underestimate city impacts. Although results are in better agreement for NO2, similar issues arise. In many situations the incremental approach is therefore not an adequate estimate of the urban impact on air pollution. This poses issues of interpretation when these increments are used to define strategic options for air quality planning. We finally illustrate the value of comparing modelled and measured increments to improve confidence in the model results.
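
    The decomposition described above can be written explicitly (notation mine): let C_u and C_r denote concentrations at the urban and rural background locations, and a superscript 0 denote the hypothetical levels obtained with zero city emissions. Then

    ```latex
    \underbrace{C_u - C_u^0}_{\text{city impact}}
    \;=\;
    \underbrace{(C_u - C_r)}_{\text{urban increment}}
    \;+\;
    \underbrace{(C_r - C_r^0)}_{\text{city influence at the rural site}}
    \;+\;
    \underbrace{(C_r^0 - C_u^0)}_{\text{difference in no-city backgrounds}}
    ```

    The identity makes the two assumptions concrete: the second term vanishes when the rural site is unaffected by city emissions, and the third when the no-city background levels are equal at both locations; only then does the urban increment equal the city impact.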

  15. Bipower variation for Gaussian processes with stationary increments

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Corcuera, José Manuel; Podolskij, Mark

    2009-01-01

    Convergence in probability and central limit laws of bipower variation for Gaussian processes with stationary increments and for integrals with respect to such processes are derived. The main tools of the proofs are some recent powerful techniques of Wiener/Itô/Malliavin calculus for establishing...

  16. The National Institute of Education and Incremental Budgeting.

    Science.gov (United States)

    Hastings, Anne H.

    1979-01-01

    The National Institute of Education's (NIE) history demonstrates that the relevant criteria for characterizing budgeting as incremental are not the predictability and stability of appropriations but the conditions of complexity, limited information, multiple factors, and imperfect agreement on ends; NIE's appropriations were dominated by political…

  17. Incremental Support Vector Machine Framework for Visual Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yuichi Motai

    2007-01-01

    Motivated by the emerging requirements of surveillance networks, we present in this paper an incremental multiclassification support vector machine (SVM) technique as a new framework for action classification based on real-time multivideo collected by homogeneous sites. The technique is based on an adaptation of the least squares SVM (LS-SVM) formulation but extends beyond the static image-based learning of current SVM methodologies. In applying the technique, an initial supervised offline learning phase is followed by a visual behavior data acquisition and an online learning phase, during which the cluster head performs an ensemble of model aggregations based on the sensor nodes' inputs. The cluster head then selectively switches on designated sensor nodes for future incremental learning. Combining sensor data offers an improvement over single-camera sensing, especially when the latter has an occluded view of the target object. The optimization involved alleviates the burdens of power consumption and communication bandwidth requirements. The resulting misclassification error rate, the iterative error reduction rate of the proposed incremental learning, and the decision fusion technique prove its validity when applied to visual sensor networks. Furthermore, the enabled online learning allows an adaptive domain knowledge insertion and offers the advantage of reducing both the model training time and the information storage requirements of the overall system, which makes it even more attractive for distributed sensor networks communication.

  18. Two-Point Incremental Forming with Partial Die: Theory and Experimentation

    Science.gov (United States)

    Silva, M. B.; Martins, P. A. F.

    2013-04-01

    This paper proposes a new level of understanding of two-point incremental forming (TPIF) with a partial die by means of a combined theoretical and experimental investigation. The theoretical developments include an innovative extension of the analytical model for rotationally symmetric single point incremental forming (SPIF), originally developed by the authors, to address the influence of the major operating parameters of TPIF and to successfully explain the differences in formability between SPIF and TPIF. The experimental work comprised the mechanical characterization of the material and the determination of its formability limits at necking and fracture by means of circle grid analysis and benchmark incremental sheet forming tests. Results show the adequacy of the proposed analytical model to handle the deformation mechanics of SPIF and TPIF with a partial die and demonstrate that neck formation is suppressed in TPIF, so that traditional forming limit curves are inapplicable to describe failure and must be replaced by fracture forming limits derived from ductile damage mechanics. The overall geometric accuracy of sheet metal parts produced by TPIF with a partial die is found to be better than that of parts fabricated by SPIF due to smaller elastic recovery upon unloading.

  19. A comparative study of velocity increment generation between the rigid body and flexible models of MMET

    Energy Technology Data Exchange (ETDEWEB)

    Ismail, Norilmi Amilia, E-mail: aenorilmi@usm.my [School of Aerospace Engineering, Engineering Campus, Universiti Sains Malaysia, 14300 Nibong Tebal, Pulau Pinang (Malaysia)

    2016-02-01

    The motorized momentum exchange tether (MMET) is capable of generating useful velocity increments through spin-orbit coupling. This work presents a comparison of the velocity increments generated by the rigid body and flexible models of the MMET. The equations of motion of both models in the time domain are transformed into functions of the true anomaly. The equations of motion are integrated, and the responses in terms of the velocity increment of the rigid body and flexible models are compared and analysed. Results show that the initial conditions, eccentricity, and flexibility of the tether have significant effects on the velocity increments of the tether.

  20. Evolving effective incremental SAT solvers with GP

    OpenAIRE

    Bader, Mohamed; Poli, R.

    2008-01-01

    Hyper-heuristics can simply be defined as heuristics that choose other heuristics; they are a way of combining existing heuristics to generate new ones. Here, a hyper-heuristic framework is used for evolving effective incremental (Inc*) solvers for SAT. We test the evolved heuristics (IncHH) against other known local search heuristics on a variety of benchmark SAT problems.

  1. Real Time Implementation of Incremental Fuzzy Logic Controller for Gas Pipeline Corrosion Control

    Directory of Open Access Journals (Sweden)

    Gopalakrishnan Jayapalan

    2014-01-01

    A robust virtual-instrumentation-based fuzzy incremental corrosion controller is presented to protect metallic gas pipelines. The controller output depends on the error and the change in error of the controlled variable. For corrosion control purposes, the pipe-to-soil potential is considered as the process variable. The proposed fuzzy incremental controller is designed using a very simple control rule base and the most natural and unbiased membership functions. The proposed scheme is tested for a wide range of pipe-to-soil potential control. A performance comparison between the conventional proportional-integral controller and the proposed fuzzy incremental controller is made in terms of several performance criteria such as peak overshoot, settling time, and rise time. Results show that the proposed controller outperforms its conventional counterpart in each case. The designed controller can be put in automatic mode without waiting for the initial polarization to stabilize. Initial startup curves of the proportional-integral and fuzzy incremental controllers are reported. This controller can be used to protect metallic structures such as pipelines, tanks, concrete structures, ships, and offshore structures.
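
    A minimal sketch of the incremental (velocity-form) control law described above (my construction, with a linear PI stand-in for the fuzzy rule base): the controller computes an output increment du from the error and the change in error, and the output is accumulated.

    ```python
    def incremental_control(setpoint, read, apply, kp=0.5, ki=0.1, steps=100):
        """Velocity-form controller: du = f(e, de); a fuzzy rule base mapping
        (e, de) -> du would replace the linear law below."""
        u, e_prev = 0.0, 0.0
        for _ in range(steps):
            e = setpoint - read()              # e.g., pipe-to-soil potential error
            du = kp * (e - e_prev) + ki * e    # increment from error and its change
            u += du
            apply(u)
            e_prev = e

    # Mock first-order plant standing in for the pipeline polarization dynamics.
    state = {"y": 0.0}
    incremental_control(-0.85,
                        lambda: state["y"],
                        lambda u: state.update(y=state["y"] + 0.2 * (u - state["y"])))
    print(state["y"])    # should approach the -0.85 V setpoint
    ```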

  2. Minimizing System Modification in an Incremental Design Approach

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Pop, Traian

    2001-01-01

    In this paper we present an approach to the mapping and scheduling of distributed embedded systems for hard real-time applications, aiming at minimizing the system modification cost. We consider an incremental design process that starts from an already existing system running a set of applications. We

  3. EFFECT OF COST INCREMENT DISTRIBUTION PATTERNS ON THE PERFORMANCE OF JIT SUPPLY CHAIN

    Directory of Open Access Journals (Sweden)

    Ayu Bidiawati J.R

    2008-01-01

    Cost is an important consideration in supply chain (SC) optimisation. This is due to the emphasis placed on cost reduction in order to optimise profit. Some researchers use cost as one of their performance measures, and others propose ways of accurately calculating it. As a product moves across the SC, its cost also increases. This paper studied the effect of cost increment distribution patterns on the performance of a JIT supply chain. In particular, it is necessary to know whether inventory allocation across the SC needs to be modified to accommodate different cost increment distribution patterns. It was found that the funnel is still the best card distribution pattern for the JIT-SC regardless of the cost increment distribution pattern used.

  4. BMI and BMI SDS in childhood: annual increments and conditional change.

    Science.gov (United States)

    Brannsether, Bente; Eide, Geir Egil; Roelants, Mathieu; Bjerknes, Robert; Júlíusson, Pétur Benedikt

    2017-02-01

    Background: Early detection of abnormal weight gain in childhood may be important for preventive purposes. It is still debated which annual changes in BMI should warrant attention. Aim: To analyse 1-year increments of Body Mass Index (BMI) and standardised BMI (BMI SDS) in childhood and explore conditional change in BMI SDS as an alternative method to evaluate 1-year changes in BMI. Subjects and methods: The distributions of 1-year increments of BMI (kg/m²) and BMI SDS are summarised by percentiles. Differences according to sex, age, height, weight, initial BMI and weight status in the BMI and BMI SDS increments were assessed with multiple linear regression. Conditional change in BMI SDS was based on the correlation between annual BMI measurements converted to SDS. Results: BMI increments depended significantly on sex, height, weight and initial BMI. Changes in BMI SDS depended significantly only on the initial BMI SDS. The distribution of conditional change in BMI SDS using a two-correlation model was close to normal (mean = 0.11, SD = 1.02, n = 1167), with 3.2% (2.3-4.4%) of the observations below -2 SD and 2.8% (2.0-4.0%) above +2 SD. Conclusion: Conditional change in BMI SDS can be used to detect unexpectedly large changes in BMI SDS. Although this method requires the use of a computer, it may be clinically useful to detect aberrant weight development.
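
    The conditional-change method referred to above has a standard closed form in its simplest one-correlation version (a sketch from the growth-reference literature, not quoted from this paper): given the correlation r between BMI SDS at the start and end of the year, a child with scores z1 and z2 has conditional change

    ```latex
    z_{\text{cond}} \;=\; \frac{z_2 - r\,z_1}{\sqrt{1 - r^2}}
    ```

    which is approximately standard normal in the reference population, so values beyond about ±2 flag 1-year changes that are unexpectedly large given the child's starting point; the near-normal distribution reported in the Results (mean 0.11, SD 1.02) is consistent with this construction.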

  5. Relating annual increments of the endangered Blanding's turtle plastron growth to climate.

    Science.gov (United States)

    Richard, Monik G; Laroque, Colin P; Herman, Thomas B

    2014-05-01

    This research is the first published study to report a relationship between climate variables and plastron growth increments of turtles, in this case the endangered Nova Scotia Blanding's turtle (Emydoidea blandingii). We used techniques and software common to the discipline of dendrochronology to successfully cross-date our growth increment data series, to detrend and average our series of 80 immature Blanding's turtles into one common chronology, and to seek correlations between the chronology and environmental temperature and precipitation variables. Our cross-dated chronology had a series intercorrelation of 0.441 (above 99% confidence interval), an average mean sensitivity of 0.293, and an average unfiltered autocorrelation of 0.377. Our master chronology represented increments from 1975 to 2007 (33 years), with index values ranging from a low of 0.688 in 2006 to a high of 1.303 in 1977. Univariate climate response function analysis on mean monthly air temperature and precipitation values revealed a positive correlation with the previous year's May temperature and current year's August temperature; a negative correlation with the previous year's October temperature; and no significant correlation with precipitation. These techniques for determining growth increment response to environmental variables should be applicable to other turtle species and merit further exploration.

  6. Incremental Validity of the Trait Emotional Intelligence Questionnaire-Short Form (TEIQue-SF).

    Science.gov (United States)

    Siegling, A B; Vesely, Ashley K; Petrides, K V; Saklofske, Donald H

    2015-01-01

    This study examined the incremental validity of the adult short form of the Trait Emotional Intelligence Questionnaire (TEIQue-SF) in predicting 7 construct-relevant criteria beyond the variance explained by the Five-factor model and coping strategies. Additionally, the relative contributions of the questionnaire's 4 subscales were assessed. Two samples of Canadian university students completed the TEIQue-SF, along with measures of the Big Five, coping strategies (Sample 1 only), and emotion-laden criteria. The TEIQue-SF showed consistent incremental effects beyond the Big Five or the Big Five and coping strategies, predicting all 7 criteria examined across the 2 samples. Furthermore, 2 of the 4 TEIQue-SF subscales accounted for the measure's incremental validity. Although the findings provide good support for the validity and utility of the TEIQue-SF, directions for further research are emphasized.

  7. Learning in Different Modes: The Interaction Between Incremental and Radical Change

    DEFF Research Database (Denmark)

    Petersen, Anders Hedegaard; Boer, Harry; Gertsen, Frank

    2004-01-01

    The objective of the study presented in this article is to contribute to the development of theory on continuous innovation, i.e. the combination of operationally effective exploitation and strategically flexible exploration. A longitudinal case study is presented of the interaction between incremental and radical change in a Danish company, observed through the lens of organizational learning. The radical change process is described in five phases, each of which had its own effects on incremental change initiatives in the company. The research identified four factors explaining these effects, all

  8. Incremental Approach to the Technology of Test Design for Industrial Projects

    Directory of Open Access Journals (Sweden)

    P. D. Drobintsev

    2014-01-01

    The paper presents an approach to effort reduction in developing test suites for industrial software products, based on an incremental technology. The main problems to be solved by the incremental technology are the fully automated design of test scenarios and a significant reduction of test explosion. The proposed approach provides solutions to these problems through the joint work of designer and customer, through the integration of symbolic verification with the automatic generation of test suites, and through the use of an efficient technology with the toolset VRS/TAT.

  9. Towards Reliable and Energy-Efficient Incremental Cooperative Communication for Wireless Body Area Networks.

    Science.gov (United States)

    Yousaf, Sidrah; Javaid, Nadeem; Qasim, Umar; Alrajeh, Nabil; Khan, Zahoor Ali; Ahmed, Mansoor

    2016-02-24

    In this study, we analyse incremental cooperative communication for wireless body area networks (WBANs) with different numbers of relays. Energy efficiency (EE) and the packet error rate (PER) are investigated for different schemes. We propose a new cooperative communication scheme with three-stage relaying and compare it to existing schemes. Our proposed scheme provides reliable communication with a lower PER at the cost of surplus energy consumption. Analytical expressions for the EE of the proposed three-stage cooperative communication scheme are also derived, taking into account the effect of the PER. Later on, the proposed three-stage incremental cooperation is implemented in a network layer protocol: enhanced incremental cooperative critical data transmission in emergencies for static WBANs (EInCo-CEStat). Extensive simulations are conducted to validate the proposed scheme. Results of incremental relay-based cooperative communication protocols are compared to two existing cooperative routing protocols: cooperative critical data transmission in emergencies for static WBANs (Co-CEStat) and InCo-CEStat. It is observed from the simulation results that incremental relay-based cooperation is more energy efficient than the existing conventional cooperation protocol, Co-CEStat. The results also reveal that EInCo-CEStat proves to be more reliable, with a lower PER and higher throughput than both of the counterpart protocols. However, InCo-CEStat has less throughput with a greater stability period and network lifetime. Due to the availability of more redundant links, EInCo-CEStat achieves a reduced packet drop rate at the cost of increased energy consumption.

  10. A Pipelining Implementation for Parsing X-ray Diffraction Source Data and Removing the Background Noise

    International Nuclear Information System (INIS)

    Bauer, Michael A; Biem, Alain; McIntyre, Stewart; Xie Yuzhen

    2010-01-01

    Synchrotrons can be used to generate X-rays in order to probe materials at the atomic level. One approach is to use X-ray diffraction (XRD) to do this. The data from an XRD experiment consist of a sequence of digital image files; a single scan can consist of hundreds or even thousands of digital images. Existing analysis software processes these images individually and sequentially, and is usually used after the experiment is completed. The results from an XRD detector can be thought of as a sequence of images generated during the scan by the X-ray beam. If these images could be analyzed in near real-time, the results could be sent to the researcher running the experiment and used to improve the overall experimental process and results. In this paper, we report on a stream processing application to remove background from XRD images using a pipelining implementation. We describe our implementation techniques of using IBM Infosphere Streams for parsing XRD source data and removing the background. We present experimental results showing the super-linear speedup attained over a purely sequential version of the algorithm on a quad-core machine. These results demonstrate the potential of making good use of multi-cores for high-performance stream processing of XRD images.
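
    A schematic sketch of the same pipelining idea in plain Python (my construction; the paper itself uses IBM Infosphere Streams): one process parses frames while another removes background, so the two stages overlap across consecutive images.

    ```python
    import multiprocessing as mp
    import numpy as np

    def parse_stage(n_frames, q):
        """Stand-in for parsing XRD detector files into arrays."""
        rng = np.random.default_rng(0)
        for i in range(n_frames):
            q.put((i, rng.poisson(5.0, size=(256, 256)).astype(float)))
        q.put(None)                            # end-of-stream marker

    def background_stage(q):
        """Consumes frames as they arrive and subtracts a crude background."""
        while (item := q.get()) is not None:
            i, img = item
            cleaned = img - np.median(img)     # placeholder background estimate
            # ...forward `cleaned` downstream in near real-time

    if __name__ == "__main__":
        q = mp.Queue(maxsize=8)                # bounded queue gives backpressure
        mp.Process(target=parse_stage, args=(100, q)).start()
        background_stage(q)
    ```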

  11. Effect of multiple forming tools on geometrical and mechanical properties in incremental sheet forming

    Science.gov (United States)

    Wernicke, S.; Dang, T.; Gies, S.; Tekkaya, A. E.

    2018-05-01

    The tendency toward a higher variety of products requires economical manufacturing processes suitable for the production of prototypes and small batches. In the case of complex hollow-shaped parts, single point incremental forming (SPIF) represents a highly flexible process. The flexibility of this process, however, comes at the cost of a very long process time. To decrease the process time, a new incremental forming approach with multiple forming tools is investigated. The influence of two incremental forming tools on the resulting mechanical and geometrical component properties, compared to SPIF, is presented. Sheets made of EN AW-1050A were formed into frustums of a pyramid using different tool-path strategies. Furthermore, several variations of the tool-path strategy are analyzed. A time saving between 40% and 60% was observed, depending on the tool-path and the radii of the forming tools, while the mechanical properties remained unchanged. This knowledge can increase the cost efficiency of incremental forming processes.

  12. Parallel Algorithm for Incremental Betweenness Centrality on Large Graphs

    KAUST Repository

    Jamour, Fuad Tarek; Skiadopoulos, Spiros; Kalnis, Panos

    2017-01-01

    ...they either require excessive memory (i.e., quadratic in the size of the input graph) or perform unnecessary computations, rendering them prohibitively slow. We propose iCentral, a novel incremental algorithm for computing betweenness centrality in evolving

  13. Incremental principal component pursuit for video background modeling

    Science.gov (United States)

    Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt

    2017-03-14

    We present an incremental Principal Component Pursuit (PCP) algorithm for video background modeling that is able to process one frame at a time while adapting to changes in background, with a computational complexity that allows for real-time processing, a low memory footprint, and robustness to translational and rotational jitter.

  14. Geometry of finite deformations and time-incremental analysis

    Czech Academy of Sciences Publication Activity Database

    Fiala, Zdeněk

    2016-01-01

    Roč. 81, May (2016), s. 230-244 ISSN 0020-7462 Institutional support: RVO:68378297 Keywords : solid mechanics * finite deformations * time-incremental analysis * Lagrangian system * evolution equation of Lie type Subject RIV: BE - Theoretical Physics Impact factor: 2.074, year: 2016 http://www.sciencedirect.com/science/article/pii/S0020746216000330

  15. Incremental View Maintenance for Deductive Graph Databases Using Generalized Discrimination Networks

    Directory of Open Access Journals (Sweden)

    Thomas Beyhl

    2016-12-01

    Full Text Available Nowadays, graph databases are employed when relationships between entities are in the scope of database queries, to avoid the performance-critical join operations of relational databases. Graph queries are used to query and modify the graphs stored in graph databases. Graph queries employ graph pattern matching, which is NP-complete for subgraph isomorphism. To increase query performance, graph database views can be employed that keep ready answers, in terms of precalculated graph pattern matches, for frequently stated and complex graph queries. However, such graph database views must be kept consistent with the graphs stored in the graph database. In this paper, we describe how to use incremental graph pattern matching as a technique for maintaining graph database views. We present an incremental maintenance algorithm for graph database views, which works for imperatively and declaratively specified graph queries. The evaluation shows that our maintenance algorithm scales as the number of nodes and edges stored in the graph database increases. Furthermore, our evaluation shows that our approach can outperform existing approaches for the incremental maintenance of graph query results.
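
    As a hedged illustration of the maintenance idea (not the authors' algorithm or their generalized discrimination networks), the sketch below materializes all matches of a fixed two-edge path pattern and updates them locally on edge insertions and deletions; only matches that can involve the changed edge are (re)examined.

```python
# Incremental maintenance of a materialized view of all directed paths a->b->c.
from collections import defaultdict

class PathView:
    def __init__(self):
        self.succ = defaultdict(set)   # adjacency: u -> {v}
        self.pred = defaultdict(set)   # reverse adjacency
        self.matches = set()           # materialized view: (a, b, c) paths

    def insert_edge(self, u, v):
        self.succ[u].add(v)
        self.pred[v].add(u)
        # New matches must use (u, v) as the first or the second pattern edge.
        for c in self.succ[v]:
            self.matches.add((u, v, c))
        for a in self.pred[u]:
            self.matches.add((a, u, v))

    def delete_edge(self, u, v):
        self.succ[u].discard(v)
        self.pred[v].discard(u)
        # Matches that used (u, v) are exactly those with it in either position.
        self.matches = {m for m in self.matches
                        if (m[0], m[1]) != (u, v) and (m[1], m[2]) != (u, v)}

view = PathView()
for e in [(1, 2), (2, 3), (2, 4), (0, 1)]:
    view.insert_edge(*e)
print(sorted(view.matches))  # [(0, 1, 2), (1, 2, 3), (1, 2, 4)]
```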

  16. Final Safety Analysis Report (FSAR) for Building 332, Increment III

    Energy Technology Data Exchange (ETDEWEB)

    Odell, B. N.; Toy, Jr., A. J.

    1977-08-31

    This Final Safety Analysis Report (FSAR) supplements the Preliminary Safety Analysis Report (PSAR), dated January 18, 1974, for Building 332, Increment III of the Plutonium Materials Engineering Facility located at the Lawrence Livermore Laboratory (LLL). The FSAR, in conjunction with the PSAR, shows that the completed increment provides facilities for safely conducting the operations as described. These documents satisfy the requirements of ERDA Manual Appendix 6101, Annex C, dated April 8, 1971. The format and content of this FSAR complies with the basic requirements of the letter of request from ERDA San to LLL, dated March 10, 1972. Included as appendices in support of the FSAR are the Building 332 Operational Safety Procedure and the LLL Disaster Control Plan.

  17. Health level seven interoperability strategy: big data, incrementally structured.

    Science.gov (United States)

    Dolin, R H; Rogers, B; Jaffe, C

    2015-01-01

    Describe how the HL7 Clinical Document Architecture (CDA), a foundational standard in US Meaningful Use, contributes to a "big data, incrementally structured" interoperability strategy, whereby data structured incrementally gets large amounts of data flowing faster. We present cases showing how this approach is leveraged for big data analysis. To support the assertion that semi-structured narrative in CDA format can be a useful adjunct in an overall big data analytic approach, we present two case studies. The first assesses an organization's ability to generate clinical quality reports using coded data alone vs. coded data supplemented by CDA narrative. The second leverages CDA to construct a network model for referral management, from which additional observations can be gleaned. The first case shows that coded data supplemented by CDA narrative resulted in significant variances in calculated performance scores. In the second case, we found that the constructed network model enables the identification of differences in patient characteristics among different referral work flows. The CDA approach goes after data indirectly, by focusing first on the flow of narrative, which is then incrementally structured. A quantitative assessment of whether this approach will lead to a greater flow of data and ultimately a greater flow of structured data vs. other approaches is planned as a future exercise. Along with growing adoption of CDA, we are now seeing the big data community explore the standard, particularly given its potential to supply analytic engines with volumes of data previously not possible.

  18. A fast implementation of the incremental backprojection algorithms for parallel beam geometries

    International Nuclear Information System (INIS)

    Chen, C.M.; Wang, C.Y.; Cho, Z.H.

    1996-01-01

    Filtered-backprojection algorithms are the most widely used approaches for reconstruction of computed tomographic (CT) images, such as X-ray CT and positron emission tomographic (PET) images. The incremental backprojection algorithm is a fast backprojection approach based on restructuring the Shepp and Logan algorithm. By exploiting the interdependency (position and values) of adjacent pixels, the incremental algorithm requires only O(N) and O(N^2) multiplications, in contrast to O(N^2) and O(N^3) multiplications for the Shepp and Logan algorithm, in two-dimensional (2-D) and three-dimensional (3-D) backprojections, respectively, for each view, where N is the size of the image in each dimension. In addition, it may reduce the number of additions for each pixel computation. The improvement achieved by the incremental algorithm in practice was not, however, as significant as expected. One of the main reasons is the inevitable visiting of pixels outside the beam in the searching flow scheme originally developed for the incremental algorithm. To optimize the implementation of the incremental algorithm, an efficient scheme, namely, the coded searching flow scheme, is proposed in this paper to minimize the overhead caused by searching for all pixels in a beam. The key idea of this scheme is to encode the searching flow for all pixels inside each beam. While backprojecting, all pixels may be visited without any overhead by using the coded searching flow as a priori information. The proposed coded searching flow scheme has been implemented on Sun Sparc 10 and Sun Sparc 20 workstations. The implementation results show that the proposed scheme is 1.45--2.0 times faster than the original searching flow scheme for most cases tested.
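
    The incremental algorithm itself predates this paper, but its core trick is compact enough to sketch. In the toy numpy version below, the detector coordinate of consecutive pixels in a row differs by the constant cos(theta), so each pixel's bin address is obtained by a single addition rather than a fresh multiply; the nearest-neighbour interpolation and the geometry are simplifying assumptions, and the paper's coded searching flow scheme is not modeled.

```python
# Incremental update of the detector coordinate along each image row.
import numpy as np

def backproject_view_incremental(proj, theta, N):
    """Accumulate one (already filtered) parallel-beam view into an N x N image."""
    img = np.zeros((N, N))
    c, s = np.cos(theta), np.sin(theta)
    half = N // 2
    for iy in range(N):
        y = iy - half
        t = -half * c + y * s          # detector coordinate of first pixel in row
        for ix in range(N):
            k = int(round(t)) + half   # nearest detector bin
            if 0 <= k < proj.size:
                img[iy, ix] += proj[k]
            t += c                     # incremental step: one addition per pixel
    return img

view = np.ones(64)                     # dummy filtered projection, 64 bins
img = backproject_view_incremental(view, np.pi / 4, 64)
print(img.shape, img.sum())
```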

  19. Increment and mortality in a virgin Douglas-fir forest.

    Science.gov (United States)

    Robert W. Steele; Norman P. Worthington

    1955-01-01

    Is there any basis to the forester's rule of thumb that virgin forests eventually reach an equilibrium where increment and mortality approximately balance? Are we wasting potential timber volume by failing to salvage mortality in old-growth stands?

  20. Diagnostic value of triphasic incremental helical CT in early and progressive gastric carcinoma

    International Nuclear Information System (INIS)

    Gao Jianbo; Yan Xuehua; Li Mengtai; Guo Hua; Chen Xuejun; Guan Sheng; Zhang Xiefu; Li Shuxin; Yang Xiaopeng

    2001-01-01

    Objective: To investigate the helical CT enhancement characteristics of gastric carcinoma, and the diagnostic value and preoperative staging of gastric carcinoma with triphasic incremental helical CT of the stomach using the water-filling method. Methods: Both double-contrast barium examination and triphasic incremental helical CT of the stomach with the water-filling method were performed in 46 patients with gastric carcinoma. Results: (1) Among these patients, the normal gastric wall exhibited a one-layered structure in 18 patients and a two- or three-layered structure in 28 patients in the arterial and portal venous phases. (2) Two cases of early stomach cancer showed marked enhancement in the arterial and portal venous phases and obvious attenuation of enhancement in the equilibrium phase. In contrast, 32 of the 44 advanced gastric carcinomas showed marked enhancement in the venous phase compared with the arterial phase (t = 4.226, P < 0.05). (3) The total accuracy of triphasic incremental helical CT in determining TNM staging was 81.0%. Conclusion: Different types of gastric carcinoma have different enhancement features. Triphasic incremental helical CT is more accurate than conventional CT in the preoperative staging of gastric carcinoma.

  1. A System to Derive Optimal Tree Diameter Increment Models from the Eastwide Forest Inventory Data Base (EFIDB)

    Science.gov (United States)

    Don C. Bragg

    2002-01-01

    This article is an introduction to the computer software used by the Potential Relative Increment (PRI) approach to optimal tree diameter growth modeling. These DOS programs extract qualified tree and plot data from the Eastwide Forest Inventory Data Base (EFIDB), calculate relative tree increment, sort for the highest relative increments by diameter class, and...

  2. Annual increments, specific gravity and energy of Eucalyptus grandis by gamma-ray attenuation technique

    International Nuclear Information System (INIS)

    Rezende, M.A.; Guerrini, I.A.; Ferraz, E.S.B.

    1990-01-01

    Annual increments in volume, mass, and energy, and the specific gravity of Eucalyptus grandis at thirteen years of age were determined, taking into account measurements of the calorific value of the wood. It was observed that the calorific value of the wood decreases slightly, while the specific gravity increases significantly with age. The so-called culmination age for the annual volume increment was determined to be around the fourth year of growth, while for the annual mass and energy increments it was around the eighth year. These results show that a tree at a particular age may not have significant growth in volume, yet still gain mass and energy. (author)

  3. Stem analysis program (GOAP for evaluating of increment and growth data at individual tree

    Directory of Open Access Journals (Sweden)

    Gafura Aylak Özdemir

    2016-07-01

    Full Text Available Stem analysis is a method for evaluating in detail the increment and growth data of an individual tree over past periods, and it is widely used in various forestry disciplines. The raw data of a stem analysis consist of annual ring counts and measurements performed on cross sections taken from an individual tree by the section method. Evaluating these raw data takes considerable time. Thus, in this study a computer program was developed to perform stem analysis quickly and efficiently. The program, which evaluates the raw stem analysis data numerically and graphically, was written as a macro using the Visual Basic for Applications feature of MS Excel 2013, currently the most widely used spreadsheet program. In the program, the height growth model is formed from two different approaches, and individual tree volume based on the section method, cross-sectional area, diameter, height and volume increments, volume increment percent, and the stem form factor at breast height are calculated for the desired period lengths. The calculated values are given as tables. The development of diameter, height, volume, the increments of these variables, volume increment percent, and the stem form factor at breast height according to periodic age are given as charts. A stem model showing the development of the diameter, height, and shape of the individual tree over past periods can also be produced by the program as a chart.

  4. Incremental exercise test performance with and without a respiratory ...

    African Journals Online (AJOL)

    Incremental exercise test performance with and without a respiratory gas collection system. Industrial-type mask wear is thought to impair exercise performance through increased respiratory dead space, flow ...

  5. Complexity analysis of the turbulent environmental fluid flow time series

    Science.gov (United States)

    Mihailović, D. T.; Nikolić-Đorić, E.; Drešković, N.; Mimić, G.

    2014-02-01

    We have used the Kolmogorov complexities, sample and permutation entropies to quantify the degree of randomness in the river flow time series of two mountain rivers in Bosnia and Herzegovina, representing the turbulent environmental fluid, for the period 1926-1990. In particular, we have examined the monthly river flow time series from two rivers (the Miljacka and the Bosnia) in the mountain part of their flow and then calculated the Kolmogorov complexity (KL) based on the Lempel-Ziv Algorithm (LZA) (lower-KLL and upper-KLU), sample entropy (SE) and permutation entropy (PE) values for each time series. The results indicate that the KLL, KLU, SE and PE values in the two rivers are close to each other regardless of the amplitude differences in their monthly flow rates. We have illustrated the changes in mountain river flow complexity by experiments using (i) the data set for the Bosnia River and (ii) anticipated human activities and projected climate changes. We have explored the sensitivity of the considered measures to the length of the time series. In addition, we have divided the period 1926-1990 into three subintervals: (a) 1926-1945, (b) 1946-1965, (c) 1966-1990, and calculated the KLL, KLU, SE, PE values for the various time series in these subintervals. It is found that during the period 1946-1965 there is a decrease in their complexities, and corresponding changes in the SE and PE, in comparison to the period 1926-1990. This complexity loss may be primarily attributed to (i) human interventions, after the Second World War, on these two rivers because of their use for water consumption and (ii) climate change in recent times.
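
    For readers unfamiliar with the complexity measure used here, the following is a minimal sketch of the Lempel-Ziv (LZ76) phrase-counting computation that underlies Kolmogorov-complexity estimates of the KL type. Binarizing the flow series around its median and normalizing by log2(n)/n are common conventions, but they are assumptions on our part rather than details taken from the paper.

```python
# LZ76 phrase counting (Kaspar-Schuster scheme) applied to a binarized series.
import numpy as np

def lz_complexity(sequence):
    """Phrase count of the LZ76 exhaustive parsing of a symbol sequence."""
    s = "".join(map(str, sequence))
    n = len(s)
    i, k, l = 0, 1, 1
    k_max, c = 1, 1
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:          # reached the end inside a match
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:             # history exhausted: a new phrase starts
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

flow = np.random.lognormal(mean=3.0, sigma=0.5, size=780)   # stand-in monthly series
bits = (flow > np.median(flow)).astype(int)                 # binarize around the median
c = lz_complexity(bits)
kl = c * np.log2(len(bits)) / len(bits)                     # normalized complexity
print(f"phrases = {c}, normalized complexity = {kl:.3f}")
```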

  6. Association between increased EEG signal complexity and cannabis dependence.

    Science.gov (United States)

    Laprevote, Vincent; Bon, Laura; Krieg, Julien; Schwitzer, Thomas; Bourion-Bedes, Stéphanie; Maillard, Louis; Schwan, Raymund

    2017-12-01

    Both acute and regular cannabis use affects the functioning of the brain. While several studies have demonstrated that regular cannabis use can impair the capacity to synchronize neural assemblies during specific tasks, less is known about spontaneous brain activity. This can be explored by measuring EEG complexity, which reflects the spontaneous variability of human brain activity. A recent study has shown that acute cannabis use can affect that complexity. Since the characteristics of cannabis use can affect the impact on brain functioning, this study sets out to measure EEG complexity in regular cannabis users with or without dependence, in comparison with healthy controls. We recruited 26 healthy controls, 25 cannabis users without cannabis dependence and 14 cannabis users with cannabis dependence, based on DSM IV TR criteria. The EEG signal was extracted from at least 250 epochs of the 500 ms pre-stimulation phase during a visual evoked potential paradigm. Brain complexity was estimated using Lempel-Ziv Complexity (LZC), which was compared across groups by non-parametric Kruskal-Wallis ANOVA. The analysis revealed a significant difference between the groups, with higher LZC in participants with cannabis dependence than in non-dependent cannabis users. There was no specific localization of this effect across electrodes. We showed that cannabis dependence is associated with increased spontaneous brain complexity in regular users. This result is in line with previous results in acute cannabis users. It may reflect increased randomness of neural activity in cannabis dependence. Future studies should explore whether this effect is permanent or diminishes with cannabis cessation. Copyright © 2017 Elsevier B.V. and ECNP. All rights reserved.

  7. Image quality (IQ) guided multispectral image compression

    Science.gov (United States)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image will be measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of 3 steps. The first step is to compress a set of images of interest by varying the parameters and to compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurements versus the compression parameters over a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
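
    A hedged sketch of steps 1 and 3 for a single codec: sweep JPEG's quality parameter with Pillow, measure the PSNR of the round-tripped image, and pick the smallest setting that meets a specified IQ target. The codec choice, the gradient test image, and the target value are illustrative stand-ins for the paper's thermal imagery and regression models.

```python
# Pick the smallest JPEG quality whose decompressed PSNR meets a target.
import io
import numpy as np
from PIL import Image

def psnr(a, b, peak=255.0):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def jpeg_roundtrip(arr, quality):
    buf = io.BytesIO()
    Image.fromarray(arr).save(buf, format="JPEG", quality=quality)
    size = buf.tell()
    buf.seek(0)
    return np.asarray(Image.open(buf)), size

def smallest_quality_meeting(arr, target_psnr):
    """Step 1: measure IQ across the parameter range; step 3: pick a setting."""
    for q in range(5, 100, 5):
        dec, size = jpeg_roundtrip(arr, q)
        if psnr(arr, dec) >= target_psnr:
            return q, size
    return None, None          # target not reachable with this codec

img = np.tile(np.arange(128, dtype=np.uint8), (128, 1))   # smooth stand-in image
q, size = smallest_quality_meeting(img, target_psnr=38.0)
print(f"smallest JPEG quality meeting 38 dB: {q} ({size} bytes)")
```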

  8. Generation of Referring Expressions: Assessing the Incremental Algorithm

    Science.gov (United States)

    van Deemter, Kees; Gatt, Albert; van der Sluis, Ielka; Power, Richard

    2012-01-01

    A substantial amount of recent work in natural language generation has focused on the generation of "one-shot" referring expressions whose only aim is to identify a target referent. Dale and Reiter's Incremental Algorithm (IA) is often thought to be the best algorithm for maximizing the similarity to referring expressions produced by people. We…

  9. Incremental exposure facilitates adaptation to sensory rearrangement. [vestibular stimulation patterns

    Science.gov (United States)

    Lackner, J. R.; Lobovits, D. N.

    1978-01-01

    Visual-target pointing experiments were performed on 24 adult volunteers in order to compare the relative effectiveness of incremental (stepwise) and single-step exposure conditions on adaptation to visual rearrangement. The differences between the preexposure and postexposure scores served as an index of the adaptation elicited during the exposure period. It is found that both single-step and stepwise exposure to visual rearrangement elicit compensatory changes in sensorimotor coordination. However, stepwise exposure, when compared to single-step exposure in terms of the average magnitude of visual displacement over the exposure period, clearly enhances the rate of adaptation. It seems possible that the enhancement of adaptation to unusual patterns of sensory stimulation produced by incremental exposure reflects a general principle of sensorimotor function.

  10. Weighted tunable clustering in local-world networks with increment behavior

    International Nuclear Information System (INIS)

    Ma, Ying-Hong; Li, Huijia; Zhang, Xiao-Dong

    2010-01-01

    Since some real networks are influenced not only by increment behavior but also by a tunable clustering mechanism as new nodes are added, it is interesting to characterize a model for such networks. In this paper, a weighted local-world model, which incorporates increment behavior and a tunable clustering mechanism, is proposed, and its properties, such as degree distribution and clustering coefficient, are investigated. Numerical simulations fit the model well and display good right-skewed scale-free properties. Furthermore, the correlation of vertices in our model is studied, which shows the assortative property. The epidemic spreading process with a weighted transmission rate on the model shows that the tunable clustering behavior has a great impact on the epidemic dynamics.

  11. On kinematical minimum principles for rates and increments in plasticity

    International Nuclear Information System (INIS)

    Zouain, N.

    1984-01-01

    The optimization approach to elastoplastic analysis is discussed, showing that some minimum principles related to numerical methods can be derived by means of duality and penalization procedures. Three minimum principles for velocity and plastic multiplier rate fields are presented in the framework of perfect plasticity. The first one is the classical Greenberg formulation. The second one, due to Capurso, is developed here with a different motivation, and modified by penalization of constraints so as to arrive at a third principle for rates. The counterparts of these optimization formulations in terms of discrete increments of displacements and plastic multipliers are discussed. The third of these minimum principles for finite increments is recognized to be closely related to Maier's formulation of holonomic plasticity. (Author) [pt

  12. Variance-optimal hedging for processes with stationary independent increments

    DEFF Research Database (Denmark)

    Hubalek, Friedrich; Kallsen, J.; Krawczyk, L.

    We determine the variance-optimal hedge when the logarithm of the underlying price follows a process with stationary independent increments in discrete or continuous time. Although the general solution to this problem is known as backward recursion or backward stochastic differential equation, we...

  13. FDTD Stability: Critical Time Increment

    Directory of Open Access Journals (Sweden)

    Z. Skvor

    2003-06-01

    Full Text Available A new approach suitable for determination of the maximal stable time increment for the Finite-Difference Time-Domain (FDTD) algorithm in common curvilinear coordinates, for general mesh shapes and certain types of boundaries, is presented. The maximal time increment corresponds to a characteristic value of a Helmholtz equation that is solved by a finite-difference (FD) method. If this method uses exactly the same discretization as the given FDTD method (same mesh, boundary conditions, order of precision, etc.), the maximal stable time increment is obtained from the highest characteristic value. The FD system is solved by an iterative method, which uses only slightly altered original FDTD formulae. The Courant condition yields a stable time increment, but in certain cases the maximum increment is slightly greater [2].
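
    The idea translates directly into a toy 1-D computation: power iteration yields the highest characteristic value lmax of the spatially discretized operator, and the maximal stable leapfrog step follows as dt_max = 2 / (c * sqrt(lmax)). The Dirichlet grid, the constants, and the plain power iteration below are our assumptions, not the paper's curvilinear formulation.

```python
# Maximal stable time step from the largest eigenvalue of the discrete operator.
import numpy as np

c, h, N = 3e8, 1e-3, 200                     # wave speed, cell size, cells
A = (np.diag(np.full(N, 2.0)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2   # minus-Laplacian, Dirichlet ends

v = np.random.rand(N)
for _ in range(2000):                        # power iteration -> largest eigenvalue
    v = A @ v
    v /= np.linalg.norm(v)
lmax = v @ A @ v                             # Rayleigh quotient

dt_max = 2.0 / (c * np.sqrt(lmax))
print(f"dt_max = {dt_max:.3e} s  vs  Courant h/c = {h / c:.3e} s")
# dt_max comes out marginally above h/c, matching the remark that the Courant
# bound is safe but in some cases slightly conservative.
```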

  14. Incremental cryptography and security of public hash functions ...

    African Journals Online (AJOL)

    An investigation of incremental algorithms for cryptographic functions was initiated. The problem, for collision-free hashing, is to design a scheme for which there exists an efficient "update" algorithm: this algorithm is given the hash function H, the hash h = H(M) of message M and the "replacement request" (j, m), and outputs ...

  15. Product Quality Modelling Based on Incremental Support Vector Machine

    International Nuclear Information System (INIS)

    Wang, J; Zhang, W; Qin, B; Shi, W

    2012-01-01

    Incremental support vector machine (ISVM) is a new learning method developed in recent years on the foundations of statistical learning theory. It is suitable for problems with sequentially arriving field data and has been widely used for product quality prediction and production process optimization. However, traditional ISVM learning does not consider the quality of the incremental data, which may contain noise and redundant data; this affects the learning speed and accuracy to a great extent. In order to improve SVM training speed and accuracy, a modified incremental support vector machine (MISVM) is proposed in this paper. Firstly, the margin vectors are extracted according to the Karush-Kuhn-Tucker (KKT) conditions; then the distance from each margin vector to the final decision hyperplane is calculated to evaluate its importance, and margin vectors are removed when their distance exceeds a specified value; finally, the original SVs and the remaining margin vectors are used to update the SVM. The proposed MISVM can not only eliminate unimportant samples such as noise samples, but also preserve the important ones. The MISVM has been tested on two public datasets and one field dataset of zinc coating weight in strip hot-dip galvanizing, and the results show that the proposed method can improve the prediction accuracy and the training speed effectively. Furthermore, it can provide the necessary decision support and analysis tools for automatic control of product quality, and can also be extended to other process industries, such as chemical and manufacturing processes.
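
    A rough scikit-learn sketch of the filtering step, under the assumptions of a linear kernel and an illustrative distance cutoff (the paper's KKT-based extraction and cutoff selection are not reproduced): support vectors far from the decision hyperplane are discarded before the model is refit on the kept vectors plus the incremental batch.

```python
# Distance-based pruning of support vectors before an incremental refit.
import numpy as np
from sklearn.svm import SVC

def incremental_update(model, X_old, y_old, X_new, y_new, max_dist=0.2):
    """Refit on kept support vectors plus the new batch (linear kernel assumed)."""
    w_norm = np.linalg.norm(model.coef_)
    sv = model.support_                                  # indices of support vectors
    dist = np.abs(model.decision_function(X_old[sv])) / w_norm
    keep = sv[dist <= max_dist]                          # drop distant margin vectors
    X_keep = np.vstack([X_old[keep], X_new])
    y_keep = np.concatenate([y_old[keep], y_new])
    return SVC(kernel="linear").fit(X_keep, y_keep), len(sv) - len(keep)

rng = np.random.default_rng(0)
X0 = rng.normal(size=(200, 2))
y0 = (X0[:, 0] > 0).astype(int)
model = SVC(kernel="linear").fit(X0, y0)

X1 = rng.normal(size=(50, 2))                            # incremental batch
y1 = (X1[:, 0] > 0).astype(int)
model, dropped = incremental_update(model, X0, y0, X1, y1)
print(f"dropped {dropped} vectors; accuracy = {model.score(X0, y0):.2f}")
```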

  16. First UHF Implementation of the Incremental Scheme for Open-Shell Systems.

    Science.gov (United States)

    Anacker, Tony; Tew, David P; Friedrich, Joachim

    2016-01-12

    The incremental scheme makes it possible to compute CCSD(T) correlation energies to high accuracy for large systems. We present the first extension of this fully automated black-box approach to open-shell systems using an Unrestricted Hartree-Fock (UHF) wave function, extending the efficient domain-specific basis set approach to handle open-shell references. We test our approach on a set of organic and metal organic structures and molecular clusters and demonstrate standard deviations from canonical CCSD(T) values of only 1.35 kJ/mol using a triple ζ basis set. We find that the incremental scheme is significantly more cost-effective than the canonical implementation even for relatively small systems and that the ease of parallelization makes it possible to perform high-level calculations on large systems in a few hours on inexpensive computers. We show that the approximations that make our approach widely applicable are significantly smaller than both the basis set incompleteness error and the intrinsic error of the CCSD(T) method, and we further demonstrate that incremental energies can be reliably used in extrapolation schemes to obtain near complete basis set limit CCSD(T) reaction energies for large systems.

  17. Pornographic image recognition and filtering using incremental learning in compressed domain

    Science.gov (United States)

    Zhang, Jing; Wang, Chao; Zhuo, Li; Geng, Wenhao

    2015-11-01

    With the rapid development and popularity of the network, the openness, anonymity, and interactivity of networks have led to the spread and proliferation of pornographic images on the Internet, which do great harm to adolescents' physical and mental health. With the establishment of image compression standards, pornographic images are mainly stored in compressed formats. Therefore, how to efficiently filter pornographic images is one of the challenging issues for information security. A pornographic image recognition and filtering method in the compressed domain is proposed using incremental learning, which includes the following steps: (1) low-resolution (LR) images are first reconstructed from the compressed stream of pornographic images; (2) visual words are created from the LR image to represent the pornographic image; and (3) incremental learning is adopted to continuously adjust the classification rules to recognize new pornographic image samples, after the covering algorithm is utilized to train and recognize the visual words in order to build the initial classification model of pornographic images. The experimental results show that the proposed pornographic image recognition method using incremental learning achieves a higher recognition rate while requiring less recognition time in the compressed domain.

  18. Conceptual plural information is used to guide early parsing decisions: Evidence from garden-path sentences with reciprocal verbs.

    Science.gov (United States)

    Patson, Nikole D; Ferreira, Fernanda

    2009-05-01

    In three eyetracking studies, we investigated the role of conceptual plurality in initial parsing decisions in temporarily ambiguous sentences with reciprocal verbs (e.g., While the lovers kissed the baby played alone). We varied the subject of the first clause using three types of plural noun phrases: conjoined noun phrases (the bride and the groom), plural definite descriptions (the lovers), and numerically quantified noun phrases (the two lovers). We found no evidence for garden-path effects when the subject was conjoined (Ferreira & McClure, 1997), but traditional garden-path effects were found with the other plural noun phrases. In addition, we tested plural anaphors that had a plural antecedent present in the discourse. We found that when the antecedent was conjoined, garden-path effects were absent compared to cases in which the antecedent was a plural definite description. Our results indicate that the parser is sensitive to the conceptual representation of a plural constituent. In particular, it appears that a Complex Reference Object (Moxey et al., 2004) automatically activates a reciprocal reading of a reciprocal verb.

  19. A design of LED adaptive dimming lighting system based on incremental PID controller

    Science.gov (United States)

    He, Xiangyan; Xiao, Zexin; He, Shaojia

    2010-11-01

    As a new generation of energy-saving lighting source, the LED is applied widely in various technology and industry fields, and the requirements on its adaptive lighting technology are increasingly rigorous, especially in automatic on-line detecting systems. In this paper, a closed-loop feedback LED adaptive dimming lighting system based on an incremental PID controller is designed, which consists of a MEGA16 chip as the microcontroller unit (MCU), the ambient light sensor BH1750 with an Inter-Integrated Circuit (I2C) interface, and a constant-current driving circuit. The value of light intensity required for the on-line detecting environment is stored in a register of the MCU. The optical intensity, detected by the BH1750 in real time, is converted to a digital signal by the AD converter of the BH1750 chip and then transmitted to the MEGA16 through the I2C serial bus. Since the variation law of light intensity in the on-line detecting environment is usually not easy to establish, an incremental proportional-integral-derivative (PID) algorithm is applied in this system. The control variable obtained from the incremental PID determines the duty cycle of the pulse-width modulation (PWM). Consequently, the LED forward current is adjusted by the PWM, and the luminous intensity of the detection environment is stabilized by self-adaptation. The coefficients of the incremental PID were obtained experimentally. Compared with a traditional LED dimming system, the proposed system has the advantages of interference immunity, simple construction, fast response, and high stability, owing to the incremental PID algorithm and the BH1750 chip with the I2C serial bus. Therefore, it is suitable for adaptive on-line detecting applications.
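
    The incremental PID law referred to above computes a change in the control variable from the last three errors, so no explicit error integral has to be stored. The sketch below is illustrative only: the gains, the setpoint, and the linear lux-versus-duty "plant" are assumptions, not the paper's tuned values or hardware.

```python
# Incremental PID: delta_u = Kp*(e_k - e_{k-1}) + Ki*e_k + Kd*(e_k - 2e_{k-1} + e_{k-2})
class IncrementalPID:
    def __init__(self, kp, ki, kd, duty=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.duty = duty            # current PWM duty cycle in [0, 1]
        self.e1 = self.e2 = 0.0     # errors at steps k-1 and k-2

    def update(self, setpoint, measured):
        e = setpoint - measured
        du = (self.kp * (e - self.e1)
              + self.ki * e
              + self.kd * (e - 2 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, e
        self.duty = min(1.0, max(0.0, self.duty + du))  # clamp the duty cycle
        return self.duty

# Toy closed loop: assume the lux reading scales linearly with the duty cycle.
pid, lux = IncrementalPID(kp=0.0008, ki=0.0008, kd=0.0001), 0.0
for _ in range(50):
    duty = pid.update(setpoint=300.0, measured=lux)
    lux = 600.0 * duty              # hypothetical plant: 600 lx at full duty
print(f"steady duty = {duty:.2f}, lux = {lux:.1f}")
```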

  20. Incremental Learning of Perceptual Categories for Open-Domain Sketch Recognition

    National Research Council Canada - National Science Library

    Lovett, Andrew; Dehghani, Morteza; Forbus, Kenneth

    2007-01-01

    .... This paper describes an incremental learning technique for open-domain recognition. Our system builds generalizations for categories of objects based upon previous sketches of those objects and uses those generalizations to classify new sketches...

  1. Incremental learning for automated knowledge capture

    Energy Technology Data Exchange (ETDEWEB)

    Benz, Zachary O. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Basilico, Justin Derrick [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Davis, Warren Leon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dixon, Kevin R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jones, Brian S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Nathaniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wendt, Jeremy Daniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-12-01

    People responding to high-consequence national-security situations need tools to help them make the right decision quickly. The dynamic, time-critical, and ever-changing nature of these situations, especially those involving an adversary, requires models of decision support that can dynamically react as a situation unfolds and changes. Automated knowledge capture is a key part of creating individualized models of decision making in many situations because it has been demonstrated as a very robust way to populate computational models of cognition. However, existing automated knowledge capture techniques only populate a knowledge model with data prior to its use, after which the knowledge model is static and unchanging. In contrast, humans, including our national-security adversaries, continually learn, adapt, and create new knowledge as they make decisions and witness their effect. This artificial dichotomy between creation and use exists because the majority of automated knowledge capture techniques are based on traditional batch machine-learning and statistical algorithms. These algorithms are primarily designed to optimize the accuracy of their predictions and only secondarily, if at all, concerned with issues such as speed, memory use, or ability to be incrementally updated. Thus, when new data arrive, batch algorithms used for automated knowledge capture currently require significant recomputation, frequently from scratch, which makes them ill-suited for use in dynamic, time-critical, high-consequence decision-making environments. In this work we seek to explore and expand upon the capabilities of dynamic, incremental models that can adapt to an ever-changing feature space.

  2. Revisiting the fundamentals of single point incremental forming by

    DEFF Research Database (Denmark)

    Silva, Beatriz; Skjødt, Martin; Martins, Paulo A.F.

    2008-01-01

    Knowledge of the physics behind the fracture of material at the transition between the inclined wall and the corner radius of the sheet is of great importance for understanding the fundamentals of single point incremental forming (SPIF). How the material fractures, what is the state of strain...

  3. Will Incremental Hemodialysis Preserve Residual Function and Improve Patient Survival?

    Science.gov (United States)

    Davenport, Andrew

    2015-01-01

    The progressive loss of residual renal function in peritoneal dialysis patients is associated with increased mortality. It has been suggested that incremental dialysis may help preserve residual renal function and improve patient survival. Residual renal function depends upon both patient-related and dialysis-associated factors. Maintaining patients in an over-hydrated state may be associated with better preservation of residual renal function, but any benefit comes with a significant risk of cardiovascular consequences. Notably, it is only observational studies that have reported an association between dialysis patient survival and residual renal function; causality has not been established. The tenuous connections between residual renal function and outcomes and between incremental hemodialysis and residual renal function should temper our enthusiasm for interventions in this area. PMID:25385441

  4. Object class hierarchy for an incremental hypertext editor

    Directory of Open Access Journals (Sweden)

    A. Colesnicov

    1995-02-01

    Full Text Available The design of an object class hierarchy for a hypertext editor implementation is considered. The following basic classes were selected: the editor's coordinate system, the memory manager, the text buffer executing basic editing operations, the inherited hypertext buffer, the edit window, and the multi-window shell. Special hypertext editing features, support for incremental hypertext creation, and further generalizations are discussed.

  5. Incremental Learning of Skill Collections based on Intrinsic Motivation

    Directory of Open Access Journals (Sweden)

    Jan Hendrik Metzen

    2013-07-01

    Full Text Available Life-long learning of reusable, versatile skills is a key prerequisite for embodied agents that act in a complex, dynamic environment and are faced with different tasks over their lifetime. We address the question of how an agent can learn useful skills efficiently during a developmental period, i.e., when no task is imposed on him and no external reward signal is provided. Learning of skills in a developmental period needs to be incremental and self-motivated. We propose a new incremental, task-independent skill discovery approach that is suited for continuous domains. Furthermore, the agent learns specific skills based on intrinsic motivation mechanisms that determine on which skills learning is focused at a given point in time. We evaluate the approach in a reinforcement learning setup in two continuous domains with complex dynamics. We show that an intrinsically motivated, skill-learning agent outperforms an agent which learns task solutions from scratch. Furthermore, we compare different intrinsic motivation mechanisms and how efficiently they make use of the agent's developmental period.

  6. Efficient incremental relaying for packet transmission over fading channels

    KAUST Repository

    Fareed, Muhammad Mehboob

    2014-07-01

    In this paper, we propose a novel relaying scheme for packet transmission over fading channels, which improves the spectral efficiency of cooperative diversity systems by utilizing limited feedback from the destination. Our scheme capitalizes on the fact that relaying is only required when the direct transmission suffers deep fading. We calculate the packet error rate for the proposed efficient incremental relaying (EIR) scheme with both amplify-and-forward and decode-and-forward relaying. We compare the performance of the EIR scheme with the threshold-based incremental relaying (TIR) scheme. It is shown that the efficiency of the TIR scheme is better for lower values of the threshold. However, the efficiency of the TIR scheme for higher values of the threshold is outperformed by the EIR. In addition, three new threshold-based adaptive EIR schemes are devised to further improve the efficiency of the EIR scheme. We calculate the packet error rate and the efficiency of these new schemes to provide analytical insight. © 2014 IEEE.

  7. Table incremental slow injection CE-CT in lung cancer

    International Nuclear Information System (INIS)

    Yoshida, Shoji; Maeda, Tomoho; Morita, Masaru

    1988-01-01

    The purpose of this study is to evaluate tumor enhancement in lung cancer using a table-incremental study with slow injection of contrast media. Early serial images of 8 slices were obtained during the slow injection (1.5 ml/sec) of contrast media. Following the early images, delayed images of the same 8 slices were taken 2 minutes later. Characteristic enhancement patterns of the primary cancer and metastatic mediastinal lymph nodes were recognized in this study. Enhancement of the primary lesion was classified into 4 patterns: irregular geographic, heterogeneous, homogeneous, and rim-enhanced. In metastatic mediastinal lymphadenopathy, three enhancement patterns were obtained: heterogeneous, homogeneous, and ring-enhanced. Some characteristic enhancement patterns according to the histopathological findings of the lung cancer were obtained. Using this incremental slow-injection CE-CT, precise information about the relationship between the lung cancer and adjacent mediastinal structures, as well as distinct staining patterns of the tumor and mediastinal lymph nodes, was obtained. (author)

  8. Broad Learning System: An Effective and Efficient Incremental Learning System Without the Need for Deep Architecture.

    Science.gov (United States)

    Chen, C L Philip; Liu, Zhulin

    2018-01-01

    The Broad Learning System (BLS), which aims to offer an alternative way of learning in deep structure, is proposed in this paper. Deep structures and learning suffer from a time-consuming training process because of the large number of connecting parameters in filters and layers. Moreover, they require a complete retraining process if the structure is not sufficient to model the system. The BLS is established in the form of a flat network, where the original inputs are transferred and placed as "mapped features" in feature nodes and the structure is expanded in the wide sense through "enhancement nodes." Incremental learning algorithms are developed for fast remodeling in broad expansion without a retraining process when the network needs to be expanded. Two incremental learning algorithms are given, for the increment of the feature nodes (or filters in a deep structure) and for the increment of the enhancement nodes. The designed model and algorithms are very versatile for selecting a model rapidly. In addition, another incremental learning algorithm is developed for the case in which a system that has already been modeled encounters new incoming input. Specifically, the system can be remodeled in an incremental way without entire retraining from the beginning. A satisfactory result for model reduction using singular value decomposition is presented to simplify the final structure. Compared with existing deep neural networks, experimental results on the Modified National Institute of Standards and Technology database and the NYU NORB object recognition dataset demonstrate the effectiveness of the proposed BLS.
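
    The enhancement-node increment can be sketched with a block pseudoinverse update of the kind the BLS construction relies on: when new columns H are appended to the feature matrix A, the pseudoinverse and the output weights are updated from the old pseudoinverse rather than recomputed. The sizes and activations below are arbitrary assumptions, and the update assumes the new columns are not fully contained in the span of A.

```python
# Incremental output-weight update when enhancement-node columns H are appended.
import numpy as np

def add_enhancement_nodes(A_pinv, A, H, W, Y):
    D = A_pinv @ H                 # projection of new columns onto old ones
    C = H - A @ D                  # component of H outside span(A)
    B = np.linalg.pinv(C)          # full-column-rank case of the update
    A_new_pinv = np.vstack([A_pinv - D @ B, B])
    W_new = np.vstack([W - D @ (B @ Y), B @ Y])
    return A_new_pinv, W_new

rng = np.random.default_rng(3)
X, Y = rng.normal(size=(300, 10)), rng.normal(size=(300, 2))
A = np.tanh(X @ rng.normal(size=(10, 20)))        # initial mapped features
A_pinv = np.linalg.pinv(A)
W = A_pinv @ Y                                    # least-squares output weights

H = np.tanh(X @ rng.normal(size=(10, 5)))         # new enhancement nodes
A_pinv2, W2 = add_enhancement_nodes(A_pinv, A, H, W, Y)

# Check against retraining from scratch:
W_ref = np.linalg.pinv(np.hstack([A, H])) @ Y
print(np.allclose(W2, W_ref, atol=1e-6))
```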

  9. Phase retrieval via incremental truncated amplitude flow algorithm

    Science.gov (United States)

    Zhang, Quanbing; Wang, Zhifa; Wang, Linjie; Cheng, Shichao

    2017-10-01

    This paper considers the phase retrieval problem of recovering an unknown signal from given quadratic measurements. A phase retrieval algorithm based on Incremental Truncated Amplitude Flow (ITAF), which combines the ITWF algorithm and the TAF algorithm, is proposed. The proposed ITAF algorithm enhances the initialization by performing both of the truncation methods used in ITWF and TAF, respectively, and improves performance in the gradient stage by applying the incremental method proposed in ITWF to the loop stage of TAF. Moreover, the original sampling vectors and measurements are preprocessed before initialization according to the variance of the sensing matrix. Simulation experiments verified the feasibility and validity of the proposed ITAF algorithm. The experimental results show that it can obtain a higher success rate and faster convergence speed compared with other algorithms. Especially, for noiseless random Gaussian signals, ITAF can recover any real-valued signal accurately from magnitude measurements whose number is about 2.5 times the signal length, which is close to the theoretical limit (about 2 times the signal length). And it usually converges to the optimal solution within 20 iterations, which is much less than the state-of-the-art algorithms.
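
    A heavily simplified, real-valued sketch of the two ingredients the abstract combines: a spectral initialization and incremental (one-sample-at-a-time) amplitude-flow gradient steps that skip samples judged unreliable. The step size, the crude truncation rule, and the problem sizes are our assumptions and are not the paper's exact ITAF rules.

```python
# Incremental amplitude-flow phase retrieval (real-valued toy problem).
import numpy as np

rng = np.random.default_rng(1)
n, m = 64, 6 * 64
A = rng.normal(size=(m, n))
x = rng.normal(size=n)                    # signal to recover
y = np.abs(A @ x)                         # magnitude-only measurements

# Spectral initialization: leading eigenvector of sum_i y_i^2 a_i a_i^T.
Y = (A * (y ** 2)[:, None]).T @ A / m
w, V = np.linalg.eigh(Y)
z = V[:, -1] * np.sqrt(np.mean(y ** 2))   # scale guess to the signal norm

mu = 0.2 / n                              # step size (assumption)
for epoch in range(100):
    for i in rng.permutation(m):          # incremental: one sample at a time
        p = A[i] @ z
        if np.abs(p) < 0.1 * y[i]:        # crude truncation of unreliable samples
            continue
        z -= mu * (np.abs(p) - y[i]) * np.sign(p) * A[i]

err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
print(f"relative error (up to global sign): {err:.2e}")
```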

  10. A simple extension of contraction theory to study incremental stability properties

    DEFF Research Database (Denmark)

    Jouffroy, Jerome

    Contraction theory is a recent tool enabling to study the stability of nonlinear systems trajectories with respect to one another, and therefore belongs to the class of incremental stability methods. In this paper, we extend the original definition of contraction theory to incorporate...... in an explicit manner the control input of the considered system. Such an extension, called universal contraction, is quite analogous in spirit to the well-known Input-to-State Stability (ISS). It serves as a simple formulation of incremental ISS, external stability, and detectability in a differential setting....... The hierarchical combination result of contraction theory is restated in this framework, and a differential small-gain theorem is derived from results already available in Lyapunov theory....

  11. Two-terminal video coding.

    Science.gov (United States)

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  12. Language as skill

    DEFF Research Database (Denmark)

    Chater, Nick; McCauley, Stewart M.; Christiansen, M. H.

    2016-01-01

    Are comprehension and production a single, integrated skill, or are they separate processes drawing on a shared abstract knowledge of language? We argue that a fundamental constraint on memory, the Now-or-Never bottleneck, implies that language processing is incremental and that language learning occurs on-line. These properties are difficult to reconcile with the 'abstract knowledge' viewpoint, and crucially suggest that language comprehension and production are facets of a unitary skill. This viewpoint is exemplified in the Chunk-Based Learner, a computational acquisition model that processes incrementally and learns on-line. The model both parses and produces language, and implements the idea that language acquisition is nothing more than learning to process. We suggest that the Now-or-Never bottleneck also provides a strong motivation for unified perception-production models in other domains.

  13. Size, Stability and Incremental Budgeting Outcomes in Public Universities.

    Science.gov (United States)

    Schick, Allen G.; Hills, Frederick S.

    1982-01-01

    Examined the influence of relative size in the analysis of total dollar and workforce budgets, and changes in total dollar and workforce budgets when correlational/regression methods are used. Data suggested that size dominates the analysis of total budgets, and is not a factor when discretionary dollar increments are analyzed. (JAC)

  14. Performance of hybrid-ARQ with incremental redundancy over relay channels

    KAUST Repository

    Chelli, Ali; Alouini, Mohamed-Slim

    2012-01-01

    In this paper, we consider a relay network consisting of a source, a relay, and a destination. The source transmits a message to the destination using hybrid automatic repeat request (HARQ) with incremental redundancy (IR). The relay overhears

  15. Thermomechanical simulations and experimental validation for high speed incremental forming

    Science.gov (United States)

    Ambrogio, Giuseppina; Gagliardi, Francesco; Filice, Luigino; Romero, Natalia

    2016-10-01

    Incremental sheet forming (ISF) consists of deforming only a small region of the workpiece with a punch driven by an NC machine. The drawback of this process is its slowness. In this study, a high-speed variant has been investigated from both numerical and experimental points of view. The aim has been the design of an FEM model able to reproduce the material behavior during the high-speed process by defining a thermomechanical model. An experimental campaign has been performed on a CNC lathe at high speed to test the feasibility of the process. The first results have shown that the material performs the same as in conventional-speed ISF and, in some cases, behaves better due to the temperature increment. An accurate numerical simulation has been performed to investigate the material behavior during the high-speed process, substantially confirming the experimental evidence.

  16. Incremental Volumetric Remapping Method: Analysis and Error Evaluation

    International Nuclear Information System (INIS)

    Baptista, A. J.; Oliveira, M. C.; Rodrigues, D. M.; Menezes, L. F.; Alves, J. L.

    2007-01-01

    In this paper the error associated with the remapping problem is analyzed. A range of numerical results that assess the performance of three different remapping strategies, applied to FE meshes typically used in sheet metal forming simulation, is evaluated. One of the selected strategies is the previously presented Incremental Volumetric Remapping (IVR) method, which was implemented in the in-house code DD3TRIM. The IVR method is founded on the premise that the state variables at all points associated with a Gauss volume of a given element are equal to the state variable quantities at the corresponding Gauss point. Hence, in a typical remapping procedure between a donor and a target mesh, the variables to be associated with a target Gauss volume (and point) are determined by a weighted average. The weight function is the percentage of the Gauss volume of each donor element that is located inside the target Gauss volume. The calculation of the intersecting volumes between the donor and target Gauss volumes is attained incrementally, for each target Gauss volume, by means of a discrete approach. The other two remapping strategies selected are based on the interpolation/extrapolation of variables using the finite element shape functions or moving least squares interpolants. The performance of the three remapping strategies is assessed with two tests. The first remapping test was taken from the literature; it consists of remapping a rotating symmetrical mesh successively, throughout N increments, over an angular span of 90 deg. The second remapping error evaluation test consists of remapping an irregular-element-shape target mesh from a given regular-element-shape donor mesh and proceeding with the inverse operation. In this second test the computational effort is also measured. The results showed that the error level associated with the IVR can be very low, with a stable evolution over the number of remapping procedures, when compared with the
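
    A 1-D toy version of the overlap-weighted transfer that IVR performs with Gauss volumes in 3-D: each target cell receives the average of the donor-cell values weighted by the donor/target overlap lengths (volumes in the real method). The meshes and values below are illustrative.

```python
# Overlap-weighted remapping of cell-centred values between two 1-D meshes.
import numpy as np

def remap_1d(donor_edges, donor_vals, target_edges):
    out = np.zeros(len(target_edges) - 1)
    for j in range(len(target_edges) - 1):
        t0, t1 = target_edges[j], target_edges[j + 1]
        w_sum = 0.0
        for i in range(len(donor_edges) - 1):
            d0, d1 = donor_edges[i], donor_edges[i + 1]
            overlap = max(0.0, min(t1, d1) - max(t0, d0))
            if overlap > 0.0:
                out[j] += overlap * donor_vals[i]
                w_sum += overlap
        if w_sum > 0.0:
            out[j] /= w_sum          # weighted average of the state variable
    return out

donor = np.linspace(0.0, 1.0, 11)                          # regular donor mesh
vals = np.sin(np.pi * 0.5 * (donor[:-1] + donor[1:]))      # cell-centred values
target = np.sort(np.random.default_rng(2).uniform(0.0, 1.0, 7))
target = np.concatenate(([0.0], target, [1.0]))            # irregular target mesh
print(remap_1d(donor, vals, target))
```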

  17. Methods of determining incremental energy costs for economic dispatch and inter-utility interchange in Canadian utilities

    International Nuclear Information System (INIS)

    El-Hawary, M.E.; El-Hawary, F.; Mbamalu, G.A.N.

    1991-01-01

    A questionnaire was mailed to ten Canadian utilities to determine the methods the utilities use in determining the incremental cost of delivering energy at any time. The questionnaire was divided into three parts: generation, transmission and general. The generation section dealt with heat rates, fuel, operation and maintenance, startup and shutdown, and method of prioritizing and economic evaluation of interchange transactions. Transmission dealt with inclusion of transmission system incremental maintenance costs, and transmission losses determination. The general section dealt with incremental costs aspects, and various other economic considerations. A summary is presented of responses to the questionnaire

  18. Endogenous-cue prospective memory involving incremental updating of working memory: an fMRI study.

    Science.gov (United States)

    Halahalli, Harsha N; John, John P; Lukose, Ammu; Jain, Sanjeev; Kutty, Bindu M

    2015-11-01

    Prospective memory paradigms are conventionally classified on the basis of event-, time-, or activity-based intention retrieval. In the vast majority of such paradigms, intention retrieval is provoked by some kind of external event. However, prospective memory retrieval cues that prompt intention retrieval in everyday life are commonly endogenous, i.e., linked to a specific imagined retrieval context. We describe herein a novel prospective memory paradigm wherein the endogenous cue is generated by incremental updating of working memory, and investigate the hemodynamic correlates of this task. Eighteen healthy adult volunteers underwent functional magnetic resonance imaging while they performed a prospective memory task where the delayed intention was triggered by an endogenous cue generated by incremental updating of working memory. Working memory and ongoing task control conditions were also administered. The 'endogenous-cue prospective memory condition' with incremental working memory updating was associated with maximum activations in the right rostral prefrontal cortex, and additional activations in the brain regions that constitute the bilateral fronto-parietal network, central and dorsal salience networks as well as cerebellum. In the working memory control condition, maximal activations were noted in the left dorsal anterior insula. Activation of the bilateral dorsal anterior insula, a component of the central salience network, was found to be unique to this 'endogenous-cue prospective memory task' in comparison to previously reported exogenous- and endogenous-cue prospective memory tasks without incremental working memory updating. Thus, the findings of the present study highlight the important role played by the dorsal anterior insula in incremental working memory updating that is integral to our endogenous-cue prospective memory task.

  19. An Incremental Time-delay Neural Network for Dynamical Recurrent Associative Memory

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    An incremental time-delay neural network based on synapse growth, which is suitable for dynamic control and learning in autonomous robots, is proposed to improve the learning and retrieval performance of a dynamical recurrent associative memory architecture. The model allows the steady and continuous establishment of associative memory for spatio-temporal regularities and time series in discrete sequences of inputs. The inserted hidden units can be regarded as long-term memories that expand the capacity of the network and may fade away under certain conditions. Preliminary experiments have shown that this incremental network may be a promising approach to endowing autonomous robots with the ability to adapt to new data without destroying previously learned patterns. The system also benefits from its potentially chaotic character for emergent behavior.

  20. Analytic description of the frictionally engaged in-plane bending process incremental swivel bending (ISB)

    Science.gov (United States)

    Frohn, Peter; Engel, Bernd; Groth, Sebastian

    2018-05-01

    Kinematic forming processes shape geometries through the process parameters in order to achieve more universal process utilization across geometric configurations. The kinematic forming process Incremental Swivel Bending (ISB) bends sheet metal strips or profiles in plane. The sequence for bending an arc increment is composed of the steps clamping, bending, force release and feed. The bending moment is frictionally engaged by two clamping units in a laterally adjustable bending pivot. A minimum clamping force preventing the material from slipping through the clamping units is a crucial criterion for achieving a well-defined incremental arc. Therefore, an analytic description of a single bent increment is developed in this paper. The bending moment is calculated from the uniaxial stress distribution over the profile's width, depending on the bending pivot's position. Using a Coulomb-based friction model, the necessary clamping force is described as a function of friction, offset, the dimensions of the clamping tools, strip thickness, and material parameters. Boundaries for the uniaxial stress calculation are given as functions of friction, tool dimensions and strip thickness. The results indicate that moving the bending pivot to an eccentric position significantly affects the process bending moment and, hence, the clamping force, which is given as a function of yield stress and hardening exponent. FE simulations validate the model with satisfactory agreement.
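
    To make the moment/clamping-force relationship concrete, here is a deliberately simplified numerical sketch. It assumes a rigid-plastic Hollomon hardening law sigma = K*eps^n, a centric pivot, a linear strain profile over the strip width, and a single effective friction lever arm per clamping unit; none of these symbols or values come from the paper itself.

```python
def bending_moment(K, n, kappa, b, t):
    """In-plane pure bending of a rectangular strip (width b, thickness t)
    with rigid-plastic Hollomon hardening sigma = K*eps**n and a linear
    strain profile eps(y) = kappa*|y| over the width. Toy model only."""
    return 2.0 * t * K * kappa ** n * (b / 2.0) ** (n + 2) / (n + 2)

def min_clamping_force(M, mu, lever):
    """Smallest normal force per clamping unit so that Coulomb friction
    (coefficient mu, effective lever arm 'lever', two clamping units)
    transmits the bending moment M without slip."""
    return M / (2.0 * mu * lever)

M = bending_moment(K=600e6, n=0.2, kappa=0.5, b=0.04, t=0.002)  # SI units
print(f"M = {M:.1f} N*m -> N_min = {min_clamping_force(M, 0.15, 0.05):.0f} N")
```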

  1. Public Key Infrastructure Increment 2 (PKI Inc 2)

    Science.gov (United States)

    2016-03-01

    across the Global Information Grid (GIG) and at rest. Using authoritative data, obtained via face-to-face identity proofing, PKI creates a credential ... operating on a network by provision of assured PKI-based credentials for any device on that network. PKI Increment One made significant ... provide assured/secure validation of revocation of an electronic/digital credential. 2. DoD PKI shall support assured revocation status requests of

  2. Translation of incremental talk test responses to steady-state exercise training intensity.

    Science.gov (United States)

    Lyon, Ellen; Menke, Miranda; Foster, Carl; Porcari, John P; Gibson, Mark; Bubbers, Terresa

    2014-01-01

    The Talk Test (TT) is a submaximal, incremental exercise test that has been shown to be useful in prescribing exercise training intensity. It is based on a subject's ability to speak comfortably during exercise. This study defined the amount of reduction in absolute workload intensity from an incremental exercise test using the TT that gives an appropriate absolute training intensity for cardiac rehabilitation patients. Patients in an outpatient rehabilitation program (N = 30) performed an incremental exercise test with the TT administered at every 2-minute stage. Patients rated their speech comfort after reciting a standardized paragraph. Anything other than a "yes" response was considered the "equivocal" stage, while all preceding stages were "positive" stages. The last stage with an unequivocally positive ability to speak was the Last Positive (LP), and the preceding stages were LP-1 and LP-2. Subsequently, three 20-minute steady-state training bouts were performed in random order at the absolute workloads of the LP, LP-1, and LP-2 stages of the incremental test. Speech comfort, heart rate (HR), and rating of perceived exertion (RPE) were recorded every 5 minutes. The 20-minute exercise training bout was completed in full by 19 patients at LP, 28 at LP-1, and all 30 at LP-2. Heart rate, RPE, and speech comfort were similar throughout the LP-1 and LP-2 bouts, but the LP stage was markedly more difficult. Steady-state exercise training intensity was easily and appropriately prescribed at the intensities associated with the LP-1 and LP-2 stages of the TT. The LP stage may be too difficult for patients in a cardiac rehabilitation program.

  3. Some theoretical aspects of capacity increment in gaseous diffusion

    Energy Technology Data Exchange (ETDEWEB)

    Coates, J. H.; Guais, J. C.; Lamorlette, G.

    1975-09-01

    Facing the sharply growing demand for enrichment services, the implementation of new capacity must be planned within an optimized scheme spread out in time. In this paper the alternative solutions are studied, first for a single increment decision and then within an optimal schedule. The limits of the analysis are discussed.

  4. How to Perform Precise Soil and Sediment Sampling? One solution: The Fine Increment Soil Collector (FISC)

    Energy Technology Data Exchange (ETDEWEB)

    Mabit, L.; Toloza, A. [Soil and Water Management and Crop Nutrition Laboratory, IAEA, Seibersdorf (Austria); Meusburger, K.; Alewell, C. [Environmental Geosciences, Department of Environmental Sciences, University of Basel, Basel (Switzerland); Iurian, A-R. [Babes-Bolyai University, Faculty of Environmental Science and Engineering, Cluj-Napoca (Romania); Owens, P. N. [Environmental Science Program and Quesnel River Research Centre, University of Northern British Columbia, Prince George, British Columbia (Canada)

    2014-07-15

    Soil and sediment related research for terrestrial agri-environmental assessments requires accurate depth-incremental sampling to perform detailed analysis of the physical, geochemical and biological properties of soil and exposed sediment profiles. Existing equipment does not allow collecting soil/sediment increments at millimetre resolution. The Fine Increment Soil Collector (FISC), developed by the SWMCN Laboratory, allows much greater precision in incremental soil/sediment sampling. It facilitates the easy recovery of collected material by using a simple screw-thread extraction system (see Figure 1). The FISC has been designed specifically to enable standardized scientific investigation of shallow soil/sediment samples. In particular, applications have been developed in two IAEA Coordinated Research Projects (CRPs): CRP D1.20.11 on “Integrated Isotopic Approaches for an Area-wide Precision Conservation to Control the Impacts of Agricultural Practices on Land Degradation and Soil Erosion” and CRP D1.50.15 on “Response to Nuclear Emergencies Affecting Food and Agriculture.”

  5. How to Perform Precise Soil and Sediment Sampling? One solution: The Fine Increment Soil Collector (FISC)

    International Nuclear Information System (INIS)

    Mabit, L.; Toloza, A.; Meusburger, K.; Alewell, C.; Iurian, A-R.; Owens, P.N.

    2014-01-01

    Soil and sediment related research for terrestrial agri-environmental assessments requires accurate depth-incremental sampling to perform detailed analysis of the physical, geochemical and biological properties of soil and exposed sediment profiles. Existing equipment does not allow collecting soil/sediment increments at millimetre resolution. The Fine Increment Soil Collector (FISC), developed by the SWMCN Laboratory, allows much greater precision in incremental soil/sediment sampling. It facilitates the easy recovery of collected material by using a simple screw-thread extraction system (see Figure 1). The FISC has been designed specifically to enable standardized scientific investigation of shallow soil/sediment samples. In particular, applications have been developed in two IAEA Coordinated Research Projects (CRPs): CRP D1.20.11 on “Integrated Isotopic Approaches for an Area-wide Precision Conservation to Control the Impacts of Agricultural Practices on Land Degradation and Soil Erosion” and CRP D1.50.15 on “Response to Nuclear Emergencies Affecting Food and Agriculture.”

  6. Respiratory ammonia output and blood ammonia concentration during incremental exercise

    NARCIS (Netherlands)

    Ament, W; Huizenga; Kort, E; van der Mark, TW; Grevink, RG; Verkerke, GJ

    The aim of this study was to investigate whether the increase of ammonia concentration and lactate concentration in blood was accompanied by an increased expiration of ammonia during graded exercise. Eleven healthy subjects performed an incremental cycle ergometer test. Blood ammonia, blood lactate

  7. Identifying the Academic Rising Stars via Pairwise Citation Increment Ranking

    KAUST Repository

    Zhang, Chuxu; Liu, Chuang; Yu, Lu; Zhang, Zi-Ke; Zhou, Tao

    2017-01-01

    success academic careers. In this work, given a set of young researchers who have published the first first-author paper recently, we solve the problem of how to effectively predict the top k% researchers who achieve the highest citation increment in Δt

  8. Incremental concept learning with few training examples and hierarchical classification

    NARCIS (Netherlands)

    Bouma, H.; Eendebak, P.T.; Schutte, K.; Azzopardi, G.; Burghouts, G.J.

    2015-01-01

    Object recognition and localization are important to automatically interpret video and allow better querying on its content. We propose a method for object localization that learns incrementally and addresses four key aspects. Firstly, we show that for certain applications, recognition is feasible

  9. Incremental learning of skill collections based on intrinsic motivation

    Science.gov (United States)

    Metzen, Jan H.; Kirchner, Frank

    2013-01-01

    Life-long learning of reusable, versatile skills is a key prerequisite for embodied agents that act in a complex, dynamic environment and are faced with different tasks over their lifetime. We address the question of how an agent can learn useful skills efficiently during a developmental period, i.e., when no task is imposed on it and no external reward signal is provided. Learning of skills in a developmental period needs to be incremental and self-motivated. We propose a new incremental, task-independent skill discovery approach that is suited for continuous domains. Furthermore, the agent learns specific skills based on intrinsic motivation mechanisms that determine on which skills learning is focused at a given point in time. We evaluate the approach in a reinforcement learning setup in two continuous domains with complex dynamics. We show that an intrinsically motivated, skill-learning agent outperforms an agent which learns task solutions from scratch. Furthermore, we compare different intrinsic motivation mechanisms and how efficiently they make use of the agent's developmental period. PMID:23898265

  10. The Boundary Between Planning and Incremental Budgeting: Empirical Examination in a Publicly-Owned Corporation

    OpenAIRE

    S. K. Lioukas; D. J. Chambers

    1981-01-01

    This paper is a study within the field of public budgeting. It focuses on the capital budget, and it attempts to model and analyze the capital budgeting process using a framework previously developed in the literature of incremental budgeting. Within this framework the paper seeks to determine empirically whether the movement of capital expenditure budgets can be represented as the routine application of incremental adjustments over an existing base of allocations and whether further, forward...

  11. Maximal power output during incremental exercise by resistance and endurance trained athletes.

    Science.gov (United States)

    Sakthivelavan, D S; Sumathilatha, S

    2010-01-01

    This study was aimed at comparing the maximal power output of resistance-trained and endurance-trained athletes during incremental exercise. Thirty male athletes who received resistance training (Group I) and thirty male athletes of a similar age group who received endurance training (Group II) for a period of more than 1 year were chosen for the study. Physical parameters were measured and exercise stress testing was done on a cycle ergometer with a portable gas analyzing system. The maximal progressive incremental cycle ergometer power output at peak exercise and carbon dioxide production at VO2max were measured. Highly significant (P < 0.05) differences were observed between the two groups. Such measurements can be used as biofeedback and perk up the athlete's performance.

  12. Enthalpy-increment measurements for CsI(s) and Cs2CrO4(s) by high-temperature Calvet calorimetry

    International Nuclear Information System (INIS)

    Venugopal, V.; Agarwal, R.; Roy, K.N.; Prasad, R.; Sood, D.D.

    1987-01-01

    Molar thermodynamic properties of CsI(s) and Cs2CrO4(s) have been evaluated by enthalpy-increment measurements, using a Calvet high-temperature calorimeter. Least-squares analyses were performed on the enthalpy-increment results. Data are presented in tabular form for the dependence of enthalpy increments on temperature, in the range 333 to 822 K, for both caesium compounds, along with the thermal properties of the compounds. Good agreement is found between the present data and previously reported results on reduced enthalpy increments of CsI(s) and Cs2CrO4(s). (U.K.)

  13. Toward translational incremental similarity-based reasoning in breast cancer grading

    Science.gov (United States)

    Tutac, Adina E.; Racoceanu, Daniel; Leow, Wee-Keng; Müller, Henning; Putti, Thomas; Cretu, Vladimir

    2009-02-01

    One of the fundamental issues in bridging the gap between the proliferation of Content-Based Image Retrieval (CBIR) systems in the scientific literature and the deficiency of their usage in the medical community is the characteristic of CBIR of accessing information by images and/or text only. Yet, the way physicians reason about patients leads intuitively to a case representation. Hence, a proper solution to overcome this gap is to consider a CBIR approach inspired by Case-Based Reasoning (CBR), which naturally introduces medical knowledge structured by cases. Moreover, in a CBR system, the knowledge is incrementally added and learned. The purpose of this study is to initiate a translational solution from CBIR algorithms to clinical practice, using a CBIR/CBR hybrid approach. Therefore, we advance the idea of translational incremental similarity-based reasoning (TISBR), using combined CBIR and CBR characteristics: incremental learning of medical knowledge, medical case-based structuring of the knowledge (CBR), image usage to retrieve similar cases (CBIR), and the similarity concept (central to both paradigms). For this purpose, three major axes are explored: indexing, case retrieval and search refinement, applied to Breast Cancer Grading (BCG), a powerful breast cancer prognosis exam. The effectiveness of this strategy is currently being evaluated for the indexing step on cases provided by the Pathology Department of Singapore National University Hospital. With its current accuracy, TISBR opens interesting perspectives for complex reasoning in future medical research, paving the way to better knowledge traceability and a better acceptance rate of computer-aided diagnosis assistance among practitioners.

  14. Evaluation of incremental reactivity and its uncertainty in Southern California.

    Science.gov (United States)

    Martien, Philip T; Harley, Robert A; Milford, Jana B; Russell, Armistead G

    2003-04-15

    The incremental reactivity (IR) and relative incremental reactivity (RIR) of carbon monoxide and 30 individual volatile organic compounds (VOC) were estimated for the South Coast Air Basin using two photochemical air quality models: a 3-D, grid-based model and a vertically resolved trajectory model. Both models include an extended version of the SAPRC99 chemical mechanism. For the 3-D modeling, the decoupled direct method (DDM-3D) was used to assess reactivities. The trajectory model was applied to estimate uncertainties in reactivities due to uncertainties in chemical rate parameters, deposition parameters, and emission rates using Monte Carlo analysis with Latin hypercube sampling. For most VOC, RIRs were found to be consistent in rankings with those produced by Carter using a box model. However, 3-D simulations show that coastal regions, upwind of most of the emissions, have comparatively low IR but higher RIR than predicted by box models for C4-C5 alkenes and carbonyls that initiate the production of HOx radicals. Biogenic VOC emissions were found to have a lower RIR than predicted by box model estimates, because emissions of these VOC were mostly downwind of the areas of primary ozone production. Uncertainties in RIR of individual VOC were found to be dominated by uncertainties in the rate parameters of their primary oxidation reactions. The coefficient of variation (COV) of most RIR values ranged from 20% to 30%, whereas the COV of absolute incremental reactivity ranged from about 30% to 40%. In general, uncertainty and variability both decreased when relative rather than absolute reactivity metrics were used.
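
    The Monte Carlo step with Latin hypercube sampling lends itself to a compact sketch: draw LHS samples of the uncertain rate parameters, push them through the reactivity calculation, and report the coefficient of variation. The model below is a stand-in function, not SAPRC99 or the trajectory model, and all numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm, qmc

def rir(k):
    """Stand-in for the photochemical model: maps uncertain rate
    parameters to a relative incremental reactivity value."""
    return k[:, 0] / (1.0 + k[:, 1])

n_params, n_samples = 2, 200
design = qmc.LatinHypercube(d=n_params, seed=1).random(n_samples)  # uniform (0,1)

# map the LHS design to lognormal rate-parameter uncertainty
# (sigma_ln = 0.3 around nominal values; illustrative)
k_nominal = np.array([1.0, 0.5])
k = k_nominal * np.exp(0.3 * norm.ppf(design))

values = rir(k)
cov = values.std(ddof=1) / values.mean()
print(f"RIR coefficient of variation: {cov:.1%}")
```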

  15. Cost of Incremental Expansion of an Existing Family Medicine Residency Program.

    Science.gov (United States)

    Ashkin, Evan A; Newton, Warren P; Toomey, Brian; Lingley, Ronald; Page, Cristen P

    2017-07-01

    Expanding residency training programs to address shortages in the primary care workforce is challenged by the present graduate medical education (GME) environment. The Medicare funding cap on new GME positions and reductions in the Health Resources and Services Administration (HRSA) Teaching Health Center (THC) GME program require innovative solutions to support primary care residency expansion. Sparse literature exists to assist in predicting the actual cost of incremental expansion of a family medicine residency program without federal or state GME support. In 2011, a collaboration was formed to develop a community health center (CHC) academic medical partnership (CHAMP), which created a THC as a training site for expansion of an existing family medicine residency program. The cost of expansion was a critical factor, as no federal GME funding or HRSA THC GME program support was available. Initial start-up costs were supported by a federal grant and local foundations. Careful financial analysis of the expansion has provided actual costs per resident of the incremental expansion of the residency. RESULTS: The CHAMP created a new THC and expanded the residency from eight to ten residents per year. The cost of expansion was approximately $72,000 per resident per year. The cost of incremental expansion of our residency program in the CHAMP model was more than 50% less than the recently reported cost of training in the HRSA THC GME program.

  16. 78 FR 22770 - Immigration Benefits Business Transformation, Increment I; Correction

    Science.gov (United States)

    2013-04-17

    ...-2009-0022] RIN 1615-AB83 Immigration Benefits Business Transformation, Increment I; Correction AGENCY...: Background On August 29, 2011, DHS issued a final rule titled, Immigration Benefits Business Transformation... business processes. In this notice, we are correcting three technical errors. DATES: The effective date of...

  17. Sustained change blindness to incremental scene rotation: a dissociation between explicit change detection and visual memory.

    Science.gov (United States)

    Hollingworth, Andrew; Henderson, John M

    2004-07-01

    In a change detection paradigm, the global orientation of a natural scene was incrementally changed in 1 degree intervals. In Experiments 1 and 2, participants demonstrated sustained change blindness to incremental rotation, often coming to consider a significantly different scene viewpoint as an unchanged continuation of the original view. Experiment 3 showed that participants who failed to detect the incremental rotation nevertheless reliably detected a single-step rotation back to the initial view. Together, these results demonstrate an important dissociation between explicit change detection and visual memory. Following a change, visual memory is updated to reflect the changed state of the environment, even if the change was not detected.

  18. Identifying the Academic Rising Stars via Pairwise Citation Increment Ranking

    KAUST Repository

    Zhang, Chuxu

    2017-08-02

    Predicting the fast-rising young researchers (the Academic Rising Stars) provides useful guidance to the research community, e.g., offering competitive candidates to universities for young faculty hiring, as they are expected to have successful academic careers. In this work, given a set of young researchers who have published their first first-author paper recently, we solve the problem of how to effectively predict the top k% of researchers who achieve the highest citation increment in Δt years. We explore a series of factors that can drive an author to be fast-rising and design a novel pairwise citation increment ranking (PCIR) method that leverages those factors to predict the academic rising stars. Experimental results on the large ArnetMiner dataset with over 1.7 million authors demonstrate the effectiveness of PCIR. Specifically, it outperforms all given benchmark methods, with over 8% average improvement. Further analysis demonstrates that temporal features are the best indicators for rising star prediction, while venue features are less relevant.

  19. Effects of frequency and duration on psychometric functions for detection of increments and decrements in sinusoids in noise.

    Science.gov (United States)

    Moore, B C; Peters, R W; Glasberg, B R

    1999-12-01

    Psychometric functions for detecting increments or decrements in the level of sinusoidal pedestals were measured for increment and decrement durations of 5, 10, 20, 50, 100, and 200 ms and for frequencies of 250, 1000, and 4000 Hz. The sinusoids were presented in background noise intended to mask spectral splatter. A three-interval, three-alternative procedure was used. The results indicated that, for increments, the detectability index d' was approximately proportional to ΔI/I. For decrements, d' was approximately proportional to ΔL. The slopes of the psychometric functions increased (indicating better performance) with increasing frequency for both increments and decrements. For increments, the slopes increased with increasing increment duration up to 200 ms at 250 and 1000 Hz, but at 4000 Hz they increased only up to 50 ms. For decrements, the slopes increased for durations up to 50 ms, and then remained roughly constant, for all frequencies. For a center frequency of 250 Hz, the slopes of the psychometric functions for increment detection increased with duration more rapidly than predicted by a "multiple-looks" hypothesis, i.e., more rapidly than the square root of duration, for durations up to 50 ms. For center frequencies of 1000 and 4000 Hz, the slopes increased less rapidly than predicted by a multiple-looks hypothesis, for durations greater than about 20 ms. The slopes of the psychometric functions for decrement detection increased with decrement duration at a rate slightly greater than the square root of duration, for durations up to 50 ms, at all three frequencies. For greater durations, the increase in slope was less than proportional to the square root of duration. The results were analyzed using a model incorporating a simulated auditory filter, a compressive nonlinearity, a sliding temporal integrator, and a decision device based on a template mechanism. The model took into account the effects of both the external noise and an assumed internal noise.
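
    The "multiple-looks" benchmark invoked above has a simple quantitative form: if an observer optimally combines N independent looks, sensitivity grows as d'_N = sqrt(N) * d'_1, so doubling the duration should raise d' by a factor of sqrt(2). A tiny illustration (the single-look d' value is an arbitrary assumption):

```python
import numpy as np

d1 = 0.4                                            # assumed d' for one 5-ms look
durations = np.array([5, 10, 20, 50, 100, 200])     # ms, as in the experiment
n_looks = durations / durations[0]
predicted = d1 * np.sqrt(n_looks)                   # multiple-looks prediction
for dur, dp in zip(durations, predicted):
    print(f"{dur:4d} ms: predicted d' = {dp:.2f}")
```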

  20. Systematic Luby Transform codes as incremental redundancy scheme

    CSIR Research Space (South Africa)

    Grobler, TL

    2011-09-01

    Full Text Available. Systematic Luby Transform Codes as Incremental Redundancy Scheme. T. L. Grobler, E. R. Ackermann, J. C. Olivier and A. J. van Zyl; Department of Electrical, Electronic and Computer Engineering, University of Pretoria; Defence, Peace, Safety and Security (DPSS), Council for Scientific and Industrial Research (CSIR), Pretoria; Department of Mathematics and Applied Mathematics, University of Pretoria.

  1. Hippotherapy acute impact on heart rate variability non-linear dynamics in neurological disorders.

    Science.gov (United States)

    Cabiddu, Ramona; Borghi-Silva, Audrey; Trimer, Renata; Trimer, Vitor; Ricci, Paula Angélica; Italiano Monteiro, Clara; Camargo Magalhães Maniglia, Marcela; Silva Pereira, Ana Maria; Rodrigues das Chagas, Gustavo; Carvalho, Eliane Maria

    2016-05-15

    Neurological disorders are associated with autonomic dysfunction. Hippotherapy (HT) is a treatment strategy that uses a horse in an interdisciplinary approach to the physical and mental rehabilitation of people with physical, mental and/or psychological disabilities. However, no studies have evaluated the effects of HT on autonomic control in these patients. Therefore, the objective of the present study was to investigate the effects of a single HT session on cardiovascular autonomic control by time-domain and non-linear analysis of heart rate variability (HRV). The HRV signal was recorded continuously in twelve children affected by neurological disorders during a HT session, consisting of a 10-minute sitting-position rest (P1), a 15-minute preparatory phase sitting on the horse (P2), a 15-minute HT session (P3) and a final 10-minute sitting-position recovery (P4). Time-domain and non-linear HRV indices, including Sample Entropy (SampEn), Lempel-Ziv Complexity (LZC) and Detrended Fluctuation Analysis (DFA), were calculated for each treatment phase. We observed that SampEn increased during P3 (SampEn = 0.56 ± 0.10) with respect to P1 (SampEn = 0.40 ± 0.14, p < 0.05), while DFA decreased during P3 (DFA = 1.10 ± 0.10) with respect to P1 (DFA = 1.26 ± 0.14, p < 0.05). A significant SDRR increase (p < 0.05) was observed during the recovery period P4 (SDRR = 50 ± 30 ms) with respect to the HT session period P3 (SDRR = 30 ± 10 ms). Our results suggest that HT might benefit children with disabilities attributable to neurological disorders by eliciting an acute autonomic response during the therapy and during the recovery period. Copyright © 2016 Elsevier Inc. All rights reserved.
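
    Since the Lempel-Ziv complexity used here is the same production-factorization idea discussed elsewhere in this collection, a minimal sketch may help: binarize the RR series around its median and count LZ76-style phrases. The exact LZC variant and normalization used by the authors are not specified in the abstract, so this shows one common convention, with synthetic data.

```python
import numpy as np

def lempel_ziv_complexity(s: str) -> int:
    """Count the phrases in an LZ76-style parsing of the symbol string s."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # grow the phrase while it already occurs in the preceding text
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

# synthetic RR-interval series (ms) standing in for the recorded HRV signal
rr = np.random.default_rng(2).normal(800, 50, 300)
symbols = ''.join('1' if x > np.median(rr) else '0' for x in rr)
c = lempel_ziv_complexity(symbols)
n = len(symbols)
print(f"LZC = {c}, normalized = {c * np.log2(n) / n:.2f}")  # one common normalization
```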

  2. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    Science.gov (United States)

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

    Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is always symmetric, is positive, always provides 1.0 for self-similarity, and it can directly be used with Support Vector Machines (SVMs) in classification problems, contrary to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly well suited to big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernel, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed, three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. LZW-Kernel is implemented as standalone C code and is a free open-source program distributed under the GPLv3 license and can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
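
    The code-word idea can be sketched in a few lines: run LZW over a sequence, keep the emitted code words, and compare the resulting sets. Note the similarity below is a Jaccard overlap chosen for illustration, not the kernel function defined in the paper; it merely shares the stated properties of symmetry and self-similarity equal to 1.0.

```python
def lzw_code_words(seq: str) -> set:
    """Collect the code words emitted while LZW-compressing seq."""
    dictionary = set(seq)          # initialize with the observed alphabet
    words, w = set(), ""
    for c in seq:
        if w + c in dictionary:
            w += c
        else:
            dictionary.add(w + c)  # new dictionary entry
            words.add(w)           # w is emitted as a code word
            w = c
    if w:
        words.add(w)
    return words

def code_word_similarity(a: str, b: str) -> float:
    """Jaccard overlap of LZW code words: symmetric, 1.0 for identical
    sequences. A stand-in in the spirit of the LZW-Kernel, not the
    paper's actual kernel."""
    wa, wb = lzw_code_words(a), lzw_code_words(b)
    return len(wa & wb) / len(wa | wb)

print(code_word_similarity("MKVLAAGLLALLASARA", "MKVLAAGILALLASARA"))
print(code_word_similarity("MKVLAAGLLALLASARA", "MKVLAAGLLALLASARA"))  # 1.0
```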

  3. Deliberate and Crisis Action Planning and Execution Segments Increment 2A (DCAPES Inc 2A)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report. Program Name: Deliberate and Crisis Action Planning and Execution Segments Increment 2A (DCAPES Inc 2A). DoD Component: Air Force. Responsible Office: Program ... Acquisition Program Baseline (APB) dated March 9, 2015. Program Description: Deliberate and Crisis Action Planning and Execution Segments

  4. Deliberate and Crisis Action Planning and Execution Segments Increment 2B (DCAPES Inc 2B)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report. Program Name: Deliberate and Crisis Action Planning and Execution Segments Increment 2B (DCAPES Inc 2B). DoD Component: Air Force. Responsible Office: ... been established. Program Description: Deliberate and Crisis Action Planning and Execution Segments (DCAPES) is

  5. Blood flow patterns during incremental and steady-state aerobic exercise.

    Science.gov (United States)

    Coovert, Daniel; Evans, LeVisa D; Jarrett, Steven; Lima, Carla; Lima, Natalia; Gurovich, Alvaro N

    2017-05-30

    Endothelial shear stress (ESS) is a physiological stimulus for vascular homeostasis and is highly dependent on blood flow patterns. Exercise-induced ESS might be beneficial for vascular health. However, it is unclear what type of ESS aerobic exercise (AX) produces. The aims of this study were to characterize exercise-induced blood flow patterns during incremental and steady-state AX. We expected that blood flow patterns during exercise would be intensity-dependent and bidirectional. Six college-aged students (2 males and 4 females) were recruited to perform 2 exercise tests on a cycle ergometer. First, an 8-12-min incremental test (Test 1), in which oxygen uptake (VO2), heart rate (HR), blood pressure (BP), and blood lactate (La) were measured at rest and after each 2-min step. Then, at least 48 hr after the first test, a 3-step steady-state exercise test (Test 2) was performed, measuring VO2, HR, BP, and La. The three steps were performed at the following exercise intensities according to La: 0-2 mmol/L, 2-4 mmol/L, and 4-6 mmol/L. During both tests, blood flow patterns were determined by high-definition ultrasound and Doppler on the brachial artery. These measurements allowed blood flow velocities and directions to be determined during exercise. In Test 1, VO2, HR, BP, La, and antegrade blood flow velocity significantly increased in an intensity-dependent manner (repeated measures ANOVA, p < 0.05), whereas retrograde blood flow velocity did not significantly change. In Test 2, all the previous variables significantly increased in an intensity-dependent manner (repeated measures ANOVA, p < 0.05). In conclusion, blood flow patterns during incremental and steady-state exercise include both antegrade and retrograde blood flow.

  6. Incremental Observer Relative Data Extraction

    DEFF Research Database (Denmark)

    Bukauskas, Linas; Bøhlen, Michael Hanspeter

    2004-01-01

    The visual exploration of large databases calls for a tight coupling of database and visualization systems. Current visualization systems typically fetch all the data and organize it in a scene tree that is then used to render the visible data. For immersive data explorations in a Cave or a Panorama, where an observer is immersed in data space, this approach is far from optimal. A more scalable approach is to make the database system observer-aware and to restrict the communication between the database and visualization systems to the relevant data. In this paper VR-tree, an extension of the R-tree, is used to index visibility ranges of objects. We introduce a new operator for incremental Observer Relative data Extraction (iORDE). We propose the Volatile Access STructure (VAST), a lightweight main-memory structure that is created on the fly and is maintained during visual data explorations. VAST
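
    The abstract does not spell out the extraction operator itself, but the observer-relative, delta-oriented idea can be illustrated with a toy sketch: as the observer moves, only changes to the visible set are exchanged, not the whole scene. Everything here is simplified; a real implementation would consult the VR-tree rather than scanning all objects.

```python
import math

# object id -> (position, visibility range), standing in for VR-tree entries
objects = {"a": ((0, 0), 5.0), "b": ((8, 0), 4.0), "c": ((20, 0), 6.0)}

def visible(observer):
    ox, oy = observer
    return {k for k, ((x, y), r) in objects.items()
            if math.hypot(x - ox, y - oy) <= r}

def delta(prev, observer):
    """Incremental extraction: report only what appeared and disappeared."""
    cur = visible(observer)
    return cur - prev, prev - cur, cur

seen = set()
for pos in [(0, 0), (6, 0), (18, 0)]:          # observer path
    appeared, disappeared, seen = delta(seen, pos)
    print(pos, "appeared:", appeared, "disappeared:", disappeared)
```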

  7. Influence of increment thickness on dentin bond strength and light transmission of composite base materials.

    Science.gov (United States)

    Omran, Tarek A; Garoushi, Sufyan; Abdulmajeed, Aous A; Lassila, Lippo V; Vallittu, Pekka K

    2017-06-01

    Bulk-fill resin composites (BFCs) are gaining popularity in restorative dentistry due to the reduced chair time and ease of application. This study aimed to evaluate the influence of increment thickness on dentin bond strength and light transmission of different BFCs and a new discontinuous fiber-reinforced composite. One hundred eighty extracted sound human molars were prepared for a shear bond strength (SBS) test. The teeth were divided into four groups (n = 45) according to the resin composite used: regular particulate-filler resin composite: (1) G-ænial Anterior [GA] (control); bulk-fill resin composites: (2) Tetric EvoCeram Bulk Fill [TEBF] and (3) SDR; and discontinuous fiber-reinforced composite: (4) everX Posterior [EXP]. Each group was subdivided according to increment thickness (2, 4, and 6 mm). The irradiance power through the material of all groups/subgroups was quantified (MARC® Resin Calibrator; BlueLight Analytics Inc.). Data were analyzed using two-way ANOVA followed by Tukey's post hoc test. SBS and light irradiance decreased as the increment's height increased (p < 0.05), regardless of the composite used. EXP presented the highest SBS in 2- and 4-mm-thick increments when compared to the other composites, although the differences were not statistically significant (p > 0.05). Mean light irradiance values, arranged in descending order, differed significantly (p < 0.05) among the composites. Discontinuous fiber-reinforced composite showed the highest curing light transmission, which was also reflected in improved bonding strength to the underlying dentin surface. Discontinuous fiber-reinforced composite can be applied safely in bulk increments of 4 mm, the same as other bulk-fill composites, although in 2-mm thickness the investigated composites showed better performance.

  8. Incrementally Detecting Change Types of Spatial Area Object: A Hierarchical Matching Method Considering Change Process

    Directory of Open Access Journals (Sweden)

    Yanhui Wang

    2018-01-01

    Full Text Available Detecting and extracting the change types of spatial area objects can track the spatiotemporal change pattern of area objects and provide a change backtracking mechanism for incrementally updating spatial datasets. To address the high complexity of detection methods, the high redundancy rate of detection factors, and the low degree of automation in the incremental update process, we take the change process of area objects into account in an integrated way and propose a hierarchical matching method to detect the nine types of changes of area objects, while minimizing the complexity of the algorithm and the redundancy rate of detection factors. We illustrate in detail the identification, extraction, and database entry of change types, and how we achieve a close connection and organic coupling of incremental information extraction and object type-of-change detection so as to characterize the whole change process. The experimental results show that this method can successfully detect incremental information about area objects in practical applications, with the overall accuracy reaching above 90%, which is much higher than the existing weighted matching method, making it quite feasible and applicable. It helps establish the correspondence between new-version and old-version objects, and facilitates linked update processing and quality control of spatial data.

  9. STS-102 Expedition 2 Increment and Science Briefing

    Science.gov (United States)

    2001-01-01

    Merri Sanchez, Expedition 2 Increment Manager, John Uri, Increment Scientist, and Lybrease Woodard, Lead Payload Operations Director, give an overview of the upcoming activities and objectives of the Expedition 2's (E2's) mission in this prelaunch press conference. Ms. Sanchez describes the crew rotation of Expedition 1 to E2, the timeline E2 will follow during their stay on the International Space Station (ISS), and the various flights going to the ISS and what each will bring to ISS. Mr. Uri gives details on the on-board experiments that will take place on the ISS in the fields of microgravity research, commercial, earth, life, and space sciences (such as radiation characterization, H-reflex, colloids formation and interaction, protein crystal growth, plant growth, fermentation in microgravity, etc.). He also gives details on the scientific facilities to be used (laboratory racks and equipment such as the human torso facsimile or 'phantom torso'). Ms. Woodard gives an overview of Marshall Flight Center's role in the mission. Computerized simulations show the installation of the Space Station Remote Manipulator System (SSRMS) onto the ISS and the installation of the airlock using SSRMS. Live footage shows the interior of the ISS, including crew living quarters, the Progress Module, and the Destiny Laboratory. The three then answer questions from the press.

  10. MUNIX and incremental stimulation MUNE in ALS patients and control subjects

    DEFF Research Database (Denmark)

    Furtula, Jasna; Johnsen, Birger; Christensen, Peter Broegger

    2013-01-01

    This study compares the new Motor Unit Number Estimation (MUNE) technique, MUNIX, with the more common incremental stimulation MUNE (IS-MUNE) with respect to reproducibility in healthy subjects and as a potential biomarker of disease progression in patients with ALS.

  11. Limitations of Spectral Electromyogramic Analysis to Determine the Onset of Neuromuscular Fatigue Threshold during Incremental Ergometer Cycling

    Directory of Open Access Journals (Sweden)

    Iban Latasa, Alfredo Cordova, Armando Malanda, Javier Navallas, Ana Lavilla-Oiz, Javier Rodriguez-Falces

    2016-03-01

    Full Text Available Recently, a new method has been proposed to detect the onset of neuromuscular fatigue during an incremental cycling test by assessing the changes in spectral electromyographic (sEMG) frequencies within individual exercise periods of the test. The method consists of determining the highest power output that can be sustained without a significant decrease in spectral frequencies. This study evaluated the validity of the new approach by assessing the changes in spectral indicators both throughout the whole test and within individual exercise periods of the test. Fourteen cyclists performed incremental cycle ergometer rides to exhaustion with bipolar surface EMG signals recorded from the vastus lateralis. The mean and median frequencies (Fmean and Fmedian, respectively) of the sEMG power spectrum were calculated. The main findings were: (1) Examination of spectral indicators within individual exercise periods of the test showed that neither Fmean nor Fmedian decreased significantly during the last (most fatiguing) exercise periods. (2) Examination of the whole incremental test showed that the behaviour of Fmean and Fmedian with increasing power output was highly inconsistent and varied greatly among subjects. (3) Over the whole incremental test, half of the participants exhibited a positive relation between spectral indicators and workload, whereas the other half demonstrated the opposite behaviour. Collectively, these findings indicate that spectral sEMG indexes do not provide a reliable measure of the fatigue state of the muscle during an incremental cycling test. Moreover, it is concluded that it is not possible to determine the onset of neuromuscular fatigue during an incremental cycling test by examining spectral indicators within individual exercise periods of the test.
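
    Fmean and Fmedian have standard definitions on the power spectrum: the spectral centroid, and the frequency that splits total power in half. A minimal sketch, with a synthetic signal standing in for the recorded vastus lateralis sEMG:

```python
import numpy as np
from scipy.signal import welch

fs = 1024.0                                   # sampling rate (Hz), assumed
emg = np.random.default_rng(0).standard_normal(4096)  # synthetic stand-in signal

f, pxx = welch(emg, fs=fs, nperseg=1024)      # power spectral density
fmean = np.sum(f * pxx) / np.sum(pxx)         # spectral centroid
cum = np.cumsum(pxx)
fmedian = f[np.searchsorted(cum, cum[-1] / 2.0)]   # splits total power in half
print(f"Fmean = {fmean:.1f} Hz, Fmedian = {fmedian:.1f} Hz")
```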

  12. Fault-tolerant incremental diagnosis with limited historical data

    OpenAIRE

    Gillblad, Daniel; Holst, Anders; Steinert, Rebecca

    2006-01-01

    In many diagnosis situations it is desirable to perform a classification in an iterative and interactive manner. All relevant information may not be available initially and must be acquired manually or at a cost. The matter is often complicated by very limited amounts of knowledge and examples when a new system to be diagnosed is initially brought into use. Here, we will describe how to create an incremental classification system based on a statistical model that is trained from empirical data...

  13. Improved incremental conductance method for maximum power point tracking using cuk converter

    Directory of Open Access Journals (Sweden)

    M. Saad Saoud

    2014-03-01

    Full Text Available The Algerian government relies on a strategy focused on the development of inexhaustible resources such as solar energy in order to diversify energy sources and prepare the Algeria of tomorrow: about 40% of the electricity produced for domestic consumption will come from renewable sources by 2030. It is therefore necessary to concentrate efforts on reducing application costs and increasing performance, which is evaluated and compared here through theoretical analysis and digital simulation. This paper presents a simulation of an improved incremental conductance method for maximum power point tracking (MPPT) using a DC-DC Cuk converter. The improved algorithm is used to track MPPs because it performs precise control under rapidly changing atmospheric conditions. Matlab/Simulink was employed for the simulation studies.
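
    The incremental conductance rule itself is compact: at the maximum power point dP/dV = 0, which is equivalent to dI/dV = -I/V, and the sign of the deviation tells the controller which way to move. A minimal sketch of the basic (not the paper's improved) algorithm; the toy PV curve and step size are assumptions:

```python
def inc_cond_step(v, i, v_prev, i_prev, v_ref, step=0.1):
    """One incremental-conductance MPPT update of the voltage reference.
    At the MPP dP/dV = 0, equivalently dI/dV = -I/V."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0.0:
        if di > 0.0:   v_ref += step      # irradiance increased: move right
        elif di < 0.0: v_ref -= step
    else:
        if di / dv > -i / v:   v_ref += step   # left of the MPP
        elif di / dv < -i / v: v_ref -= step   # right of the MPP
    return v_ref                                # at the MPP: leave unchanged

def pv_current(v):              # toy I-V curve, an assumption for the demo
    return 5.0 * (1.0 - (v / 20.0) ** 3)

v_ref, v_prev, i_prev = 5.0, 4.9, pv_current(4.9)
for _ in range(200):
    v, i = v_ref, pv_current(v_ref)
    v_ref = inc_cond_step(v, i, v_prev, i_prev, v_ref)
    v_prev, i_prev = v, i
print(f"operating point settles near {v_ref:.1f} V (true MPP ~ 12.6 V)")
```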

  14. Statistics of a mixed Eulerian-Lagrangian velocity increment in fully developed turbulence

    International Nuclear Information System (INIS)

    Friedrich, R; Kamps, O; Grauer, R; Homann, H

    2009-01-01

    We investigate the relationship between Eulerian and Lagrangian probability density functions obtained from numerical simulations of two-dimensional as well as three-dimensional turbulence. We show that in contrast to the structure functions of the Lagrangian velocity increment δ_τv(y) = u(x(y,τ), τ) − u(y, 0), where u(x, t) denotes the Eulerian velocity and x(y, t) the particle path initially starting at x(y, 0) = y, the structure functions of the velocity increment δ_τw(y) = u(x(y,τ), τ) − u(y, τ) exhibit a wide range of scaling behavior. Similar scaling indices are detected for the structure functions for particles diffusing in frozen turbulent fields. Furthermore, we discuss a connection to the scaling of Eulerian transversal structure functions.
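
    A toy computation of the mixed increment δ_τw may help fix ideas. The sketch below uses a synthetic frozen 1-D field (all parameters illustrative, far from a real turbulence simulation); for a frozen field u(x,t) = u(x), so the Eulerian time argument drops out of the expressions above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, L = 4096, 2 * np.pi
k = np.fft.rfftfreq(n, d=L / n) * 2 * np.pi
amp = np.zeros(k.size)
amp[1:] = k[1:] ** (-5.0 / 6.0)                 # energy spectrum ~ k^(-5/3)
u = np.fft.irfft(amp * np.exp(1j * rng.uniform(0, 2 * np.pi, k.size)), n)
u /= u.std()                                    # unit-variance frozen field

xg = np.arange(n) * L / n
def vel(x):                                     # periodic interpolation of u
    return np.interp(x, xg, u, period=L)

y = np.linspace(0, L, 512, endpoint=False)      # initial positions x(y,0) = y
x, dt = y.copy(), 1e-3
for step in range(1, 2001):
    x += vel(x) * dt                            # advect particles: dx/dt = u(x)
    if step % 500 == 0:
        dw = vel(x) - vel(y)                    # delta_tau_w for a frozen field
        print(f"tau = {step * dt:.1f}  S2(tau) = {np.mean(dw ** 2):.4f}")
```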

  15. Factors for Radical Creativity, Incremental Creativity, and Routine, Noncreative Performance

    Science.gov (United States)

    Madjar, Nora; Greenberg, Ellen; Chen, Zheng

    2011-01-01

    This study extends theory and research by differentiating between routine, noncreative performance and 2 distinct types of creativity: radical and incremental. We also use a sensemaking perspective to examine the interplay of social and personal factors that may influence a person's engagement in a certain level of creative action versus routine,…

  16. Development of the Nonstationary Incremental Analysis Update Algorithm for Sequential Data Assimilation System

    Directory of Open Access Journals (Sweden)

    Yoo-Geun Ham

    2016-01-01

    Full Text Available This study introduces a modified version of the incremental analysis updates (IAU), called the nonstationary IAU (NIAU) method, to improve the assimilation accuracy of the IAU while keeping the continuity of the analysis. Similar to the IAU, the NIAU is designed to add analysis increments at every model time step to improve the continuity in intermittent data assimilation. However, unlike the IAU, the NIAU procedure uses forcing that is evolved in time with the forward operator as corrections to the model. The solution of the NIAU is superior, in terms of the accuracy of the analysis field, to that of the forward IAU, in which the analysis is performed at the beginning of the time window for adding the IAU forcing. This is because, in linear systems, the NIAU solution at the end of the assimilation interval equals that of an intermittent data assimilation method. To retain the filtering property in the NIAU, the forward operator used to propagate the increment is reconstructed from only the dominant singular vectors. An illustration of these advantages of the NIAU is given using the simple 40-variable Lorenz model.
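
    The key identity, that NIAU forcing evolved with the forward operator reproduces intermittent assimilation at the end of the window in a linear model, can be checked in a few lines. The scalar model and numbers below are illustrative assumptions, not the 40-variable Lorenz setup of the paper.

```python
# Toy linear model x_{k+1} = M * x_k; 'inc' is the analysis increment
# valid at the start of the assimilation window.
M, N = 0.95, 10                 # model multiplier, steps per window (assumed)
x0, inc = 1.0, 0.2

# intermittent DA: add the increment at the window start, then propagate
x_intermittent = M ** N * (x0 + inc)

# forward IAU: add inc/N at every step, without propagating the increment
x = x0
for _ in range(N):
    x = M * x + inc / N
x_iau = x

# NIAU: the forcing is time-evolved with the forward operator, f_k = M**k * inc/N
x = x0
for k in range(1, N + 1):
    x = M * x + M ** k * inc / N
x_niau = x

print(f"intermittent {x_intermittent:.6f}  IAU {x_iau:.6f}  NIAU {x_niau:.6f}")
# NIAU matches the intermittent solution exactly; the forward IAU does not.
```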

  17. Adaptive scallop height tool path generation for robot-based incremental sheet metal forming

    Science.gov (United States)

    Seim, Patrick; Möllensiep, Dennis; Störkle, Denis Daniel; Thyssen, Lars; Kuhlenkötter, Bernd

    2016-10-01

    Incremental sheet metal forming is an emerging process for the production of individualized products or prototypes in low batch sizes and with short times to market. In these processes, the desired shape is produced by the incremental inward motion of the workpiece-independent forming tool in the depth direction and its movement along the contour in the lateral direction. Because the shape is produced this way, the tool path generation is a key factor for, e.g., the resulting geometric accuracy, the resulting surface quality, and the working time. This paper presents an innovative tool path generation, based on a commercial milling CAM package, that considers surface quality and working time. The approach offers the ability to define a specific scallop height, as an indicator of surface quality, for specific faces of a component. Moreover, it decreases the required working time for the production of the entire component compared to the use of a commercial software package without this adaptive approach. Different forming experiments have been performed to verify the newly developed tool path generation. Above all, this approach serves to resolve the existing conflict between working time and surface quality in incremental sheet metal forming.
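
    A tool path generator of this kind relies on the classical relation between lateral step-over and scallop height for a ball-nosed tool on a locally flat face, s = 2*sqrt(h*(2R − h)). A sketch of a per-face scallop specification (the values and the two-face example are illustrative; the paper's approach works on CAD faces inside a commercial CAM system):

```python
import math

def stepover_for_scallop(tool_radius: float, scallop: float) -> float:
    """Lateral step-over keeping the scallop height at `scallop` for a
    ball-nosed (hemispherical) tool on a locally flat face:
    s = 2*sqrt(h*(2R - h)). Standard CAM relation, illustrative use."""
    h, r = scallop, tool_radius
    if not 0.0 < h < r:
        raise ValueError("need 0 < scallop < tool radius")
    return 2.0 * math.sqrt(h * (2.0 * r - h))

# a visible face might get a 5 um scallop, a hidden face 50 um (assumed spec)
for face, h in [("visible", 0.005), ("hidden", 0.05)]:
    s = stepover_for_scallop(tool_radius=5.0, scallop=h)   # mm
    print(f"{face:8s} h = {h} mm -> step-over = {s:.3f} mm")
```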

  18. Stable isotope time series and dentin increments elucidate Pleistocene proboscidean paleobiology

    Science.gov (United States)

    Fisher, Daniel; Rountrey, Adam; Smith, Kathlyn; Fox, David

    2010-05-01

    Investigations of stable isotope composition of mineralized tissues have added greatly to our knowledge of past climates and dietary behaviors of organisms, even when they are implemented through 'bulk sampling', in which a single assay yields a single, time-averaged value. Likewise, the practice of 'sclerochronology', which documents periodic structural increments comprising a growth record for accretionary tissues, offers insights into rates of growth and age data at a scale of temporal resolution permitted by the nature of structural increments. We combine both of these approaches to analyze dental tissues of late Pleistocene proboscideans. Tusk dentin typically preserves a record of accretionary growth consisting of histologically distinct increments on daily, approximately weekly, and yearly time scales. Working on polished transverse or longitudinal sections, we mill out a succession of temporally controlled dentin samples bounded by clear structural increments with a known position in the sequence of tusk growth. We further subject each sample (or an aliquot thereof) to multiple compositional analyses - most frequently to assess δ18O and δ13C of hydroxyapatite carbonate, and δ13C and δ15N of collagen. This yields, for each animal and each series of years investigated, a set of parallel compositional time series with a temporal resolution of 1-2 months (or finer if we need additional precision). Patterns in variation of thickness of periodic sub-annual increments yield insight into intra-annual and inter-annual variation of tusk growth rate. This is informative even by itself, but it is still more valuable when coupled with compositional time series. Further, the controls on different stable isotope systems are sufficiently different that the data ensemble yields 'much more than the sum of its parts.' By assessing how compositions and growth rates covary, we monitor with greater confidence changes in local climate, diet, behavior, and health status. We

  19. Enthalpy increment measurements of Sr3Zr2O7(s) and Sr4Zr3O10(s)

    International Nuclear Information System (INIS)

    Banerjee, A.; Dash, S.; Prasad, R.; Venugopal, V.

    1998-01-01

    Enthalpy increment measurements on Sr3Zr2O7(s) and Sr4Zr3O10(s) were carried out using a Calvet micro-calorimeter. The enthalpy increment values were least-squares analyzed with the constraints that H°(T) − H°(298.15 K) equals zero at T = 298.15 K and that Cp°(298.15 K) equals the estimated value. The dependence of the enthalpy increment on temperature is given. (orig.)
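
    The constrained least-squares step can be written compactly by choosing a basis that satisfies both constraints by construction: with H(T) − H(T0) = Cp0*(T − T0) + c*(T − T0)^2, the value at T0 is zero and the slope there is Cp0, leaving a single free coefficient. Synthetic data and an assumed Cp(298.15 K) stand in for the measurements:

```python
import numpy as np

T0, cp0 = 298.15, 150.0                       # K, J/(mol K), assumed constraint
T = np.array([400.0, 500.0, 600.0, 700.0, 800.0])
rng = np.random.default_rng(1)
H = cp0 * (T - T0) + 0.02 * (T - T0) ** 2 + rng.normal(0.0, 50.0, T.size)

# only the quadratic coefficient is free; the constraints are built in
dT = T - T0
c = np.linalg.lstsq(dT[:, None] ** 2, H - cp0 * dT, rcond=None)[0][0]
print(f"H(T) - H(298.15 K) = {cp0:.1f}*(T - T0) + {c:.4f}*(T - T0)^2  [J/mol]")
```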

  20. Influence of the selection from incremental stages on lactate minimum intensity: a pilot study

    Directory of Open Access Journals (Sweden)

    Willian Eiji Miyagi

    2013-09-01

    Full Text Available The purposes of this study were to assess the influence of stage selection from the incremental phase and of the use of peak lactate after hyperlactatemia induction on the determination of the lactate minimum intensity (iLACmin). Twelve moderately active university students (23 ± 5 years, 78.3 ± 14.1 kg, 175.3 ± 5.1 cm) performed a maximal incremental test to determine the respiratory compensation point (RCP) (initial intensity 70 W, increments of 17.5 W every 2 minutes) and a lactate minimum test (induction with the Wingate test; the incremental test started at 30 W below the RCP with increments of 10 W every 3 minutes) on a cycle ergometer. The iLACmin was determined using a second-order polynomial adjustment applying five exercise stage selections: (1) using all stages (iLACminP); (2) using all stages below and two stages above iLACminP (iLACminA); (3) using two stages below and all stages above iLACminP (iLACminB); (4) using the largest possible equal number of stages below and above iLACminP (iLACminI); (5) using all stages and peak lactate after hyperlactatemia induction (iLACminD). No differences were found between the iLACminP (138.2 ± 30.2 W), iLACminA (139.1 ± 29.1 W), iLACminB (135.3 ± 14.2 W), iLACminI (138.6 ± 20.5 W) and iLACminD (136.7 ± 28.5 W) protocols, and a high level of agreement between these intensities and iLACminP was observed. Oxygen uptake, heart rate, rating of perceived exertion and lactate corresponding to these intensities were not different and were strongly correlated. However, the iLACminB presented the lowest success rate (66.7%). In conclusion, stage selection did not influence the determination of iLACmin but modified the success rate.
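
    The "second-order polynomial adjustment" amounts to fitting a parabola to the lactate-power points and reading off its vertex; a minimal sketch with invented data:

```python
import numpy as np

power = np.array([110, 120, 130, 140, 150, 160], dtype=float)   # W (invented)
lactate = np.array([5.1, 4.4, 4.0, 3.9, 4.2, 4.9])              # mmol/L (invented)

a, b, c = np.polyfit(power, lactate, 2)   # lactate = a*P^2 + b*P + c
ilacmin = -b / (2 * a)                    # vertex of the fitted parabola
print(f"iLACmin = {ilacmin:.1f} W")
```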

  1. Radical and Incremental Innovation Preferences in Information Technology: An Empirical Study in an Emerging Economy

    Directory of Open Access Journals (Sweden)

    Tarun K. Sen

    2011-11-01

    Full Text Available Innovation in information technology is a primary driver for growth in developed economies. Research indicates that countries go through three stages in the adoption of innovation strategies: buying innovation through global trade, incremental innovation from other countries by enhancing efficiency, and, at the most developed stage, radically innovating independently for competitive advantage. The first two stages of innovation maturity depend more on cross-border trade than the third stage. In this paper, we find that IT professionals in an emerging economy such as India believe in radical innovation over incremental innovation (adaptation) as a growth strategy, even though competitive advantage may rest in adaptation. The results of the study report the preference for innovation strategies among IT professionals in India and its implications for other rapidly growing emerging economies.

  2. Adults with initial metabolic syndrome have altered muscle deoxygenation during incremental exercise.

    Science.gov (United States)

    Machado, Alessandro da Costa; Barbosa, Thales Coelho; Kluser Sales, Allan Robson; de Souza, Marcio Nogueira; da Nóbrega, Antonio Claudio Lucas; Silva, Bruno Moreira

    2017-02-01

    Reduced aerobic power is independently associated with metabolic syndrome (MetS) incidence and prevalence in adults. This study investigated whether muscle deoxygenation (a proxy of microvascular O2 extraction) during incremental exercise is altered in MetS and associated with reduced peak oxygen consumption (VO2peak). Twelve men with initial MetS (no overt diseases and medication-naive; mean ± SD, age 38 ± 7 years) and 12 healthy controls (HCs) (34 ± 7 years) completed an incremental cycling test to exhaustion, in which pulmonary ventilation and gas exchange (metabolic analyzer), as well as vastus lateralis deoxygenation (near-infrared spectroscopy), were measured. Subjects with MetS, in contrast to HCs, showed lower VO2peak normalized to total lean mass, a similar VO2 response to exercise, and an earlier break point (BP) in muscle deoxygenation. Consequently, the deoxygenation slope from the BP to peak exercise was greater. Furthermore, absolute VO2peak was positively associated with the BP in correlations adjusted for total lean mass. MetS, without overt diseases, altered the kinetics of muscle deoxygenation during incremental exercise, particularly at high exercise intensity. Therefore, the balance between utilization and delivery of O2 within skeletal muscle is impaired early in the natural history of MetS, which may contribute to the reduction in aerobic power. © 2017 The Obesity Society.

  3. Incremental Query Rewriting with Resolution

    Science.gov (United States)

    Riazanov, Alexandre; Aragão, Marcelo A. T.

    We address the problem of semantic querying of relational databases (RDB) modulo knowledge bases using very expressive knowledge representation formalisms, such as full first-order logic or its various fragments. We propose to use a resolution-based first-order logic (FOL) reasoner for computing schematic answers to deductive queries, with the subsequent translation of these schematic answers to SQL queries which are evaluated using a conventional relational DBMS. We call our method incremental query rewriting, because an original semantic query is rewritten into a (potentially infinite) series of SQL queries. In this chapter, we outline the main idea of our technique - using abstractions of databases and constrained clauses for deriving schematic answers, and provide completeness and soundness proofs to justify the applicability of this technique to the case of resolution for FOL without equality. The proposed method can be directly used with regular RDBs, including legacy databases. Moreover, we propose it as a potential basis for an efficient Web-scale semantic search technology.
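
    The core mechanism, unfolding a recursive definition into a potentially infinite series of SQL queries evaluated one by one on a conventional RDBMS, can be illustrated with a toy sketch. The schema (a table parent(child, parent)) and the ancestor rule are invented for illustration and are far simpler than the first-order setting the paper addresses.

```python
def ancestor_queries(person: str):
    """Yield SQL for ancestor(person, ?x) at join depth 1, 2, 3, ..."""
    depth = 1
    while True:
        sql = f"SELECT p{depth}.parent FROM parent p1"
        for i in range(2, depth + 1):
            sql += f" JOIN parent p{i} ON p{i}.child = p{i - 1}.parent"
        sql += f" WHERE p1.child = '{person}'"
        yield sql
        depth += 1

queries = ancestor_queries("alice")
for _ in range(3):          # evaluate the first three rewritings
    print(next(queries))
```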

  4. An Incremental Weighted Least Squares Approach to Surface Lights Fields

    Science.gov (United States)

    Coombe, Greg; Lastra, Anselmo

    An Image-Based Rendering (IBR) approach to appearance modelling enables the capture of a wide variety of real physical surfaces with complex reflectance behaviour. The challenges with this approach are handling the large amount of data, rendering the data efficiently, and previewing the model as it is being constructed. In this paper, we introduce the Incremental Weighted Least Squares approach to the representation and rendering of spatially and directionally varying illumination. Each surface patch consists of a set of Weighted Least Squares (WLS) node centers, which are low-degree polynomial representations of the anisotropic exitant radiance. During rendering, the representations are combined in a non-linear fashion to generate a full reconstruction of the exitant radiance. The rendering algorithm is fast, efficient, and implemented entirely on the GPU. The construction algorithm is incremental, which means that images are processed as they arrive instead of in the traditional batch fashion. This human-in-the-loop process enables the user to preview the model as it is being constructed and to adapt to over-sampling and under-sampling of the surface appearance.
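
    A minimal sketch of the incremental ingredient (not the paper's full surface-light-field pipeline): each incoming sample only updates accumulated normal equations, so a preview fit is available at any time. Basis, weights, and data are illustrative.

```python
import numpy as np

class IncrementalWLS:
    """Weighted least-squares fit updated sample by sample, so the model
    can be previewed at any point (quadratic basis in one variable;
    purely illustrative, not the paper's GPU implementation)."""
    def __init__(self, dim=3):
        self.G = np.zeros((dim, dim))   # accumulates sum_i w_i * phi_i phi_i^T
        self.r = np.zeros(dim)          # accumulates sum_i w_i * phi_i * y_i

    @staticmethod
    def basis(x):
        return np.array([1.0, x, x * x])

    def add(self, x, y, w=1.0):
        phi = self.basis(x)
        self.G += w * np.outer(phi, phi)
        self.r += w * phi * y

    def solve(self):
        return np.linalg.solve(self.G + 1e-9 * np.eye(len(self.r)), self.r)

fit = IncrementalWLS()
rng = np.random.default_rng(3)
for x in rng.uniform(-1.0, 1.0, 50):                 # samples arrive one by one
    fit.add(x, 2.0 - x + 0.5 * x * x + rng.normal(0.0, 0.01))
print("coefficients so far:", fit.solve().round(3))  # ~ [2.0, -1.0, 0.5]
```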

  5. Incremental Ontology-Based Extraction and Alignment in Semi-structured Documents

    Science.gov (United States)

    Thiam, Mouhamadou; Bennacer, Nacéra; Pernelle, Nathalie; Lô, Moussa

    SHIRI is an ontology-based system for the integration of semi-structured documents related to a specific domain. The system's purpose is to allow users to access relevant parts of documents as answers to their queries. SHIRI uses RDF/OWL for the representation of resources and SPARQL for their querying. It relies on an automatic, unsupervised and ontology-driven approach for extraction, alignment and semantic annotation of tagged elements of documents. In this paper, we focus on the Extract-Align algorithm, which exploits a set of named-entity and term patterns to extract term candidates to be aligned with the ontology. It proceeds in an incremental manner in order to populate the ontology with terms describing instances of the domain and to reduce access to external resources such as the Web. We experimented with it on an HTML corpus related to calls for papers in computer science, and the results we obtained are very promising. They show how the incremental behaviour of the Extract-Align algorithm enriches the ontology and increases the number of terms (or named entities) aligned directly with the ontology.
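
    The incremental behaviour can be pictured as a loop in which every successful alignment enriches the ontology, so later documents need fewer external lookups. In the toy sketch below, the dictionary-backed ontology, the single-word term pattern, and the external_lookup stub are all illustrative stand-ins for SHIRI's RDF/OWL machinery and Web resources.

```python
# Toy sketch of an incremental extract-and-align loop (illustrative only).
import re

ontology = {"conference": "Event", "workshop": "Event", "deadline": "Date"}


def extract_candidates(document: str) -> set[str]:
    return set(re.findall(r"[a-z]+", document.lower()))  # toy term pattern


def external_lookup(term: str) -> str | None:
    return {"symposium": "Event"}.get(term)  # hypothetical Web resource


def align(term: str) -> str | None:
    # Try the (growing) ontology first, then the external resource; each
    # success populates the ontology, so later documents need fewer
    # external accesses -- the incremental behaviour described above.
    if term in ontology:
        return ontology[term]
    cls = external_lookup(term)
    if cls is not None:
        ontology[term] = cls
    return cls


for doc in ["Call for papers: the symposium deadline", "A symposium on parsing"]:
    for term in extract_candidates(doc):
        align(term)
print(ontology)  # 'symposium' is now aligned without further external calls
```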

  6. Lines of Evidence–Incremental Markings in Molar Enamel of Soay Sheep as Revealed by a Fluorochrome Labeling and Backscattered Electron Imaging Study

    Science.gov (United States)

    Kierdorf, Horst; Kierdorf, Uwe; Frölich, Kai; Witzel, Carsten

    2013-01-01

    We studied the structural characteristics and periodicities of regular incremental markings in sheep enamel using fluorochrome injections for vital labeling of forming enamel and backscattered electron imaging in the scanning electron microscope. Microscopic analysis of mandibular first molars revealed the presence of incremental markings with a daily periodicity (laminations) that indicated successive positions of the forming front of interprismatic enamel. In addition to the laminations, incremental markings with a sub-daily periodicity were discernible both in interprismatic enamel and in enamel prisms. Five sub-daily increments were present between two consecutive laminations. Backscattered electron imaging revealed that each sub-daily growth increment consisted of a broader and more highly mineralized band and a narrower and less mineralized band (line). The sub-daily markings in the prisms of sheep enamel morphologically resembled the (daily) prism cross striations seen in primate enamel. Incremental markings with a supra-daily periodicity were not observed in sheep enamel. Based on the periodicity of the incremental markings, maximum mean daily apposition rates of 17.0 µm in buccal enamel and of 13.4 µm in lingual enamel were recorded. Enamel extension rates were also high, with maximum means of 180 µm/day and 217 µm/day in upper crown areas of buccal and lingual enamel, respectively. Values in more cervical crown portions were markedly lower. Our results are in accordance with previous findings in other ungulate species. Using the incremental markings present in primate enamel as a reference could result in a misinterpretation of the incremental markings in ungulate enamel. Thus, the sub-daily growth increments in the prisms of ungulate enamel might be mistaken for prism cross striations with a daily periodicity, and the laminations misidentified as striae of Retzius with a supra-daily periodicity. This would lead to a considerable overestimation of

  7. A parallel ILP algorithm that incorporates incremental batch learning

    OpenAIRE

    Nuno Fonseca; Rui Camacho; Fernado Silva

    2003-01-01

    In this paper we tackle the problems of efficiency and scalability faced by Inductive Logic Programming (ILP) systems. We propose the use of parallelism to improve efficiency and the use of incremental batch learning to address the scalability problem. We describe a novel parallel algorithm that incorporates into ILP the method of incremental batch learning. The theoretical complexity of the algorithm indicates that a linear speedup can be achieved.
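
    A minimal sketch of the incremental batch learning idea, assuming a hypothetical induce_rules learner in place of the ILP search (the part the paper distributes across parallel workers): each batch only triggers learning for the examples the current theory fails to cover.

```python
# Minimal sketch of incremental batch learning: extend the current theory
# only with rules needed for examples the theory fails to cover.
# induce_rules is a hypothetical stand-in for the ILP search.

def covers(theory, example):
    return any(rule(example) for rule in theory)


def induce_rules(uncovered):
    # Trivially specific toy learner: one rule per uncovered example.
    return [lambda e, pos=ex: e == pos for ex in uncovered]


def incremental_batch_learning(batches):
    theory = []
    for batch in batches:
        uncovered = [ex for ex in batch if not covers(theory, ex)]
        theory.extend(induce_rules(uncovered))  # learn only what is new
    return theory


theory = incremental_batch_learning([[1, 2], [2, 3], [3, 4]])
print(len(theory))  # 4 rules: examples 2 and 3 were already covered
```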

  8. Partial and incremental PCMH practice transformation: implications for quality and costs.

    Science.gov (United States)

    Paustian, Michael L; Alexander, Jeffrey A; El Reda, Darline K; Wise, Chris G; Green, Lee A; Fetters, Michael D

    2014-02-01

    To examine the associations between partial and incremental implementation of the Patient Centered Medical Home (PCMH) model and measures of cost and quality of care. We combined validated, self-reported PCMH capabilities data with administrative claims data for a diverse statewide population of 2,432 primary care practices in Michigan. These data were supplemented with contextual data from the Area Resource File. We measured medical home capabilities in place as of June 2009 and change in medical home capabilities implemented between July 2009 and June 2010. Generalized estimating equations were used to estimate the mean effect of these PCMH measures on total medical costs and quality of care delivered in physician practices between July 2009 and June 2010, while controlling for potential practice, patient cohort, physician organization, and practice environment confounders. Based on the observed relationships for partial implementation, full implementation of the PCMH model is associated with a 3.5 percent higher quality composite score, a 5.1 percent higher preventive composite score, and $26.37 lower per member per month medical costs for adults. Full PCMH implementation is also associated with a 12.2 percent higher preventive composite score, but no reductions in costs for pediatric populations. Incremental improvements in PCMH model implementation yielded similar positive effects on quality of care for both adult and pediatric populations but were not associated with cost savings for either population. Estimated effects of the PCMH model on quality and cost of care appear to improve with the degree of PCMH implementation achieved and with incremental improvements in implementation. © Health Research and Educational Trust.

  9. Deep PDF parsing to extract features for detecting embedded malware.

    Energy Technology Data Exchange (ETDEWEB)

    Munson, Miles Arthur; Cross, Jesse S. (Missouri University of Science and Technology, Rolla, MO)

    2011-09-01

    The number of PDF files with embedded malicious code has risen significantly in the past few years. This is due to the portability of the file format, the ways Adobe Reader recovers from corrupt PDF files, the addition of many multimedia and scripting extensions to the file format, and many format properties the malware author may use to disguise the presence of malware. Current research focuses on executable, MS Office, and HTML formats. In this paper, several features and properties of PDF files are identified. Features are extracted using an instrumented open source PDF viewer. The feature descriptions of benign and malicious PDFs can be used to construct a machine learning model for detecting possible malware in future PDF files. The detection rate of PDF malware by current antivirus software is very low. A PDF file is easy to edit and manipulate because it is a text format, providing a low barrier to malware authors. Analyzing PDF files for malware is nonetheless difficult because of (a) the complexity of the formatting language, (b) the parsing idiosyncrasies in Adobe Reader, and (c) undocumented correction techniques employed in Adobe Reader. In May 2011, Esparza demonstrated that PDF malware could be hidden from 42 of 43 antivirus packages by combining multiple obfuscation techniques [4]. One reason current antivirus software fails is the ease of varying byte sequences in PDF malware, thereby rendering conventional signature-based virus detection useless. The compression and encryption functions produce sequences of bytes that are each functions of multiple input bytes. As a result, padding the malware payload with some whitespace before compression/encryption can change many of the bytes in the final payload. In this study we analyzed a corpus of 2591 benign and 87 malicious PDF files. While this corpus is admittedly small, it allowed us to test a system for collecting indicators of embedded PDF malware. We will call these indicators features throughout
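
    A toy illustration of the kind of static indicators such a system might count, assuming naive byte-level token counting rather than the paper's instrumented-viewer extraction; the token list is illustrative, not the paper's feature set.

```python
# Toy static feature counter for PDF bytes; a deliberately naive stand-in
# for the paper's instrumented-viewer extraction, for illustration only.
import re

TOKENS = [b"/JavaScript", b"/JS", b"/OpenAction", b"/AA", b"/Launch",
          b"/EmbeddedFile", b"/ObjStm"]


def count_features(pdf_bytes: bytes) -> dict[str, int]:
    counts = {t.decode(): len(re.findall(re.escape(t), pdf_bytes))
              for t in TOKENS}
    counts["n_objects"] = len(re.findall(rb"\d+ \d+ obj", pdf_bytes))
    return counts


sample = b"%PDF-1.4 1 0 obj << /OpenAction << /JS (app.alert(1)) >> >> endobj"
print(count_features(sample))
```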

  10. An Empirical Analysis of Incremental Capital Structure Decisions Under Managerial Entrenchment

    NARCIS (Netherlands)

    de Jong, A.; Veld, C.H.

    1998-01-01

    We study incremental capital structure decisions of Dutch companies. From 1977 to 1996 these companies have made 110 issues of public and private seasoned equity and 137 public issues of straight debt. Managers of Dutch companies are entrenched. For this reason a discrepancy exists between

  11. The effects of the pine processionary moth on the increment of ...

    African Journals Online (AJOL)


    2009-05-18

    May 18, 2009 ... sycophanta L. (Coleoptera: Carabidae) used against the pine processionary moth (Thaumetopoea pityocampa Den. & Schiff.) (Lepidoptera: Thaumetopoeidae) in biological control. T. J. Zool. 30:181-185. Kanat M, Sivrikaya F (2005). Effect of the pine processionary moth on diameter increment of Calabrian ...

  12. Substructuring in the implicit simulation of single point incremental sheet forming

    NARCIS (Netherlands)

    Hadoush, A.; van den Boogaard, Antonius H.

    2009-01-01

    This paper presents a direct substructuring method to reduce the computing time of implicit simulations of single point incremental forming (SPIF). Substructuring is used to divide the finite element (FE) mesh into several non-overlapping parts. Based on the hypothesis that plastic deformation is

  13. The incremental role of trait emotional intelligence on perceived cervical screening barriers.

    Science.gov (United States)

    Costa, Sebastiano; Barberis, Nadia; Larcan, Rosalba; Cuzzocrea, Francesca

    2018-02-13

    Researchers have become increasingly interested in investigating the psychological aspects related to the perception of cervical screening barriers. This study investigates the influence of trait emotional intelligence (EI) on perceived cervical screening barriers. Furthermore, it investigates the incremental validity of trait EI beyond the Big Five, as well as emotion regulation, in perceived barriers towards the Pap test in a sample of 206 Italian women who were undergoing cervical screening. Results show that trait EI is negatively related to cervical screening barriers. Furthermore, trait EI can be considered a strong incremental predictor of a woman's perception of screening barriers over and above the Big Five, emotion regulation, age, sexual intercourse experience and past Pap tests. Detailed information on the study findings and future research directions is discussed.
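
    The incremental-validity logic used here (and in several records below) is a hierarchical regression: fit a base model, add the focal predictor, and inspect the gain in explained variance. A minimal sketch on synthetic data; all variable names and coefficients are illustrative only.

```python
# Minimal sketch of the hierarchical-regression ("incremental validity")
# logic on synthetic data; names and effect sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 206
big_five = rng.normal(size=(n, 5))
trait_ei = rng.normal(size=n)
barriers = (big_five @ rng.normal(size=5)) * 0.2 - 0.5 * trait_ei \
    + rng.normal(size=n)


def r_squared(X, y):
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()


r2_base = r_squared(big_five, barriers)                               # step 1
r2_full = r_squared(np.column_stack([big_five, trait_ei]), barriers)  # step 2
print(f"R2 base={r2_base:.3f} full={r2_full:.3f} delta={r2_full - r2_base:.3f}")
```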

  14. The scope of application of incremental rapid prototyping methods in foundry engineering

    Directory of Open Access Journals (Sweden)

    M. Stankiewicz

    2010-01-01

    The article presents the scope of application of selected incremental Rapid Prototyping methods in the process of manufacturing casting models, casting moulds and casts. The Rapid Prototyping methods (SL, SLA, FDM, 3DP, JS) are predominantly used for the production of models and model sets for casting moulds. The Rapid Tooling methods, such as ZCast-3DP, ProMetalRCT and VoxelJet, enable the fabrication of casting moulds in an incremental process. The application of the RP methods in cast production makes it possible to speed up the prototype preparation process. This is particularly vital for elements of complex shapes. The time required for the manufacture of the model, the mould and the cast proper may vary from a few to several dozen hours.

  15. Incremental learning of concept drift in nonstationary environments.

    Science.gov (United States)

    Elwell, Ryan; Polikar, Robi

    2011-10-01

    We introduce an ensemble-of-classifiers approach for incremental learning of concept drift, characterized by nonstationary environments (NSEs), where the underlying data distributions change over time. The proposed algorithm, named Learn++.NSE, learns from consecutive batches of data without making any assumptions on the nature or rate of drift; it can learn from environments that experience constant or variable rates of drift, addition or deletion of concept classes, as well as cyclical drift. The algorithm learns incrementally, as other members of the Learn++ family of algorithms, that is, without requiring access to previously seen data. Learn++.NSE trains one new classifier for each batch of data it receives, and combines these classifiers using a dynamically weighted majority voting. The novelty of the approach is in determining the voting weights, based on each classifier's time-adjusted accuracy on current and past environments. This approach allows the algorithm to recognize, and act accordingly to, the changes in underlying data distributions, as well as to a possible reoccurrence of an earlier distribution. We evaluate the algorithm on several synthetic datasets designed to simulate a variety of nonstationary environments, as well as a real-world weather prediction dataset. Comparisons with several other approaches are also included. Results indicate that Learn++.NSE can track the changing environments very closely, regardless of the type of concept drift. To allow future use, comparison and benchmarking by interested researchers, we also release our data used in this paper. © 2011 IEEE
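
    A much-simplified sketch of the core Learn++.NSE idea: one new classifier per batch, combined by a vote weighted by time-discounted accuracy on recent environments. The published algorithm uses sigmoid-weighted errors; the exponential discount and the decision-tree base learner below are simplifying assumptions, not the authors' exact method.

```python
# Simplified sketch in the spirit of Learn++.NSE (illustrative only).
import numpy as np
from sklearn.tree import DecisionTreeClassifier


class SimpleNSEEnsemble:
    def __init__(self, discount=0.5):
        self.models, self.histories = [], []
        self.discount = discount

    def partial_fit(self, X, y):
        # Score existing members on the newest batch (accuracy history per
        # environment), then train one new classifier on the batch.
        for model, history in zip(self.models, self.histories):
            history.append((model.predict(X) == y).mean())
        clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
        self.models.append(clf)
        self.histories.append([(clf.predict(X) == y).mean()])

    def weights(self):
        w = []
        for history in self.histories:
            d = self.discount ** np.arange(len(history))[::-1]  # favour recent
            w.append(np.dot(history, d) / d.sum())
        return np.asarray(w)

    def predict(self, X):
        votes = np.stack([m.predict(X) for m in self.models])  # binary labels
        w = self.weights()
        return (w @ votes >= w.sum() / 2).astype(int)  # weighted majority


# Usage with two batches separated by concept drift.
rng = np.random.default_rng(0)
ens = SimpleNSEEnsemble()
for shift in (0.0, 1.5):
    X = rng.normal(size=(100, 2)) + shift
    y = (X[:, 0] > shift).astype(int)
    ens.partial_fit(X, y)
print(ens.predict(X[:5]), y[:5])
```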

  16. Classifier-ensemble incremental-learning procedure for nuclear transient identification at different operational conditions

    Energy Technology Data Exchange (ETDEWEB)

    Baraldi, Piero, E-mail: piero.baraldi@polimi.i [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, via Ponzio 34/3, 20133 Milano (Italy); Razavi-Far, Roozbeh [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, via Ponzio 34/3, 20133 Milano (Italy); Zio, Enrico [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, via Ponzio 34/3, 20133 Milano (Italy); Ecole Centrale Paris-Supelec, Paris (France)

    2011-04-15

    An important requirement for the practical implementation of empirical diagnostic systems is the capability of classifying transients in all plant operational conditions. The present paper proposes an approach based on an ensemble of classifiers for incrementally learning transients under different operational conditions. New classifiers are added to the ensemble where transients occurring in new operational conditions are not satisfactorily classified. The construction of the ensemble is made by bagging; the base classifier is a supervised Fuzzy C Means (FCM) classifier whose outcomes are combined by majority voting. The incremental learning procedure is applied to the identification of simulated transients in the feedwater system of a Boiling Water Reactor (BWR) under different reactor power levels.

  17. Classifier-ensemble incremental-learning procedure for nuclear transient identification at different operational conditions

    International Nuclear Information System (INIS)

    Baraldi, Piero; Razavi-Far, Roozbeh; Zio, Enrico

    2011-01-01

    An important requirement for the practical implementation of empirical diagnostic systems is the capability of classifying transients in all plant operational conditions. The present paper proposes an approach based on an ensemble of classifiers for incrementally learning transients under different operational conditions. New classifiers are added to the ensemble where transients occurring in new operational conditions are not satisfactorily classified. The construction of the ensemble is made by bagging; the base classifier is a supervised Fuzzy C Means (FCM) classifier whose outcomes are combined by majority voting. The incremental learning procedure is applied to the identification of simulated transients in the feedwater system of a Boiling Water Reactor (BWR) under different reactor power levels.

  18. Incremental inverse kinematics based vision servo for autonomous robotic capture of non-cooperative space debris

    Science.gov (United States)

    Dong, Gangqi; Zhu, Z. H.

    2016-04-01

    This paper proposes a new incremental inverse kinematics based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using integrated photogrammetry and an EKF algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple solutions of the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and the incremental control strategy for the robotic manipulator.
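
    One incremental servo step of this kind can be sketched as a damped-least-squares update on the manipulator Jacobian, clipped to the joint speed limits. The sketch below is illustrative, not the authors' controller; the two-link arm, gains and limits are assumptions.

```python
# Sketch of one incremental inverse-kinematics servo step (illustrative);
# forward_kinematics/jacobian would come from the real manipulator model.
import numpy as np


def ik_increment(q, x_desired, forward_kinematics, jacobian,
                 dt=0.01, qdot_max=1.0, damping=1e-3):
    err = x_desired - forward_kinematics(q)
    J = jacobian(q)
    # Damped least squares sidesteps the multiple/undefined solutions of a
    # full analytic inverse kinematics near singularities.
    dq = J.T @ np.linalg.solve(J @ J.T + damping * np.eye(len(err)), err)
    dq = np.clip(dq, -qdot_max * dt, qdot_max * dt)  # joint speed limits
    return q + dq


# Toy planar 2-link arm (unit link lengths) as a usage example.
def fk(q):
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])


def jac(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-s1 - s12, -s12],
                     [c1 + c12, c12]])


q = np.array([0.3, 0.5])
for _ in range(200):  # repeated small increments toward the target
    q = ik_increment(q, np.array([1.2, 0.8]), fk, jac)
print(fk(q))  # close to the target after repeated increments
```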

  19. One Size Does Not Fit All: Managing Radical and Incremental Creativity

    Science.gov (United States)

    Gilson, Lucy L.; Lim, Hyoun Sook; D'Innocenzo, Lauren; Moye, Neta

    2012-01-01

    This research extends creativity theory by re-conceptualizing creativity as a two-dimensional construct (radical and incremental) and examining the differential effects of intrinsic motivation, extrinsic rewards, and supportive supervision on perceptions of creativity. We hypothesize and find two distinct types of creativity that are associated…

  20. The intermetallic ThRh5: microstructure and enthalpy increments

    International Nuclear Information System (INIS)

    Banerjee, Aparna; Joshi, A.R.; Kaity, Santu; Mishra, R.; Roy, S.B.

    2013-01-01

    Actinide intermetallics are one of the most interesting and important series of compounds. The thermochemistry of these compounds plays a significant role in understanding the nature of bonding in alloys and nuclear fuel performance. In the present paper we report the synthesis and characterization of the thorium-based intermetallic compound ThRh5(s) by the SEM/EDX technique. The mechanical properties and the enthalpy increment of the alloy as a function of temperature have been measured. (author)

  1. Efficient Incremental Garbage Collection for Workstation/Server Database Systems

    OpenAIRE

    Amsaleg , Laurent; Gruber , Olivier; Franklin , Michael

    1994-01-01

    Projet RODIN; We describe an efficient server-based algorithm for garbage collecting object-oriented databases in a workstation/server environment. The algorithm is incremental and runs concurrently with client transactions, however, it does not hold any locks on data and does not require callbacks to clients. It is fault tolerant, but performs very little logging. The algorithm has been designed to be integrated into existing OODB systems, and therefore it works with standard implementation ...

  2. Incremental validity of positive and negative valence in predicting personality disorder.

    Science.gov (United States)

    Simms, Leonard J; Yufik, Tom; Gros, Daniel F

    2010-04-01

    The Big Seven model of personality includes five dimensions similar to the Big Five model as well as two evaluative dimensions—Positive Valence (PV) and Negative Valence (NV)—which reflect extremely positive and negative person descriptors, respectively. Recent theory and research have suggested that PV and NV predict significant variance in personality disorder (PD) above that predicted by the Big Five, but firm conclusions have not been possible because previous studies have been limited to only single measures of PV, NV, and the Big Five traits. In the present study, we replicated and extended previous findings using three markers of all key constructs—including PV, NV, and the Big Five—in a diverse sample of 338 undergraduates. Results of hierarchical multiple regression analyses revealed that PV incrementally predicted Narcissistic and Histrionic PDs above the Big Five and that NV nonspecifically incremented the prediction of most PDs. Implications for dimensional models of personality pathology are discussed. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  3. An Environment for Incremental Development of Distributed Extensible Asynchronous Real-time Systems

    Science.gov (United States)

    Ames, Charles K.; Burleigh, Scott; Briggs, Hugh C.; Auernheimer, Brent

    1996-01-01

    Incremental parallel development of distributed real-time systems is difficult. Architectural techniques and software tools developed at the Jet Propulsion Laboratory's (JPL's) Flight System Testbed make feasible the integration of complex systems in various stages of development.

  4. A gradient surface produced by combined electroplating and incremental frictional sliding

    DEFF Research Database (Denmark)

    Yu, Tianbo; Hong, Chuanshi; Kitamura, K.

    2017-01-01

    A Cu plate was first electroplated with a Ni layer, with a thickness controlled to be between 1 and 2 mu m. The coated surface was then deformed by incremental frictional sliding with liquid nitrogen cooling. The combined treatment led to a multifunctional surface with a gradient in strain...

  5. The more you learn, the less you store : Memory-controlled incremental SVM for visual place recognition

    OpenAIRE

    Pronobis, Andrzej; Jie, Luo; Caputo, Barbara

    2010-01-01

    The capability to learn from experience is a key property for autonomous cognitive systems working in realistic settings. To this end, this paper presents an SVM-based algorithm, capable of learning model representations incrementally while keeping under control memory requirements. We combine an incremental extension of SVMs [43] with a method reducing the number of support vectors needed to build the decision function without any loss in performance [15] introducing a parameter which permit...

  6. Incremental health care utilization and costs for acute otitis media in children.

    Science.gov (United States)

    Ahmed, Sameer; Shapiro, Nina L; Bhattacharyya, Neil

    2014-01-01

    Determine the incremental health care costs associated with the diagnosis and treatment of acute otitis media (AOM) in children. Cross-sectional analysis of a national health-care cost database. Pediatric patients with and without a diagnosis of AOM were compared, adjusting for age, sex, region, race, ethnicity, insurance coverage, and Charlson Comorbidity Index. A total of 8.7 ± 0.4 million children were diagnosed with AOM (10.7 ± 0.4% annually, mean age 5.3 years, 51.3% male) among 81.5 ± 2.3 million children sampled (mean age 8.9 years, 51.3% male). Children with AOM manifested an additional +2.0 office visits, +0.2 emergency department visits, and +1.6 prescription fills (all P < 0.001) per year versus those without AOM, adjusting for demographics and medical comorbidities. Similarly, AOM was associated with an incremental increase in outpatient health care costs of $314 per child annually (P < 0.001) and an increase of $17 in patient medication costs (P < 0.001), but was not associated with an increase in total prescription expenses ($13, P = 0.766). The diagnosis of AOM confers a significant incremental health-care utilization burden on both patients and the health care system. With its high prevalence across the United States, pediatric AOM accounts for approximately $2.88 billion in added health care expense annually and is a significant health-care utilization concern. © 2013 The American Laryngological, Rhinological and Otological Society, Inc.
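
    The national figure quoted above can be checked with back-of-the-envelope arithmetic: roughly 8.7 million children times the incremental outpatient plus patient medication costs per child per year.

```python
# Back-of-the-envelope check of the national estimate quoted above.
children_with_aom = 8.7e6
incremental_cost = 314 + 17  # USD per child per year (outpatient + medication)
print(children_with_aom * incremental_cost / 1e9)  # ~2.88 (billion USD)
```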

  7. Robust flight control using incremental nonlinear dynamic inversion and angular acceleration prediction

    NARCIS (Netherlands)

    Sieberling, S.; Chu, Q.P.; Mulder, J.A.

    2010-01-01

    This paper presents a flight control strategy based on nonlinear dynamic inversion. The approach presented, called incremental nonlinear dynamic inversion, uses properties of general mechanical systems and nonlinear dynamic inversion by feeding back angular accelerations. Theoretically, feedback of

  8. On the search for an appropriate metric for reaction time to suprathreshold increments and decrements.

    Science.gov (United States)

    Vassilev, Angel; Murzac, Adrian; Zlatkova, Margarita B; Anderson, Roger S

    2009-03-01

    Weber contrast, ΔL/L, is a widely used contrast metric for aperiodic stimuli. Zele, Cao & Pokorny [Zele, A. J., Cao, D., & Pokorny, J. (2007). Threshold units: A correct metric for reaction time? Vision Research, 47, 608-611] found that neither Weber contrast nor its transform to detection-threshold units equates human reaction times in response to luminance increments and decrements under selective rod stimulation. Here we show that their rod reaction times are equated when plotted against the spatial luminance ratio between the stimulus and its background (Lmax/Lmin, the larger and smaller of background and stimulus luminances). Similarly, reaction times to parafoveal S-cone selective increments and decrements from our previous studies [Murzac, A. (2004). A comparative study of the temporal characteristics of processing of S-cone incremental and decremental signals. PhD thesis, New Bulgarian University, Sofia; Murzac, A., & Vassilev, A. (2004). Reaction time to S-cone increments and decrements. In: 7th European conference on visual perception, Budapest, August 22-26. Perception, 33, 180 (Abstract)], are better described by the spatial luminance ratio than by Weber contrast. We assume that the type of stimulus detection by temporal (successive) luminance discrimination, by spatial (simultaneous) luminance discrimination or by both [Sperling, G., & Sondhi, M. M. (1968). Model for visual luminance discrimination and flicker detection. Journal of the Optical Society of America, 58, 1133-1145] determines the appropriateness of one or other contrast metric for reaction time.
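
    For concreteness, the two metrics compared above can be written out directly; note how an increment and a decrement of equal absolute Weber contrast differ in spatial luminance ratio, which is the asymmetry at issue.

```python
# The two contrast metrics for an aperiodic stimulus of luminance l_stim
# on a background of luminance l_bg (values below are illustrative).
def weber_contrast(l_stim, l_bg):
    return (l_stim - l_bg) / l_bg


def spatial_luminance_ratio(l_stim, l_bg):
    return max(l_stim, l_bg) / min(l_stim, l_bg)


print(weber_contrast(120, 100), spatial_luminance_ratio(120, 100))  # 0.2, 1.20
print(weber_contrast(80, 100), spatial_luminance_ratio(80, 100))    # -0.2, 1.25
```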

  9. An Online Arabic-Indonesian Dictionary with Syllable Decomposition Using a Parsing Method (KAMUS BAHASA ARAB – INDONESIA ONLINE DENGAN PEMECAHAN SUKU KATA MENGGUNAKAN METODE PARSING)

    Directory of Open Access Journals (Sweden)

    Anny Yuniarti

    2004-01-01

    The Muslim community's need for facilities that support learning Arabic in Indonesia is still not optimally met. Arabic dictionaries available on the market are difficult to use because knowledge of Arabic grammar is limited among the community. In this research, software was developed that translates Arabic words using a parsing method, so that it can also handle words whose form has changed from their base form. Because Arabic words have a large number of derived forms, and to keep the dictionary efficient, not all derived forms are stored in the database. A method is therefore needed to recognize word patterns and to determine the base form of a word. The whole application is implemented as a web application, which makes it easy for users to access and requires no installation of particular software or operating systems. Development was preceded by process design and interface design; the design was then implemented as ready-to-use software, which has been tested against the requirements specification.
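
    The lookup strategy described (store only base forms; parse derived forms back to a stored stem) can be illustrated with a toy affix-stripping routine. The transliterated affix lists and lexicon entries below are invented for illustration and are not the system's actual data.

```python
# Toy affix-stripping lookup: map a derived word back to a stored base form.
PREFIXES = ["al", "wa", "fa", "bi"]
SUFFIXES = ["una", "at", "an", "a"]
LEXICON = {"kitab": "book", "kataba": "he wrote", "muslim": "Muslim"}


def parse_lookup(word: str) -> str | None:
    candidates = {word}
    for p in PREFIXES:
        if word.startswith(p):
            candidates.add(word[len(p):])
    for c in list(candidates):
        for s in SUFFIXES:
            if c.endswith(s):
                candidates.add(c[:-len(s)])
    for c in candidates:  # try every stripped candidate against the lexicon
        if c in LEXICON:
            return LEXICON[c]
    return None


print(parse_lookup("alkitab"))    # 'book'   (prefix stripped)
print(parse_lookup("muslimuna"))  # 'Muslim' (suffix stripped)
```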

  10. A power-driven increment borer for sampling high-density tropical wood

    Czech Academy of Sciences Publication Activity Database

    Krottenthaler, S.; Pitsch, P.; Helle, G.; Locosselli, G. M.; Ceccantini, G.; Altman, Jan; Svoboda, M.; Doležal, Jiří; Schleser, G.; Anhuf, D.

    2015-01-01

    Vol. 36, November (2015), pp. 40-44 ISSN 1125-7865 R&D Projects: GA ČR GAP504/12/1952; GA ČR(CZ) GA14-12262S Institutional support: RVO:67985939 Keywords: tropical dendrochronology * tree sampling methods * increment cores Subject RIV: EF - Botanics Impact factor: 2.107, year: 2015

  11. Variational formulation for dissipative continua and an incremental J-integral

    Science.gov (United States)

    Rahaman, Md. Masiur; Dhas, Bensingh; Roy, D.; Reddy, J. N.

    2018-01-01

    Our aim is to rationally formulate a proper variational principle for dissipative (viscoplastic) solids in the presence of inertia forces. As a first step, a consistent linearization of the governing nonlinear partial differential equations (PDEs) is carried out. An additional set of complementary (adjoint) equations is then formed to recover an underlying variational structure for the augmented system of linearized balance laws. This makes it possible to introduce an incremental Lagrangian such that the linearized PDEs, including the complementary equations, become the Euler-Lagrange equations. Continuous groups of symmetries of the linearized PDEs are computed and an analysis is undertaken to identify the variational groups of symmetries of the linearized dissipative system. Application of Noether's theorem leads to the conservation laws (conserved currents) of motion corresponding to the variational symmetries. As a specific outcome, we exploit translational symmetries of the functional in the material space and recover, via Noether's theorem, an incremental J-integral for viscoplastic solids in the presence of inertia forces. Numerical demonstrations are provided through a two-dimensional plane strain numerical simulation of a compact tension specimen of annealed mild steel under dynamic loading.

  12. Applying CLSM to increment core surfaces for histometric analyses: A novel advance in quantitative wood anatomy

    OpenAIRE

    Wei Liang; Ingo Heinrich; Gerhard Helle; I. Dorado Liñán; T. Heinken

    2013-01-01

    A novel procedure has been developed to conduct cell structure measurements on increment core samples of conifers. The procedure combines readily available hardware and software equipment. The essential part of the procedure is the application of a confocal laser scanning microscope (CLSM) which captures images directly from increment cores surfaced with the advanced WSL core-microtome. Cell wall and lumen are displayed with a strong contrast due to the monochrome black and green nature of th...

  13. A maximal incremental effort alters tear osmolarity depending on the fitness level in military helicopter pilots.

    Science.gov (United States)

    Vera, Jesús; Jiménez, Raimundo; Madinabeitia, Iker; Masiulis, Nerijus; Cárdenas, David

    2017-10-01

    Fitness level modulates the physiological responses to exercise for a variety of indices. While intense bouts of exercise have been demonstrated to increase tear osmolarity (Tosm), it is not known whether fitness level can affect the Tosm response to acute exercise. This study aims to compare the effect of a maximal incremental test on Tosm between trained and untrained military helicopter pilots. Nineteen military helicopter pilots (ten trained and nine untrained) performed a maximal incremental test on a treadmill. A tear sample was collected before and after physical effort to determine the exercise-induced changes in Tosm. The Bayesian statistical analysis demonstrated that Tosm significantly increased from 303.72 ± 6.76 to 310.56 ± 8.80 mmol/L after performance of a maximal incremental test. However, while the untrained group showed an acute Tosm rise (an increment of 12.33 mmol/L), the trained group maintained a stable Tosm across the physical effort (1.45 mmol/L). There was a significant positive linear association between fat indices and Tosm changes (correlation coefficients [r] range: 0.77-0.89), whereas the Tosm changes displayed a negative relationship with cardiorespiratory capacity (VO2 max; r = -0.75) and performance parameters (r = -0.75 for velocity, and r = -0.67 for time to exhaustion). The findings from this study provide evidence that fitness level is a major determinant of the Tosm response to maximal incremental physical effort, showing a fairly linear association with several indices related to fitness level. A high fitness level seems to be beneficial in avoiding Tosm changes as a consequence of intense exercise. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Parsing multiple processes of high temperature impacts on corn/soybean yield using a newly developed CLM-APSIM modeling framework

    Science.gov (United States)

    Peng, B.; Guan, K.; Chen, M.

    2016-12-01

    Future agricultural production faces a grand challenge of higher temperatures under climate change. There are multiple physiological and metabolic processes through which high temperature affects crop yield. Specifically, we consider the following major processes: (1) direct temperature effects on photosynthesis and respiration; (2) sped-up growth rates and the shortening of the growing season; (3) heat stress during the reproductive stage (flowering and grain-filling); and (4) high-temperature-induced increases in atmospheric water demand. In this work, we use a newly developed modeling framework (CLM-APSIM) to simulate corn and soybean growth and explicitly parse the above four processes. By combining the strength of CLM in modeling surface biophysical (e.g., hydrology and energy balance) and biogeochemical (e.g., photosynthesis and carbon-nitrogen interactions) processes with that of APSIM in modeling crop phenology and reproductive stress, the newly developed CLM-APSIM modeling framework enables us to diagnose the impacts of high-temperature stress through different processes at various crop phenology stages. Ground measurements from the advanced SoyFACE facility at the University of Illinois are used here to calibrate, validate, and improve the CLM-APSIM modeling framework at the site level. We finally use the CLM-APSIM modeling framework to project crop yield for the whole US Corn Belt under different climate scenarios.

  15. Just-in-Time Technology to Encourage Incremental, Dietary Behavior Change

    OpenAIRE

    Intille, Stephen S.; Kukla, Charles; Farzanfar, Ramesh; Bakr, Waseem

    2003-01-01

    Our multi-disciplinary team is developing mobile computing software that uses “just-in-time” presentation of information to motivate behavior change. Using a participatory design process, preliminary interviews have helped us to establish 10 design goals. We have employed some to create a prototype of a tool that encourages better dietary decision making through incremental, just-in-time motivation at the point of purchase.

  16. Just-in-Time Technology to Encourage Incremental, Dietary Behavior Change

    Science.gov (United States)

    Intille, Stephen S.; Kukla, Charles; Farzanfar, Ramesh; Bakr, Waseem

    2003-01-01

    Our multi-disciplinary team is developing mobile computing software that uses “just-in-time” presentation of information to motivate behavior change. Using a participatory design process, preliminary interviews have helped us to establish 10 design goals. We have employed some to create a prototype of a tool that encourages better dietary decision making through incremental, just-in-time motivation at the point of purchase. PMID:14728379

  17. Incremental support vector machines for fast reliable image recognition

    International Nuclear Information System (INIS)

    Makili, L.; Vega, J.; Dormido-Canto, S.

    2013-01-01

    Highlights: ► A conformal predictor using SVM as the underlying algorithm was implemented. ► It was applied to image recognition in the TJ–II's Thomson Scattering Diagnostic. ► To improve time efficiency an approach to incremental SVM training has been used. ► Accuracy is similar to the one reached when standard SVM is used. ► Computational time saving is significant for large training sets. -- Abstract: This paper addresses the reliable classification of images in a 5-class problem. To this end, an automatic recognition system, based on conformal predictors and using Support Vector Machines (SVM) as the underlying algorithm has been developed and applied to the recognition of images in the Thomson Scattering Diagnostic of the TJ–II fusion device. Using such conformal predictor based classifier is a computationally intensive task since it implies to train several SVM models to classify a single example and to perform this training from scratch takes a significant amount of time. In order to improve the classification time efficiency, an approach to the incremental training of SVM has been used as the underlying algorithm. Experimental results show that the overall performance of the new classifier is high, comparable to the one corresponding to the use of standard SVM as the underlying algorithm and there is a significant improvement in time efficiency

  18. Incremental support vector machines for fast reliable image recognition

    Energy Technology Data Exchange (ETDEWEB)

    Makili, L., E-mail: makili_le@yahoo.com [Instituto Superior Politécnico da Universidade Katyavala Bwila, Benguela (Angola); Vega, J. [Asociación EURATOM/CIEMAT para Fusión, Madrid (Spain); Dormido-Canto, S. [Dpto. Informática y Automática – UNED, Madrid (Spain)

    2013-10-15

    Highlights: ► A conformal predictor using SVM as the underlying algorithm was implemented. ► It was applied to image recognition in the TJ–II's Thomson Scattering Diagnostic. ► To improve time efficiency an approach to incremental SVM training has been used. ► Accuracy is similar to the one reached when standard SVM is used. ► Computational time saving is significant for large training sets. -- Abstract: This paper addresses the reliable classification of images in a 5-class problem. To this end, an automatic recognition system, based on conformal predictors and using Support Vector Machines (SVM) as the underlying algorithm has been developed and applied to the recognition of images in the Thomson Scattering Diagnostic of the TJ–II fusion device. Using such conformal predictor based classifier is a computationally intensive task since it implies to train several SVM models to classify a single example and to perform this training from scratch takes a significant amount of time. In order to improve the classification time efficiency, an approach to the incremental training of SVM has been used as the underlying algorithm. Experimental results show that the overall performance of the new classifier is high, comparable to the one corresponding to the use of standard SVM as the underlying algorithm and there is a significant improvement in time efficiency.
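
    The computational burden that motivates incremental training can be seen in a compact sketch of a much-simplified, split-style conformal predictor: every prediction scores each candidate label against calibration nonconformity scores, and in the full transductive scheme each score would require retraining an SVM, which is what warm-started incremental training avoids. SGDClassifier stands in for a true incremental SVM solver; this is not the TJ-II system's code.

```python
# Much-simplified conformal predictor sketch (illustrative). For brevity the
# training set doubles as the calibration set, which a proper conformal
# predictor would keep separate.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.repeat([0, 1], 100)

model = SGDClassifier(loss="hinge", random_state=0).fit(X, y)


def nonconformity(m, X_, y_):
    s = m.decision_function(X_)
    return np.where(y_ == 1, -s, s)  # disagreeing with the label => large


alphas = nonconformity(model, X, y)  # calibration scores


def conformal_predict(x_new, epsilon=0.05):
    region = []
    for label in (0, 1):
        a = nonconformity(model, x_new.reshape(1, -1), np.array([label]))[0]
        p = (np.sum(alphas >= a) + 1) / (len(alphas) + 1)
        if p > epsilon:  # label remains plausible at the 95% level
            region.append(label)
    return region


print(conformal_predict(np.array([2.5, 2.8])))  # likely [1]
```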

  19. Single Point Incremental Forming using a Dummy Sheet

    DEFF Research Database (Denmark)

    Skjødt, Martin; Silva, Beatriz; Bay, Niels

    2007-01-01

    A new version of single point incremental forming (SPIF) is presented. This version includes a dummy sheet on top of the work piece, thus forming two sheets instead of one. The dummy sheet, which is in contact with the rotating tool pin, is discarded after forming. The new set-up influences....... The possible influence of friction between the two sheets is furthermore investigated. The results show that the use of a dummy sheet reduces wear of the work piece to almost zero, but also causes a decrease in formability. Bulging of the planar sides of the pyramid is reduced and surface roughness...

  20. Combining Compact Representation and Incremental Generation in Large Games with Sequential Strategies

    DEFF Research Database (Denmark)

    Bosansky, Branislav; Xin Jiang, Albert; Tambe, Milind

    2015-01-01

    representation of sequential strategies and linear programming, or by incremental strategy generation of iterative double-oracle methods. In this paper, we present a novel hybrid of these two approaches: the compact-strategy double-oracle (CS-DO) algorithm, which combines the advantages of the compact representation

  1. From incremental to fundamental substitution in chemical alternatives assessment

    DEFF Research Database (Denmark)

    Fantke, Peter; Weber, Roland; Scheringer, Martin

    2015-01-01

    to similarity in chemical structures and, hence, similar hazard profiles between phase-out and substitute chemicals, leading to a rather incremental than fundamental substitution. A hampered phase-out process, the lack of implementing Green Chemistry principles in chemicals design, and lack of Sustainable...... an integrated approach of all stakeholders involved toward more fundamental and function-based substitution by greener and more sustainable alternatives. Our recommendations finally constitute a starting point for identifying further research needs and for improving current alternatives assessment practice....

  2. Diagnosis of small hepatocellular carcinoma by incremental dynamic CT

    International Nuclear Information System (INIS)

    Uchida, Masafumi; Kumabe, Tsutomu; Edamitsu, Osamu

    1993-01-01

    Thirty cases of pathologically confirmed small hepatocellular carcinoma were examined by Incremental Dynamic CT (ICT). ICT scanned the whole liver with a single-breath-hold technique; therefore, effective early contrast enhancement could be obtained for diagnosis. Among the 30 tumors, 26 were detected, a detection rate of 87%. A high detection rate was obtained for tumors more than 20 mm in diameter. Twenty-two of the 26 tumors could be diagnosed correctly. ICT examination was useful for the detection of small hepatocellular carcinoma. (author)

  3. Observers for a class of systems with nonlinearities satisfying an incremental quadratic inequality

    Science.gov (United States)

    Acikmese, Ahmet Behcet; Martin, Corless

    2004-01-01

    We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. Observers are presented which guarantee that the state estimation error converges exponentially to zero.

  4. Dynamic physiological responses to the incremental shuttle walk test in adults

    Directory of Open Access Journals (Sweden)

    Evandro Fornias Sperandio

    Introduction: Understanding the normal dynamic physiological responses to the incremental shuttle walk test might enhance the interpretation of walking performance in clinical settings. Objective: To assess dynamic physiological responses to the incremental shuttle walk test and its predictors in healthy adults. Methods: We assessed the simultaneous rates of change of Δoxygen uptake/Δwalking velocity (ΔVO2/ΔWV), Δheart rate/Δoxygen uptake (ΔHR/ΔVO2), Δventilation/Δcarbon dioxide production (ΔVE/ΔVCO2), and Δtidal volume/Δlinearized ventilation (ΔVT/ΔlnVE) during the incremental shuttle walk test in 100 men and women older than 40 years. Fat and lean body masses (bioimpedance) were also evaluated. Results: We found that the dynamic relationships were not sex-dependent. Participants aged ≥ 70 presented declines in the ΔVO2/ΔWV slope compared to those aged 40-49 (215 ± 69 vs. 288 ± 84 mL.min-1.km.h-1). Obese participants presented shallower slopes for ΔVO2/ΔWV (2.94 ± 0.90 vs. 3.84 ± 1.21 mL.min-1.kg-1.km.h-1) and ΔVT/ΔlnVE (0.57 ± 0.20 vs. 0.67 ± 0.26). We found a negative influence of fat body mass on ΔVT/ΔlnVE (R² = 0.20) and a positive influence of lean body mass on ΔVO2/ΔWV (R² = 0.31), ΔHR/ΔVO2 (R² = 0.25), and ΔVT/ΔlnVE (R² = 0.44). Conclusion: Dynamic relationships during walking were slightly influenced by age, but not sex-dependent. Body composition played an important role in these indices. Our results may provide better interpretation of walking performance in patients with chronic diseases.

  5. An electromyographic-based test for estimating neuromuscular fatigue during incremental treadmill running

    International Nuclear Information System (INIS)

    Camic, Clayton L; Kovacs, Attila J; Hill, Ethan C; Calantoni, Austin M; Yemm, Allison J; Enquist, Evan A; VanDusseldorp, Trisha A

    2014-01-01

    The purposes of the present study were twofold: (1) to determine if the model used for estimating the physical working capacity at the fatigue threshold (PWCFT) from electromyographic (EMG) amplitude data during incremental cycle ergometry could be applied to treadmill running to derive a new neuromuscular fatigue threshold for running, and (2) to compare the running velocities associated with the PWCFT, ventilatory threshold (VT), and respiratory compensation point (RCP). Fifteen college-aged subjects (21.5 ± 1.3 y, 68.7 ± 10.5 kg, 175.9 ± 6.7 cm) performed an incremental treadmill test to exhaustion with bipolar surface EMG signals recorded from the vastus lateralis. There were significant (p < 0.05) mean differences in running velocities between the VT (11.3 ± 1.3 km h−1) and PWCFT (14.0 ± 2.3 km h−1), and between the VT and RCP (14.0 ± 1.8 km h−1), but not between the PWCFT and RCP. The findings of the present study indicated that the PWCFT model could be applied to a single continuous, incremental treadmill test to estimate the maximal running velocity that can be maintained prior to the onset of neuromuscular fatigue. In addition, these findings suggested that the PWCFT, like the RCP, may be used to differentiate the heavy from severe domains of exercise intensity. (paper)

  6. Stable Myoelectric Control of a Hand Prosthesis using Non-Linear Incremental Learning

    Directory of Open Access Journals (Sweden)

    Arjan Gijsberts

    2014-02-01

    Stable myoelectric control of hand prostheses remains an open problem. The only successful human-machine interface is surface electromyography, typically allowing control of a few degrees of freedom. Machine learning techniques may have the potential to remove these limitations, but their performance is thus far inadequate: myoelectric signals change over time under the influence of various factors, deteriorating control performance. It is therefore necessary, in the standard approach, to regularly retrain a new model from scratch. We hereby propose a non-linear incremental learning method in which occasional updates with a modest amount of novel training data allow continual adaptation to the changes in the signals. In particular, Incremental Ridge Regression and an approximation of the Gaussian Kernel known as Random Fourier Features are combined to predict finger forces from myoelectric signals, both finger-by-finger and grouped in grasping patterns. We show that the approach is effective and practically applicable to this problem by first analyzing its performance while predicting single-finger forces. Surface electromyography and finger forces were collected from 10 intact subjects during four sessions spread over two different days; the results of the analysis show that small incremental updates are indeed effective to maintain a stable level of performance. Subsequently, we employed the same method on-line to teleoperate a humanoid robotic arm equipped with a state-of-the-art commercial prosthetic hand. The subject could reliably grasp, carry and release everyday-life objects, enforcing stable grasping irrespective of the signal changes, hand/arm movements and wrist pronation and supination.
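
    A sketch of the two ingredients named above, under illustrative dimensions and constants: Random Fourier Features approximating a Gaussian kernel, and a ridge regression whose normal equations are updated batch by batch rather than refit from scratch. This is a generic reconstruction of the technique, not the authors' code.

```python
# Random Fourier Features + incrementally updated ridge regression.
import numpy as np


class IncrementalRFFRidge:
    def __init__(self, n_inputs, n_features=300, gamma=1.0, lam=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        # w ~ N(0, 2*gamma) approximates the kernel exp(-gamma ||x - x'||^2).
        self.W = rng.normal(scale=np.sqrt(2 * gamma), size=(n_inputs, n_features))
        self.b = rng.uniform(0, 2 * np.pi, n_features)
        self.A = lam * np.eye(n_features)  # accumulated Phi^T Phi + lam * I
        self.c = np.zeros(n_features)      # accumulated Phi^T y

    def _phi(self, X):
        return np.sqrt(2.0 / self.W.shape[1]) * np.cos(X @ self.W + self.b)

    def update(self, X, y):
        # Occasional update with a modest batch of novel training data.
        Phi = self._phi(X)
        self.A += Phi.T @ Phi
        self.c += Phi.T @ y

    def predict(self, X):
        return self._phi(X) @ np.linalg.solve(self.A, self.c)


# Usage: adapt to slowly drifting signals by feeding successive sessions.
model = IncrementalRFFRidge(n_inputs=8)
for session in range(4):
    X = np.random.default_rng(session).normal(size=(50, 8))
    y = X[:, 0] * (1 + 0.1 * session)  # synthetic, slowly drifting target
    model.update(X, y)
print(model.predict(X[:3]))
```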

  7. Investigating the incremental validity of cognitive variables in early mathematics screening.

    Science.gov (United States)

    Clarke, Ben; Shanley, Lina; Kosty, Derek; Baker, Scott K; Cary, Mari Strand; Fien, Hank; Smolkowski, Keith

    2018-03-26

    The purpose of this study was to investigate the incremental validity of a set of domain general cognitive measures added to a traditional screening battery of early numeracy measures. The sample consisted of 458 kindergarten students of whom 285 were designated as severely at-risk for mathematics difficulty. Hierarchical multiple regression results indicated that Wechsler Abbreviated Scales of Intelligence (WASI) Matrix Reasoning and Vocabulary subtests, and Digit Span Forward and Backward measures explained a small, but unique portion of the variance in kindergarten students' mathematics performance on the Test of Early Mathematics Ability-Third Edition (TEMA-3) when controlling for Early Numeracy Curriculum Based Measurement (EN-CBM) screening measures (R² change = .01). Furthermore, the incremental validity of the domain general cognitive measures was relatively stronger for the severely at-risk sample. We discuss results from the study in light of instructional decision-making and note the findings do not justify adding domain general cognitive assessments to mathematics screening batteries. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  8. An Incremental Classification Algorithm for Mining Data with Feature Space Heterogeneity

    Directory of Open Access Journals (Sweden)

    Yu Wang

    2014-01-01

    Feature space heterogeneity often exists in many real world data sets so that some features are of different importance for classification over different subsets. Moreover, the pattern of feature space heterogeneity might dynamically change over time as more and more data are accumulated. In this paper, we develop an incremental classification algorithm, Supervised Clustering for Classification with Feature Space Heterogeneity (SCCFSH), to address this problem. In our approach, supervised clustering is implemented to obtain a number of clusters such that samples in each cluster are from the same class. After the removal of outliers, relevance of features in each cluster is calculated based on their variations in this cluster. The feature relevance is incorporated into distance calculation for classification. The main advantage of SCCFSH lies in the fact that it is capable of solving a classification problem with feature space heterogeneity in an incremental way, which is favorable for online classification tasks with continuously changing data. Experimental results on a series of data sets and application to a database marketing problem show the efficiency and effectiveness of the proposed approach.
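
    The core of the approach can be sketched as: build class-pure clusters, derive per-cluster feature weights from within-cluster variation, and classify by feature-weighted distance. The sketch below is heavily simplified (one cluster per class, no outlier removal, and the incremental update omitted) and is illustrative rather than the published algorithm.

```python
# Heavily simplified sketch of supervised clustering with per-cluster
# feature weights used in the classification distance.
import numpy as np


def build_clusters(X, y):
    clusters = []
    for label in np.unique(y):
        pts = X[y == label]            # one class-pure cluster per class
        var = pts.var(axis=0) + 1e-9
        inv = 1.0 / var
        weights = inv / inv.sum()      # low within-cluster variance => relevant
        clusters.append((pts.mean(axis=0), weights, label))
    return clusters


def classify(x, clusters):
    dists = [np.sum(w * (x - c) ** 2) for c, w, _ in clusters]
    return clusters[int(np.argmin(dists))][2]


rng = np.random.default_rng(0)
X0 = rng.normal([0, 0], [0.1, 5.0], (100, 2))  # feature 0 matters here
X1 = rng.normal([1, 0], [0.1, 5.0], (100, 2))
X, y = np.vstack([X0, X1]), np.repeat([0, 1], 100)
clusters = build_clusters(X, y)
print(classify(np.array([0.9, -3.0]), clusters))  # 1, despite the noisy feature
```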

  9. Incremental Validity of the DSM-5 Section III Personality Disorder Traits With Respect to Psychosocial Impairment.

    Science.gov (United States)

    Simms, Leonard J; Calabrese, William R

    2016-02-01

    Traditional personality disorders (PDs) are associated with significant psychosocial impairment. DSM-5 Section III includes an alternative hybrid personality disorder (PD) classification approach, with both type and trait elements, but relatively little is known about the impairments associated with Section III traits. Our objective was to study the incremental validity of Section III traits--compared to normal-range traits, traditional PD criterion counts, and common psychiatric symptomatology--in predicting psychosocial impairment. To that end, 628 current/recent psychiatric patients completed measures of PD traits, normal-range traits, traditional PD criteria, psychiatric symptomatology, and psychosocial impairments. Hierarchical regressions revealed that Section III PD traits incrementally predicted psychosocial impairment over normal-range personality traits, PD criterion counts, and common psychiatric symptomatology. In contrast, the incremental effects for normal-range traits, PD symptom counts, and common psychiatric symptomatology were substantially smaller than for PD traits. These findings have implications for PD classification and the impairment literature more generally.

  10. Determining frustum depth of 304 stainless steel plates with various diameters and thicknesses by incremental forming

    Energy Technology Data Exchange (ETDEWEB)

    Golabi, Sa'id [University of Kashan, Kashan (Iran, Islamic Republic of); Khazaali, Hossain [Bu-Ali Sina University, Hamedan (Iran, Islamic Republic of)

    2014-08-15

    Nowadays incremental forming is increasingly popular because of its flexibility and cost savings. However, no engineering data are available to manufacturers for forming simple shapes like a frustum by incremental forming, and either expensive experimental tests or finite element analysis (FEA) must be employed to determine the depth of a frustum considering thickness, material, cone diameter, wall angle, feed rate, tool diameter, etc. In this study, the finite element technique, confirmed by experimental study, was employed to develop applicable curves for determining the depth of frustums made from 304 stainless steel (SS304) sheet with various cone angles, thicknesses from 0.3 to 1 mm, and major diameters from 50 to 200 mm using incremental forming. Using these curves, the frustum angle and its depth can be predicted from its thickness and major diameter. The effects of feed rate, vertical pitch and tool diameter on frustum depth and surface quality were also addressed in this study.

  11. Development of a product brand for Gesta Diseño: a case of incremental innovation (Desarrollo de una marca-producto para Gesta Diseño. Un caso de innovación incremental)

    Directory of Open Access Journals (Sweden)

    ANDRÉS JULIÁN HURTADO RUIZ

    2012-01-01

    This case aims to guide the reader through the process of incremental innovation of a product. The path toward innovation begins with a review of design trends and ends with the preparation of a design brief. The case uses the company Gesta Diseño® as a vehicle for guiding an innovation process that allows the company to support the representation of its brand. Within the framework of incremental innovation, Gesta Diseño® pursues small but constant innovations which, supported by its brand image, allow the company to obtain a competitive advantage in the market. The main recommendation of the case is to use an incremental innovation strategy as a tool for strengthening the brand of a small company.

  12. How to set the stage for a full-fledged clinical trial testing 'incremental haemodialysis'.

    Science.gov (United States)

    Casino, Francesco Gaetano; Basile, Carlo

    2017-07-21

    Most people who make the transition to maintenance haemodialysis (HD) therapy are treated with a fixed dose of thrice-weekly HD (3HD/week) regimen without consideration of their residual kidney function (RKF). The RKF provides an effective and naturally continuous clearance of both small and middle molecules, plays a major role in metabolic homeostasis, nutritional status and cardiovascular health, and aids in fluid management. The RKF is associated with better patient survival and greater health-related quality of life. Its preservation is instrumental to the prescription of incremental (1HD/week to 2HD/week) HD. The recently heightened interest in incremental HD has been hindered by the current limitations of the urea kinetic model (UKM), which tend to overestimate the needed dialysis dose in the presence of a substantial RKF. A recent paper by Casino and Basile suggested a variable target model (VTM), which gives more clinical weight to the RKF and allows less frequent HD treatments at lower RKF as opposed to the fixed target model, based on the wrong concept of the clinical equivalence between renal and dialysis clearance. A randomized controlled trial (RCT) enrolling incident patients and comparing incremental HD (prescribed according to the VTM) with the standard 3HD/week schedule and focused on hard outcomes, such as survival and health-related quality of life of patients, is urgently needed. The first step in designing such a study is to compute the 'adequacy lines' and the associated fitting equations necessary for the most appropriate allocation of the patients in the two arms and their correct and safe follow-up. In conclusion, the potentially important clinical and financial implications of the incremental HD render it highly promising and warrant RCTs. The UKM is the keystone for conducting such studies. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.

  13. Cross-correlation of instantaneous phase increments in pressure-flow fluctuations: Applications to cerebral autoregulation

    Science.gov (United States)

    Chen, Zhi; Hu, Kun; Stanley, H. Eugene; Novak, Vera; Ivanov, Plamen Ch.

    2006-03-01

    We investigate the relationship between the blood flow velocities (BFV) in the middle cerebral arteries and beat-to-beat blood pressure (BP) recorded from a finger in healthy and post-stroke subjects during the quasi-steady state after perturbation for four different physiologic conditions: supine rest, head-up tilt, hyperventilation, and CO2 rebreathing in the upright position. To evaluate whether instantaneous BP changes in the steady state are coupled with instantaneous changes in the BFV, we compare dynamical patterns in the instantaneous phases of these signals, obtained from the Hilbert transform, as a function of time. We find that in post-stroke subjects the instantaneous phase increments of BP and BFV exhibit well-pronounced patterns that remain stable in time for all four physiologic conditions, while in healthy subjects these patterns are different, less pronounced, and more variable. We propose an approach based on the cross-correlation of the instantaneous phase increments to quantify the coupling between BP and BFV signals. We find that the maximum correlation strength is different for the two groups and for the different conditions. For healthy subjects the amplitude of the cross-correlation between the instantaneous phase increments of BP and BFV is small and attenuates within 3-5 heartbeats. In contrast, for post-stroke subjects, this amplitude is significantly larger and cross-correlations persist up to 20 heartbeats. Further, we show that the instantaneous phase increments of BP and BFV are cross-correlated even within a single heartbeat cycle. We compare the results of our approach with three complementary methods: direct BP-BFV cross-correlation, transfer function analysis, and phase synchronization analysis. Our findings provide insight into the mechanism of cerebral vascular control in healthy subjects, suggesting that this control mechanism may involve rapid adjustments (within a heartbeat) of the cerebral vessels, so that BFV remains steady.
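
    The phase-increment approach lends itself to a compact implementation: extract instantaneous phases with the Hilbert transform, difference them, and cross-correlate. The sketch below, on synthetic signals, assumes evenly sampled data and is only a minimal stand-in for the authors' analysis.

      import numpy as np
      from scipy.signal import hilbert

      def phase_increments(x):
          """Instantaneous phase via the Hilbert transform, as increments
          between consecutive samples."""
          phi = np.unwrap(np.angle(hilbert(x - np.mean(x))))
          return np.diff(phi)

      def phase_increment_xcorr(bp, bfv, max_lag=50):
          """Cross-correlation of the instantaneous phase increments of two
          signals (e.g., BP and BFV) for lags -max_lag..max_lag samples."""
          a, b = phase_increments(bp), phase_increments(bfv)
          a = (a - a.mean()) / a.std()
          b = (b - b.mean()) / b.std()
          n = min(len(a), len(b))
          lags = np.arange(-max_lag, max_lag + 1)
          corr = [np.mean(a[max(0, -k):n - max(0, k)] *
                          b[max(0, k):n - max(0, -k)]) for k in lags]
          return lags, np.array(corr)

      # Toy example with two weakly coupled noisy oscillations:
      t = np.linspace(0, 60, 6000)
      bp = np.sin(2 * np.pi * t) + 0.3 * np.random.randn(t.size)
      bfv = np.sin(2 * np.pi * t + 0.5) + 0.3 * np.random.randn(t.size)
      lags, c = phase_increment_xcorr(bp, bfv)
      print("max |cross-correlation|:", float(np.abs(c).max()))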

  14. Ductility, strength and hardness relation after prior incremental deformation (ratcheting) of austenitic steel

    International Nuclear Information System (INIS)

    Kussmaul, K.; Diem, H.K.; Wachter, O.

    1993-01-01

    Experimental investigations into the stress/strain behavior of the niobium-stabilized austenitic steel with the German designation X6 CrNiNb 18 10 showed that a limited, incrementally applied prior deformation reduces the total deformation capability only by the amount of the prior deformation. In particular, the small changes in the reduction of area showed that the basically ductile deformation behavior is not changed by the type of prior loading. There is a correlation between the amount of deformation and the increase in hardness, and the changes in hardness can be correlated with the changes in material properties. In low cycle fatigue tests with alternating temperature, an incremental increase in total strain (ratcheting) was noted, depending on the strain range applied.

  15. Sufficient conditions for a period incrementing big bang bifurcation in one-dimensional maps

    International Nuclear Information System (INIS)

    Avrutin, V; Granados, A; Schanz, M

    2011-01-01

    Typically, big bang bifurcation occurs for one (or higher)-dimensional piecewise-defined discontinuous systems whenever two border collision bifurcation curves collide transversely in the parameter space. At that point, two (feasible) fixed points collide with one boundary in state space and become virtual, and, in the one-dimensional case, the map becomes continuous. Depending on the properties of the map near the codimension-two bifurcation point, there exist different scenarios regarding how the infinite number of periodic orbits are born, mainly the so-called period adding and period incrementing. In our work we prove that, in order to undergo a big bang bifurcation of the period incrementing type, it is sufficient for a piecewise-defined one-dimensional map that the colliding fixed points are attractive and with associated eigenvalues of different signs.

  16. Sufficient conditions for a period incrementing big bang bifurcation in one-dimensional maps

    Science.gov (United States)

    Avrutin, V.; Granados, A.; Schanz, M.

    2011-09-01

    Typically, big bang bifurcation occurs for one (or higher)-dimensional piecewise-defined discontinuous systems whenever two border collision bifurcation curves collide transversely in the parameter space. At that point, two (feasible) fixed points collide with one boundary in state space and become virtual, and, in the one-dimensional case, the map becomes continuous. Depending on the properties of the map near the codimension-two bifurcation point, there exist different scenarios regarding how the infinite number of periodic orbits are born, mainly the so-called period adding and period incrementing. In our work we prove that, in order to undergo a big bang bifurcation of the period incrementing type, it is sufficient for a piecewise-defined one-dimensional map that the colliding fixed points are attractive and with associated eigenvalues of different signs.
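
    A minimal numerical sketch of period incrementing: the piecewise-linear discontinuous map below has two virtual fixed points with attracting slopes of opposite sign (the sufficient condition named above), and sweeping its offset parameter makes the attractor's period step through successive integers. The map and parameter values are illustrative choices, not taken from the paper.

      import numpy as np

      # Piecewise-defined discontinuous 1-D map:
      #   f(x) = a*x + mu       for x < 0   (0 < a < 1)
      #   f(x) = b*x + mu - 1   for x >= 0  (-1 < b < 0)
      # Slopes of different signs, both attracting, as in the abstract.
      def f(x, mu, a=0.5, b=-0.5):
          return a * x + mu if x < 0 else b * x + mu - 1.0

      def attractor_period(mu, n_transient=2000, n_max=200):
          x = 0.1
          for _ in range(n_transient):     # settle onto the attractor
              x = f(x, mu)
          orbit = [x]
          for _ in range(n_max):
              x = f(x, mu)
              if abs(x - orbit[0]) < 1e-9:  # returned to the first point
                  return len(orbit)
              orbit.append(x)
          return None                       # no short period detected

      # Sweeping mu shows periods stepping one by one (period incrementing).
      for mu in np.linspace(0.05, 0.95, 10):
          print(f"mu = {mu:.2f}  period = {attractor_period(mu)}")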

  17. Incremental electrohydraulic forming - A new approach for the manufacture of structured multifunctional sheet metal blanks

    Science.gov (United States)

    Djakow, Eugen; Springer, Robert; Homberg, Werner; Piper, Mark; Tran, Julian; Zibart, Alexander; Kenig, Eugeny

    2017-10-01

    Electrohydraulic Forming (EHF) processes permit the production of complex, sharp-edged geometries even when high-strength materials are used. Unfortunately, the forming zone is often limited compared to other sheet metal forming processes. The use of a special industrial-robot-based tool setup and an incremental process strategy could provide a promising solution to this problem. This paper describes such an innovative approach using an electrohydraulic incremental forming machine, which can be employed to manufacture large, multifunctional and complex part geometries in steel, aluminium, magnesium and reinforced plastics, as employed in lightweight constructions or heating elements.

  18. The impact of weather conditions on dynamics of Hylocomium splendens annual increment and net production in forest communities of forest-steppe zone in Khakassia

    Directory of Open Access Journals (Sweden)

    I. A. Goncharova

    2015-12-01

    Full Text Available The dynamics of annual increments of the green moss Hylocomium splendens (Hedw.) Schimp. in B.S.G. in the Khakassia forest-steppe zone have been studied. The linear and phytomass increments of the moss were investigated in different habitats over 6 years, and the aboveground annual production of H. splendens in the phytocenosis was estimated. Linear increments of H. splendens growing under the tree canopy and in openings between trees were not significantly different, whereas phytomass increments under the tree canopy were significantly higher than in the openings. The density of the moss mats and the proportion between leaves and stems were calculated. Climatic factors were found to differ in the degree and duration of their influence on moss increments in different habitats. Linear increments of H. splendens in different habitats responded synchronously to changes in weather factors: air temperature was most important at the beginning and end of the vegetation period, while the amount of precipitation was more important in the middle of the growth period. Phytomass increments of H. splendens responded differently to weather conditions in different habitats: increments under the tree canopy were not sensitive to air temperature and were more sensitive to precipitation in the middle of the growth period than those in openings between trees. The specificity of the climatic influence on biomass growth thus depends on habitat conditions.

  19. A Batch-Incremental Video Background Estimation Model using Weighted Low-Rank Approximation of Matrices

    KAUST Repository

    Dutta, Aritra

    2017-07-02

    Principal component pursuit (PCP) is a state-of-the-art approach for background estimation problems. Due to their high computational cost, PCP algorithms, such as robust principal component analysis (RPCA) and its variants, are not feasible for processing high-definition videos. To avoid the curse of dimensionality in those algorithms, several methods have been proposed to solve the background estimation problem in an incremental manner. We propose a batch-incremental background estimation model using a special weighted low-rank approximation of matrices. Through experiments with real and synthetic video sequences, we demonstrate that our method is superior to the state-of-the-art background estimation algorithms such as GRASTA, ReProCS, incPCP, and GFL.

  20. A Batch-Incremental Video Background Estimation Model using Weighted Low-Rank Approximation of Matrices

    KAUST Repository

    Dutta, Aritra; Li, Xin; Richtarik, Peter

    2017-01-01

    Principal component pursuit (PCP) is a state-of-the-art approach for background estimation problems. Due to their high computational cost, PCP algorithms, such as robust principal component analysis (RPCA) and its variants, are not feasible for processing high-definition videos. To avoid the curse of dimensionality in those algorithms, several methods have been proposed to solve the background estimation problem in an incremental manner. We propose a batch-incremental background estimation model using a special weighted low-rank approximation of matrices. Through experiments with real and synthetic video sequences, we demonstrate that our method is superior to the state-of-the-art background estimation algorithms such as GRASTA, ReProCS, incPCP, and GFL.
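
    To make the batch-incremental idea concrete, the sketch below folds each incoming batch of frames into a running background via a rank-1 SVD approximation. This is a plain unweighted stand-in, not the paper's special weighted low-rank method; the blending factor alpha is an assumption.

      import numpy as np

      def update_background(bg, batch, alpha=0.5):
          """bg: current background (h*w,) or None; batch: (h*w, k) frames
          stacked as columns. Returns the blended rank-1 background."""
          U, s, Vt = np.linalg.svd(batch, full_matrices=False)
          # Dominant component per pixel; median of Vt[0] fixes the SVD
          # sign ambiguity and the per-frame scale.
          rank1 = U[:, 0] * s[0] * np.median(Vt[0])
          return rank1 if bg is None else alpha * bg + (1 - alpha) * rank1

      rng = np.random.default_rng(0)
      static = rng.uniform(0, 1, 64 * 64)         # true background
      bg = None
      for _ in range(10):                         # ten incoming batches
          frames = static[:, None] + 0.05 * rng.standard_normal((64 * 64, 8))
          bg = update_background(bg, frames)
      print("background error:", float(np.abs(bg - static).mean()))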

  1. Distance-independent individual tree diameter-increment model for Thuya [Tetraclinis articulata (VAHL.) MAST.] stands in Tunisia

    OpenAIRE

    T. Sghaier; M. Tome; J. Tome; M. Sanchez-Gonzalez; I. Cañellas; R. Calama

    2013-01-01

    Aim of study: The aim of the work was to develop an individual tree diameter-increment model for Thuya (Tetraclinis articulata) in Tunisia. Area of study: The natural Tetraclinis articulata stands at Jbel Lattrech in north-eastern Tunisia. Material and methods: Data came from 200 trees located in 50 sample plots. The diameter at age t and the diameter increment for the last five years, obtained from cores taken at breast height, were measured for each tree. Four difference equations derived f...

  2. The limit distribution of the maximum increment of a random walk with dependent regularly varying jump sizes

    DEFF Research Database (Denmark)

    Mikosch, Thomas Valentin; Moser, Martin

    2013-01-01

    We investigate the maximum increment of a random walk with heavy-tailed jump size distribution. Here heavy-tailedness is understood as regular variation of the finite-dimensional distributions. The jump sizes constitute a strictly stationary sequence. Using a continuous mapping argument acting on the point processes of the normalized jump sizes, we prove that the maximum increment of the random walk converges in distribution to a Fréchet distributed random variable.
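
    A quick Monte Carlo sketch of the object under study, the maximum increment max over i<j of (S_j - S_i). For simplicity the jumps below are i.i.d. symmetrized Pareto, a stand-in for the dependent stationary sequence treated in the paper; the heavy Fréchet-type right tail shows up as a sample maximum far above the median.

      import numpy as np

      rng = np.random.default_rng(1)

      def max_increment(n=10_000, tail_index=1.5):
          """Maximum increment of one random walk with regularly varying
          (Pareto-tailed) jumps, via a running minimum of partial sums."""
          jumps = rng.pareto(tail_index, n) * rng.choice([-1.0, 1.0], n)
          s = np.concatenate(([0.0], np.cumsum(jumps)))
          running_min = np.minimum.accumulate(s)
          return float(np.max(s - running_min))

      samples = np.array([max_increment() for _ in range(200)])
      # Heavy right tail: the sample maximum dwarfs the median.
      print("median:", np.median(samples), "max:", samples.max())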

  3. An application of the J-integral to an incremental analysis of blunting crack behavior

    International Nuclear Information System (INIS)

    Merkle, J.G.

    1989-01-01

    This paper describes an analytical approach to estimating the elastic-plastic stresses and strains near the tip of a blunting crack with a finite root radius. Rice's original derivation of the path-independent J-integral considered the possibility of a finite crack tip root radius. For this problem Creager's elastic analysis gives the relation between the stress intensity factor K_I and the near-tip stresses, and it can be shown that the relation K_I² = E′J holds when the root radius is finite. Recognizing that elastic-plastic behavior is incrementally linear then allows a derivation to be performed for a bielastic specimen having a crack tip region of reduced modulus, and the result differentiated to estimate elastic-plastic behavior. The result is the incremental form of Neuber's equation. This result does not require the assumption of any particular stress-strain relation. However, by assuming a pure power-law stress-strain relation and using Ilyushin's principle, the ordinary deformation theory form of Neuber's equation, K_σ·K_ε = K_t², is obtained. Applications of the incremental form of Neuber's equation have already been made to fatigue and fracture analysis. This paper helps to provide a theoretical basis for these methods, previously considered semiempirical. 26 refs., 4 figs
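
    The deformation-theory form quoted above admits a one-line worked example: given the elastic concentration factor K_t and an assumed post-yield stress concentration K_σ, the strain concentration follows from K_σ·K_ε = K_t². The numbers are illustrative only.

      # Worked example of Neuber's deformation-theory relation
      # K_sigma * K_epsilon = K_t**2 (illustrative numbers, not from paper).
      K_t = 3.0        # elastic stress concentration factor of the notch
      K_sigma = 1.8    # stress concentration after local yielding (assumed)
      K_epsilon = K_t**2 / K_sigma
      print(f"K_epsilon = {K_epsilon:.2f}")   # 9.0 / 1.8 = 5.0
      # Local strain rises (5.0 > 3.0) by the same factor that local stress
      # relaxes (1.8 < 3.0), keeping the product fixed at K_t**2.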

  4. Exploiting Outage and Error Probability of Cooperative Incremental Relaying in Underwater Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Hina Nasir

    2016-07-01

    Full Text Available This paper makes a two-fold contribution to Underwater Wireless Sensor Networks (UWSNs): a performance analysis of incremental relaying in terms of outage and error probability, and, based on that analysis, the proposition of two new cooperative routing protocols. For the first contribution, a three-step procedure is carried out: a system model is presented, the number of available relays is determined, and, based on the cooperative incremental retransmission methodology, closed-form expressions for outage and error probability are derived. For the second contribution, Adaptive Cooperation in Energy (ACE) efficient depth-based routing and Enhanced-ACE (E-ACE) are presented. In the proposed model, a feedback mechanism indicates the success or failure of data transmission. If the direct transmission is successful, there is no need for relaying by cooperative relay nodes; in case of failure, the available relays retransmit the data one by one until the desired signal quality is achieved at the destination. Simulation results show that ACE and E-ACE significantly improve network performance, i.e., throughput, when compared with other incremental relaying protocols such as Cooperative Automatic Repeat reQuest (CARQ). E-ACE and ACE achieve 69% and 63% more throughput, respectively, than CARQ in a hard underwater environment.

  5. Performance Analysis of Selective Decode-and-Forward Multinode Incremental Relaying with Maximal Ratio Combining

    KAUST Repository

    Hadjtaieb, Amir

    2013-09-12

    In this paper, we propose an incremental multinode relaying protocol with an arbitrary number N of relay nodes that allows an efficient use of the channel spectrum. The destination combines the received signals from the source and the relays using maximal ratio combining (MRC). The transmission ends successfully once the accumulated signal-to-noise ratio (SNR) exceeds a predefined threshold. The number of relays participating in the transmission is adapted to the channel conditions based on feedback from the destination. The use of incremental relaying yields a higher spectral efficiency. Moreover, the symbol error probability (SEP) performance is enhanced by using MRC at the relays: each relay overhears the signals from the source and all previous relays and combines them using MRC. The proposed protocol differs from most existing relaying protocols in that it combines both incremental relaying and MRC at the relays for a multinode topology. Our analyses for a decode-and-forward mode show that: (i) compared to existing multinode relaying schemes, the proposed scheme can essentially achieve the same SEP performance but with a smaller average number of time slots, and (ii) compared to schemes without MRC at the relays, the proposed scheme can achieve approximately a 3 dB gain.
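
    The stopping rule (accumulate SNR by MRC until a threshold is crossed, one relay per extra time slot) is easy to sketch by Monte Carlo. The Rayleigh-fading assumption (exponential SNR per hop) and all parameter values below are illustrative, not the paper's settings.

      import numpy as np

      rng = np.random.default_rng(2)

      def time_slots(n_relays=4, mean_snr=2.0, threshold=6.0):
          """Number of time slots until the MRC-accumulated SNR at the
          destination crosses the threshold (or all relays are used)."""
          acc = rng.exponential(mean_snr)        # direct source->destination
          slots = 1
          while acc < threshold and slots <= n_relays:
              acc += rng.exponential(mean_snr)   # next relay's MRC contribution
              slots += 1
          return slots

      avg = np.mean([time_slots() for _ in range(100_000)])
      print(f"average number of time slots: {avg:.2f}")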

  6. Individualized 6-mercaptopurine increments in consolidation treatment of childhood acute lymphoblastic leukemia

    DEFF Research Database (Denmark)

    Tulstrup, Morten; Frandsen, Thomas L; Abrahamsson, Jonas

    2018-01-01

    ... minimal residual disease (MRD) positivity and event-free survival. METHODS: 392 patients were randomized to experimental and 396 to standard therapy. Patients allocated to standard therapy received oral 6-mercaptopurine (25 mg/m2/day) from days 30 to 85, while the experimental arm received stepwise increments of an additional 25 mg/m2/day beginning on days 50 and/or 71 unless dose-limiting myelosuppression had occurred. RESULTS: In the experimental arm, 166 patients (42%) received one dose increment, and 62 (16%) received two. Fifty-seven of 387 (15%) patients in the experimental arm were MRD positive at end of consolidation vs 77 of 389 (20%) in the control arm (P = .08). Five-year probability of event-free survival was 0.89 (95% CI: 0.85-0.93) in the experimental arm vs 0.93 (0.90-0.96) in the control arm (P = .13). The median accumulated length of 6-mercaptopurine treatment interruptions was 7 (IQR...

  7. Incremental first pass technique to measure left ventricular ejection fraction

    International Nuclear Information System (INIS)

    Kocak, R.; Gulliford, P.; Hoggard, C.; Critchley, M.

    1980-01-01

    An incremental first pass technique was devised to assess the acute effects of any drug on left ventricular ejection fraction (LVEF), with or without a physiological stress. In particular, the effects of the vasodilator isosorbide dinitrate on LVEF before and after exercise were studied in 11 patients who had suffered cardiac failure. This was achieved by recording the passage of 99mTc pertechnetate through the heart at each stage of the study using a gamma camera computer system. Consistent values for four consecutive first pass measurements without exercise or drug in normal subjects illustrated the reproducibility of the technique. There was no significant difference between LVEF values obtained at rest and exercise before or after oral isosorbide dinitrate, with the exception of one patient with gross mitral regurgitation. The advantages of the incremental first pass technique are that the patient need not be in sinus rhythm, the effects of physiological intervention may be studied, and tests may be repeated at various intervals during long-term follow-up of patients. A disadvantage of the method is the limitation on the number of sequential measurements which can be carried out, due to the amount of radioactivity injected. (U.K.)

  8. Successive 1-Month Weight Increments in Infancy Can Be Used to Screen for Faltering Linear Growth.

    Science.gov (United States)

    Onyango, Adelheid W; Borghi, Elaine; de Onis, Mercedes; Frongillo, Edward A; Victora, Cesar G; Dewey, Kathryn G; Lartey, Anna; Bhandari, Nita; Baerug, Anne; Garza, Cutberto

    2015-12-01

    Linear growth faltering in the first 2 y contributes greatly to a high stunting burden, and prevention is hampered by the limited capacity in primary health care for timely screening and intervention. This study aimed to determine an approach to predicting long-term stunting from consecutive 1-mo weight increments in the first year of life. By using the reference sample of the WHO velocity standards, the analysis explored patterns of consecutive monthly weight increments among healthy infants. Four candidate screening thresholds of successive increments that could predict stunting were considered, and one was selected for further testing. The selected threshold was applied in a cohort of Bangladeshi infants to assess its predictive value for stunting at ages 12 and 24 mo. Between birth and age 12 mo, 72.6% of infants in the WHO sample tracked within 1 SD of their weight and length. The selected screening criterion ("event") was 2 consecutive monthly increments below the 15th percentile. Bangladeshi infants were born relatively small and, on average, tracked downward from approximately age 6 mo. If the screening strategy is effective, the estimated preventable proportion in the group who experienced the event would be 34% at 12 mo and 24% at 24 mo. This analysis offers an approach for frontline workers to identify children at risk of stunting, allowing for timely initiation of preventive measures. It opens avenues for further investigation into evidence-informed application of the WHO growth velocity standards.
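
    The screening rule itself reduces to a few lines: flag an infant when two consecutive monthly weight increments fall below the 15th percentile. The per-interval cut-offs below are invented placeholders; a real implementation would use the published WHO velocity tables.

      # Hypothetical 15th-percentile cut-offs (grams gained) per monthly
      # interval; real values come from the WHO velocity standards.
      P15_BY_INTERVAL = {0: 900, 1: 800, 2: 650, 3: 500, 4: 430, 5: 370}

      def flag_risk(monthly_increments_g):
          """monthly_increments_g[i] = weight gain (g) from month i to i+1.
          Returns True if two consecutive increments fall below P15."""
          below = [inc < P15_BY_INTERVAL.get(i, 300)
                   for i, inc in enumerate(monthly_increments_g)]
          return any(a and b for a, b in zip(below, below[1:]))

      print(flag_risk([950, 850, 700, 520]))  # False: no two consecutive lows
      print(flag_risk([950, 700, 400, 300]))  # True: three consecutive lows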

  9. Observers for Systems with Nonlinearities Satisfying an Incremental Quadratic Inequality

    Science.gov (United States)

    Acikmese, Ahmet Behcet; Corless, Martin

    2004-01-01

    We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. These observer results unify earlier results in the literature and extend them to some additional classes of nonlinearities. Observers are presented which guarantee that the state estimation error converges exponentially to zero. Observer design involves solving linear matrix inequalities for the observer gain matrices. The results are illustrated by application to a simple model of an underwater vehicle.

  10. Top-down attention based on object representation and incremental memory for knowledge building and inference.

    Science.gov (United States)

    Kim, Bumhwi; Ban, Sang-Woo; Lee, Minho

    2013-10-01

    Humans can efficiently perceive arbitrary visual objects based on an incremental learning mechanism with selective attention. This paper proposes a new task specific top-down attention model to locate a target object based on its form and color representation along with a bottom-up saliency based on relativity of primitive visual features and some memory modules. In the proposed model top-down bias signals corresponding to the target form and color features are generated, which draw the preferential attention to the desired object by the proposed selective attention model in concomitance with the bottom-up saliency process. The object form and color representation and memory modules have an incremental learning mechanism together with a proper object feature representation scheme. The proposed model includes a Growing Fuzzy Topology Adaptive Resonance Theory (GFTART) network which plays two important roles in object color and form biased attention; one is to incrementally learn and memorize color and form features of various objects, and the other is to generate a top-down bias signal to localize a target object by focusing on the candidate local areas. Moreover, the GFTART network can be utilized for knowledge inference which enables the perception of new unknown objects on the basis of the object form and color features stored in the memory during training. Experimental results show that the proposed model is successful in focusing on the specified target objects, in addition to the incremental representation and memorization of various objects in natural scenes. In addition, the proposed model properly infers new unknown objects based on the form and color features of previously trained objects.

  11. Single Point Incremental Forming to increase material knowledge and production flexibility

    Science.gov (United States)

    Habraken, A. M.

    2016-08-01

    Nowadays, manufactured pieces can be divided into two groups: mass production and production of parts in low volumes. Within the second group (prototyping or small batch production), an emerging solution relies on Incremental Sheet Forming, or ISF. ISF refers to processes where the plastic deformation occurs by repeated contact with a relatively small tool. More specifically, many publications over the past decade investigate Single Point Incremental Forming (SPIF), where the final shape is determined only by the tool movement. This manufacturing process is characterized by the forming of sheets by means of a CNC-controlled generic tool stylus, with the sheets clamped by means of a non-workpiece-specific clamping system and in the absence of a partial or a full die. The advantage is the absence of tooling requirements and often enhanced formability; however, the process poses challenges in terms of process control and accuracy assurance. Note that the most commonly used materials in incremental forming are aluminum and steel alloys; other alloys are also used, especially for medical industry applications, such as cobalt and chromium alloys, stainless steel and titanium alloys. Some scientists have applied incremental forming to PVC plates, and others to sandwich panels composed of polypropylene with mild steel and to aluminum metallic foams with aluminum sheet metal. Micro incremental forming of thin foils has also been developed. Starting from the scatter in the results of Finite Element (FE) simulations when one tries to predict the tool force (see the SPIF benchmark of the 2014 Numisheet conference), we will see how SPIF, and even micro SPIF (the process applied to thin metallic sheet with a few grains within the thickness), allows the material behavior to be investigated. This lecture will focus on the identification of constitutive laws, on the SPIF forming mechanisms and formability, as well as on the failure mechanism. Different hypotheses have been proposed to explain SPIF formability; they will be...

  12. Single Point Incremental Forming to increase material knowledge and production flexibility

    International Nuclear Information System (INIS)

    Habraken, A.M.

    2016-01-01

    Nowadays, manufactured pieces can be divided into two groups: mass production and production of parts in low volumes. Within the second group (prototyping or small batch production), an emerging solution relies on Incremental Sheet Forming, or ISF. ISF refers to processes where the plastic deformation occurs by repeated contact with a relatively small tool. More specifically, many publications over the past decade investigate Single Point Incremental Forming (SPIF), where the final shape is determined only by the tool movement. This manufacturing process is characterized by the forming of sheets by means of a CNC-controlled generic tool stylus, with the sheets clamped by means of a non-workpiece-specific clamping system and in the absence of a partial or a full die. The advantage is the absence of tooling requirements and often enhanced formability; however, the process poses challenges in terms of process control and accuracy assurance. Note that the most commonly used materials in incremental forming are aluminum and steel alloys; other alloys are also used, especially for medical industry applications, such as cobalt and chromium alloys, stainless steel and titanium alloys. Some scientists have applied incremental forming to PVC plates, and others to sandwich panels composed of polypropylene with mild steel and to aluminum metallic foams with aluminum sheet metal. Micro incremental forming of thin foils has also been developed. Starting from the scatter in the results of Finite Element (FE) simulations when one tries to predict the tool force (see the SPIF benchmark of the 2014 Numisheet conference), we will see how SPIF, and even micro SPIF (the process applied to thin metallic sheet with a few grains within the thickness), allows the material behavior to be investigated. This lecture will focus on the identification of constitutive laws, on the SPIF forming mechanisms and formability, as well as on the failure mechanism. Different hypotheses have been proposed to explain SPIF formability; they will be...

  13. An Approach to Incremental Design of Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Pop, Traian

    2001-01-01

    In this paper we present an approach to incremental design of distributed embedded systems for hard real-time applications. We start from an already existing system running a set of applications, and the design problem is to implement new functionality on this system. Thus, we propose mapping strategies for the new functionality such that the already running functionality is not disturbed and there is a good chance that, later, further new functionality can easily be mapped onto the resulting system. The mapping and scheduling for hard real-time embedded systems are considered in the context of a realistic communication...

  14. Automating the Incremental Evolution of Controllers for Physical Robots

    DEFF Research Database (Denmark)

    Faina, Andres; Jacobsen, Lars Toft; Risi, Sebastian

    2017-01-01

    "…the evolution of digital objects." The work presented here investigates how fully autonomous evolution of robot controllers can be realized in hardware, using an industrial robot and a marker-based computer vision system. In particular, this article presents an approach to automating the reconfiguration of the test environment and shows that it is possible, for the first time, to incrementally evolve a neural robot controller for different obstacle avoidance tasks with no human intervention. Importantly, the system offers a high level of robustness and precision that could potentially open up the range...

  15. Failure mechanisms in single-point incremental forming of metals

    DEFF Research Database (Denmark)

    Silva, Maria B.; Nielsen, Peter Søe; Bay, Niels

    2011-01-01

    Recent years have seen the development of two different views on how failure develops in single-point incremental forming (SPIF). Today, researchers are split between those claiming that fracture is always preceded by necking and those considering that fracture occurs with suppression of necking. Each... on formability limits and development of fracture. The unified view reconciles the aforementioned explanations of the role of necking in fracture and is consistent with the experimental observations that have been reported in past years. The work is performed on aluminium AA1050-H111 sheets...

  16. Defense Enterprise Accounting and Management System-Increment 1 (DEAMS Inc 1)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report: Defense Enterprise Accounting and Management System-Increment 1 (DEAMS Inc 1). DEAMS is intended to record financial information accurately and in conformance with Generally Accepted Accounting Principles, and to comply with Congressional requirements of the Chief Financial ... Program contact: Phone 937-257-2714, DSN 787-2714. Date Assigned: August 17, 2015.

  17. Lead 210 and moss-increment dating of two Finnish Sphagnum hummocks

    International Nuclear Information System (INIS)

    El-Daoushy, F.

    1982-01-01

    A comparison is presented of ²¹⁰Pb dating data with moss-increment dates of selected peat material from Finland. The ²¹⁰Pb measurements were carried out by determining the granddaughter product ²¹⁰Po by means of isotope dilution. The ages in ²¹⁰Pb years were calculated using the constant initial concentration and the constant rate of supply models. (U.K.)

  18. Raising Cervical Cancer Awareness: Analysing the Incremental Efficacy of Short Message Service

    Science.gov (United States)

    Lemos, Marina Serra; Rothes, Inês Areal; Oliveira, Filipa; Soares, Luisa

    2017-01-01

    Objective: To evaluate the incremental efficacy of a Short Message Service (SMS) combined with a brief video intervention in increasing the effects of a health education intervention for cervical cancer prevention, over and beyond a video-alone intervention, with respect to key determinants of health behaviour change--knowledge, motivation and…

  19. The Interpersonal Measure of Psychopathy: Construct and Incremental Validity in Male Prisoners

    Science.gov (United States)

    Zolondek, Stacey; Lilienfeld, Scott O.; Patrick, Christopher J.; Fowler, Katherine A.

    2006-01-01

    The authors examined the construct and incremental validity of the Interpersonal Measure of Psychopathy (IM-P), a relatively new instrument designed to detect interpersonal behaviors associated with psychopathy. Observers of videotaped Psychopathy Checklist-Revised (PCL-R) interviews rated male prisoners (N = 93) on the IM-P. The IM-P correlated…

  20. Investigating Postgraduate College Admission Interviews: Generalizability Theory Reliability and Incremental Predictive Validity

    Science.gov (United States)

    Arce-Ferrer, Alvaro J.; Castillo, Irene Borges

    2007-01-01

    The use of face-to-face interviews is controversial for college admissions decisions in light of the lack of availability of validity and reliability evidence for most college admission processes. This study investigated reliability and incremental predictive validity of a face-to-face postgraduate college admission interview with a sample of…

  1. Gradient nanostructured surface of a Cu plate processed by incremental frictional sliding

    DEFF Research Database (Denmark)

    Hong, Chuanshi; Huang, Xiaoxu; Hansen, Niels

    2015-01-01

    The flat surface of a Cu plate was processed by incremental frictional sliding at liquid nitrogen temperature. The surface treatment results in a hardened gradient surface layer as thick as 1 mm in the Cu plate, which contains a nanostructured layer on the top with a boundary spacing of the order...

  2. TCAM-based High Speed Longest Prefix Matching with Fast Incremental Table Updates

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Kragelund, A.; Berger, Michael Stübert

    2013-01-01

    ...and consequently a higher throughput of the network search engine, since the TCAM downtime caused by incremental updates is eliminated. The LPM scheme is described in HDL for FPGA implementation and compared to an existing scheme for customized CAM circuits. The paper shows that the proposed scheme can process...
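
    For readers unfamiliar with LPM semantics, the sketch below implements longest-prefix matching with incremental inserts in a plain software trie. It illustrates the lookup/update semantics only, not the paper's TCAM-based scheme.

      # Binary trie for longest-prefix matching with incremental updates.
      class LpmTable:
          def __init__(self):
              self.root = {}

          def insert(self, prefix_bits: str, next_hop: str):
              """Incremental update: add/overwrite one prefix, no rebuild."""
              node = self.root
              for b in prefix_bits:
                  node = node.setdefault(b, {})
              node["hop"] = next_hop

          def lookup(self, addr_bits: str):
              node, best = self.root, None
              for b in addr_bits:
                  if "hop" in node:
                      best = node["hop"]        # longest match so far
                  node = node.get(b)
                  if node is None:
                      return best
              return node.get("hop", best)

      t = LpmTable()
      t.insert("10", "A")          # prefix 10/2
      t.insert("1011", "B")        # prefix 1011/4, more specific
      print(t.lookup("10110111"))  # B (longest prefix wins)
      print(t.lookup("10010111"))  # A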

  3. The use of single point incremental forming for customized implants of unicondylar knee arthroplasty: a review

    Directory of Open Access Journals (Sweden)

    Pankaj Kailasrao Bhoyar

    Full Text Available Abstract Introduction Implantable devices have an enormous market. These products are basically made by traditional manufacturing processes, but for custom-made implants Incremental Sheet Forming is a paramount alternative. Single Point Incremental Forming (SPIF) is a manufacturing process used to form intricate, asymmetrical components. It forms the component by stretching and bending while maintaining the material's crystal structure, and it can be performed on a conventional Computer Numerical Control (CNC) milling machine. Review This review paper elaborates the various manufacturing processes carried out on various biocompatible metallic and non-metallic customised implantable devices. Conclusion The Ti-6Al-4V alloy is broadly used for biomedical implants, but the vanadium in this alloy is toxic, so the alloy is not suitable for implants; researchers' attention has therefore turned towards non-toxic, biocompatible materials. For this reason, a novel approach was developed in order to enhance the mechanical properties of this material. The development of incremental forming techniques can improve the formability of existing alloys and may meet the current strict requirements for the performance of dies and punches.

  4. Incremental Validity of Personality Measures in Predicting Underwater Performance and Adaptation.

    Science.gov (United States)

    Colodro, Joaquín; Garcés-de-Los-Fayos, Enrique J; López-García, Juan J; Colodro-Conde, Lucía

    2015-03-17

    Intelligence and personality traits are currently considered effective predictors of human behavior and job performance. However, there are few studies about their relevance in the underwater environment. Data from a sample of military personnel performing scuba diving courses were analyzed with regression techniques, testing the contribution of individual differences and ascertaining the incremental validity of personality in an environment with extreme psychophysical demands. The results confirmed the incremental validity of personality traits (ΔR² = .20, f² = .25) over the predictive contribution of general mental ability (ΔR² = .07, f² = .08) in divers' performance. Moreover, personality (R²_L = .34) also showed higher validity than general mental ability (R²_L = .09) in predicting underwater adaptation. The ROC curve indicated 86% of the maximum possible discrimination power for the prediction of underwater adaptation (AUC = .86). These results support personality traits as predictors of an effective response to the changing circumstances of military scuba diving. They may also improve the understanding of the behavioral effects and psychophysiological complications of diving, and can provide guidance for psychological intervention and prevention of risk in this extreme environment.

  5. Finite element simulation and Experimental verification of Incremental Sheet metal Forming

    Science.gov (United States)

    Kaushik Yanamundra, Krishna; Karthikeyan, R., Dr.; Naranje, Vishal, Dr

    2018-04-01

    Incremental sheet metal forming is now a proven manufacturing technique that can be employed to obtain application-specific, customized, symmetric or asymmetric shapes required by the automobile or biomedical industries for specific purposes such as car body parts, dental implants or knee implants. Finite element simulation of the metal forming process is performed successfully using the explicit dynamics analysis of commercial FE software. The simulation is mainly useful for optimization of the process as well as design of the final product. This paper focuses on simulating the incremental sheet metal forming process in ABAQUS and validating the results using experimental methods. The test shapes are trapezoid, dome and elliptical shapes, whose G-codes are written and fed into a CNC milling machine fitted with a forming tool with a hemispherical bottom. The same pre-generated coordinates are used to simulate similar machining conditions in ABAQUS, and the tool forces, stresses and strains in the workpiece during forming are obtained as the output data. The forces were recorded experimentally using a dynamometer. The experimental and simulated results were then compared and conclusions were drawn.
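
    The toolpath side of such experiments can be sketched compactly: generate spiral coordinates down a cone and emit them as G01 moves. The geometry and feed values below are invented, not the trapezoid/dome/ellipse programs of the study.

      import numpy as np

      def spiral_cone_path(r_top=40.0, wall_angle_deg=60.0, depth=20.0,
                           pitch=0.5, pts_per_rev=180):
          """Return (x, y, z) points of a spiral toolpath down a cone;
          the tool descends `pitch` mm per revolution."""
          turns = depth / pitch
          theta = np.linspace(0, 2 * np.pi * turns, int(pts_per_rev * turns))
          z = -pitch * theta / (2 * np.pi)        # steady downward feed
          # Radius shrinks with depth according to the wall angle (z < 0).
          r = r_top + z / np.tan(np.radians(wall_angle_deg))
          return np.column_stack([r * np.cos(theta), r * np.sin(theta), z])

      path = spiral_cone_path()
      for x, y, z in path[:3]:                    # emit a few G-code lines
          print(f"G01 X{x:.3f} Y{y:.3f} Z{z:.3f} F1000")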

  6. Mutual-Information-Based Incremental Relaying Communications for Wireless Biomedical Implant Systems

    Directory of Open Access Journals (Sweden)

    Yangzhe Liao

    2018-02-01

    Full Text Available Network lifetime maximization of wireless biomedical implant systems is one of the major research challenges in wireless body area networks (WBANs). In this paper, a mutual information (MI)-based incremental relaying communication protocol is presented, in which several on-body relay nodes and one coordinator are attached to the clothes of a patient. Firstly, a comprehensive analysis of the system model is presented in terms of channel path loss, energy consumption, and outage probability from the network perspective. Secondly, data transmission is allowed only when the MI value falls below the predetermined threshold. The communication path can be either from the implanted sensor to an on-body relay and then on to the coordinator, or from the implanted sensor to the coordinator directly, depending on the communication distance. Moreover, mathematical models of quality of service (QoS) metrics are derived along with the related objective functions. The results show that the MI-based incremental relaying technique achieves better performance than our previously proposed protocol techniques on several selected performance metrics. The outcome of this paper can be applied to intra-body continuous physiological signal monitoring, artificial biofeedback-oriented WBANs, and telemedicine system design.

  7. Relationship between skin temperature and muscle activation during incremental cycle exercise.

    Science.gov (United States)

    Priego Quesada, Jose I; Carpes, Felipe P; Bini, Rodrigo R; Salvador Palmer, Rosario; Pérez-Soriano, Pedro; Cibrián Ortiz de Anda, Rosa M

    2015-02-01

    While different studies have shown that better fitness levels add to the efficiency of the thermoregulatory system, the relationship between muscular effort and skin temperature is still unknown. Therefore, the present study assessed the relationship between neuromuscular activation and skin temperature during cycle exercise. Ten physically active participants performed an incremental workload cycling test to exhaustion while neuromuscular activations were recorded (via surface electromyography, EMG) from the rectus femoris, vastus lateralis, biceps femoris and gastrocnemius medialis. Thermographic images were recorded before, immediately after and 10 min after finishing the cycling test, at four body regions of interest corresponding to the muscles where neuromuscular activations were monitored. Frequency band analysis was conducted to assess the spectral properties of the EMG signals in order to infer the priority in recruitment of motor units. A significant inverse relationship between changes in skin temperature and changes in overall neuromuscular activation was observed for the vastus lateralis (r = -0.7 and p < 0.01). Participants with larger overall activation and a reduced low-frequency component of vastus lateralis activation presented a better adaptive response of their thermoregulatory system, showing smaller changes in skin temperature after the incremental cycling test.

  8. A numerical analysis on forming limits during spiral and concentric single point incremental forming

    Science.gov (United States)

    Gipiela, M. L.; Amauri, V.; Nikhare, C.; Marcondes, P. V. P.

    2017-01-01

    Sheet metal forming is one of the major manufacturing industries, building numerous parts for the aerospace, automotive and medical industries. Driven by high demand in the vehicle industry on the one hand and environmental regulations on fuel consumption on the other, researchers are devising energy-efficient sheet metal forming processes that replace the conventional punch and die in order to achieve lightweight parts. One of the most recognized manufacturing processes in this category is Single Point Incremental Forming (SPIF). SPIF is a die-less sheet metal forming process in which a single-point tool incrementally forces one point of the sheet metal at a time into the plastic deformation zone. In the present work, the finite element method (FEM) is applied to analyze the forming limits of a high-strength low-alloy steel formed by single point incremental forming (SPIF) with spiral and concentric tool paths. SPIF numerical simulations were modelled with 24 and 29 mm cup depths, and the results were compared with Nakajima results obtained by experiments and FEM. It was found that the cup formed with the Nakajima tool failed at 24 mm, while the cups formed by SPIF surpassed this limit at both depths with both tool-path profiles. It was also noticed that the strains achieved with the concentric profile are lower than those with the spiral profile.

  9. Cyclic and seasonal features in the behaviour of linear growth increment of Rayleigh-Taylor instability in equatorial F-region

    International Nuclear Information System (INIS)

    Farkullin, M.N.; Nikitin, M.A.; Kashchenko, N.M.

    1989-01-01

    Calculations of the linear increment of the Rayleigh-Taylor instability for various geophysical conditions are presented. It is shown that the space-time characteristics of the increment depend strongly on solar activity conditions and seasons. The calculated results are in good agreement with the statistical regularities of F-scattering observations in the equatorial F-region, which points to the Rayleigh-Taylor nature of the phenomena.

  10. Are Fearless Dominance Traits Superfluous in Operationalizing Psychopathy? Incremental Validity and Sex Differences

    Science.gov (United States)

    Murphy, Brett; Lilienfeld, Scott; Skeem, Jennifer; Edens, John

    2016-01-01

    Researchers are vigorously debating whether psychopathic personality includes seemingly adaptive traits, especially social and physical boldness. In a large sample (N=1565) of adult offenders, we examined the incremental validity of two operationalizations of boldness (Fearless Dominance traits in the Psychopathy Personality Inventory, Lilienfeld & Andrews, 1996; Boldness traits in the Triarchic Model of Psychopathy, Patrick et al, 2009), above and beyond other characteristics of psychopathy, in statistically predicting scores on four psychopathy-related measures, including the Psychopathy Checklist-Revised (PCL-R). The incremental validity added by boldness traits in predicting the PCL-R’s representation of psychopathy was especially pronounced for interpersonal traits (e.g., superficial charm, deceitfulness). Our analyses, however, revealed unexpected sex differences in the relevance of these traits to psychopathy, with boldness traits exhibiting reduced importance for psychopathy in women. We discuss the implications of these findings for measurement models of psychopathy. PMID:26866795

  11. Is incremental hemodialysis ready to return on the scene? From empiricism to kinetic modelling.

    Science.gov (United States)

    Basile, Carlo; Casino, Francesco Gaetano; Kalantar-Zadeh, Kamyar

    2017-08-01

    Most people who make the transition to maintenance dialysis therapy are treated with a fixed dose thrice-weekly hemodialysis regimen without considering their residual kidney function (RKF). The RKF provides effective and naturally continuous clearance of both small and middle molecules, plays a major role in metabolic homeostasis, nutritional status, and cardiovascular health, and aids in fluid management. The RKF is associated with better patient survival and greater health-related quality of life, although these effects may be confounded by patient comorbidities. Preservation of the RKF requires a careful approach, including regular monitoring, avoidance of nephrotoxins, gentle control of blood pressure to avoid intradialytic hypotension, and an individualized dialysis prescription including the consideration of incremental hemodialysis. There is currently no standardized method for applying incremental hemodialysis in practice. Infrequent (once- to twice-weekly) hemodialysis regimens are often used arbitrarily, without knowing which patients would benefit the most from them or how to escalate the dialysis dose as RKF declines over time. The recently heightened interest in incremental hemodialysis has been hindered by the current limitations of the urea kinetic models (UKM) which tend to overestimate the dialysis dose required in the presence of substantial RKF. This is due to an erroneous extrapolation of the equivalence between renal urea clearance (Kru) and dialyser urea clearance (Kd), correctly assumed by the UKM, to the clinical domain. In this context, each ml/min of Kd clears the urea from the blood just as 1 ml/min of Kru does. By no means should such kinetic equivalence imply that 1 ml/min of Kd is clinically equivalent to 1 ml/min of urea clearance provided by the native kidneys. A recent paper by Casino and Basile suggested a variable target model (VTM) as opposed to the fixed model, because the VTM gives more clinical weight to the RKF and allows

  12. Optimal Output of Distributed Generation Based On Complex Power Increment

    Science.gov (United States)

    Wu, D.; Bao, H.

    2017-12-01

    In order to meet the growing demand for electricity and improve the cleanliness of power generation, new energy generation, represented by wind and photovoltaic power, has been widely deployed. New energy sources access the distribution network in the form of distributed generation and are consumed by local load. However, as the scale of distributed generation connected to the network increases, the optimization of its power output becomes more and more prominent and needs further study. Classical optimization methods often use the extended sensitivity method to obtain the relationship between different generators, but ignoring the coupling parameters between nodes makes the results inaccurate; heuristic algorithms also have defects such as slow calculation speed and uncertain outcomes. This article proposes a method called complex power increment, whose essence is the analysis of the power grid under steady power flow. From this analysis we obtain the complex scaling function equation between the power supplies; the coefficients of the equation are based on the impedance parameters of the network, so the relation between the variables and the coefficients is described more precisely. Thus, the method can accurately describe the power increment relationship and can obtain the power optimization scheme more accurately and quickly than the extended sensitivity method and heuristic methods.

  13. How to Understand Incrementalism?: Politics of Charles Lindblom’s Theory

    Directory of Open Access Journals (Sweden)

    Krešimir Petković

    2007-01-01

    Full Text Available The paper is dedicated to the political process theory by the American political scientist and economist Charles E. Lindblom. After providing a contextual insight into Lindblom’s complete theoretical opus, which is a necessary prerequisite for the interpretative manoeuvre in the central part of the text, the paper is primarily focused on Lindblom’s theory of incremental decision-making, developed in The Science of Muddling Through (1959 and in A Strategy of Decision (1963, which is related to his concept of “partisan mutual adjustment” developed in The Intelligence of Democracy (1965. The paper offers an interpretation of Lindblom’s argument which moves away from its past understanding in Croatian political science literature. There, Lindblom’s decision-making model has been basically interpreted descriptively, as a description of the actual decision-making practices, and opposed to the prescriptive rational decision-making model, which is a characteristic feature even of some foreign interpretations. This paper, however, suggests that Lindblom’s theory contains a strong prescriptive element. Lindblom’s theory of incrementalism, taken together with the pluralist model of partisan mutual adjustment, offers a complete and consistent model of politics with marked normative implications, which justifies the use of the syntagm the politics of theory, substantiated in greater detail in the final section of the paper.

  14. Incremental-hinge piping analysis methods for inelastic seismic response prediction

    International Nuclear Information System (INIS)

    Jaquay, K.R.; Castle, W.R.; Larson, J.E.

    1989-01-01

    This paper proposes nonlinear seismic response prediction methods for nuclear piping systems based on simplified plastic hinge analyses. The simplified plastic hinge analyses utilize an incremental series of flat response spectrum loadings and replace yielded components with hinge elements when a predefined hinge moment is reached. These hinge moment values, developed by Rodabaugh, result in inelastic energy dissipation of the same magnitude as observed in seismic tests of piping components. Two definitions of design level equivalent loads are employed: one conservatively based on the peaks of the design acceleration response spectra, the other based on inelastic frequencies determined by the method of Krylov and Bogolyuboff recently extended by Lazzeri to piping. Both definitions account for piping system inelastic energy dissipation using Newmark-Hall inelastic response spectrum reduction factors and the displacement ductility results of the incremental-hinge analysis. Two ratchet-fatigue damage models are used: one developed by Rodabaugh that conservatively correlates Markl static fatigue expressions to seismic tests to failure of piping components; the other developed by Severud that uses the ratchet expression of Bree for elbows and Edmunds and Beer for straights, and defines ratchet-fatigue interaction using Coffin's ductility based fatigue equation. Comparisons of predicted behavior versus experimental results are provided for a high-level seismic test of a segment of a representative nuclear plant piping system. (orig.)

  15. The Effects of Cells Temperature Increment and Variations of Irradiation for Monocrystalline Photovoltaic

    Science.gov (United States)

    Fuad Rahman Soeharto, Faishal; Hermawan

    2017-04-01

    Photovoltaic cell technology has been developed to meet the target of 17% renewable energy by 2025, in accordance with Indonesian Government Regulation No. 5 of 2006. Photovoltaic cells are made of semiconductor materials, namely silicon or germanium (p-n junction). These cells need light, which comes from solar irradiation and carries photon energy, to convert light energy into electrical energy. This differs from a solar heater, which requires the heat or thermal energy of sunlight, normally used for drying or heating water. The photon energy is taken from sunlight, which also brings heat due to black-body radiation and can lead to temperature increments in the photovoltaic cells. An increment of 1 °C can decrease the photovoltaic cell voltage by up to 2.3 mV per cell. In this research, the effects of rising temperature and variations of irradiation on a monocrystalline photovoltaic cell are analyzed, simulated and verified experimentally using an experimental module. The test results show that increasing the cell temperature from 25 °C to 80 °C decreases the output voltage of the photovoltaic cell by 4.21 V and reduces the output power by up to 0.7523 W. In addition, larger irradiation received by the cell, up to 1000 W/m2, produces more output power at the same temperature.
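
    A back-of-the-envelope check ties the two reported figures together: at 2.3 mV/°C per cell over a 55 °C rise, the measured 4.21 V drop implies roughly 33 series-connected cells. The cell count is an inference, not stated in the abstract.

      # Sanity check of the reported temperature effect (illustrative only).
      dv_per_cell = 2.3e-3 * (80 - 25)      # 0.1265 V drop per cell
      cells = 4.21 / dv_per_cell            # implied number of series cells
      print(f"voltage drop per cell: {dv_per_cell * 1000:.1f} mV")
      print(f"implied series cells:  {cells:.1f}")   # ~33 cells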

  16. Incremental Knowledge Acquisition for WSD: A Rough Set and IL based Method

    Directory of Open Access Journals (Sweden)

    Xu Huang

    2015-07-01

    Full Text Available Word sense disambiguation (WSD) is one of the tricky tasks in natural language processing (NLP), as it needs to take into full account all the complexities of language. Because WSD involves discovering semantic structures in unstructured text, automatic knowledge acquisition of word senses is profoundly difficult. To acquire knowledge about Chinese multi-sense verbs, we introduce an incremental machine learning method which combines the rough set method with instance-based learning. First, the context of a multi-sense verb is extracted into a table; its sense is annotated by a skilled human and stored in the same table. In this way a decision table is formed, and rules can then be extracted within the framework of attribute-value reduction of rough sets. Instances not entailed by any rule are treated as outliers. When new instances are added to the decision table, only the newly added instances and the outliers need to be learned further; thus incremental learning is achieved. Experiments show that the scale of the decision table can be reduced dramatically by this method without performance decline.
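
    A toy sketch of the decision-table flow described above: contexts of a multi-sense verb are stored with annotated senses, unambiguous rows become exact-match rules, ambiguous rows remain outliers, and an incremental step relearns only new rows plus outliers. Feature names and senses are invented, and this is instance-level matching, not full rough-set attribute-value reduction.

      from collections import defaultdict

      # Rows: (context features, human-annotated sense). Invented examples.
      table = [({"subj": "person", "obj": "ball"}, "throw_physical"),
               ({"subj": "person", "obj": "party"}, "throw_host"),
               ({"subj": "person", "obj": "ball"}, "throw_physical")]

      def learn(rows):
          """Contexts with a unique sense become rules; the rest are outliers."""
          votes = defaultdict(set)
          for feats, sense in rows:
              votes[tuple(sorted(feats.items()))].add(sense)
          rules = {k: next(iter(s)) for k, s in votes.items() if len(s) == 1}
          outliers = [r for r in rows
                      if tuple(sorted(r[0].items())) not in rules]
          return rules, outliers

      rules, outliers = learn(table)
      new_rows = [({"subj": "person", "obj": "glance"}, "throw_cast")]
      # Incremental step: relearn only outliers + new rows, keep old rules.
      new_rules, outliers = learn(outliers + new_rows)
      rules.update(new_rules)
      print(len(rules), "rules,", len(outliers), "outliers")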

  17. Electrical protections in DIgSILENT PowerFactory: models from Spanish manufacturers (II)

    OpenAIRE

    Recas Meirinho, Antonio

    2012-01-01

    The objective of this project was to incorporate two relays from the Spanish manufacturer ZIV, specifically the IDN and IDV relays, into the library of the PowerFactory program. The main source of information about the relays was the protection's instruction manual, written by the company that markets the protection itself, although throughout the development of the project it was necessary to seek other alternative sources of information from the ZIV company. ...

  18. Performance and delay analysis of hybrid ARQ with incremental redundancy over double rayleigh fading channels

    KAUST Repository

    Chelli, Ali

    2014-11-01

    In this paper, we study the performance of hybrid automatic repeat request (HARQ) with incremental redundancy over double Rayleigh channels, a common model for the fading amplitude of vehicle-to-vehicle communication systems. We investigate the performance of HARQ from an information-theoretic perspective. Analytical expressions are derived for the ε-outage capacity, the average number of transmissions, and the average transmission rate of HARQ with incremental redundancy, assuming a maximum number of HARQ rounds. Moreover, we evaluate the delay experienced by Poisson-arriving packets for HARQ with incremental redundancy. We provide analytical expressions for the expected waiting time, the packet's sojourn time in the queue, the average consumed power, and the energy efficiency. In our study, the communication rate per HARQ round is adjusted to the average signal-to-noise ratio (SNR) such that a target outage probability is not exceeded. This setting conforms with communication systems in which a quality of service is expected regardless of the channel conditions. Our analysis underscores the importance of HARQ in improving the spectral efficiency and reliability of communication systems. We demonstrate as well that the explored HARQ scheme achieves full diversity. Additionally, we investigate the tradeoff between energy efficiency and spectral efficiency.
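
    The incremental-redundancy mechanism is straightforward to check by Monte Carlo: per-round mutual information log2(1 + SNR·h²) accumulates until it reaches the rate R, and outage is declared if it never does within M rounds. The double Rayleigh amplitude is modelled as a product of two Rayleigh factors; all parameter values are illustrative, not the paper's settings.

      import numpy as np

      rng = np.random.default_rng(3)

      def outage_prob(snr_db=10.0, R=2.0, M=4, trials=100_000):
          """Empirical outage probability of HARQ-IR over a double
          Rayleigh channel with at most M rounds at rate R bits/s/Hz."""
          snr = 10 ** (snr_db / 10)
          outages = 0
          for _ in range(trials):
              acc = 0.0
              for _ in range(M):
                  # Double Rayleigh: product of two unit-power Rayleigh factors.
                  h = rng.rayleigh(1 / np.sqrt(2)) * rng.rayleigh(1 / np.sqrt(2))
                  acc += np.log2(1 + snr * h**2)   # IR: information adds up
                  if acc >= R:
                      break
              outages += acc < R
          return outages / trials

      print("outage probability:", outage_prob())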

  19. Transferring the Incremental Capacity Analysis to Lithium-Sulfur Batteries

    DEFF Research Database (Denmark)

    Knap, Vaclav; Kalogiannis, Theodoros; Purkayastha, Rajlakshmi

    2017-01-01

    In order to investigate the battery degradation and to estimate their health, various techniques can be applied. One of them, which is widely used for Lithium-ion batteries, is the incremental capacity analysis (ICA). In this work, we apply the ICA to Lithium-Sulfur batteries, which differ in many...... aspects from Lithium-ion batteries and possess unique behavior. One of the challenges of applying the ICA to Lithium-Sulfur batteries is the representation of the IC curves, as their voltage profiles are often non-monotonic, resulting in more complex IC curves. The ICA is at first applied to charge...
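
    Computing an IC curve from charge data is itself a short exercise: resample capacity on a uniform voltage grid and differentiate. The synthetic profile below is kept monotonic for simplicity; real Li-S voltage profiles are non-monotonic, which is exactly why the IC-curve representation needs care, and they would have to be split into monotonic segments first.

      import numpy as np

      def ic_curve(capacity_ah, voltage_v, dv=0.005):
          """Incremental capacity dQ/dV from a monotonic charge curve:
          resample Q on a uniform V grid, then difference."""
          order = np.argsort(voltage_v)
          v_grid = np.arange(voltage_v.min(), voltage_v.max(), dv)
          q_on_grid = np.interp(v_grid, voltage_v[order], capacity_ah[order])
          return v_grid[:-1] + dv / 2, np.diff(q_on_grid) / dv

      q = np.linspace(0, 3.0, 500)                     # Ah charged
      v = 2.1 + 0.1 * q + 0.02 * np.sin(4 * q)         # synthetic voltage
      v_mid, dqdv = ic_curve(q, v)
      print("peak dQ/dV:", float(dqdv.max()), "Ah/V")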

  20. Complex Physiological Response of Norway Spruce to Atmospheric Pollution - Decreased Carbon Isotope Discrimination and Unchanged Tree Biomass Increment.

    Science.gov (United States)

    Čada, Vojtěch; Šantrůčková, Hana; Šantrůček, Jiří; Kubištová, Lenka; Seedre, Meelis; Svoboda, Miroslav

    2016-01-01

    Atmospheric pollution critically affects forest ecosystems around the world by directly impacting the assimilation apparatus of trees and indirectly by altering soil conditions, which subsequently also leads to changes in carbon cycling. To evaluate the extent of the physiological effect of moderate-level sulfate and reactive nitrogen acidic deposition, we performed a retrospective dendrochronological analysis of several physiological parameters derived from periodic measurements of carbon stable isotope composition (¹³C discrimination, intercellular CO2 concentration and intrinsic water use efficiency) and annual diameter increments (tree biomass increment, its inter-annual variability and correlation with temperature, cloud cover, precipitation and the Palmer drought severity index). The analysis was performed in two mountain Norway spruce (Picea abies) stands of the Bohemian Forest (Czech Republic, central Europe), where moderate levels of pollution peaked in the 1970s and 1980s and no evident impact on tree growth or link to mortality has been reported. The significant influence of pollution on trees was expressed most sensitively by a 1.88‰ reduction in carbon isotope discrimination (Δ¹³C). The effects of atmospheric pollution interacted with increasing atmospheric CO2 concentration and temperature. As a result, we observed no change in intercellular CO2 concentrations (Ci), an abrupt increase in water use efficiency (iWUE) and no change in biomass increment, which could also partly result from changes in carbon partitioning (e.g., from below- to above-ground). The biomass increment was significantly related to Δ¹³C at the individual-tree level, but the relationship was lost during the pollution period. We suggest that this was caused by a shift from the dominant influence of the photosynthetic rate to that of stomatal conductance on Δ¹³C during the pollution period. Using biomass increment-climate correlation analyses, we did not identify any clear pollution...