WorldWideScience

Sample records for preprocessing step called

  1. Discrete pre-processing step effects in registration-based pipelines, a preliminary volumetric study on T1-weighted images.

    Science.gov (United States)

    Muncy, Nathan M; Hedges-Muncy, Ariana M; Kirwan, C Brock

    2017-01-01

    Pre-processing MRI scans prior to performing volumetric analyses is common practice in MRI studies. As pre-processing steps adjust the voxel intensities, the space in which the scan exists, and the amount of data in the scan, it is possible that these steps affect the volumetric output. To date, studies have compared between and not within pipelines, and so the impact of each step is unknown. This study aims to quantify the effects of pre-processing steps on volumetric measures in T1-weighted scans within a single pipeline. It was our hypothesis that pre-processing steps would significantly impact ROI volume estimations. One hundred fifteen participants from the OASIS dataset were used, each contributing three scans. All scans were then pre-processed using a step-wise pipeline. Bilateral hippocampus, putamen, and middle temporal gyrus volume estimations were assessed following each successive step, and all data were processed by the same pipeline 5 times. Repeated-measures analyses tested for main effects of pipeline step, scan-rescan (for MRI scanner consistency), and repeated pipeline runs (for algorithmic consistency). A main effect of pipeline step was detected, as well as an interaction between pipeline step and ROI. No effect of either scan-rescan or repeated pipeline run was detected. We then supply a correction for the noise introduced into the data by pre-processing.
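
    As an illustrative aside, the repeated-measures design described above could be tested in a few lines of Python; the file and column names below are hypothetical, not taken from the paper:

    ```python
    # Sketch: test for a main effect of pipeline step on ROI volume estimates.
    # Assumes a long-format table with columns subject, step, roi, volume.
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    df = pd.read_csv("roi_volumes.csv")                    # hypothetical export
    hippo = df[df["roi"] == "hippocampus"]
    result = AnovaRM(hippo, depvar="volume",
                     subject="subject", within=["step"]).fit()
    print(result)                                          # F-test for the step factor
    ```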

  2. The Influence of Preprocessing Steps on Graph Theory Measures Derived from Resting State fMRI.

    Science.gov (United States)

    Gargouri, Fatma; Kallel, Fathi; Delphine, Sebastien; Ben Hamida, Ahmed; Lehéricy, Stéphane; Valabregue, Romain

    2018-01-01

    Resting state functional MRI (rs-fMRI) is an imaging technique that allows the spontaneous activity of the brain to be measured. Measures of functional connectivity highly depend on the quality of the BOLD signal data processing. In this study, our aim was to study the influence of preprocessing steps and their order of application on small-world topology and their efficiency in resting state fMRI data analysis using graph theory. We applied the most standard preprocessing steps: slice-timing, realign, smoothing, filtering, and the tCompCor method. In particular, we were interested in how preprocessing can retain the small-world economic properties and how to maximize the local and global efficiency of a network while minimizing the cost. Tests that we conducted in 54 healthy subjects showed that the choice and ordering of preprocessing steps impacted the graph measures. We found that the csr (where we applied realignment, smoothing, and tCompCor as a final step) and the scr (where we applied realignment, tCompCor and smoothing as a final step) strategies had the highest mean values of global efficiency (eg). Furthermore, we found that the fscr strategy (where we applied realignment, tCompCor, smoothing, and filtering as a final step) had the highest mean local efficiency (el) values. These results confirm that the graph theory measures of functional connectivity depend on the ordering of the processing steps, with the best results being obtained using smoothing and tCompCor as the final steps for global efficiency, with additional filtering for local efficiency.
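
    For readers unfamiliar with the graph measures involved, the sketch below shows how global and local efficiency could be computed from ROI time series with networkx; the thresholding scheme and array shapes are illustrative assumptions, not the authors' exact procedure:

    ```python
    # Sketch: build a binary graph from ROI time series and compute efficiency.
    import numpy as np
    import networkx as nx

    def efficiencies(ts, threshold=0.3):
        """ts: (n_rois, n_timepoints) array from one preprocessing strategy."""
        corr = np.corrcoef(ts)                        # ROI-by-ROI correlation
        adj = (np.abs(corr) > threshold).astype(int)  # simple absolute threshold
        np.fill_diagonal(adj, 0)
        g = nx.from_numpy_array(adj)
        return nx.global_efficiency(g), nx.local_efficiency(g)

    # Comparing strategies then amounts to calling efficiencies() on the time
    # series produced by each preprocessing order (e.g., csr vs. scr).
    ```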

  3. The Influence of Preprocessing Steps on Graph Theory Measures Derived from Resting State fMRI

    Directory of Open Access Journals (Sweden)

    Fatma Gargouri

    2018-02-01

    Full Text Available Resting state functional MRI (rs-fMRI) is an imaging technique that allows the spontaneous activity of the brain to be measured. Measures of functional connectivity highly depend on the quality of the BOLD signal data processing. In this study, our aim was to study the influence of preprocessing steps and their order of application on small-world topology and their efficiency in resting state fMRI data analysis using graph theory. We applied the most standard preprocessing steps: slice-timing, realign, smoothing, filtering, and the tCompCor method. In particular, we were interested in how preprocessing can retain the small-world economic properties and how to maximize the local and global efficiency of a network while minimizing the cost. Tests that we conducted in 54 healthy subjects showed that the choice and ordering of preprocessing steps impacted the graph measures. We found that the csr (where we applied realignment, smoothing, and tCompCor as a final step) and the scr (where we applied realignment, tCompCor and smoothing as a final step) strategies had the highest mean values of global efficiency (eg). Furthermore, we found that the fscr strategy (where we applied realignment, tCompCor, smoothing, and filtering as a final step) had the highest mean local efficiency (el) values. These results confirm that the graph theory measures of functional connectivity depend on the ordering of the processing steps, with the best results being obtained using smoothing and tCompCor as the final steps for global efficiency, with additional filtering for local efficiency.

  4. The Influence of Preprocessing Steps on Graph Theory Measures Derived from Resting State fMRI

    Science.gov (United States)

    Gargouri, Fatma; Kallel, Fathi; Delphine, Sebastien; Ben Hamida, Ahmed; Lehéricy, Stéphane; Valabregue, Romain

    2018-01-01

    Resting state functional MRI (rs-fMRI) is an imaging technique that allows the spontaneous activity of the brain to be measured. Measures of functional connectivity highly depend on the quality of the BOLD signal data processing. In this study, our aim was to study the influence of preprocessing steps and their order of application on small-world topology and their efficiency in resting state fMRI data analysis using graph theory. We applied the most standard preprocessing steps: slice-timing, realign, smoothing, filtering, and the tCompCor method. In particular, we were interested in how preprocessing can retain the small-world economic properties and how to maximize the local and global efficiency of a network while minimizing the cost. Tests that we conducted in 54 healthy subjects showed that the choice and ordering of preprocessing steps impacted the graph measures. We found that the csr (where we applied realignment, smoothing, and tCompCor as a final step) and the scr (where we applied realignment, tCompCor and smoothing as a final step) strategies had the highest mean values of global efficiency (eg). Furthermore, we found that the fscr strategy (where we applied realignment, tCompCor, smoothing, and filtering as a final step) had the highest mean local efficiency (el) values. These results confirm that the graph theory measures of functional connectivity depend on the ordering of the processing steps, with the best results being obtained using smoothing and tCompCor as the final steps for global efficiency, with additional filtering for local efficiency. PMID:29497372

  5. A Conversation on Data Mining Strategies in LC-MS Untargeted Metabolomics: Pre-Processing and Pre-Treatment Steps

    Directory of Open Access Journals (Sweden)

    Fidele Tugizimana

    2016-11-01

    Full Text Available Untargeted metabolomic studies generate information-rich, high-dimensional, and complex datasets that remain challenging to handle and fully exploit. Despite the remarkable progress in the development of tools and algorithms, the “exhaustive” extraction of information from these metabolomic datasets is still a non-trivial undertaking. A conversation on data mining strategies for a maximal information extraction from metabolomic data is needed. Using a liquid chromatography-mass spectrometry (LC-MS)-based untargeted metabolomic dataset, this study explored the influence of collection parameters in the data pre-processing step, scaling and data transformation on the statistical models generated, and feature selection, thereafter. Data obtained in positive mode generated from a LC-MS-based untargeted metabolomic study (sorghum plants responding dynamically to infection by a fungal pathogen) were used. Raw data were pre-processed with MarkerLynxTM software (Waters Corporation, Manchester, UK). Here, two parameters were varied: the intensity threshold (50–100 counts) and the mass tolerance (0.005–0.01 Da). After the pre-processing, the datasets were imported into SIMCA (Umetrics, Umea, Sweden) for more data cleaning and statistical modeling. In addition, different scaling (unit variance, Pareto, etc.) and data transformation (log and power) methods were explored. The results showed that the pre-processing parameters (or algorithms) influence the output dataset with regard to the number of defined features. Furthermore, the study demonstrates that the pre-treatment of data prior to statistical modeling affects the subspace approximation outcome: e.g., the amount of variation in X-data that the model can explain and predict. The pre-processing and pre-treatment steps subsequently influence the number of statistically significant extracted/selected features (variables). Thus, as informed by the results, to maximize the value of untargeted metabolomic data
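
    As a concrete illustration of the pre-treatment options mentioned above, here is a minimal sketch of log transformation and Pareto scaling for a feature matrix; the matrix layout (samples x features) is an assumption:

    ```python
    # Sketch: two common pre-treatment methods for metabolomic feature tables.
    import numpy as np

    def log_transform(X, offset=1.0):
        return np.log10(X + offset)                      # offset guards against log(0)

    def pareto_scale(X):
        centred = X - X.mean(axis=0)
        return centred / np.sqrt(X.std(axis=0, ddof=1))  # sqrt(SD) scaling
    ```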

  6. Conversation on data mining strategies in LC-MS untargeted metabolomics: pre-processing and pre-treatment steps

    CSIR Research Space (South Africa)

    Tugizimana, F

    2016-11-01

    Full Text Available Using a liquid chromatography-mass spectrometry (LC-MS)-based untargeted metabolomic dataset, this study explored the influence of collection parameters in the data pre-processing step, scaling and data transformation on the statistical models generated, and feature selection, thereafter. Data obtained in positive mode...

  7. Data preprocessing in data mining

    CERN Document Server

    García, Salvador; Herrera, Francisco

    2015-01-01

    Data Preprocessing for Data Mining addresses one of the most important issues within the well-known Knowledge Discovery from Data process. Data taken directly from the source will likely have inconsistencies and errors or, most importantly, will not be ready to be considered for a data mining process. Furthermore, the increasing amount of data in recent science, industry and business applications calls for more complex tools to analyze it. Thanks to data preprocessing, it is possible to convert the impossible into possible, adapting the data to fulfill the input demands of each data mining algorithm. Data preprocessing includes the data reduction techniques, which aim at reducing the complexity of the data, detecting or removing irrelevant and noisy elements from the data. This book is intended to review the tasks that fill the gap between the data acquisition from the source and the data mining process. A comprehensive look from a practical point of view, including basic concepts and surveying t...

  8. A synthetic operational account of call-by-need evaluation

    DEFF Research Database (Denmark)

    Zerny, Ian; Danvy, Olivier

    2013-01-01

    . The syntactic theory was initiated by Ariola, Felleisen, Maraist, Odersky and Wadler and is prevalent today to reason equationally about lazy programs, on par with Barendregt et al.'s term graphs. Nobody knows, however, how the theory of call by need compares to the practice of call by need: all that is known...... machine implementing lazy evaluation. The machines are intensionally compatible with extensional reasoning about lazy programs and they are lock-step equivalent. Each machine functionally corresponds to a natural semantics for call by need in the style of Launchbury, though for non-preprocessed λ...
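
    As an illustrative aside (not the machines developed in the paper), call by need can be pictured as call by name plus memoization: a delayed computation is forced at most once and its result cached. A minimal sketch:

    ```python
    # Sketch: a thunk that is evaluated at most once (call-by-need sharing).
    class Thunk:
        def __init__(self, compute):
            self._compute = compute
            self._forced = False
            self._value = None

        def force(self):
            if not self._forced:             # evaluate on first demand only
                self._value = self._compute()
                self._forced = True
                self._compute = None         # drop the closure, as in graph reduction
            return self._value

    t = Thunk(lambda: print("evaluated") or 42)
    t.force(); t.force()                     # "evaluated" is printed only once
    ```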

  9. Practical Secure Computation with Pre-Processing

    DEFF Research Database (Denmark)

    Zakarias, Rasmus Winther

    Secure Multiparty Computation has been divided between protocols best suited for binary circuits and protocols best suited for arithmetic circuits. With their MiniMac protocol in [DZ13], Damgård and Zakarias take an important step towards bridging these worlds with an arithmetic protocol tuned...... space for pre-processing material than computing the non-linear parts online (depends on the quality of circuit of course). Surprisingly, even for our optimized AES-circuit this is not the case. We further improve the design of the pre-processing material and end up with only 10 megabytes of pre...... a protocol for small field arithmetic to do fast large integer multiplications. This is achieved by devising pre-processing material that allows the Toom-Cook multiplication algorithm to run between the parties with linear communication complexity. With this result computation on the CPU by the parties...
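
    To make the role of pre-processing material concrete, here is a hedged sketch of a classic Beaver multiplication triple, the kind of correlated randomness such offline phases produce; this is plain two-party additive sharing with a trusted dealer standing in for the offline phase, and it omits the MACs that SPDZ/MiniMac-style protocols add:

    ```python
    # Sketch: a pre-processed triple (a, b, c = a*b) enables cheap online multiply.
    import random
    P = 2**61 - 1                                    # prime modulus (illustrative)

    def share(x):                                    # additive two-party sharing
        r = random.randrange(P)
        return r, (x - r) % P

    # Offline phase (dealer): generate and share one multiplication triple.
    a, b = random.randrange(P), random.randrange(P)
    a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(a * b % P)

    # Online phase: multiply shared secrets x and y using the triple.
    x0, x1 = share(123); y0, y1 = share(456)
    d = (x0 - a0 + x1 - a1) % P                      # parties open d = x - a
    e = (y0 - b0 + y1 - b1) % P                      # parties open e = y - b
    z0 = (c0 + d * b0 + e * a0 + d * e) % P          # party 0 adds the public d*e
    z1 = (c1 + d * b1 + e * a1) % P
    assert (z0 + z1) % P == 123 * 456 % P            # shares of x*y
    ```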

  10. New indicator for optimal preprocessing and wavelength selection of near-infrared spectra

    NARCIS (Netherlands)

    Skibsted, E. T. S.; Boelens, H. F. M.; Westerhuis, J. A.; Witte, D. T.; Smilde, A. K.

    2004-01-01

    Preprocessing of near-infrared spectra to remove unwanted, i.e., non-related spectral variation and selection of informative wavelengths is considered to be a crucial step prior to the construction of a quantitative calibration model. The standard methodology when comparing various preprocessing
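
    For context, two preprocessing candidates commonly compared for NIR spectra are sketched below, standard normal variate and a Savitzky-Golay derivative; the paper's indicator itself is not reproduced here:

    ```python
    # Sketch: common NIR spectral preprocessing options (one spectrum per row).
    import numpy as np
    from scipy.signal import savgol_filter

    def snv(spectra):
        """Standard normal variate: centre and scale each spectrum individually."""
        mu = spectra.mean(axis=1, keepdims=True)
        sd = spectra.std(axis=1, keepdims=True)
        return (spectra - mu) / sd

    def sg_first_derivative(spectra, window=11, poly=2):
        return savgol_filter(spectra, window, poly, deriv=1, axis=1)
    ```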

  11. Evaluating the impact of image preprocessing on iris segmentation

    Directory of Open Access Journals (Sweden)

    José F. Valencia-Murillo

    2014-08-01

    Full Text Available Segmentation is one of the most important stages in iris recognition systems. In this paper, image preprocessing algorithms are applied in order to evaluate their impact on successful iris segmentation. The preprocessing algorithms are based on histogram adjustment, Gaussian filters and suppression of specular reflections in human eye images. The segmentation method introduced by Masek is applied on 199 images acquired under unconstrained conditions, belonging to the CASIA-irisV3 database, before and after applying the preprocessing algorithms. Then, the impact of the image preprocessing algorithms on the percentage of successful iris segmentation is evaluated by means of a visual inspection of images in order to determine if the circumferences of iris and pupil were detected correctly. An increase from 59% to 73% in the percentage of successful iris segmentation is obtained with an algorithm that combines elimination of specular reflections, followed by a Gaussian filter with a 5x5 kernel. The results highlight the importance of a preprocessing stage as a previous step in order to improve the performance of the edge detection and iris segmentation processes.
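
    A minimal sketch of the best-performing combination reported above, specular-reflection suppression followed by a 5x5 Gaussian filter, using OpenCV; inpainting is used here as a simple stand-in for the reflection-removal step:

    ```python
    # Sketch: suppress specular highlights, then smooth with a 5x5 Gaussian.
    import cv2

    def preprocess_eye(gray):                # gray: 8-bit grayscale eye image
        _, highlights = cv2.threshold(gray, 230, 255, cv2.THRESH_BINARY)
        filled = cv2.inpaint(gray, highlights, inpaintRadius=5,
                             flags=cv2.INPAINT_TELEA)
        return cv2.GaussianBlur(filled, (5, 5), 0)
    ```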

  12. Micro-Analyzer: automatic preprocessing of Affymetrix microarray data.

    Science.gov (United States)

    Guzzi, Pietro Hiram; Cannataro, Mario

    2013-08-01

    A current trend in genomics is the investigation of the cell mechanism using different technologies, in order to explain the relationship among genes, molecular processes and diseases. For instance, the combined use of gene-expression arrays and genomic arrays has been demonstrated as an effective instrument in clinical practice. Consequently, in a single experiment different kinds of microarrays may be used, resulting in the production of different types of binary data (images and textual raw data). The analysis of microarray data requires an initial preprocessing phase that makes raw data suitable for use on existing analysis platforms, such as the TIGR M4 (TM4) Suite. An additional challenge to be faced by emerging data analysis platforms is the ability to treat in a combined way those different microarray formats coupled with clinical data. In fact, resulting integrated data may include both numerical and symbolic data (e.g. gene expression and SNPs regarding molecular data), as well as temporal data (e.g. the response to a drug, time to progression and survival rate), regarding clinical data. Raw data preprocessing is a crucial step in analysis but is often performed in a manual and error-prone way using different software tools. Thus novel, platform-independent, and possibly open-source tools enabling the semi-automatic preprocessing and annotation of different microarray data are needed. The paper presents Micro-Analyzer (Microarray Analyzer), a cross-platform tool for the automatic normalization, summarization and annotation of Affymetrix gene expression and SNP binary data. It represents the evolution of the μ-CS tool, extending the preprocessing to SNP arrays that were not allowed in μ-CS. The Micro-Analyzer is provided as a Java standalone tool and enables users to read, preprocess and analyse binary microarray data (gene expression and SNPs) by invoking the TM4 platform. It avoids: (i) the manual invocation of external tools (e.g. the Affymetrix Power

  13. A New Indicator for Optimal Preprocessing and Wavelengths Selection of Near-Infrared Spectra

    NARCIS (Netherlands)

    Skibsted, E.; Boelens, H.F.M.; Westerhuis, J.A.; Witte, D.T.; Smilde, A.K.

    2004-01-01

    Preprocessing of near-infrared spectra to remove unwanted, i.e., non-related spectral variation and selection of informative wavelengths is considered to be a crucial step prior to the construction of a quantitative calibration model. The standard methodology when comparing various preprocessing

  14. On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery.

    Science.gov (United States)

    Qi, Baogui; Shi, Hao; Zhuang, Yin; Chen, He; Chen, Liang

    2018-04-25

    With the development of remote-sensing technology, optical remote-sensing imagery processing has played an important role in many application fields, such as geological exploration and natural disaster prevention. However, relative radiation correction and geometric correction are key steps in preprocessing because raw image data without preprocessing will cause poor performance during application. Traditionally, remote-sensing data are downlinked to the ground station, preprocessed, and distributed to users. This process generates long delays, which is a major bottleneck in real-time applications for remote-sensing data. Therefore, on-board, real-time image preprocessing is greatly desired. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which can effectively reduce the computation burden of on-board processing. Second, a co-processing system using a field-programmable gate array (FPGA) and a digital signal processor (DSP; altogether, FPGA-DSP) based on optimization is designed to realize real-time preprocessing. The experimental results demonstrate the potential application of our system to an on-board processor, for which resources and power consumption are limited.

  15. On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery

    Science.gov (United States)

    Qi, Baogui; Zhuang, Yin; Chen, He; Chen, Liang

    2018-01-01

    With the development of remote-sensing technology, optical remote-sensing imagery processing has played an important role in many application fields, such as geological exploration and natural disaster prevention. However, relative radiation correction and geometric correction are key steps in preprocessing because raw image data without preprocessing will cause poor performance during application. Traditionally, remote-sensing data are downlinked to the ground station, preprocessed, and distributed to users. This process generates long delays, which is a major bottleneck in real-time applications for remote-sensing data. Therefore, on-board, real-time image preprocessing is greatly desired. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which can effectively reduce the computation burden of on-board processing. Second, a co-processing system using a field-programmable gate array (FPGA) and a digital signal processor (DSP; altogether, FPGA-DSP) based on optimization is designed to realize real-time preprocessing. The experimental results demonstrate the potential application of our system to an on-board processor, for which resources and power consumption are limited. PMID:29693585

  16. Reliable RANSAC Using a Novel Preprocessing Model

    Directory of Open Access Journals (Sweden)

    Xiaoyan Wang

    2013-01-01

    Full Text Available Geometric assumption and verification with RANSAC has become a crucial step in establishing correspondences between local features, owing to its wide applications in biomedical feature analysis and vision computing. However, conventional RANSAC is very time-consuming due to redundant sampling times, especially when dealing with numerous matching pairs. This paper presents a novel preprocessing model to explore a reduced set with reliable correspondences from the initial matching dataset. Both geometric model generation and verification are carried out on this reduced set, which leads to considerable speedups. Afterwards, this paper proposes a reliable RANSAC framework using the preprocessing model, which was implemented and verified using Harris and SIFT features, respectively. Compared with traditional RANSAC, experimental results show that our method is more efficient.
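
    The general idea can be sketched as pruning the initial match set to reliable correspondences before RANSAC, so that far fewer sampling iterations are needed; the ratio test below is an illustrative reliability filter, not the paper's specific preprocessing model:

    ```python
    # Sketch: filter matches before geometric verification with RANSAC.
    import cv2

    def reliable_matches(desc1, desc2, ratio=0.75):
        knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(desc1, desc2, k=2)
        return [pair[0] for pair in knn
                if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]

    # RANSAC then runs on the reduced set, e.g.:
    # H, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    ```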

  17. Development and integration of block operations for data invariant automation of digital preprocessing and analysis of biological and biomedical Raman spectra.

    Science.gov (United States)

    Schulze, H Georg; Turner, Robin F B

    2015-06-01

    High-throughput information extraction from large numbers of Raman spectra is becoming an increasingly taxing problem due to the proliferation of new applications enabled using advances in instrumentation. Fortunately, in many of these applications, the entire process can be automated, yielding reproducibly good results with significant time and cost savings. Information extraction consists of two stages, preprocessing and analysis. We focus here on the preprocessing stage, which typically involves several steps, such as calibration, background subtraction, baseline flattening, artifact removal, smoothing, and so on, before the resulting spectra can be further analyzed. Because the results of some of these steps can affect the performance of subsequent ones, attention must be given to the sequencing of steps, the compatibility of these sequences, and the propensity of each step to generate spectral distortions. We outline here important considerations to effect full automation of Raman spectral preprocessing: what is considered full automation; putative general principles to effect full automation; the proper sequencing of processing and analysis steps; conflicts and circularities arising from sequencing; and the need for, and approaches to, preprocessing quality control. These considerations are discussed and illustrated with biological and biomedical examples reflecting both successful and faulty preprocessing.
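
    One way to make such sequencing explicit and auditable is to express each operation as a function and the pipeline as an ordered list; the steps below are crude placeholders, not the authors' algorithms:

    ```python
    # Sketch: an explicit, reorderable preprocessing pipeline for spectra.
    import numpy as np

    def flatten_baseline(s):
        x = np.arange(len(s))
        return s - np.polyval(np.polyfit(x, s, 3), x)   # crude polynomial baseline

    def smooth(s):
        return np.convolve(s, np.ones(5) / 5, mode="same")

    PIPELINE = [flatten_baseline, smooth]               # the order is the point

    def preprocess(spectrum):
        for step in PIPELINE:
            spectrum = step(spectrum)
        return spectrum
    ```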

  18. An Automated, Adaptive Framework for Optimizing Preprocessing Pipelines in Task-Based Functional MRI.

    Directory of Open Access Journals (Sweden)

    Nathan W Churchill

    Full Text Available BOLD fMRI is sensitive to blood-oxygenation changes correlated with brain function; however, it is limited by relatively weak signal and significant noise confounds. Many preprocessing algorithms have been developed to control noise and improve signal detection in fMRI. Although the chosen set of preprocessing and analysis steps (the "pipeline") significantly affects signal detection, pipelines are rarely quantitatively validated in the neuroimaging literature, due to complex preprocessing interactions. This paper outlines and validates an adaptive resampling framework for evaluating and optimizing preprocessing choices by optimizing data-driven metrics of task prediction and spatial reproducibility. Compared to standard "fixed" preprocessing pipelines, this optimization approach significantly improves independent validation measures of within-subject test-retest reliability, between-subject activation overlap, and behavioural prediction accuracy. We demonstrate that preprocessing choices function as implicit model regularizers, and that improvements due to pipeline optimization generalize across a range of simple to complex experimental tasks and analysis models. Results are shown for brief scanning sessions (<3 minutes each), demonstrating that with pipeline optimization, it is possible to obtain reliable results and brain-behaviour correlations in relatively small datasets.

  19. Optimization of miRNA-seq data preprocessing.

    Science.gov (United States)

    Tam, Shirley; Tsao, Ming-Sound; McPherson, John D

    2015-11-01

    The past two decades of microRNA (miRNA) research has solidified the role of these small non-coding RNAs as key regulators of many biological processes and promising biomarkers for disease. The concurrent development in high-throughput profiling technology has further advanced our understanding of the impact of their dysregulation on a global scale. Currently, next-generation sequencing is the platform of choice for the discovery and quantification of miRNAs. Despite this, there is no clear consensus on how the data should be preprocessed before conducting downstream analyses. Often overlooked, data preprocessing is an essential step in data analysis: the presence of unreliable features and noise can affect the conclusions drawn from downstream analyses. Using a spike-in dilution study, we evaluated the effects of several general-purpose aligners (BWA, Bowtie, Bowtie 2 and Novoalign), and normalization methods (counts-per-million, total count scaling, upper quartile scaling, Trimmed Mean of M, DESeq, linear regression, cyclic loess and quantile) with respect to the final miRNA count data distribution, variance, bias and accuracy of differential expression analysis. We make practical recommendations on the optimal preprocessing methods for the extraction and interpretation of miRNA count data from small RNA-sequencing experiments. © The Author 2015. Published by Oxford University Press.
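
    Two of the simpler normalization methods compared above can be sketched directly for a counts matrix; the layout (features x samples) is an assumption:

    ```python
    # Sketch: counts-per-million and upper-quartile normalization.
    import numpy as np

    def cpm(counts):
        return counts / counts.sum(axis=0) * 1e6        # per-sample library size

    def upper_quartile(counts):
        expressed = counts[counts.sum(axis=1) > 0]      # drop all-zero features
        uq = np.percentile(expressed, 75, axis=0)       # per-sample upper quartile
        return counts / uq * uq.mean()
    ```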

  20. A Step-indexed Semantic Model of Types for the Call-by-Name Lambda Calculus

    OpenAIRE

    Meurer, Benedikt

    2011-01-01

    Step-indexed semantic models of types were proposed as an alternative to purely syntactic safety proofs using subject-reduction. Building upon the work by Appel and others, we introduce a generalized step-indexed model for the call-by-name lambda calculus. We also show how to prove type safety of general recursion in our call-by-name model.

  1. Preprocessing for Optimization of Probabilistic-Logic Models for Sequence Analysis

    DEFF Research Database (Denmark)

    Christiansen, Henning; Lassen, Ole Torp

    2009-01-01

    and approximation are needed. The first steps are taken towards a methodology for optimizing such models by approximations using auxiliary models for preprocessing or splitting them into submodels. Evaluation of such approximating models is challenging as authoritative test data may be sparse. On the other hand...

  2. Optimal preprocessing of serum and urine metabolomic data fusion for staging prostate cancer through design of experiment

    International Nuclear Information System (INIS)

    Zheng, Hong; Cai, Aimin; Zhou, Qi; Xu, Pengtao; Zhao, Liangcai; Li, Chen; Dong, Baijun; Gao, Hongchang

    2017-01-01

    Accurate classification of cancer stages will achieve precision treatment for cancer. Metabolomics presents biological phenotypes at the metabolite level and holds a great potential for cancer classification. Since metabolomic data can be obtained from different samples or analytical techniques, data fusion has been applied to improve classification accuracy. Data preprocessing is an essential step during metabolomic data analysis. Therefore, we developed an innovative optimization method to select a proper data preprocessing strategy for metabolomic data fusion using a design of experiment approach for improving the classification of prostate cancer (PCa) stages. In this study, urine and serum samples were collected from participants at five phases of PCa and analyzed using a ¹H NMR-based metabolomic approach. Partial least squares-discriminant analysis (PLS-DA) was used as a classification model and its performance was assessed by goodness of fit (R²) and predictive ability (Q²). Results show that data preprocessing significantly affects classification performance and depends on data properties. Using the fused metabolomic data from urine and serum, a PLS-DA model with the optimal data preprocessing (R² = 0.729, Q² = 0.504, P < 0.0001) can effectively improve model performance and achieve a better classification result for PCa stages as compared with that without data preprocessing (R² = 0.139, Q² = 0.006, P = 0.450). Therefore, we propose that metabolomic data fusion integrated with an optimal data preprocessing strategy can significantly improve the classification of cancer stages for precision treatment. - Highlights: • NMR metabolomic analysis of body fluids can be used for staging prostate cancer. • Data preprocessing is an essential step for metabolomic analysis. • Data fusion improves information recovery for cancer classification. • Design of experiment achieves optimal preprocessing of metabolomic data fusion.
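
    The optimization loop can be pictured as scoring each preprocessing combination by the cross-validated predictive ability (Q²) of the resulting model; the sketch below uses PLS regression as a stand-in for PLS-DA, the candidate preprocessing choices are illustrative, and a rigorous version would also fit the scaling inside each cross-validation fold:

    ```python
    # Sketch: pick the preprocessing combination maximizing cross-validated Q2.
    import numpy as np
    from itertools import product
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    def q2(X, y, n_components=2):
        pred = cross_val_predict(PLSRegression(n_components), X,
                                 y.reshape(-1, 1), cv=7)
        press = ((y.reshape(-1, 1) - pred) ** 2).sum()
        return 1 - press / ((y - y.mean()) ** 2).sum()

    scalers = {"none": lambda X: X,
               "auto": lambda X: (X - X.mean(0)) / X.std(0)}
    transforms = {"none": lambda X: X,
                  "log": lambda X: np.log1p(X - X.min() + 1e-9)}

    def best_combination(X, y):          # X: fused data matrix, y: stage labels
        return max(product(scalers, transforms),
                   key=lambda c: q2(transforms[c[1]](scalers[c[0]](X)), y))
    ```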

  3. Effective Feature Preprocessing for Time Series Forecasting

    DEFF Research Database (Denmark)

    Zhao, Junhua; Dong, Zhaoyang; Xu, Zhao

    2006-01-01

    Time series forecasting is an important area in data mining research. Feature preprocessing techniques have significant influence on forecasting accuracy, therefore are essential in a forecasting model. Although several feature preprocessing techniques have been applied in time series forecasting...... performance in time series forecasting. It is demonstrated in our experiment that, effective feature preprocessing can significantly enhance forecasting accuracy. This research can be a useful guidance for researchers on effectively selecting feature preprocessing techniques and integrating them with time...... series forecasting models....

  4. Facilitating Watermark Insertion by Preprocessing Media

    Directory of Open Access Journals (Sweden)

    Matt L. Miller

    2004-10-01

    Full Text Available There are several watermarking applications that require the deployment of a very large number of watermark embedders. These applications often have severe budgetary constraints that limit the computation resources that are available. Under these circumstances, only simple embedding algorithms can be deployed, which have limited performance. In order to improve performance, we propose preprocessing the original media. It is envisaged that this preprocessing occurs during content creation and has no budgetary or computational constraints. Preprocessing combined with simple embedding creates a watermarked Work, the performance of which exceeds that of simple embedding alone. However, this performance improvement is obtained without any increase in the computational complexity of the embedder. Rather, the additional computational burden is shifted to the preprocessing stage. A simple example of this procedure is described and experimental results confirm our assertions.

  5. Retinal Image Preprocessing: Background and Noise Segmentation

    Directory of Open Access Journals (Sweden)

    Usman Akram

    2012-09-01

    Full Text Available Retinal images are used for the automated screening and diagnosis of diabetic retinopathy. The retinal image quality must be improved for the detection of features and abnormalities, and for this purpose preprocessing of retinal images is vital. In this paper, we present a novel automated approach for preprocessing of colored retinal images. The proposed technique improves the quality of the input retinal image by separating the background and noisy area from the overall image. It contains coarse segmentation and fine segmentation. The standard retinal image databases Diaretdb0, Diaretdb1, DRIVE and STARE are used to validate our preprocessing technique. The experimental results show the validity of the proposed preprocessing technique.

  6. The Evaluation of Preprocessing Choices in Single-Subject BOLD fMRI Using NPAIRS Performance Metrics

    DEFF Research Database (Denmark)

    LaConte, Stephen; Rottenberg, David; Strother, Stephen

    2003-01-01

    to obtain cross-validation-based model performance estimates of prediction accuracy and global reproducibility for various degrees of model complexity. We rely on the concept of an analysis chain meta-model in which all parameters of the preprocessing steps along with the final statistical model are treated...

  7. The Effect of Preprocessing on Arabic Document Categorization

    Directory of Open Access Journals (Sweden)

    Abdullah Ayedh

    2016-04-01

    Full Text Available Preprocessing is one of the main components in a conventional document categorization (DC) framework. This paper aims to highlight the effect of preprocessing tasks on the efficiency of the Arabic DC system. In this study, three classification techniques are used, namely, naive Bayes (NB), k-nearest neighbor (KNN), and support vector machine (SVM). Experimental analysis on Arabic datasets reveals that preprocessing techniques have a significant impact on the classification accuracy, especially with the complicated morphological structure of the Arabic language. Choosing appropriate combinations of preprocessing tasks provides significant improvement in the accuracy of document categorization depending on the feature size and classification techniques. Findings of this study show that the SVM technique has outperformed the KNN and NB techniques. The SVM technique achieved 96.74% micro-F1 value by using the combination of normalization and stemming as preprocessing tasks.

  8. Preprocessing of A-scan GPR data based on energy features

    Science.gov (United States)

    Dogan, Mesut; Turhan-Sayan, Gonul

    2016-05-01

    There is an increasing demand for noninvasive real-time detection and classification of buried objects in various civil and military applications. The problem of detection and annihilation of landmines is particularly important due to strong safety concerns. The requirement for a fast real-time decision process is as important as the requirements for high detection rates and low false alarm rates. In this paper, we introduce and demonstrate a computationally simple, time-efficient, energy-based preprocessing approach that can be used in ground penetrating radar (GPR) applications to eliminate reflections from the air-ground boundary and to locate the buried objects simultaneously, in a single step. The instantaneous power signals, the total energy values and the cumulative energy curves are extracted from the A-scan GPR data. The cumulative energy curves, in particular, are shown to be useful for detecting the presence and location of buried objects in a fast and simple way while preserving the spectral content of the original A-scan data for further steps of physics-based target classification. The proposed method is demonstrated using the GPR data collected at the facilities of IPA Defense, Ankara, at outdoor test lanes. Cylindrically shaped plastic containers were buried in fine-medium sand to simulate buried landmines. These plastic containers were half-filled with ammonium nitrate including metal pins. Results of this pilot study are highly promising and motivate further research on the use of energy-based preprocessing features in the landmine detection problem.
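
    The energy features themselves are simple to compute for a single A-scan; a minimal sketch:

    ```python
    # Sketch: instantaneous power, total energy, and cumulative energy curve.
    import numpy as np

    def energy_features(ascan):
        power = ascan.astype(float) ** 2        # instantaneous power signal
        total = power.sum()                     # total energy of the trace
        cumulative = np.cumsum(power) / total   # normalized cumulative energy
        return power, total, cumulative         # curve shape flags buried targets
    ```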

  9. Simultaneous data pre-processing and SVM classification model selection based on a parallel genetic algorithm applied to spectroscopic data of olive oils.

    Science.gov (United States)

    Devos, Olivier; Downey, Gerard; Duponchel, Ludovic

    2014-04-01

    Classification is an important task in chemometrics. For several years now, support vector machines (SVMs) have proven to be powerful for infrared spectral data classification. However, such methods require optimisation of parameters in order to control the risk of overfitting and the complexity of the boundary. Furthermore, it is established that the prediction ability of classification models can be improved using pre-processing in order to remove unwanted variance in the spectra. In this paper we propose a new methodology based on a genetic algorithm (GA) for the simultaneous optimisation of SVM parameters and pre-processing (GENOPT-SVM). The method has been tested for the discrimination of the geographical origin of Italian olive oil (Ligurian and non-Ligurian) on the basis of near infrared (NIR) or mid infrared (FTIR) spectra. Different classification models (PLS-DA, SVM with mean centre data, GENOPT-SVM) have been tested and statistically compared using McNemar's statistical test. For the two datasets, SVM with optimised pre-processing gives models with higher accuracy than the one obtained with PLS-DA on pre-processed data. In the case of the NIR dataset, most of this accuracy improvement (86.3% compared with 82.8% for PLS-DA) occurred using only a single pre-processing step. For the FTIR dataset, three optimised pre-processing steps are required to obtain an SVM model with significant accuracy improvement (82.2%) compared to the one obtained with PLS-DA (78.6%). Furthermore, this study demonstrates that even SVM models have to be developed on the basis of well-corrected spectral data in order to obtain higher classification rates. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Validation of DWI pre-processing procedures for reliable differentiation between human brain gliomas.

    Science.gov (United States)

    Vellmer, Sebastian; Tonoyan, Aram S; Suter, Dieter; Pronin, Igor N; Maximov, Ivan I

    2018-02-01

    Diffusion magnetic resonance imaging (dMRI) is a powerful tool in clinical applications, in particular, in oncology screening. dMRI demonstrated its benefit and efficiency in the localisation and detection of different types of human brain tumours. Clinical dMRI data suffer from multiple artefacts such as motion and eddy-current distortions, contamination by noise, outliers etc. In order to increase the image quality of the derived diffusion scalar metrics and the accuracy of the subsequent data analysis, various pre-processing approaches are actively developed and used. In the present work we assess the effect of different pre-processing procedures such as a noise correction, different smoothing algorithms and spatial interpolation of raw diffusion data, with respect to the accuracy of brain glioma differentiation. As a set of sensitive biomarkers of the glioma malignancy grades we chose the derived scalar metrics from diffusion and kurtosis tensor imaging as well as the neurite orientation dispersion and density imaging (NODDI) biophysical model. Our results show that the application of noise correction, anisotropic diffusion filtering, and cubic-order spline interpolation resulted in the highest sensitivity and specificity for glioma malignancy grading. Thus, these pre-processing steps are recommended for the statistical analysis in brain tumour studies. Copyright © 2017. Published by Elsevier GmbH.

  11. Supervised pre-processing approaches in multiple class variables classification for fish recruitment forecasting

    KAUST Repository

    Fernandes, José Antonio

    2013-02-01

    A multi-species approach to fisheries management requires taking into account the interactions between species in order to improve recruitment forecasting of the fish species. Recent advances in Bayesian networks direct the learning of models with several interrelated variables to be forecasted simultaneously. These models are known as multi-dimensional Bayesian network classifiers (MDBNs). Pre-processing steps are critical for the posterior learning of the model in these kinds of domains. Therefore, in the present study, a set of 'state-of-the-art' uni-dimensional pre-processing methods, within the categories of missing data imputation, feature discretization and feature subset selection, are adapted to be used with MDBNs. A framework that includes the proposed multi-dimensional supervised pre-processing methods, coupled with a MDBN classifier, is tested with synthetic datasets and the real domain of fish recruitment forecasting. The rate of correctly forecasting three fish species (anchovy, sardine and hake) simultaneously is doubled (from 17.3% to 29.5%) using the multi-dimensional approach, in comparison to mono-species models. The probability assessments also show high improvement, reducing the average error (estimated by means of the Brier score) from 0.35 to 0.27. Finally, these differences are superior to the forecasting of species by pairs. © 2012 Elsevier Ltd.

  12. Ensemble preprocessing of near-infrared (NIR) spectra for multivariate calibration

    International Nuclear Information System (INIS)

    Xu Lu; Zhou Yanping; Tang Lijuan; Wu Hailong; Jiang Jianhui; Shen Guoli; Yu Ruqin

    2008-01-01

    Preprocessing of raw near-infrared (NIR) spectral data is indispensable in multivariate calibration when the measured spectra are subject to significant noises, baselines and other undesirable factors. However, due to the lack of sufficient prior information and an incomplete knowledge of the raw data, NIR spectra preprocessing in multivariate calibration is still trial and error. How to select a proper method depends largely on both the nature of the data and the expertise and experience of the practitioners. This might limit the applications of multivariate calibration in many fields, where researchers are not very familiar with the characteristics of many preprocessing methods unique to chemometrics and have difficulties in selecting the most suitable methods. Another problem is that many preprocessing methods, when used alone, might degrade the data in certain aspects or lose some useful information while improving certain qualities of the data. In order to tackle these problems, this paper proposes a new concept of data preprocessing, the ensemble preprocessing method, where partial least squares (PLS) models built on differently preprocessed data are combined by Monte Carlo cross validation (MCCV) stacked regression. Little or no prior information of the data and expertise are required. Moreover, fusion of complementary information obtained by different preprocessing methods often leads to a more stable and accurate calibration model. The investigation of two real data sets has demonstrated the advantages of the proposed method.
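
    The ensemble idea can be sketched as fitting one model per preprocessing variant and learning non-negative stacking weights from cross-validated predictions; ordinary K-fold stands in for MCCV here, and the preprocessing variants are illustrative:

    ```python
    # Sketch: stack PLS models built on differently preprocessed copies of X.
    import numpy as np
    from scipy.optimize import nnls
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    preprocessors = [lambda X: X,                          # raw
                     lambda X: (X - X.mean(0)) / X.std(0), # autoscaled
                     lambda X: np.gradient(X, axis=1)]     # derivative

    def stacked_weights(X, y, n_components=5):
        Z = np.column_stack([cross_val_predict(PLSRegression(n_components),
                                               p(X), y).ravel()
                             for p in preprocessors])
        w, _ = nnls(Z, y)                                  # stacking weights
        return w                                           # final pred: Z @ w
    ```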

  13. Impact of functional MRI data preprocessing pipeline on default-mode network detectability in patients with disorders of consciousness

    Directory of Open Access Journals (Sweden)

    Adrian Andronache

    2013-08-01

    Full Text Available An emerging application of resting-state functional MRI is the study of patients with disorders of consciousness (DoC), where integrity of default-mode network (DMN) activity is associated with the clinical level of preservation of consciousness. Due to the inherent inability to follow verbal instructions, arousal induced by scanning noise and postural pain, these patients tend to exhibit substantial levels of movement. This results in spurious, non-neural fluctuations of the blood-oxygen level-dependent (BOLD) signal, which impair the evaluation of residual functional connectivity. Here, the effect of data preprocessing choices on the detectability of the DMN was systematically evaluated in a representative cohort of 30 clinically and etiologically heterogeneous DoC patients and 33 healthy controls. Starting from a standard preprocessing pipeline, additional steps were gradually inserted, namely band-pass filtering, removal of co-variance with the movement vectors, removal of co-variance with the global brain parenchyma signal, rejection of realignment outlier volumes and ventricle masking. Both independent-component analysis (ICA) and seed-based analysis (SBA) were performed, and DMN detectability was assessed quantitatively as well as visually. The results of the present study strongly show that the detection of DMN activity in the sub-optimal fMRI series acquired on DoC patients is contingent on the use of adequate filtering steps. ICA and SBA are differently affected but give convergent findings for high-grade preprocessing. We propose that future studies in this area should adopt the described preprocessing procedures as a minimum standard to reduce the probability of wrongly inferring that DMN activity is absent.

  14. Image preprocessing for improving computational efficiency in implementation of restoration and superresolution algorithms.

    Science.gov (United States)

    Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen

    2002-12-10

    Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the

  15. Preprocessing Moist Lignocellulosic Biomass for Biorefinery Feedstocks

    Energy Technology Data Exchange (ETDEWEB)

    Neal Yancey; Christopher T. Wright; Craig Conner; J. Richard Hess

    2009-06-01

    Biomass preprocessing is one of the primary operations in the feedstock assembly system of a lignocellulosic biorefinery. Preprocessing is generally accomplished using industrial grinders to format biomass materials into a suitable biorefinery feedstock for conversion to ethanol and other bioproducts. Many factors affect machine efficiency and the physical characteristics of preprocessed biomass. For example, moisture content of the biomass as received from the point of production has a significant impact on overall system efficiency and can significantly affect the characteristics (particle size distribution, flowability, storability, etc.) of the size-reduced biomass. Many different grinder configurations are available on the market, each with advantages under specific conditions. Ultimately, the capacity and/or efficiency of the grinding process can be enhanced by selecting the grinder configuration that optimizes grinder performance based on moisture content and screen size. This paper discusses the relationships of biomass moisture with respect to preprocessing system performance and product physical characteristics and compares data obtained on corn stover, switchgrass, and wheat straw as model feedstocks during Vermeer HG 200 grinder testing. During the tests, grinder screen configuration and biomass moisture content were varied and tested to provide a better understanding of their relative impact on machine performance and the resulting feedstock physical characteristics and uniformity relative to each crop tested.

  16. Comparison of pre-processing methods for multiplex bead-based immunoassays.

    Science.gov (United States)

    Rausch, Tanja K; Schillert, Arne; Ziegler, Andreas; Lüking, Angelika; Zucht, Hans-Dieter; Schulz-Knappe, Peter

    2016-08-11

    High throughput protein expression studies can be performed using bead-based protein immunoassays, such as the Luminex® xMAP® technology. Technical variability is inherent to these experiments and may lead to systematic bias and reduced power. To reduce technical variability, data pre-processing is performed. However, no recommendations exist for the pre-processing of Luminex® xMAP® data. We compared 37 different data pre-processing combinations of transformation and normalization methods in 42 samples on 384 analytes obtained from a multiplex immunoassay based on the Luminex® xMAP® technology. We evaluated the performance of each pre-processing approach with 6 different performance criteria. Three of the performance criteria were plots. All plots were evaluated by 15 independent and blinded readers. Four different combinations of transformation and normalization methods performed well as pre-processing procedures for this bead-based protein immunoassay. The following combinations of transformation and normalization were suitable for pre-processing Luminex® xMAP® data in this study: weighted Box-Cox followed by quantile or robust spline normalization (rsn), asinh transformation followed by loess normalization, and Box-Cox followed by rsn.
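
    Two of the well-performing building blocks named above, the asinh transformation and quantile normalization, are easy to sketch; the asinh cofactor is an assumed, commonly used value, not taken from the paper:

    ```python
    # Sketch: asinh transform, then force samples onto a common distribution.
    import numpy as np

    def asinh_transform(mfi, cofactor=5.0):
        return np.arcsinh(mfi / cofactor)

    def quantile_normalize(X):                 # X: analytes x samples
        ranks = X.argsort(axis=0).argsort(axis=0)
        mean_dist = np.sort(X, axis=0).mean(axis=1)
        return mean_dist[ranks]                # map each rank to the mean quantile
    ```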

  17. Effects of preprocessing method on TVOC emission of car mat

    Science.gov (United States)

    Wang, Min; Jia, Li

    2013-02-01

    The effects of the mat preprocessing method on total volatile organic compound (TVOC) emission of car mats are studied in this paper. An appropriate TVOC emission period for car mats is suggested. The emission factors for total volatile organic compounds from three kinds of new car mats are discussed. The car mats are preprocessed by washing, baking and ventilation. When car mats are preprocessed by washing, the TVOC emission for all samples tested is lower than with the other preprocessing methods. The TVOC emission remains stable for a minimum of 4 days. The TVOC emitted from some samples may exceed 2500 μg/kg, but the TVOC emitted from washed polyamide (PA) and wool mats is less than 2500 μg/kg. The emission factors of total volatile organic compounds (TVOC) are experimentally investigated for the different preprocessing methods. The air temperature in the environment chamber and the water temperature used for washing are important factors influencing the emission of car mats.

  18. Preprocessing of 18F-DMFP-PET Data Based on Hidden Markov Random Fields and the Gaussian Distribution

    Directory of Open Access Journals (Sweden)

    Fermín Segovia

    2017-10-01

    Full Text Available 18F-DMFP-PET is an emerging neuroimaging modality used to diagnose Parkinson's disease (PD) that allows us to examine postsynaptic dopamine D2/3 receptors. Like other neuroimaging modalities used for PD diagnosis, most of the total intensity of 18F-DMFP-PET images is concentrated in the striatum. However, other regions can also be useful for diagnostic purposes. An appropriate delimitation of the regions of interest contained in 18F-DMFP-PET data is crucial to improve the automatic diagnosis of PD. In this manuscript we propose a novel methodology to preprocess 18F-DMFP-PET data that improves the accuracy of computer aided diagnosis systems for PD. First, the data were segmented using an algorithm based on Hidden Markov Random Field. As a result, each neuroimage was divided into 4 maps according to the intensity and the neighborhood of the voxels. The maps were then individually normalized so that the shape of their histograms could be modeled by a Gaussian distribution with equal parameters for all the neuroimages. This approach was evaluated using a dataset with neuroimaging data from 87 parkinsonian patients. After these preprocessing steps, a Support Vector Machine classifier was used to separate idiopathic and non-idiopathic PD. Data preprocessed by the proposed method provided higher accuracy results than the ones preprocessed with previous approaches.

  19. Preprocessing Algorithm for Deciphering Historical Inscriptions Using String Metric

    Directory of Open Access Journals (Sweden)

    Lorand Lehel Toth

    2016-07-01

    Full Text Available The article presents improvements in the preprocessing part of the deciphering method (shortly, the preprocessing algorithm) for historical inscriptions of unknown origin. Glyphs used in historical inscriptions changed through time; therefore, various versions of the same script may contain different glyphs for each grapheme. The purpose of the preprocessing algorithm is to reduce the running time of the deciphering process by filtering out the less probable interpretations of the examined inscription. However, the first version of the preprocessing algorithm leads to an incorrect outcome or no result in certain cases. Therefore, its improved version was developed to find the most similar words in the dictionary by specifying the search conditions more accurately, but still computationally effectively. Moreover, a sophisticated similarity metric used to determine the possible meaning of the unknown inscription is introduced. The results of the evaluations are also detailed.
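
    The filtering idea can be sketched as keeping only dictionary words within a small edit distance of a candidate reading, shrinking the search space before the expensive deciphering step; plain Levenshtein distance is used below for illustration, not the paper's metric:

    ```python
    # Sketch: prune dictionary candidates by edit distance to a reading.
    def levenshtein(a, b):
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                # deletion
                               cur[j - 1] + 1,             # insertion
                               prev[j - 1] + (ca != cb)))  # substitution
            prev = cur
        return prev[-1]

    def candidates(reading, dictionary, max_dist=2):
        return [w for w in dictionary if levenshtein(reading, w) <= max_dist]
    ```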

  20. The 1996 ENDF pre-processing codes

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1996-01-01

    The codes are named 'the Pre-processing' codes, because they are designed to pre-process ENDF/B data for later, further processing for use in applications. This is a modular set of computer codes, each of which reads and writes evaluated nuclear data in the ENDF/B format. Each code performs one or more independent operations on the data, as described below. These codes are designed to be computer independent, and are presently operational on every type of computer, from large mainframe computers to small personal computers such as the IBM-PC and Power MAC. The codes are available from the IAEA Nuclear Data Section, free of charge upon request. (author)

  1. The recursive combination filter approach of pre-processing for the estimation of standard deviation of RR series.

    Science.gov (United States)

    Mishra, Alok; Swati, D

    2015-09-01

    Variation in the interval between the R-R peaks of the electrocardiogram represents the modulation of the cardiac oscillations by the autonomic nervous system. This variation is contaminated by anomalous signals called ectopic beats, artefacts or noise, which mask the true behaviour of heart rate variability. In this paper, we have proposed a combination filter of a recursive impulse rejection filter and a recursive 20% filter, with recursive application and a preference for replacement over removal of abnormal beats, to improve the pre-processing of the inter-beat intervals. We have tested this novel recursive combinational method with median replacement to estimate the standard deviation of normal-to-normal (SDNN) beat intervals of congestive heart failure (CHF) and normal sinus rhythm subjects. This work discusses in detail the improvement in pre-processing over single use of the impulse rejection filter and removal of abnormal beats, for the estimation of SDNN and Poincaré plot descriptors (SD1, SD2, and SD1/SD2). We have found the 22 ms value of SDNN and the 36 ms value of the SD2 descriptor of the Poincaré plot to be clinical indicators in discriminating normal cases from CHF cases. The pre-processing is also useful in the calculation of the Lyapunov exponent, a nonlinear index: Lyapunov exponents calculated after the proposed pre-processing change in a way that begins to follow the notion of less complex behaviour in diseased states.
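
    A hedged sketch of the combination-filter idea, flagging beats that deviate strongly from a local median (impulse rejection) or by more than 20% from their neighbourhood, and replacing rather than removing them; the thresholds and iteration bound are illustrative, not the paper's exact parameters:

    ```python
    # Sketch: recursive impulse-rejection + 20% filter with median replacement.
    import numpy as np
    from scipy.signal import medfilt

    def clean_rr(rr, k=11, thresh=4.0, max_pass=10):
        rr = rr.astype(float).copy()
        for _ in range(max_pass):              # recursive application, bounded
            med = medfilt(rr, k)
            resid = np.abs(rr - med)
            mad = np.median(resid) + 1e-12     # robust spread estimate
            bad = (resid / (1.483 * mad) > thresh) | (resid > 0.2 * med)
            if not bad.any():
                break
            rr[bad] = med[bad]                 # replacement preferred over removal
        return rr
    ```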

  2. A survey of visual preprocessing and shape representation techniques

    Science.gov (United States)

    Olshausen, Bruno A.

    1988-01-01

    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).

  3. Applying Enhancement Filters in the Pre-processing of Images of Lymphoma

    International Nuclear Information System (INIS)

    Silva, Sérgio Henrique; Do Nascimento, Marcelo Zanchetta; Neves, Leandro Alves; Batista, Valério Ramos

    2015-01-01

    Lymphoma is a type of cancer that affects the immune system, and is classified as Hodgkin or non-Hodgkin. It is one of the ten most common types of cancer on Earth. Among all malignant neoplasms diagnosed in the world, lymphoma accounts for three to four percent. Our work presents a study of some filters devoted to enhancing images of lymphoma at the pre-processing step. Here the enhancement is useful for removing noise from the digital images. We have analysed the noise caused by different sources, such as room vibration, scraps and defocusing, in the following classes of lymphoma: follicular, mantle cell and B-cell chronic lymphocytic leukemia. The Gaussian, Median and Mean-Shift filters were applied in different colour models (RGB, Lab and HSV). Afterwards, we performed a quantitative analysis of the images by means of the Structural Similarity Index, in order to evaluate the similarity between the images. In all cases we obtained a certainty of at least 75%, which rises to 99% if one considers only HSV. Namely, we conclude that HSV is an important choice of colour model for pre-processing histological images of lymphoma, because in this case the resulting image gets the best enhancement.
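
    To make the evaluation concrete, here is a hedged scikit-image sketch that applies Gaussian and median filters to the V channel of an HSV histology image and scores each result with the Structural Similarity Index. The file name, sigma and window size are assumptions, and the paper additionally tests a Mean-Shift filter and other colour models.

```python
import numpy as np
from skimage import io, color, filters
from skimage.metrics import structural_similarity
from scipy import ndimage

# Hypothetical input file; any RGB histology image works.
rgb = io.imread("lymphoma_sample.png")[..., :3] / 255.0

hsv = color.rgb2hsv(rgb)
v = hsv[..., 2]                      # denoise the value channel only

v_gauss = filters.gaussian(v, sigma=1.0)
v_median = ndimage.median_filter(v, size=3)

# SSIM between original and denoised channel (1.0 = identical)
for name, den in [("gaussian", v_gauss), ("median", v_median)]:
    score = structural_similarity(v, den, data_range=1.0)
    print(f"{name}: SSIM = {score:.3f}")
```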

  4. Real-time topic-aware influence maximization using preprocessing.

    Science.gov (United States)

    Chen, Wei; Lin, Tian; Yang, Cheng

    2016-01-01

    Influence maximization is the task of finding a set of seed nodes in a social network such that the influence spread of these seed nodes, based on a certain influence diffusion model, is maximized. Topic-aware influence diffusion models have recently been proposed to address the issue that influence between a pair of users is often topic-dependent and that the information, ideas, innovations, etc. being propagated in networks are typically mixtures of topics. In this paper, we focus on the topic-aware influence maximization task. In particular, we study preprocessing methods to avoid redoing influence maximization for each topic mixture from scratch. We explore two preprocessing algorithms with theoretical justifications. Our empirical results on data obtained in a couple of existing studies demonstrate that one of our algorithms stands out as a strong candidate, providing microsecond online response time and competitive influence spread, with reasonable preprocessing effort.

  5. Compact Circuit Preprocesses Accelerometer Output

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1993-01-01

    Compact electronic circuit transfers dc power to, and preprocesses ac output of, accelerometer and associated preamplifier. Incorporated into accelerometer case during initial fabrication or retrofit onto commercial accelerometer. Made of commercial integrated circuits and other conventional components; made smaller by use of micrologic and surface-mount technology.

  6. Change detection using landsat time series: A review of frequencies, preprocessing, algorithms, and applications

    Science.gov (United States)

    Zhu, Zhe

    2017-08-01

    The free and open access to all archived Landsat images in 2008 has completely changed the way Landsat data are used. Many novel change detection algorithms based on Landsat time series have been developed. We present a comprehensive review of four important aspects of change detection studies based on Landsat time series: frequencies, preprocessing, algorithms, and applications. We observed the trend that the more recent the study, the higher the frequency of Landsat time series used. We reviewed a series of image preprocessing steps, including atmospheric correction, cloud and cloud shadow detection, and composite/fusion/metrics techniques. We divided all change detection algorithms into six categories: thresholding, differencing, segmentation, trajectory classification, statistical boundary, and regression. Within each category, six major characteristics of the different algorithms, such as frequency, change index, univariate/multivariate, online/offline, abrupt/gradual change, and sub-pixel/pixel/spatial, were analyzed. Moreover, some of the widely used change detection algorithms were also discussed. Finally, we reviewed different change detection applications by dividing them into two categories: change target and change agent detection.

  7. Preprocessing of emotional visual information in the human piriform cortex.

    Science.gov (United States)

    Schulze, Patrick; Bestgen, Anne-Kathrin; Lech, Robert K; Kuchinke, Lars; Suchan, Boris

    2017-08-23

    This study examines the processing of visual information by the olfactory system in humans. Recent data point to the processing of visual stimuli by the piriform cortex, a region mainly known as part of the primary olfactory cortex. Moreover, the piriform cortex generates predictive templates of olfactory stimuli to facilitate olfactory processing. This study addresses the question of whether this region is also capable of preprocessing emotional visual information. To gain insight into the preprocessing and transfer of emotional visual information into olfactory processing, we recorded hemodynamic responses during affective priming using functional magnetic resonance imaging (fMRI). Odors of different valence (pleasant, neutral and unpleasant) were primed by images of emotional facial expressions (happy, neutral and disgust). Our findings are the first to demonstrate that the piriform cortex preprocesses emotional visual information prior to any olfactory stimulation and that the emotional connotation of this preprocessing is subsequently transferred and integrated into an extended olfactory network for olfactory processing.

  8. A data preprocessing strategy for metabolomics to reduce the mask effect in data analysis

    Directory of Open Access Journals (Sweden)

    Jun eYang

    2015-02-01

    Full Text Available Metabolomics is a booming research field. Its success highly relies on the discovery of differential metabolites by comparing different data sets (for example, patients vs. controls. One of the challenges is that differences of the low-abundance metabolites between groups are often masked by the high variation of abundant metabolites. In order to address this challenge, a novel data preprocessing strategy consisting of three steps was proposed in this study. In step 1, a 'modified 80%' rule was used to reduce the effect of missing values; in step 2, unit-variance and Pareto scaling methods were used to reduce the mask effect from the abundant metabolites; in step 3, in order to correct the adverse effect of scaling, stability information of the variables, deduced from intensity information and the class information, was used to assign suitable weights to the variables. When applied to an LC/MS-based metabolomics dataset from a chronic hepatitis B patient study and two simulated datasets, the mask effect was found to be partially eliminated and several new low-abundance differential metabolites were rescued.
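
    A hedged numpy sketch of steps 1 and 2 of such a strategy: a simplified 80% rule and unit-variance/Pareto scaling. The stability-based weighting of step 3 and the group-wise form of the 80% rule are omitted, and the peak table is synthetic.

```python
import numpy as np

def unit_variance_scale(X):
    """Centre each metabolite (column) and divide by its std."""
    return (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

def pareto_scale(X):
    """Centre each column and divide by the square root of its std,
    a milder scaling that inflates low-abundance features less."""
    return (X - X.mean(axis=0)) / np.sqrt(X.std(axis=0, ddof=1))

def modified_80_rule(X, threshold=0.8):
    """Keep only metabolites detected (non-zero) in at least 80% of
    samples; simplified here to the whole matrix instead of per group."""
    keep = (X > 0).mean(axis=0) >= threshold
    return X[:, keep]

X = np.abs(np.random.randn(20, 100)) * np.logspace(0, 3, 100)  # toy peak table
X = modified_80_rule(X)
X_scaled = pareto_scale(X)
```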

  9. Image preprocessing study on KPCA-based face recognition

    Science.gov (United States)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural and convenient advantages, has attracted more and more attention. This paper investigates a face recognition system comprising face detection, feature extraction and recognition. It reviews the theory and key technology of various preprocessing methods in the face detection process and, using the KPCA method, focuses on the different recognition results obtained with different preprocessing methods. We choose the YCbCr colour space for skin segmentation and integral projection for face location. We use erosion and dilation (the opening and closing operations) and an illumination compensation method to preprocess the face images, and then apply a face recognition method based on kernel principal component analysis; the experiments were carried out on a typical face database, with the algorithms implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel method based on the PCA algorithm makes the extracted features represent the original image information better, owing to the nonlinear feature extraction, and can obtain a higher recognition rate. In the image preprocessing stage, we found that different operations on the images may yield different results, and hence different recognition rates in the recognition stage. At the same time, in the kernel principal component analysis, the value of the power of the polynomial kernel function can affect the recognition result.
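
    A minimal scikit-learn sketch of the KPCA stage: project preprocessed face images with a polynomial-kernel KPCA and classify with a nearest-neighbour rule. The Olivetti faces, the number of components, the kernel degree and the 1-NN classifier are stand-in assumptions, not the paper's exact setup.

```python
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

faces = fetch_olivetti_faces()                 # stand-in face database
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, test_size=0.25, random_state=0)

# Polynomial-kernel KPCA: the kernel degree is the "power" whose
# choice the abstract reports to influence the recognition rate.
kpca = KernelPCA(n_components=60, kernel="poly", degree=2)
Z_tr = kpca.fit_transform(X_tr)
Z_te = kpca.transform(X_te)

clf = KNeighborsClassifier(n_neighbors=1).fit(Z_tr, y_tr)
print("recognition rate:", clf.score(Z_te, y_te))
```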

  10. Data pre-processing for web log mining: Case study of commercial bank website usage analysis

    Directory of Open Access Journals (Sweden)

    Jozef Kapusta

    2013-01-01

    Full Text Available We use data cleaning, integration, reduction and data conversion methods in the pre-processing level of data analysis. Data processing techniques improve the overall quality of the patterns mined. The paper describes the use of standard pre-processing methods for preparing data of a commercial bank website, in the form of the log file obtained from the web server. Data cleaning, as the simplest step of data pre-processing, is non-trivial here, as the analysed content is highly specific. We had to deal with the problem of frequent changes of the content, and even frequent changes of the structure. Regular changes in the structure make use of the sitemap impossible. We present approaches to deal with this problem: we were able to create the sitemap dynamically, based solely on the content of the log file. In this case study, we also examined just one part of the website, rather than performing the standard analysis of an entire website, as we did not have access to all log files for security reasons. As a result, the traditional practices had to be adapted for this special case. Analysing just a small fraction of the website resulted in short session times for regular visitors, and we were not able to use the recommended methods to determine the optimal value of the session time. Therefore, in this paper we propose new methods based on outlier identification for raising the accuracy of the session length.
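
    As a small illustration of one standard pre-processing step discussed here, the pandas sketch below identifies sessions by an inactivity timeout. The 30-minute cut-off is the common default, not the outlier-based session length the authors propose, and the toy log is fabricated.

```python
import pandas as pd

# Toy web-server log: one row per request of a visitor.
log = pd.DataFrame({
    "visitor": ["a", "a", "a", "b", "b"],
    "time": pd.to_datetime([
        "2013-01-05 10:00:00", "2013-01-05 10:04:00",
        "2013-01-05 10:40:00", "2013-01-05 11:00:00",
        "2013-01-05 11:05:00"]),
})

SESSION_TIMEOUT = pd.Timedelta(minutes=30)   # assumed cut-off, not the
                                             # outlier-based value of the paper
log = log.sort_values(["visitor", "time"])
gap = log.groupby("visitor")["time"].diff()
new_session = gap.isna() | (gap > SESSION_TIMEOUT)
log["session_id"] = new_session.cumsum()     # global running session id
print(log)
```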

  11. An Effective Measured Data Preprocessing Method in Electrical Impedance Tomography

    Directory of Open Access Journals (Sweden)

    Chenglong Yu

    2014-01-01

    Full Text Available As an advanced process detection technology, electrical impedance tomography (EIT) has been widely studied in the industrial fields. However, EIT techniques are greatly limited by low spatial resolution. This problem may result from incorrect preprocessing of the measured data and the lack of a general criterion to evaluate different preprocessing processes. In this paper, an EIT data preprocessing method is proposed that takes roots of all the measured data, and it is evaluated by two indexes constructed from the rooted EIT measurements. By finding the optimums of the two indexes, the proposed method can be applied to improve the EIT imaging spatial resolution. For a theoretical model, the optimal rooting times of the two indexes range in [0.23, 0.33] and in [0.22, 0.35], respectively. Moreover, the factors that affect the correctness of the proposed method are analysed. Preprocessing of the measured data is necessary and helpful for any imaging process; thus, the proposed method can be generally and widely used in imaging processes. Experimental results validate the two proposed indexes.

  12. Research on pre-processing of QR Code

    Science.gov (United States)

    Sun, Haixing; Xia, Haojie; Dong, Ning

    2013-10-01

    QR code encodes many kinds of information because of its advantages: large storage capacity, high reliability, omnidirectional ultra-high-speed reading, small printing size and highly efficient representation of Chinese characters, etc. In order to obtain a clearer binarized image from a complex background, and to improve the recognition rate of QR code, this paper studies pre-processing methods for QR code (Quick Response Code), and presents algorithms and results of image pre-processing for QR code recognition. The conventional method is improved by adapting Sauvola's adaptive binarization method. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
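
    A hedged sketch of the adaptive-binarization step using Sauvola's method as implemented in scikit-image; the window size and k are tuning assumptions and the input photograph is hypothetical, so this only illustrates the kind of local thresholding the paper builds on.

```python
import numpy as np
from skimage import io, color
from skimage.filters import threshold_sauvola

# Hypothetical capture of a QR code against a non-uniform background.
gray = color.rgb2gray(io.imread("qr_photo.png")[..., :3])

# Sauvola's local threshold adapts to background brightness; window
# size and k are tuning assumptions, not values from the paper.
t = threshold_sauvola(gray, window_size=25, k=0.2)
binary = gray > t            # True = background, False = QR modules

io.imsave("qr_binary.png", (binary * 255).astype(np.uint8))
```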

  13. Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model

    Science.gov (United States)

    Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato

    2018-02-01

    This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.

  14. Effect of packaging on physicochemical characteristics of irradiated pre-processed chicken

    International Nuclear Information System (INIS)

    Jiang Xiujie; Zhang Dongjie; Zhang Dequan; Li Shurong; Gao Meixu; Wang Zhidong

    2011-01-01

    To explore the effect of modified atmosphere packaging and antioxidants on the physicochemical characteristics of irradiated pre-processed chicken, antioxidants were first added to the pre-processed chicken, which was then packaged in conventional, vacuum and gas packaging, respectively, and finally irradiated at a dose of 5 kGy. All samples were stored at 4 ℃. The pH, TBA, TVB-N and colour deviation were evaluated after 0, 3, 7, 10, 14, 18 and 21 d of storage. The results showed that the pH value of the antioxidant-treated, vacuum-packaged pre-processed chicken increased with storage time, but not significantly among the different treatments. The TBA value also increased, but not significantly (P > 0.05), which indicated that vacuum packaging inhibited lipid oxidation. The TVB-N value increased with storage time; the TVB-N value of the vacuum-packaged samples reached 14.29 mg/100 g at 21 d of storage, which did not exceed the reference index for fresh meat. The a* value of the vacuum-packaged and oxygen-free-packaged pre-processed chicken increased significantly during storage (P < 0.05), and the chicken colour remained bright red after 21 d of storage with vacuum packaging. It is concluded that vacuum packaging of irradiated pre-processed chicken is effective in preserving its physical and chemical properties during storage. (authors)

  15. EARLINET Single Calculus Chain - technical - Part 1: Pre-processing of raw lidar data

    Science.gov (United States)

    D'Amico, Giuseppe; Amodeo, Aldo; Mattis, Ina; Freudenthaler, Volker; Pappalardo, Gelsomina

    2016-02-01

    In this paper we describe an automatic tool for the pre-processing of aerosol lidar data called ELPP (EARLINET Lidar Pre-Processor). It is one of two calculus modules of the EARLINET Single Calculus Chain (SCC), the automatic tool for the analysis of EARLINET data. ELPP is an open source module that executes instrumental corrections and data handling of the raw lidar signals, making the lidar data ready to be processed by the optical retrieval algorithms. According to the specific lidar configuration, ELPP automatically performs dead-time correction, atmospheric and electronic background subtraction, gluing of lidar signals, and trigger-delay correction. Moreover, the signal-to-noise ratio of the pre-processed signals can be improved by means of configurable time integration of the raw signals and/or spatial smoothing. ELPP delivers the statistical uncertainties of the final products by means of error propagation or Monte Carlo simulations. During the development of ELPP, particular attention has been paid to make the tool flexible enough to handle all lidar configurations currently used within the EARLINET community. Moreover, it has been designed in a modular way to allow an easy extension to lidar configurations not yet implemented. The primary goal of ELPP is to enable the application of quality-assured procedures in the lidar data analysis starting from the raw lidar data. This provides the added value of full traceability of each delivered lidar product. Several tests have been performed to check the proper functioning of ELPP. The whole SCC has been tested with the same synthetic data sets that were used for the EARLINET algorithm inter-comparison exercise. ELPP has been successfully employed for the automatic near-real-time pre-processing of the raw lidar data measured during several EARLINET inter-comparison campaigns as well as during intense field campaigns.
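
    As a flavour of the instrumental corrections ELPP automates, here is a hedged numpy sketch of two standard ones: a non-paralyzable dead-time correction and far-range background subtraction. The 4 ns dead time, the bin time and the 500 background bins are illustrative values, not ELPP defaults, and the input file is hypothetical.

```python
import numpy as np

def deadtime_correct(counts, bin_time_s, tau_s=4e-9):
    """Non-paralyzable dead-time correction for photon-counting lidar
    signals: n_true = n_meas / (1 - n_meas * tau). The 4 ns dead time
    is an illustrative value, not an ELPP default."""
    rate = counts / bin_time_s
    return rate / (1.0 - rate * tau_s) * bin_time_s

def subtract_background(profile, bg_bins=500):
    """Estimate atmospheric + electronic background from the far-range
    tail of the profile (assumed signal-free) and subtract it."""
    return profile - profile[-bg_bins:].mean()

raw = np.loadtxt("raw_lidar_profile.txt")     # hypothetical input
pre = subtract_background(deadtime_correct(raw, bin_time_s=1e-7))
```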

  16. Examination of Speed Contribution of Parallelization for Several Fingerprint Pre-Processing Algorithms

    Directory of Open Access Journals (Sweden)

    GORGUNOGLU, S.

    2014-05-01

    Full Text Available In the analysis of minutiae-based fingerprint systems, fingerprints need to be pre-processed. The pre-processing is carried out to enhance the quality of the fingerprint and to obtain more accurate minutiae points. Reducing the pre-processing time is important for identification and verification in real-time systems, and especially for databases holding information on large numbers of fingerprints. Parallel processing and parallel CPU computing can be considered as the distribution of processes over a multi-core processor, done by using parallel programming techniques. Reducing the execution time is the main objective in parallel processing. In this study, the pre-processing of a minutiae-based fingerprint system is implemented by parallel processing on multi-core computers using OpenMP, and on a graphics processor using CUDA, to improve the execution time. The execution times and speedup ratios are compared with those of a single-core processor. The results show that by using parallel processing, the execution time is substantially improved. The improvement ratios obtained for different pre-processing algorithms allowed us to make suggestions on the more suitable approaches for parallelization.
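
    The paper works with OpenMP and CUDA; as a language-neutral illustration of the same loop-splitting idea, the Python sketch below distributes strips of a fingerprint image across processes. The median filter is just a stand-in enhancement step, and strip-border effects of the filter are ignored for brevity.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np
from scipy import ndimage

def preprocess(block):
    """Toy stand-in for one fingerprint enhancement step."""
    return ndimage.median_filter(block, size=3)

def parallel_preprocess(image, n_workers=4):
    """Split the fingerprint into horizontal strips and enhance them
    in parallel -- the Python analogue of the OpenMP loop splitting
    used in the paper (which works in compiled code and CUDA)."""
    strips = np.array_split(image, n_workers, axis=0)
    with ProcessPoolExecutor(max_workers=n_workers) as ex:
        done = list(ex.map(preprocess, strips))
    return np.vstack(done)

if __name__ == "__main__":
    img = np.random.rand(512, 512)    # synthetic "fingerprint"
    out = parallel_preprocess(img)
```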

  17. Normalization: A Preprocessing Stage

    OpenAIRE

    Patro, S. Gopal Krishna; Sahu, Kishore Kumar

    2015-01-01

    As we know, normalization is a pre-processing stage for any type of problem statement. Normalization plays an especially important role in fields such as soft computing and cloud computing, for manipulating data, e.g. scaling the range of data down or up before it is used in a further stage. There are many normalization techniques, namely Min-Max normalization, Z-score normalization and Decimal scaling normalization. So by referring these normalization techniques we are ...
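
    The three techniques named above are easy to state precisely; a small numpy sketch with an illustrative toy vector follows.

```python
import numpy as np

def min_max(x, lo=0.0, hi=1.0):
    """Rescale x linearly into [lo, hi]."""
    x = np.asarray(x, dtype=float)
    return lo + (x - x.min()) * (hi - lo) / (x.max() - x.min())

def z_score(x):
    """Centre to zero mean, scale to unit standard deviation."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def decimal_scaling(x):
    """Divide by 10^j, the smallest power of ten with max|x|/10^j <= 1."""
    x = np.asarray(x, dtype=float)
    j = int(np.ceil(np.log10(np.abs(x).max())))
    return x / (10 ** j)

data = [200, -150, 995, 30]
print(min_max(data), z_score(data), decimal_scaling(data), sep="\n")
```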

  18. Effect of microaerobic fermentation in preprocessing fibrous lignocellulosic materials.

    Science.gov (United States)

    Alattar, Manar Arica; Green, Terrence R; Henry, Jordan; Gulca, Vitalie; Tizazu, Mikias; Bergstrom, Robby; Popa, Radu

    2012-06-01

    Amending soil with organic matter is common in agricultural and logging practices. Such amendments have benefits to soil fertility and crop yields. These benefits may be increased if the material is preprocessed before introduction into soil. We analyzed the efficiency of microaerobic fermentation (MF), also referred to as Bokashi, in preprocessing fibrous lignocellulosic (FLC) organic materials using varying produce amendments and leachate treatments. Adding produce amendments increased leachate production and fermentation rates and decreased the biological oxygen demand of the leachate. Continuously draining leachate without returning it to the fermentors led to acidification and decreased concentrations of polysaccharides (PS) in leachates. PS fragmentation and the production of soluble metabolites and gases stabilized in fermentors in about 2-4 weeks. About 2% of the carbon content was lost as CO2. PS degradation rates, upon introduction of processed materials into soil, were similar to unfermented FLC. Our results indicate that MF is insufficient for adequate preprocessing of FLC material.

  19. Performance of Pre-processing Schemes with Imperfect Channel State Information

    DEFF Research Database (Denmark)

    Christensen, Søren Skovgaard; Kyritsi, Persa; De Carvalho, Elisabeth

    2006-01-01

    Pre-processing techniques have several benefits when the CSI is perfect. In this work we investigate three linear pre-processing filters, assuming imperfect CSI caused by noise degradation and channel temporal variation. Results indicate that the LMMSE filter achieves the lowest BER and the highest SINR when the CSI is perfect, whereas the simple matched filter may be a good choice when the CSI is imperfect. Additionally, the results give insight into the inherent trade-off between robustness against CSI imperfections and spatial focusing ability.
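
    For concreteness, the numpy sketch below writes out two of the filters the record contrasts, an LMMSE (regularised zero-forcing) pre-processing filter and a matched filter, and applies filters designed on a noisy channel estimate to the true channel. The 4x4 Rayleigh channel, the noise variance of 0.1 and the 0.3 estimation-error level are illustrative assumptions, and the printed diagonal gain is only a rough proxy for spatial focusing.

```python
import numpy as np

def lmmse_filter(H, noise_var):
    """LMMSE pre-processing filter W = H^H (H H^H + sigma^2 I)^{-1}."""
    n_rx = H.shape[0]
    return H.conj().T @ np.linalg.inv(H @ H.conj().T + noise_var * np.eye(n_rx))

def matched_filter(H):
    """Simple matched filter W = H^H, more robust to CSI errors."""
    return H.conj().T

rng = np.random.default_rng(0)
H_true = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
H_est = H_true + 0.3 * (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

# Filters are designed on the imperfect estimate, applied over the true channel.
for name, W in [("lmmse", lmmse_filter(H_est, 0.1)), ("mf", matched_filter(H_est))]:
    eff = H_true @ W              # effective channel after pre-processing
    print(name, np.round(np.abs(np.diag(eff)).mean(), 3))
```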

  20. The 1989 ENDF pre-processing codes

    International Nuclear Information System (INIS)

    Cullen, D.E.; McLaughlin, P.K.

    1989-12-01

    This document summarizes the 1989 version of the ENDF pre-processing codes which are required for processing evaluated nuclear data coded in the format ENDF-4, ENDF-5, or ENDF-6. The codes are available from the IAEA Nuclear Data Section, free of charge upon request. (author)

  1. Parallelizing flow-accumulation calculations on graphics processing units—From iterative DEM preprocessing algorithm to recursive multiple-flow-direction algorithm

    Science.gov (United States)

    Qin, Cheng-Zhi; Zhan, Lijun

    2012-06-01

    As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in the hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using a compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction algorithm (SFD). However, the parallel implementation on a GPU of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculations has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different parallelization strategies using a GPU are explored. The first parallelization strategy, which has been used in the existing parallel SFD algorithm on GPU, has the problem of computing redundancy. Therefore, we designed a parallelization strategy based on graph theory. The application results show that the proposed parallel approach to calculate flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU implementations.
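
    To make the two-step computation concrete, below is a sequential Python reference for the second step, flow accumulation with a D8 single-flow-direction rule, on a DEM assumed to be depression-free (i.e. already preprocessed). The paper parallelises this kind of recursion on the GPU and uses an MFD rule, which splits flow among several downslope neighbours instead.

```python
import numpy as np

# 8 neighbour offsets for the D8 single-flow-direction model.
OFFS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_directions(dem):
    """For each cell, index of the steepest-descent neighbour, or -1 (sink)."""
    rows, cols = dem.shape
    direc = np.full((rows, cols), -1, dtype=int)
    for r in range(rows):
        for c in range(cols):
            drops = [dem[r, c] - dem[r + dr, c + dc]
                     if 0 <= r + dr < rows and 0 <= c + dc < cols else -np.inf
                     for dr, dc in OFFS]
            k = int(np.argmax(drops))
            if drops[k] > 0:
                direc[r, c] = k
    return direc

def flow_accumulation(dem):
    """Sequential reference version of the accumulation the paper
    parallelises on the GPU; assumes depressions were already removed
    by the preprocessing step, so elevation order is a valid schedule."""
    direc = d8_directions(dem)
    acc = np.ones(dem.shape)                    # every cell contributes itself
    order = np.argsort(dem, axis=None)[::-1]    # high to low elevation
    for idx in order:
        r, c = divmod(idx, dem.shape[1])
        k = direc[r, c]
        if k >= 0:
            acc[r + OFFS[k][0], c + OFFS[k][1]] += acc[r, c]
    return acc

dem = np.array([[5., 4., 3.], [4., 3., 2.], [3., 2., 1.]])
print(flow_accumulation(dem))   # lower-right sink accumulates all 9 cells
```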

  2. Data Acquisition and Preprocessing in Studies on Humans: What Is Not Taught in Statistics Classes?

    Science.gov (United States)

    Zhu, Yeyi; Hernandez, Ladia M; Mueller, Peter; Dong, Yongquan; Forman, Michele R

    2013-01-01

    The aim of this paper is to address issues in research that may be missing from statistics classes and important for (bio-)statistics students. In the context of a case study, we discuss data acquisition and preprocessing steps that fill the gap between research questions posed by subject matter scientists and statistical methodology for formal inference. Issues include participant recruitment, data collection training and standardization, variable coding, data review and verification, data cleaning and editing, and documentation. Despite the critical importance of these details in research, most of these issues are rarely discussed in an applied statistics program. One reason for the lack of more formal training is the difficulty in addressing the many challenges that can possibly arise in the course of a study in a systematic way. This article can help to bridge this gap between research questions and formal statistical inference by using an illustrative case study for a discussion. We hope that reading and discussing this paper and practicing data preprocessing exercises will sensitize statistics students to these important issues and achieve optimal conduct, quality control, analysis, and interpretation of a study.

  3. Comparison of multivariate preprocessing techniques as applied to electronic tongue based pattern classification for black tea

    International Nuclear Information System (INIS)

    Palit, Mousumi; Tudu, Bipan; Bhattacharyya, Nabarun; Dutta, Ankur; Dutta, Pallab Kumar; Jana, Arun; Bandyopadhyay, Rajib; Chatterjee, Anutosh

    2010-01-01

    In an electronic tongue, preprocessing of raw data precedes pattern analysis, and the choice of the appropriate preprocessing technique is crucial for the performance of the pattern classifier. While attempting to classify different grades of black tea using a voltammetric electronic tongue, different preprocessing techniques have been explored, and a comparison of their performances is presented in this paper. The preprocessing techniques are compared first by a quantitative measurement of separability followed by principal component analysis; then two different supervised pattern recognition models based on neural networks are used to evaluate the performance of the preprocessing techniques.

  4. Value of Distributed Preprocessing of Biomass Feedstocks to a Bioenergy Industry

    Energy Technology Data Exchange (ETDEWEB)

    Christopher T Wright

    2006-07-01

    Biomass preprocessing is one of the primary operations in the feedstock assembly system and the front end of a biorefinery. Its purpose is to chop, grind, or otherwise format the biomass into a suitable feedstock for conversion to ethanol and other bioproducts. Many variables, such as equipment cost and efficiency, and feedstock moisture content, particle size, bulk density, compressibility, and flowability, affect the location and implementation of this unit operation. Previous conceptual designs place this operation at the front end of the biorefinery. However, data are presented that show distributed preprocessing at the field side or in a fixed preprocessing facility can provide significant cost benefits by producing a higher-value feedstock with improved handling, transporting, and merchandising potential. In addition, data supporting the preferential deconstruction of feedstock materials due to their bio-composite structure identify the potential for significant improvements in equipment efficiency and compositional quality upgrades. These data were collected from full-scale low- and high-capacity hammermill grinders with various screen sizes. Multiple feedstock varieties with a range of moisture values were used in the preprocessing tests. The comparative values of the different grinding configurations, feedstock varieties, and moisture levels are assessed through post-grinding analysis of the different particle fractions, separated with a medium-scale forage particle separator and a Rototap separator. The results show that distributed preprocessing produces a material that has bulk flowable properties and fractionation benefits that can improve the ease of transporting, handling and conveying the material to the biorefinery and improve the biochemical and thermochemical conversion processes.

  5. Incremental Learning of Medical Data for Multi-Step Patient Health Classification

    DEFF Research Database (Denmark)

    Kranen, Philipp; Müller, Emmanuel; Assent, Ira

    2010-01-01

    of textile sensors, body sensors and preprocessing techniques as well as the integration and merging of sensor data in electronic health record systems. Emergency detection on multiple levels will show the benefits of multi-step classification and further enhance the scalability of emergency detection...

  6. Pre-processing for Triangulation of Probabilistic Networks

    NARCIS (Netherlands)

    Bodlaender, H.L.; Koster, A.M.C.A.; Eijkhof, F. van den; Gaag, L.C. van der

    2001-01-01

    The currently most efficient algorithm for inference with a probabilistic network builds upon a triangulation of the network's graph. In this paper, we show that pre-processing can help in finding good triangulations for probabilistic networks, that is, triangulations with a minimal maximum clique size.

  7. Comparative performance evaluation of transform coding in image pre-processing

    Science.gov (United States)

    Menon, Vignesh V.; NB, Harikrishnan; Narayanan, Gayathri; CK, Niveditha

    2017-07-01

    We are in the midst of a communication transformation which drives the development, as well as the dissemination, of pioneering communication systems with ever-increasing fidelity and resolution. Research in image processing techniques has been driven by a growing demand for faster and easier encoding, storage and transmission of visual information. In this paper, the researchers intend to throw light on techniques which can be used at the transmitter end to ease the transmission and reconstruction of images. The researchers investigate the performance of different image transform coding schemes used in pre-processing, comparing their effectiveness, the necessary and sufficient conditions, their properties and their implementation complexity. Motivated by prior advancements in image processing techniques, the researchers compare various contemporary image pre-processing frameworks, Compressed Sensing, Singular Value Decomposition and the Integer Wavelet Transform, on performance. The paper exposes the potential of the Integer Wavelet Transform to be an efficient pre-processing scheme.
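
    As an illustration of transform-coding-style pre-processing, the sketch below uses a plain SVD to keep only the dominant components of an image before transmission. It is a generic low-rank example, not the authors' benchmark code, and the 64x64 random "image" and k=8 are arbitrary stand-ins.

```python
import numpy as np

def svd_compress(img, k):
    """Keep only the k largest singular values -- transform-coding-style
    pre-processing that discards low-energy components before transmission."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

img = np.random.rand(64, 64)          # stand-in for a grayscale image
approx = svd_compress(img, k=8)
err = np.linalg.norm(img - approx) / np.linalg.norm(img)
print(f"relative reconstruction error with k=8: {err:.3f}")
```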

  8. ASAP: an environment for automated preprocessing of sequencing data

    Directory of Open Access Journals (Sweden)

    Torstenson Eric S

    2013-01-01

    Full Text Available Abstract Background Next-generation sequencing (NGS has yielded an unprecedented amount of data for genetics research. It is a daunting task to process the data from raw sequence reads to variant calls and manually processing this data can significantly delay downstream analysis and increase the possibility for human error. The research community has produced tools to properly prepare sequence data for analysis and established guidelines on how to apply those tools to achieve the best results, however, existing pipeline programs to automate the process through its entirety are either inaccessible to investigators, or web-based and require a certain amount of administrative expertise to set up. Findings Advanced Sequence Automated Pipeline (ASAP was developed to provide a framework for automating the translation of sequencing data into annotated variant calls with the goal of minimizing user involvement without the need for dedicated hardware or administrative rights. ASAP works both on computer clusters and on standalone machines with minimal human involvement and maintains high data integrity, while allowing complete control over the configuration of its component programs. It offers an easy-to-use interface for submitting and tracking jobs as well as resuming failed jobs. It also provides tools for quality checking and for dividing jobs into pieces for maximum throughput. Conclusions ASAP provides an environment for building an automated pipeline for NGS data preprocessing. This environment is flexible for use and future development. It is freely available at http://biostat.mc.vanderbilt.edu/ASAP.

  11. Classification-based comparison of pre-processing methods for interpretation of mass spectrometry generated clinical datasets

    Directory of Open Access Journals (Sweden)

    Hoefsloot Huub CJ

    2009-05-01

    Full Text Available Abstract Background Mass spectrometry is increasingly being used to discover proteins or protein profiles associated with disease. Experimental design of mass-spectrometry studies has come under close scrutiny and the importance of strict protocols for sample collection is now understood. However, the question of how best to process the large quantities of data generated is still unanswered. Main challenges for the analysis are the choice of proper pre-processing and classification methods. While these two issues have been investigated in isolation, we propose to use the classification of patient samples as a clinically relevant benchmark for the evaluation of pre-processing methods. Results Two in-house generated clinical SELDI-TOF MS datasets are used in this study as an example of high throughput mass-spectrometry data. We perform a systematic comparison of two commonly used pre-processing methods as implemented in Ciphergen ProteinChip Software and in the Cromwell package. With respect to reproducibility, Ciphergen and Cromwell pre-processing are largely comparable. We find that the overlap between peaks detected by either Ciphergen ProteinChip Software or Cromwell is large. This is especially the case for the more stringent peak detection settings. Moreover, similarity of the estimated intensities between matched peaks is high. We evaluate the pre-processing methods using five different classification methods. Classification is done in a double cross-validation protocol using repeated random sampling to obtain an unbiased estimate of classification accuracy. No pre-processing method significantly outperforms the other for all peak detection settings evaluated. Conclusion We use classification of patient samples as a clinically relevant benchmark for the evaluation of pre-processing methods. Both pre-processing methods lead to similar classification results on an ovarian cancer and a Gaucher disease dataset. However, the settings for pre-processing
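
    A minimal sketch of the double (nested) cross-validation protocol described above, using scikit-learn on synthetic data: the inner loop tunes a classifier while the outer repeated random sampling estimates accuracy on held-out samples. The SVM, the C-grid and the split sizes are placeholders, not the paper's five classifiers.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score, ShuffleSplit
from sklearn.svm import SVC

X, y = make_classification(n_samples=120, n_features=50, random_state=0)

# Inner loop tunes the classifier; the outer loop (repeated random
# sampling) estimates accuracy on data never used for tuning.
inner = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=3)
outer = ShuffleSplit(n_splits=20, test_size=0.3, random_state=0)
scores = cross_val_score(inner, X, y, cv=outer)
print(f"unbiased accuracy estimate: {scores.mean():.2f} +/- {scores.std():.2f}")
```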

  12. Impact of data transformation and preprocessing in supervised ...

    African Journals Online (AJOL)

    Impact of data transformation and preprocessing in supervised learning ... Nowadays, the ideas of integrating machine learning techniques in power system has ... The proposed algorithm used Python-based split train and k-fold model ...

  13. Combined principal component preprocessing and n-tuple neural networks for improved classification

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar; Linneberg, Christian

    2000-01-01

    We present a combined principal component analysis/neural network scheme for classification. The data used to illustrate the method consist of spectral fluorescence recordings from seven different production facilities, and the task is to relate an unknown sample to one of these seven factories. The data are first preprocessed by performing an individual principal component analysis on each of the seven groups of data. The components found are then used for classifying the data, but instead of making a single multiclass classifier, we follow the ideas of turning a multiclass problem into a number of two-class problems. For each possible pair of classes we further apply a transformation to the calculated principal components in order to increase the separation between the classes. Finally we apply the so-called n-tuple neural network to the transformed data in order to give the classification.

  14. Thinning: A Preprocessing Technique for an OCR System for the Brahmi Script

    Directory of Open Access Journals (Sweden)

    H. K. Anasuya Devi

    2006-12-01

    Full Text Available In this paper we study the methodology employed for preprocessing archaeological images. We present the various algorithms used in the low-level processing stage of image analysis for an Optical Character Recognition System for the Brahmi script. The image preprocessing techniques covered in this paper include the thinning method. We also analyze the results obtained by the pixel-level processing algorithms.
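
    A minimal scikit-image sketch of the thinning step: skeletonizing a binary glyph to a one-pixel-wide medial line. The toy stroke stands in for a binarized Brahmi character, and the paper's own thinning algorithm may differ from scikit-image's.

```python
import numpy as np
from skimage.morphology import skeletonize

# Toy binary glyph (True = ink); a real OCR front end would binarize a
# scanned Brahmi inscription first.
glyph = np.zeros((9, 9), dtype=bool)
glyph[2:7, 3:6] = True                 # a thick vertical stroke

skeleton = skeletonize(glyph)          # thin to a 1-pixel-wide medial line
print(skeleton.astype(int))
```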

  15. Boosting reversible pushdown machines by preprocessing

    DEFF Research Database (Denmark)

    Axelsen, Holger Bock; Kutrib, Martin; Malcher, Andreas

    2016-01-01

    languages, whereas for reversible pushdown automata the accepted family of languages lies strictly in between the reversible deterministic context-free languages and the real-time deterministic context-free languages. Moreover, it is shown that the computational power of both types of machines is not changed by allowing the preprocessing sequential transducer to work irreversibly. Finally, we examine the closure properties of the family of languages accepted by such machines.

  16. A validated pipeline for detection of SNVs and short InDels from RNA Sequencing

    Directory of Open Access Journals (Sweden)

    Nitin Mandloi

    2017-12-01

    In this study, we have developed a pipeline to detect germline variants from RNA-seq data. The pipeline steps include pre-processing, alignment, GATK best practices for RNA-seq, and variant filtering. The pre-processing step includes base and adapter trimming and the removal of contaminating reads from rRNA, tRNA, mitochondrial DNA and repeat regions. Alignment of the pre-processed reads is performed using STAR/HiSAT. After this, we used GATK best practices for the RNA-seq dataset to call germline variants. We benchmarked our pipeline on NA12878 RNA-seq data downloaded from SRA (SRR1258218). After variant calling, the quality-passed variants were compared against the gold-standard variants provided by the GIAB consortium. Of the ~3.6 million high-quality variants reported as gold-standard variants for this sample (considering the whole genome), our pipeline identified ~58,104 variants as expressed in the RNA-seq data. Our pipeline achieved more than 99% sensitivity in the detection of germline variants.

  17. Pre-processing by data augmentation for improved ellipse fitting.

    Science.gov (United States)

    Kumar, Pankaj; Belchamber, Erika R; Miklavcic, Stanley J

    2018-01-01

    Ellipse fitting is a highly researched and mature topic. Surprisingly, however, no existing method has thus far considered the data point eccentricity in its ellipse fitting procedure. Here, we introduce the concept of eccentricity of a data point, in analogy with the idea of ellipse eccentricity. We then show empirically that, irrespective of ellipse fitting method used, the root mean square error (RMSE) of a fit increases with the eccentricity of the data point set. The main contribution of the paper is based on the hypothesis that if the data point set were pre-processed to strategically add additional data points in regions of high eccentricity, then the quality of a fit could be improved. Conditional validity of this hypothesis is demonstrated mathematically using a model scenario. Based on this confirmation we propose an algorithm that pre-processes the data so that data points with high eccentricity are replicated. The improvement of ellipse fitting is then demonstrated empirically in real-world application of 3D reconstruction of a plant root system for phenotypic analysis. The degree of improvement for different underlying ellipse fitting methods as a function of data noise level is also analysed. We show that almost every method tested, irrespective of whether it minimizes algebraic error or geometric error, shows improvement in the fit following data augmentation using the proposed pre-processing algorithm.
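
    The following numpy sketch illustrates the augment-then-fit idea: an algebraic conic fit via SVD, preceded by replication of selected data points. Note that the replication criterion used here (distance from the data centroid) is only a crude placeholder for the paper's notion of data-point eccentricity, and the partial noisy arc is synthetic.

```python
import numpy as np

def fit_conic(x, y):
    """Algebraic conic fit: smallest-singular-vector solution of
    D a = 0 with D = [x^2, xy, y^2, x, y, 1]."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]                      # conic coefficients a..f

def augment(x, y, reps=3):
    """Replicate points far from the data centroid (a crude proxy for
    the paper's data-point eccentricity) before fitting."""
    r = np.hypot(x - x.mean(), y - y.mean())
    hot = r > np.percentile(r, 75)
    return (np.concatenate([x, np.repeat(x[hot], reps)]),
            np.concatenate([y, np.repeat(y[hot], reps)]))

t = np.linspace(0, 1.2, 40)            # points on only part of an ellipse
x = 3 * np.cos(t) + 0.01 * np.random.randn(40)
y = np.sin(t)
xa, ya = augment(x, y)
print(fit_conic(xa, ya))
```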

  18. A Stereo Music Preprocessing Scheme for Cochlear Implant Users.

    Science.gov (United States)

    Buyens, Wim; van Dijk, Bas; Wouters, Jan; Moonen, Marc

    2015-10-01

    Listening to music is still one of the more challenging aspects of using a cochlear implant (CI) for most users. Simple musical structures, a clear rhythm/beat, and lyrics that are easy to follow are among the top factors contributing to music appreciation for CI users. Modifying the audio mix of complex music potentially improves music enjoyment in CI users. A stereo music preprocessing scheme is described in which vocals, drums, and bass are emphasized based on the representation of the harmonic and the percussive components in the input spectrogram, combined with the spatial allocation of instruments in typical stereo recordings. The scheme is assessed with postlingually deafened CI subjects (N = 7) using pop/rock music excerpts with different complexity levels. The scheme is capable of modifying relative instrument level settings, with the aim of improving music appreciation in CI users, and allows individual preference adjustments. The assessment with CI subjects confirms the preference for more emphasis on vocals, drums, and bass as offered by the preprocessing scheme, especially for songs with higher complexity. The stereo music preprocessing scheme has the potential to improve music enjoyment in CI users by modifying the audio mix in widespread (stereo) music recordings. Since music enjoyment in CI users is generally poor, this scheme can assist the music listening experience of CI users as a training or rehabilitation tool.
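
    A minimal sketch of the kind of decomposition such a scheme builds on, using librosa's harmonic/percussive source separation. The file name, the mono simplification and the fixed drum boost are assumptions; the published scheme additionally exploits stereo panning and per-user preference settings.

```python
import librosa
import soundfile as sf

# Any pop/rock excerpt; the path is a placeholder.
y, sr = librosa.load("song.wav", mono=True)

# Harmonic/percussive separation, the decomposition underlying the
# scheme's emphasis of vocals/bass (harmonic) and drums (percussive).
harmonic, percussive = librosa.effects.hpss(y)

# A crude remix that boosts drums relative to the rest; the actual
# scheme also exploits stereo panning, which is omitted here.
remix = harmonic + 2.0 * percussive
sf.write("remix.wav", remix, sr)
```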

  19. Detailed Investigation and Comparison of the XCMS and MZmine 2 Chromatogram Construction and Chromatographic Peak Detection Methods for Preprocessing Mass Spectrometry Metabolomics Data.

    Science.gov (United States)

    Myers, Owen D; Sumner, Susan J; Li, Shuzhao; Barnes, Stephen; Du, Xiuxia

    2017-09-05

    XCMS and MZmine 2 are two widely used software packages for preprocessing untargeted LC/MS metabolomics data. Both construct extracted ion chromatograms (EICs) and detect peaks from the EICs, the first two steps in the data preprocessing workflow. While both packages have performed admirably in peak picking, they also detect a problematic number of false positive EIC peaks and can also fail to detect real EIC peaks. The former and latter translate downstream into spurious and missing compounds and present significant limitations with most existing software packages that preprocess untargeted mass spectrometry metabolomics data. We seek to understand the specific reasons why XCMS and MZmine 2 find the false positive EIC peaks that they do and in what ways they fail to detect real compounds. We investigate differences of EIC construction methods in XCMS and MZmine 2 and find several problems in the XCMS centWave peak detection algorithm which we show are partly responsible for the false positive and false negative compound identifications. In addition, we find a problem with MZmine 2's use of centWave. We hope that a detailed understanding of the XCMS and MZmine 2 algorithms will allow users to work with them more effectively and will also help with future algorithmic development.

  20. A new approach to pre-processing digital image for wavelet-based watermark

    Science.gov (United States)

    Agreste, Santa; Andaloro, Guido

    2008-11-01

    The growth of the Internet has increased the phenomenon of digital piracy of multimedia objects such as software, images, video, audio and text. It is therefore strategic to develop methods and numerical algorithms, stable and of low computational cost, that provide a solution to these problems. We describe a digital watermarking algorithm for colour image protection and authenticity: robust, non-blind, and wavelet-based. The use of the Discrete Wavelet Transform is motivated by its good time-frequency features and good match with Human Visual System directives. These two combined elements are important for building an invisible and robust watermark. Moreover, our algorithm can work with any image, thanks to an image pre-processing step that includes resize techniques adapting the original image size to the wavelet transform. The watermark signal is calculated in correlation with the image features and statistical properties. In the detection step we apply a re-synchronization between the original and watermarked image according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the method to be resistant against geometric, filtering, and StirMark attacks, with a low false-alarm rate.
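
    A bare-bones PyWavelets sketch of wavelet-domain embedding, including the resize/padding concern the pre-processing step addresses. The Haar basis, the additive rule, alpha and the random mark are assumptions; the published algorithm correlates the mark with image features and HVS properties instead.

```python
import numpy as np
import pywt

def embed_watermark(img, mark, alpha=0.05):
    """Additively embed a watermark in the level-1 detail coefficients.
    A bare-bones illustration of wavelet-domain embedding, not the
    paper's correlation-based, HVS-tuned algorithm."""
    # Pad so both dimensions are even, as the DWT halves each axis.
    rows, cols = (s + s % 2 for s in img.shape)
    padded = np.zeros((rows, cols))
    padded[:img.shape[0], :img.shape[1]] = img

    cA, (cH, cV, cD) = pywt.dwt2(padded, "haar")
    cH += alpha * mark[:cH.shape[0], :cH.shape[1]]
    return pywt.idwt2((cA, (cH, cV, cD)), "haar")[:img.shape[0], :img.shape[1]]

img = np.random.rand(127, 127)                # odd size exercises the padding
mark = np.sign(np.random.randn(64, 64))       # pseudo-random +/-1 watermark
watermarked = embed_watermark(img, mark)
```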

  1. Data preprocessing methods of FT-NIR spectral data for the classification cooking oil

    Science.gov (United States)

    Ruah, Mas Ezatul Nadia Mohd; Rasaruddin, Nor Fazila; Fong, Sim Siong; Jaafar, Mohd Zuli

    2014-12-01

    This recent work describes data pre-processing methods for FT-NIR spectroscopy datasets of cooking oil and its quality parameters using chemometric methods. Pre-processing of near-infrared (NIR) spectral data has become an integral part of chemometric modelling. Hence, this work investigates the utility and effectiveness of pre-processing algorithms, namely row scaling, column scaling and single-scaling processes with Standard Normal Variate (SNV). The combinations of these scaling methods have an impact on exploratory analysis and classification via principal component analysis (PCA) plots. The samples were divided into palm oil and non-palm cooking oil. The classification model was built using FT-NIR cooking oil spectra datasets in absorbance mode over the range 4000 cm-1 to 14000 cm-1. A Savitzky-Golay derivative was applied before developing the classification model. Then, the data were separated into a training set and a test set using the Duplex method; the number in each class was kept equal to 2/3 of the class with the minimum number of samples. The t-statistic was then employed as a variable selection method, in order to select which variables are significant for the classification models. The data pre-processing was evaluated using the modified silhouette width (mSW), PCA plots and the percentage correctly classified (%CC). The results show that different data pre-processing strategies result in substantial differences in model performance, as indicated by mSW and %CC for row scaling, column standardisation and single scaling with SNV. With a two-PC model, all five classifiers gave a high %CC except Quadratic Distance Analysis.
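
    A short scipy/numpy sketch of two of the named pre-processing operations, SNV and a Savitzky-Golay derivative, applied row-wise to a batch of spectra. The window length, polynomial order and the synthetic spectra are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import savgol_filter

def snv(spectra):
    """Standard Normal Variate: centre and scale each spectrum (row)."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

# Toy batch of 10 NIR spectra sampled at 500 wavenumbers.
spectra = np.random.rand(10, 500).cumsum(axis=1)      # smooth-ish curves

pre = snv(spectra)
# First-derivative Savitzky-Golay (window and order are assumptions).
pre = savgol_filter(pre, window_length=11, polyorder=2, deriv=1, axis=1)
```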

  2. Optimal production scheduling for energy efficiency improvement in biofuel feedstock preprocessing considering work-in-process particle separation

    International Nuclear Information System (INIS)

    Li, Lin; Sun, Zeyi; Yao, Xufeng; Wang, Donghai

    2016-01-01

    Biofuel is considered a promising alternative to traditional liquid transportation fuels. The large-scale substitution of biofuel can greatly enhance global energy security and mitigate greenhouse gas emissions. One major concern with the broad adoption of biofuel is the intensive energy consumption of biofuel manufacturing. This paper focuses on the energy efficiency improvement of biofuel feedstock preprocessing, a major process of cellulosic biofuel manufacturing. An improved scheme for the feedstock preprocessing, considering work-in-process particle separation, is introduced to reduce energy waste and improve energy efficiency. A scheduling model based on the improved scheme is also developed to identify an optimal production schedule that can minimize the energy consumption of the feedstock preprocessing under a production target constraint. A numerical case study is used to illustrate the effectiveness of the proposed method. The research outcome is expected to improve the energy efficiency and enhance the environmental sustainability of biomass feedstock preprocessing. - Highlights: • A novel method to schedule production in the biofuel feedstock preprocessing process. • A systems modeling approach is used. • Capable of optimizing preprocessing to reduce energy waste and improve energy efficiency. • A numerical case is used to illustrate the effectiveness of the method. • Energy consumption per unit production can be significantly reduced.

  3. Reproducible cancer biomarker discovery in SELDI-TOF MS using different pre-processing algorithms.

    Directory of Open Access Journals (Sweden)

    Jinfeng Zou

    Full Text Available BACKGROUND: There has been much interest in differentiating diseased and normal samples using biomarkers derived from mass spectrometry (MS) studies. However, biomarker identification for specific diseases has been hindered by irreproducibility. Specifically, a peak profile extracted from a dataset for biomarker identification depends on a data pre-processing algorithm. Until now, no widely accepted agreement has been reached. RESULTS: In this paper, we investigated the consistency of biomarker identification using differentially expressed (DE) peaks from peak profiles produced by three widely used average-spectrum-dependent pre-processing algorithms based on SELDI-TOF MS data for prostate and breast cancers. Our results revealed two important factors that affect the consistency of DE peak identification using different algorithms. One factor is that some DE peaks selected from one peak profile were not detected as peaks in other profiles, and the second factor is that the statistical power of identifying DE peaks in large peak profiles with many peaks may be low due to the large scale of the tests and the small number of samples. Furthermore, we demonstrated that the DE peak detection power in large profiles could be improved by the stratified false discovery rate (FDR) control approach and that the reproducibility of DE peak detection could thereby be increased. CONCLUSIONS: Comparing and evaluating pre-processing algorithms in terms of reproducibility can elucidate the relationship among different algorithms and also help in selecting a pre-processing algorithm. The DE peaks selected from small peak profiles with few peaks for a dataset tend to be reproducibly detected in large peak profiles, which suggests that a suitable pre-processing algorithm should be able to produce peaks sufficient for identifying useful and reproducible biomarkers.

  4. Parallel pipeline algorithm of real time star map preprocessing

    Science.gov (United States)

    Wang, Hai-yong; Qin, Tian-mu; Liu, Jia-qi; Li, Zhi-feng; Li, Jian-hua

    2016-03-01

To improve the preprocessing speed of star maps and reduce the resource consumption of the embedded system of a star tracker, a parallel pipeline real-time preprocessing algorithm is presented. Two characteristics, the mean and the standard deviation of the background gray level of a star map, are first obtained dynamically, with the interference of the star images themselves on the background estimate removed in advance. A criterion for whether subsequent noise filtering is needed is established, and the extraction threshold is then assigned according to the level of background noise, so that centroiding accuracy is guaranteed. In the processing algorithm, as few as two lines of pixel data are buffered, and only 100 shift registers are used to record connected-domain labels, which solves the problems of wasted resources and connected-domain overflow. Simulation results show that the necessary data for the selected bright stars can be accessed within a delay as short as 10 µs after the pipeline processing of a 496×496 star map at 50 Mb/s is finished, and the memory and register resources needed total less than 80 kb. To verify the accuracy of the proposed algorithm, different levels of background noise were added to the processed ideal star map; the statistical centroiding error is smaller than 1/23 pixel when the signal-to-noise ratio is greater than 1. The parallel pipeline algorithm for real-time star map preprocessing helps to increase the data output speed and the anti-dynamic performance of a star tracker.
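
A rough serial sketch (not the paper's pipeline implementation) of the thresholding logic described above: estimate background statistics with the star pixels rejected, set an extraction threshold from the noise level, and centroid the connected bright regions. The clipping constants are illustrative:

```python
# Background estimation by sigma clipping, then threshold-and-centroid.
import numpy as np
from scipy import ndimage

def extract_stars(img, k=5.0, clip=3.0, iters=3):
    bg = img.ravel().astype(float)
    for _ in range(iters):                      # iteratively reject star pixels
        mu, sigma = bg.mean(), bg.std()
        bg = bg[np.abs(bg - mu) < clip * sigma]
    thr = mu + k * sigma                        # extraction threshold from noise level
    labels, n = ndimage.label(img > thr)        # connected bright regions
    return ndimage.center_of_mass(img, labels, range(1, n + 1))

img = np.random.poisson(20, (496, 496)).astype(float)
img[100:103, 200:203] += 400.0                  # one synthetic star
print(extract_stars(img))
```

Raising `k` trades missed faint stars against fewer false detections; the paper's contribution is doing this on streaming pixel lines with bounded hardware resources, which this sketch does not attempt.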

  5. Summary of ENDF/B pre-processing codes

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1981-12-01

This document contains the summary documentation for the ENDF/B pre-processing codes: LINEAR, RECENT, SIGMA1, GROUPIE, EVALPLOT, MERGER, DICTION, CONVERT. This summary documentation is merely a copy of the comment cards that appear at the beginning of each programme; these comment cards always reflect the latest status of input options, etc. For the latest published documentation on the methods used in these codes, see UCRL-50400, Vol.17, parts A-E, Lawrence Livermore Laboratory (1979).

  6. Parallel finite elements with domain decomposition and its pre-processing

    International Nuclear Information System (INIS)

    Yoshida, A.; Yagawa, G.; Hamada, S.

    1993-01-01

This paper describes a parallel finite element analysis using a domain decomposition method, and the pre-processing for the parallel calculation. Computer simulations are increasingly replacing experiments in various fields, and the scale of the models to be simulated tends to be extremely large. On the other hand, the computational environment has changed drastically in recent years; in particular, parallel processing on massively parallel computers or computer networks is considered a promising technique. In order to achieve high efficiency in such a parallel computing environment, coarse task granularity and a well-balanced workload distribution are key issues. It is also important to reduce the cost of pre-processing in such parallel FEM. From this point of view, the authors developed a domain decomposition FEM with an automatic and dynamic task-allocation mechanism and an automatic mesh generation/domain subdivision system. (author)

  7. ITSG-Grace2016 data preprocessing methodologies revisited: impact of using Level-1A data products

    Science.gov (United States)

    Klinger, Beate; Mayer-Gürr, Torsten

    2017-04-01

For the ITSG-Grace2016 release, the gravity field recovery is based on the official GRACE (Gravity Recovery and Climate Experiment) Level-1B data products generated by the Jet Propulsion Laboratory (JPL). Before gravity field recovery, the Level-1B instrument data are preprocessed. This preprocessing step includes the combination of Level-1B star camera (SCA1B) and angular acceleration (ACC1B) data for improved attitude determination (sensor fusion), instrument data screening, and ACC1B data calibration. Based on a Level-1A test dataset, provided for individual months throughout the GRACE period by the Center for Space Research at the University of Texas at Austin (UTCSR), the impact of using Level-1A instead of Level-1B data products within the ITSG-Grace2016 processing chain is analyzed. We discuss (1) attitude determination through an optimal combination of SCA1A and ACC1A data using our sensor fusion approach, (2) the impact of the new attitude product on temporal gravity field solutions, and (3) possible benefits of using Level-1A data for instrument data screening and calibration. As the GRACE mission is currently reaching its end of life, the presented work aims not only at a better understanding of GRACE science data, to reduce the impact of possible error sources on the gravity field recovery, but also at preparing Level-1A data handling capabilities for the GRACE Follow-On mission.

  8. A base composition analysis of natural patterns for the preprocessing of metagenome sequences.

    Science.gov (United States)

    Bonham-Carter, Oliver; Ali, Hesham; Bastola, Dhundy

    2013-01-01

On the premise that sequence reads and contigs often exhibit the same kinds of base usage observed in the sequences from which they are derived, we offer a base composition analysis tool. Our tool uses these natural patterns to determine relatedness across sequence data. We introduce spectrum sets (sets of motifs), which are permutations of bacterial restriction sites, and a base composition analysis framework to measure their proportional content in sequence data. We suggest that this framework will increase efficiency during the pre-processing stages of metagenome sequencing and assembly projects. Our method is able to differentiate organisms and their reads or contigs. The framework shows how to determine the relatedness between these reads or contigs by comparison of base composition. In particular, we show that two types of organismal sequence data are fundamentally different by analyzing their spectrum set motif proportions (coverage). By applying one of the four possible spectrum sets, encompassing all known restriction sites, we provide evidence that each set has a different ability to differentiate sequence data. Furthermore, we show that selecting a spectrum set relevant to one organism, but not to the others in the data set, greatly improves sequence differentiation even when the read, contig, or sequence fragments are short. We demonstrate proof of concept by applying our method to ten trials of two or three freshly selected sequence fragments (reads and contigs) for each experiment across the six organisms of our set. Here we describe a novel and computationally effective pre-processing step for metagenome sequencing and assembly tasks. Furthermore, our base composition method has applications in phylogeny, where it can be used to infer evolutionary distances between organisms based on the notion that related organisms often share much conserved code.
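
A simplified sketch of the spectrum-set idea: measure what proportion of a sequence is covered by a set of motifs. The three restriction sites below (EcoRI, BamHI, HindIII) are standard examples, not necessarily the paper's actual spectrum sets:

```python
# Proportion of sequence positions covered by any motif occurrence.
import re

def motif_coverage(seq, motifs):
    covered = [False] * len(seq)
    for m in motifs:
        # Lookahead finds overlapping occurrences as well.
        for match in re.finditer(f"(?={re.escape(m)})", seq):
            for i in range(match.start(), match.start() + len(m)):
                covered[i] = True
    return sum(covered) / len(seq)

motifs = ["GAATTC", "GGATCC", "AAGCTT"]   # EcoRI, BamHI, HindIII sites
print(motif_coverage("ATGAATTCGGATCCAAGCTTAT", motifs))
```

Comparing such coverage values between reads, contigs, and reference sequences is the kind of relatedness signal the abstract describes.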

  9. Evaluation of a Stereo Music Preprocessing Scheme for Cochlear Implant Users.

    Science.gov (United States)

    Buyens, Wim; van Dijk, Bas; Moonen, Marc; Wouters, Jan

    2018-01-01

    Although for most cochlear implant (CI) users good speech understanding is reached (at least in quiet environments), the perception and the appraisal of music are generally unsatisfactory. The improvement in music appraisal was evaluated in CI participants by using a stereo music preprocessing scheme implemented on a take-home device, in a comfortable listening environment. The preprocessing allowed adjusting the balance among vocals/bass/drums and other instruments, and was evaluated for different genres of music. The correlation between the preferred settings and the participants' speech and pitch detection performance was investigated. During the initial visit preceding the take-home test, the participants' speech-in-noise perception and pitch detection performance were measured, and a questionnaire about their music involvement was completed. The take-home device was provided, including the stereo music preprocessing scheme and seven playlists with six songs each. The participants were asked to adjust the balance by means of a turning wheel to make the music sound most enjoyable, and to repeat this three times for all songs. Twelve postlingually deafened CI users participated in the study. The data were collected by means of a take-home device, which preserved all the preferred settings for the different songs. Statistical analysis was done with a Friedman test (with post hoc Wilcoxon signed-rank test) to check the effect of "Genre." The correlations were investigated with Pearson's and Spearman's correlation coefficients. All participants preferred a balance significantly different from the original balance. Differences across participants were observed which could not be explained by perceptual abilities. An effect of "Genre" was found, showing significantly smaller preferred deviation from the original balance for Golden Oldies compared to the other genres. The stereo music preprocessing scheme showed an improvement in music appraisal with complex music and

  10. Summary of ENDF/B Pre-Processing Codes June 1983

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1983-06-01

This is the summary documentation for the 1983 version of the ENDF/B Pre-Processing Codes LINEAR, RECENT, SIGMA1, GROUPIE, EVALPLOT, MERGER, DICTION, COMPLOT, CONVERT. This summary documentation is merely a copy of the comment cards that appear at the beginning of each programme; these comment cards always reflect the latest status of input options, etc.

  11. The bounded proof property via step algebras and step frames

    NARCIS (Netherlands)

    Bezhanishvili, N.; Ghilardi, Silvio

    2013-01-01

    We develop a semantic criterion for a specific rule-based calculus Ax axiomatizing a given logic L to have the so-called bounded proof property. This property is a kind of an analytic subformula property limiting the proof search space. Our main tools are one-step frames and one-step algebras. These

  12. Action research: A practical step-by-step guide for Agricultural ...

    African Journals Online (AJOL)

Based on the findings, extensionists will be able to identify the action required to improve upon the existing situation. This calls for knowledge and skills in action-oriented research. This paper provides simple, easy-to-follow, step-by-step guidelines which should be suitable for many situations in extension research ...

  13. A First Step in Learning Analytics: Pre-Processing Low-Level Alice Logging Data of Middle School Students

    Science.gov (United States)

    Werner, Linda; McDowell, Charlie; Denner, Jill

    2013-01-01

    Educational data mining can miss or misidentify key findings about student learning without a transparent process of analyzing the data. This paper describes the first steps in the process of using low-level logging data to understand how middle school students used Alice, an initial programming environment. We describe the steps that were…

  14. Voice preprocessing system incorporating a real-time spectrum analyzer with programmable switched-capacitor filters

    Science.gov (United States)

    Knapp, G.

    1984-01-01

As part of a speaker verification program for BISS (Base Installation Security System), a test system is being designed with a flexible preprocessing system for the evaluation of voice spectrum/verification algorithm related problems. The main part of this report covers the design, construction, and testing of a voice analyzer with 16 integrating real-time frequency channels ranging from 300 Hz to 3 kHz. The bandpass filter response of each channel is programmable by NMOS switched-capacitor quad filter arrays. Presently, the accuracy of these units is limited to moderate precision by the finite programming steps. However, the repeatability of characteristics between filter units and sections seems to be excellent for the implemented fourth-order Butterworth bandpass responses. We obtained a 0.1 dB linearity error of signal detection and measured a signal-to-noise ratio of approximately 70 dB. The preprocessing system discussed includes the preemphasis filter design, gain normalizer design, and data acquisition system design, as well as test results.

  15. Poisson pre-processing of nonstationary photonic signals: Signals with equality between mean and variance.

    Science.gov (United States)

    Poplová, Michaela; Sovka, Pavel; Cifra, Michal

    2017-01-01

Photonic signals are broadly exploited in communication and sensing, and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical of a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with a Poisson distribution is developed and tested on computer-generated model data and on experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and the phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to the variance of the signal.
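
As a small illustration of the Fano factor analysis mentioned above (the variance-to-mean ratio of windowed counts, which stays close to 1 for a stationary Poisson process), computed here on synthetic photocounts:

```python
# Windowed Fano factor: var/mean of counts summed over windows of a given size.
import numpy as np

def fano_factor(counts, window):
    n = len(counts) // window
    w = counts[:n * window].reshape(n, window).sum(axis=1)
    return w.var() / w.mean()

rng = np.random.default_rng(1)
photocounts = rng.poisson(4.0, size=10_000)          # stationary photon counts
print([fano_factor(photocounts, w) for w in (10, 100, 1000)])  # all near 1.0
```

A residual trend in the variance, of the kind the paper's pre-processing removes, would instead inflate the Fano factor as the window grows.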

  16. Pre-processing data using wavelet transform and PCA based on ...

    Indian Academy of Sciences (India)

    Abazar Solgi

    2017-07-14

Pre-processing data using wavelet transform and PCA based on support vector regression and gene expression programming for river flow simulation. Abazar Solgi, Amir Pourhaghi, Ramin Bahmani and Heidar Zarei. Department of Water Resources Engineering, Shahid Chamran University of ...

  17. Scientific data products and the data pre-processing subsystem of the Chang'e-3 mission

    International Nuclear Information System (INIS)

    Tan Xu; Liu Jian-Jun; Li Chun-Lai; Feng Jian-Qing; Ren Xin; Wang Fen-Fei; Yan Wei; Zuo Wei; Wang Xiao-Qian; Zhang Zhou-Bin

    2014-01-01

The Chang'e-3 (CE-3) mission is China's first exploration mission on the surface of the Moon that uses a lander and a rover. The eight instruments that form the scientific payloads have the following objectives: (1) investigate the morphological features and geological structures at the landing site; (2) perform integrated in-situ analysis of minerals and chemical compositions; (3) carry out integrated exploration of the structure of the lunar interior; (4) explore the lunar-terrestrial space environment and the lunar surface environment, and acquire Moon-based ultraviolet astronomical observations. The Ground Research and Application System (GRAS) is in charge of data acquisition and pre-processing, management of the payload in orbit, and management of the data products and their applications. The Data Pre-processing Subsystem (DPS) is part of GRAS. The task of the DPS is the pre-processing of raw data from the eight instruments on CE-3, including channel processing, unpacking, package sorting, calibration and correction, identification of geographical location, calculation of the probe azimuth and zenith angles and the solar azimuth and zenith angles, and quality checks. These processes produce Level 0, Level 1, and Level 2 data. The computing platform of this subsystem comprises a high-performance computing cluster, including a real-time subsystem used for processing Level 0 data and a post-time subsystem for generating Level 1 and Level 2 data. This paper describes the CE-3 data pre-processing method, the data pre-processing subsystem, data classification, data validity, and the data products that are used for scientific studies.

  18. Pre-processing of Fourier transform infrared spectra by means of multivariate analysis implemented in the R environment.

    Science.gov (United States)

    Banas, Krzysztof; Banas, Agnieszka; Gajda, Mariusz; Pawlicki, Bohdan; Kwiatek, Wojciech M; Breese, Mark B H

    2015-04-21

Pre-processing of Fourier transform infrared (FTIR) spectra is typically the first and crucial step in data analysis. Hyperspectral datasets very often include regions characterized by spectra of very low intensity, for example two-dimensional (2D) maps where areas containing only support materials (like mylar foil) are present. In that case, segmentation of the complete dataset is required before subsequent evaluation. The method proposed in this contribution is based on a multivariate approach (hierarchical cluster analysis) and shows its superiority when compared to the standard method of cutting off by using only the mean spectral intensity. Both techniques were implemented and their performance was tested in the R statistical environment, an open-source platform that is a favourable solution when repeatability and transparency are key concerns.
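
The paper's workflow is implemented in R; the following Python sketch only illustrates the underlying idea of segmenting a hyperspectral dataset by hierarchical clustering and keeping the high-intensity cluster, on synthetic spectra:

```python
# Hierarchical-clustering segmentation: separate sample spectra from
# low-intensity support-material spectra (synthetic data, two clusters).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
sample = rng.normal(1.0, 0.1, size=(60, 200))     # tissue-like spectra
support = rng.normal(0.05, 0.02, size=(40, 200))  # mylar-like, low intensity
spectra = np.vstack([sample, support])

Z = linkage(spectra, method="ward")
groups = fcluster(Z, t=2, criterion="maxclust")   # split into two clusters

# Keep the cluster containing the highest-mean-intensity spectrum.
keep = groups == groups[np.argmax(spectra.mean(axis=1))]
segmented = spectra[keep]
```

Unlike a plain mean-intensity cutoff, the clustering uses the whole spectral shape, which is the advantage the abstract claims for the multivariate approach.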

  19. Relative effects of statistical preprocessing and postprocessing on a regional hydrological ensemble prediction system

    Science.gov (United States)

    Sharma, Sanjib; Siddique, Ridwan; Reed, Seann; Ahnert, Peter; Mendoza, Pablo; Mejia, Alfonso

    2018-03-01

The relative roles of statistical weather preprocessing and streamflow postprocessing in hydrological ensemble forecasting at short- to medium-range forecast lead times (days 1-7) are investigated. For this purpose, a regional hydrologic ensemble prediction system (RHEPS) is developed and implemented. The RHEPS is comprised of the following components: (i) hydrometeorological observations (multisensor precipitation estimates, gridded surface temperature, and gauged streamflow); (ii) weather ensemble forecasts (precipitation and near-surface temperature) from the National Centers for Environmental Prediction 11-member Global Ensemble Forecast System Reforecast version 2 (GEFSRv2); (iii) NOAA's Hydrology Laboratory-Research Distributed Hydrologic Model (HL-RDHM); (iv) heteroscedastic censored logistic regression (HCLR) as the statistical preprocessor; (v) two statistical postprocessors, an autoregressive model with a single exogenous variable (ARX(1,1)) and quantile regression (QR); and (vi) a comprehensive verification strategy. To implement the RHEPS, 1- to 7-day weather forecasts from the GEFSRv2 are used to force HL-RDHM and generate raw ensemble streamflow forecasts. Forecasting experiments are conducted in four nested basins in the US Middle Atlantic region, ranging in size from 381 to 12 362 km2. Results show that the HCLR-preprocessed ensemble precipitation forecasts have greater skill than the raw forecasts. These improvements are more noticeable in the warm season at the longer lead times (> 3 days). Both postprocessors, ARX(1,1) and QR, show gains in skill relative to the raw ensemble streamflow forecasts, particularly in the cool season, but QR outperforms ARX(1,1). The scenarios that implement preprocessing and postprocessing separately tend to perform similarly, although the postprocessing-alone scenario is often more effective. The scenario involving both preprocessing and postprocessing consistently outperforms the other scenarios. In some cases
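
As a toy sketch of a QR-style postprocessor (not the RHEPS implementation), one can regress observed-streamflow quantiles on a raw forecast; all data below are synthetic:

```python
# Quantile-regression postprocessing sketch: calibrate forecast quantiles
# against observations with heteroscedastic errors.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(0)
raw = rng.gamma(2.0, 50.0, size=500)                 # raw forecast (synthetic)
obs = 0.8 * raw + rng.normal(0.0, 10 + 0.1 * raw)    # "observed" flow, error grows with flow

X = sm.add_constant(raw)
fits = {q: QuantReg(obs, X).fit(q=q) for q in (0.1, 0.5, 0.9)}
# Calibrated predictive quantiles for new raw forecasts:
calibrated = {q: fit.predict(sm.add_constant(raw[:5])) for q, fit in fits.items()}
```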

  20. Thresholding: A Pixel-Level Image Processing Methodology Preprocessing Technique for an OCR System for the Brahmi Script

    Directory of Open Access Journals (Sweden)

    H. K. Anasuya Devi

    2006-12-01

In this paper we study the methodology employed for preprocessing archaeological images. We present the various algorithms used in the low-level processing stage of image analysis for an Optical Character Recognition System for the Brahmi Script. The image preprocessing technique covered in this paper is thresholding. We also analyze the results obtained by the pixel-level processing algorithms.

  1. Denoising by semi-supervised kernel PCA preimaging

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Abrahamsen, Trine Julie; Hansen, Lars Kai

    2014-01-01

Kernel Principal Component Analysis (kernel PCA) has proven a powerful tool for nonlinear feature extraction, and is often applied as a pre-processing step for classification algorithms. In denoising applications, kernel PCA provides the basis for dimensionality reduction prior to the so-called pre-imaging.
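
A minimal sketch of kernel PCA denoising with approximate pre-images, using scikit-learn's built-in inverse transform rather than the semi-supervised variant developed in the paper:

```python
# Kernel PCA denoising: project noisy data onto leading kernel components,
# then map back to input space via the learned (approximate) inverse transform.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import KernelPCA

X = load_digits().data
X_noisy = X + np.random.default_rng(0).normal(0.0, 4.0, X.shape)

kpca = KernelPCA(n_components=30, kernel="rbf", gamma=1e-3,
                 fit_inverse_transform=True, alpha=0.1)  # alpha: pre-image ridge term
X_denoised = kpca.inverse_transform(kpca.fit_transform(X_noisy))
```

The `gamma` and `alpha` values here are illustrative; in practice they are tuned, which is exactly where the semi-supervised guidance discussed in the paper can help.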

  2. Comparison of planar images and SPECT with Bayesian preprocessing for the demonstration of facial anatomy and craniomandibular disorders

    International Nuclear Information System (INIS)

    Kircos, L.T.; Ortendahl, D.A.; Hattner, R.S.; Faulkner, D.; Taylor, R.L.

    1984-01-01

Craniomandibular disorders involving the facial anatomy may be difficult to demonstrate in planar images. Although bone scanning is generally more sensitive than radiography, facial bone anatomy is complex, and focal areas of increased or decreased radiotracer may become obscured by overlapping structures in planar images. Thus SPECT appears ideally suited to examination of the facial skeleton. A series of patients with craniomandibular disorders of unknown origin were imaged using 20 mCi Tc-99m MDP. Planar and SPECT (Siemens 7500 ZLC Orbiter) images were obtained four hours after injection. The SPECT images were reconstructed with a filtered back-projection algorithm. In order to improve image contrast and resolution in SPECT images, the rotation views were pre-processed with a Bayesian deblurring algorithm which has previously been shown to offer improved contrast and resolution in planar images. SPECT images using the pre-processed rotation views were obtained and compared to the SPECT images without pre-processing and to the planar images. TMJ arthropathy involving either the glenoid fossa or the mandibular condyle, orthopedic changes involving the mandible or maxilla, localized dental pathosis, as well as changes in structures peripheral to the facial skeleton were identified. Bayesian pre-processed SPECT depicted the facial skeleton more clearly, and provided a more obvious demonstration of the bony changes associated with craniomandibular disorders, than either planar images or SPECT without pre-processing.

  3. Linguistic Preprocessing and Tagging for Problem Report Trend Analysis

    Science.gov (United States)

    Beil, Robert J.; Malin, Jane T.

    2012-01-01

    Mr. Robert Beil, Systems Engineer at Kennedy Space Center (KSC), requested the NASA Engineering and Safety Center (NESC) develop a prototype tool suite that combines complementary software technology used at Johnson Space Center (JSC) and KSC for problem report preprocessing and semantic tag extraction, to improve input to data mining and trend analysis. This document contains the outcome of the assessment and the Findings, Observations and NESC Recommendations.

  4. The 1992 ENDF Pre-processing codes

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1992-01-01

This document summarizes the 1992 version of the ENDF pre-processing codes which are required for processing evaluated nuclear data coded in the formats ENDF-4, ENDF-5, or ENDF-6. Included are the codes CONVERT, MERGER, LINEAR, RECENT, SIGMA1, LEGEND, FIXUP, GROUPIE, DICTION, MIXER, VIRGIN, COMPLOT, EVALPLOT, RELABEL. Some of the functions of these codes are: to calculate cross-sections from resonance parameters; to calculate angular distributions, group averages, mixtures of cross-sections, etc.; and to produce graphical plots and data comparisons. The codes are designed to operate on virtually any type of computer, including PCs. They are available from the IAEA Nuclear Data Section, free of charge upon request, on magnetic tape or a set of HD diskettes. (author)

  5. Input data preprocessing method for exchange rate forecasting via neural network

    Directory of Open Access Journals (Sweden)

    Antić Dragan S.

    2014-01-01

The aim of this paper is to present a method for neural network input parameter selection and preprocessing. The purpose of the network is to forecast foreign exchange rates using artificial intelligence. Two data sets are formed for two different economic systems. Each system is represented by six categories with 70 economic parameters which are used in the analysis. Reduction of these parameters within each category was performed using the principal component analysis method. Component interdependencies are established and relations between them are formed. The newly formed relations were used to create the input vectors of a neural network. A multilayer feed-forward neural network is formed and trained using batch training. Finally, simulation results are presented, and it is concluded that the input data preparation method is an effective way to preprocess neural network data. [Project of the Ministry of Science of the Republic of Serbia, nos. TR 35005, III 43007 and III 44006]
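
An illustrative sketch of the general recipe (not the authors' exact setup): reduce each category of indicators with PCA and feed the concatenated components to a feed-forward network. The category sizes, component counts, and target below are synthetic:

```python
# Per-category PCA reduction followed by a feed-forward regressor.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
categories = [rng.normal(size=(300, 12)) for _ in range(6)]  # 6 indicator categories
rate = rng.normal(size=300)                                  # exchange rate target (synthetic)

# Keep a few principal components per category and concatenate them.
inputs = np.hstack([PCA(n_components=3).fit_transform(c) for c in categories])

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(inputs, rate)
```

Reducing each category separately, as the abstract describes, keeps the components interpretable per economic category rather than mixing all 70 parameters at once.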

  6. CudaPre3D: An Alternative Preprocessing Algorithm for Accelerating 3D Convex Hull Computation on the GPU

    Directory of Open Access Journals (Sweden)

    MEI, G.

    2015-05-01

When calculating convex hulls of point sets, a preprocessing procedure that filters the input points by discarding non-extreme points is commonly used to improve computational efficiency. We previously proposed a quite straightforward preprocessing approach for accelerating 2D convex hull computation on the GPU. In this paper, we extend that algorithm to 3D cases. The basic ideas behind the two preprocessing algorithms are similar: first, several groups of extreme points are found according to the original set of input points and several rotated versions of the input set; then, a convex polyhedron is created using the found extreme points; and finally, the interior points lying inside the formed convex polyhedron are discarded. Experimental results show that, when employing the proposed preprocessing algorithm, speedups of about 4x on average, and 5x to 6x in the best cases, are achieved over the cases where the proposed approach is not used. In addition, more than 95 percent of the input points can be discarded in most experimental tests.
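
A CPU-side sketch of the preprocessing idea (the paper's algorithm runs on the GPU): pick extreme points along sampled directions, build a coarse polyhedron from them, and discard input points that fall inside it before the full hull computation. The number of directions is an arbitrary choice:

```python
# Interior-point filtering before 3D convex hull computation.
import numpy as np
from scipy.spatial import Delaunay, ConvexHull

rng = np.random.default_rng(0)
pts = rng.normal(size=(100_000, 3))

dirs = rng.normal(size=(16, 3))                        # sampled directions (assumed choice)
proj = pts @ dirs.T
extremes = np.unique(np.r_[proj.argmax(axis=0), proj.argmin(axis=0)])

# Points inside the coarse polyhedron spanned by the extremes cannot be hull vertices.
inner = Delaunay(pts[extremes]).find_simplex(pts) >= 0
survivors = pts[~inner]

hull = ConvexHull(np.vstack([survivors, pts[extremes]]))
print(len(pts), len(survivors))                        # most interior points discarded
```

The filter is conservative: only points strictly inside the coarse polyhedron are dropped, so the final hull is unchanged while its input shrinks dramatically.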

  7. Status of pre-processing of waste electrical and electronic equipment in Germany and its influence on the recovery of gold.

    Science.gov (United States)

    Chancerel, Perrine; Bolland, Til; Rotter, Vera Susanne

    2011-03-01

Waste electrical and electronic equipment (WEEE) contains gold in low but, from an environmental and economic point of view, relevant concentrations. After collection, WEEE is pre-processed in order to generate appropriate material fractions that are sent to the subsequent end-processing stages (recovery, reuse, or disposal). The goal of this research is to quantify the overall recovery rates of the pre-processing technologies used in Germany for the reference year 2007. To achieve this goal, facilities operating in Germany were listed and classified according to the technology they apply. Information on their processing capacity was gathered by evaluating statistical databases. Based on a literature review of experimental results for the gold recovery rates of different pre-processing technologies, the German overall recovery rate of gold at the pre-processing level was quantified as a function of the characteristics of the treated WEEE. The results reveal that, depending on the equipment group, pre-processing recovery rates of gold of 29 to 61% are achieved in Germany. Some practical recommendations to reduce the losses during pre-processing could be formulated. Defining mass-based recovery targets in legislation does not create incentives to recover trace elements. Instead, recycling priorities could be defined based on other parameters, such as the environmental impacts of the materials. The implementation of measures to reduce gold losses would also improve the recovery of several other non-ferrous metals such as tin, nickel, and palladium.

  8. The Python Spectral Analysis Tool (PySAT): A Powerful, Flexible, Preprocessing and Machine Learning Library and Interface

    Science.gov (United States)

    Anderson, R. B.; Finch, N.; Clegg, S. M.; Graff, T. G.; Morris, R. V.; Laura, J.; Gaddis, L. R.

    2017-12-01

    Machine learning is a powerful but underutilized approach that can enable planetary scientists to derive meaningful results from the rapidly-growing quantity of available spectral data. For example, regression methods such as Partial Least Squares (PLS) and Least Absolute Shrinkage and Selection Operator (LASSO), can be used to determine chemical concentrations from ChemCam and SuperCam Laser-Induced Breakdown Spectroscopy (LIBS) data [1]. Many scientists are interested in testing different spectral data processing and machine learning methods, but few have the time or expertise to write their own software to do so. We are therefore developing a free open-source library of software called the Python Spectral Analysis Tool (PySAT) along with a flexible, user-friendly graphical interface to enable scientists to process and analyze point spectral data without requiring significant programming or machine-learning expertise. A related but separately-funded effort is working to develop a graphical interface for orbital data [2]. The PySAT point-spectra tool includes common preprocessing steps (e.g. interpolation, normalization, masking, continuum removal, dimensionality reduction), plotting capabilities, and capabilities to prepare data for machine learning such as creating stratified folds for cross validation, defining training and test sets, and applying calibration transfer so that data collected on different instruments or under different conditions can be used together. The tool leverages the scikit-learn library [3] to enable users to train and compare the results from a variety of multivariate regression methods. It also includes the ability to combine multiple "sub-models" into an overall model, a method that has been shown to improve results and is currently used for ChemCam data [4]. Although development of the PySAT point-spectra tool has focused primarily on the analysis of LIBS spectra, the relevant steps and methods are applicable to any spectral data. The
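
As a small example of the kind of regression such a tool wraps, a PLS model can be cross-validated on spectra with scikit-learn; the spectra and "concentration" below are synthetic stand-ins for LIBS data:

```python
# PLS regression on spectra with cross-validation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
spectra = rng.normal(size=(120, 1024))                    # 120 spectra, 1024 channels
conc = spectra[:, 100] * 3.0 + rng.normal(0, 0.1, 120)    # synthetic target composition

pls = PLSRegression(n_components=8)
scores = cross_val_score(pls, spectra, conc, cv=5, scoring="r2")
print(scores.mean())
```

The stratified folds, train/test splits, sub-model blending, and calibration transfer described in the abstract are layered on top of exactly this kind of core model.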

  9. Acoustic Biometric System Based on Preprocessing Techniques and Linear Support Vector Machines.

    Science.gov (United States)

    del Val, Lara; Izquierdo-Fuente, Alberto; Villacorta, Juan J; Raboso, Mariano

    2015-06-17

Drawing on the results of an acoustic biometric system based on an MSE classifier, a new biometric system has been implemented. This new system preprocesses acoustic images, extracts several parameters, and finally classifies them based on a Support Vector Machine (SVM). The preprocessing techniques used are spatial filtering; segmentation, based on a Gaussian Mixture Model (GMM), to separate the person from the background; masking, to reduce the dimensions of the images; and binarization, to reduce the size of each image. An analysis of the classification error and a study of the sensitivity of the error versus the computational burden of each implemented algorithm are presented. This allows the selection of the most relevant algorithms, according to the benefits required by the system. A significant improvement of the biometric system has been achieved by reducing the classification error, the computational burden, and the storage requirements.
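
A condensed sketch in the spirit of this pipeline, assuming a two-component Gaussian mixture for foreground/background separation and a linear SVM on the binarized images; the "acoustic images" and all parameters are invented:

```python
# GMM-based foreground segmentation + binarization + linear SVM classification.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
images = rng.normal(0.0, 1.0, size=(80, 32, 32))
images[:40, 10:20, 10:20] += 3.0                 # class-0 "person" signature
labels = np.r_[np.zeros(40), np.ones(40)]

gmm = GaussianMixture(n_components=2, random_state=0).fit(images.reshape(-1, 1))

def preprocess(img):
    # Keep the brighter GMM component as foreground; the mask is the binarized image.
    fg = gmm.predict(img.reshape(-1, 1)) == np.argmax(gmm.means_)
    return fg.astype(float)

X = np.array([preprocess(im) for im in images])
clf = LinearSVC(dual=False).fit(X, labels)
```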

  10. A simpler method of preprocessing MALDI-TOF MS data for differential biomarker analysis: stem cell and melanoma cancer studies

    Directory of Open Access Journals (Sweden)

    Tong Dong L

    2011-09-01

Introduction: Raw spectral data from matrix-assisted laser desorption/ionisation time-of-flight (MALDI-TOF) MS profiling techniques usually contain complex information that does not readily provide biological insight into disease. Associating identified features within raw data with a known peptide is extremely difficult. Data preprocessing to remove uncertainty characteristics in the data is normally required before performing any further analysis. This study proposes an alternative yet simple solution for preprocessing raw MALDI-TOF-MS data for the identification of candidate marker ions. Two in-house MALDI-TOF-MS data sets from two different sample sources (melanoma serum and cord blood plasma) are used in our study. Method: Raw MS spectral profiles were preprocessed using the proposed approach to identify peak regions in the spectra. The preprocessed data were then analysed using bespoke machine learning algorithms for data reduction and ion selection. Using the selected ions, an ANN-based predictive model was constructed to examine the predictive power of these ions for classification. Results: Our model identified 10 candidate marker ions for both data sets. These ion panels achieved over 90% classification accuracy on blind validation data. Receiver operating characteristic analysis was performed, and the area under the curve for the melanoma and cord blood classifiers was 0.991 and 0.986, respectively. Conclusion: The results suggest that our data preprocessing technique removes unwanted characteristics of the raw data while preserving the predictive components of the data. Ion identification analysis can be carried out on MALDI-TOF-MS data using the proposed data preprocessing technique coupled with bespoke algorithms for data reduction and ion selection.

  11. Effect of pre-processing on the physico-chemical properties of ...

    African Journals Online (AJOL)

    The findings indicated that the pre-processing treatments produced significant differences (p < 0.05) in protein (1.50 ± 0.18g/100g) and carbohydrate (1.09 ± 0.94g/100g) composition of the baking soda blanched milk sample. The viscosity of the baking soda blanched milk (18.91 ± 3.38cps) was significantly higher than that ...

  12. Orthogonal feature selection method [for preprocessing of mass spectral data

    Energy Technology Data Exchange (ETDEWEB)

    Kowalski, B R [Univ. of Washington, Seattle; Bender, C F

    1976-01-01

A new method of preprocessing spectral data for the extraction of molecular structural information is described. This SELECT method generates orthogonal features that are important for classification purposes and that also retain their identity with the original measurements. A brief introduction to chemical pattern recognition is presented. A brief description of the method and an application to mass spectral data analysis follow. (BLM)

  13. Data pre-processing: a case study in predicting student's retention in ...

    African Journals Online (AJOL)

dataset with features that are ready for the data mining task. The study also proposed a process model and suggestions which can be applied to support more comprehensible tools for end users in the educational domain. Subsequently, the data pre-processing becomes more efficient for predicting student retention in ...

  14. Learning and Generalisation in Neural Networks with Local Preprocessing

    OpenAIRE

    Kutsia, Merab

    2007-01-01

We study the learning and generalisation ability of a specific two-layer feed-forward neural network and compare its properties to those of a simple perceptron. The input patterns are mapped nonlinearly onto a hidden layer, much larger than the input layer, and this mapping is either fixed or may result from an unsupervised learning process. Such preprocessing of initially uncorrelated random patterns results in correlated patterns in the hidden layer. The hidden-to-output mapping of the net...

  15. An On-Site, Step-by-Step Training Approach for Optimizing the Teacher's Role in Learning (Pendekatan Pelatihan On-Site dan Step by Step untuk Optimalisasi Fungsi Guru dalam Pembelajaran)

    OpenAIRE

    Moch. Sholeh Y.A. Ichrom

    2016-01-01

The remoteness of programme content from teachers' real work situations and the unsuitability of the approach employed were suspected as the main reasons contributing to the failure of many in-service teacher training programmes. A step-by-step, on-site teacher training (SSOTT) model was tried out in this experiment to study whether the weaknesses of in-service programmes could be rectified. As it was tried out in relation to kindergarten mathematics, it was then called SSOTT-MTW (Step by Step Onsite Teacher Training...

  16. An On-Site, Step-by-Step Training Approach for Optimizing the Teacher's Role in Learning (Pendekatan Pelatihan On-Site Dan Step by Step Untuk Optimalisasi Fungsi Guru Dalam Pembelajaran)

    OpenAIRE

    Ichrom, Moch. Sholeh Y.A

    1996-01-01

The remoteness of programme content from teachers' real work situations and the unsuitability of the approach employed were suspected as the main reasons contributing to the failure of many in-service teacher training programmes. A step-by-step, on-site teacher training (SSOTT) model was tried out in this experiment to study whether the weaknesses of in-service programmes could be rectified. As it was tried out in relation to kindergarten mathematics, it was then called SSOTT-MTW (Step by Step Onsite Teacher Training...

  17. A clinical evaluation of the RNCA study using Fourier filtering as a preprocessing method

    Energy Technology Data Exchange (ETDEWEB)

    Robeson, W.; Alcan, K.E.; Graham, M.C.; Palestro, C.; Oliver, F.H.; Benua, R.S.

    1984-06-01

Forty-one patients (25 male, 16 female) were studied by radionuclide cineangiography (RNCA) in our institution. There were 42 rest studies and 24 stress studies (66 studies in total). Sixteen patients were normal, 15 had ASHD, seven had a cardiomyopathy, and three had left-sided valvular regurgitation. Each study was preprocessed using both the standard nine-point smoothing method and Fourier filtering. Amplitude and phase images were also generated. The two preprocessing methods were compared with respect to image quality, border definition, reliability and reproducibility of the LVEF, and cine wall motion interpretation. Image quality and border definition were judged superior, by the consensus of two independent observers, in 65 of 66 studies (98%) using Fourier-filtered data. The LVEF differed between the two processes by greater than 0.05 in 17 of 66 studies (26%), including five studies in which the LVEF could not be determined using nine-point smoothed data. LV wall motion was normal by both techniques in all control patients by cine analysis. However, cine wall motion analysis using Fourier-filtered data demonstrated additional abnormalities in 17 of 25 studies (68%) in the ASHD group, including three uninterpretable studies using nine-point smoothed data. In the cardiomyopathy/valvular heart disease group, ten of 18 studies (56%) had additional wall motion abnormalities using Fourier-filtered data (including four uninterpretable studies using nine-point smoothed data). We conclude that Fourier filtering is superior to the nine-point smoothing preprocessing method now in general use in terms of image quality, border definition, generation of an LVEF, and cine wall motion analysis. The advent of the array processor makes routine preprocessing by Fourier filtering a feasible technological advance in the development of the RNCA study.

  18. A clinical evaluation of the RNCA study using Fourier filtering as a preprocessing method

    International Nuclear Information System (INIS)

    Robeson, W.; Alcan, K.E.; Graham, M.C.; Palestro, C.; Oliver, F.H.; Benua, R.S.

    1984-01-01

Forty-one patients (25 male, 16 female) were studied by radionuclide cineangiography (RNCA) in our institution. There were 42 rest studies and 24 stress studies (66 studies in total). Sixteen patients were normal, 15 had ASHD, seven had a cardiomyopathy, and three had left-sided valvular regurgitation. Each study was preprocessed using both the standard nine-point smoothing method and Fourier filtering. Amplitude and phase images were also generated. The two preprocessing methods were compared with respect to image quality, border definition, reliability and reproducibility of the LVEF, and cine wall motion interpretation. Image quality and border definition were judged superior, by the consensus of two independent observers, in 65 of 66 studies (98%) using Fourier-filtered data. The LVEF differed between the two processes by greater than 0.05 in 17 of 66 studies (26%), including five studies in which the LVEF could not be determined using nine-point smoothed data. LV wall motion was normal by both techniques in all control patients by cine analysis. However, cine wall motion analysis using Fourier-filtered data demonstrated additional abnormalities in 17 of 25 studies (68%) in the ASHD group, including three uninterpretable studies using nine-point smoothed data. In the cardiomyopathy/valvular heart disease group, ten of 18 studies (56%) had additional wall motion abnormalities using Fourier-filtered data (including four uninterpretable studies using nine-point smoothed data). We conclude that Fourier filtering is superior to the nine-point smoothing preprocessing method now in general use in terms of image quality, border definition, generation of an LVEF, and cine wall motion analysis. The advent of the array processor makes routine preprocessing by Fourier filtering a feasible technological advance in the development of the RNCA study.

  19. Evaluation of the robustness of the preprocessing technique improving reversible compressibility of CT images: Tested on various CT examinations

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Chang Ho; Kim, Bohyoung; Gu, Bon Seung; Lee, Jong Min [Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of); Kim, Kil Joong [Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707, South Korea and Department of Radiation Applied Life Science, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 110-799 (Korea, Republic of); Lee, Kyoung Ho [Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707, South Korea and Institute of Radiation Medicine, Seoul National University Medical Research Center, and Clinical Research Institute, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul 110-744 (Korea, Republic of); Kim, Tae Ki [Medical Information Center, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of)

    2013-10-15

Purpose: To modify a previously proposed preprocessing technique that improves the compressibility of computed tomography (CT) images, so that it covers the diversity of three-dimensional configurations of different body parts, and to evaluate the robustness of the technique in terms of segmentation correctness and the increase in reversible compression ratio (CR) for various CT examinations. Methods: This study had institutional review board approval with waiver of informed patient consent. A preprocessing technique was previously proposed to improve the compressibility of CT images by replacing pixel values outside the body region with a constant value, thereby maximizing data redundancy. Since the technique was developed with only chest CT images in mind, the authors modified the segmentation method to cover the diversity of three-dimensional configurations of different body parts. The modified version was evaluated as follows. In 368 randomly selected CT examinations (352 787 images), each image was preprocessed using the modified preprocessing technique. Radiologists visually confirmed whether or not the segmented region covered the body region. The images with and without preprocessing were reversibly compressed using Joint Photographic Experts Group (JPEG), JPEG2000 two-dimensional (2D), and JPEG2000 three-dimensional (3D) compression. The percentage increase in CR per examination (CR_I) was measured. Results: The rate of correct segmentation was 100.0% (95% CI: 99.9%, 100.0%) for all the examinations. The medians of CR_I were 26.1% (95% CI: 24.9%, 27.1%), 40.2% (38.5%, 41.1%), and 34.5% (32.7%, 36.2%) for JPEG, JPEG2000 2D, and JPEG2000 3D, respectively. Conclusions: In various CT examinations, the modified preprocessing technique can increase the CR by 25% or more without concern about degradation of diagnostic information.

  20. Contour extraction of echocardiographic images based on pre-processing

    Energy Technology Data Exchange (ETDEWEB)

    Hussein, Zinah Rajab; Rahmat, Rahmita Wirza; Abdullah, Lili Nurliyana [Department of Multimedia, Faculty of Computer Science and Information Technology, Department of Computer and Communication Systems Engineering, Faculty of Engineering University Putra Malaysia 43400 Serdang, Selangor (Malaysia); Zamrin, D M [Department of Surgery, Faculty of Medicine, National University of Malaysia, 56000 Cheras, Kuala Lumpur (Malaysia); Saripan, M Iqbal

    2011-02-15

In this work we present a technique to extract heart contours from noisy echocardiograph images. Our technique is based on improving the image before applying contour detection, in order to reduce heavy noise and obtain better image quality. To achieve this, we combine several pre-processing techniques (filtering, morphological operations, and contrast adjustment) to avoid unclear edges and enhance the low contrast of echocardiograph images. After applying these techniques, heart boundaries and valve movement can be clearly detected by traditional edge detection methods.

  1. Contour extraction of echocardiographic images based on pre-processing

    International Nuclear Information System (INIS)

    Hussein, Zinah Rajab; Rahmat, Rahmita Wirza; Abdullah, Lili Nurliyana; Zamrin, D M; Saripan, M Iqbal

    2011-01-01

In this work we present a technique to extract heart contours from noisy echocardiograph images. Our technique is based on improving the image before applying contour detection, in order to reduce heavy noise and obtain better image quality. To achieve this, we combine several pre-processing techniques (filtering, morphological operations, and contrast adjustment) to avoid unclear edges and enhance the low contrast of echocardiograph images. After applying these techniques, heart boundaries and valve movement can be clearly detected by traditional edge detection methods.

  2. Parallel preprocessing in a nuclear data acquisition system

    International Nuclear Information System (INIS)

    Pichot, G.; Auriol, E.; Lemarchand, G.; Millaud, J.

    1977-01-01

The appearance of microprocessors and large memory chips has somewhat modified the spectrum of tools usable by the data acquisition system designer. This is particularly true in the nuclear research field, where the data flow has been growing continuously as a consequence of the increasing capabilities of new detectors. This paper deals with the insertion, between a data acquisition system and a computer, of a preprocessing structure based on microprocessors and large-capacity high-speed memories. The results show a significant improvement in several aspects of the system's operation, with returns paying back the investment within 18 months.

  3. TargetSearch--a Bioconductor package for the efficient preprocessing of GC-MS metabolite profiling data.

    Science.gov (United States)

    Cuadros-Inostroza, Alvaro; Caldana, Camila; Redestig, Henning; Kusano, Miyako; Lisec, Jan; Peña-Cortés, Hugo; Willmitzer, Lothar; Hannah, Matthew A

    2009-12-16

    Metabolite profiling, the simultaneous quantification of multiple metabolites in an experiment, is becoming increasingly popular, particularly with the rise of systems-level biology. The workhorse in this field is gas-chromatography hyphenated with mass spectrometry (GC-MS). The high-throughput of this technology coupled with a demand for large experiments has led to data pre-processing, i.e. the quantification of metabolites across samples, becoming a major bottleneck. Existing software has several limitations, including restricted maximum sample size, systematic errors and low flexibility. However, the biggest limitation is that the resulting data usually require extensive hand-curation, which is subjective and can typically take several days to weeks. We introduce the TargetSearch package, an open source tool which is a flexible and accurate method for pre-processing even very large numbers of GC-MS samples within hours. We developed a novel strategy to iteratively correct and update retention time indices for searching and identifying metabolites. The package is written in the R programming language with computationally intensive functions written in C for speed and performance. The package includes a graphical user interface to allow easy use by those unfamiliar with R. TargetSearch allows fast and accurate data pre-processing for GC-MS experiments and overcomes the sample number limitations and manual curation requirements of existing software. We validate our method by carrying out an analysis against both a set of known chemical standard mixtures and of a biological experiment. In addition we demonstrate its capabilities and speed by comparing it with other GC-MS pre-processing tools. We believe this package will greatly ease current bottlenecks and facilitate the analysis of metabolic profiling data.

  4. Comparison of classification algorithms for various methods of preprocessing radar images of the MSTAR base

    Science.gov (United States)

    Borodinov, A. A.; Myasnikov, V. V.

    2018-04-01

The present work is devoted to comparing the accuracy of known classification algorithms in the task of recognizing local objects in radar images under various image preprocessing methods. Preprocessing involves speckle noise filtering and normalization of the object orientation in the image, by the method of image moments and by a method based on the Hough transform. The following classification algorithms are compared: decision tree, support vector machine, AdaBoost, and random forest. Principal component analysis is used to reduce the dimensionality. The research is carried out on objects from the MSTAR radar image database. The paper presents the results of the conducted studies.
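
A minimal sketch of such a comparison with scikit-learn, using PCA for dimensionality reduction ahead of the four classifiers named above; random data stands in for the preprocessed MSTAR chips:

```python
# PCA + classifier comparison via cross-validation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64 * 64))      # flattened radar chips (synthetic)
y = rng.integers(0, 3, size=300)         # three target classes (synthetic)

for clf in (DecisionTreeClassifier(), SVC(),
            AdaBoostClassifier(), RandomForestClassifier()):
    pipe = make_pipeline(PCA(n_components=40), clf)
    print(type(clf).__name__, cross_val_score(pipe, X, y, cv=3).mean())
```

In the study itself, the interesting variable is the preprocessing (filtering and orientation normalization) applied before this pipeline, which random data obviously cannot reproduce.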

  5. Protein from preprocessed waste activated sludge as a nutritional supplement in chicken feed.

    Science.gov (United States)

    Chirwa, Evans M N; Lebitso, Moses T

    2014-01-01

Five groups of broiler chickens were raised on feed containing varying substitutions of single-cell protein from preprocessed waste activated sludge (pWAS), in compositions of 0:100, 25:75, 50:50, 75:25, and 100:0 pWAS:fishmeal by mass. Forty chickens per batch were evaluated for growth rate, mortality rate, and feed conversion efficiency (η). The initial mass gain rate, mortality rate, and initial and operational cost analyses showed that protein from pWAS could successfully replace commercial feed supplements, with a significant cost saving and without adversely affecting the health of the birds. The chickens raised on preprocessed WAS weighed 19% more than those raised on a fishmeal protein supplement over a 45-day test period. Growing chickens on pWAS translated into a 46% cost saving due to the fast growth rate and minimal death losses before maturity.

  6. Convolutional neural networks for vibrational spectroscopic data analysis.

    Science.gov (United States)

    Acquarelli, Jacopo; van Laarhoven, Twan; Gerretzen, Jan; Tran, Thanh N; Buydens, Lutgarde M C; Marchiori, Elena

    2017-02-15

In this work we show that convolutional neural networks (CNNs) can be efficiently used to classify vibrational spectroscopic data and identify important spectral regions. CNNs are the current state-of-the-art in image classification and speech recognition and can learn interpretable representations of the data. These characteristics make CNNs a good candidate for reducing the need for preprocessing and for highlighting important spectral regions, both of which are crucial steps in the analysis of vibrational spectroscopic data. Chemometric analysis of vibrational spectroscopic data often relies on preprocessing methods involving baseline correction, scatter correction and noise removal, which are applied to the spectra prior to model building. Preprocessing is a critical step because even in simple problems using 'reasonable' preprocessing methods may decrease the performance of the final model. We develop a new CNN based method and provide an accompanying publicly available software. It is based on a simple CNN architecture with a single convolutional layer (a so-called shallow CNN). Our method outperforms standard classification algorithms used in chemometrics (e.g. PLS) in terms of accuracy when applied to non-preprocessed test data (86% average accuracy compared to the 62% achieved by PLS), and it achieves better performance even on preprocessed test data (96% average accuracy compared to the 89% achieved by PLS). For interpretability purposes, our method includes a procedure for finding important spectral regions, thereby facilitating qualitative interpretation of results.
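
A sketch of a "shallow" spectral CNN in PyTorch, with a single convolutional layer followed by a linear classifier; the layer sizes and channel count are illustrative, not the paper's exact architecture:

```python
# Shallow 1D CNN for spectra: one convolution, ReLU, then a linear classifier.
import torch
import torch.nn as nn

class ShallowSpectralCNN(nn.Module):
    def __init__(self, n_channels=702, n_classes=4, n_filters=8, width=9):
        super().__init__()
        # 'same' padding keeps the spectral axis length unchanged.
        self.conv = nn.Conv1d(1, n_filters, kernel_size=width, padding=width // 2)
        self.fc = nn.Linear(n_filters * n_channels, n_classes)

    def forward(self, x):                    # x: (batch, n_channels) raw spectra
        h = torch.relu(self.conv(x.unsqueeze(1)))
        return self.fc(h.flatten(1))

model = ShallowSpectralCNN()
logits = model(torch.randn(16, 702))         # 16 non-preprocessed spectra
```

The learned convolutional filters act like data-driven smoothing/derivative operators, which is one intuition for why such a network can absorb the role of explicit preprocessing.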

  7. [Study of near infrared spectral preprocessing and wavelength selection methods for endometrial cancer tissue].

    Science.gov (United States)

    Zhao, Li-Ting; Xiang, Yu-Hong; Dai, Yin-Mei; Zhang, Zhuo-Yong

    2010-04-01

Near infrared spectroscopy was applied to tissue slices of endometrial tissue to collect spectra. A total of 154 spectra were obtained from 154 samples. The numbers of normal, hyperplasia, and malignant samples were 36, 60, and 58, respectively. Original near infrared spectra are composed of many variables and include interference such as instrument errors and physical effects (for example, particle size and light scatter). In order to reduce these influences, the original spectral data should be processed with different spectral preprocessing methods to compress the variables and extract useful information. The methods of spectral preprocessing and wavelength selection therefore play an important role in near infrared spectroscopy. In the present paper, the raw spectra were processed using various preprocessing methods including first derivative, multiplicative scatter correction, the Savitzky-Golay first derivative algorithm, standard normal variate, smoothing, and moving-window median. The standard deviation was used to select the optimal spectral region of 4 000-6 000 cm(-1). Principal component analysis was then used for classification. The principal component analysis results showed that the three types of samples could be discriminated completely, with accuracy approaching 100%. This study demonstrated that near infrared spectroscopy combined with chemometric methods could be a fast, efficient, and novel means of diagnosing cancer. The proposed methods would be a promising and significant diagnostic technique for early stage cancer.
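
Two of the preprocessing steps named above, standard normal variate (SNV) and a Savitzky-Golay first derivative, can be sketched as follows on synthetic spectra; the window length and polynomial order are typical but arbitrary choices:

```python
# SNV followed by a Savitzky-Golay first derivative, per spectrum.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
spectra = rng.normal(1.0, 0.2, size=(154, 1500))     # stand-in for NIR spectra

# SNV: center and scale each spectrum individually (corrects scatter effects).
snv = (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

# Savitzky-Golay first derivative (removes baseline offsets, sharpens bands).
d1 = savgol_filter(snv, window_length=15, polyorder=2, deriv=1, axis=1)
```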

  8. TargetSearch - a Bioconductor package for the efficient preprocessing of GC-MS metabolite profiling data

    Science.gov (United States)

    2009-01-01

    Background Metabolite profiling, the simultaneous quantification of multiple metabolites in an experiment, is becoming increasingly popular, particularly with the rise of systems-level biology. The workhorse in this field is gas-chromatography hyphenated with mass spectrometry (GC-MS). The high-throughput of this technology coupled with a demand for large experiments has led to data pre-processing, i.e. the quantification of metabolites across samples, becoming a major bottleneck. Existing software has several limitations, including restricted maximum sample size, systematic errors and low flexibility. However, the biggest limitation is that the resulting data usually require extensive hand-curation, which is subjective and can typically take several days to weeks. Results We introduce the TargetSearch package, an open source tool which is a flexible and accurate method for pre-processing even very large numbers of GC-MS samples within hours. We developed a novel strategy to iteratively correct and update retention time indices for searching and identifying metabolites. The package is written in the R programming language with computationally intensive functions written in C for speed and performance. The package includes a graphical user interface to allow easy use by those unfamiliar with R. Conclusions TargetSearch allows fast and accurate data pre-processing for GC-MS experiments and overcomes the sample number limitations and manual curation requirements of existing software. We validate our method by carrying out an analysis against both a set of known chemical standard mixtures and of a biological experiment. In addition we demonstrate its capabilities and speed by comparing it with other GC-MS pre-processing tools. We believe this package will greatly ease current bottlenecks and facilitate the analysis of metabolic profiling data. PMID:20015393

  9. TargetSearch - a Bioconductor package for the efficient preprocessing of GC-MS metabolite profiling data

    Directory of Open Access Journals (Sweden)

    Lisec Jan

    2009-12-01

    Full Text Available Abstract Background Metabolite profiling, the simultaneous quantification of multiple metabolites in an experiment, is becoming increasingly popular, particularly with the rise of systems-level biology. The workhorse in this field is gas-chromatography hyphenated with mass spectrometry (GC-MS). The high-throughput of this technology coupled with a demand for large experiments has led to data pre-processing, i.e. the quantification of metabolites across samples, becoming a major bottleneck. Existing software has several limitations, including restricted maximum sample size, systematic errors and low flexibility. However, the biggest limitation is that the resulting data usually require extensive hand-curation, which is subjective and can typically take several days to weeks. Results We introduce the TargetSearch package, an open source tool which is a flexible and accurate method for pre-processing even very large numbers of GC-MS samples within hours. We developed a novel strategy to iteratively correct and update retention time indices for searching and identifying metabolites. The package is written in the R programming language with computationally intensive functions written in C for speed and performance. The package includes a graphical user interface to allow easy use by those unfamiliar with R. Conclusions TargetSearch allows fast and accurate data pre-processing for GC-MS experiments and overcomes the sample number limitations and manual curation requirements of existing software. We validate our method by carrying out an analysis against both a set of known chemical standard mixtures and of a biological experiment. In addition we demonstrate its capabilities and speed by comparing it with other GC-MS pre-processing tools. We believe this package will greatly ease current bottlenecks and facilitate the analysis of metabolic profiling data.

  10. Automated Pre-processing for NMR Assignments with Reduced Tedium

    Energy Technology Data Exchange (ETDEWEB)

    2004-05-11

    An important rate-limiting step in the resonance assignment process is accurate identification of resonance peaks in NMR spectra. NMR spectra are noisy. Hence, automatic peak-picking programs must navigate between the Scylla of reliable but incomplete picking, and the Charybdis of noisy but complete picking. Each of these extremes complicates the assignment process: incomplete peak-picking results in the loss of essential connectivities, while noisy picking conceals the true connectivities under a combinatorial explosion of false positives. Intermediate processing can simplify the assignment process by preferentially removing false peaks from noisy peak lists. This is accomplished by requiring consensus between multiple NMR experiments, exploiting a priori information about NMR spectra, and drawing on empirical statistical distributions of chemical shifts extracted from the BioMagResBank. Experienced NMR practitioners currently apply many of these techniques "by hand", which is tedious, and may appear arbitrary to the novice. To increase efficiency, we have created a systematic and automated approach to this process, known as APART. Automated pre-processing has three main advantages: reduced tedium, standardization, and pedagogy. In the hands of experienced spectroscopists, the main advantage is reduced tedium (a rapid increase in the ratio of true peaks to false peaks with minimal effort). When a project is passed from hand to hand, the main advantage is standardization. APART automatically documents the peak filtering process by archiving its original recommendations, the accompanying justifications, and whether a user accepted or overrode a given filtering recommendation. In the hands of a novice, this tool can reduce the stumbling block of learning to differentiate between real peaks and noise, by providing real-time examples of how such decisions are made.
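
    The consensus requirement is easy to illustrate in isolation. The sketch below is a simplification, not APART itself: it keeps only the peaks from one experiment whose position is corroborated by a peak in a second experiment within a tolerance; the peak-list format and the tolerance value are assumptions.

        import numpy as np

        def consensus_filter(peaks_a, peaks_b, tol=0.02):
            # Keep peaks from experiment A that have a match in experiment B
            # within +/- tol (ppm); isolated peaks are treated as noise.
            peaks_b = np.sort(peaks_b)
            idx = np.clip(np.searchsorted(peaks_b, peaks_a), 1, len(peaks_b) - 1)
            nearest = np.minimum(np.abs(peaks_a - peaks_b[idx - 1]),
                                 np.abs(peaks_a - peaks_b[idx]))
            return peaks_a[nearest <= tol]

        # Hypothetical peak lists (chemical shifts in ppm) from two experiments.
        a = np.array([1.31, 2.05, 3.77, 5.12, 8.40])
        b = np.array([1.30, 3.78, 5.11, 7.99])
        print(consensus_filter(a, b))  # [1.31 3.77 5.12]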

  11. An Application for Data Preprocessing and Models Extractions in Web Usage Mining

    Directory of Open Access Journals (Sweden)

    Claudia Elena DINUCA

    2011-11-01

    Full Text Available Web servers worldwide generate a vast amount of information on web users' browsing activities. Several researchers have studied these so-called clickstream or web access log data to better understand and characterize web users. The goal of this application is to analyze user behaviour by mining enriched web access log data. With the continued growth and proliferation of e-commerce, Web services, and Web-based information systems, the volume of clickstream and user data collected by Web-based organizations in their daily operations has reached astronomical proportions. This information can be exploited in various ways, such as enhancing the effectiveness of websites or developing directed web marketing campaigns. The discovered patterns are usually represented as collections of pages, objects, or resources that are frequently accessed by groups of users with common needs or interests. In this paper we focus on how the application for data preprocessing and for extracting different data models from web log data was implemented, using association rules as a data mining technique to extract potentially useful knowledge from web usage data. We find different navigation-pattern data models by analysing the log files of the website. The application was implemented in Java using the NetBeans IDE. For exemplification, we used the log file data from a commercial web site, www.nice-layouts.com.
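
    The application itself was written in Java; purely to illustrate the kind of preprocessing described, the sketch below (in Python) parses simplified log records and groups requests into user sessions using a 30-minute inactivity timeout. The record format and the timeout are assumptions, not details from the paper.

        from collections import defaultdict
        from datetime import datetime, timedelta

        SESSION_TIMEOUT = timedelta(minutes=30)  # common, but assumed, cutoff

        def sessionize(log_lines):
            # Group (ip, timestamp, url) records into per-user sessions.
            sessions = defaultdict(list)  # ip -> list of sessions (url lists)
            last_seen = {}                # ip -> timestamp of last request
            for line in log_lines:
                ip, ts, url = line.split()
                t = datetime.fromisoformat(ts)
                if ip not in last_seen or t - last_seen[ip] > SESSION_TIMEOUT:
                    sessions[ip].append([])  # start a new session
                sessions[ip][-1].append(url)
                last_seen[ip] = t
            return sessions

        logs = [
            "10.0.0.1 2011-11-01T10:00:00 /index.html",
            "10.0.0.1 2011-11-01T10:05:00 /products.html",
            "10.0.0.1 2011-11-01T11:30:00 /index.html",  # >30 min gap: new session
        ]
        print(dict(sessionize(logs)))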

  12. Achieving Accurate Automatic Sleep Staging on Manually Pre-processed EEG Data Through Synchronization Feature Extraction and Graph Metrics.

    Science.gov (United States)

    Chriskos, Panteleimon; Frantzidis, Christos A; Gkivogkli, Polyxeni T; Bamidis, Panagiotis D; Kourtidou-Papadeli, Chrysoula

    2018-01-01

    Sleep staging, the process of assigning labels to epochs of sleep depending on the stage of sleep to which they belong, is an arduous, time-consuming and error-prone process, as the initial recordings are quite often polluted by noise from different sources. To properly analyze such data and extract clinical knowledge, noise components must be removed or alleviated. In this paper a pre-processing and subsequent sleep staging pipeline for the sleep analysis of electroencephalographic signals is described. Two novel methods of functional connectivity estimation (Synchronization Likelihood/SL and Relative Wavelet Entropy/RWE) are comparatively investigated for automatic sleep staging through manually pre-processed electroencephalographic recordings. A multi-step process that renders signals suitable for further analysis is initially described. Then, two methods that rely on extracting synchronization features from electroencephalographic recordings to achieve computerized sleep staging are proposed, based on bivariate features which provide a functional overview of the brain network, contrary to most proposed methods that rely on extracting univariate time and frequency features. Annotation of sleep epochs is achieved through the presented feature extraction methods by training classifiers, which are in turn able to accurately classify new epochs. Analysis of data from sleep experiments on a randomized, controlled bed-rest study, which was organized by the European Space Agency and conducted in the "ENVIHAB" facility of the Institute of Aerospace Medicine at the German Aerospace Center (DLR) in Cologne, Germany, attains high accuracy rates of over 90%, based on ground truth that resulted from manual sleep staging by two experienced sleep experts. Therefore, it can be concluded that the above feature extraction methods are suitable for semi-automatic sleep staging.
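
    Relative wavelet entropy, one of the two synchronization features named above, compares the wavelet energy distributions of two signals. A minimal sketch follows, assuming the pywt package is available; the wavelet family, decomposition level, and toy signals are assumptions, and the paper's full staging pipeline is not reproduced.

        import numpy as np
        import pywt

        def relative_wavelet_entropy(x, y, wavelet="db4", level=5):
            # RWE = sum_j p_j * log(p_j / q_j), where p_j and q_j are the
            # relative wavelet energies of signals x and y at each scale j.
            def energy_dist(sig):
                coeffs = pywt.wavedec(sig, wavelet, level=level)
                energies = np.array([np.sum(c ** 2) for c in coeffs])
                return energies / energies.sum()
            p, q = energy_dist(x), energy_dist(y)
            eps = 1e-12  # guard against empty scales
            return float(np.sum(p * np.log((p + eps) / (q + eps))))

        # Toy channels: similar signals give RWE near 0, dissimilar ones larger values.
        t = np.linspace(0.0, 2.0, 1000)
        x = np.sin(2 * np.pi * 10 * t)
        y = np.sin(2 * np.pi * 10 * t + 0.1)
        print(relative_wavelet_entropy(x, y))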

  13. A Real-Time Embedded System for Stereo Vision Preprocessing Using an FPGA

    DEFF Research Database (Denmark)

    Kjær-Nielsen, Anders; Jensen, Lars Baunegaard With; Sørensen, Anders Stengaard

    2008-01-01

    In this paper a low level vision processing node for use in existing IEEE 1394 camera setups is presented. The processing node is a small embedded system that utilizes an FPGA to perform stereo vision preprocessing at rates limited by the bandwidth of IEEE 1394a (400 Mbit/s). The system is used...

  14. Performance Comparison of Several Pre-Processing Methods in a Hand Gesture Recognition System based on Nearest Neighbor for Different Background Conditions

    Directory of Open Access Journals (Sweden)

    Iwan Setyawan

    2012-12-01

    Full Text Available This paper presents a performance analysis and comparison of several pre-processing methods used in a hand gesture recognition system. The pre-processing methods are based on the combinations of several image processing operations, namely edge detection, low pass filtering, histogram equalization, thresholding and desaturation. The hand gesture recognition system is designed to classify an input image into one of six possible classes. The input images are taken with various background conditions. Our experiments showed that the best result is achieved when the pre-processing method consists of only a desaturation operation, achieving a classification accuracy of up to 83.15%.
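
    Since the best-performing pipeline reduced to a single desaturation operation, the core preprocessing is tiny. The sketch below converts RGB images to grayscale and classifies with a 1-nearest-neighbor rule on the flattened pixels; the image sizes, luminance weights, and random stand-in data are assumptions, not the paper's exact setup.

        import numpy as np

        def desaturate(rgb):
            # Luminance-weighted desaturation of an (H, W, 3) image.
            return rgb @ np.array([0.299, 0.587, 0.114])

        def nearest_neighbor(train_x, train_y, query_gray):
            # 1-NN on flattened grayscale images (Euclidean distance).
            dists = np.linalg.norm(train_x - query_gray.ravel(), axis=1)
            return train_y[np.argmin(dists)]

        rng = np.random.default_rng(1)
        train_imgs = rng.integers(0, 256, size=(60, 32, 32, 3))  # 10 per class
        train_x = np.stack([desaturate(im).ravel() for im in train_imgs])
        train_y = np.repeat(np.arange(6), 10)                    # six gesture classes
        query = rng.integers(0, 256, size=(32, 32, 3))
        print(nearest_neighbor(train_x, train_y, desaturate(query)))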

  15. Acquiring and preprocessing leaf images for automated plant identification: understanding the tradeoff between effort and information gain

    Directory of Open Access Journals (Sweden)

    Michael Rzanny

    2017-11-01

    Full Text Available Abstract Background Automated species identification is a long-term research subject. Contrary to flowers and fruits, leaves are available throughout most of the year. Offering margin and texture to characterize a species, they are the most studied organ for automated identification. Increasingly mature machine learning techniques generate a need for more training data (i.e., leaf images). Researchers as well as enthusiasts lack guidance on how to acquire suitable training images in an efficient way. Methods In this paper, we systematically study nine image types and three preprocessing strategies. Image types vary in terms of in-situ image recording conditions: perspective, illumination, and background, while the preprocessing strategies compare non-preprocessed, cropped, and segmented images to each other. Per image type-preprocessing combination, we also quantify the manual effort required for their implementation. We extract image features using a convolutional neural network, classify species using the resulting feature vectors and discuss classification accuracy in relation to the required effort per combination. Results The most effective, non-destructive way to record herbaceous leaves is to take an image of the leaf's top side. We yield the highest classification accuracy using destructive back light images, i.e., holding the plucked leaf against the sky for image acquisition. Cropping the image to the leaf's boundary substantially improves accuracy, while precise segmentation yields similar accuracy at a substantially higher effort. The permanent use or disuse of a flash light has negligible effects. Imaging the typically stronger textured backside of a leaf does not result in higher accuracy, but notably increases the acquisition cost. Conclusions In conclusion, the way in which leaf images are acquired and preprocessed does have a substantial effect on the accuracy of the classifier trained on them. For the first time, this...

  16. Clinical data miner: an electronic case report form system with integrated data preprocessing and machine-learning libraries supporting clinical diagnostic model research.

    Science.gov (United States)

    Installé, Arnaud Jf; Van den Bosch, Thierry; De Moor, Bart; Timmerman, Dirk

    2014-10-20

    Using machine-learning techniques, clinical diagnostic model research extracts diagnostic models from patient data. Traditionally, patient data are often collected using electronic Case Report Form (eCRF) systems, while mathematical software is used for analyzing these data with machine-learning techniques. Due to the lack of integration between eCRF systems and mathematical software, extracting diagnostic models is a complex, error-prone process. Moreover, due to the complexity of this process, it is usually performed only once, after a predetermined number of data points have been collected, without insight into the predictive performance of the resulting models. The objective of the Clinical Data Miner (CDM) software framework study is to offer an eCRF system with integrated data preprocessing and machine-learning libraries, improving the efficiency of the clinical diagnostic model research workflow, and to enable optimization of patient inclusion numbers through study performance monitoring. The CDM software framework was developed using a test-driven development (TDD) approach, to ensure high software quality. Architecturally, CDM's design is split over a number of modules, to ensure future extendability. The TDD approach has enabled us to deliver high software quality. CDM's eCRF Web interface is in active use by the studies of the International Endometrial Tumor Analysis consortium, with over 4000 enrolled patients, and more studies planned. Additionally, a derived user interface has been used in six separate interrater agreement studies. CDM's integrated data preprocessing and machine-learning libraries simplify some otherwise manual and error-prone steps in the clinical diagnostic model research workflow. Furthermore, CDM's libraries provide study coordinators with a method to monitor a study's predictive performance as patient inclusions increase. To our knowledge, CDM is the only eCRF system integrating data preprocessing and machine-learning libraries...

  17. Characterizing the continuously acquired cardiovascular time series during hemodialysis, using median hybrid filter preprocessing noise reduction.

    Science.gov (United States)

    Wilson, Scott; Bowyer, Andrea; Harrap, Stephen B

    2015-01-01

    The clinical characterization of cardiovascular dynamics during hemodialysis (HD) has important pathophysiological implications in terms of diagnostic, cardiovascular risk assessment, and treatment efficacy perspectives. Currently the diagnosis of significant intradialytic systolic blood pressure (SBP) changes among HD patients is imprecise and opportunistic, reliant upon the presence of hypotensive symptoms in conjunction with coincident but isolated noninvasive brachial cuff blood pressure (NIBP) readings. Considering hemodynamic variables as a time series makes a continuous recording approach more desirable than intermittent measures; however, in the clinical environment, the data signal is susceptible to corruption due to both impulsive and Gaussian-type noise. Signal preprocessing is an attractive solution to this problem. Prospectively collected continuous noninvasive SBP data over the short-break intradialytic period in ten patients was preprocessed using a novel median hybrid filter (MHF) algorithm and compared with 50 time-coincident pairs of intradialytic NIBP measures from routine HD practice. The median hybrid preprocessing technique for continuously acquired cardiovascular data yielded a dynamic regression without significant noise and artifact, suitable for high-level profiling of time-dependent SBP behavior. Signal accuracy is highly comparable with standard NIBP measurement, with the added clinical benefit of dynamic real-time hemodynamic information.
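
    A median hybrid filter replaces each sample with the median of substructure estimates rather than the median of the raw window, preserving slow trends while rejecting impulsive spikes. The sketch below shows one common construction (the median of the left-window mean, the center sample, and the right-window mean); it is an illustration under stated assumptions, not the authors' exact MHF algorithm.

        import numpy as np

        def median_hybrid_filter(x, half_window=5):
            # FIR-median hybrid filter: median of the left-window mean,
            # the center sample, and the right-window mean.
            y = x.astype(float).copy()
            k = half_window
            for i in range(k, len(x) - k):
                left = x[i - k:i].mean()
                right = x[i + 1:i + 1 + k].mean()
                y[i] = np.median([left, x[i], right])
            return y

        # Toy SBP trace: a slow trend plus one impulsive artifact.
        t = np.arange(300)
        sbp = 140 - 0.05 * t + np.random.default_rng(2).normal(0, 2, 300)
        sbp[150] = 40  # cuff/motion artifact
        clean = median_hybrid_filter(sbp)
        print(round(sbp[150], 1), round(clean[150], 1))  # spike suppressed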

  18. Pre-Processing and Modeling Tools for Bigdata

    Directory of Open Access Journals (Sweden)

    Hashem Hadi

    2016-09-01

    Full Text Available Modeling tools and operators help the user/developer identify the relevant processing field at the top of the sequence and send into the computing module only the data related to the requested result. The remaining data are not relevant and would only slow down the processing. The biggest challenge nowadays is to get high quality processing results with reduced computing time and costs. To do so, we must review the processing sequence by adding several modeling tools. The existing processing models do not take this aspect into consideration and focus on achieving high calculation performance, which increases computing time and costs. In this paper we provide a study of the main modeling tools for BigData and a new model based on pre-processing.

  19. Making the most of RNA-seq: Pre-processing sequencing data with Opossum for reliable SNP variant detection [version 1; referees: 2 approved, 1 approved with reservations

    Directory of Open Access Journals (Sweden)

    Laura Oikkonen

    2017-01-01

    Full Text Available Identifying variants from RNA-seq (transcriptome sequencing) data is a cost-effective and versatile alternative to whole-genome sequencing. However, current variant callers do not generally behave well with RNA-seq data due to reads encompassing intronic regions. We have developed a software programme called Opossum to address this problem. Opossum pre-processes RNA-seq reads prior to variant calling, and although it has been designed to work specifically with Platypus, it can be used equally well with other variant callers such as GATK HaplotypeCaller. In this work, we show that using Opossum in conjunction with either Platypus or GATK HaplotypeCaller maintains precision and improves the sensitivity for SNP detection compared to the GATK Best Practices pipeline. In addition, using it in combination with Platypus offers a substantial reduction in run times compared to the GATK pipeline so it is ideal when there are only limited time or computational resources available.

  20. Application of preprocessing filtering on Decision Tree C4.5 and rough set theory

    Science.gov (United States)

    Chan, Joseph C. C.; Lin, Tsau Y.

    2001-03-01

    This paper compares two artificial intelligence methods on stock market data: the Decision Tree C4.5 and Rough Set Theory. The Decision Tree C4.5 is reviewed alongside the Rough Set Theory. An enhanced window application is developed to facilitate pre-processing filtering by introducing feature (attribute) transformations, which allow users to input formulas and create new attributes. The application also produces three varieties of data set: delayed, averaged, and summed. The results demonstrate the improvement gained from pre-processing by applying feature (attribute) transformations with Decision Tree C4.5. Moreover, the comparison between Decision Tree C4.5 and Rough Set Theory is based on clarity, automation, accuracy, dimensionality, raw data, and speed, and is supported by the rule sets generated by both algorithms on three different data sets.
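
    The three derived data sets mentioned (delaying, averaging, and summation) correspond to standard feature transformations on a price series. A minimal sketch, assuming pandas and a hypothetical closing-price column:

        import pandas as pd

        prices = pd.DataFrame({"close": [10.0, 10.5, 10.2, 10.8, 11.0, 10.9]})

        # Delaying: lagged copies of the series become new attributes.
        prices["close_lag1"] = prices["close"].shift(1)
        prices["close_lag2"] = prices["close"].shift(2)

        # Averaging: moving average over a short window.
        prices["close_ma3"] = prices["close"].rolling(window=3).mean()

        # Summation: windowed sums as a third derived attribute.
        prices["close_sum3"] = prices["close"].rolling(window=3).sum()

        # Rows with undefined transforms are dropped before feeding C4.5 or rough sets.
        print(prices.dropna())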

  1. Cloudy Solar Software - Enhanced Capabilities for Finding, Pre-processing, and Visualizing Solar Data

    Science.gov (United States)

    Istvan Etesi, Laszlo; Tolbert, K.; Schwartz, R.; Zarro, D.; Dennis, B.; Csillaghy, A.

    2010-05-01

    In our project "Extending the Virtual Solar Observatory (VSO)” we have combined some of the features available in Solar Software (SSW) to produce an integrated environment for data analysis, supporting the complete workflow from data location, retrieval, preparation, and analysis to creating publication-quality figures. Our goal is an integrated analysis experience in IDL, easy-to-use but flexible enough to allow more sophisticated procedures such as multi-instrument analysis. To that end, we have made the transition from a locally oriented setting where all the analysis is done on the user's computer, to an extended analysis environment where IDL has access to services available on the Internet. We have implemented a form of Cloud Computing that uses the VSO search and a new data retrieval and pre-processing server (PrepServer) that provides remote execution of instrument-specific data preparation. We have incorporated the interfaces to the VSO search and the PrepServer into an IDL widget (SHOW_SYNOP) that provides user-friendly searching and downloading of raw solar data and optionally sends search results for pre-processing to the PrepServer prior to downloading the data. The raw and pre-processed data can be displayed with our plotting suite, PLOTMAN, which can handle different data types (light curves, images, and spectra) and perform basic data operations such as zooming, image overlays, solar rotation, etc. PLOTMAN is highly configurable and suited for visual data analysis and for creating publishable figures. PLOTMAN and SHOW_SYNOP work hand-in-hand for a convenient working environment. Our environment supports a growing number of solar instruments that currently includes RHESSI, SOHO/EIT, TRACE, SECCHI/EUVI, HINODE/XRT, and HINODE/EIS.

  2. Evaluation of Microarray Preprocessing Algorithms Based on Concordance with RT-PCR in Clinical Samples

    DEFF Research Database (Denmark)

    Hansen, Kasper Lage; Szallasi, Zoltan Imre; Eklund, Aron Charles

    2009-01-01

    evaluated consistency using the Pearson correlation between measurements obtained on the two platforms. Also, we introduce the log-ratio discrepancy as a more relevant measure of discordance between gene expression platforms. Of nine preprocessing algorithms tested, PLIER+16 produced expression values...

  3. A Technical Review on Biomass Processing: Densification, Preprocessing, Modeling and Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Jaya Shankar Tumuluru; Christopher T. Wright

    2010-06-01

    It is now a well-acclaimed fact that burning fossil fuels and deforestation are major contributors to climate change. Biomass from plants can serve as an alternative renewable and carbon-neutral raw material for the production of bioenergy. Low densities of 40–60 kg/m3 for lignocellulosic and 200–400 kg/m3 for woody biomass limit their application for energy purposes. Prior to use in energy applications these materials need to be densified. Densified biomass can have bulk densities over 10 times that of the raw material, helping to significantly reduce the technical limitations associated with storage, loading, and transportation. Pelleting, briquetting, and extrusion processing are commonly used methods for densification. The aim of the present research is to develop a comprehensive review of biomass processing that includes densification, preprocessing, modeling, and optimization. The specific objectives include carrying out a technical review on (a) mechanisms of particle bonding during densification; (b) methods of densification including extrusion, briquetting, pelleting, and agglomeration; (c) effects of process and feedstock variables and biomass biochemical composition on densification; (d) effects of preprocessing such as grinding, preheating, steam explosion, and torrefaction on biomass quality and binding characteristics; (e) models for understanding the compression characteristics; and (f) procedures for response surface modeling and optimization.

  4. ENDF/B Pre-Processing Codes: Implementing and testing on a Personal Computer

    International Nuclear Information System (INIS)

    McLaughlin, P.K.

    1987-05-01

    This document describes the contents of the diskettes containing the ENDF/B Pre-Processing codes by D.E. Cullen, and example data for use in implementing and testing these codes on a Personal Computer of the type IBM-PC/AT. Upon request the codes are available from the IAEA Nuclear Data Section, free of charge, on a series of 7 diskettes. (author)

  5. On the Equivalence between Small-Step and Big-Step Abstract Machines: A Simple Application of Lightweight Fusion

    DEFF Research Database (Denmark)

    Danvy, Olivier; Millikin, Kevin

    2007-01-01

    -step specification. We illustrate this observation here with a recognizer for Dyck words, the CEK machine, and Krivine's machine with call/cc. The need for such a simple proof is motivated by our current work on small-step abstract machines as obtained by refocusing a function implementing a reduction semantics (a...

  6. On the equivalence between small-step and big-step abstract machines: a simple application of lightweight fusion

    DEFF Research Database (Denmark)

    Danvy, Olivier; Millikin, Kevin

    2008-01-01

    -step specification. We illustrate this observation here with a recognizer for Dyck words, the CEK machine, and Krivine's machine with call/cc. The need for such a simple proof is motivated by our current work on small-step abstract machines as obtained by refocusing a function implementing a reduction semantics (a syntactic correspondence), and big-step abstract machines as obtained by CPS-transforming and then defunctionalizing a function implementing a big-step semantics (a functional correspondence). © 2007 Elsevier B.V. All rights reserved.

  7. Speech perception for adult cochlear implant recipients in a realistic background noise: effectiveness of preprocessing strategies and external options for improving speech recognition in noise.

    Science.gov (United States)

    Gifford, René H; Revit, Lawrence J

    2010-01-01

    Although cochlear implant patients are achieving increasingly higher levels of performance, speech perception in noise continues to be problematic. The newest generations of implant speech processors are equipped with preprocessing and/or external accessories that are purported to improve listening in noise. Most speech perception measures in the clinical setting, however, do not provide a close approximation to real-world listening environments. To assess speech perception for adult cochlear implant recipients in the presence of a realistic restaurant simulation generated by an eight-loudspeaker (R-SPACE) array in order to determine whether commercially available preprocessing strategies and/or external accessories yield improved sentence recognition in noise. Single-subject, repeated-measures design with two groups of participants: Advanced Bionics and Cochlear Corporation recipients. Thirty-four subjects, ranging in age from 18 to 90 yr (mean 54.5 yr), participated in this prospective study. Fourteen subjects were Advanced Bionics recipients, and 20 subjects were Cochlear Corporation recipients. Speech reception thresholds (SRTs) in semidiffuse restaurant noise originating from an eight-loudspeaker array were assessed with the subjects' preferred listening programs as well as with the addition of either Beam preprocessing (Cochlear Corporation) or the T-Mic accessory option (Advanced Bionics). In Experiment 1, adaptive SRTs with the Hearing in Noise Test sentences were obtained for all 34 subjects. For Cochlear Corporation recipients, SRTs were obtained with their preferred everyday listening program as well as with the addition of Focus preprocessing. For Advanced Bionics recipients, SRTs were obtained with the integrated behind-the-ear (BTE) mic as well as with the T-Mic. Statistical analysis using a repeated-measures analysis of variance (ANOVA) evaluated the effects of the preprocessing strategy or external accessory in reducing the SRT in noise. In addition

  8. Calling to Nursing: Concept Analysis.

    Science.gov (United States)

    Emerson, Christie

    The aims of this article are (a) to analyze the concept of a calling as it relates to nursing and (b) to develop a definition of calling to nursing with the detail and clarity needed to guide reliable and valid research. The classic steps described by Walker and Avant are used for the analysis. Literature from several disciplines is reviewed, including vocational psychology, Christian career counseling, sociology, organizational management, and nursing. The analysis provides an operational definition of a calling to nursing and establishes 3 defining attributes of the concept: (a) a passionate intrinsic motivation or desire (perhaps with a religious component), (b) an aspiration to engage in nursing practice as a means of fulfilling one's purpose in life, and (c) the desire to help others as one's purpose in life. Antecedents to the concept are personal introspection and cognitive awareness. Positive consequences of the concept are improved work meaningfulness, work engagement, career commitment, personal well-being, and satisfaction. Negative consequences of having a calling might include willingness to sacrifice well-being for work and problems with work-life balance. Following the concept analysis, philosophical assumptions, contextual factors, interdisciplinary work, research opportunities, and practice implications are discussed.

  9. Scene matching based on non-linear pre-processing on reference image and sensed image

    Institute of Scientific and Technical Information of China (English)

    Zhong Sheng; Zhang Tianxu; Sang Nong

    2005-01-01

    To solve the heterogeneous image scene matching problem, a non-linear pre-processing method for the original images before intensity-based correlation is proposed. The result shows that the proper matching probability is raised greatly. Especially for the low S/N image pairs, the effect is more remarkable.

  10. Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.

    Science.gov (United States)

    Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk

    2018-07-01

    Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements, because of severely increasing computation complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. Previous JND models were built by adding white Gaussian noise or specific signal patterns into the original images, which is not appropriate for finding JND thresholds of distortions involving energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models that can be applied as preprocessing for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameters for JNQD from extracted handcrafted features. The other JNQD model, called CNN-JNQD, is based on a convolutional neural network (CNN). To the best of our knowledge, this is the first approach to automatically adjust JND levels according to quantization step sizes when preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing applied.

  11. Preprocessing with Photoshop Software on Microscopic Images of A549 Cells in Epithelial-Mesenchymal Transition.

    Science.gov (United States)

    Ren, Zhou-Xin; Yu, Hai-Bin; Shen, Jun-Ling; Li, Ya; Li, Jian-Sheng

    2015-06-01

    To establish a preprocessing method for cell morphometry in microscopic images of A549 cells in epithelial-mesenchymal transition (EMT). Adobe Photoshop CS2 (Adobe Systems, Inc.) was used for preprocessing the images. First, all images were processed for size uniformity and high distinguishability between the cell and background areas. Then, a blank image of the same size was created with grids, and the cross points of the grids were marked in a distinct color. The blank image was merged into a processed image. In the merged images, cells containing 1 or more cross points were chosen, and the cell areas were enclosed and recolored in a distinct color. Except for the chosen cellular areas, all areas were changed into a uniform hue. Three observers quantified the roundness of cells in images with the image preprocessing (IPP) method or without it (Controls). Furthermore, 1 observer measured the roundness 3 times with each of the 2 methods. The results from IPPs and Controls were compared for repeatability and reproducibility. Compared with the Control method, the IPP method yielded, across the 3 observers, a higher number and percentage of identically chosen cells per image. The relative average deviation values of roundness, whether for 3 observers or 1 observer, were significantly higher in Controls than in IPPs. With preprocessing in Photoshop, a chosen cell from an image was more objective, regular, and accurate, increasing the reproducibility and repeatability of morphometry of A549 cells in epithelial-mesenchymal transition.

  12. A graphical method to evaluate spectral preprocessing in multivariate regression calibrations: example with Savitzky-Golay filters and partial least squares regression.

    Science.gov (United States)

    Delwiche, Stephen R; Reeves, James B

    2010-01-01

    In multivariate regression analysis of spectroscopy data, spectral preprocessing is often performed to reduce unwanted background information (offsets, sloped baselines) or accentuate absorption features in intrinsically overlapping bands. These procedures, also known as pretreatments, are commonly smoothing operations or derivatives. While such operations are often useful in reducing the number of latent variables of the actual decomposition and lowering residual error, they also run the risk of misleading the practitioner into accepting calibration equations that are poorly adapted to samples outside of the calibration. The current study developed a graphical method to examine this effect on partial least squares (PLS) regression calibrations of near-infrared (NIR) reflection spectra of ground wheat meal with two analytes, protein content and sodium dodecyl sulfate sedimentation (SDS) volume (an indicator of the quantity of the gluten proteins that contribute to strong doughs). These two properties were chosen because of their differing abilities to be modeled by NIR spectroscopy: excellent for protein content, fair for SDS sedimentation volume. To further demonstrate the potential pitfalls of preprocessing, an artificial component, a randomly generated value, was included in PLS regression trials. Savitzky-Golay (digital filter) smoothing, first-derivative, and second-derivative preprocess functions (5 to 25 centrally symmetric convolution points, derived from quadratic polynomials) were applied to PLS calibrations of 1 to 15 factors. The results demonstrated the danger of an over reliance on preprocessing when (1) the number of samples used in a multivariate calibration is low (<50), (2) the spectral response of the analyte is weak, and (3) the goodness of the calibration is based on the coefficient of determination (R(2)) rather than a term based on residual error. The graphical method has application to the evaluation of other preprocess functions and various
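
    The evaluated pairing of Savitzky-Golay pretreatment with PLS regression can be sketched directly. The example below uses simulated spectra rather than the study's wheat data, and the filter settings and factor count are assumptions; cross-validated error is reported because the study warns against judging calibrations by R(2) alone.

        import numpy as np
        from scipy.signal import savgol_filter
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(3)
        X = rng.normal(size=(100, 400)).cumsum(axis=1)  # stand-in NIR spectra
        y = 0.4 * X[:, 120] + rng.normal(0, 0.5, 100)   # synthetic analyte

        # Second-derivative pretreatment with an 11-point quadratic filter.
        X_d2 = savgol_filter(X, window_length=11, polyorder=2, deriv=2, axis=1)

        # Cross-validated predictions guard against over-optimistic calibrations.
        pls = PLSRegression(n_components=8)
        y_cv = cross_val_predict(pls, X_d2, y, cv=10)
        rmsecv = np.sqrt(np.mean((y - y_cv.ravel()) ** 2))
        print(round(rmsecv, 3))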

  13. Automated cleaning and pre-processing of immunoglobulin gene sequences from high-throughput sequencing

    Directory of Open Access Journals (Sweden)

    Miri eMichaeli

    2012-12-01

    Full Text Available High throughput sequencing (HTS) yields tens of thousands to millions of sequences that require a large amount of pre-processing work to clean various artifacts. Such cleaning cannot be performed manually. Existing programs are not suitable for immunoglobulin (Ig) genes, which are variable and often highly mutated. This paper describes Ig-HTS-Cleaner (Ig High Throughput Sequencing Cleaner), a program containing a simple cleaning procedure that successfully deals with pre-processing of Ig sequences derived from HTS, and Ig-Indel-Identifier (Ig Insertion – Deletion Identifier), a program for identifying legitimate and artifact insertions and/or deletions (indels). Our programs were designed for analyzing Ig gene sequences obtained by 454 sequencing, but they are applicable to all types of sequences and sequencing platforms. Ig-HTS-Cleaner and Ig-Indel-Identifier have been implemented in Java and saved as executable JAR files, supported on Linux and MS Windows. No special requirements are needed in order to run the programs, except for correctly constructing the input files as explained in the text. The programs' performance has been tested and validated on real and simulated data sets.

  14. The effects of pre-processing strategies in sentiment analysis of online movie reviews

    Science.gov (United States)

    Zin, Harnani Mat; Mustapha, Norwati; Murad, Masrah Azrifah Azmi; Sharef, Nurfadhlina Mohd

    2017-10-01

    With the ever-increasing number of internet applications and social networking sites, people nowadays can easily express their feelings towards any product or service. These online reviews act as an important source for further analysis and improved decision making. The reviews are mostly unstructured by nature and thus need processing, such as sentiment analysis and classification, to provide meaningful information for future use. In text analysis tasks, the appropriate selection of words/features has a huge impact on the effectiveness of the classifier. Thus, this paper explores the effect of pre-processing strategies in the sentiment analysis of online movie reviews. A supervised machine learning method was used to classify the reviews: the support vector machine (SVM) with linear and non-linear kernels was considered as the classifier. The performance of the classifier is critically examined based on the results of precision, recall, f-measure, and accuracy. Two different feature representations were used: term frequency and term frequency-inverse document frequency. Results show that the pre-processing strategies have a significant impact on the classification process.
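
    The described setup maps onto a short scikit-learn pipeline. Below is a minimal sketch of one plausible configuration (TF-IDF features with a linear-kernel SVM), with toy reviews standing in for the movie-review corpus; the preprocessing choices shown are examples, not the paper's exact strategies.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        reviews = ["a moving, beautifully acted film",
                   "dull plot and wooden acting",
                   "an instant classic",
                   "a waste of two hours"]
        labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

        # Lowercasing and stop-word removal are examples of the kind of
        # pre-processing decisions whose impact the study measures.
        clf = make_pipeline(TfidfVectorizer(lowercase=True, stop_words="english"),
                            LinearSVC())
        clf.fit(reviews, labels)
        print(clf.predict(["wooden and dull", "beautifully acted"]))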

  15. Characterizing the continuously acquired cardiovascular time series during hemodialysis, using median hybrid filter preprocessing noise reduction

    Directory of Open Access Journals (Sweden)

    Wilson S

    2015-01-01

    Full Text Available Scott Wilson,1,2 Andrea Bowyer,3 Stephen B Harrap4 1Department of Renal Medicine, The Alfred Hospital, 2Baker IDI, Melbourne, 3Department of Anaesthesia, Royal Melbourne Hospital, 4University of Melbourne, Parkville, VIC, Australia Abstract: The clinical characterization of cardiovascular dynamics during hemodialysis (HD) has important pathophysiological implications in terms of diagnostic, cardiovascular risk assessment, and treatment efficacy perspectives. Currently the diagnosis of significant intradialytic systolic blood pressure (SBP) changes among HD patients is imprecise and opportunistic, reliant upon the presence of hypotensive symptoms in conjunction with coincident but isolated noninvasive brachial cuff blood pressure (NIBP) readings. Considering hemodynamic variables as a time series makes a continuous recording approach more desirable than intermittent measures; however, in the clinical environment, the data signal is susceptible to corruption due to both impulsive and Gaussian-type noise. Signal preprocessing is an attractive solution to this problem. Prospectively collected continuous noninvasive SBP data over the short-break intradialytic period in ten patients was preprocessed using a novel median hybrid filter (MHF) algorithm and compared with 50 time-coincident pairs of intradialytic NIBP measures from routine HD practice. The median hybrid preprocessing technique for continuously acquired cardiovascular data yielded a dynamic regression without significant noise and artifact, suitable for high-level profiling of time-dependent SBP behavior. Signal accuracy is highly comparable with standard NIBP measurement, with the added clinical benefit of dynamic real-time hemodynamic information. Keywords: continuous monitoring, blood pressure

  16. Safe and sensible preprocessing and baseline correction of pupil-size data.

    Science.gov (United States)

    Mathôt, Sebastiaan; Fabius, Jasper; Van Heusden, Elle; Van der Stigchel, Stefan

    2018-02-01

    Measurement of pupil size (pupillometry) has recently gained renewed interest from psychologists, but there is little agreement on how pupil-size data is best analyzed. Here we focus on one aspect of pupillometric analyses: baseline correction, i.e., analyzing changes in pupil size relative to a baseline period. Baseline correction is useful in experiments that investigate the effect of some experimental manipulation on pupil size. In such experiments, baseline correction improves statistical power by taking into account random fluctuations in pupil size over time. However, we show that baseline correction can also distort data if unrealistically small pupil sizes are recorded during the baseline period, which can easily occur due to eye blinks, data loss, or other distortions. Divisive baseline correction (corrected pupil size = pupil size/baseline) is affected more strongly by such distortions than subtractive baseline correction (corrected pupil size = pupil size - baseline). We discuss the role of baseline correction as a part of preprocessing of pupillometric data, and make five recommendations: (1) before baseline correction, perform data preprocessing to mark missing and invalid data, but assume that some distortions will remain in the data; (2) use subtractive baseline correction; (3) visually compare your corrected and uncorrected data; (4) be wary of pupil-size effects that emerge faster than the latency of the pupillary response allows (within ±220 ms after the manipulation that induces the effect); and (5) remove trials on which baseline pupil size is unrealistically small (indicative of blinks and other distortions).
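
    The recommendations translate into a few lines of array code. The sketch below applies subtractive baseline correction and removes trials with unrealistically small baselines (recommendations 2 and 5); the baseline window, the cutoff value, and the trial matrix layout are illustrative assumptions.

        import numpy as np

        def baseline_correct(trials, baseline_samples=50, min_baseline=1.5):
            # Subtractive correction for a (n_trials, n_samples) array of
            # pupil sizes; trials with implausibly small baselines (blinks,
            # data loss) are excluded before correction.
            baselines = trials[:, :baseline_samples].mean(axis=1)
            valid = baselines >= min_baseline
            corrected = trials[valid] - baselines[valid, None]
            return corrected, valid

        rng = np.random.default_rng(4)
        trials = rng.normal(3.0, 0.2, size=(20, 500))  # pupil size in mm
        trials[3, :50] = 0.2                           # blink during baseline
        corrected, valid = baseline_correct(trials)
        print(corrected.shape, int((~valid).sum()))    # (19, 500) 1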

  17. On image pre-processing for PIV of single- and two-phase flows over reflecting objects

    NARCIS (Netherlands)

    Deen, N.G.; Willems, P.; van Sint Annaland, M.; Kuipers, J.A.M.; Lammertink, Rob G.H.; Kemperman, Antonius J.B.; Wessling, Matthias; van der Meer, Walterus Gijsbertus Joseph

    2010-01-01

    A novel image pre-processing scheme for PIV of single- and two-phase flows over reflecting objects which does not require the use of additional hardware is discussed. The approach for single-phase flow consists of image normalization and intensity stretching followed by background subtraction. For

  18. Critical flux determination by flux-stepping

    DEFF Research Database (Denmark)

    Beier, Søren; Jonsson, Gunnar Eigil

    2010-01-01

    In membrane filtration related scientific literature, step-by-step determined critical fluxes are often reported. Using a dynamic microfiltration device, it is shown that critical fluxes determined from two different flux-stepping methods depend upon operational parameters such as step length, step height, and flux start level. Filtering 8 kg/m(3) yeast cell suspensions with a vibrating 0.45 x 10(-6) m pore size microfiltration hollow fiber module, critical fluxes from 5.6 x 10(-6) to 1.2 x 10(-5) m/s have been measured using various step lengths from 300 to 1200 seconds. Thus, such values are more or less useless in themselves as critical flux predictors, and constant flux verification experiments have to be conducted to check whether the determined critical fluxes can predict sustainable flux regimes. However, it is shown that using the step-by-step predicted critical fluxes as start...

  19. Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.

    Science.gov (United States)

    Koprowski, Robert

    2015-11-01

    The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems occurring in Matlab when trying to analyse this type of image. Moreover, new methods are discussed which provide source code in Matlab that can be used in practice without any licensing restrictions. The proposed application and a sample result of hyperspectral image analysis are presented. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. One-Step Dynamic Classifier Ensemble Model for Customer Value Segmentation with Missing Values

    Directory of Open Access Journals (Sweden)

    Jin Xiao

    2014-01-01

    Full Text Available Scientific customer value segmentation (CVS) is the basis of efficient customer relationship management; customer credit scoring, fraud detection, and churn prediction all belong to CVS. In real CVS, the customer data usually include many missing values, which may greatly affect the performance of the CVS model. This study proposes a one-step dynamic classifier ensemble model for missing values (ODCEM). On the one hand, ODCEM integrates the preprocessing of missing values and the classification modeling into one step; on the other hand, it utilizes multiple-classifier ensemble technology in constructing the classification models. The empirical results on the credit scoring dataset "German" from UCI and the real customer churn prediction dataset "China churn" show that ODCEM outperforms four commonly used "two-step" models and the ensemble based model LMF, and can provide better decision support for market managers.

  1. Preprocessing in a Tiered Sensor Network for Habitat Monitoring

    Directory of Open Access Journals (Sweden)

    Hanbiao Wang

    2003-03-01

    Full Text Available We investigate task decomposition and collaboration in a two-tiered sensor network for habitat monitoring. The system recognizes and localizes a specified type of birdcall. The system has a few powerful macronodes in the first tier and many less powerful micronodes in the second tier. Each macronode combines data collected by multiple micronodes for target classification and localization. We describe two types of lightweight preprocessing which significantly reduce data transmission from micronodes to macronodes. Micronodes classify events according to their zero-crossing rates and discard irrelevant events. Data about events of interest are reduced and compressed before being transmitted to macronodes for target localization. Preliminary experiments illustrate the effectiveness of event filtering and data reduction at micronodes.
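
    Filtering by zero-crossing rate is cheap enough to run on the micronodes. A sketch follows, assuming windowed acoustic samples and a hand-set rate band for the target birdcall; the band limits and signals are made up for illustration.

        import numpy as np

        def zero_crossing_rate(window):
            # Fraction of successive sample pairs that change sign.
            signs = np.sign(window)
            return float(np.mean(signs[:-1] != signs[1:]))

        def is_candidate(window, low=0.05, high=0.30):
            # Micronode-side filter: keep only events whose zero-crossing
            # rate falls in the band expected for the target birdcall.
            return low <= zero_crossing_rate(window) <= high

        t = np.linspace(0, 1, 8000)
        birdcall = np.sin(2 * np.pi * 900 * t)              # tonal event
        noise = np.random.default_rng(5).normal(size=8000)  # broadband noise
        print(is_candidate(birdcall), is_candidate(noise))  # True False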

  2. ADDING A NEW STEP WITH SPATIAL AUTOCORRELATION TO IMPROVE THE FOUR-STEP TRAVEL DEMAND MODEL WITH FEEDBACK FOR A DEVELOPING CITY

    Directory of Open Access Journals (Sweden)

    Xuesong FENG, Ph.D Candidate

    2009-01-01

    Full Text Available It is expected that improvement of transport networks could give rise to changes in the spatial distributions of population-related factors and car ownership, which are expected to further influence travel demand. To properly reflect such an interdependence mechanism, an aggregate multinomial logit (A-MNL) model was first applied to represent the spatial distributions of these exogenous variables of the travel demand model by reflecting the influence of transport networks. Next, spatial autocorrelation analysis is introduced into the log-transformed A-MNL model (called the SPA-MNL model). Thereafter, the SPA-MNL model is integrated into the four-step travel demand model with feedback (called the 4-STEP model). As a result, an integrated travel demand model is newly developed and named the SPA-STEP model. Using person trip data collected in Beijing, the performance of the SPA-STEP model is empirically compared with the 4-STEP model. It was proven that the SPA-STEP model is superior to the 4-STEP model in accuracy; most of the estimated parameters showed statistical differences in values. Moreover, though the simulations of the same set of assumed scenarios by the 4-STEP model and the SPA-STEP model consistently suggested the same sustainable path for the future development of Beijing, it was found that the environmental sustainability and the traffic congestion for these scenarios were generally overestimated by the 4-STEP model compared with the corresponding analyses by the SPA-STEP model. Such differences were clearly generated by the introduction of the new modeling step with spatial autocorrelation.

  3. THE EFFECT OF DECOMPOSITION METHOD AS DATA PREPROCESSING ON NEURAL NETWORKS MODEL FOR FORECASTING TREND AND SEASONAL TIME SERIES

    Directory of Open Access Journals (Sweden)

    Subanar Subanar

    2006-01-01

    Full Text Available Recently, one of the central topics for the neural networks (NN) community has been the issue of data preprocessing for the use of NN. In this paper, we investigate this topic, particularly the effect of the decomposition method as data preprocessing and the use of NN for effectively modeling time series with both trend and seasonal patterns. Limited empirical studies on seasonal time series forecasting with neural networks show that some find neural networks able to model seasonality directly, so that prior deseasonalization is not necessary, while others conclude just the opposite. In this research, we study in particular the effectiveness of data preprocessing, including detrending and deseasonalization by applying the decomposition method, on NN modeling and forecasting performance. We use two kinds of data: simulated and real. The simulated data examine multiplicative trend and seasonality patterns. The results are compared to those obtained from classical time series models. Our result shows that a combination of detrending and deseasonalization by applying the decomposition method is effective data preprocessing for the use of NN in forecasting trend and seasonal time series.
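
    The decomposition preprocessing studied here can be sketched with statsmodels: detrend and deseasonalize first, model the remainder, then recompose the forecast. The multiplicative model and monthly period below are assumptions matching the simulated patterns described; the neural network itself is omitted.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.seasonal import seasonal_decompose

        # Simulated monthly series with multiplicative trend and seasonality.
        idx = pd.date_range("2000-01", periods=96, freq="MS")
        trend = np.linspace(100, 200, 96)
        season = 1 + 0.2 * np.sin(2 * np.pi * np.arange(96) / 12)
        y = pd.Series(trend * season, index=idx)

        # Decompose; the remainder is what the neural network would then model.
        parts = seasonal_decompose(y, model="multiplicative", period=12)
        remainder = (y / (parts.trend * parts.seasonal)).dropna()
        print(remainder.head())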

  4. lop-DWI: A Novel Scheme for Pre-Processing of Diffusion-Weighted Images in the Gradient Direction Domain.

    Science.gov (United States)

    Sepehrband, Farshid; Choupan, Jeiran; Caruyer, Emmanuel; Kurniawan, Nyoman D; Gal, Yaniv; Tieng, Quang M; McMahon, Katie L; Vegh, Viktor; Reutens, David C; Yang, Zhengyi

    2014-01-01

    We describe and evaluate a pre-processing method based on a periodic spiral sampling of diffusion-gradient directions for high angular resolution diffusion magnetic resonance imaging. Our pre-processing method incorporates prior knowledge about the acquired diffusion-weighted signal, facilitating noise reduction. Periodic spiral sampling of gradient direction encodings results in an acquired signal in each voxel that is pseudo-periodic with characteristics that allow separation of low-frequency signal from high frequency noise. Consequently, it enhances local reconstruction of the orientation distribution function used to define fiber tracks in the brain. Denoising with periodic spiral sampling was tested using synthetic data and in vivo human brain images. The level of improvement in signal-to-noise ratio and in the accuracy of local reconstruction of fiber tracks was significantly improved using our method.

  5. A review of blood sample handling and pre-processing for metabolomics studies.

    Science.gov (United States)

    Hernandes, Vinicius Veri; Barbas, Coral; Dudzik, Danuta

    2017-09-01

    Metabolomics has been found to be applicable to a wide range of clinical studies, bringing a new era for improving clinical diagnostics, early disease detection, therapy prediction and treatment efficiency monitoring. A major challenge in metabolomics, particularly untargeted studies, is the extremely diverse and complex nature of biological specimens. Despite great advances in the field there still exist fundamental needs for considering pre-analytical variability that can introduce bias to the subsequent analytical process, decrease the reliability of the results, and moreover confound final research outcomes. Many researchers are mainly focused on the instrumental aspects of the biomarker discovery process, and sample related variables sometimes seem to be overlooked. To bridge the gap, critical information and standardized protocols regarding experimental design and sample handling and pre-processing are highly desired. Characterization of the range of variation among sample collection methods is necessary to prevent misinterpretation of results and to ensure that observed differences are not due to an experimental bias caused by inconsistencies in sample processing. Herein, a systematic discussion of pre-analytical variables affecting metabolomics studies based on blood derived samples is performed. Furthermore, we provide a set of recommendations concerning experimental design, collection, pre-processing procedures and storage conditions as a practical review that can guide and serve for the standardization of protocols and reduction of undesirable variation. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Computational Testing for Automated Preprocessing 2: Practical Demonstration of a System for Scientific Data-Processing Workflow Management for High-Volume EEG.

    Science.gov (United States)

    Cowley, Benjamin U; Korpela, Jussi

    2018-01-01

    Existing tools for the preprocessing of EEG data provide a large choice of methods to suitably prepare and analyse a given dataset. Yet it remains a challenge for the average user to integrate methods for batch processing of the increasingly large datasets of modern research, and compare methods to choose an optimal approach across the many possible parameter configurations. Additionally, many tools still require a high degree of manual decision making for, e.g., the classification of artifacts in channels, epochs or segments. This introduces extra subjectivity, is slow, and is not reproducible. Batching and well-designed automation can help to regularize EEG preprocessing, and thus reduce human effort, subjectivity, and consequent error. The Computational Testing for Automated Preprocessing (CTAP) toolbox facilitates: (i) batch processing that is easy for experts and novices alike; (ii) testing and comparison of preprocessing methods. Here we demonstrate the application of CTAP to high-resolution EEG data in three modes of use. First, a linear processing pipeline with mostly default parameters illustrates ease-of-use for naive users. Second, a branching pipeline illustrates CTAP's support for comparison of competing methods. Third, a pipeline with built-in parameter-sweeping illustrates CTAP's capability to support data-driven method parameterization. CTAP extends the existing functions and data structure from the well-known EEGLAB toolbox, based on Matlab, and produces extensive quality control outputs. CTAP is available under MIT open-source licence from https://github.com/bwrc/ctap.

  7. Effects of Preprocessing on Multi-Direction Properties of Aluminum Alloy Cold-Spray Deposits

    Science.gov (United States)

    Rokni, M. R.; Nardi, A. T.; Champagne, V. K.; Nutt, S. R.

    2018-05-01

    The effects of powder preprocessing (degassing at 400 °C for 6 h) on microstructure and mechanical properties of 5056 aluminum deposits produced by high-pressure cold spray were investigated. To investigate directionality of the mechanical properties, microtensile coupons were excised from different directions of the deposit, i.e., longitudinal, short transverse, long transverse, and diagonal and then tested. The results were compared to properties of wrought 5056 and the coating deposited with as-received 5056 Al powder and correlated with the observed microstructures. Preprocessing softened the particles and eliminated the pores within them, resulting in more extensive and uniform deformation upon impact with the substrate and with underlying deposited material. Microstructural characterization and finite element simulation indicated that upon particle impact, the peripheral regions experienced more extensive deformation and higher temperatures than the central contact zone. This led to more recrystallization and stronger bonding at peripheral regions relative to the contact zone area and yielded superior properties in the longitudinal direction compared with the short transverse direction. Fractography revealed that crack propagation takes place along the particle-particle interfaces in the transverse directions (caused by insufficient bonding and recrystallization), whereas through the deposited particles, fracture is dominant in the longitudinal direction.

  8. Call Forecasting for Inbound Call Center

    Directory of Open Access Journals (Sweden)

    Peter Vinje

    2009-01-01

    Full Text Available In a scenario of inbound call center customer service, the ability to forecast calls is a key element and advantage. By forecasting the correct number of calls a company can predict staffing needs, meet service level requirements, improve customer satisfaction, and benefit from many other optimizations. This project will show how elementary statistics can be used to predict calls for a specific company, forecast the rate at which calls are increasing/decreasing, and determine if the calls may stop at some point.
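
    As an illustration of the elementary-statistics approach described above, a minimal sketch in Python (the call counts and the linear-trend model are illustrative assumptions, not the paper's data or exact method):

        import numpy as np

        # Hypothetical daily inbound call counts (illustrative only).
        calls = np.array([412, 398, 430, 445, 441, 460, 476, 470, 492, 505], float)
        days = np.arange(len(calls))

        # Fit a linear trend, calls ~ a*day + b, by elementary least squares.
        a, b = np.polyfit(days, calls, 1)
        print(f"calls change by about {a:.1f} per day")

        # Forecast the next five days from the fitted trend.
        future = np.arange(len(calls), len(calls) + 5)
        print(np.round(a * future + b))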

  9. Textural Analysis of Fatigue Crack Surfaces: Image Pre-processing

    Directory of Open Access Journals (Sweden)

    H. Lauschmann

    2000-01-01

    Full Text Available For fatigue crack history reconstitution, new methods of quantitative microfractography are being developed based on image processing and textural analysis. SEM magnifications between micro- and macrofractography are used. Two image pre-processing operations were suggested and proven suitable for preparing the crack surface images for analytical treatment: 1. Normalization is used to transform the image to a stationary form. Compared to the generally used equalization, it preserves the shape of the brightness distribution and retains the character of the texture. 2. Binarization is used to transform the grayscale image to a system of thick fibres. An objective criterion for the threshold brightness was found: the value that results in the maximum number of objects. Both methods were successfully applied together with the subsequent textural analysis.
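
    The threshold criterion described above (choose the binarization threshold that yields the maximum number of objects) is easy to reproduce; a minimal sketch with numpy/scipy on a stand-in texture image:

        import numpy as np
        from scipy import ndimage

        def objective_threshold(img, n_levels=64):
            """Return the threshold that maximizes the number of connected
            objects in the binarized image, per the criterion in the abstract."""
            best_t, best_n = img.min(), -1
            for t in np.linspace(img.min(), img.max(), n_levels):
                _, n = ndimage.label(img > t)
                if n > best_n:
                    best_t, best_n = t, n
            return best_t

        rng = np.random.default_rng(0)
        img = ndimage.gaussian_filter(rng.random((256, 256)), 3)  # stand-in texture
        t = objective_threshold(img)
        print("threshold:", round(t, 3), "objects:", ndimage.label(img > t)[1])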

  10. PRACTICAL RECOMMENDATIONS OF DATA PREPROCESSING AND GEOSPATIAL MEASURES FOR OPTIMIZING THE NEUROLOGICAL AND OTHER PEDIATRIC EMERGENCIES MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Ionela MANIU

    2017-08-01

    Full Text Available Time management, optimal and timely determination of emergency severity, and optimal use of available human and material resources are crucial areas of emergency services. A starting point for achieving these optimizations is the analysis and preprocessing of real data from the emergency services. The benefit of this step consists in exposing more useful structure to data modelling algorithms, which consequently reduces overfitting and improves accuracy. This paper aims to offer practical recommendations for data preprocessing measures, including feature selection and discretization of numeric attributes regarding age, duration of the case, season, period, week period (workday, weekend) and geospatial location of neurological and other pediatric emergencies. An analytical, retrospective study was conducted on a sample consisting of 933 pediatric cases from UPU-SMURD Sibiu over the period 01.01.2014–27.02.2017.
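
    A sketch of the kind of discretization the paper recommends, using pandas (column names and bin edges are illustrative, not those of the UPU-SMURD data set):

        import pandas as pd

        df = pd.DataFrame({
            "age_years": [0.5, 3, 7, 12, 16],
            "timestamp": pd.to_datetime(["2014-01-05", "2015-06-20", "2016-03-01",
                                         "2016-11-12", "2017-02-25"]),
        })

        # Discretize the numeric age attribute (bin edges are illustrative).
        df["age_group"] = pd.cut(df["age_years"], bins=[0, 1, 5, 10, 18],
                                 labels=["infant", "toddler", "child", "adolescent"])

        # Derive season and week-period (workday/weekend) features.
        df["season"] = df["timestamp"].dt.month % 12 // 3   # 0=winter ... 3=autumn
        df["week_period"] = df["timestamp"].dt.dayofweek.map(
            lambda d: "weekend" if d >= 5 else "workday")
        print(df)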

  11. Statistics in experimental design, preprocessing, and analysis of proteomics data.

    Science.gov (United States)

    Jung, Klaus

    2011-01-01

    High-throughput experiments in proteomics, such as 2-dimensional gel electrophoresis (2-DE) and mass spectrometry (MS), usually yield high-dimensional data sets of expression values for hundreds or thousands of proteins, which are, however, observed on only a relatively small number of biological samples. Statistical methods for the planning and analysis of experiments are important to avoid false conclusions and to obtain tenable results. In this chapter, the most frequent experimental designs for proteomics experiments are illustrated. In particular, focus is put on studies for the detection of differentially regulated proteins. Furthermore, issues of sample size planning, statistical analysis of expression levels, as well as methods for data preprocessing are covered.

  12. Making the most of RNA-seq: Pre-processing sequencing data with Opossum for reliable SNP variant detection [version 2; referees: 2 approved, 1 approved with reservations

    Directory of Open Access Journals (Sweden)

    Laura Oikkonen

    2017-03-01

    Full Text Available Identifying variants from RNA-seq (transcriptome sequencing) data is a cost-effective and versatile complement to whole-exome (WES) and whole-genome sequencing (WGS) analysis. RNA-seq is primarily considered a method of gene expression analysis, but it can also be used to detect DNA variants in expressed regions of the genome. However, current variant callers do not generally behave well with RNA-seq data due to reads encompassing intronic regions. We have developed a software programme called Opossum to address this problem. Opossum pre-processes RNA-seq reads prior to variant calling, and although it has been designed to work specifically with Platypus, it can be used equally well with other variant callers such as GATK HaplotypeCaller. In this work, we show that using Opossum in conjunction with either Platypus or GATK HaplotypeCaller maintains precision and improves the sensitivity for SNP detection compared to the GATK Best Practices pipeline. In addition, using it in combination with Platypus offers a substantial reduction in run times compared to the GATK pipeline, so it is ideal when only limited time or computational resources are available.

  13. An explicit multi-time-stepping algorithm for aerodynamic flows

    NARCIS (Netherlands)

    Niemann-Tuitman, B.E.; Veldman, A.E.P.

    1997-01-01

    An explicit multi-time-stepping algorithm with applications to aerodynamic flows is presented. In the algorithm, different time steps are taken in different parts of the computational domain, and the flow is synchronized at the so-called synchronization levels. The algorithm is validated for aerodynamic turbulent flows. For two-dimensional flows, speedups of the order of five with respect to single time stepping are obtained.

  14. Web Log Pre-processing and Analysis for Generation of Learning Profiles in Adaptive E-learning

    Directory of Open Access Journals (Sweden)

    Radhika M. Pai

    2016-03-01

    Full Text Available Adaptive E-learning Systems (AESs) enhance the efficiency of online courses in education by providing personalized content and user interfaces that change according to the learner's requirements and usage patterns. This paper presents an approach to generate a learning profile for each learner, which helps to identify learning styles and to provide an Adaptive User Interface including adaptive learning components and learning material. The proposed method analyzes the captured web usage data to identify the learning profile of the learners. The learning profiles are identified by an algorithmic approach that is based on the frequency of accessing the materials and the time spent on the various learning components on the portal. The captured log data are pre-processed and converted into a standard XML format to generate the learners' sequence data corresponding to the different sessions and time spent. The learning style model adopted in this approach is the Felder-Silverman Learning Style Model (FSLSM). This paper also presents the analysis of learners' activities, the preprocessed XML files and the generated sequences.

  16. Digital soil mapping: strategy for data pre-processing

    Directory of Open Access Journals (Sweden)

    Alexandre ten Caten

    2012-08-01

    Full Text Available The region of greatest variability on soil maps is along the edges of their polygons, causing disagreement among pedologists about the appropriate description of soil classes at these locations. The objective of this work was to propose a strategy for data pre-processing applied to digital soil mapping (DSM). Soil polygons on a training map were shrunk by 100 and 160 m. This strategy prevented the use of covariates located near the edges of the soil classes in the Decision Tree (DT) models. Three DT models, derived from eight predictive covariates related to relief and organism factors and sampled on the original polygons of a soil map and on the polygons shrunk by 100 and 160 m, were used to predict soil classes. The DT model derived from observations 160 m away from the edges of the polygons on the original map is less complex and has a better predictive performance.
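
    The polygon-shrinking step corresponds to a negative buffer in standard GIS tooling; a minimal sketch with shapely (a stand-in for whatever software the authors used, with an illustrative polygon in metre coordinates):

        from shapely.geometry import Polygon

        # Hypothetical soil-class polygon, coordinates in metres.
        soil_polygon = Polygon([(0, 0), (1000, 0), (1000, 800), (0, 800)])

        # Shrink inward by 100 m and 160 m; covariates are then sampled only
        # from the shrunken interiors, away from the uncertain polygon edges.
        shrunk_100 = soil_polygon.buffer(-100)
        shrunk_160 = soil_polygon.buffer(-160)
        print(soil_polygon.area, shrunk_100.area, shrunk_160.area)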

  17. Bayesian Optimization for Neuroimaging Pre-processing in Brain Age Classification and Prediction

    Directory of Open Access Journals (Sweden)

    Jenessa Lancaster

    2018-02-01

    Full Text Available Neuroimaging-based age prediction using machine learning is proposed as a biomarker of brain aging, relating to cognitive performance, health outcomes and progression of neurodegenerative disease. However, even leading age-prediction algorithms contain measurement error, motivating efforts to improve experimental pipelines. T1-weighted MRI is commonly used for age prediction, and the pre-processing of these scans involves normalization to a common template and resampling to a common voxel size, followed by spatial smoothing. Resampling parameters are often selected arbitrarily. Here, we sought to improve brain-age prediction accuracy by optimizing resampling parameters using Bayesian optimization. Using data on N = 2003 healthy individuals (aged 16–90 years) we trained support vector machines to (i) distinguish between young (<22 years) and old (>50 years) brains (classification) and (ii) predict chronological age (regression). We also evaluated generalisability of the age-regression model to an independent dataset (CamCAN, N = 648, aged 18–88 years). Bayesian optimization was used to identify optimal voxel size and smoothing kernel size for each task. This procedure adaptively samples the parameter space to evaluate accuracy across a range of possible parameters, using independent sub-samples to iteratively assess different parameter combinations to arrive at optimal values. When distinguishing between young and old brains a classification accuracy of 88.1% was achieved (optimal voxel size = 11.5 mm³, smoothing kernel = 2.3 mm). For predicting chronological age, a mean absolute error (MAE) of 5.08 years was achieved (optimal voxel size = 3.73 mm³, smoothing kernel = 3.68 mm). This was compared to performance using default values of 1.5 mm³ and 4 mm respectively, resulting in MAE = 5.48 years, though this 7.3% improvement was not statistically significant. When assessing generalisability, best performance was achieved when applying the entire Bayesian
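
    A sketch of Bayesian optimization over the two resampling parameters, using scikit-optimize; the objective function below is a placeholder for the authors' actual pipeline (resample, smooth, train the SVM, return negative validation accuracy):

        from skopt import gp_minimize

        def neg_accuracy(params):
            voxel_size, smoothing_fwhm = params
            # Placeholder objective: the real pipeline would resample and smooth
            # the scans with these parameters, train the SVM on a sub-sample,
            # and return the negative validation accuracy.
            return -(0.88 - 0.01 * abs(voxel_size - 11.5)
                          - 0.01 * abs(smoothing_fwhm - 2.3))

        result = gp_minimize(
            neg_accuracy,
            dimensions=[(1.0, 12.0),    # voxel size in mm (range is illustrative)
                        (0.0, 12.0)],   # smoothing kernel FWHM in mm
            n_calls=30, random_state=0)
        print("optimal (voxel size, smoothing):", result.x)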

  18. An explicit multi-time-stepping algorithm for aerodynamic flows

    OpenAIRE

    Niemann-Tuitman, B.E.; Veldman, A.E.P.

    1997-01-01

    An explicit multi-time-stepping algorithm with applications to aerodynamic flows is presented. In the algorithm, different time steps are taken in different parts of the computational domain, and the flow is synchronized at the so-called synchronization levels. The algorithm is validated for aerodynamic turbulent flows. For two-dimensional flows, speedups of the order of five with respect to single time stepping are obtained.

  19. Image-preprocessing method for near-wall particle image velocimetry (PIV) image interrogation with very large in-plane displacement

    International Nuclear Information System (INIS)

    Zhu, Yiding; Yuan, Huijing; Zhang, Chuanhong; Lee, Cunbiao

    2013-01-01

    Accurate particle image velocimetry (PIV) measurements very near the wall are still a great challenge. The problem is compounded by the very large in-plane displacement on PIV images commonly encountered in measurements in hypersonic boundary layers. An improved image-preprocessing method is presented in this paper which extends the traditional window-deformation iterative multigrid scheme to PIV images with very large displacement. Before the interrogation, stationary artificial particles of uniform size are added homogeneously in the wall region. The mean squares of the signal intensities in the flow and in the wall region are postulated to be equal when half the initial interrogation window overlaps the wall region. The initial estimation near the wall is then smoothed using data from both sides of the shear layer to reduce the large random uncertainties. Interrogations in the following iterative steps then converge to the correct results, providing accurate predictions for particle tracking velocimetry. Significant improvement is seen in Monte Carlo simulations and experimental tests. The algorithm successfully extracted the small flow structures of the second-mode wave in the hypersonic boundary layer from PIV images with low signal-to-noise ratios where the traditional method was not successful. (paper)

  20. 3-D image pre-processing algorithms for improved automated tracing of neuronal arbors.

    Science.gov (United States)

    Narayanaswamy, Arunachalam; Wang, Yu; Roysam, Badrinath

    2011-09-01

    The accuracy and reliability of automated neurite tracing systems is ultimately limited by image quality as reflected in the signal-to-noise ratio, contrast, and image variability. This paper describes a novel combination of image processing methods that operate on images of neurites captured by confocal and widefield microscopy, and produce synthetic images that are better suited to automated tracing. The algorithms are based on the curvelet transform (for denoising curvilinear structures and local orientation estimation), perceptual grouping by scalar voting (for elimination of non-tubular structures and improvement of neurite continuity while preserving branch points), adaptive focus detection, and depth estimation (for handling widefield images without deconvolution). The proposed methods are fast, and capable of handling large images. Their ability to handle images of unlimited size derives from automated tiling of large images along the lateral dimension, and processing of 3-D images one optical slice at a time. Their speed derives in part from the fact that the core computations are formulated in terms of the Fast Fourier Transform (FFT), and in part from parallel computation on multi-core computers. The methods are simple to apply to new images since they require very few adjustable parameters, all of which are intuitive. Examples of pre-processing DIADEM Challenge images are used to illustrate improved automated tracing resulting from our pre-processing methods.

  1. Statistical Downscaling Output GCM Modeling with Continuum Regression and Pre-Processing PCA Approach

    Directory of Open Access Journals (Sweden)

    Sutikno Sutikno

    2010-08-01

    Full Text Available One of the climate models used to predict climatic conditions is the Global Circulation Model (GCM). A GCM is a computer-based model consisting of different equations; it uses numerical, deterministic equations that follow the laws of physics. GCMs are a main tool for predicting climate and weather, and they serve as a primary information source for assessing climate change effects. The Statistical Downscaling (SD) technique is used to bridge the large-scale GCM with the small scale of the study area. GCM data are spatial and temporal, so spatial correlation between data at different grid points within a single domain is likely; such multicollinearity requires pre-processing of the predictor data X. Continuum Regression (CR) with Principal Component Analysis (PCA) pre-processing is an alternative for SD modelling. CR is a method developed by Stone and Brooks (1990); it is a generalization of the Ordinary Least Squares (OLS), Principal Component Regression (PCR) and Partial Least Squares (PLS) methods, used to overcome multicollinearity problems. Data processing for the stations in Ambon, Pontianak, Losarang, Indramayu and Yuntinyuat shows that, in terms of RMSEP and predictive R2 values in the 8x8 and 12x12 domains, the CR method produces better results than PCR and PLS.

  2. CSS Preprocessing: Tools and Automation Techniques

    Directory of Open Access Journals (Sweden)

    Ricardo Queirós

    2018-01-01

    Full Text Available Cascading Style Sheets (CSS) is a W3C specification for a style sheet language used for describing the presentation of a document written in a markup language, more precisely, for styling Web documents. However, in the last few years, the landscape for CSS development has changed dramatically with the appearance of several languages and tools aiming to help developers build clean, modular and performance-aware CSS. These new approaches give developers mechanisms to preprocess CSS rules through the use of programming constructs, defined as CSS preprocessors, with the ultimate goal of bringing those missing constructs to the CSS realm and fostering structured programming of stylesheets. At the same time, a new set of tools appeared, defined as postprocessors, for extension and automation purposes, covering a broad set of features ranging from identifying unused and duplicate code to applying vendor prefixes. With all these tools and techniques at hand, developers need a consistent workflow to foster CSS modular coding. This paper presents an introductory survey on CSS processors. The survey gathers information on a specific set of processors, categorizes them and compares their features against a set of predefined criteria such as maturity, coverage and performance. Finally, we propose a basic set of best practices for setting up a simple and pragmatic styling-code workflow.

  3. Preprocessing Raw Data in Clinical Medicine for a Data Mining Purpose

    Directory of Open Access Journals (Sweden)

    Peterková Andrea

    2016-12-01

    Full Text Available Dealing with data from the field of medicine is nowadays both highly relevant and difficult. On a global scale, a large amount of medical data is produced on an everyday basis. For the purpose of our research, we understand medical data as data about patients, such as results from laboratory analyses, results from screening examinations (CT, ECHO) and clinical parameters. This data is usually in a raw format, difficult to understand, non-standard and not suitable for further processing or analysis. This paper aims to describe a possible method of data preparation and preprocessing of such raw medical data into a form to which further analysis algorithms can be applied.

  4. An On-Site and Step-by-Step Training Approach for Optimizing the Teacher's Role in Learning

    Directory of Open Access Journals (Sweden)

    Moch. Sholeh Y.A. Ichrom

    2016-02-01

    Full Text Available The remoteness of programme content from teachers' real work situations and the unsuitability of the approach employed were suspected as the main reasons contributing to the failure of many in-service teacher training programmes. A step-by-step, on-site teacher training (SSOTT) model was tried out in this experiment to study whether these weaknesses of in-service programmes could be rectified. As it was tried out in relation to kindergarten mathematics, it was called the SSOTT-MTW (Step by Step On-site Teacher Training - Mathematics Their Way) model. Eighty-four kindergartens were involved, from which 84 teachers and 877 pupils were recruited as experimental subjects. The teachers were divided into three groups. One group was instructed using the One Period Teacher Training (OPOTT-MTW) model, the second group was trained with the SSOTT-MTW model, and the last group was given no training (NTT) at all. The results of the experiment showed that the SSOTT-MTW group outperformed the other groups. It was also shown that pupil and parent participation in teaching-learning activities also significantly improved.

  5. Data Pre-Processing Method to Remove Interference of Gas Bubbles and Cell Clusters During Anaerobic and Aerobic Yeast Fermentations in a Stirred Tank Bioreactor

    Science.gov (United States)

    Princz, S.; Wenzel, U.; Miller, R.; Hessling, M.

    2014-11-01

    One aerobic and four anaerobic batch fermentations of the yeast Saccharomyces cerevisiae were conducted in a stirred bioreactor and monitored inline by NIR spectroscopy and a transflectance dip probe. From the acquired NIR spectra, chemometric partial least squares regression (PLSR) models for predicting biomass, glucose and ethanol were constructed. The spectra were measured directly in the fermentation broth and successfully inspected for adulteration using our novel data pre-processing method. These adulterations manifested as strong fluctuations in the shape and offset of the absorption spectra. They resulted from cells, cell clusters, or gas bubbles intercepting the optical path of the dip probe. In the proposed data pre-processing method, adulterated signals are removed by passing the time-scanned, non-averaged spectra through two filter algorithms with a 5% quantile cutoff. The filtered spectra containing meaningful data are then averaged. A second step checks whether the whole time scan is analyzable; if so, the average is calculated and used to prepare the PLSR models. This new method distinctly improved the prediction results. To dissociate possible correlations between analyte concentrations, such as glucose and ethanol, the feed analytes were alternately supplied at different concentrations (spiking) at the end of the four anaerobic fermentations. This procedure yielded low-error (anaerobic) PLSR models with prediction errors of 0.31 g/l for biomass, 3.41 g/l for glucose, and 2.17 g/l for ethanol. The maximum concentrations were 14 g/l biomass, 167 g/l glucose, and 80 g/l ethanol. Data from the aerobic fermentation, carried out under high agitation and high aeration, were incorporated to realize combined PLSR models, which have not been previously reported to our knowledge.
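
    The quantile-based rejection idea can be sketched as follows (a loose Python illustration of the 5% cutoff before averaging, not the authors' exact two-filter implementation):

        import numpy as np

        def filter_and_average(scans, q=0.05):
            """scans: (n_spectra, n_wavelengths) raw spectra from one time scan.
            Reject the spectra whose offset deviates most from the ensemble
            (bubble/cell-cluster artefacts), then average the remainder."""
            offsets = scans.mean(axis=1)              # per-spectrum offset
            dev = np.abs(offsets - np.median(offsets))
            keep = dev <= np.quantile(dev, 1 - q)     # drop the worst 5%
            if keep.sum() < 0.5 * len(scans):         # crude "scan unusable" check
                return None
            return scans[keep].mean(axis=0)

        rng = np.random.default_rng(1)
        scans = rng.normal(1.0, 0.01, (100, 512))
        scans[::20] += 0.5                            # simulated bubble artefacts
        print(filter_and_average(scans).shape)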

  6. Evidence-based practice, step by step: critical appraisal of the evidence: part II: digging deeper--examining the "keeper" studies.

    Science.gov (United States)

    Fineout-Overholt, Ellen; Melnyk, Bernadette Mazurek; Stillwell, Susan B; Williamson, Kathleen M

    2010-09-01

    This is the sixth article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved. The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time. Articles will appear every two months to allow you time to incorporate information as you work toward implementing EBP at your institution. Also, we've scheduled "Chat with the Authors" calls every few months to provide a direct line to the experts to help you resolve questions. Details about how to participate in the next call will be published with November's Evidence-Based Practice, Step by Step.

  7. Quality assessment of baby food made of different pre-processed organic raw materials under industrial processing conditions.

    Science.gov (United States)

    Seidel, Kathrin; Kahl, Johannes; Paoletti, Flavio; Birlouez, Ines; Busscher, Nicolaas; Kretzschmar, Ursula; Särkkä-Tirkkonen, Marjo; Seljåsen, Randi; Sinesio, Fiorella; Torp, Torfinn; Baiamonte, Irene

    2015-02-01

    The market for processed food is rapidly growing. The industry needs methods for "processing with care" leading to high-quality products in order to meet consumers' expectations. Processing influences the quality of the finished product through various factors. In carrot baby food, these are the raw material, the pre-processing and storage treatments as well as the processing conditions. In this study, a quality assessment was performed on baby food made from different pre-processed raw materials. The experiments were carried out under industrial conditions using fresh, frozen and stored organic carrots as raw material. Statistically significant differences were found for sensory attributes among the three autoclaved puree samples (e.g. overall odour, F = 90.72). Purees processed from frozen carrots showed increased moisture content and a decrease of several chemical constituents. Biocrystallization identified changes between replications of the cooking. Pre-treatment of the raw material has a significant influence on the final quality of the baby food.

  8. THE IMAGE REGISTRATION OF FOURIER-MELLIN BASED ON THE COMBINATION OF PROJECTION AND GRADIENT PREPROCESSING

    Directory of Open Access Journals (Sweden)

    D. Gao

    2017-09-01

    Full Text Available Image registration is one of the most important applications in the field of image processing. The Fourier-Mellin transform method, which has the advantages of high precision and good robustness to changes in light and shade, partial occlusion, noise and so on, is widely used. However, this method cannot obtain a unique mutual-power pulse function for non-parallel image pairs, and for some image pairs no mutual-power pulse can be obtained at all. In this paper, an image registration method based on the Fourier-Mellin transformation with projection-gradient preprocessing is proposed. According to the projection conformational equation, the method calculates the image projection transformation matrix to correct the tilted image; then, gradient preprocessing and the Fourier-Mellin transformation are performed on the corrected image to obtain the registration parameters. The experimental results show that the method makes Fourier-Mellin image registration applicable not only to parallel image pairs but also to non-parallel image pairs and, moreover, yields a better registration effect.
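
    The Fourier-Mellin core (rotation and scale recovered from the log-polar magnitude spectrum, translation from phase correlation) can be sketched with scikit-image; the projection and gradient preprocessing steps proposed in the paper are omitted here:

        import numpy as np
        from skimage.registration import phase_cross_correlation
        from skimage.transform import warp_polar

        def rotation_scale(ref, mov):
            """Estimate rotation (degrees) and scale between two same-sized images."""
            # Magnitude spectra are invariant to translation.
            F_ref = np.abs(np.fft.fftshift(np.fft.fft2(ref)))
            F_mov = np.abs(np.fft.fftshift(np.fft.fft2(mov)))
            radius = min(ref.shape) // 2
            # Log-polar resampling turns rotation and scaling into translations.
            lp_ref = warp_polar(F_ref, radius=radius, scaling="log")
            lp_mov = warp_polar(F_mov, radius=radius, scaling="log")
            shift, _, _ = phase_cross_correlation(lp_ref, lp_mov)
            rotation = shift[0] * 360.0 / lp_ref.shape[0]
            scale = np.exp(shift[1] * np.log(radius) / lp_ref.shape[1])
            return rotation, scale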

  9. Effects of different correlation metrics and preprocessing factors on small-world brain functional networks: a resting-state functional MRI study.

    Science.gov (United States)

    Liang, Xia; Wang, Jinhui; Yan, Chaogan; Shu, Ni; Xu, Ke; Gong, Gaolang; He, Yong

    2012-01-01

    Graph theoretical analysis of brain networks based on resting-state functional MRI (R-fMRI) has attracted a great deal of attention in recent years. These analyses often involve the selection of correlation metrics and specific preprocessing steps. However, the influence of these factors on the topological properties of functional brain networks has not been systematically examined. Here, we investigated the influences of correlation metric choice (Pearson's correlation versus partial correlation), global signal presence (regressed or not) and frequency band selection [slow-5 (0.01-0.027 Hz) versus slow-4 (0.027-0.073 Hz)] on the topological properties of both binary and weighted brain networks derived from them, and we employed test-retest (TRT) analyses for further guidance on how to choose the "best" network modeling strategy from the reliability perspective. Our results show significant differences in global network metrics associated with both correlation metrics and global signals. Analysis of nodal degree revealed differing hub distributions for brain networks derived from Pearson's correlation versus partial correlation. TRT analysis revealed that the reliability of both global and local topological properties is modulated by correlation metrics and the global signal, with the highest reliability observed for Pearson's-correlation-based brain networks without global signal removal (WOGR-PEAR). The nodal reliability exhibited a spatially heterogeneous distribution wherein regions in association and limbic/paralimbic cortices showed moderate TRT reliability in Pearson's-correlation-based brain networks. Moreover, we found that there were significant frequency-related differences in the topological properties of WOGR-PEAR networks, and brain networks derived in the 0.027-0.073 Hz band exhibited greater reliability than those in the 0.01-0.027 Hz band. Taken together, our results provide direct evidence regarding the influences of correlation metrics and specific preprocessing steps on the topological properties of functional brain networks.
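
    A sketch of the modeling choices examined above, on toy time series (Pearson correlation, optional global-signal regression, binarization, then network metrics with networkx; the threshold is illustrative):

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(0)
        ts = rng.normal(size=(90, 200))       # toy data: 90 regions x 200 volumes

        # Global signal regression: project out the mean signal of all regions.
        gs = ts.mean(axis=0)                  # global signal, shape (200,)
        beta = ts @ gs / (gs @ gs)            # per-region regression coefficient
        ts_gr = ts - np.outer(beta, gs)

        for name, data in [("without GSR", ts), ("with GSR", ts_gr)]:
            R = np.corrcoef(data)             # Pearson correlation matrix
            np.fill_diagonal(R, 0)
            A = (np.abs(R) > 0.3).astype(int) # binarize at an illustrative threshold
            G = nx.from_numpy_array(A)
            print(name, "global efficiency:", round(nx.global_efficiency(G), 3))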

  10. Effects of different correlation metrics and preprocessing factors on small-world brain functional networks: a resting-state functional MRI study.

    Directory of Open Access Journals (Sweden)

    Xia Liang

    Full Text Available Graph theoretical analysis of brain networks based on resting-state functional MRI (R-fMRI) has attracted a great deal of attention in recent years. These analyses often involve the selection of correlation metrics and specific preprocessing steps. However, the influence of these factors on the topological properties of functional brain networks has not been systematically examined. Here, we investigated the influences of correlation metric choice (Pearson's correlation versus partial correlation), global signal presence (regressed or not) and frequency band selection [slow-5 (0.01-0.027 Hz) versus slow-4 (0.027-0.073 Hz)] on the topological properties of both binary and weighted brain networks derived from them, and we employed test-retest (TRT) analyses for further guidance on how to choose the "best" network modeling strategy from the reliability perspective. Our results show significant differences in global network metrics associated with both correlation metrics and global signals. Analysis of nodal degree revealed differing hub distributions for brain networks derived from Pearson's correlation versus partial correlation. TRT analysis revealed that the reliability of both global and local topological properties is modulated by correlation metrics and the global signal, with the highest reliability observed for Pearson's-correlation-based brain networks without global signal removal (WOGR-PEAR). The nodal reliability exhibited a spatially heterogeneous distribution wherein regions in association and limbic/paralimbic cortices showed moderate TRT reliability in Pearson's-correlation-based brain networks. Moreover, we found that there were significant frequency-related differences in the topological properties of WOGR-PEAR networks, and brain networks derived in the 0.027-0.073 Hz band exhibited greater reliability than those in the 0.01-0.027 Hz band. Taken together, our results provide direct evidence regarding the influences of correlation metrics and specific preprocessing steps on the topological properties of functional brain networks.

  11. Pre-processing of input files for the AZTRAN code

    International Nuclear Information System (INIS)

    Vargas E, S.; Ibarra, G.

    2017-09-01

    The AZTRAN code began to be developed in the Nuclear Engineering Department of the Escuela Superior de Fisica y Matematicas (ESFM) of the Instituto Politecnico Nacional (IPN) with the purpose of numerically solving various models arising from the physics and engineering of nuclear reactors. The code is still under development and is part of the AZTLAN platform: Development of a Mexican platform for the analysis and design of nuclear reactors. Because generating an input file for the code is complex, a script based on the D language was developed to make its preparation easier. The script is based on a new input-file format with specific cards divided into two blocks, mandatory cards and optional cards. It includes pre-processing of the input file to identify possible errors within it, as well as an image generator for the specific problem based on the Python interpreter. (Author)

  12. Review of Data Preprocessing Methods for Sign Language Recognition Systems based on Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Zorins Aleksejs

    2016-12-01

    Full Text Available The article presents an introductory analysis of a research topic relevant to the Latvian deaf community: the development of a Latvian Sign Language Recognition System. More specifically, data preprocessing methods are discussed and several approaches are shown, with a focus on systems based on artificial neural networks, which are among the most successful solutions for the sign language recognition task.

  13. Neural Online Filtering Based on Preprocessed Calorimeter Data

    CERN Document Server

    Torres, R C; The ATLAS collaboration; Simas Filho, E F; De Seixas, J M

    2009-01-01

    Among the LHC detectors, ATLAS aims at coping with the high event rate by designing a three-level online triggering system. The first-level (LVL1) trigger output rate will be ~75 kHz. This level will mark the regions where relevant events were found. The second level will validate the LVL1 decision by looking only at the approved data using full granularity. At the level-two output, the event rate will be reduced to ~2 kHz. Finally, the third level will look at the full event information, and a rate of ~200 Hz of approved events is expected to be stored in persistent media for further offline analysis. Many interesting events decay into electrons, which have to be identified against the huge background noise (jets). This work proposes a highly efficient LVL2 electron/jet discrimination system based on neural networks fed with preprocessed calorimeter information. The feature extraction part of the proposed system performs a ring-structured description of the data: a set of concentric rings centered at the highest-energy cell is generated ...

  14. Data preprocessing methods for robust Fourier ptychographic microscopy

    Science.gov (United States)

    Zhang, Yan; Pan, An; Lei, Ming; Yao, Baoli

    2017-12-01

    Fourier ptychographic microscopy (FPM) is a recently developed computational imaging technique that achieves gigapixel images with both high resolution and large field-of-view. In the current FPM experimental setup, the dark-field images with high-angle illuminations are easily overwhelmed by stray lights and background noises due to the low signal-to-noise ratio, thus significantly degrading the achievable resolution of the FPM approach. We provide an overall and systematic data preprocessing scheme to enhance the FPM's performance, which involves sampling analysis, underexposed/overexposed treatments, background noises suppression, and stray lights elimination. It is demonstrated experimentally with both US Air Force (USAF) 1951 resolution target and biological samples that the benefit of the noise removal by these methods far outweighs the defect of the accompanying signal loss, as part of the lost signals can be compensated by the improved consistencies among the captured raw images. In addition, the reported nonparametric scheme could be further cooperated with the existing state-of-the-art algorithms with a great flexibility, facilitating a stronger noise-robust capability of the FPM approach in various applications.

  15. Fast randomized point location without preprocessing in two- and three-dimensional Delaunay triangulations

    Energy Technology Data Exchange (ETDEWEB)

    Muecke, E.P.; Saias, I.; Zhu, B.

    1996-05-01

    This paper studies the point location problem in Delaunay triangulations without preprocessing and additional storage. The proposed procedure finds the query point simply by walking through the triangulation, after selecting a good starting point by random sampling. The analysis generalizes and extends a recent result for d = 2 dimensions by proving that this procedure takes expected time close to O(n^(1/(d+1))) for point location in Delaunay triangulations of n random points in d = 3 dimensions. Empirical results in both two and three dimensions show that this procedure is efficient in practice.
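
    A 2-D sketch of the jump-and-walk idea with scipy (random sampling for a good start, then a walk guided by barycentric coordinates); this follows the spirit of the procedure, not the authors' implementation:

        import numpy as np
        from scipy.spatial import Delaunay

        def jump_and_walk(tri, q, rng=np.random.default_rng(0)):
            pts = tri.points
            m = max(1, round(len(pts) ** (1 / 3)))      # ~n^(1/(d+1)) samples, d = 2
            # Jump: start near the sampled vertex closest to the query point.
            cand = rng.choice(len(pts), m, replace=False)
            start = cand[np.argmin(((pts[cand] - q) ** 2).sum(axis=1))]
            s = tri.vertex_to_simplex[start]
            # Walk: repeatedly step through the face of the most negative
            # barycentric coordinate until the query point is enclosed.
            while True:
                T = tri.transform[s]
                b = T[:2] @ (q - T[2])
                bary = np.append(b, 1 - b.sum())
                if (bary >= -1e-12).all():
                    return s                            # simplex containing q
                s = tri.neighbors[s][np.argmin(bary)]
                if s == -1:
                    return None                         # q lies outside the hull

        tri = Delaunay(np.random.default_rng(1).random((1000, 2)))
        print(jump_and_walk(tri, np.array([0.5, 0.5])))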

  16. Performance Comparison of Several Pre-Processing Methods in a Hand Gesture Recognition System based on Nearest Neighbor for Different Background Conditions

    Directory of Open Access Journals (Sweden)

    Regina Lionnie

    2013-09-01

    Full Text Available This paper presents a performance analysis and comparison of several pre-processing methods used in a hand gesture recognition system. The pre-processing methods are based on combinations of several image processing operations, namely edge detection, low-pass filtering, histogram equalization, thresholding and desaturation. The hand gesture recognition system is designed to classify an input image into one of six possible classes. The input images are taken with various background conditions. Our experiments showed that the best result is achieved when the pre-processing method consists of only a desaturation operation, achieving a classification accuracy of up to 83.15%.

  17. Integrated fMRI Preprocessing Framework Using Extended Kalman Filter for Estimation of Slice-Wise Motion

    OpenAIRE

    Basile Pinsard; Arnaud Boutin; Julien Doyon; Habib Benali

    2018-01-01

    Functional MRI acquisition is sensitive to subjects' motion that cannot be fully constrained. Therefore, signal corrections have to be applied a posteriori in order to mitigate the complex interactions between changing tissue localization and magnetic fields, gradients and readouts. To circumvent the limitations of current preprocessing strategies, we developed an integrated method that corrects motion and spatial low-frequency intensity fluctuations at the level of each slice in order to better fit the acquisition processes.

  18. Piecewise Polynomial Aggregation as Preprocessing for Data Numerical Modeling

    Science.gov (United States)

    Dobronets, B. S.; Popova, O. A.

    2018-05-01

    Data aggregation issues for numerical modeling are reviewed in the present study. The authors discuss data aggregation procedures as preprocessing for subsequent numerical modeling. To calculate the data aggregation, the authors propose using numerical probabilistic analysis (NPA). An important feature of this study is how the authors represent the aggregated data: the proposed approach to data aggregation can be interpreted as the frequency distribution of a variable, whose properties are studied through its density function. For this purpose, the authors propose using piecewise polynomial models; a suitable example of such an approach is the spline. The authors show that their approach to data aggregation reduces the level of data uncertainty and significantly increases the efficiency of numerical calculations. To demonstrate how well the proposed methods correspond to reality, the authors developed a theoretical framework and considered numerical examples devoted to time series aggregation.

  19. The Python Spectral Analysis Tool (PySAT) for Powerful, Flexible, and Easy Preprocessing and Machine Learning with Point Spectral Data

    Science.gov (United States)

    Anderson, R. B.; Finch, N.; Clegg, S. M.; Graff, T.; Morris, R. V.; Laura, J.

    2018-04-01

    The PySAT point spectra tool provides a flexible graphical interface, enabling scientists to apply a wide variety of preprocessing and machine learning methods to point spectral data, with an emphasis on multivariate regression.

  20. Integrated fMRI Preprocessing Framework Using Extended Kalman Filter for Estimation of Slice-Wise Motion

    Directory of Open Access Journals (Sweden)

    Basile Pinsard

    2018-04-01

    Full Text Available Functional MRI acquisition is sensitive to subjects' motion that cannot be fully constrained. Therefore, signal corrections have to be applied a posteriori in order to mitigate the complex interactions between changing tissue localization and magnetic fields, gradients and readouts. To circumvent the limitations of current preprocessing strategies, we developed an integrated method that corrects motion and spatial low-frequency intensity fluctuations at the level of each slice in order to better fit the acquisition processes. The registration of single or multiple simultaneously acquired slices is achieved online by an Iterated Extended Kalman Filter, favoring the robust estimation of continuous motion, while an intensity bias field is non-parametrically fitted. The proposed extraction of gray-matter BOLD activity from the acquisition space to an anatomical group template space, taking distortions into account, better preserves fine-scale patterns of activity. Importantly, the proposed unified framework generalizes to high-resolution multi-slice techniques. When tested on simulated and real data, the method shows a reduction of motion-explained variance and signal variability when compared to the conventional preprocessing approach. These improvements provide more stable patterns of activity, facilitating the investigation of cerebral information representation in healthy and/or clinical populations where motion is known to impact fine-scale data.

  1. Multi-objective optimization for an automated and simultaneous phase and baseline correction of NMR spectral data

    Science.gov (United States)

    Sawall, Mathias; von Harbou, Erik; Moog, Annekathrin; Behrens, Richard; Schröder, Henning; Simoneau, Joël; Steimers, Ellen; Neymeyr, Klaus

    2018-04-01

    Spectral data preprocessing is an integral and sometimes inevitable part of chemometric analyses. For Nuclear Magnetic Resonance (NMR) spectra a possible first preprocessing step is a phase correction which is applied to the Fourier transformed free induction decay (FID) signal. This preprocessing step can be followed by a separate baseline correction step. Especially if series of high-resolution spectra are considered, then automated and computationally fast preprocessing routines are desirable. A new method is suggested that applies the phase and the baseline corrections simultaneously in an automated form without manual input, which distinguishes this work from other approaches. The underlying multi-objective optimization or Pareto optimization provides improved results compared to consecutively applied correction steps. The optimization process uses an objective function which applies strong penalty constraints and weaker regularization conditions. The new method includes an approach for the detection of zero baseline regions. The baseline correction uses a modified Whittaker smoother. The functionality of the new method is demonstrated for experimental NMR spectra. The results are verified against gravimetric data. The method is compared to alternative preprocessing tools. Additionally, the simultaneous correction method is compared to a consecutive application of the two correction steps.
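
    A compact sketch of the simultaneous idea in Python, with the Pareto formulation and the modified Whittaker smoother reduced to a single weighted objective over the two phase angles (weights and the toy spectrum are illustrative):

        import numpy as np
        from scipy.optimize import minimize

        def phased(spec, phi0, phi1):
            """Zeroth- and first-order phase correction of a complex spectrum."""
            k = np.arange(len(spec)) / len(spec)
            return spec * np.exp(1j * (phi0 + phi1 * k))

        def objective(phi, spec, lam=1e-3):
            re = phased(spec, *phi).real
            negativity = np.sum(np.minimum(re, 0.0) ** 2)  # strong penalty constraint
            roughness = np.sum(np.diff(re, 2) ** 2)        # weak baseline regularizer
            return -re.sum() + 10.0 * negativity + lam * roughness

        # Toy spectrum: one Lorentzian line dephased by known angles (0.7, 0.4).
        x = np.linspace(-1, 1, 1024)
        line = 1.0 / (1.0 + (x / 0.01) ** 2)
        spec = line * np.exp(-1j * (0.7 + 0.4 * np.arange(1024) / 1024))

        res = minimize(objective, x0=[0.0, 0.0], args=(spec,), method="Nelder-Mead")
        print("recovered phases:", res.x)                  # close to (0.7, 0.4)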

  2. chipPCR: an R package to pre-process raw data of amplification curves.

    Science.gov (United States)

    Rödiger, Stefan; Burdukiewicz, Michał; Schierack, Peter

    2015-09-01

    Both the quantitative real-time polymerase chain reaction (qPCR) and quantitative isothermal amplification (qIA) are standard methods for nucleic acid quantification. Numerous real-time read-out technologies have been developed. Despite the continuous interest in amplification-based techniques, there are only few tools for the pre-processing of amplification data. However, a transparent tool for precise control of raw data is indispensable in several scenarios, for example, during the development of new instruments. chipPCR is an R package for the pre-processing and quality analysis of raw data of amplification curves. The package takes advantage of R's S4 object model and offers an extensible environment. chipPCR contains tools for raw data exploration: normalization, baselining, imputation of missing values, a powerful wrapper for amplification curve smoothing and a function to detect the start and end of an amplification curve. The capabilities of the software are enhanced by the implementation of algorithms unavailable in R, such as a 5-point stencil for derivative interpolation. Simulation tools, statistical tests, plots for data quality management, amplification efficiency/quantification cycle calculation, and datasets from qPCR and qIA experiments are part of the package. Core functionalities are integrated in GUIs (web-based and standalone shiny applications), thus streamlining analysis and report generation. Availability: http://cran.r-project.org/web/packages/chipPCR. Source code: https://github.com/michbur/chipPCR. Contact: stefan.roediger@b-tu.de. Supplementary data are available at Bioinformatics online.

  3. Software for Preprocessing Data from Rocket-Engine Tests

    Science.gov (United States)

    Cheng, Chiu-Fu

    2004-01-01

    Three computer programs have been written to preprocess digitized outputs of sensors during rocket-engine tests at Stennis Space Center (SSC). The programs apply exclusively to the SSC E test-stand complex and utilize the SSC file format. The programs are the following: Engineering Units Generator (EUGEN) converts sensor-output-measurement data to engineering units. The inputs to EUGEN are raw binary test-data files, which include the voltage data, a list identifying the data channels, and time codes. EUGEN effects conversion by use of a file that contains calibration coefficients for each channel. QUICKLOOK enables immediate viewing of a few selected channels of data, in contradistinction to viewing only after post-test processing (which can take 30 minutes to several hours depending on the number of channels and other test parameters) of data from all channels. QUICKLOOK converts the selected data into a form in which they can be plotted in engineering units by use of Winplot (a free graphing program written by Rick Paris). EUPLOT provides a quick means for looking at data files generated by EUGEN without the necessity of relying on the PV-WAVE based plotting software.

  4. Nuclear data for fusion: Validation of typical pre-processing methods for radiation transport calculations

    International Nuclear Information System (INIS)

    Hutton, T.; Sublet, J.C.; Morgan, L.; Leadbeater, T.W.

    2015-01-01

    Highlights: • We quantify the effect of processing nuclear data from ENDF to ACE format. • We consider the differences between fission and fusion angular distributions. • C-nat(n,el) at 2.0 MeV has a 0.6% deviation between original and processed data. • Fe-56(n,el) at 14.1 MeV has an 11.0% deviation between original and processed data. • Processed data do not accurately depict ENDF distributions for fusion energies. - Abstract: Nuclear data form the basis of the radiation transport codes used to design and simulate the behaviour of nuclear facilities, such as the ITER and DEMO fusion reactors. Typically these data and codes are biased towards fission and high-energy physics applications yet are still applied to fusion problems. With increasing interest in fusion applications, the lack of fusion-specific codes and relevant data libraries is becoming increasingly apparent. Industry-standard radiation transport codes require pre-processing of the evaluated data libraries prior to use in simulation. Historically these methods focus on speed of simulation at the cost of accurate data representation. For legacy applications this has not been a major concern, but current fusion needs differ significantly. Pre-processing reconstructs the differential and double-differential interaction cross sections with a coarse binned structure, or more recently as a tabulated cumulative distribution function. This work looks at the validity of applying these processing methods to data used in fusion-specific calculations in comparison to fission. The relative effects of applying this pre-processing mechanism to both fission- and fusion-relevant reaction channels are demonstrated, showing the poor representation of these distributions for the fusion energy regime. For the C-nat(n,el) reaction at 2.0 MeV, the binned differential cross section deviates from the original data by 0.6% on average. For the Fe-56(n,el) reaction at 14.1 MeV, the deviation increases to 11.0%.

  5. Impulse Noise Cancellation of Medical Images Using Wavelet Networks and Median Filters

    Science.gov (United States)

    Sadri, Amir Reza; Zekri, Maryam; Sadri, Saeid; Gheissari, Niloofar

    2012-01-01

    This paper presents a new two-stage approach to impulse noise removal for medical images based on a wavelet network (WN). The first stage is noise detection, in which the so-called gray-level difference and average background difference are taken as the inputs of a WN; the WN thus serves as preprocessing for the second stage. The second stage removes the impulse noise with a median filter. The wavelet network presented here is a fixed one, without learning. Experimental results show that our method acts on impulse noise effectively while preserving chromaticity and image details very well. PMID:23493998
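
    A sketch of the detect-then-filter structure in Python, with a simple threshold detector standing in for the paper's wavelet network:

        import numpy as np
        from scipy.ndimage import median_filter

        def remove_impulse_noise(img, thresh=60):
            """Stage 1: flag pixels whose gray-level difference from the local
            median background is large (a crude stand-in for the WN detector).
            Stage 2: replace only the flagged pixels with the median-filtered
            value, leaving uncorrupted detail untouched."""
            med = median_filter(img, size=3)
            noisy = np.abs(img.astype(float) - med) > thresh
            out = img.copy()
            out[noisy] = med[noisy]
            return out

        rng = np.random.default_rng(0)
        img = np.full((128, 128), 120, dtype=np.uint8)
        img[rng.random(img.shape) < 0.05] = 255       # salt impulse noise
        print(remove_impulse_noise(img).std())        # ~0 after cleaning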

  6. A Comparative Investigation of the Combined Effects of Pre-Processing, Wavelength Selection, and Regression Methods on Near-Infrared Calibration Model Performance.

    Science.gov (United States)

    Wan, Jian; Chen, Yi-Chieh; Morris, A Julian; Thennadil, Suresh N

    2017-07-01

    Near-infrared (NIR) spectroscopy is being widely used in various fields ranging from pharmaceutics to the food industry for analyzing the chemical and physical properties of the substances concerned. Its advantages over other analytical techniques include the available physical interpretation of spectral data, its nondestructive nature and high speed of measurement, and little or no need for sample preparation. The successful application of NIR spectroscopy relies on three main aspects: pre-processing of spectral data to eliminate nonlinear variations due to temperature, light scattering effects and many others; selection of those wavelengths that contribute useful information; and identification of suitable calibration models using linear/nonlinear regression. Several methods have been developed for each of these three aspects, and many comparative studies of different methods exist for an individual aspect or some combinations. However, there is still a lack of comparative studies of the interactions among these three aspects, which can shed light on what role each aspect plays in the calibration and how to combine the various methods of each aspect to obtain the best calibration model. This paper aims to provide such a comparative study based on four benchmark data sets using three typical pre-processing methods, namely orthogonal signal correction (OSC), extended multiplicative signal correction (EMSC) and optical path-length estimation and correction (OPLEC); two existing wavelength selection methods, namely stepwise forward selection (SFS) and genetic algorithm optimization combined with partial least squares regression for spectral data (GAPLSSP); and four popular regression methods, namely partial least squares (PLS), least absolute shrinkage and selection operator (LASSO), least squares support vector machine (LS-SVM), and Gaussian process regression (GPR). The comparative study indicates that, in general, pre-processing of spectral data can play a significant role in the calibration.
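
    A toy version of such a combinatorial comparison, with standard normal variate (SNV) as a simple stand-in for the paper's pre-processing methods and two of the four regressors from scikit-learn:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.model_selection import cross_val_score

        def snv(X):
            """Standard normal variate: per-spectrum centering and scaling."""
            return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(80, 200)) + rng.normal(size=(80, 1))  # spectra + offsets
        y = 2.0 * X[:, 50] + rng.normal(scale=0.1, size=80)        # toy analyte

        for prep_name, prep in [("raw", lambda A: A), ("SNV", snv)]:
            for model_name, model in [("PLS", PLSRegression(n_components=5)),
                                      ("GPR", GaussianProcessRegressor())]:
                r2 = cross_val_score(model, prep(X), y, cv=5, scoring="r2").mean()
                print(prep_name, model_name, round(r2, 3))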

  7. Compiling the parallel programming language NestStep to the CELL processor

    OpenAIRE

    Holm, Magnus

    2010-01-01

    The goal of this project is to create a source-to-source compiler which will translate NestStep code to C code. The compiler's job is to replace NestStep constructs with a series of function calls to the NestStep runtime system. NestStep is a parallel programming language extension based on the BSP model. It adds constructs for parallel programming on top of an imperative programming language. For this project, only constructs extending the C language are relevant. The output code will compil...

  8. STEP-TRAMM - A modeling interface for simulating localized rainfall induced shallow landslides and debris flow runout pathways

    Science.gov (United States)

    Or, D.; von Ruette, J.; Lehmann, P.

    2017-12-01

    Landslides and subsequent debris flows initiated by rainfall represent a common natural hazard in mountainous regions. We integrated a landslide hydro-mechanical triggering model with a simple model for debris-flow runout pathways and developed a graphical user interface (GUI) to represent these natural hazards at catchment scale at any location. The STEP-TRAMM GUI provides process-based estimates of the initiation locations and sizes of landslide patterns based on digital elevation models (SRTM) linked with high-resolution global soil maps (SoilGrids, 250 m resolution) and satellite-based information on rainfall statistics for the selected region. In the preprocessing phase the STEP-TRAMM model estimates the soil depth distribution to supplement other soil information for delineating key hydrological and mechanical properties relevant to representing local soil failure. We will illustrate this publicly available GUI and modeling platform by simulating effects of deforestation on landslide hazards in several regions and comparing the model outcome with satellite-based information.

  9. Arabic text preprocessing for the natural language processing applications

    International Nuclear Information System (INIS)

    Awajan, A.

    2007-01-01

    A new approach for processing vowelized and unvowelized Arabic texts in order to prepare them for Natural Language Processing (NLP) purposes is described. The developed approach is rule-based and made up of four phases: text tokenization, word light stemming, word morphological analysis and text annotation. The first phase preprocesses the input text in order to isolate the words and represent them in a formal way. The second phase applies a light stemmer in order to extract the stem of each word by eliminating prefixes and suffixes. The third phase is a rule-based morphological analyzer that determines the root and the morphological pattern of each extracted stem. The last phase produces an annotated text in which each word is tagged with its morphological attributes. The preprocessor presented in this paper is capable of dealing with vowelized and unvowelized words, and provides the input words along with the relevant linguistic information needed by different applications. It is designed to be used with different NLP applications such as machine translation, text summarization, text correction, information retrieval and automatic vowelization of Arabic text. (author)
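
    A toy sketch of the light-stemming phase (the prefix and suffix lists are a small illustrative subset, not the paper's rule set):

        # Illustrative subsets of common Arabic prefixes/suffixes.
        PREFIXES = ["وال", "بال", "كال", "فال", "ال", "و"]
        SUFFIXES = ["ها", "ان", "ات", "ون", "ين", "ة", "ه"]

        def light_stem(word, min_stem=2):
            """Strip at most one prefix and one suffix, keeping the stem long enough."""
            for p in PREFIXES:                      # longest prefixes listed first
                if word.startswith(p) and len(word) - len(p) >= min_stem:
                    word = word[len(p):]
                    break
            for s in SUFFIXES:
                if word.endswith(s) and len(word) - len(s) >= min_stem:
                    word = word[:-len(s)]
                    break
            return word

        print(light_stem("والكتاب"))                # wa+al+kitab -> kitab ("book")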

  10. Finite Volume Method for Pricing European Call Option with Regime-switching Volatility

    Science.gov (United States)

    Lista Tauryawati, Mey; Imron, Chairul; Putri, Endah RM

    2018-03-01

    In this paper, we present a finite volume method for pricing European call options using the Black-Scholes equation with regime-switching volatility. In the first step, we formulate the Black-Scholes equations with regime-switching volatility. We then apply a fitted finite volume method for the spatial discretization together with an implicit time-stepping technique. We show that the regime-switching scheme reverts to the non-switching Black-Scholes equation, both theoretically and in numerical simulations.
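
    For the single-regime case, the implicit time-stepping idea can be sketched with plain finite differences (not the fitted finite volume scheme of the paper); parameter values are illustrative:

        import numpy as np
        from scipy.linalg import solve_banded

        def european_call_implicit(K=100, r=0.05, sigma=0.2, T=1.0,
                                   S_max=300.0, M=300, N=200):
            dt = T / N
            S = np.linspace(0.0, S_max, M + 1)
            V = np.maximum(S - K, 0.0)                  # payoff at expiry
            i = np.arange(1, M)
            a = 0.5 * dt * (r * i - sigma**2 * i**2)    # sub-diagonal
            b = 1.0 + dt * (sigma**2 * i**2 + r)        # diagonal
            c = -0.5 * dt * (r * i + sigma**2 * i**2)   # super-diagonal
            ab = np.zeros((3, M - 1))                   # banded storage for solve_banded
            ab[0, 1:], ab[1, :], ab[2, :-1] = c[:-1], b, a[1:]
            for n in range(N):                          # march backward in time
                tau = (n + 1) * dt
                bnd = S_max - K * np.exp(-r * tau)      # upper boundary value
                rhs = V[1:-1].copy()
                rhs[-1] -= c[-1] * bnd
                V[1:-1] = solve_banded((1, 1), ab, rhs)
                V[0], V[-1] = 0.0, bnd
            return S, V

        S, V = european_call_implicit()
        print(V[100])    # ~10.4 at S = K = 100 (Black-Scholes value: about 10.45)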

  11. Multisubject Learning for Common Spatial Patterns in Motor-Imagery BCI

    Directory of Open Access Journals (Sweden)

    Dieter Devlaminck

    2011-01-01

    Full Text Available Motor-imagery-based brain-computer interfaces (BCIs) commonly use the common spatial pattern (CSP) filter as a preprocessing step before feature extraction and classification. The CSP method is a supervised algorithm and therefore needs subject-specific training data for calibration, which is very time consuming to collect. In order to reduce the amount of calibration data that is needed for a new subject, one can apply multitask (from now on called multisubject) machine learning techniques to the preprocessing phase. Here, the goal of multisubject learning is to learn a spatial filter for a new subject based on its own data and that of other subjects. This paper outlines the details of the multitask CSP algorithm and shows results on two data sets. In certain subjects a clear improvement can be seen, especially when the number of training trials is relatively low.
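
    The single-subject CSP computation that the multisubject method builds on reduces to a generalized eigendecomposition; a sketch on toy data (this is the standard CSP, not the paper's multitask extension):

        import numpy as np
        from scipy.linalg import eigh

        def csp_filters(X1, X2, n_filters=6):
            """X1, X2: trials of the two imagery classes,
            shape (n_trials, n_channels, n_samples). Returns spatial filters."""
            C1 = np.mean([np.cov(trial) for trial in X1], axis=0)
            C2 = np.mean([np.cov(trial) for trial in X2], axis=0)
            # Solve C1 w = lambda (C1 + C2) w: extreme eigenvalues give filters
            # maximizing variance for one class while minimizing it for the other.
            vals, vecs = eigh(C1, C1 + C2)
            order = np.argsort(vals)
            pick = np.r_[order[:n_filters // 2], order[-(n_filters // 2):]]
            return vecs[:, pick].T

        rng = np.random.default_rng(0)
        X1 = rng.normal(size=(30, 22, 250))             # toy EEG, class 1
        X2 = 1.5 * rng.normal(size=(30, 22, 250))       # toy EEG, class 2
        W = csp_filters(X1, X2)
        feats = np.log(np.var(W @ X1[0], axis=1))       # usual log-variance features
        print(W.shape, feats.shape)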

  12. Three-Step Predictor-Corrector of Exponential Fitting Method for Nonlinear Schroedinger Equations

    International Nuclear Information System (INIS)

    Tang Chen; Zhang Fang; Yan Haiqing; Luo Tao; Chen Zhanqing

    2005-01-01

    We develop the three-step explicit and implicit schemes of exponential fitting methods. We use the three-step explicit exponential fitting scheme to predict an approximation, then use the three-step implicit exponential fitting scheme to correct this prediction. This combination is called the three-step predictor-corrector of exponential fitting method. The three-step predictor-corrector of exponential fitting method is applied to numerically compute the coupled nonlinear Schroedinger equation and the nonlinear Schroedinger equation with varying coefficients. The numerical results show that the scheme is highly accurate.

  13. Use of spectral pre-processing methods to compensate for the presence of packaging film in visible–near infrared hyperspectral images of food products

    Directory of Open Access Journals (Sweden)

    A.A. Gowen

    2010-10-01

    Full Text Available The presence of polymeric packaging film in images of food products may modify spectra obtained in hyperspectral imaging (HSI) experiments, leading to undesirable image artefacts which may impede image classification. Some pre-processing of the image is typically required to reduce the presence of such artefacts. The objective of this research was to investigate the use of spectral pre-processing techniques to compensate for the presence of packaging film in hyperspectral images obtained in the visible–near infrared wavelength range (445–945 nm), with application in food quality assessment. A selection of commonly used pre-processing methods, used individually and in combination, were applied to hyperspectral images of flat homogeneous samples, imaged in the presence and absence of different packaging films (polyvinyl chloride and polyethylene terephthalate). Effects of the selected pre-treatments on variation due to the film’s presence were examined in principal component score space. The results show that the combination of a first-derivative Savitzky–Golay filter followed by standard normal variate transformation was useful in reducing variations in spectral response caused by the presence of packaging film. Compared to other methods examined, this combination has the benefits of being computationally fast and not requiring a priori knowledge about the sample or film used.
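    The winning pre-treatment chain (first-derivative Savitzky–Golay followed by SNV) is straightforward to reproduce. A minimal sketch, assuming spectra stored row-wise; the window length and polynomial order are illustrative choices, not the paper's settings:

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess_spectra(spectra, window=11, polyorder=2):
    """First-derivative Savitzky-Golay followed by SNV, one spectrum per row."""
    d1 = savgol_filter(spectra, window_length=window, polyorder=polyorder,
                       deriv=1, axis=1)
    # Standard normal variate: center and scale each spectrum individually.
    return (d1 - d1.mean(axis=1, keepdims=True)) / d1.std(axis=1, keepdims=True)
```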

  14. Called to respond: The potential of unveiling hiddens

    Directory of Open Access Journals (Sweden)

    Alison L Black

    2014-12-01

    Full Text Available Interested in exploring how personal stories and aesthetic modes of representing experiences can nudge open academic and educational spaces, this article/collection of particles seeks to document our encounters of being affected and called to respond to things the other has written and represented. As a way of engaging with questions about what research and research data might be and become, our attention has been drawn to stories and images from our lives that we have not shaken off – and to how, as we have opened these to the other, making once private moments public, our hiddens have morphed tenderly into a shared knowing and being. As we have acted on the call we have felt to respond we have found ourselves entering spaces of collaboration, communion, contemplation, and conversation – spaces illuminated by what we have not been able to – and cannot – set aside. Using visual and poetic materials we explore heartfelt and heartbroken aspects of our educational worlds and lives, to be present with each other and our (re)emerging personal and professional meanings. We see the shared body (of work, of writing, of image) that develops from the taking of brave steps and the risky slipping off of academic masks and language, as a manifestation of the trusted and nurturing spaces that can be generated through collaborative opportunities to gather together. These steps towards unveiling hiddens are producing in us and of us a friendship, fluency, and fluidity as we write new ways of becoming. In turn, we hope the uncovering and revealing of our dialogue in the public gathering of this journal might support readers’ telling of their own life stories through what calls them to respond.

  15. Randomness in multi-step direct reactions

    International Nuclear Information System (INIS)

    Koning, A.J.; Akkermans, J.M.

    1991-01-01

    The authors propose a quantum-statistical framework that provides an integrated perspective on the differences and similarities between the many current models for multi-step direct reactions in the continuum. It is argued that to obtain a statistical theory two physically different approaches are conceivable to postulate randomness, respectively called leading-particle statistics and residual-system statistics. They present a new leading-particle statistics theory for multi-step direct reactions. It is shown that the model of Feshbach et al. can be derived as a simplification of this theory and thus can be founded solely upon leading-particle statistics. The models developed by Tamura et al. and Nishioka et al. are based upon residual-system statistics and hence fall into a physically different class of multi-step direct theories, although the resulting cross-section formulae for the important first step are shown to be the same. The widely used semi-classical models such as the generalized exciton model can be interpreted as further phenomenological simplification of the leading-particle statistics theory

  16. Masking as an effective quality control method for next-generation sequencing data analysis.

    Science.gov (United States)

    Yun, Sajung; Yun, Sijung

    2014-12-13

    Next generation sequencing produces base calls with low quality scores that can affect the accuracy of identifying simple nucleotide variation calls, including single nucleotide polymorphisms and small insertions and deletions. Here we compare the effectiveness of two data preprocessing methods, masking and trimming, and the accuracy of simple nucleotide variation calls on whole-genome sequence data from Caenorhabditis elegans. Masking substitutes low quality base calls with 'N's (undetermined bases), whereas trimming removes low quality bases, resulting in shorter read lengths. We demonstrate that masking is more effective than trimming in reducing the false-positive rate in single nucleotide polymorphism (SNP) calling. However, neither preprocessing method affected the false-negative rate in SNP calling with statistical significance compared to the data analysis without preprocessing. False-positive and false-negative rates for small insertions and deletions did not differ between masking and trimming. We recommend masking over trimming as a more effective preprocessing method for next generation sequencing data analysis, since masking reduces the false-positive rate in SNP calling without sacrificing the false-negative rate, although trimming is currently more commonly used in the field. The perl script for masking is available at http://code.google.com/p/subn/. The sequencing data used in the study were deposited in the Sequence Read Archive (SRX450968 and SRX451773).
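    The masking operation itself fits in a few lines. A minimal Python sketch (the cited tool is a Perl script; the Q20 threshold and Phred+33 offset here are my illustrative assumptions):

```python
def mask_read(sequence, qualities, threshold=20, offset=33):
    """Replace bases whose Phred score falls below `threshold` with 'N'.
    Unlike trimming, the read length is preserved."""
    return "".join(
        base if ord(q) - offset >= threshold else "N"
        for base, q in zip(sequence, qualities)
    )

# '#' encodes Q2 in Phred+33, so the third base is masked:
print(mask_read("ACGT", "II#I"))  # -> ACNT
```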

  17. Automated pre-processing and multivariate vibrational spectra analysis software for rapid results in clinical settings

    Science.gov (United States)

    Bhattacharjee, T.; Kumar, P.; Fillipe, L.

    2018-02-01

    Vibrational spectroscopy, especially FTIR and Raman, has shown enormous potential in disease diagnosis, especially in cancers. Their potential for detecting varied pathological conditions is regularly reported. However, to prove their applicability in clinics, large multi-center, multi-national studies need to be undertaken, and these will result in enormous amounts of data. A parallel effort to develop analytical methods, including user-friendly software that can quickly pre-process data and subject them to the required multivariate analysis, is warranted in order to obtain results in real time. This study reports a MATLAB based script that can automatically import data, preprocess spectra (interpolation, derivatives, normalization) and then carry out Principal Component Analysis (PCA) followed by Linear Discriminant Analysis (LDA) of the first 10 PCs, all with a single click. The software has been verified on data obtained from cell lines, animal models, and in vivo patient datasets, and gives results comparable to Minitab 16 software. The software can be used to import a variety of file extensions (.asc, .txt, .xls, and many others). Options to ignore noisy data, plot all possible graphs with PCA factors 1 to 5, and save loading factors, confusion matrices and other parameters are also present. The software can provide results for a dataset of 300 spectra within 0.01 s. We believe that the software will be vital not only in clinical trials using vibrational spectroscopic data, but also to obtain rapid results when these tools get translated into clinics.
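    The PCA-then-LDA chain is the standard reduced-dimension discriminant workflow. A self-contained sketch on synthetic stand-in data (the record's software is MATLAB; the names and toy data here are mine):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-in for imported, pre-treated spectra: 60 spectra, 500 points,
# with a toy class-dependent band so the classes are separable.
rng = np.random.default_rng(0)
spectra = rng.normal(size=(60, 500))
labels = np.repeat([0, 1], 30)
spectra[labels == 1, 100:110] += 1.0

# PCA to the first 10 PCs followed by LDA, mirroring the chain described above.
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
model.fit(spectra, labels)
print(model.score(spectra, labels))  # training accuracy on the toy data
```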

  18. FPGA Implementation of Blue Whale Calls Classifier Using High-Level Programming Tool

    Directory of Open Access Journals (Sweden)

    Mohammed Bahoura

    2016-02-01

    Full Text Available In this paper, we propose a hardware-based architecture for automatic blue whale call classification based on the short-time Fourier transform and a multilayer perceptron neural network. The proposed architecture is implemented on a field programmable gate array (FPGA) using Xilinx System Generator (XSG) and the Nexys-4 Artix-7 FPGA board. This high-level programming tool allows us to design, simulate and execute the compiled design in the Matlab/Simulink environment quickly and easily. Intermediate signals obtained at various steps of the proposed system are presented for typical blue whale calls. Classification performances based on the fixed-point XSG/FPGA implementation are compared to those obtained by the floating-point Matlab simulation, using a representative database of blue whale calls.
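    A floating-point software baseline for the same STFT-plus-MLP chain is easy to mock up. A minimal sketch on synthetic signals (feature layout, sampling rate and network size are my assumptions, not the paper's fixed-point design):

```python
import numpy as np
from scipy.signal import stft
from sklearn.neural_network import MLPClassifier

def stft_features(signal, fs, nperseg=256):
    """Log-magnitude STFT averaged over time: a fixed-length feature vector."""
    _, _, z = stft(signal, fs=fs, nperseg=nperseg)
    return np.log1p(np.abs(z)).mean(axis=1)

# Toy training run on random signals standing in for labelled call recordings.
rng = np.random.default_rng(1)
fs = 2000
X = np.array([stft_features(rng.normal(size=fs), fs) for _ in range(40)])
y = rng.integers(0, 2, size=40)
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500).fit(X, y)
```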

  19. Pre-processing, registration and selection of adaptive optics corrected retinal images.

    Science.gov (United States)

    Ramaswamy, Gomathy; Devaney, Nicholas

    2013-07-01

    In this paper, the aim is to demonstrate enhanced processing of sequences of fundus images obtained using a commercial AO flood illumination system. The purpose of the work is to (1) correct for uneven illumination at the retina (2) automatically select the best quality images and (3) precisely register the best images. Adaptive optics corrected retinal images are pre-processed to correct uneven illumination using different methods; subtracting or dividing by the average filtered image, homomorphic filtering and a wavelet based approach. These images are evaluated to measure the image quality using various parameters, including sharpness, variance, power spectrum kurtosis and contrast. We have carried out the registration in two stages; a coarse stage using cross-correlation followed by fine registration using two approaches; parabolic interpolation on the peak of the cross-correlation and maximum-likelihood estimation. The angle of rotation of the images is measured using a combination of peak tracking and Procrustes transformation. We have found that a wavelet approach (Daubechies 4 wavelet at 6th level decomposition) provides good illumination correction with clear improvement in image sharpness and contrast. The assessment of image quality using a 'Designer metric' works well when compared to visual evaluation, although it is highly correlated with other metrics. In image registration, sub-pixel translation measured using parabolic interpolation on the peak of the cross-correlation function and maximum-likelihood estimation are found to give very similar results (RMS difference 0.047 pixels). We have confirmed that correcting rotation of the images provides a significant improvement, especially at the edges of the image. We observed that selecting the better quality frames (e.g. best 75% images) for image registration gives improved resolution, at the expense of poorer signal-to-noise. The sharpness map of the registered and de-rotated images shows increased
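    Of the two fine-registration approaches compared above, the parabolic interpolation on the cross-correlation peak is compact enough to sketch. A minimal version (FFT-based circular cross-correlation of equally sized images; the peak is assumed to lie away from the border; the function name is mine):

```python
import numpy as np

def subpixel_shift(ref, img):
    """Translation of img relative to ref: coarse peak of the circular
    cross-correlation, refined by a parabola through the peak's neighbours."""
    corr = np.real(np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))))
    corr = np.fft.fftshift(corr)
    py, px = np.unravel_index(np.argmax(corr), corr.shape)

    def vertex(cm, c0, cp):
        # Vertex of the parabola through three samples centred on the peak.
        denom = cm - 2.0 * c0 + cp
        return 0.0 if denom == 0 else 0.5 * (cm - cp) / denom

    dy = vertex(corr[py - 1, px], corr[py, px], corr[py + 1, px])
    dx = vertex(corr[py, px - 1], corr[py, px], corr[py, px + 1])
    cy, cx = ref.shape[0] // 2, ref.shape[1] // 2
    return py + dy - cy, px + dx - cx
```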

  20. Research of high speed data readout and pre-processing system based on xTCA for silicon pixel detector

    International Nuclear Information System (INIS)

    Zhao Jingzhou; Lin Haichuan; Guo Fang; Liu Zhen'an; Xu Hao; Gong Wenxuan; Liu Zhao

    2012-01-01

    As detectors have developed, silicon pixel detectors have been widely used in high energy physics experiments. Reading out silicon pixel detectors, which generate increasingly large data volumes, requires a data processing system with high speed, high bandwidth and high availability. The same issue arises for the Belle II Pixel Detector, a new type of silicon pixel detector used in the high-luminosity SuperKEKB accelerator. The paper describes the research of a high speed data readout and pre-processing system based on xTCA for silicon pixel detectors. The system consists of High Performance Computer Nodes (HPCN) based on xTCA and an ATCA frame. Each HPCN consists of 4 XFPs based on AMC, 1 AMC Carrier ATCA Board (ACAB) and 1 Rear Transmission Module. It is characterized by 5 high performance FPGAs, 16 fiber links based on RocketIO, 5 Gbit Ethernet ports and DDR2 with a capacity of up to 18 GB. In an ATCA frame, 14 HPCNs form a system that uses the high-speed backplane to perform data pre-processing and triggering. This system will be used in the trigger and data acquisition system of the Belle II Pixel Detector. (authors)

  1. Flexible high-speed FASTBUS master for data read-out and preprocessing

    International Nuclear Information System (INIS)

    Wurz, A.; Manner, R.

    1990-01-01

    This paper describes a single slot FASTBUS master module. It can be used for read-out and preprocessing of data that are read out from FASTBUS modules, e.g., an ADC system. The module consists of a 25 MHz, 32-bit MC68030 processor with cache memory and memory management, an MC68882 floating point coprocessor, 4 MBytes of main memory, and FASTBUS master and slave interfaces. In addition, a DMA controller for read-out of FASTBUS data is provided. The processor allows I/O via serial ports, a 16-bit parallel port, and a transputer link. Additional interfaces are planned. The main memory is multi-ported and can be accessed directly by the CPU, the FASTBUS, and external masters via the high-speed local bus that is accessible by way of a connector. The FASTBUS interface supports most of the standard operations in master and slave mode

  2. Using primary care electronic health record data for comparative effectiveness research : experience of data quality assessment and preprocessing in The Netherlands

    NARCIS (Netherlands)

    Huang, Yunyu; Voorham, Jaco; Haaijer-Ruskamp, Flora M.

    Aim: Details of data quality and how quality issues were solved have not been reported in published comparative effectiveness studies using electronic health record data. Methods: We developed a conceptual framework of data quality assessment and preprocessing and apply it to a study comparing

  3. Blended call center with idling times during the call service

    NARCIS (Netherlands)

    Legros, Benjamin; Jouini, Oualid; Koole, Ger

    We consider a blended call center with calls arriving over time and an infinitely backlogged amount of outbound jobs. Inbound calls have a non-preemptive priority over outbound jobs. The inbound call service is characterized by three successive stages where the second one is a break; i.e., there is

  4. Peak Detection Method Evaluation for Ion Mobility Spectrometry by Using Machine Learning Approaches

    DEFF Research Database (Denmark)

    Hauschild, Anne-Christin; Kopczynski, Dominik; D'Addario, Marianna

    2013-01-01

    …machine learning methods exist, an inevitable preprocessing step is reliable and robust peak detection without manual intervention. In this work we evaluate four state-of-the-art approaches for automated IMS-based peak detection: local maxima search, watershed transformation with IPHEx, region-merging with VisualNow, and peak model estimation (PME). We manually generated a gold standard with the aid of a domain expert (manual) and compare the performance of the four peak calling methods with respect to two distinct criteria. We first utilize established machine learning methods…

  5. A strand specific high resolution normalization method for chip-sequencing data employing multiple experimental control measurements

    DEFF Research Database (Denmark)

    Enroth, Stefan; Andersson, Claes; Andersson, Robin

    2012-01-01

    High-throughput sequencing is becoming the standard tool for investigating protein-DNA interactions or epigenetic modifications. However, the data generated will always contain noise due to e.g. repetitive regions or non-specific antibody interactions. The noise will appear in the form of a background signal. Commonly, the background is only used to adjust peak calling and not as a pre-processing step that aims at discerning the signal from the background noise. A normalization procedure that extracts the signal of interest would be of universal use when investigating genomic patterns.

  6. Current breathomics-a review on data pre-processing techniques and machine learning in metabolomics breath analysis

    DEFF Research Database (Denmark)

    Smolinska, A.; Hauschild, A. C.; Fijten, R. R. R.

    2014-01-01

    …been extensively developed. Yet, the application of machine learning methods for fingerprinting VOC profiles in breathomics is still in its infancy. Therefore, in this paper, we describe the current state of the art in data pre-processing and multivariate analysis of breathomics data. We start… different conditions (e.g. disease stage, treatment). Independently of the utilized analytical method, the most important question, 'which VOCs are discriminatory?', remains the same. Answers can be given by several modern machine learning techniques (multivariate statistics) and, therefore, are the focus…

  7. Robust preprocessing for stimulus-based functional MRI of the moving fetus.

    Science.gov (United States)

    You, Wonsang; Evangelou, Iordanis E; Zun, Zungho; Andescavage, Nickie; Limperopoulos, Catherine

    2016-04-01

    Fetal motion manifests as signal degradation and image artifact in the acquired time series of blood oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI) studies. We present a robust preprocessing pipeline to specifically address fetal and placental motion-induced artifacts in stimulus-based fMRI with slowly cycled block design in the living fetus. In the proposed pipeline, motion correction is optimized to the experimental paradigm, and it is performed separately in each phase as well as in each region of interest (ROI), recognizing that each phase and organ experiences different types of motion. To obtain the averaged BOLD signals for each ROI, both misaligned volumes and noisy voxels are automatically detected and excluded, and the missing data are then imputed by statistical estimation based on local polynomial smoothing. Our experimental results demonstrate that the proposed pipeline was effective in mitigating the motion-induced artifacts in stimulus-based fMRI data of the fetal brain and placenta.
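    The imputation step described here, replacing excluded samples with an estimate from local polynomial smoothing, can be sketched generically. This is a simple stand-in under my own assumptions (window size, polynomial degree), not the authors' exact estimator:

```python
import numpy as np

def impute_timeseries(signal, bad, window=7, degree=2):
    """Replace flagged samples by a local polynomial fitted to good neighbours.
    signal: 1-D ROI-averaged BOLD series; bad: boolean mask of excluded samples."""
    t = np.arange(len(signal))
    out = signal.astype(float).copy()
    half = window // 2
    for i in np.where(bad)[0]:
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        sel = ~bad[lo:hi]
        if sel.sum() > degree:  # enough good neighbours for a stable fit
            coef = np.polyfit(t[lo:hi][sel], signal[lo:hi][sel], degree)
            out[i] = np.polyval(coef, i)
    return out
```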

  8. Microfluidic step-emulsification in axisymmetric geometry.

    Science.gov (United States)

    Chakraborty, I; Ricouvier, J; Yazhgur, P; Tabeling, P; Leshansky, A M

    2017-10-25

    Biphasic step-emulsification (Z. Li et al., Lab Chip, 2015, 15, 1023) is a promising microfluidic technique for high-throughput production of μm and sub-μm highly monodisperse droplets. The step-emulsifier consists of a shallow (Hele-Shaw) microchannel operating with two co-flowing immiscible liquids and an abrupt expansion (i.e., step) to a deep and wide reservoir. Under certain conditions the confined stream of the disperse phase, engulfed by the co-flowing continuous phase, breaks into small highly monodisperse droplets at the step. Theoretical investigation of the corresponding hydrodynamics is complicated due to the complex geometry of the planar device, calling for numerical approaches. However, direct numerical simulations of the three dimensional surface-tension-dominated biphasic flows in confined geometries are computationally expensive. In the present paper we study a model problem of axisymmetric step-emulsification. This setup consists of a stable core-annular biphasic flow in a cylindrical capillary tube connected co-axially to a reservoir tube of a larger diameter through a sudden expansion mimicking the edge of the planar step-emulsifier. We demonstrate that the axisymmetric setup exhibits similar regimes of droplet generation to the planar device. A detailed parametric study of the underlying hydrodynamics is feasible via inexpensive (two dimensional) simulations owing to the axial symmetry. The phase diagram quantifying the different regimes of droplet generation in terms of governing dimensionless parameters is presented. We show that in qualitative agreement with experiments in planar devices, the size of the droplets generated in the step-emulsification regime is independent of the capillary number and almost insensitive to the viscosity ratio. These findings confirm that the step-emulsification regime is solely controlled by surface tension. The numerical predictions are in excellent agreement with in-house experiments with the axisymmetric

  9. The statistics of multi-step direct reactions

    International Nuclear Information System (INIS)

    Koning, A.J.; Akkermans, J.M.

    1991-01-01

    We propose a quantum-statistical framework that provides an integrated perspective on the differences and similarities between the many current models for multi-step direct reactions in the continuum. It is argued that to obtain a statistical theory two physically different approaches are conceivable to postulate randomness, respectively called leading-particle statistics and residual-system statistics. We present a new leading-particle statistics theory for multi-step direct reactions. It is shown that the model of Feshbach et al. can be derived as a simplification of this theory and thus can be founded solely upon leading-particle statistics. The models developed by Tamura et al. and Nishioka et al. are based upon residual-system statistics and hence fall into a physically different class of multi-step direct theories, although the resulting cross-section formulae for the important first step are shown to be the same. The widely used semi-classical models such as the generalized exciton model can be interpreted as further phenomenological simplifications of the leading-particle statistics theory. A more comprehensive exposition will appear before long. (author). 32 refs, 4 figs

  10. An Innovative Hybrid Model Based on Data Pre-Processing and Modified Optimization Algorithm and Its Application in Wind Speed Forecasting

    Directory of Open Access Journals (Sweden)

    Ping Jiang

    2017-07-01

    Full Text Available Wind speed forecasting plays an irreplaceable role in the high-efficiency operation of wind farms and is significant in wind-related engineering studies. Back-propagation (BP) algorithms have been comprehensively employed to forecast time series that are nonlinear, irregular, and unstable. However, a single model usually overlooks the importance of data pre-processing and parameter optimization, which results in weak forecasting performance. In this paper, a more precise and robust model that combines data pre-processing, a BP neural network, and a modified artificial intelligence optimization algorithm is proposed, which avoids the limitations of the individual algorithms. The novel model not only improves the forecasting accuracy but also retains the advantages of the firefly algorithm (FA) and overcomes the disadvantage of the FA while optimizing in the later stage. To verify the forecasting performance of the presented hybrid model, 10-min wind speed data from Penglai city, Shandong province, China, were analyzed in this study. The simulations revealed that the proposed hybrid model significantly outperforms other single metaheuristics.

  11. Unsharp masking technique as a preprocessing filter for improvement of 3D-CT image of bony structure in the maxillofacial region

    International Nuclear Information System (INIS)

    Harada, Takuya; Nishikawa, Keiichi; Kuroyanagi, Kinya

    1998-01-01

    We evaluated the usefulness of the unsharp masking technique as a preprocessing filter to improve 3D-CT images of bony structure in the maxillofacial region. The effect of the unsharp masking technique with several combinations of mask size and weighting factor on image resolution was investigated using a spatial frequency phantom made of bone-equivalent material. The 3D-CT images were obtained with scans perpendicular to and parallel to the phantom plates. The contrast transfer function (CTF) and the full width at half maximum (FWHM) of each spatial frequency component were measured. The FWHM was expressed as a ratio against the actual thickness of phantom plate. The effect on pseudoforamina was assessed using sliced CT images obtained in clinical bony 3D-CT examinations. The effect of the unsharp masking technique on image quality was also visually evaluated using five clinical fracture cases. CTFs did not change. FWHM ratios of original 3D-CT images were smaller than 1.0, regardless of the scanning direction. Those in scans perpendicular to the phantom plates were not changed by the unsharp masking technique. Those in parallel scanning were increased by mask size and weighting factor. The area of pseudoforamina decreased with increases in mask size and weighting factor. The combination of mask size 3 x 3 pixels and weighting factor 5 was optimal. Visual evaluation indicated that preprocessing with the unsharp masking technique improved the image quality of the 3D-CT images. The unsharp masking technique is useful as a preprocessing filter to improve the 3D-CT image of bony structure in the maxillofacial region. (author)
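    The filter itself is a one-liner around a mean mask. A minimal sketch whose defaults follow the combination reported optimal above (3 × 3 mask, weighting factor 5):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def unsharp_mask(image, size=3, weight=5.0):
    """Add back the high-frequency residual of a size x size mean mask,
    scaled by the weighting factor."""
    blurred = uniform_filter(image.astype(float), size=size)
    return image + weight * (image - blurred)
```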

  12. Neural network-based preprocessing to estimate the parameters of the X-ray emission of a single-temperature thermal plasma

    Science.gov (United States)

    Ichinohe, Y.; Yamada, S.; Miyazaki, N.; Saito, S.

    2018-04-01

    We present data preprocessing based on an artificial neural network to estimate the parameters of the X-ray emission spectra of a single-temperature thermal plasma. The method finds appropriate parameters close to the global optimum. The neural network is designed to learn the parameters of the thermal plasma (temperature, abundance, normalization and redshift) of the input spectra. After training on 9000 simulated X-ray spectra, the network predicts all the unknown parameters with uncertainties of about a few per cent. The dependence of performance on the network structure has been studied. We applied the neural network to an actual high-resolution spectrum obtained with Hitomi. The predicted plasma parameters agree with the known best-fitting parameters of the Perseus cluster within uncertainties of ≲10 per cent. The result shows that neural networks trained on simulated data might possibly be used to extract features built into the data. This would reduce human-intensive preprocessing costs before detailed spectral analysis, and would help us make the best use of the large quantities of spectral data that will become available in the coming decades.

  13. Step-up fecal microbiota transplantation (FMT) strategy

    Science.gov (United States)

    Cui, Bota; Li, Pan; Xu, Lijuan; Peng, Zhaoyuan; Xiang, Jie; He, Zhi; Zhang, Ting; Ji, Guozhong; Nie, Yongzhan; Wu, Kaichun; Fan, Daiming; Zhang, Faming

    2016-01-01

    ABSTRACT Gut dysbiosis is a characteristic of inflammatory bowel disease (IBD) and is believed to play a role in the pathogenesis of IBD. Fecal microbiota transplantation (FMT) is an effective strategy to restore intestinal microbial diversity and has been reported to have a potential therapeutic value in IBD. Our recent study reported a holistic integrative therapy called “step-up FMT strategy,” which was beneficial in treating steroid-dependent IBD patients. This strategy consists of scheduled FMTs combined with steroids, anti-TNF-α antibody treatment or enteral nutrition. Herein, we will elaborate the strategy thoroughly, introducing the concept, potential indication, methodology, and safety of “step-up FMT strategy” in detail. PMID:26939622

  14. Seven steps to curb global warming

    International Nuclear Information System (INIS)

    Mathews, John

    2007-01-01

    Based on best current estimates that the world needs to reduce global carbon dioxide emissions by 70% by 2050, and that there is at best a 10-year window of opportunity available to initiate the enormous changes needed, this paper proposes a set of seven self-contained steps that can be taken at a global level to tackle the problem with some prospect of success. The steps are self-financing and practicable, in that they are based on existing technologies. They involve agreement to create a new international agency charged with formulating and policing a global carbon pricing regime; a complementary step involving global monitoring of greenhouse gas emissions utilizing satellite resources; taking steps to compensate developing countries for preserving rainforest as carbon sinks; the dismantling of newly created trade barriers holding back global trade in biofuels; global promotion of a transition to renewable sources of electricity through facilitation of grid interconnections with independent power producers; a global moratorium on the building of new coal-fired power stations; and recycling of carbon revenues to promote uptake of renewable energy sources in developing countries, particularly Brazil, India and China. Taken as a group, it is argued that these steps are both necessary and sufficient. They call for institutional innovations at a global level that are politically difficult but feasible, given the magnitude of the problems addressed

  15. Road Sign Recognition with Fuzzy Adaptive Pre-Processing Models

    Science.gov (United States)

    Lin, Chien-Chuan; Wang, Ming-Shi

    2012-01-01

    A road sign recognition system based on adaptive image pre-processing models using two fuzzy inference schemes has been proposed. The first fuzzy inference scheme checks the changes of light illumination and rich red color in a frame image by means of checking areas. The other checks the variance of the vehicle's speed and the angle of the steering wheel to select an adaptive size and position for the detection area. The Adaboost classifier was employed to detect road sign candidates in an image, and the support vector machine technique was employed to recognize the content of the road sign candidates. Prohibitory and warning road traffic signs are the processing targets in this research. The detection rate in the detection phase is 97.42%. In the recognition phase, the recognition rate is 93.04%. The total accuracy rate of the system is 92.47%. For video sequences, the best accuracy rate is 90.54%, and the average accuracy rate is 80.17%. The average computing time is 51.86 milliseconds per frame. The proposed system can not only overcome the problems of low illumination and rich red color around road signs but also offers high detection rates and high computing performance. PMID:22778650

  16. Modeling the stepping mechanism in negative lightning leaders

    Science.gov (United States)

    Iudin, Dmitry; Syssoev, Artem; Davydenko, Stanislav; Rakov, Vladimir

    2017-04-01

    It is well known that negative leaders develop in a stepped manner through a mechanism of so-called space leaders, in contrast to positive ones, which propagate continuously. Although this fact has been known for about a hundred years, until now no plausible model explaining this asymmetry had been developed. In this study we suggest a model of the stepped development of the negative lightning leader which for the first time allows numerical simulation of its evolution. The model is based on a probabilistic approach and a description of the temporal evolution of the discharge channels. One of the key features of our model is accounting for the presence of so-called space streamers/leaders, which play a fundamental role in the formation of the negative leader's steps. Their appearance becomes possible by accounting for the potential influence of the space charge injected into the discharge gap by the streamer corona. The model takes into account an asymmetry in the properties of negative and positive streamers, based on the fact, well known from numerous laboratory measurements, that positive streamers need an electric field about half as strong as negative ones to appear and propagate. Extinction of the conducting channel as a possible path of its evolution is also taken into account. This allows us to describe the formation of the leader channel's sheath. To verify the morphology and characteristics of the model discharge, we use the results of high-speed video observations of natural negative stepped leaders. We can conclude that the key properties of the model and natural negative leaders are very similar.

  17. Bat calls while preying: A method for reconstructing the signal emitted by a directional sound source

    DEFF Research Database (Denmark)

    Guarato, Francesco; Hallam, John

    2010-01-01

    Understanding and modeling bat biosonar behavior should take into account what the bat actually emitted while exploring the surrounding environment. Recording of the bat calls could be performed by means of a telemetry system small enough to sit on the bat head, though filtering due to bat… directivity affects recordings, and not all bat species are able to carry such a device. Instead, remote microphone recordings of the bat calls could be processed by means of a mathematical method that estimates bat head orientation as a first step before calculating the amplitudes of each call for each… and discussed. A further improvement of the method is necessary as its performance for call reconstruction strongly depends on correct choice of the sample at which the recorded call is thought to start in each microphone data set…

  18. Joint preprocesser-based detector for cooperative networks with limited hardware processing capability

    KAUST Repository

    Abuzaid, Abdulrahman I.

    2015-02-01

    In this letter, a joint detector for cooperative communication networks is proposed for the case where the destination has limited hardware processing capability. The transmitter sends its symbols with the help of L relays. As the destination has limited hardware, only U out of L signals are processed and the energy of the remaining relays is lost. To solve this problem, a joint preprocessing based detector is proposed. This joint preprocessor based detector operates on the principle of minimizing the symbol error rate (SER). For a realistic assessment, pilot symbol aided channel estimation is incorporated for the proposed detector. From our simulations, it can be observed that our proposed detector achieves the same SER performance as that of the maximum likelihood (ML) detector with all participating relays. Additionally, our detector outperforms selection combining (SC), the channel shortening (CS) scheme and reduced-rank techniques when using the same U. Our proposed scheme has low computational complexity.

  19. Dynamic call center routing policies using call waiting and agent idle times

    NARCIS (Netherlands)

    Chan, W.; Koole, G.M.; L'Ecuyer, P.

    2014-01-01

    We study call routing policies for call centers with multiple call types and multiple agent groups. We introduce new weight-based routing policies where each pair (call type, agent group) is given a matching priority defined as an affine combination of the longest waiting time for that call type and

  20. The PREP Pipeline: Standardized preprocessing for large-scale EEG analysis

    Directory of Open Access Journals (Sweden)

    Nima Bigdely-Shamlo

    2015-06-01

    Full Text Available The technology to collect brain imaging and physiological measures has become portable and ubiquitous, opening the possibility of large-scale analysis of real-world human imaging. By its nature, such data is large and complex, making automated processing essential. This paper shows how lack of attention to the very early stages of an EEG preprocessing pipeline can reduce the signal-to-noise ratio and introduce unwanted artifacts into the data, particularly for computations done in single precision. We demonstrate that ordinary average referencing improves the signal-to-noise ratio, but that noisy channels can contaminate the results. We also show that identification of noisy channels depends on the reference and examine the complex interaction of filtering, noisy channel identification, and referencing. We introduce a multi-stage robust referencing scheme to deal with the noisy channel-reference interaction. We propose a standardized early-stage EEG processing pipeline (PREP) and discuss the application of the pipeline to more than 600 EEG datasets. The pipeline includes an automatically generated report for each dataset processed. Users can download the PREP pipeline as a freely available MATLAB library from http://eegstudy.org/prepcode/.

  1. The PREP pipeline: standardized preprocessing for large-scale EEG analysis.

    Science.gov (United States)

    Bigdely-Shamlo, Nima; Mullen, Tim; Kothe, Christian; Su, Kyung-Min; Robbins, Kay A

    2015-01-01

    The technology to collect brain imaging and physiological measures has become portable and ubiquitous, opening the possibility of large-scale analysis of real-world human imaging. By its nature, such data is large and complex, making automated processing essential. This paper shows how lack of attention to the very early stages of an EEG preprocessing pipeline can reduce the signal-to-noise ratio and introduce unwanted artifacts into the data, particularly for computations done in single precision. We demonstrate that ordinary average referencing improves the signal-to-noise ratio, but that noisy channels can contaminate the results. We also show that identification of noisy channels depends on the reference and examine the complex interaction of filtering, noisy channel identification, and referencing. We introduce a multi-stage robust referencing scheme to deal with the noisy channel-reference interaction. We propose a standardized early-stage EEG processing pipeline (PREP) and discuss the application of the pipeline to more than 600 EEG datasets. The pipeline includes an automatically generated report for each dataset processed. Users can download the PREP pipeline as a freely available MATLAB library from http://eegstudy.org/prepcode.
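    PREP itself is a MATLAB library; the core idea of its robust referencing stage can nevertheless be caricatured in a few lines. A greatly simplified sketch (the thresholds and deviation measure are mine; the real pipeline also interpolates bad channels from their neighbours before referencing):

```python
import numpy as np

def robust_average_reference(eeg, z_thresh=5.0, n_iter=4):
    """Iteratively estimate an average reference from channels not currently
    flagged as noisy. eeg: (n_channels, n_samples)."""
    good = np.ones(eeg.shape[0], dtype=bool)
    for _ in range(n_iter):
        ref = eeg[good].mean(axis=0)            # reference from good channels only
        deviation = np.abs(eeg - ref).std(axis=1)
        med = np.median(deviation[good])
        mad = np.median(np.abs(deviation[good] - med)) + 1e-12
        good = 0.6745 * (deviation - med) / mad < z_thresh  # robust z-score
    return eeg - ref, good
```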

  2. The Point Zoro Symmetric Single-Step Procedure for Simultaneous Estimation of Polynomial Zeros

    Directory of Open Access Journals (Sweden)

    Mansor Monsi

    2012-01-01

    Full Text Available The point symmetric single step procedure PSS1 has an R-order of convergence of at least 3. This procedure is modified by adding another single step, which becomes the third step in PSS1. The modified procedure is called the point zoro symmetric single-step procedure PZSS1. It is proven that the R-order of convergence of PZSS1 is at least 4, which is higher than the R-orders of convergence of PT1, PS1, and PSS1. Hence, computational time is reduced, since this procedure is more efficient for bounding simple zeros simultaneously.

  3. Development of STEP-NC Adaptor for Advanced Web Manufacturing System

    Science.gov (United States)

    Ajay Konapala, Mr.; Koona, Ramji, Dr.

    2017-08-01

    Information systems play a key role in the modern era of Information Technology. Rapid developments in IT and global competition call for many changes in the basic CAD/CAM/CAPP/CNC manufacturing chain of operations. 'STEP-NC', an enhancement of STEP for operating CNC machines, creates new opportunities for collaborative, concurrent, adaptive work across the manufacturing chain of operations. Schemas and data models defined by ISO 14649 in liaison with the ISO 10303 standards make the STEP-NC file rich with feature-based information, rather than the mere point-to-point information of the G/M-code format. But one needs a suitable information system to understand and modify these files. Various STEP-NC information systems are reviewed to assess the suitability of STEP-NC for web manufacturing. The present work also deals with the development of an adaptor which imports a STEP-NC file, organizes its information, allows modifications to entity values, and finally generates a new STEP-NC file for export. The system is designed and developed to work on the web, both to gain additional benefits through the web and to be part of a proposed 'Web based STEP-NC manufacturing platform', which is under development and described as future scope.

  4. Gravity gradient preprocessing at the GOCE HPF

    Science.gov (United States)

    Bouman, J.; Rispens, S.; Gruber, T.; Schrama, E.; Visser, P.; Tscherning, C. C.; Veicherts, M.

    2009-04-01

    One of the products derived from the GOCE observations is the gravity gradients. These gravity gradients are provided in the Gradiometer Reference Frame (GRF) and are calibrated in-flight using satellite shaking and star sensor data. In order to use these gravity gradients for applications in Earth sciences and gravity field analysis, additional pre-processing needs to be done, including corrections for temporal gravity field signals to isolate the static gravity field part, screening for outliers, calibration by comparison with existing external gravity field information, and error assessment. The temporal gravity gradient corrections consist of tidal and non-tidal corrections. These are all generally below the gravity gradient error level, which is predicted to show a 1/f behaviour for low frequencies. In the outlier detection the 1/f error is compensated for by subtracting a local median from the data, while the data error is assessed using the median absolute deviation. The local median acts as a high-pass filter and it is robust, as is the median absolute deviation. Three different methods have been implemented for the calibration of the gravity gradients. All three methods use a high-pass filter to compensate for the 1/f gravity gradient error. The baseline method uses state-of-the-art global gravity field models, and the most accurate results are obtained if star sensor misalignments are estimated along with the calibration parameters. A second calibration method uses GOCE GPS data to estimate a low degree gravity field model as well as gravity gradient scale factors. Both methods allow gravity gradient scale factors to be estimated down to the 10⁻³ level. The third calibration method uses highly accurate terrestrial gravity data in selected regions to validate the gravity gradient scale factors, focussing on the measurement band. Gravity gradient scale factors may be estimated down to the 10⁻² level with this method.
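    The outlier-screening recipe described here (a local median as a robust high-pass against the 1/f error, with the median absolute deviation as the error measure) is easy to sketch. A minimal version with an illustrative window length:

```python
import numpy as np
from scipy.ndimage import median_filter

def screen_gradients(ggt, window=101, n_mad=5.0):
    """Flag outliers after removing a local median (robust high-pass against
    the 1/f error). ggt: 1-D gravity-gradient time series."""
    residual = ggt - median_filter(ggt, size=window, mode="nearest")
    centred = residual - np.median(residual)
    mad = np.median(np.abs(centred))
    return np.abs(centred) > n_mad * 1.4826 * mad  # 1.4826 scales MAD to sigma
```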

  5. An overview on STEP-NC compliant controller development

    Science.gov (United States)

    Othman, M. A.; Minhat, M.; Jamaludin, Z.

    2017-10-01

    The capabilities of conventional Computer Numerical Control (CNC) machine tools, as the final stage of the manufacturing chain, to fabricate high-quality parts promptly, economically and precisely are undeniable. To date, most CNCs follow the programming standard ISO 6983, also called G & M code. However, in a fluctuating shop-floor environment, the flexibility and interoperability of current CNC systems, and their ability to react dynamically and adaptively, are believed to be still limited. This outdated programming language describes only block-by-block motion and does not explicitly carry the higher-level product information needed to control arbitrary operations. To address this limitation, a new standard known as STEP-NC was developed in the late 1990s and is formalized as ISO 14649. It adds intelligence to the CNC in terms of interoperability, flexibility, adaptability and openness. This paper presents an overview of the research work that has been done in developing STEP-NC controller standards and the capabilities of STEP-NC to meet modern manufacturing demands. Reviews indicate that most existing STEP-NC controller prototypes are based on type 1 and type 2 implementation levels. There is still a lack of effort to develop type 3 and type 4 STEP-NC compliant controllers.

  6. A non-parametric peak calling algorithm for DamID-Seq.

    Directory of Open Access Journals (Sweden)

    Renhua Li

    Full Text Available Protein-DNA interactions play a significant role in gene regulation and expression. In order to identify transcription factor binding sites (TFBS) of doublesex (DSX), an important transcription factor in sex determination, we applied the DNA adenine methylation identification (DamID) technology to the fat body tissue of Drosophila, followed by deep sequencing (DamID-Seq). One feature of DamID-Seq data is that induced adenine methylation signals are not assured to be symmetrically distributed at TFBS, which renders the existing peak calling algorithms for ChIP-Seq, including SPP and MACS, inappropriate for DamID-Seq data. This challenged us to develop a new algorithm for peak calling. A challenge in peak calling based on sequence data is estimating the averaged behavior of background signals. We applied a bootstrap resampling method to short sequence reads in the control (Dam only). After data quality checks and mapping reads to a reference genome, the peak calling procedure comprises the following steps: 1) reads resampling; 2) reads scaling (normalization) and computing signal-to-noise fold changes; 3) filtering; 4) calling peaks based on a statistically significant threshold. This is a non-parametric method for peak calling (NPPC). We also used irreproducible discovery rate (IDR) analysis, as well as ChIP-Seq data, to compare the peaks called by the NPPC. We identified approximately 6,000 peaks for DSX, which point to 1,225 genes related to the fat body tissue difference between female and male Drosophila. Statistical evidence from IDR analysis indicated that these peaks are reproducible across biological replicates. In addition, these peaks are comparable to those identified by use of ChIP-Seq on S2 cells, in terms of peak number, location, and peak width.

  7. A non-parametric peak calling algorithm for DamID-Seq.

    Science.gov (United States)

    Li, Renhua; Hempel, Leonie U; Jiang, Tingbo

    2015-01-01

    Protein-DNA interactions play a significant role in gene regulation and expression. In order to identify transcription factor binding sites (TFBS) of doublesex (DSX), an important transcription factor in sex determination, we applied the DNA adenine methylation identification (DamID) technology to the fat body tissue of Drosophila, followed by deep sequencing (DamID-Seq). One feature of DamID-Seq data is that induced adenine methylation signals are not assured to be symmetrically distributed at TFBS, which renders the existing peak calling algorithms for ChIP-Seq, including SPP and MACS, inappropriate for DamID-Seq data. This challenged us to develop a new algorithm for peak calling. A challenge in peak calling based on sequence data is estimating the averaged behavior of background signals. We applied a bootstrap resampling method to short sequence reads in the control (Dam only). After data quality checks and mapping reads to a reference genome, the peak calling procedure comprises the following steps: 1) reads resampling; 2) reads scaling (normalization) and computing signal-to-noise fold changes; 3) filtering; 4) calling peaks based on a statistically significant threshold. This is a non-parametric method for peak calling (NPPC). We also used irreproducible discovery rate (IDR) analysis, as well as ChIP-Seq data, to compare the peaks called by the NPPC. We identified approximately 6,000 peaks for DSX, which point to 1,225 genes related to the fat body tissue difference between female and male Drosophila. Statistical evidence from IDR analysis indicated that these peaks are reproducible across biological replicates. In addition, these peaks are comparable to those identified by use of ChIP-Seq on S2 cells, in terms of peak number, location, and peak width.
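    The four-step procedure can be mocked up end to end on binned read counts. A rough sketch under my own assumptions; in particular, a plain fold-change cutoff stands in for the paper's statistical significance threshold:

```python
import numpy as np

def call_peaks(dam_fusion, dam_only, n_boot=100, fold=2.0, rng=None):
    """dam_fusion, dam_only: per-bin read counts of equal length."""
    if rng is None:
        rng = np.random.default_rng(0)
    # 1) resample the control reads to estimate the averaged background level
    boots = rng.choice(dam_only, size=(n_boot, dam_only.size), replace=True)
    background = boots.mean()
    # 2) scale libraries to equal depth, then signal-to-noise fold change
    scale = dam_only.sum() / dam_fusion.sum()
    ratio = (dam_fusion * scale + 1.0) / (dam_only + 1.0)  # pseudocounts
    # 3) filter out low-coverage bins; 4) call peaks above the threshold
    covered = (dam_fusion + dam_only) > background
    return np.where(covered & (ratio >= fold))[0]
```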

  8. Two-step calibration method for multi-algorithm score-based face recognition systems by minimizing discrimination loss

    NARCIS (Netherlands)

    Susyanto, N.; Veldhuis, R.N.J.; Spreeuwers, L.J.; Klaassen, C.A.J.; Fierrez, J.; Li, S.Z.; Ross, A.; Veldhuis, R.; Alonso-Fernandez, F.; Bigun, J.

    2016-01-01

    We propose a new method for combining multi-algorithm score-based face recognition systems, which we call the two-step calibration method. Typically, algorithms for face recognition systems produce dependent scores. The two-step method is based on parametric copulas to handle this dependence. Its

  9. Switched Flip-Flop based Preprocessing Circuit for ISFETs

    Directory of Open Access Journals (Sweden)

    Martin Kollár

    2005-03-01

    Full Text Available In this paper, a preprocessing circuit for ISFETs (ion-sensitive field-effect transistors) to measure hydrogen-ion concentration in electrolyte is presented. A modified flip-flop is the main part of the circuit. The modification consists in replacing the standard transistors by ISFETs and periodically switching the supply voltage on and off. The concentration of hydrogen ions to be measured breaks the flip-flop value symmetry, which means that when the supply voltage is switched on, the flip-flop goes to one of two stable states, 'one' or 'zero'. The recovery of the value symmetry can be achieved by changing a balance voltage, which is incorporated into the flip-flop, to bring the flip-flop to a 50% position (probability of 'one' equal to probability of 'zero'). Thus, the balance voltage reflects the measured concentration of hydrogen ions. Its magnitude is set automatically by using a feedback circuit whose input is connected to the flip-flop output. The preprocessing circuit as a whole is the well-known δ modulator, in which the switched flip-flop serves as a comparator and a sampling circuit. The advantages of this approach in comparison to those of standard approaches are discussed. Finally, theoretical results are verified by simulations with TSPICE and a good agreement is reported.

  10. Spectral Difference in the Image Domain for Large Neighborhoods, a GEOBIA Pre-Processing Step for High Resolution Imagery

    Directory of Open Access Journals (Sweden)

    Roeland de Kok

    2012-08-01

    Full Text Available Contrast plays an important role in the visual interpretation of imagery. To mimic visual interpretation and using contrast in a Geographic Object Based Image Analysis (GEOBIA environment, it is useful to consider an analysis for single pixel objects. This should be done before applying homogeneity criteria in the aggregation of pixels for the construction of meaningful image objects. The habit or “best practice” to start GEOBIA with pixel aggregation into homogeneous objects should come with the awareness that feature attributes for single pixels are at risk of becoming less accessible for further analysis. Single pixel contrast with image convolution on close neighborhoods is a standard technique, also applied in edge detection. This study elaborates on the analysis of close as well as much larger neighborhoods inside the GEOBIA domain. The applied calculations are limited to the first segmentation step for single pixel objects in order to produce additional feature attributes for objects of interest to be generated in further aggregation processes. The equation presented functions at a level that is considered an intermediary product in the sequential processing of imagery. The procedure requires intensive processor and memory capacity. The resulting feature attributes highlight not only contrasting pixels (edges but also contrasting areas of local pixel groups. The suggested approach can be extended and becomes useful in classifying artificial areas at national scales using high resolution satellite mosaics.

  11. Self-sustained oscillations with acoustic feedback in flows over a backward-facing step with a small upstream step

    Science.gov (United States)

    Yokoyama, Hiroshi; Tsukamoto, Yuichi; Kato, Chisachi; Iida, Akiyoshi

    2007-10-01

    Self-sustained oscillations with acoustic feedback take place in a flow over a two-dimensional two-step configuration: a small forward-backward facing step, which we hereafter call a bump, and a relatively large backward-facing step (backstep). These oscillations can radiate intense tonal sound and fatigue nearby components of industrial products. We clarify the mechanism of these oscillations by directly solving the compressible Navier-Stokes equations. The results show that vortices are shed from the leading edge of the bump and acoustic waves are radiated when these vortices pass the trailing edge of the backstep. The radiated compression waves shed new vortices by stretching the vortex formed by the flow separation at the leading edge of the bump, thereby forming a feedback loop. We propose a formula based on a detailed investigation of the phase relationship between the vortices and the acoustic waves for predicting the frequencies of the tonal sound. The frequencies predicted by this formula are in good agreement with those measured in the experiments we performed.

  12. Variation in chick-a-dee calls of tufted titmice, Baeolophus bicolor: note type and individual distinctiveness.

    Science.gov (United States)

    Owens, Jessica L; Freeberg, Todd M

    2007-08-01

    The chick-a-dee call of chickadee species (genus Poecile) has been the focus of much research. A great deal is known about the structural complexity and the meaning of variation in notes making up calls in these species. However, little is known about the likely homologous "chick-a-dee" call of the closely related tufted titmouse, Baeolophus bicolor. Tufted titmice are a prime candidate for comparative analyses of the call, because their vocal and social systems share many characteristics with those of chickadees. To address the paucity of data on the structure of chick-a-dee calls of tufted titmice, we recorded birds in field and aviary settings. Four main note types were identified in the call: Z, A, D(h), and D notes. Several acoustic parameters of each note type were measured, and statistical analyses revealed that the note types are acoustically distinct from one another. Furthermore, note types vary in the extent of individual distinctiveness reflected in their acoustic parameters. This first step towards understanding the chick-a-dee call of tufted titmice indicates that the call is comparable in structure and complexity to the calls of chickadees.

  13. SPAR-H Step-by-Step Guidance

    Energy Technology Data Exchange (ETDEWEB)

    W. J. Galyean; A. M. Whaley; D. L. Kelly; R. L. Boring

    2011-05-01

    This guide provides step-by-step guidance on the use of the SPAR-H method for quantifying Human Failure Events (HFEs). This guide is intended to be used with the worksheets provided in: 'The SPAR-H Human Reliability Analysis Method,' NUREG/CR-6883, dated August 2005. Each step in the process of producing a Human Error Probability (HEP) is discussed. These steps are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rate the Performance Shaping Factors; Step-3, Calculate PSF-Modified HEP; Step-4, Accounting for Dependence, and; Step-5, Minimum Value Cutoff. The discussions on dependence are extensive and include an appendix that describes insights obtained from the psychology literature.

  14. SPAR-H Step-by-Step Guidance

    International Nuclear Information System (INIS)

    Galyean, W.J.; Whaley, A.M.; Kelly, D.L.; Boring, R.L.

    2011-01-01

    This guide provides step-by-step guidance on the use of the SPAR-H method for quantifying Human Failure Events (HFEs). This guide is intended to be used with the worksheets provided in: 'The SPAR-H Human Reliability Analysis Method,' NUREG/CR-6883, dated August 2005. Each step in the process of producing a Human Error Probability (HEP) is discussed. These steps are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rate the Performance Shaping Factors; Step-3, Calculate PSF-Modified HEP; Step-4, Accounting for Dependence, and; Step-5, Minimum Value Cutoff. The discussions on dependence are extensive and include an appendix that describes insights obtained from the psychology literature.

  15. SPAR-H Step-by-Step Guidance

    Energy Technology Data Exchange (ETDEWEB)

    April M. Whaley; Dana L. Kelly; Ronald L. Boring; William J. Galyean

    2012-06-01

    Step-by-step guidance was developed recently at Idaho National Laboratory for the US Nuclear Regulatory Commission on the use of the Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) method for quantifying Human Failure Events (HFEs). This work was done to address SPAR-H user needs, specifically requests for additional guidance on the proper application of various aspects of the methodology. This paper gives an overview of the steps of the SPAR-H analysis process and highlights some of the most important insights gained during the development of the step-by-step directions. This supplemental guidance for analysts is applicable when plant-specific information is available, and goes beyond the general guidance provided in existing SPAR-H documentation. The steps highlighted in this paper are: Step-1, Categorizing the HFE as Diagnosis and/or Action; Step-2, Rating the Performance Shaping Factors; Step-3, Calculating the PSF-Modified HEP; Step-4, Accounting for Dependence; and Step-5, Applying the Minimum Value Cutoff.

  16. 47 CFR 22.921 - 911 call processing procedures; 911-only calling mode.

    Science.gov (United States)

    2010-10-01

    ... programming in the mobile unit that determines the handling of a non-911 call and permit the call to be... § 22.921 911 call processing procedures; 911-only calling mode. Mobile telephones manufactured after February 13, 2000 that are capable of...

  17. Stepwise strategy to improve Cervical Cancer Screening Adherence (SCAN-CC): automated text messages, phone calls and face-to-face interviews: protocol of a population-based randomised controlled trial.

    Science.gov (United States)

    Firmino-Machado, João; Mendes, Romeu; Moreira, Amélia; Lunet, Nuno

    2017-10-05

    Screening is highly effective for cervical cancer prevention and control. Population-based screening programmes are widely implemented in high-income countries, although adherence is often low. In Portugal, just over half of the women adhere to cervical cancer screening, contributing to greater mortality rates than in other European countries. The most effective adherence-raising strategies are based on patient reminders, small/mass media and face-to-face educational programmes, but sequential interventions targeting the general population have seldom been evaluated. The aim of this study is to assess the effectiveness of a stepwise approach, with increasing complexity and cost, to improve adherence to organised cervical cancer screening: step 1a, customised text message invitation; step 1b, customised automated phone call invitation; step 2, secretary phone call; step 3, family health professional phone call and face-to-face appointment. A population-based randomised controlled trial will be implemented in Portuguese urban and rural areas. Women eligible for cervical cancer screening will be randomised (1:1) to intervention and control. In the intervention group, women will be invited for screening through text messages, automated phone calls, manual phone calls and health professional appointments, applied sequentially to participants remaining non-adherent after each step. The control will be the standard of care (written letter). The primary outcome is the proportion of women adherent to screening after step 1 or sequences of steps from 1 to 3. The secondary outcomes are: the proportion of women screened after each step (1a, 2 and 3); the proportion of text messages/phone calls delivered; and the proportion of women previously screened in a private health institution who change to organised screening. The intervention and control groups will be compared based on intention-to-treat and per-protocol analyses. The study was approved by the Ethics Committee of the Northern Health

  18. International Best Practices for Pre-Processing and Co-Processing Municipal Solid Waste and Sewage Sludge in the Cement Industry

    Energy Technology Data Exchange (ETDEWEB)

    Hasanbeigi, Ali [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lu, Hongyou [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Christopher [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Price, Lynn [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-07-01

    The purpose of this report is to describe international best practices for pre-processing and co-processing of MSW and sewage sludge in cement plants, for the benefit of countries that wish to develop co-processing capacity. The report is divided into three main sections. Section 2 describes the fundamentals of co-processing, Section 3 describes exemplary international regulatory and institutional frameworks for co-processing, and Section 4 describes international best practices related to the technological aspects of co-processing.

  19. Predicting prices of agricultural commodities in Thailand using combined approach emphasizing on data pre-processing technique

    Directory of Open Access Journals (Sweden)

    Thoranin Sujjaviriyasup

    2018-02-01

    In this research, a combined approach emphasizing data pre-processing is developed to forecast prices of agricultural commodities in Thailand. Future prices play a significant role in decisions about which crops to cultivate in the next year. The proposed model uses the ability of MODWT (maximal overlap discrete wavelet transform) to decompose the original time series into more stable and explicit subseries, and an SVR (support vector regression) model to formulate the complex forecasting function. The experimental results indicated that the proposed model outperforms traditional forecasting models based on the MAE and MAPE criteria. Furthermore, the results reveal that the proposed model can be a useful forecasting tool for prices of agricultural commodities in Thailand.
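
    A minimal sketch of this decompose-then-forecast pattern is given below. It substitutes PyWavelets' additive multiresolution analysis (pywt.mra, available in recent PyWavelets versions, with the undecimated 'swt' transform) for the paper's MODWT, and uses scikit-learn's SVR; the wavelet, window length and SVR settings are assumptions.

```python
# Sketch of wavelet decomposition + SVR forecasting. pywt.mra with
# transform="swt" stands in for the paper's MODWT; all settings are
# illustrative assumptions.
import numpy as np
import pywt
from sklearn.svm import SVR

def forecast_next(series, wavelet="db4", level=2, window=12):
    """Additively decompose the series, fit one SVR per component on
    lagged windows, and sum the one-step-ahead predictions."""
    components = pywt.mra(series, wavelet, level=level, transform="swt")
    prediction = 0.0
    for comp in components:
        X = np.array([comp[i:i + window] for i in range(len(comp) - window)])
        y = comp[window:]
        model = SVR(kernel="rbf", C=10.0).fit(X, y)
        prediction += model.predict(comp[-window:].reshape(1, -1))[0]
    return prediction

prices = np.sin(np.linspace(0, 8, 128)) + 0.1 * np.random.randn(128)
print(forecast_next(prices))
```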

  20. Measurement of cross-sections for step-by-step excitation of inert gas atoms from metastable states by electron collisions

    International Nuclear Information System (INIS)

    Mityureva, A.A.; Penkin, N.P.; Smirnov, V.V.

    1989-01-01

    Excitation of inert gas atoms by electron collisions from metastable (MS) states to high-lying states (the so-called step-by-step excitation) is investigated in argon. Formation of MS atoms in level m and their further step-by-step excitation up to level k are carried out by an electron beam with energy from 1 up to 40 eV. The time distribution of metastable formation and subsequent step-by-step excitation by electron collisions is used. The method permits measurement of the step-by-step excitation functions and the absolute values of the cross-sections. Absolute cross-sections and step-by-step excitation functions are obtained for some argon lines and levels.

  1. Two-step rapid sulfur capture. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-04-01

    The primary goal of this program was to test the technical and economic feasibility of a novel dry sorbent injection process called the Two-Step Rapid Sulfur Capture process for several advanced coal utilization systems. The Two-Step Rapid Sulfur Capture process consists of limestone activation in a high temperature auxiliary burner for short times, followed by sorbent quenching in a lower temperature sulfur-containing coal combustion gas. The Two-Step Rapid Sulfur Capture process is based on the Non-Equilibrium Sulfur Capture process developed by the Energy Technology Office of Textron Defense Systems (ETO/TDS). Based on the Non-Equilibrium Sulfur Capture studies, the range of conditions for optimum sorbent activation was thought to be: activation temperatures > 2,200 K for activation times in the range of 10-30 ms. Therefore, the aim of the Two-Step process is to create a very active sorbent (under conditions similar to the bomb reactor) and complete the sulfur reaction under thermodynamically favorable conditions. A flow facility was designed and assembled to simulate the temperature, time, stoichiometry, and sulfur gas concentration prevalent in advanced coal utilization systems such as gasifiers, fluidized bed combustors, mixed-metal oxide desulfurization systems, diesel engines, and gas turbines.

  2. INFLUENCE OF RAW IMAGE PREPROCESSING AND OTHER SELECTED PROCESSES ON ACCURACY OF CLOSE-RANGE PHOTOGRAMMETRIC SYSTEMS ACCORDING TO VDI 2634

    Directory of Open Access Journals (Sweden)

    J. Reznicek

    2016-06-01

    This paper examines the influence of raw image preprocessing and other selected processes on the accuracy of close-range photogrammetric measurement. The examined processes and features include: raw image preprocessing, sensor unflatness, distance-dependent lens distortion, extending the input observations (image measurements) by incorporating all RGB colour channels, ellipse centre eccentricity, and target detection. The examination of each effect is carried out experimentally by performing the validation procedure proposed in the German VDI guideline 2634/1. The validation procedure is based on performing standard photogrammetric measurements of highly accurate calibrated measuring lines (multi-scale bars) with known lengths (typical uncertainty = 5 μm at 2 sigma). The comparison of the measured lengths with the known values gives the maximum length measurement error (LME), which characterizes the accuracy of the validated photogrammetric system. For higher reliability the VDI test field was photographed ten times independently with the same configuration and camera settings. The images were acquired with the metric ALPA 12WA camera. The tests are performed on all ten measurements, which also makes it possible to measure the repeatability of the estimated parameters. The influences are examined by comparing the quality characteristics of the reference and tested settings.

  3. Zseq: An Approach for Preprocessing Next-Generation Sequencing Data.

    Science.gov (United States)

    Alkhateeb, Abedalrhman; Rueda, Luis

    2017-08-01

    Next-generation sequencing technology generates a huge number of reads (short sequences), which contain a vast amount of genomic data. The sequencing process, however, comes with artifacts. Preprocessing of sequences is mandatory for further downstream analysis. We present Zseq, a linear method that identifies the most informative genomic sequences and reduces the number of biased sequences, sequence duplications, and ambiguous nucleotides. Zseq finds the complexity of the sequences by counting the number of unique k-mers in each sequence as its corresponding score, and also takes into account other factors, such as ambiguous nucleotides or a high GC-content percentage in k-mers. Based on a z-score threshold, Zseq sweeps through the sequences again and filters those with a z-score less than the user-defined threshold. The Zseq algorithm is able to provide a better mapping rate; it reduces the number of ambiguous bases significantly in comparison with other methods. Evaluation of the filtered reads has been conducted by aligning the reads and assembling the transcripts using the reference genome as well as de novo assembly. The assembled transcripts show a better discriminative ability to separate cancer and normal samples in comparison with another state-of-the-art method. Moreover, de novo assembled transcripts from the reads filtered by Zseq have longer genomic sequences than those from other tested methods. A method for estimating the cutoff threshold using labeling rules is also introduced, with promising results.
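
    A minimal sketch of the scoring idea described above is given below: each read is scored by its number of unique k-mers, the scores are standardized, and reads below a z-score threshold are dropped. The k-mer size, the handling of ambiguous bases, and the threshold are assumptions, not Zseq's exact parameters.

```python
# Sketch of z-score-based read filtering by k-mer complexity.
# k, the ambiguity handling, and the threshold are illustrative
# assumptions, not Zseq's published parameters.
import numpy as np

def complexity_score(read, k=8):
    kmers = {read[i:i + k] for i in range(len(read) - k + 1)}
    # Discount k-mers containing ambiguous nucleotides ('N').
    return sum(1 for km in kmers if "N" not in km)

def zseq_like_filter(reads, k=8, z_threshold=-1.0):
    scores = np.array([complexity_score(r, k) for r in reads], float)
    z = (scores - scores.mean()) / scores.std()
    return [r for r, zi in zip(reads, z) if zi >= z_threshold]

reads = ["ACGTACGTACGTACGTACGT",
         "AAAAAAAAAAAAAAAAAAAA",
         "ACGTNNNNACGTACGGTTCA"]
print(zseq_like_filter(reads, k=4))
```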

  4. An efficient depth map preprocessing method based on structure-aided domain transform smoothing for 3D view generation.

    Directory of Open Access Journals (Sweden)

    Wei Liu

    Depth image-based rendering (DIBR), which is used to render virtual views from a color image and the corresponding depth map, is one of the key techniques in the 2D-to-3D conversion process. Due to the absence of knowledge about the 3D structure of a scene and its corresponding texture, DIBR in the 2D-to-3D conversion process inevitably leads to holes in the resulting 3D image as a result of newly exposed areas. In this paper, we propose a structure-aided depth map preprocessing framework in the transformed domain, inspired by the recently proposed domain transform for its low complexity and high efficiency. Firstly, our framework integrates hybrid constraints, including scene structure, edge consistency and visual saliency information, in the transformed domain to implicitly improve the performance of depth map preprocessing. Then, adaptive smooth localization is incorporated into the proposed framework to further reduce over-smoothing and enhance optimization in the non-hole regions. Unlike other similar methods, the proposed method can simultaneously achieve hole filling, edge correction and local smoothing for typical depth maps in a unified framework. Thanks to these advantages, it can yield visually satisfactory results with less computational complexity for high-quality 2D-to-3D conversion. Numerical experimental results demonstrate the excellent performance of the proposed method.

  5. QSpike Tools: a Generic Framework for Parallel Batch Preprocessing of Extracellular Neuronal Signals Recorded by Substrate Microelectrode Arrays

    Directory of Open Access Journals (Sweden)

    Mufti Mahmud

    2014-03-01

    Micro-Electrode Arrays (MEAs) have emerged as a mature technique to investigate brain (dys)functions in vivo and in in vitro animal models. Often referred to as "smart" Petri dishes, MEAs have demonstrated great potential, particularly for medium-throughput studies in vitro, both in academic and pharmaceutical industrial contexts. Enabling rapid comparison of ionic/pharmacological/genetic manipulations with control conditions, MEAs are often employed to screen compounds by monitoring non-invasively the spontaneous and evoked neuronal electrical activity in longitudinal studies, with relatively inexpensive equipment. However, in order to acquire sufficient statistical significance, recordings last up to tens of minutes and generate large amounts of raw data (e.g., 60 channels/MEA, 16-bit A/D conversion, 20 kHz sampling rate: ~8 GB/MEA·h uncompressed). Thus, when the experimental conditions to be tested are numerous, the availability of fast, standardized, and automated signal preprocessing becomes pivotal for any subsequent analysis and data archiving. To this aim, we developed an in-house cloud-computing system, named QSpike Tools, where CPU-intensive operations required for preprocessing of each recorded channel (e.g., filtering, multi-unit activity detection, spike-sorting, etc.) are decomposed and batch-queued to a multi-core architecture or to a computer cluster. With the commercial availability of new and inexpensive high-density MEAs, we believe that disseminating QSpike Tools might facilitate its wide adoption and customization, and possibly inspire the creation of community-supported cloud-computing facilities for MEAs users.

  6. QSpike tools: a generic framework for parallel batch preprocessing of extracellular neuronal signals recorded by substrate microelectrode arrays.

    Science.gov (United States)

    Mahmud, Mufti; Pulizzi, Rocco; Vasilaki, Eleni; Giugliano, Michele

    2014-01-01

    Micro-Electrode Arrays (MEAs) have emerged as a mature technique to investigate brain (dys)functions in vivo and in in vitro animal models. Often referred to as "smart" Petri dishes, MEAs have demonstrated a great potential particularly for medium-throughput studies in vitro, both in academic and pharmaceutical industrial contexts. Enabling rapid comparison of ionic/pharmacological/genetic manipulations with control conditions, MEAs are employed to screen compounds by monitoring non-invasively the spontaneous and evoked neuronal electrical activity in longitudinal studies, with relatively inexpensive equipment. However, in order to acquire sufficient statistical significance, recordings last up to tens of minutes and generate large amounts of raw data (e.g., 60 channels/MEA, 16-bit A/D conversion, 20 kHz sampling rate: approximately 8 GB/MEA·h uncompressed). Thus, when the experimental conditions to be tested are numerous, the availability of fast, standardized, and automated signal preprocessing becomes pivotal for any subsequent analysis and data archiving. To this aim, we developed an in-house cloud-computing system, named QSpike Tools, where CPU-intensive operations, required for preprocessing of each recorded channel (e.g., filtering, multi-unit activity detection, spike-sorting, etc.), are decomposed and batch-queued to a multi-core architecture or to a computer cluster. With the commercial availability of new and inexpensive high-density MEAs, we believe that disseminating QSpike Tools might facilitate its wide adoption and customization, and inspire the creation of community-supported cloud-computing facilities for MEAs users.
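
    The channel-wise decomposition described in both records maps naturally onto a worker pool. A minimal sketch is given below, assuming a simple band-pass filter and threshold-based spike detection as stand-ins for QSpike Tools' actual preprocessing chain; the filter band and threshold rule are assumed example values.

```python
# Sketch of per-channel batch preprocessing on a worker pool, with a
# band-pass filter and threshold spike detection standing in for
# QSpike Tools' actual processing chain.
import numpy as np
from multiprocessing import Pool
from scipy.signal import butter, filtfilt

FS = 20_000  # sampling rate [Hz]

def preprocess_channel(trace):
    b, a = butter(3, [300 / (FS / 2), 3000 / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, trace)
    threshold = -4.5 * np.median(np.abs(filtered)) / 0.6745  # robust sigma
    spike_idx = np.where(filtered < threshold)[0]
    return spike_idx

if __name__ == "__main__":
    channels = [np.random.randn(FS) for _ in range(60)]  # 60 channels, 1 s
    with Pool() as pool:
        spikes = pool.map(preprocess_channel, channels)
    print([len(s) for s in spikes[:5]])
```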

  7. Combined data preprocessing and multivariate statistical analysis characterizes fed-batch culture of mouse hybridoma cells for rational medium design.

    Science.gov (United States)

    Selvarasu, Suresh; Kim, Do Yun; Karimi, Iftekhar A; Lee, Dong-Yup

    2010-10-01

    We present an integrated framework for characterizing fed-batch cultures of mouse hybridoma cells producing monoclonal antibody (mAb). This framework systematically combines data preprocessing, elemental balancing and statistical analysis techniques. Initially, specific rates of cell growth, glucose/amino acid consumption and mAb/metabolite production were calculated via curve fitting using logistic equations, with subsequent elemental balancing of the preprocessed data indicating the presence of experimental measurement errors. Multivariate statistical analysis was then employed to understand the physiological characteristics of the cellular system. The results from principal component analysis (PCA) revealed three major clusters of amino acids with similar trends in their consumption profiles: (i) arginine, threonine and serine; (ii) glycine, tyrosine, phenylalanine, methionine, histidine and asparagine; and (iii) lysine, valine and isoleucine. Further analysis using partial least squares (PLS) regression identified key amino acids which were positively or negatively correlated with cell growth, mAb production and the generation of lactate and ammonia. Based on these results, the optimal concentrations of key amino acids in the feed medium can be inferred, potentially leading to an increase in cell viability and productivity, as well as a decrease in toxic waste production. The study demonstrated how the current methodological framework using multivariate statistical analysis techniques can serve as a potential tool for deriving rational medium design strategies. Copyright © 2010 Elsevier B.V. All rights reserved.
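
    A minimal sketch of the first two stages is given below, assuming a four-parameter logistic for the curve fits and illustrative random data; the fitted profiles are then passed to PCA to cluster them by shape, in the spirit of the analysis described above.

```python
# Sketch: logistic curve fitting of culture profiles, followed by PCA
# on the fitted (preprocessed) profiles. All data are illustrative.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.decomposition import PCA

def logistic(t, a, b, c, d):
    return a + (b - a) / (1.0 + np.exp(-c * (t - d)))

t = np.linspace(0, 10, 24)                        # culture time [days]
profiles = np.array([logistic(t, 5, 0.5, 1.2, 4)  # e.g. amino acid conc.
                     + 0.05 * np.random.randn(t.size) for _ in range(12)])

fitted = []
for y in profiles:
    p, _ = curve_fit(logistic, t, y,
                     p0=[y[0], y[-1], 1.0, t.mean()], maxfev=5000)
    fitted.append(logistic(t, *p))

# Cluster profiles by shape in the first two principal components.
scores = PCA(n_components=2).fit_transform(np.array(fitted))
print(scores[:3])
```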

  8. Gaussian anamorphosis in the analysis step of the EnKF: a joint state-variable/observation approach

    Directory of Open Access Journals (Sweden)

    Javier Amezcua

    2014-09-01

    The analysis step of the (ensemble) Kalman filter is optimal when (1) the distribution of the background is Gaussian, (2) state variables and observations are related via a linear operator, and (3) the observational error is of additive nature and has a Gaussian distribution. When these conditions are largely violated, a pre-processing step known as Gaussian anamorphosis (GA) can be applied. The objective of this procedure is to obtain state variables and observations that better fulfil the Gaussianity conditions in some sense. In this work we analyse GA from a joint perspective, paying attention to the effects of transformations in the joint state-variable/observation space. First, we study transformations for state variables and observations that are independent from each other. Then, we introduce a targeted joint transformation with the objective of obtaining joint Gaussianity in the transformed space. We focus primarily on the univariate case, and briefly comment on the multivariate one. A key point of this paper is that, when (1)-(3) are violated, using the analysis step of the EnKF will not recover the exact posterior density in spite of any transformations one may perform. These transformations, however, provide approximations of different quality to the Bayesian solution of the problem. Using an example in which the Bayesian posterior can be analytically computed, we assess the quality of the analysis distributions generated after applying the EnKF analysis step in conjunction with different GA options. The value of the targeted joint transformation is particularly clear for the case when the prior is Gaussian, the marginal density for the observations is close to Gaussian, and the likelihood is a Gaussian mixture.
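
    A minimal sketch of a univariate, empirical Gaussian anamorphosis is given below: values are mapped through their empirical CDF and then through the standard normal quantile function. This is one common way to implement GA, not necessarily the exact transformation studied in the paper.

```python
# Sketch of empirical univariate Gaussian anamorphosis: map each value
# through the empirical CDF, then through the standard normal quantile.
import numpy as np
from scipy.stats import norm, rankdata

def gaussian_anamorphosis(x):
    n = len(x)
    u = rankdata(x) / (n + 1.0)   # empirical CDF values in (0, 1)
    return norm.ppf(u)            # transformed, approximately N(0, 1)

ensemble = np.random.lognormal(mean=0.0, sigma=1.0, size=1000)  # skewed prior
transformed = gaussian_anamorphosis(ensemble)
print(transformed.mean(), transformed.std())  # near 0 and 1
```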

  9. A comparative analysis of pre-processing techniques in colour retinal images

    International Nuclear Information System (INIS)

    Salvatelli, A; Bizai, G; Barbosa, G; Drozdowicz, B; Delrieux, C

    2007-01-01

    Diabetic retinopathy (DR) is a chronic disease of the ocular retina, which most of the time is only discovered when the disease is at an advanced stage and most of the damage is irreversible. For that reason, early diagnosis is paramount for avoiding the most severe consequences of DR, of which complete blindness is not uncommon. Unsupervised or supervised image processing of retinal images emerges as a feasible tool for this diagnosis. The preprocessing stages are the key to any further assessment, since these images exhibit several defects, including non-uniform illumination, sampling noise, and uneven contrast due to pigmentation loss during sampling, among others. Any feasible diagnosis system should work with images where these defects have been compensated. In this work we analyze and test several correction techniques. Non-uniform illumination is compensated using morphology and homomorphic filtering; uneven contrast is compensated using morphology and local enhancement. We tested our processing stages using Fuzzy C-Means and the local Hurst (self-correlation) coefficient for unsupervised segmentation of the abnormal blood vessels. The results over a standard set of DR images are more than promising.

  10. A comparative analysis of pre-processing techniques in colour retinal images

    Energy Technology Data Exchange (ETDEWEB)

    Salvatelli, A [Artificial Intelligence Group, Facultad de Ingenieria, Universidad Nacional de Entre Rios (Argentina); Bizai, G [Artificial Intelligence Group, Facultad de Ingenieria, Universidad Nacional de Entre Rios (Argentina); Barbosa, G [Artificial Intelligence Group, Facultad de Ingenieria, Universidad Nacional de Entre Rios (Argentina); Drozdowicz, B [Artificial Intelligence Group, Facultad de Ingenieria, Universidad Nacional de Entre Rios (Argentina); Delrieux, C [Electric and Computing Engineering Department, Universidad Nacional del Sur, Alem 1253, BahIa Blanca, (Partially funded by SECyT-UNS) (Argentina)], E-mail: claudio@acm.org

    2007-11-15

    Diabetic retinopathy (DR) is a chronic disease of the ocular retina, which most of the time is only discovered when the disease is at an advanced stage and most of the damage is irreversible. For that reason, early diagnosis is paramount for avoiding the most severe consequences of DR, of which complete blindness is not uncommon. Unsupervised or supervised image processing of retinal images emerges as a feasible tool for this diagnosis. The preprocessing stages are the key to any further assessment, since these images exhibit several defects, including non-uniform illumination, sampling noise, and uneven contrast due to pigmentation loss during sampling, among others. Any feasible diagnosis system should work with images where these defects have been compensated. In this work we analyze and test several correction techniques. Non-uniform illumination is compensated using morphology and homomorphic filtering; uneven contrast is compensated using morphology and local enhancement. We tested our processing stages using Fuzzy C-Means and the local Hurst (self-correlation) coefficient for unsupervised segmentation of the abnormal blood vessels. The results over a standard set of DR images are more than promising.
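
    A minimal sketch of homomorphic filtering, one of the illumination corrections tested in the two records above, is given below; the Gaussian cutoff and gain values are assumed example parameters.

```python
# Sketch of homomorphic filtering for non-uniform illumination:
# log -> frequency-domain high-pass -> exp. Cutoff and gains are
# illustrative assumptions.
import numpy as np

def homomorphic_filter(img, sigma=30.0, gamma_low=0.5, gamma_high=1.5):
    """img: 2D float array in (0, 1]; returns illumination-compensated image."""
    rows, cols = img.shape
    log_img = np.log1p(img)
    # Gaussian high-pass transfer function in FFT ordering.
    u = np.fft.fftfreq(rows)[:, None] * rows
    v = np.fft.fftfreq(cols)[None, :] * cols
    d2 = u ** 2 + v ** 2
    H = gamma_low + (gamma_high - gamma_low) * (1 - np.exp(-d2 / (2 * sigma ** 2)))
    filtered = np.real(np.fft.ifft2(np.fft.fft2(log_img) * H))
    out = np.expm1(filtered)
    return (out - out.min()) / (out.max() - out.min())

img = np.random.rand(128, 128) * np.linspace(0.2, 1.0, 128)  # uneven lighting
print(homomorphic_filter(img).shape)
```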

  11. Image Processing of Welding Procedure Specification and Pre-process program development for Finite Element Modelling

    International Nuclear Information System (INIS)

    Kim, K. S.; Lee, H. J.

    2009-11-01

    The PRE-WELD program, which automatically generates the input file for finite element analysis of 2D butt welding at dissimilar metal weld parts, was developed. This program is a pre-processing program for the FEM code used to analyze the residual stress at welding parts. Even users without detailed knowledge of FEM modelling can easily create the ABAQUS input by entering the shape data of the welding part and welding parameters such as weld current and voltage. By using the PRE-WELD program, we can greatly save time and effort in preparing the ABAQUS input for residual stress analysis at welding parts, and produce an exact input without human error.

  12. Structural properties and complexity of a new network class: Collatz step graphs.

    Directory of Open Access Journals (Sweden)

    Frank Emmert-Streib

    In this paper, we introduce a biologically inspired model to generate complex networks. In contrast to many other construction procedures for growing networks introduced so far, our method generates networks from one-dimensional symbol sequences that are related to the so-called Collatz problem from number theory. The major purpose of the present paper is, first, to derive a symbol sequence from the Collatz problem, which we call the step sequence, and investigate its structural properties. Second, we introduce a construction procedure for growing networks that is based on these step sequences. Third, we investigate the structural properties of this new network class, including their finite scaling and the asymptotic behavior of their complexity, average shortest path lengths and clustering coefficients. Interestingly, in contrast to many other network models, including the small-world network from Watts & Strogatz, we find that CS graphs become 'smaller' with increasing size.
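
    The record does not spell out how the step sequence is defined. A minimal sketch under the assumption that it records the type of each Collatz step (halving vs. 3n+1) is given below; the paper's exact definition may differ.

```python
# Sketch: derive a symbol sequence from the Collatz trajectory of n.
# The assumption that the "step sequence" is the pattern of halving
# vs. 3n+1 steps is ours; the paper's definition may differ.
def collatz_step_sequence(n):
    steps = []
    while n != 1:
        if n % 2 == 0:
            steps.append("0")   # halving step
            n //= 2
        else:
            steps.append("1")   # 3n + 1 step
            n = 3 * n + 1
    return "".join(steps)

seq = collatz_step_sequence(27)
print(seq[:40], len(seq))
```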

  13. Constant time distance queries in planar unweighted graphs with subquadratic preprocessing time

    DEFF Research Database (Denmark)

    Wulff-Nilsen, C.

    2013-01-01

    Let G be an n-vertex planar, undirected, and unweighted graph. It was stated as open problems whether the Wiener index, defined as the sum of all-pairs shortest path distances, and the diameter of G can be computed in o(n^2) time. We show that both problems can be solved in O(n^2 log log n / log n) time with O(n) space. The techniques that we apply allow us to build, within the same time bound, an oracle for exact distance queries in G. More generally, for any parameter S in [(log n / log log n)^2, n^(2/5)], distance queries can be answered in O(sqrt(S) log S / log n) time per query with O(n^2 / sqrt(S)) preprocessing time and space requirement. With respect to running time, this is better than previous algorithms when log S = o(log n). All algorithms have linear space requirement. Our results generalize to a larger class of graphs, including those with a fixed excluded minor.

  14. An Advanced Pre-Processing Pipeline to Improve Automated Photogrammetric Reconstructions of Architectural Scenes

    Directory of Open Access Journals (Sweden)

    Marco Gaiani

    2016-02-01

    Automated image-based 3D reconstruction methods are increasingly flooding our 3D modeling applications. Fully automated solutions give the impression that quite impressive visual 3D models can be derived from a sample of randomly acquired images. Although the level of automation is reaching very high standards, image quality is a fundamental prerequisite for producing successful and photo-realistic 3D products, in particular when dealing with large datasets of images. This article presents an efficient pipeline based on color enhancement, image denoising, color-to-gray conversion and image content enrichment. The pipeline stems from an analysis of various state-of-the-art algorithms and aims to adjust the most promising methods, giving solutions to typical failure causes. The assessment shows how effective image pre-processing that considers the entire image dataset can improve the automated orientation procedure and dense 3D point cloud reconstruction, even in the case of poor-texture scenarios.
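
    A minimal sketch of such a pre-processing chain is given below, using OpenCV's denoising and perceptual decolorization routines as stand-ins for the specific algorithms selected in the article; the white-balance step and all parameter values are assumptions.

```python
# Sketch of an image pre-processing chain for photogrammetry:
# color balance -> denoising -> color-to-gray conversion. The chosen
# OpenCV routines and parameters stand in for the paper's selections.
import cv2
import numpy as np

def preprocess_for_sfm(img_bgr):
    # Simple gray-world white balance via per-channel scaling.
    means = img_bgr.reshape(-1, 3).mean(axis=0)
    img = np.clip(img_bgr * (means.mean() / means), 0, 255).astype(np.uint8)
    # Edge-preserving denoising.
    img = cv2.fastNlMeansDenoisingColored(img, None, h=5, hColor=5,
                                          templateWindowSize=7,
                                          searchWindowSize=21)
    # Perceptually motivated color-to-gray conversion.
    gray, _ = cv2.decolor(img)
    return gray

demo = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)  # stand-in image
print(preprocess_for_sfm(demo).shape)
```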

  15. Study on Construction of a Medical X-Ray Direct Digital Radiography System and Hybrid Preprocessing Methods

    Directory of Open Access Journals (Sweden)

    Yong Ren

    2014-01-01

    We construct a medical X-ray direct digital radiography (DDR) system based on a CCD (charge-coupled device) camera. For the original images captured from X-ray exposure, the computer first executes image flat-field correction and image gamma correction, and then carries out image contrast enhancement. A hybrid image contrast enhancement algorithm, based on the sharp frequency localization contourlet transform (SFL-CT) and contrast limited adaptive histogram equalization (CLAHE), is proposed and verified on clinical DDR images. Experimental results show that, for medical X-ray DDR images, the proposed comprehensive preprocessing algorithm can not only greatly enhance the contrast and detail information, but also improve the resolution capability of the DDR system.
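
    A minimal sketch of the first two correction steps is given below; the dark/flat calibration frames and the gamma value are assumed example inputs.

```python
# Sketch of flat-field and gamma correction for raw detector images.
# Calibration frames and gamma value are illustrative assumptions.
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Standard flat-field correction: (raw - dark) / (flat - dark),
    rescaled by the mean gain so intensities stay comparable."""
    gain = flat.astype(float) - dark
    corrected = (raw.astype(float) - dark) / np.maximum(gain, 1e-6)
    return corrected * gain.mean()

def gamma_correct(img, gamma=0.6):
    norm = img / img.max()
    return norm ** gamma

raw = np.random.poisson(1000, (256, 256)).astype(float)
dark = np.full((256, 256), 100.0)
flat = np.full((256, 256), 1200.0) + 50 * np.random.rand(256, 256)
out = gamma_correct(flat_field_correct(raw, dark, flat))
print(out.min(), out.max())
```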

  16. Step out - Step in Sequencing Games

    NARCIS (Netherlands)

    Musegaas, M.; Borm, P.E.M.; Quant, M.

    2014-01-01

    In this paper a new class of relaxed sequencing games is introduced: the class of Step out - Step in sequencing games. In this relaxation any player within a coalition is allowed to step out from his position in the processing order and to step in at any position later in the processing order.

  17. Step out-step in sequencing games

    NARCIS (Netherlands)

    Musegaas, Marieke; Borm, Peter; Quant, Marieke

    2015-01-01

    In this paper a new class of relaxed sequencing games is introduced: the class of Step out–Step in sequencing games. In this relaxation any player within a coalition is allowed to step out from his position in the processing order and to step in at any position later in the processing order. First,

  18. Perceiving a calling, living a calling, and job satisfaction: testing a moderated, multiple mediator model.

    Science.gov (United States)

    Duffy, Ryan D; Bott, Elizabeth M; Allan, Blake A; Torrey, Carrie L; Dik, Bryan J

    2012-01-01

    The current study examined the relation between perceiving a calling, living a calling, and job satisfaction among a diverse group of employed adults who completed an online survey (N = 201). Perceiving a calling and living a calling were positively correlated with career commitment, work meaning, and job satisfaction. Living a calling moderated the relations of perceiving a calling with career commitment and work meaning, such that these relations were more robust for those with a stronger sense they were living their calling. Additionally, a moderated, multiple mediator model was run to examine the mediating role of career commitment and work meaning in the relation of perceiving a calling and job satisfaction, while accounting for the moderating role of living a calling. Results indicated that work meaning and career commitment fully mediated the relation between perceiving a calling and job satisfaction. However, the indirect effects of work meaning and career commitment were only significant for individuals with high levels of living a calling, indicating the importance of living a calling in the link between perceiving a calling and job satisfaction. Implications for research and practice are discussed. (c) 2012 APA, all rights reserved.

  19. Call Center Capacity Planning

    DEFF Research Database (Denmark)

    Nielsen, Thomas Bang

    The main topics of the thesis are theoretical and applied queueing theory within a call center setting. Call centers have in recent years become the main means of communication between customers and companies, and between citizens and public institutions. The extensively computerized infrastructure in modern call centers allows for a high level of customization, but also induces complicated operational processes. The size of the industry together with the complex and labor-intensive nature of large call centers motivates the research carried out to understand the underlying processes. The customizable ... in order to relate the results to the service levels used in call centers. Furthermore, the generic nature of the approximation is demonstrated by applying it to a system incorporating a dynamic priority scheme. In the last paper, Optimization of overflow policies in call centers, overflows between agent ...

  20. Calle Blanco

    Directory of Open Access Journals (Sweden)

    Gonzalo Cerda Brintrup

    1988-06-01

    An important artery connecting the port district with the plaza. The most imposing buildings followed one another in a continuous line, climbing both sides of the steep street. Before the great fire of 1936, large wooden mansions stood out on Calle Irarrázabal, and at its corner with Calle Blanco the most beautiful building belonged to Don Alberto Oyarzún; the neighbouring house toward Blanco was that of Don Mateo Miserda, bounded above by the house of Don Augusto Van Der Steldt, which in turn was followed by the house of Don David Barrientos, provided with four cupolas at the corners and a wide gallery along the front. All these wooden buildings were destroyed in the great fire of 1936.

  1. Two-Step Amyloid Aggregation: Sequential Lag Phase Intermediates

    Science.gov (United States)

    Castello, Fabio; Paredes, Jose M.; Ruedas-Rama, Maria J.; Martin, Miguel; Roldan, Mar; Casares, Salvador; Orte, Angel

    2017-01-01

    The self-assembly of proteins into fibrillar structures called amyloid fibrils underlies the onset and symptoms of neurodegenerative diseases, such as Alzheimer’s and Parkinson’s. However, the molecular basis and mechanism of amyloid aggregation are not completely understood. For many amyloidogenic proteins, certain oligomeric intermediates that form in the early aggregation phase appear to be the principal cause of cellular toxicity. Recent computational studies have suggested the importance of nonspecific interactions for the initiation of the oligomerization process prior to the structural conversion steps and template seeding, particularly at low protein concentrations. Here, using advanced single-molecule fluorescence spectroscopy and imaging of a model SH3 domain, we obtained direct evidence that nonspecific aggregates are required in a two-step nucleation mechanism of amyloid aggregation. We identified three different oligomeric types according to their sizes and compactness and performed a full mechanistic study that revealed a mandatory rate-limiting conformational conversion step. We also identified the most cytotoxic species, which may be possible targets for inhibiting and preventing amyloid aggregation.

  2. A call-by-value lambda-calculus with lists and control

    Directory of Open Access Journals (Sweden)

    Robbert Krebbers

    2012-10-01

    Calculi with control operators have been studied to reason about control in programming languages and to interpret the computational content of classical proofs. To make these calculi into a real programming language, one should also include data types. As a step in that direction, this paper defines a simply typed call-by-value lambda calculus with the control operators catch and throw, a data type of lists, and an operator for primitive recursion (à la Gödel's T). We prove that our system satisfies subject reduction, progress, confluence for untyped terms, and strong normalization for well-typed terms.

  3. Speech-like rhythm in a voiced and voiceless orangutan call.

    Directory of Open Access Journals (Sweden)

    Adriano R Lameira

    The evolutionary origins of speech remain obscure. Recently, it was proposed that speech derived from monkey facial signals which exhibit a speech-like rhythm of ∼5 open-close lip cycles per second. In monkeys, these signals may also be vocalized, offering a plausible evolutionary stepping stone towards speech. Three essential predictions remain, however, to be tested to assess this hypothesis' validity: (i) great apes, our closest relatives, should likewise produce 5 Hz rhythm signals; (ii) speech-like rhythm should involve calls articulatorily similar to consonants and vowels, given that speech rhythm is the direct product of stringing together these two basic elements; and (iii) speech-like rhythm should be experience-based. Via cinematic analyses we demonstrate that an ex-entertainment orangutan produces two calls at a speech-like rhythm, coined "clicks" and "faux-speech." Like voiceless consonants, clicks required no vocal fold action, but did involve independent manoeuvring over lips and tongue. In parallel to vowels, faux-speech showed harmonic and formant modulations, implying vocal fold and supralaryngeal action. This rhythm was several times faster than orangutan chewing rates, as observed in monkeys and humans. Critically, this rhythm was seven-fold faster, and contextually distinct, than any other known rhythmic calls described to date in the largest database of the orangutan repertoire ever assembled. The first two predictions advanced by this study are validated and, based on parsimony and exclusion of potential alternative explanations, initial support is given to the third prediction. Irrespective of the putative origins of these calls and the underlying mechanisms, our findings demonstrate irrevocably that great apes are not respiratorily, articulatorily, or neurologically constrained for the production of consonant- and vowel-like calls at speech rhythm. Orangutan clicks and faux-speech confirm the importance of rhythmic speech

  4. A LITERATURE SURVEY ON VARIOUS ILLUMINATION NORMALIZATION TECHNIQUES FOR FACE RECOGNITION WITH FUZZY K NEAREST NEIGHBOUR CLASSIFIER

    Directory of Open Access Journals (Sweden)

    A. Thamizharasi

    2015-05-01

    Face recognition is popular in video surveillance, social networks and criminal identification nowadays. The performance of face recognition is affected by variations in illumination, pose, aging and partial occlusion of the face, e.g. by hats, scarves and glasses. Illumination variation is still a challenging problem in face recognition. The aim is to compare various illumination normalization techniques, including: log transformation, power-law transformation, histogram equalization, adaptive histogram equalization, contrast stretching, Retinex, multi-scale Retinex, difference of Gaussian, DCT, DCT normalization, DWT, gradient face, self quotient, multi-scale self quotient and homomorphic filter. The proposed work consists of three steps. The first step is to preprocess the face image with the above illumination normalization techniques; the second step is to create the training and test databases from the preprocessed face images; and the third step is to recognize the face images using a fuzzy K nearest neighbour classifier. The face recognition accuracy of all preprocessing techniques is compared using the AR face database of colour images.
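
    As an illustration, a minimal sketch of three of the listed normalizations (log transform, histogram equalization, and difference of Gaussian) is given below; the Gaussian widths are assumed values and the input is a stand-in for a face image.

```python
# Sketch of three illumination normalizations from the survey's list:
# log transform, histogram equalization, difference of Gaussian.
import numpy as np
import cv2

def log_transform(gray):
    out = np.log1p(gray.astype(np.float32))
    return cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def difference_of_gaussian(gray, sigma1=1.0, sigma2=2.0):
    g1 = cv2.GaussianBlur(gray.astype(np.float32), (0, 0), sigma1)
    g2 = cv2.GaussianBlur(gray.astype(np.float32), (0, 0), sigma2)
    return cv2.normalize(g1 - g2, None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)

gray = (np.random.rand(112, 92) * 255).astype(np.uint8)  # stand-in face
normalized = [log_transform(gray), cv2.equalizeHist(gray),
              difference_of_gaussian(gray)]
print([n.shape for n in normalized])
```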

  5. Effects of walking speed on the step-by-step control of step width.

    Science.gov (United States)

    Stimpson, Katy H; Heitkamp, Lauren N; Horne, Joscelyn S; Dean, Jesse C

    2018-02-08

    Young, healthy adults walking at typical preferred speeds use step-by-step adjustments of step width to appropriately redirect their center of mass motion and ensure mediolateral stability. However, it is presently unclear whether this control strategy is retained when walking at the slower speeds preferred by many clinical populations. We investigated whether the typical stabilization strategy is influenced by walking speed. Twelve young, neurologically intact participants walked on a treadmill at a range of prescribed speeds (0.2-1.2 m/s). The mediolateral stabilization strategy was quantified as the proportion of step width variance predicted by the mechanical state of the pelvis throughout a step (calculated as the R^2 magnitude from a multiple linear regression). Our ability to accurately predict the upcoming step width increased over the course of a step. The strength of the relationship between step width and pelvis mechanics at the start of a step was reduced at slower speeds. However, these speed-dependent differences largely disappeared by the end of a step, other than at the slowest walking speed (0.2 m/s). These results suggest that mechanics-dependent adjustments in step width are a consistent component of healthy gait across speeds and contexts. However, slower walking speeds may ease this control by allowing mediolateral repositioning of the swing leg to occur later in a step, thus encouraging slower walking among clinical populations with limited sensorimotor control. Published by Elsevier Ltd.

  6. Perceiving a Calling, Living a Calling, and Job Satisfaction: Testing a Moderated, Multiple Mediator Model

    Science.gov (United States)

    Duffy, Ryan D.; Bott, Elizabeth M.; Allan, Blake A.; Torrey, Carrie L.; Dik, Bryan J.

    2012-01-01

    The current study examined the relation between perceiving a calling, living a calling, and job satisfaction among a diverse group of employed adults who completed an online survey (N = 201). Perceiving a calling and living a calling were positively correlated with career commitment, work meaning, and job satisfaction. Living a calling moderated…

  7. Behavioral Preferences for Individual Securities : The Case for Call Warrants and Call Options

    NARCIS (Netherlands)

    Ter Horst, J.R.; Veld, C.H.

    2002-01-01

    Since 1998, large investment banks have flooded the European capital markets with issues of call warrants. This has led to a unique situation in the Netherlands, where now call warrants, traded on the stock exchange, and long-term call options, traded on the options exchange, exist. Both entitle their

  8. Free Modal Algebras Revisited: The Step-by-Step Method

    NARCIS (Netherlands)

    Bezhanishvili, N.; Ghilardi, Silvio; Jibladze, Mamuka

    2012-01-01

    We review the step-by-step method of constructing finitely generated free modal algebras. First we discuss the global step-by-step method, which works well for rank one modal logics. Next we refine the global step-by-step method to obtain the local step-by-step method, which is applicable beyond

  9. SYSTEMATIZATION OF THE BASIC STEPS OF THE STEP-AEROBICS

    Directory of Open Access Journals (Sweden)

    Darinka Korovljev

    2011-03-01

    Following the development of the powerful sports industry, many new opportunities have appeared for creating new exercise programmes that use particular equipment. One such programme is certainly step-aerobics. Step-aerobics can be defined as a type of aerobics consisting of the basic aerobic steps (basic steps) applied in exercising on a stepper (step bench) whose height can be adjusted. Step-aerobics itself can be divided into several groups, depending on the type of music, the working methods and the prior knowledge of the attendants. In this work, a systematization of the basic steps in step-aerobics was made on the basis of the following criteria: origin of the step, number of leg motions in stepping, and the body support at the end of the step. Systematization of the basic steps of step-aerobics is quite significant for providing a concrete review of the existing basic steps, thus making the creation of a step-aerobics lesson easier.

  10. Research on the Translation and Implementation of Stepping On in Three Wisconsin Communities

    Directory of Open Access Journals (Sweden)

    Amy E. Schlotthauer

    2017-06-01

    Objective: Falls are a leading cause of injury death. Stepping On is a fall prevention program developed in Australia and shown to reduce falls by up to 31%. The original program was implemented in a community setting, by an occupational therapist, and included a home visit. The purpose of this study was to examine aspects of the translation and implementation of Stepping On in three community settings in Wisconsin. Methods: The investigative team identified four research questions to understand the spread and use of the program, as well as to determine whether critical components of the program could be modified to maximize use in community practice. The team evaluated program uptake, participant reach, program feasibility, program acceptability, and program fidelity by varying the implementation setting and components of Stepping On. Implementation setting included type of host organization, rural versus urban location, health versus non-health background of leaders, and whether a phone call could replace the home visit. A mixed methodology of surveys and interviews completed by site managers, leaders, guest experts, participants, and content expert observations for program fidelity during classes was used. Results: The study identified implementation challenges that varied by setting, including securing a physical therapist for the class and needing more time to recruit participants. There were no implementation differences between rural and urban locations. Potential differences emerged in program fidelity between health and non-health professional leaders, although fidelity was high overall with both. Home visits identified more home hazards than did phone calls and were perceived as of greater benefit to participants, but at 1 year no differences were apparent in uptake of strategies discussed in home versus phone visits. Conclusion: Adaptations to the program to increase implementation include using a leader who is a non-health professional, and

  11. Berry ripening, pre-processing and thermal treatments affect the phenolic composition and antioxidant capacity of grape (Vitis vinifera L.) juice.

    Science.gov (United States)

    Genova, Giuseppe; Tosetti, Roberta; Tonutti, Pietro

    2016-01-30

    Grape juice is an important dietary source of health-promoting antioxidant molecules. Different factors may affect juice composition and nutraceutical properties. The effects of some of these factors (harvest time, pre-processing ethylene treatment of grapes and juice thermal pasteurization) were evaluated here, considering in particular the phenolic composition and antioxidant capacity. Grapes (Vitis vinifera L., red-skinned variety Sangiovese) were collected twice, at the technological harvest (TH) and 12 days before TH (early harvest, EH), and treated with gaseous ethylene (1000 ppm) or air for 48 h. Fresh and pasteurized (78 °C for 30 min) juices were produced using a water bath. Three-way analysis of variance showed that the harvest date had the strongest impact on total polyphenols, hydroxycinnamates, flavonols, and especially on total flavonoids. Pre-processing ethylene treatment significantly increased the proanthocyanidin, anthocyanin and flavan-3-ol content in the juices. Pasteurization induced a significant increase in anthocyanin concentration. Antioxidant capacity was enhanced by ethylene treatment and pasteurization in juices from both TH and EH grapes. These results suggest that appropriate management of the grape harvesting date, postharvest treatment and processing may lead to an improvement in the nutraceutical quality of juices. Further research is needed to study the effect of the investigated factors on juice organoleptic properties. © 2015 Society of Chemical Industry.

  12. On the structure of Bayesian network for Indonesian text document paraphrase identification

    Science.gov (United States)

    Prayogo, Ario Harry; Syahrul Mubarok, Mohamad; Adiwijaya

    2018-03-01

    Paraphrase identification is an important process within natural language processing. The idea is to automatically recognize phrases that have different forms but contain the same meaning. For example, if we input the query "causing fire hazard", then the computer has to recognize that this query has the same meaning as "the cause of fire hazard". Paraphrasing is an activity that reveals the meaning of an expression, writing, or speech using different words or forms, especially to achieve greater clarity. In this research we focus on classifying whether two Indonesian sentences are paraphrases of each other or not. There are four steps in this research: the first is preprocessing, the second is feature extraction, the third is classifier building, and the last is performance evaluation. Preprocessing consists of tokenization, non-alphanumeric removal, and stemming. After preprocessing we conduct feature extraction in order to build new features from the given dataset. There are two kinds of features in this research: syntactic features and semantic features. The syntactic features consist of a normalized Levenshtein distance feature, a term-frequency-based cosine similarity feature, and an LCS (Longest Common Subsequence) feature. The semantic features consist of a Wu and Palmer feature and a Shortest Path feature. We use Bayesian networks as the method for training the classifier. The parameter estimation that we use is called MAP (Maximum A Posteriori). For structure learning of the Bayesian network DAG (Directed Acyclic Graph), we use the BDeu (Bayesian Dirichlet equivalent uniform) scoring function, and for finding the DAG with the best BDeu score we use the K2 algorithm. In the evaluation step we perform cross-validation. The average results from testing the classifier are as follows: precision 75.2%, recall 76.5%, F1-measure 75.8% and accuracy 75.6%.
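
    A minimal sketch of the three syntactic features is given below (normalized Levenshtein distance, term-frequency cosine similarity, and an LCS ratio); the normalizations chosen are common conventions and may differ from the paper's exact definitions.

```python
# Sketch of the three syntactic paraphrase features. Normalization
# conventions are assumptions, not the paper's exact definitions.
from collections import Counter
import math

def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def norm_levenshtein(a, b):
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

def tf_cosine(a, b):
    ta, tb = Counter(a.split()), Counter(b.split())
    dot = sum(ta[w] * tb[w] for w in ta)
    na = math.sqrt(sum(v * v for v in ta.values()))
    nb = math.sqrt(sum(v * v for v in tb.values()))
    return dot / (na * nb)

def lcs_ratio(a, b):
    xs, ys = a.split(), b.split()
    dp = [[0] * (len(ys) + 1) for _ in range(len(xs) + 1)]
    for i, x in enumerate(xs, 1):
        for j, y in enumerate(ys, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[-1][-1] / max(len(xs), len(ys))

s1, s2 = "penyebab bahaya kebakaran", "bahaya kebakaran disebabkan"
print(norm_levenshtein(s1, s2), tf_cosine(s1, s2), lcs_ratio(s1, s2))
```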

  13. ATACseqQC: a Bioconductor package for post-alignment quality assessment of ATAC-seq data.

    Science.gov (United States)

    Ou, Jianhong; Liu, Haibo; Yu, Jun; Kelliher, Michelle A; Castilla, Lucio H; Lawson, Nathan D; Zhu, Lihua Julie

    2018-03-01

    ATAC-seq (Assays for Transposase-Accessible Chromatin using sequencing) is a recently developed technique for genome-wide analysis of chromatin accessibility. Compared to earlier methods for assaying chromatin accessibility, ATAC-seq is faster and easier to perform, does not require cross-linking, has higher signal to noise ratio, and can be performed on small cell numbers. However, to ensure a successful ATAC-seq experiment, step-by-step quality assurance processes, including both wet lab quality control and in silico quality assessment, are essential. While several tools have been developed or adopted for assessing read quality, identifying nucleosome occupancy and accessible regions from ATAC-seq data, none of the tools provide a comprehensive set of functionalities for preprocessing and quality assessment of aligned ATAC-seq datasets. We have developed a Bioconductor package, ATACseqQC, for easily generating various diagnostic plots to help researchers quickly assess the quality of their ATAC-seq data. In addition, this package contains functions to preprocess aligned ATAC-seq data for subsequent peak calling. Here we demonstrate the utilities of our package using 25 publicly available ATAC-seq datasets from four studies. We also provide guidelines on what the diagnostic plots should look like for an ideal ATAC-seq dataset. This software package has been used successfully for preprocessing and assessing several in-house and public ATAC-seq datasets. Diagnostic plots generated by this package will facilitate the quality assessment of ATAC-seq data, and help researchers to evaluate their own ATAC-seq experiments as well as select high-quality ATAC-seq datasets from public repositories such as GEO to avoid generating hypotheses or drawing conclusions from low-quality ATAC-seq experiments. The software, source code, and documentation are freely available as a Bioconductor package at https://bioconductor.org/packages/release/bioc/html/ATACseqQC.html .

  14. Unmixing-Based Denoising as a Pre-Processing Step for Coral Reef Analysis

    Science.gov (United States)

    Cerra, D.; Traganos, D.; Gege, P.; Reinartz, P.

    2017-05-01

    Coral reefs, among the world's most biodiverse and productive submerged habitats, have faced several mass bleaching events due to climate change during the past 35 years. In the course of this century, global warming and ocean acidification are expected to cause corals to become increasingly rare on reef systems. This will result in a sharp decrease in the biodiversity of reef communities and carbonate reef structures. Coral reefs may be mapped, characterized and monitored through remote sensing. Hyperspectral images in particular excel at coral monitoring, being characterized by very rich spectral information, which results in a strong discrimination power to characterize a target of interest and separate healthy corals from bleached ones. Being submerged habitats, coral reef systems are difficult to analyse in airborne or satellite images, as the relevant information is conveyed in bands in the blue range, which exhibit a lower signal-to-noise ratio (SNR) with respect to other spectral ranges; furthermore, water absorbs most of the incident solar radiation, further decreasing the SNR. Derivative features, which are important in coral analysis, are greatly affected by the noise present in the relevant spectral bands, justifying the need for new denoising techniques able to keep local spatial and spectral features. In this paper, Unmixing-based Denoising (UBD) is used to enable analysis of a hyperspectral image acquired over a coral reef system in the Red Sea based on derivative features. UBD reconstructs the dataset pixelwise with reduced noise effects, by forcing each spectrum to be a linear combination of other reference spectra, exploiting the high dimensionality of hyperspectral datasets. Results show clear enhancements with respect to traditional denoising methods based on spatial and spectral smoothing, facilitating the coral detection task.
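
    A minimal sketch of the core reconstruction idea described above is given below: each pixel spectrum is re-expressed as a nonnegative linear combination of reference spectra and replaced by that fit. The use of NNLS and the random reference selection are assumptions, not the paper's exact procedure.

```python
# Sketch of unmixing-based denoising: reconstruct each pixel spectrum
# as a nonnegative linear combination of reference spectra. NNLS and
# the random reference selection are illustrative assumptions.
import numpy as np
from scipy.optimize import nnls

def ubd_denoise(cube, n_refs=30, seed=0):
    """cube: (rows, cols, bands) hyperspectral image."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands)
    rng = np.random.default_rng(seed)
    refs = pixels[rng.choice(len(pixels), n_refs, replace=False)]
    out = np.empty_like(pixels)
    for i, spectrum in enumerate(pixels):
        weights, _ = nnls(refs.T, spectrum)  # least squares, weights >= 0
        out[i] = refs.T @ weights
    return out.reshape(rows, cols, bands)

cube = np.random.rand(10, 10, 50)  # stand-in for a hyperspectral image
print(ubd_denoise(cube).shape)
```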

  15. Tactile and bone-conduction auditory brain computer interface for vision and hearing impaired users.

    Science.gov (United States)

    Rutkowski, Tomasz M; Mori, Hiromu

    2015-04-15

    The paper presents a report on a recently developed BCI alternative for users suffering from impaired vision (lack of focus or eye-movements) or from the so-called "ear-blocking-syndrome" (limited hearing). We report on our recent studies of the extent to which vibrotactile stimuli delivered to the head of a user can serve as a platform for a brain computer interface (BCI) paradigm. In the proposed tactile and bone-conduction auditory BCI, multiple novel head positions are used to evoke combined somatosensory and auditory (via the bone conduction effect) P300 brain responses, in order to define a multimodal tactile and bone-conduction auditory brain computer interface (tbcaBCI). In order to further remove EEG interference and to improve P300 response classification, the synchrosqueezing transform (SST) is applied. SST outperforms classical time-frequency analysis methods for non-linear and non-stationary signals such as EEG. The proposed method is also computationally more effective compared to empirical mode decomposition. The SST filtering allows for online EEG preprocessing, which is essential in the case of BCI. Experimental results with healthy BCI-naive users performing online tbcaBCI validate the paradigm, while the feasibility of the concept is illuminated through information transfer rate case studies. We present a comparison of the proposed SST-based preprocessing method, combined with a logistic regression (LR) classifier, together with classical preprocessing and LDA-based classification BCI techniques. The proposed tbcaBCI paradigm together with data-driven preprocessing methods is a step forward in robust BCI applications research. Copyright © 2014 Elsevier B.V. All rights reserved.
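
    A minimal sketch of a P300 preprocessing-plus-classification chain in the spirit described above is given below; a band-pass filter stands in for the SST filtering step (a named substitution, not the paper's method), and the epoch data are simulated.

```python
# Sketch of a P300 classification chain: band-pass filtering (standing
# in for SST-based denoising), feature extraction, and logistic
# regression. Data are simulated, not from the tbcaBCI experiments.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression

FS = 256  # sampling rate [Hz]

def preprocess(epoch):
    b, a = butter(4, [0.5 / (FS / 2), 12 / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, epoch)
    return filtered[::8]  # decimate to reduce feature dimension

rng = np.random.default_rng(0)
n = 200
t = np.arange(FS) / FS
p300 = np.exp(-((t - 0.3) ** 2) / 0.002)  # crude P300-like bump at 300 ms
y = rng.integers(0, 2, n)
epochs = rng.standard_normal((n, FS)) + np.outer(y, 2.0 * p300)

X = np.array([preprocess(e) for e in epochs])
clf = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```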

  16. The ion-step induced response of membrane-coated ISFETs: theoretical description and experimental verification

    NARCIS (Netherlands)

    Schasfoort, Richardus B.M.; Bergveld, Piet; Kooyman, R.P.H.; Greve, Jan

    1991-01-01

    Recently a new method was introduced to operate an immunological field effect transistor (ImmunoFET). By changing the electrolyte concentration of the sample solution stepwise (the so-called ion-step), a transient diffusion of ions through the membrane-protein layer occurs, resulting in a transient

  17. Automatic pre-processing for an object-oriented distributed hydrological model using GRASS-GIS

    Science.gov (United States)

    Sanzana, P.; Jankowfsky, S.; Branger, F.; Braud, I.; Vargas, X.; Hitschfeld, N.

    2012-04-01

    Landscapes are very heterogeneous, which impacts the hydrological processes occurring in catchments and complicates the modeling of peri-urban catchments in particular. Hydrological Response Units (HRUs), resulting from the intersection of different maps such as land use, soil types, geology and flow networks, allow these elements to be represented explicitly, preserving the natural and artificial contours of the different layers. These HRUs are used as the model mesh in some distributed object-oriented hydrological models, allowing the application of a topologically oriented approach. The connectivity between polygons and polylines provides a detailed representation of the water balance and overland flow in these distributed hydrological models, based on irregular hydro-landscape units. When computing fluxes between these HRUs, geometrical parameters such as the distance between the centre of gravity of an HRU and the river network, and the length of the perimeter, can impact the realism of the calculated overland, sub-surface and groundwater fluxes. Therefore, it is necessary to process the original model mesh in order to avoid these numerical problems. We present an automatic pre-processing procedure implemented in the open-source GRASS-GIS software, built from several Python scripts and already available algorithms such as the Triangle software. First, some scripts were developed to improve the topology of the various elements, such as snapping the river network to the closest contours. When layers such as vegetation areas are derived from remote sensing, their perimeters contain many right angles, which were smoothed. Second, the algorithms more specifically address badly shaped elements of the model mesh, such as polygons with narrow shapes, markedly irregular contours and/or a centroid outside the polygon. To identify these elements we used shape descriptors (a minimal sketch of one such descriptor is given below). The convexity index was considered the best descriptor to identify them with a threshold
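
    As a rough illustration of the shape-descriptor step, the convexity index can be computed as the ratio of a polygon's area to the area of its convex hull; the exact definition and the threshold used by the authors are not given in this record, so both are assumptions here.

        # Convexity index of an HRU polygon (illustrative sketch).
        from shapely.geometry import Polygon

        def convexity_index(polygon):
            """1.0 for convex polygons; lower values flag narrow shapes."""
            return polygon.area / polygon.convex_hull.area

        # L-shaped HRU: clearly non-convex, so the index is well below 1
        hru = Polygon([(0, 0), (10, 0), (10, 1), (1, 1), (1, 10), (0, 10)])
        if convexity_index(hru) < 0.75:   # hypothetical threshold
            print("candidate for mesh correction")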

  18. Call Centre- Computer Telephone Integration

    Directory of Open Access Journals (Sweden)

    Dražen Kovačević

    2012-10-01

    Full Text Available Call centres largely came into being as a result of consumer needs converging with enabling technology, and of companies recognising the revenue opportunities generated by meeting those needs, thereby increasing customer satisfaction. Regardless of the specific application or activity of a call centre, customer satisfaction with the interaction is critical to the revenue generated or protected by the call centre. Physically, a call centre set-up is a place that includes computers, telephones and supervisor stations. A call centre can be available 24 hours a day - when the customer wants to make a purchase, needs information, or simply wishes to register a complaint.

  19. Novel low-power ultrasound digital preprocessing architecture for wireless display.

    Science.gov (United States)

    Levesque, Philippe; Sawan, Mohamad

    2010-03-01

    A complete hardware-based ultrasound preprocessing unit (PPU) is presented as an alternative to available power-hungry devices. Intended to expand ultrasonic applications, the proposed unit allows the cable of the ultrasonic probe to be replaced by a wireless link to transfer data from the probe to a remote monitor. The digital back-end architecture of this PPU is fully pipelined, which permits sampling of ultrasonic signals at a frequency equal to the field-programmable gate array-based system clock, up to 100 MHz. Experimental results show that the proposed processing unit has excellent performance, an equivalent 53.15 Dhrystone 2.1 MIPS/MHz (DMIPS/MHz), compared with other software-based architectures that allow a maximum of 1.6 DMIPS/MHz. In addition, an adaptive subsampling method is proposed to operate the pixel compressor, which allows real-time image zooming and, by removing high-frequency noise, enhances the lateral and axial resolutions by 25% and 33%, respectively. Real-time images, acquired from a reference phantom, validated the feasibility of the proposed architecture. For a display rate of 15 frames per second and a 5-MHz single-element piezoelectric transducer, the proposed digital PPU requires a dynamic power of only 242 mW, which represents around 20% of the best available software-based system. Furthermore, composed of the ultrasound processor and the image interpolation unit, the digital processing core of the PPU presents good power-performance ratios of 26 DMIPS/mW and 43.9 DMIPS/mW at 20-MHz and 100-MHz sample frequencies, respectively.
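
    The effect of subsampling with noise removal can be illustrated in software; the hardware PPU implements this in the FPGA pixel compressor, so the snippet below is only a signal-level analogue with assumed parameters, not the paper's architecture.

        # Decimation with an anti-aliasing filter on a simulated RF line.
        import numpy as np
        from scipy.signal import decimate

        fs = 100e6                            # 100 MHz sample clock
        t = np.arange(0, 20e-6, 1 / fs)       # one 20-us RF line
        rf = np.sin(2 * np.pi * 5e6 * t) * np.exp(-t / 5e-6)  # 5 MHz echo
        rf += 0.1 * np.random.randn(rf.size)                  # wideband noise

        q = 4                                 # hypothetical subsampling factor
        line = decimate(rf, q)                # low-pass filter + downsample
        print(rf.size, "->", line.size)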

  20. Optimal performance of data acquisition and processing for bone SPECT using Tc-99m

    International Nuclear Information System (INIS)

    Tantawy, F.A.; Ziada, G.A.; Talaat, T.; Hassan, A.A.

    1995-01-01

    The present work deals with the physical factors that could affect quality in the bone SPECT technique. The factors included different acquisition and processing variables such as matrix size, acquisition time, preprocessing filter and reconstruction back-projection filter. Our results revealed that the best matrix size was 64x64. The acquisition time was tested from 20 s/step to 40 s/step; it was found that the optimal acquisition time was 20 s/step. Concerning the preprocessing filter, 9-Bw (8-0.3) and F-Bw (8-0.3) were the best. At the same time, back-projection filters were applied using the Ramp, Shepp and Logan, Medium and Chesler filters; the best reconstruction back-projection filter was found to be the Ramp filter. From the above results, the matrix size 64x64, acquisition time 20 s/step, preprocessing filters 9-Bw (8-0.3) and F-Bw (8-0.3), and the Ramp reconstruction back-projection filter were selected as the optimum parameters to be taken into consideration in the bone SPECT technique. Tc-99m was used as the radioactive isotope. 9 figs

  1. Effect of beamlet step-size on IMRT plan quality

    International Nuclear Information System (INIS)

    Zhang Guowei; Jiang Ziping; Shepard, David; Earl, Matt; Yu, Cedric

    2005-01-01

    We have studied the degree to which beamlet step-size impacts the quality of intensity modulated radiation therapy (IMRT) treatment plans. Treatment planning for IMRT begins with the application of a grid that divides each beam's-eye-view of the target into a number of smaller beamlets (pencil beams) of radiation. The total dose is computed as a weighted sum of the dose delivered by the individual beamlets. The width of each beamlet is set to match the width of the corresponding leaf of the multileaf collimator (MLC). The length of each beamlet (beamlet step-size) is parallel to the direction of leaf travel. The beamlet step-size represents the minimum stepping distance of the leaves of the MLC and is typically predetermined by the treatment planning system. This selection imposes an artificial constraint because the leaves of the MLC and the jaws can both move continuously. Removing the constraint can potentially improve the IMRT plan quality. In this study, the optimized results were achieved using an aperture-based inverse planning technique called direct aperture optimization (DAO). We have tested the relationship between pencil beam step-size and plan quality using the American College of Radiology's IMRT test case. For this case, a series of IMRT treatment plans were produced using beamlet step-sizes of 1, 2, 5, and 10 mm. Continuous improvements were seen with each reduction in beamlet step size. The maximum dose to the planning target volume (PTV) was reduced from 134.7% to 121.5% and the mean dose to the organ at risk (OAR) was reduced from 38.5% to 28.2% as the beamlet step-size was reduced from 10 to 1 mm. The smaller pencil beam sizes also led to steeper dose gradients at the junction between the target and the critical structure with gradients of 6.0, 7.6, 8.7, and 9.1 dose%/mm achieved for beamlet step sizes of 10, 5, 2, and 1 mm, respectively
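
    The dose model described above is linear in the beamlet weights, which a toy snippet makes concrete. The array shapes are invented for illustration, and the optimizer (DAO) that would actually set the weights is not reproduced.

        # Total dose as a weighted sum of per-beamlet doses (toy example).
        import numpy as np

        n_beamlets, n_voxels = 200, 50_000
        beamlet_dose = np.random.rand(n_beamlets, n_voxels)  # dose per unit weight
        weights = np.random.rand(n_beamlets)                 # set by DAO in practice

        total_dose = weights @ beamlet_dose                  # (n_voxels,) array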

  2. Call cultures in orang-utans?

    Directory of Open Access Journals (Sweden)

    Serge A Wich

    Full Text Available BACKGROUND: Several studies suggested great ape cultures, arguing that human cumulative culture presumably evolved from such a foundation. These studies focused on conspicuous behaviours and showed rich geographic variation, which could not be attributed to known ecological or genetic differences. Although geographic variation within call types (accents) has previously been reported for orang-utans and other primate species, we examine geographic variation in the presence/absence of discrete call types (dialects). Because orang-utans have been shown to have geographic variation that is not completely explicable by genetic or ecological factors, we hypothesized that this would be similar in the call domain and predicted that discrete call-type variation between populations would be found. METHODOLOGY/PRINCIPAL FINDINGS: We examined long-term behavioural data from five orang-utan populations and collected fecal samples for genetic analyses. We show that there is geographic variation in the presence of discrete types of calls. In exactly the same behavioural context (nest building and infant retrieval), individuals in different wild populations customarily emit either qualitatively different calls, or calls in some populations but not in others. By comparing patterns in call-type and genetic similarity, we suggest that the observed variation is not likely to be explained by genetic or ecological differences. CONCLUSION/SIGNIFICANCE: These results are consistent with the potential presence of 'call cultures' and suggest that wild orang-utans possess the ability to invent arbitrary calls, which spread through social learning. These findings differ substantially from those previously reported for primates. First, the results reported here concern dialect, not accent. Second, this study presents cases of production learning, whereas most primate studies on vocal learning concerned contextual learning. We conclude by speculating on how these findings might

  3. Inspection of power and ground layers in PCB images

    Science.gov (United States)

    Bunyak, Filiz; Ercal, Fikret

    1998-10-01

    In this work, we present an inspection method for power and ground (P&G) layers of printed circuit boards (PCBs), also called utility layers. Design considerations for the P&G layers are different from those for signal layers, and current PCB inspection approaches cannot be applied to them. P&G layers act as internal ground, neutral or power sources. They are predominantly copper, with occasional pad areas (without copper) called clearances. The defect definition is based on the spacing between the holes that will be drilled in clearances and the surrounding copper. Overlap of pads of different sizes and shapes is allowed, which results in complex, hard-to-inspect clearances. Our inspection is based on identifying the shape, size and position of the individual pads that contribute to an overlapping clearance, and then inspecting each pad based on design rules and tolerances. The main steps of our algorithm, the first two of which are sketched below, are as follows: (1) extraction and preprocessing of clearance contours; (2) decomposition of contours into segments: corner detection and matching of lines or circular arcs between two corners; (3) determination of the pads from the partial contour information obtained in step (2); and (4) design-rule checking for each detected pad.
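
    A rough OpenCV sketch of steps (1)-(2): clearance contours are extracted from a binarised layer image and candidate corners are found by polygonal approximation. The file name, threshold and approximation tolerance are assumptions; fitting circular arcs between corners and the pad-level design-rule checks are not shown.

        # Contour extraction and corner detection for clearances (sketch).
        import cv2

        layer = cv2.imread("pg_layer.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
        _, binary = cv2.threshold(layer, 128, 255, cv2.THRESH_BINARY)

        contours, _ = cv2.findContours(binary, cv2.RETR_LIST,
                                       cv2.CHAIN_APPROX_NONE)
        for contour in contours:
            eps = 0.01 * cv2.arcLength(contour, True)       # corner tolerance
            corners = cv2.approxPolyDP(contour, eps, True)  # candidate corners
            print("clearance with", len(corners), "corner(s)")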

  4. Accurate step-hold tracking of smoothly varying periodic and aperiodic probability.

    Science.gov (United States)

    Ricci, Matthew; Gallistel, Randy

    2017-07-01

    Subjects observing many samples from a Bernoulli distribution are able to perceive an estimate of the generating parameter. A question of fundamental importance is how the current percept (what we think the probability now is) depends on the sequence of observed samples. Answers to this question are strongly constrained by the manner in which the current percept changes in response to changes in the hidden parameter. Subjects do not update their percept trial-by-trial when the hidden probability undergoes unpredictable and unsignaled step changes; instead, they update it only intermittently, in a step-hold pattern. It could be that the step-hold pattern is not essential to the perception of probability and is only an artifact of step changes in the hidden parameter. However, we now report that the step-hold pattern obtains even when the parameter varies slowly and smoothly. It obtains even when the smooth variation is periodic (sinusoidal) and perceived as such. We elaborate on a previously published theory that accounts for: (i) the quantitative properties of the step-hold update pattern; (ii) subjects' quick and accurate reporting of changes; (iii) subjects' second thoughts about previously reported changes; (iv) subjects' detection of higher-order structure in patterns of change. We also call attention to the challenges these results pose for trial-by-trial updating theories.

  5. Smoking addiction among young women working at night at International call centres in India

    OpenAIRE

    Amrita Gupta

    2018-01-01

    Background Indian women are now actively involved in occupations that were once regarded as taboo, such as night work. Working at night for international call centres is a significant step in moving beyond patriarchal control over women's mobility in India. The job brings lifestyle changes among employees, such as late-night partying, smoking and drinking. The women employees are mainly fresh graduates. The study reports the prevalence of smoking and the smoking behaviour among th...

  6. Contribution of the microprocessors in the study and development of a pre-processing unit involved in a nuclear physics experiment

    International Nuclear Information System (INIS)

    Pichot, G.

    1980-01-01

    The pre-processing unit has its place between the electronic output of the detector and the experiment control system; its role is threefold: a real-time role of providing adequate data to the control system to allow feedback, a basic role of selecting and reducing the volume of data, and a role of managing the programs necessary for the first two roles. This work is divided into three parts: first, an analysis of the needs and limitations of the present system; second, meeting those needs with off-the-shelf equipment; and third, validation and future improvements [fr]

  7. The difficult medical emergency call

    DEFF Research Database (Denmark)

    Møller, Thea Palsgaard; Kjærulff, Thora Majlund; Viereck, Søren

    2017-01-01

    BACKGROUND: Pre-hospital emergency care requires proper categorization of emergency calls and assessment of emergency priority levels by the medical dispatchers. We investigated predictors for emergency call categorization as "unclear problem" in contrast to "symptom-specific" categories and the effect of categorization on mortality. METHODS: Register-based study in a 2-year period based on emergency call data from the emergency medical dispatch center in Copenhagen combined with nationwide register data. Logistic regression analysis (N = 78,040 individuals) was used for identification...

  8. BUSINESS MODELS FOR EXTENDING OF 112 EMERGENCY CALL CENTER CAPABILITIES WITH E-CALL FUNCTION INSERTION

    Directory of Open Access Journals (Sweden)

    Pop Dragos Paul

    2010-12-01

    Full Text Available The present article concerns the current status of eCall implementation in Romania and Europe and the proposed business models for implementing the eCall function in Romania. The eCall system is used for reliable transmission, in case of a crash, between the In-Vehicle System and the Public Safety Answering Point, via the voice channel of cellular networks and the Public Switched Telephone Network (PSTN). The eCall service can be initiated automatically or manually by the driver. All data presented in this article are part of research carried out by the authors in the sectorial contract 'Implementation study regarding the eCall system', with ITS Romania and Electronic Solution as partners and the Romanian Ministry of Communication and Information Technology as beneficiary.

  9. Min st-cut oracle for planar graphs with near-linear preprocessing time

    DEFF Research Database (Denmark)

    Borradaile, Glencora; Sankowski, Piotr; Wulff-Nilsen, Christian

    2010-01-01

    For an undirected n-vertex planar graph G with non-negative edge-weights, we consider the following type of query: given two vertices s and t in G, what is the weight of a min st-cut in G? We show how to answer such queries in constant time with O(n log^5 n) preprocessing time and O(n log n) space. We use a Gomory-Hu tree to represent all the pairwise min st-cuts implicitly. Previously, no subquadratic time algorithm was known for this problem. Our oracle can be extended to report the min st-cuts in time proportional to their size. Since all-pairs min st-cut and the minimum cycle basis are dual problems in planar graphs, we also obtain an implicit representation of a minimum cycle basis in O(n log^5 n) time and O(n log n) space, and an explicit representation with additional O(C) time and space, where C is the size of the basis. To obtain our results, we require that shortest paths be unique...
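
    The Gomory-Hu representation is easy to illustrate with a general-purpose library (this is not the paper's near-linear planar construction): the weight of a min st-cut equals the smallest edge weight on the s-t path in the tree. The example graph is invented.

        # Answering min st-cut queries from a Gomory-Hu tree (illustration).
        import networkx as nx

        G = nx.Graph()
        G.add_weighted_edges_from([(0, 1, 3), (1, 2, 2), (0, 2, 5), (2, 3, 4)],
                                  weight="capacity")
        T = nx.gomory_hu_tree(G, capacity="capacity")

        def min_st_cut(tree, s, t):
            path = nx.shortest_path(tree, s, t)
            return min(tree[u][v]["weight"] for u, v in zip(path, path[1:]))

        print(min_st_cut(T, 0, 3))   # weight of a min 0-3 cut in G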

  10. elPrep: High-Performance Preparation of Sequence Alignment/Map Files for Variant Calling.

    Directory of Open Access Journals (Sweden)

    Charlotte Herzeel

    Full Text Available elPrep is a high-performance tool for preparing sequence alignment/map files for variant calling in sequencing pipelines. It can be used as a replacement for SAMtools and Picard for preparation steps such as filtering, sorting, marking duplicates, reordering contigs, and so on, while producing identical results. What sets elPrep apart is its software architecture, which allows preparation pipelines to be executed in only a single pass through the data, no matter how many preparation steps are used in the pipeline. elPrep is designed as a multithreaded application that runs entirely in memory, avoids repeated file I/O, and merges the computation of several preparation steps to significantly speed up the execution time. For example, for a preparation pipeline of five steps on a whole-exome BAM file (NA12878), we reduce the execution time from about 1 hour 40 minutes, when using a combination of SAMtools and Picard, to about 15 minutes when using elPrep, while utilising the same server resources, here 48 threads and 23 GB of RAM. For the same pipeline on whole-genome data (NA12878), elPrep reduces the runtime from 24 hours to less than 5 hours. As a typical clinical study may contain sequencing data for hundreds of patients, elPrep can remove several hundreds of hours of computing time, and thus substantially reduce analysis time and cost.

  11. Use of apparent thickness for preprocessing of low-frequency electromagnetic data in inversion-based multibarrier evaluation workflow

    Science.gov (United States)

    Omar, Saad; Omeragic, Dzevat

    2018-04-01

    The concept of apparent thicknesses is introduced for the inversion-based, multicasing evaluation interpretation workflow using multifrequency and multispacing electromagnetic measurements. A thickness value is assigned to each measurement, enabling the development of two new preprocessing algorithms to remove casing collar artifacts. First, long-spacing apparent thicknesses are used to remove, from the pipe sections, artifacts ("ghosts") caused by the transmitter crossing a casing collar or corrosion. Second, a collar identification, localization, and assignment algorithm is developed to enable robust inversion in collar sections. Last, casing eccentering can also be identified on the basis of opposite deviation of short-spacing phase and magnitude apparent thicknesses from the nominal value. The proposed workflow can handle an arbitrary number of nested casings and has been validated on synthetic and field data.

  12. STEP--a System for Teaching Experimental Psychology using E-Prime.

    Science.gov (United States)

    MacWhinney, B; St James, J; Schunn, C; Li, P; Schneider, W

    2001-05-01

    Students in psychology need to learn to design and analyze their own experiments. However, software that allows students to build experiments on their own has been limited in a variety of ways. The shipping of the first full release of the E-Prime system later this year will open up a new opportunity for addressing this problem. Because E-Prime promises to become the standard for building experiments in psychology, it is now possible to construct a Web-based resource that uses E-Prime as the delivery engine for a wide variety of instructional materials. This new system, funded by the National Science Foundation, is called STEP (System for the Teaching of Experimental Psychology). The goal of the STEP Project is to provide instructional materials that will facilitate the use of E-Prime in various learning contexts. We are now compiling a large set of classic experiments implemented in E-Prime and available over the Internet from http://step.psy.cmu.edu. The Web site also distributes instructional materials for building courses in experimental psychology based on E-Prime.

  13. Assessing call centers’ success:

    Directory of Open Access Journals (Sweden)

    Hesham A. Baraka

    2013-07-01

    This paper introduces a model to evaluate the performance of call centers based on the DeLone and McLean Information Systems success model. A number of indicators are identified to track a call center's performance. A mapping of the proposed indicators to the six dimensions of the D&M model is presented. A Weighted Call Center Performance Index is proposed to assess call center performance; the index is used to analyze the effect of the identified indicators. A policy-weighted approach was used to assign the weights, with an analysis of different weights for each dimension. The analysis of the different weighting cases gave priority to the user satisfaction and net benefits dimensions as the two outcomes of the system. For the input dimensions, higher priority was given to the system quality and service quality dimensions. Call center decision makers can use the tool to tune the different weights in order to reach the objectives set by the organization. Multiple linear regression analysis was used to provide a linear formula for the user satisfaction and net benefits dimensions, making it possible to forecast the values of these two dimensions as a function of the other dimensions.
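
    A policy-weighted index of this kind reduces to a weighted sum of dimension scores. The dimension names below follow the D&M model as referenced above; the scores and weights are purely illustrative.

        # Weighted Call Center Performance Index (illustrative sketch).
        scores = {"system_quality": 0.80, "information_quality": 0.70,
                  "service_quality": 0.90, "use": 0.60,
                  "user_satisfaction": 0.75, "net_benefits": 0.70}
        weights = {"system_quality": 0.25, "information_quality": 0.10,
                   "service_quality": 0.25, "use": 0.10,
                   "user_satisfaction": 0.15, "net_benefits": 0.15}  # sum to 1

        wccpi = sum(weights[d] * scores[d] for d in scores)
        print(f"Weighted Call Center Performance Index: {wccpi:.3f}")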

  14. 78 FR 76218 - Rural Call Completion

    Science.gov (United States)

    2013-12-17

    ... calls to rural areas, and enforce restrictions against blocking, choking, reducing, or restricting calls... to alert the Commission of systemic problems receiving calls from a particular originating long... associated with completing calls to rural areas. These rules will also enhance our ability to enforce...

  15. Long-distance calls in Neotropical primates

    Directory of Open Access Journals (Sweden)

    Oliveira Dilmar A.G.

    2004-01-01

    Full Text Available Long-distance calls are widespread among primates. Several studies concentrate on such calls in just one or in few species, while few studies have treated more general trends within the order. The common features that usually characterize these vocalizations are related to long-distance propagation of sounds. The proposed functions of primate long-distance calls can be divided into extragroup and intragroup ones. Extragroup functions relate to mate defense, mate attraction or resource defense, while intragroup functions involve group coordination or alarm. Among Neotropical primates, several species perform long-distance calls that seem more related to intragroup coordination, markedly in atelines. Callitrichids present long-distance calls that are employed both in intragroup coordination and intergroup contests or spacing. Examples of extragroup directed long-distance calls are the duets of titi monkeys and the roars and barks of howler monkeys. Considerable complexity and gradation exist in the long-distance call repertoires of some Neotropical primates, and female long-distance calls are probably more important in non-duetting species than usually thought. Future research must focus on larger trends in the evolution of primate long-distance calls, including the phylogeny of calling repertoires and the relationships between form and function in these signals.

  16. Automatic luminous reflections detector using global threshold with increased luminosity contrast in images

    Science.gov (United States)

    Silva, Ricardo Petri; Naozuka, Gustavo Taiji; Mastelini, Saulo Martiello; Felinto, Alan Salvany

    2018-01-01

    The incidence of luminous reflections (LR) in captured images can interfere with the color of the affected regions. These regions tend to oversaturate, becoming whitish and, consequently, losing the original color information of the scene. Decision processes that employ images acquired from digital cameras can be impaired by LR incidence; such applications include real-time video surgery and facial and ocular recognition. This work proposes an algorithm called contrast enhancement of potential LR regions, a preprocessing step that increases the contrast of potential LR regions in order to improve the performance of automatic LR detectors. In addition, three automatic detectors were compared with and without our preprocessing method. The first is a technique already consolidated in the literature, the Chang-Tseng threshold. We propose two further automatic detectors, called adapted histogram peak and global threshold. We employed four performance metrics to evaluate the detectors, namely accuracy, precision, exactitude, and root mean square error; the exactitude metric was developed in this work, for which a manually defined reference model was created. The global threshold detector combined with our preprocessing method presented the best results, with an average exactitude rate of 82.47%.
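
    The two-stage idea (boost contrast, then apply a global threshold) can be sketched with OpenCV; the gain, offset and threshold values here are assumptions, not the parameters of the paper's contrast-enhancement transform.

        # Contrast boost followed by a global threshold for LR regions.
        import cv2

        img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)    # hypothetical input
        boosted = cv2.convertScaleAbs(img, alpha=1.5, beta=0)  # linear contrast gain
        _, lr_mask = cv2.threshold(boosted, 230, 255, cv2.THRESH_BINARY)
        cv2.imwrite("lr_regions.png", lr_mask)                 # white = potential LR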

  17. Stepping Theories of Active Logic with Two Kinds of Negation

    Directory of Open Access Journals (Sweden)

    Mikhail M. Vinkov

    2017-01-01

    Full Text Available This paper formulates a stepping-theory formalism with two kinds of negation, addressing one of the areas of Active Logic, a new kind of logic aimed at performing practical tasks in real-time knowledge-based AI systems. In addition to standard logical negation, the proposed formalism uses so-called subjective negation, interpreted as the inability to arrive at some conclusion through reasoning by the current time. The semantics of the proposed formalism is defined as an argumentation structure.

  18. TotalReCaller: improved accuracy and performance via integrated alignment and base-calling.

    Science.gov (United States)

    Menges, Fabian; Narzisi, Giuseppe; Mishra, Bud

    2011-09-01

    Currently, re-sequencing approaches use multiple modules serially to interpret raw sequencing data from next-generation sequencing platforms, while remaining oblivious to the genomic information until the final alignment step. Such approaches fail to exploit the full information from both the raw sequencing data and the reference genome, which can yield better-quality sequence reads, SNP calls and variant detection, as well as an alignment at the best possible location in the reference genome. Thus, there is a need for novel reference-guided bioinformatics algorithms for interpreting analog signals representing sequences of the bases ({A, C, G, T}), while simultaneously aligning possible sequence reads to a source reference genome whenever available. Here, we propose a new base-calling algorithm, TotalReCaller, to achieve improved performance. A linear error model for the raw intensity data and a Burrows-Wheeler transform (BWT) based alignment are combined using a Bayesian score function, which is then globally optimized over all possible genomic locations using an efficient branch-and-bound approach. The algorithm has been implemented in software and hardware [field-programmable gate array (FPGA)] to achieve real-time performance. Empirical results on real high-throughput Illumina data were used to evaluate TotalReCaller's performance relative to its peers (Bustard, BayesCall, Ibis and Rolexa) based on several criteria, particularly those important in clinical and scientific applications: namely, (i) base-calling speed and throughput, (ii) read accuracy and (iii) specificity and sensitivity in variant calling. A software implementation of TotalReCaller, as well as additional information, is available at http://bioinformatics.nyu.edu/wordpress/projects/totalrecaller/. Contact: fabian.menges@nyu.edu
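
    The flavour of the combined score can be conveyed with a toy function: a base-calling likelihood from the intensity model and an alignment prior from the reference are mixed in log space. Both probabilities and the weighting are stand-ins; the paper's linear error model, BWT alignment and branch-and-bound search are not reproduced.

        # Toy combined base-calling/alignment score (not the paper's model).
        import math

        def combined_score(p_intensity, p_reference, w=0.5):
            """Log-space score for one candidate base; w balances the terms."""
            return w * math.log(p_intensity) + (1 - w) * math.log(p_reference)

        # candidate base with strong signal support but weak reference support
        print(combined_score(0.9, 0.3))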

  19. Sleep Quality of Call Handlers Employed in International Call Centers in National Capital Region of Delhi, India.

    Science.gov (United States)

    Raja, J D; Bhasin, S K

    2016-10-01

    The call center sector in India is a relatively new and fast-growing industry, driving employment and growth in modern India today. Most international call centers in the National Capital Region (NCR) of Delhi operate at odd work hours, corresponding to a time suitable for their international customers. The sleep quality of call handlers employed in these call centers is in jeopardy owing to their altered sleep schedule. To assess the sleep quality and determine its independent predictors among call handlers employed in international call centers in the NCR of Delhi, a cross-sectional questionnaire-based study was conducted on 375 call handlers aged 18-39 years. Sleep quality was assessed using the Athens Insomnia Scale along with a pre-tested, structured questionnaire. The mean age of respondents was 24.6 (SD 2.4) years; 78% of participants were male and 83.5% were unmarried. 44.3% of call handlers were cigarette smokers, and physical ailments were reported by 37%. 77.6% of call handlers had some suspicion of insomnia or suspected insomnia; the rest had no sleep problem. Smoking, poor social support, heavy workload, lack of a relaxation facility at the office, and prolonged travel time to the office were independent predictors of sleep quality. Safeguarding their health becomes an occupational health challenge for public health specialists.

  20. Colombeau algebra as a mathematical tool for investigating step load and step deformation of systems of nonlinear springs and dashpots

    Science.gov (United States)

    Průša, Vít; Řehoř, Martin; Tůma, Karel

    2017-02-01

    The response of mechanical systems composed of springs and dashpots to a step input is of eminent interest in applications. If the system is formed by linear elements, then its response is governed by a system of linear ordinary differential equations, and the mathematical method of choice for analysing the response is the classical theory of distributions. However, if the system contains nonlinear elements, then the classical theory of distributions is of no use, since it is strictly limited to the linear setting. Consequently, a question arises as to whether it is even possible or reasonable to study the response of nonlinear systems to step inputs. The answer is positive. A mathematical theory that can handle the challenge is the so-called Colombeau algebra. Building on the abstract result by Průša and Rajagopal (Int J Non-Linear Mech 81:207-221, 2016), we show how to use the theory in the analysis of the response of nonlinear spring-dashpot and spring-dashpot-mass systems.
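
    A schematic one-line illustration of why the classical theory breaks down (the quadratic dashpot here is an invented example, not one of the systems studied in the paper): a step input forces a product of singular distributions, which is undefined in distribution theory but meaningful in a Colombeau algebra.

        % Step displacement applied to a hypothetical quadratic dashpot:
        \[
          x(t) = H(t) \;\Rightarrow\; \dot{x}(t) = \delta(t), \qquad
          f = \eta\,\dot{x}^{2} = \eta\,\delta(t)^{2} \notin \mathcal{D}'(\mathbb{R}),
        \]
        % whereas in the Colombeau algebra
        % $\mathcal{G}(\mathbb{R}) \supset \mathcal{D}'(\mathbb{R})$
        % the product of any two generalized functions is well defined.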

  1. PID controller auto-tuning based on process step response and damping optimum criterion.

    Science.gov (United States)

    Pavković, Danijel; Polak, Siniša; Zorc, Davor

    2014-01-01

    This paper presents a novel method of PID controller tuning suitable for higher-order aperiodic processes and aimed at step-response-based auto-tuning applications. The PID controller tuning is based on the identification of a so-called n-th order lag (PTn) process model and the application of the damping optimum criterion, thus facilitating straightforward algebraic rules for the adjustment of both the closed-loop response speed and damping. The PTn model identification is based on the process step response, wherein the PTn model parameters are evaluated in a novel manner from the step response's equivalent dead time and lag time constant. The effectiveness of the proposed PTn model parameter estimation procedure and the related damping optimum-based PID controller auto-tuning have been verified by means of extensive computer simulations. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
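
    The identification half of such a method starts by reading two quantities off a recorded step response. The sketch below uses common 10%/63.2% rise-time conventions, which are assumptions; the paper's PTn-model lookup and damping-optimum gain formulas are not reproduced here.

        # Extracting equivalent dead time and lag constant from a step response.
        import numpy as np

        def dead_time_and_lag(t, y):
            """t, y: recorded step response; returns (dead time, lag constant)."""
            y_norm = (y - y[0]) / (y[-1] - y[0])     # normalise to [0, 1]
            t_dead = t[np.argmax(y_norm >= 0.10)]    # equivalent dead time
            t_63 = t[np.argmax(y_norm >= 0.632)]     # 63.2% rise time
            return t_dead, t_63 - t_dead             # (T_dead, T_lag)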

  2. Callings and Organizational Behavior

    Science.gov (United States)

    Elangovan, A. R.; Pinder, Craig C.; McLean, Murdith

    2010-01-01

    Current literature on careers, social identity and meaning in work tends to understate the multiplicity, historical significance, and nuances of the concept of calling(s). In this article, we trace the evolution of the concept from its religious roots into secular realms and develop a typology of interpretations using occupation and religious…

  3. Uncertain call likelihood negatively affects sleep and next-day cognitive performance while on-call in a laboratory environment.

    Science.gov (United States)

    Sprajcer, Madeline; Jay, Sarah M; Vincent, Grace E; Vakulin, Andrew; Lack, Leon; Ferguson, Sally A

    2018-05-11

    On-call working arrangements are employed in a number of industries to manage unpredictable events, and often involve tasks that are safety- or time-critical. This study investigated the effects of call likelihood during an overnight on-call shift on self-reported pre-bed anxiety, sleep and next-day cognitive performance. A four-night laboratory-based protocol was employed, with an adaptation night, a control night and two counterbalanced on-call nights. On one on-call night, participants were instructed that they would definitely be called during the night, while on the other they were told they may be called. The State-Trait Anxiety Inventory form x-1 was used to investigate pre-bed anxiety, and sleep was assessed using polysomnography and power spectral analysis of the sleep electroencephalogram. Cognitive performance was assessed four times daily using a 10-min psychomotor vigilance task. Participants felt more anxious before bed when they were definitely going to be called, compared with the control and maybe conditions. Conversely, participants experienced significantly less non-rapid eye movement and stage-two sleep and poorer cognitive performance when told they may be called. Further, participants had significantly more rapid eye movement sleep in the maybe condition, which may be an adaptive response to the stress associated with this on-call condition. It appears that self-reported anxiety may not be linked with sleep outcomes while on-call. However, this research indicates that it is important to take call likelihood into consideration when constructing rosters and risk-management systems for on-call workers.

  4. Step dynamics and terrace-width distribution on flame-annealed gold films: The effect of step-step interaction

    International Nuclear Information System (INIS)

    Shimoni, Nira; Ayal, Shai; Millo, Oded

    2000-01-01

    Dynamics of atomic steps and the terrace-width distribution within step bunches on flame-annealed gold films are studied using scanning tunneling microscopy. The distribution is narrower than commonly observed for vicinal planes and has a Gaussian shape, indicating a short-range repulsive interaction between the steps, with an apparently large interaction constant. The dynamics of the atomic steps, on the other hand, appear to be influenced, in addition to these short-range interactions, also by a longer-range attraction of steps towards step bunches. Both types of interactions promote self-ordering of terrace structures on the surface. When current is driven through the films a step-fingering instability sets in, reminiscent of the Bales-Zangwill instability

  5. Traffic safety and step-by-step driving licence for young people

    DEFF Research Database (Denmark)

    Tønning, Charlotte; Agerholm, Niels

    2017-01-01

    Young novice car drivers are much more accident-prone than other drivers - up to 10 times that of their parents' generation. A central solution to improve the traffic safety for this group is implementation of a step-by-step driving licence. A number of countries have introduced such schemes, and this paper presents a review of safety effects from step-by-step driving licence schemes. Most of the investigated schemes consist of a step-by-step driving licence with Step 1) various tests and education, Step 2) a period where driving is only allowed together with an experienced driver and Step 3) driving without a companion is allowed but with various restrictions and, in some cases, additional driving education and tests. In general, a step-by-step driving licence improves traffic safety even though the young people are permitted to drive a car earlier on. The effects from driving with an experienced driver vary...

  6. Care and calls

    DEFF Research Database (Denmark)

    Paasch, Bettina Sletten

    Nurses maintain patient-centred care through the use of tactile resources and embodied orientations while they attend to a phone call. Experienced nurses thus perform multiactivity by distributing attention towards both the patient and the phone, and the analysis shows that their concrete ways of doing so depend on the complex... The strategies nurses employ when they are telephoned during interactions with patients are not universal; indeed, different strategies have evolved in other hospital departments. Not only does this thesis contribute insights into the way nurses manage phone calls during interactions with patients, but, by subscribing to a growing body of embodied... of human interaction.

  7. Performance driven IT management five practical steps to business success

    CERN Document Server

    Sachs, Ira

    2011-01-01

    This book argues that the Federal Government needs a new approach to IT management. Introducing a novel five-step process called performance-driven management (PDM), author Ira Sachs explains in detail how to reduce risk on large IT programs and projects. This is an essential tool for all IT and business managers in government and contractors doing business with the government, and it has much useful and actionable information for anyone who is interested in helping their business save money and take on effective, successful practices.

  8. A step-defined sedentary lifestyle index: <5000 steps/day.

    Science.gov (United States)

    Tudor-Locke, Catrine; Craig, Cora L; Thyfault, John P; Spence, John C

    2013-02-01

    Step counting (using pedometers or accelerometers) is widely accepted by researchers, practitioners, and the general public. Given the mounting evidence of the link between low steps/day and time spent in sedentary behaviours, how few steps/day some populations actually perform, and the growing interest in the potentially deleterious effects of excessive sedentary behaviours on health, an emerging question is "How many steps/day are too few?" This review examines the utility, appropriateness, and limitations of using a reoccurring candidate for a step-defined sedentary lifestyle index: <5000 steps/day. Of the step-count thresholds considered, which range from higher (>10 000 steps/day) to lower (<5000 steps/day) values, a <5000 steps/day sedentary lifestyle index for adults is appropriate for researchers and practitioners and for communicating with the general public. There is little evidence to advocate any specific value indicative of a step-defined sedentary lifestyle index in children and adolescents.

  9. Effect of interpolation error in pre-processing codes on calculations of self-shielding factors and their temperature derivatives

    International Nuclear Information System (INIS)

    Ganesan, S.; Gopalakrishnan, V.; Ramanadhan, M.M.; Cullen, D.E.

    1986-01-01

    We investigate the effect of interpolation error in the pre-processing codes LINEAR, RECENT and SIGMA1 on calculations of self-shielding factors and their temperature derivatives. We consider the 2.0347 to 3.3546 keV energy region for 238 U capture, which is the NEACRP benchmark exercise on unresolved parameters. The calculated values of temperature derivatives of self-shielding factors are significantly affected by interpolation error. The sources of problems in both evaluated data and codes are identified and eliminated in the 1985 version of these codes. This paper helps to (1) inform code users to use only 1985 versions of LINEAR, RECENT, and SIGMA1 and (2) inform designers of other code systems where they may have problems and what to do to eliminate their problems. (author)

  10. Effect of interpolation error in pre-processing codes on calculations of self-shielding factors and their temperature derivatives

    International Nuclear Information System (INIS)

    Ganesan, S.; Gopalakrishnan, V.; Ramanadhan, M.M.; Cullen, D.E.

    1985-01-01

    The authors investigate the effect of interpolation error in the pre-processing codes LINEAR, RECENT and SIGMA1 on calculations of self-shielding factors and their temperature derivatives. They consider the 2.0347 to 3.3546 keV energy region for 238U capture, which is the NEACRP benchmark exercise on unresolved parameters. The calculated values of temperature derivatives of self-shielding factors are significantly affected by interpolation error. The sources of problems in both evaluated data and codes are identified and eliminated in the 1985 version of these codes. This paper helps to (1) inform code users to use only 1985 versions of LINEAR, RECENT, and SIGMA1 and (2) inform designers of other code systems where they may have problems and what to do to eliminate their problems

  11. PhySIC_IST: cleaning source trees to infer more informative supertrees.

    Science.gov (United States)

    Scornavacca, Celine; Berry, Vincent; Lefort, Vincent; Douzery, Emmanuel J P; Ranwez, Vincent

    2008-10-04

    Supertree methods combine phylogenies with overlapping sets of taxa into a larger one. Topological conflicts frequently arise among source trees for methodological or biological reasons, such as long branch attraction, lateral gene transfers, gene duplication/loss or deep gene coalescence. When topological conflicts occur among source trees, liberal methods infer supertrees containing the most frequent alternative, while veto methods infer supertrees not contradicting any source tree, i.e. they discard all conflicting resolutions. When the source trees host a significant number of topological conflicts or have a small taxon overlap, supertree methods of both kinds can propose poorly resolved, hence uninformative, supertrees. To overcome this problem, we propose to infer non-plenary supertrees, i.e. supertrees that do not necessarily contain all the taxa present in the source trees, discarding those whose position greatly differs among source trees or for which insufficient information is provided. We detail a variant of the PhySIC veto method called PhySIC_IST that can infer non-plenary supertrees. PhySIC_IST aims at inferring supertrees that satisfy the same appealing theoretical properties as PhySIC, while being as informative as possible under this constraint. The informativeness of a supertree is estimated using a variation of the CIC (Cladistic Information Content) criterion that takes into account both the presence of multifurcations and the absence of some taxa. Additionally, we propose a statistical preprocessing step called STC (Source Trees Correction) to correct the source trees prior to the supertree inference. STC is a liberal step that removes the parts of each source tree that significantly conflict with other source trees. Combining STC with a veto method allows an explicit trade-off between veto and liberal approaches, tuned by a single parameter. Performing large-scale simulations, we observe that STC+PhySIC_IST infers much more informative

  12. Astronomical sketching a step-by-step introduction

    CERN Document Server

    Handy, Richard; Perez, Jeremy; Rix, Erika; Robbins, Sol

    2007-01-01

    This book presents the amateur with fine examples of astronomical sketches and step-by-step tutorials in each medium, from pencil to computer graphics programs. This unique book can teach almost anyone to create beautiful sketches of celestial objects.

  13. Numerical Simulation of Air Entrainment for Flat-Sloped Stepped Spillway

    Directory of Open Access Journals (Sweden)

    Bentalha Chakib

    2015-03-01

    Full Text Available The stepped spillway is a good hydraulic structure for energy dissipation because of its large surface roughness. The performance of the stepped spillway is enhanced by the presence of air, which can prevent or reduce cavitation damage. Chanson developed a method to determine the position of the start of air entrainment, called the inception point. In this work the inception point is determined using Fluent computational fluid dynamics (CFD), where the volume-of-fluid (VOF) model is used to simulate the air-water interaction at the free surface and turbulence closure is provided by the standard k-ε model; the one-sixth power-law distribution of the velocity profile is also verified. In addition, the pressure contours and velocity vectors at the bed surface are determined. The numerical results agree well with experimental results.

  14. How to call the Fire Brigade

    CERN Multimedia

    2003-01-01

    The telephone numbers for the CERN Fire Brigade are: 74444 for emergency calls 74848 for other calls Note The number 112 will stay in use for emergency calls from "wired" telephones, however, from mobile phones it leads to non-CERN emergency services.

  15. One step beyond: Different step-to-step transitions exist during continuous contact brachiation in siamangs

    Directory of Open Access Journals (Sweden)

    Fana Michilsens

    2012-02-01

    In brachiation, two main gaits are distinguished: ricochetal brachiation and continuous contact brachiation. During ricochetal brachiation, a flight phase exists and the body centre of mass (bCOM) describes a parabolic trajectory. For continuous contact brachiation, where at least one hand is always in contact with the substrate, we showed in an earlier paper that four step-to-step transition types occur. We referred to these as a 'point', a 'loop', a 'backward pendulum' and a 'parabolic' transition. Only the first two transition types have previously been mentioned in the existing literature on gibbon brachiation. In the current study, we used three-dimensional video and force analysis to describe and characterize these four step-to-step transition types. Results show that, although individual preference occurs, the brachiation strides characterized by each transition type are mainly associated with speed. Yet, these four transitions seem to form a continuum rather than four distinct types. Energy recovery and collision fraction are used as estimators of the mechanical efficiency of brachiation and, remarkably, these parameters do not differ between strides with different transition types. All strides show high energy recoveries (mean = 70±11.4%) and low collision fractions (mean = 0.2±0.13), regardless of the step-to-step transition type used. We conclude that siamangs have efficient means of modifying locomotor speed during continuous contact brachiation by choosing particular step-to-step transition types, which all minimize collision fraction and enhance energy recovery.

  16. A stabilized Runge–Kutta–Legendre method for explicit super-time-stepping of parabolic and mixed equations

    International Nuclear Information System (INIS)

    Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.

    2014-01-01

    Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes "s" explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is s^2 times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems – a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very useful
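
    A sketch of one RKL1 superstep for the 1-D heat equation is given below. The three-term recursion and its coefficients (mu_j = (2j-1)/j, nu_j = -(j-1)/j, stage weight mu_j*2/(s^2+s), and a superstep (s^2+s)/2 times the explicit limit) follow our reading of the RKL1 scheme; the grid, initial profile and choice of s are illustrative.

        # One RKL1 superstep for u_t = D u_xx on a periodic grid (sketch).
        import numpy as np

        def rkl1_superstep(u, s, dt, lap):
            """Advance u by one s-stage RKL1 superstep; lap(u) ~ D*u_xx."""
            w = 2.0 / (s * s + s)                     # stage weight factor
            y_prev, y = u, u + w * dt * lap(u)        # stages Y_0 and Y_1
            for j in range(2, s + 1):
                mu, nu = (2 * j - 1) / j, -(j - 1) / j
                y, y_prev = mu * y + nu * y_prev + mu * w * dt * lap(y), y
            return y                                  # Y_s = u^{n+1}

        n, D = 200, 1.0
        dx = 1.0 / n
        x = np.linspace(0.0, 1.0, n)
        u = np.exp(-((x - 0.5) ** 2) / 0.01)          # Gaussian initial profile
        lap = lambda v: D * (np.roll(v, 1) - 2 * v + np.roll(v, -1)) / dx**2

        s = 8
        dt = (dx**2 / (2 * D)) * (s * s + s) / 2.0    # superstep size
        u = rkl1_superstep(u, s, dt, lap)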

  17. Application of stepping motor

    International Nuclear Information System (INIS)

    1980-10-01

    This book is divided into three parts, all concerned with the practical use of stepping motors. The first part has six chapters, covering the stepping motor, the classification of stepping motors, basic theory, characteristics and basic terminology, the types and characteristics of hybrid stepping motors, and basic stepping motor control. The second part deals with the application of stepping motors, covering the hardware of stepping motor control, stepping motor control by microcomputer, and the software of stepping motor control. The last part addresses the choice of a stepping motor system, examples of stepping motors, measurement of stepping motors, and practical cases of stepping motor applications.

  18. Internship guide : Work placements step by step

    NARCIS (Netherlands)

    Haag, Esther

    2013-01-01

    Internship Guide: Work Placements Step by Step has been written from the practical perspective of a placement coordinator. This book addresses the following questions: what problems do students encounter when they start thinking about the jobs their degree programme prepares them for? How do you

  19. Using Semantic Similarity In Automated Call Quality Evaluator For Call Centers

    Directory of Open Access Journals (Sweden)

    Ria A. Sagum

    2015-08-01

    Full Text Available Conversations between the agent and the client are evaluated manually by a quality assurance (QA) officer. This job is only one of the responsibilities of a QA officer and eats up a lot of their time, which leads to late evaluation results and may cause untimely responses by the company to concerns raised by its clients. This research developed an application software that automates and evaluates quality assurance in business process outsourcing companies or customer service management by implementing sentence similarity. The developed system includes two modules: speaker diarization, which includes transcription and question-and-answer extraction, and a similarity checker, which checks the similarity between the extracted answer and the call center agent's answer to a question. The system was evaluated for the correctness of the extracted answers and the accuracy of the evaluation for a particular call. Audio conversations were tested for the accuracy of the transcription module, which has an accuracy of 27.96%. The precision, recall and F-measure of the extracted answers were 78.03%, 96.26% and 86.19%, respectively. The accuracy of the system in evaluating a call is 70%.

  20. Microsoft Office professional 2010 step by step

    CERN Document Server

    Cox, Joyce; Frye, Curtis

    2011-01-01

    Teach yourself exactly what you need to know about using Office Professional 2010 - one step at a time! With STEP BY STEP, you build and practice new skills hands-on, at your own pace. Covering Microsoft Word, PowerPoint, Outlook, Excel, Access, Publisher, and OneNote, this book will help you learn the core features and capabilities needed to: create attractive documents, publications, and spreadsheets; manage your e-mail, calendar, meetings, and communications; put your business data to work; develop and deliver great presentations; organize your ideas and notes in one place; connect, share, and accom

  1. Calling in Work: Secular or Sacred?

    Science.gov (United States)

    Steger, Michael F.; Pickering, N. K.; Shin, J. Y.; Dik, B. J.

    2010-01-01

    Recent scholarship indicates that people who view their work as a calling are more satisfied with their work and their lives. Historically, calling has been regarded as a religious experience, although modern researchers frequently have adopted a more expansive and secular conceptualization of calling, emphasizing meaning and personal fulfillment…

  2. Hornbills can distinguish between primate alarm calls.

    Science.gov (United States)

    Rainey, Hugo J.; Zuberbühler, Klaus; Slater, Peter J. B.

    2004-01-01

    Some mammals distinguish between and respond appropriately to the alarm calls of other mammal and bird species. However, the ability of birds to distinguish between mammal alarm calls has not been investigated. Diana monkeys (Cercopithecus diana) produce different alarm calls to two predators: crowned eagles (Stephanoaetus coronatus) and leopards (Panthera pardus). Yellow-casqued hornbills (Ceratogymna elata) are vulnerable to predation by crowned eagles but are not preyed on by leopards and might therefore be expected to respond to the Diana monkey eagle alarm call but not to the leopard alarm call. We compared responses of hornbills to playback of eagle shrieks, leopard growls, Diana monkey eagle alarm calls and Diana monkey leopard alarm calls and found that they distinguished appropriately between the two predator vocalizations as well as between the two Diana monkey alarm calls. We discuss possible mechanisms leading to these responses. PMID:15209110

  3. The hyperbolic step potential: Anti-bound states, SUSY partners and Wigner time delays

    Energy Technology Data Exchange (ETDEWEB)

    Gadella, M. [Departamento de Física Teórica, Atómica y Óptica and IMUVA, Universidad de Valladolid, E-47011 Valladolid (Spain); Kuru, Ş. [Department of Physics, Faculty of Science, Ankara University, 06100 Ankara (Turkey); Negro, J., E-mail: jnegro@fta.uva.es [Departamento de Física Teórica, Atómica y Óptica and IMUVA, Universidad de Valladolid, E-47011 Valladolid (Spain)

    2017-04-15

    We study the scattering produced by a one-dimensional hyperbolic step potential, which is exactly solvable and of unusual interest because of its asymmetric character. The analytic continuation of the scattering matrix in the momentum representation has a branch cut and an infinite number of simple poles on the negative imaginary axis, which are related to the so-called anti-bound states. This model does not show resonances. Using the wave functions of the anti-bound states, we obtain supersymmetric (SUSY) partners which are the series of Rosen–Morse II potentials. We have computed the Wigner reflection and transmission time delays for the hyperbolic step and such SUSY partners. Our results show that the more bound states a partner Hamiltonian has, the smaller the time delay. We have also evaluated time delays for the hyperbolic step potential in the classical case and obtained striking similarities with the quantum case. - Highlights: • The scattering matrix of the hyperbolic step potential is studied. • The scattering matrix has a branch cut and an infinite number of poles. • The poles are associated with anti-bound states. • SUSY partners are computed using anti-bound states. • Wigner time delays for the hyperbolic step and partner potentials are compared.

  4. Using an isomorphic problem pair to learn introductory physics: Transferring from a two-step problem to a three-step problem

    Directory of Open Access Journals (Sweden)

    Shih-Yin Lin

    2013-10-01

    Full Text Available In this study, we examine introductory physics students’ ability to perform analogical reasoning between two isomorphic problems which employ the same underlying physics principles but have different surface features. 382 students from a calculus-based and an algebra-based introductory physics course were administered a quiz in the recitation in which they had to learn from a solved problem provided and take advantage of what they learned from it to solve another isomorphic problem (which we call the quiz problem. The solved problem provided has two subproblems while the quiz problem has three subproblems, which is known from previous research to be challenging for introductory students. In addition to the solved problem, students also received extra scaffolding supports that were intended to help them discern and exploit the underlying similarities of the isomorphic solved and quiz problems. The data analysis suggests that students had great difficulty in transferring what they learned from a two-step problem to a three-step problem. Although most students were able to learn from the solved problem to some extent with the scaffolding provided and invoke the relevant principles in the quiz problem, they were not necessarily able to apply the principles correctly. We also conducted think-aloud interviews with six introductory students in order to understand in depth the difficulties they had and explore strategies to provide better scaffolding. The interviews suggest that students often superficially mapped the principles employed in the solved problem to the quiz problem without necessarily understanding the governing conditions underlying each principle and examining the applicability of the principle in the new situation in an in-depth manner. Findings suggest that more scaffolding is needed to help students in transferring from a two-step problem to a three-step problem and applying the physics principles appropriately. We outline a few

  5. Using an isomorphic problem pair to learn introductory physics: Transferring from a two-step problem to a three-step problem

    Science.gov (United States)

    Lin, Shih-Yin; Singh, Chandralekha

    2013-12-01

    In this study, we examine introductory physics students’ ability to perform analogical reasoning between two isomorphic problems which employ the same underlying physics principles but have different surface features. 382 students from a calculus-based and an algebra-based introductory physics course were administered a quiz in the recitation in which they had to learn from a solved problem provided and take advantage of what they learned from it to solve another isomorphic problem (which we call the quiz problem). The solved problem provided has two subproblems while the quiz problem has three subproblems, which is known from previous research to be challenging for introductory students. In addition to the solved problem, students also received extra scaffolding supports that were intended to help them discern and exploit the underlying similarities of the isomorphic solved and quiz problems. The data analysis suggests that students had great difficulty in transferring what they learned from a two-step problem to a three-step problem. Although most students were able to learn from the solved problem to some extent with the scaffolding provided and invoke the relevant principles in the quiz problem, they were not necessarily able to apply the principles correctly. We also conducted think-aloud interviews with six introductory students in order to understand in depth the difficulties they had and explore strategies to provide better scaffolding. The interviews suggest that students often superficially mapped the principles employed in the solved problem to the quiz problem without necessarily understanding the governing conditions underlying each principle and examining the applicability of the principle in the new situation in an in-depth manner. Findings suggest that more scaffolding is needed to help students in transferring from a two-step problem to a three-step problem and applying the physics principles appropriately. We outline a few possible strategies

  6. From nestling calls to fledgling silence: adaptive timing of change in response to aerial alarm calls.

    Science.gov (United States)

    Magrath, Robert D; Platzen, Dirk; Kondo, Junko

    2006-09-22

    Young birds and mammals are extremely vulnerable to predators and so should benefit from responding to parental alarm calls warning of danger. However, young often respond differently from adults. This difference may reflect: (i) an imperfect stage in the gradual development of adult behaviour or (ii) an adaptation to different vulnerability. Altricial birds provide an excellent model to test for adaptive changes with age in response to alarm calls, because fledglings are vulnerable to a different range of predators than nestlings. For example, a flying hawk is irrelevant to a nestling in an enclosed nest, but is dangerous to that individual once it has left the nest, so we predict that young develop a response to aerial alarm calls to coincide with fledging. Supporting our prediction, recently fledged white-browed scrubwrens, Sericornis frontalis, fell silent immediately after playback of their parents' aerial alarm call, whereas nestlings continued calling despite hearing the playback. Young scrubwrens are therefore exquisitely adapted to the changing risks faced during development.

  7. Staffing to Maximize Profit for Call Centers with Impatient and Repeat-Calling Customers

    Directory of Open Access Journals (Sweden)

    Jun Gong

    2015-01-01

    Full Text Available Motivated by call center practice, we study the optimal staffing of many-server queues with impatient and repeat-calling customers. A call center is modeled as an M/M/s+M queue, which we extend to a behavioral queuing model in which customers come and go based on their satisfaction with waiting time. We explicitly take into account customer repeat behavior, which implies that satisfied customers might return and thus affect the arrival rate. Optimality is defined as the number of agents that maximizes revenues net of staffing costs, and we account for the fact that revenues are a direct function of staffing. Finally, we use numerical experiments to compare our results with those of traditional models that do not consider customer repeat behavior, and we indicate how managers might allocate staffing optimally under various customer behavior mechanisms.
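
    The record does not give the authors' formulas; as a minimal sketch under stated assumptions (no repeat-calling feedback, revenue earned per completed call, all parameter values hypothetical), one can evaluate profit for an M/M/s+M queue from its birth-death stationary distribution and search over the number of agents s:

```python
def mmspm_profit(lam, mu, theta, s, revenue_per_call, cost_per_agent, nmax=500):
    """Net profit rate for an M/M/s+M queue: revenue on completed calls
    minus staffing cost. Abandoning customers generate no revenue."""
    # Unnormalized stationary probabilities of the birth-death chain.
    p = [1.0]
    for n in range(nmax):
        rate_out = min(n + 1, s) * mu + max(0, n + 1 - s) * theta
        p.append(p[-1] * lam / rate_out)
    z = sum(p)
    # Throughput = expected service-completion rate.
    throughput = sum(min(n, s) * mu * pn for n, pn in enumerate(p)) / z
    return revenue_per_call * throughput - cost_per_agent * s

# Search for the staffing level that maximizes profit (toy parameters).
lam, mu, theta = 40.0, 1.0, 0.5        # arrival, service, abandonment rates
best = max(range(1, 80), key=lambda s: mmspm_profit(lam, mu, theta, s, 5.0, 3.0))
print("optimal number of agents:", best)
```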

  8. The way to collisions, step by step

    CERN Multimedia

    2009-01-01

    While the LHC sectors cool down and reach the cryogenic operating temperature, spirits are warming up as we all eagerly await the first collisions. No reason to hurry, though. Making particles collide involves the complex manoeuvring of thousands of delicate components. The experts will make it happen using a step-by-step approach.

  9. A Novel Motion Compensation Method for Random Stepped Frequency Radar with M-sequence

    Science.gov (United States)

    Liao, Zhikun; Hu, Jiemin; Lu, Dawei; Zhang, Jun

    2018-01-01

    The random stepped frequency radar is a new kind of synthetic wideband radar. Research has shown that it possesses a thumbtack-like ambiguity function, which is considered ideal. This also means that only precise motion compensation can yield a correct high-resolution range profile. In this paper, we first introduce the random stepped frequency radar coded with an M-sequence and briefly analyse the effect of relative motion between target and radar on range imaging, known as the defocusing problem. Then, a novel motion compensation method, named complementary code cancellation, is put forward to solve this problem. Finally, simulated experiments demonstrate its validity and a computational analysis shows its efficiency.

  10. Peafowl antipredator calls encode information about signalers.

    Science.gov (United States)

    Yorzinski, Jessica L

    2014-02-01

    Animals emit vocalizations that convey information about external events. Many of these vocalizations, including those emitted in response to predators, also encode information about the individual that produced the call. The relationships between acoustic features of antipredator calls and information about signalers (including sex, identity, body size, and social rank) were examined in peafowl (Pavo cristatus). The "bu-girk" antipredator calls of male and female peafowl were recorded and 20 acoustic parameters were automatically extracted from each call. Both the bu and girk elements of the antipredator call were individually distinctive, and calls were classified to the correct signaler with over 90% and 70% accuracy in females and males, respectively. Females produced calls with a higher fundamental frequency (F0) than males. In both females and males, body size was negatively correlated with F0. In addition, peahen rank was related to the duration, end mean frequency, and start harmonicity of the bu element. Peafowl antipredator calls thus contain detailed information about the signaler and can potentially be used by receivers to respond to dangerous situations.

  11. A Million Steps: Developing a Health Promotion Program at the Workplace to Enhance Physical Activity.

    Science.gov (United States)

    González-Dominguez, María Eugenia; Romero-Sánchez, José Manuel; Ares-Camerino, Antonio; Marchena-Aparicio, Jose Carlos; Flores-Muñoz, Manuel; Infantes-Guzmán, Inés; León-Asuero, José Manuel; Casals-Martín, Fernando

    2017-11-01

    The workplace is a key setting for the prevention of occupational risks and for promoting healthy activities such as physical activity. Developing a physically active lifestyle results in many health benefits, improving both well-being and quality of life. This article details the experience of two Spanish companies that implemented a program to promote physical exercise in the workplace, called "A Million Steps." This program aimed to increase the physical activity of participants, challenging them to reach at least a million steps in a month through group walks. Participant workers reached the set goal and highlighted the motivational and interpersonal functions of the program.

  12. BUFO PARDALIS (ANURA: BUFONIDAE): MATING CALL AND ...

    African Journals Online (AJOL)

    the calls of one of these species, Bufo pardalis Hewitt, were not analysed by Tandy & Keith (1972). Furthermore there is some confusion in the literature regarding the mating call of this species. For these reasons this mating call is here clarified. The mating call of B. pardalis was first described by Ranger (in Hewitt 1935) as ...

  13. Chemical pre-processing of cluster galaxies over the past 10 billion years in the IllustrisTNG simulations

    Science.gov (United States)

    Gupta, Anshu; Yuan, Tiantian; Torrey, Paul; Vogelsberger, Mark; Martizzi, Davide; Tran, Kim-Vy H.; Kewley, Lisa J.; Marinacci, Federico; Nelson, Dylan; Pillepich, Annalisa; Hernquist, Lars; Genel, Shy; Springel, Volker

    2018-06-01

    We use the IllustrisTNG simulations to investigate the evolution of the mass-metallicity relation (MZR) for star-forming cluster galaxies as a function of the formation history of their cluster host. The simulations predict an enhancement in the gas-phase metallicities of star-forming cluster galaxies (stellar masses ≳10⁹ M⊙); this enhancement appears prior to their infall into the central cluster potential, indicating for the first time a systematic 'chemical pre-processing' signature for infalling cluster galaxies. Namely, galaxies that will fall into a cluster by z = 0 show a ~0.05 dex enhancement in the MZR compared to field galaxies at z ≤ 0.5. Based on the inflow rate of gas into cluster galaxies and its metallicity, we identify the accretion of pre-enriched gas as the key driver of the chemical evolution of such galaxies, particularly in the stellar mass range ≳10⁹ M⊙ for galaxies in clusters. Our results motivate future observations looking for pre-enrichment signatures in dense environments.

  14. Impact of mobility on call block, call drops and optimal cell size in small cell networks

    OpenAIRE

    Ramanath , Sreenath; Voleti , Veeraruna Kavitha; Altman , Eitan

    2011-01-01

    We consider small cell networks and study the impact of user mobility. Assuming Poisson call arrivals at random positions with random velocities, we discuss the characterization of handovers at the cell boundaries. We derive explicit expressions for call block and call drop probabilities using tools from spatial queuing theory. We also derive expressions for the average virtual server held-up time. These expressions are used to derive optimal cell sizes for various profiles of velocities in small c...
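
    The record's spatial-queueing expressions are not reproduced here, but as a hedged baseline sketch (traffic and channel values hypothetical), the classical Erlang-B recursion gives the call block probability for a loss system:

```python
def erlang_b(offered_load: float, channels: int) -> float:
    """Blocking probability for an M/M/c/c loss system (Erlang B),
    computed with the standard numerically stable recursion."""
    b = 1.0
    for c in range(1, channels + 1):
        b = offered_load * b / (c + offered_load * b)
    return b

# Example: 10 Erlangs of offered traffic on a small cell with 12 channels.
print(f"call block probability: {erlang_b(10.0, 12):.4f}")
```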

  15. External GSM phone calls now made simpler

    CERN Multimedia

    2007-01-01

    On 2 July, the IT/CS Telecom Service introduced a new service making external calls from CERN GSM phones easier. A specific prefix is no longer needed for calls outside CERN. External calls from CERN GSM phones are to be simplified. It is no longer necessary to use a special prefix to call an external number from the CERN GSM network.The Telecom Section of the IT/CS Group is introducing a new system that will make life easier for GSM users. It is no longer necessary to use a special prefix (333) to call an external number from the CERN GSM network. Simply dial the number directly like any other Swiss GSM customer. CERN currently has its own private GSM network with the Swiss mobile operator, Sunrise, covering the whole of Switzerland. This network was initially intended exclusively for calls between CERN numbers (replacing the old beeper system). A special system was later introduced for external calls, allowing them to pass thr...

  16. Sharing programming resources between Bio* projects through remote procedure call and native call stack strategies.

    Science.gov (United States)

    Prins, Pjotr; Goto, Naohisa; Yates, Andrew; Gautier, Laurent; Willis, Scooter; Fields, Christopher; Katayama, Toshiaki

    2012-01-01

    Open-source software (OSS) encourages computer programmers to reuse software components written by others. In evolutionary bioinformatics, OSS comes in a broad range of programming languages, including C/C++, Perl, Python, Ruby, Java, and R. To avoid writing the same functionality multiple times for different languages, it is possible to share components by bridging computer languages and Bio* projects, such as BioPerl, Biopython, BioRuby, BioJava, and R/Bioconductor. In this chapter, we compare the two principal approaches for sharing software between different programming languages: either by remote procedure call (RPC) or by sharing a local call stack. RPC provides a language-independent protocol over a network interface; examples are RSOAP and Rserve. The local call stack provides a between-language mapping not over the network interface, but directly in computer memory; examples are R bindings, RPy, and languages sharing the Java Virtual Machine stack. This functionality provides strategies for sharing of software between Bio* projects, which can be exploited more often. Here, we present cross-language examples for sequence translation, and measure throughput of the different options. We compare calling into R through native R, RSOAP, Rserve, and RPy interfaces, with the performance of native BioPerl, Biopython, BioJava, and BioRuby implementations, and with call stack bindings to BioJava and the European Molecular Biology Open Software Suite. In general, call stack approaches outperform native Bio* implementations and these, in turn, outperform RPC-based approaches. To test and compare strategies, we provide a downloadable BioNode image with all examples, tools, and libraries included. The BioNode image can be run on VirtualBox-supported operating systems, including Windows, OSX, and Linux.
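
    As a hedged illustration of the RPC approach (a toy example in Python's standard library, not the Bio* tooling itself), a function can run in one process and be invoked over the network by a client written in any language with an XML-RPC library:

```python
# Server: exposes a sequence-translation-style function over XML-RPC.
from xmlrpc.server import SimpleXMLRPCServer

def reverse_complement(seq: str) -> str:
    """Toy stand-in for a shared bioinformatics routine."""
    table = str.maketrans("ACGT", "TGCA")
    return seq.translate(table)[::-1]

server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(reverse_complement)
# server.serve_forever()  # uncomment to run the server

# Client: any XML-RPC-capable language can make the same call.
import xmlrpc.client
# proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
# print(proxy.reverse_complement("ACGTT"))  # -> "AACGT"
```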

  17. Call Duration Characteristics based on Customers Location

    Directory of Open Access Journals (Sweden)

    Žvinys Karolis

    2014-05-01

    Full Text Available Nowadays many different studies are based on the analysis of call duration distributions (CDD). However, the majority are concerned with social relationships between people, so there is an appreciable scarcity of information on how call duration is associated with a user's location. The goal of this paper is to reveal the ties between a user's voice call duration and the location of the call. For this purpose we analyzed more than 5 million calls from a real mobile network, made over base stations located in rural areas, roads, small towns, business and entertainment centers, and residential districts. CDDs and characteristic features of call duration are given and discussed for each of these site types. The analysis presents users' habits and behavior as a group (not as individuals). The research showed that the CDDs of customers in different locations are not equal. It was found that users at entertainment and business centers tend to talk much more briefly than people at home; moreover, the CDD can be strongly distorted when machine-generated calls are included. Hence applying a common CDD to a whole network is not recommended. The study also deals with specific parameters of call duration for distinct user groups, and the influence of network technology on call duration is considered.
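
    As a hedged illustration of the kind of per-location comparison described (column names and values entirely hypothetical), call duration characteristics can be grouped by base-station site type:

```python
import pandas as pd

# Hypothetical call-detail records: duration in seconds plus the site type
# of the serving base station (rural, road, town, business, residential...).
cdr = pd.DataFrame({
    "duration_s": [35, 610, 42, 188, 95, 720, 15, 260],
    "site_type": ["business", "residential", "business", "road",
                  "town", "residential", "business", "rural"],
})

# Per-location call duration characteristics, as in a CDD comparison.
stats = cdr.groupby("site_type")["duration_s"].agg(["count", "mean", "median"])
print(stats.sort_values("mean"))
```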

  18. Field theoretical approach to proton-nucleus reactions. I - One step inelastic scattering

    International Nuclear Information System (INIS)

    Eiras, A.; Kodama, T.; Nemes, M.C.

    1988-01-01

    In this work we obtain a closed-form expression for the double differential cross section for one-step proton-nucleus reactions within a field theoretical framework. Energy and momentum conservation as well as nuclear structure effects are consistently taken into account within the field theoretical eikonal approximation. In our formulation the kinematics of such reactions is dominated not by the free nucleon-nucleon cross section but by a new factor which we call the relativistic differential cross section in a Born approximation. (author) [pt

  19. Care and Calls

    DEFF Research Database (Denmark)

    Paasch, Bettina Sletten

    In Danish hospitals, nurses have been equipped with a mobile work phone to improve their availability and efficiency. On the phones nurses receive internal and external phone conversations, patient calls, and alarms from electronic surveillance equipment. For safety reasons the phones cannot be switched off or silenced; they consequently ring during all activities and also during interactions with patients. A possible tension thus arises when nurses have to be both caring and sensitive towards the patient and simultaneously be efficient and available and answer their phone. The present paper … on the enactment of care but also on patient safety. Nurses working in various hospital departments have developed different strategies for handling mobile phone calls when with a patient. Additional research into the ways nurses successfully or unsuccessfully enact care and ensure patient safety when they answer …

  20. Calling under pressure: short-finned pilot whales make social calls during deep foraging dives.

    Science.gov (United States)

    Jensen, Frants H; Perez, Jacobo Marrero; Johnson, Mark; Soto, Natacha Aguilar; Madsen, Peter T

    2011-10-22

    Toothed whales rely on sound to echolocate prey and communicate with conspecifics, but little is known about how extreme pressure affects pneumatic sound production in deep-diving species with a limited air supply. The short-finned pilot whale (Globicephala macrorhynchus) is a highly social species among the deep-diving toothed whales, in which individuals socialize at the surface but leave their social group in pursuit of prey at depths of up to 1000 m. To investigate if these animals communicate acoustically at depth and test whether hydrostatic pressure affects communication signals, acoustic DTAGs logging sound, depth and orientation were attached to 12 pilot whales. Tagged whales produced tonal calls during deep foraging dives at depths of up to 800 m. Mean call output and duration decreased with depth despite the increased distance to conspecifics at the surface. This shows that the energy content of calls is lower at depths where lungs are collapsed and where the air volume available for sound generation is limited by ambient pressure. Frequency content was unaffected, providing a possible cue for group or species identification of diving whales. Social calls may be important to maintain social ties for foraging animals, but may be impacted adversely by vessel noise.

  1. Do market participants learn from conference calls?

    NARCIS (Netherlands)

    Roelofsen, E.; Verbeeten, F.; Mertens, G.

    2014-01-01

    We examine whether market participants learn from the information that is disseminated during the Q-and-A section of conference calls. Specifically, we investigate whether stock prices react to information on intangible assets provided during conference calls, and whether conference calls

  2. From supervision to resolution: next steps on the road to European banking union

    OpenAIRE

    Nicolas Véron; Guntram B. Wolff

    2013-01-01

    The European Council has outlined the creation of a Single Resolution Mechanism (SRM), complementing the Single Supervisory Mechanism. The thinking on the SRM’s legal basis, design and mission is still preliminary and depends on other major initiatives, including the European Stability Mechanism’s involvement in bank recapitalisations and the Bank Recovery and Resolution (BRR) Directive. The SRM should also not be seen as the final step creating Europe’s f...

  3. HOW TO CALL THE CERN FIRE BRIGADE

    CERN Multimedia

    2002-01-01

    The telephone numbers of the CERN Fire Brigade are: 74444 for emergency calls 74848 for other calls Note The number 112 will stay in use for emergency calls from 'wired' telephones, however, from mobile phones it leads to non-CERN emergency services.  

  5. HOW TO CALL THE CERN FIRE BRIGADE

    CERN Multimedia

    2001-01-01

    The telephone numbers of the CERN Fire Brigade are: 74444 for emergency calls 74848 for other calls Note The number 112 will stay in use for emergency calls from 'wired' telephones, however, from mobile phones it leads to non-CERN emergency services.

  6. HOW TO CALL THE CERN FIRE BRIGADE

    CERN Multimedia

    2001-01-01

    The telephone numbers of the CERN Fire Brigade are: 74444 for emergency calls 74848 for other calls Note: the number 112 will stay in use for emergency calls from 'wired' telephones, however, from mobile phones it leads to non-CERN emergency services.

  7. Telephone calls by individuals with cancer.

    Science.gov (United States)

    Flannery, Marie; McAndrews, Leanne; Stein, Karen F

    2013-09-01

    To describe symptom type and reporting patterns found in spontaneously initiated telephone calls placed to an ambulatory cancer center practice. Retrospective, descriptive. Adult hematology oncology cancer center. 563 individuals with a wide range of oncology diagnoses who initiated 1,229 telephone calls to report symptoms. Raw data were extracted from telephone forms using a data collection sheet with 23 variables obtained for each phone call, using pre-established coding criteria. A literature-based, investigator-developed instrument was used for the coding criteria and selection of which variables to extract. Symptom reporting, telephone calls, pain, and symptoms. A total of 2,378 symptoms were reported by telephone during the four months. At least 10% of the sample reported pain (38%), fatigue (16%), nausea (16%), swelling (12%), diarrhea (12%), dyspnea (10%), and anorexia (10%). The modal response was to call only one time and to report only one symptom (55%). Pain emerged as the symptom that most often prompted an individual to pick up the telephone and call. Although variation was seen in symptom reporting, an interesting pattern emerged with an individual reporting on a solitary symptom in a single telephone call. The emergence of pain as the primary symptom reported by telephone prompted educational efforts for both in-person clinic visit management of pain and prioritizing nursing education and protocol management of pain reported by telephone. Report of symptoms by telephone can provide nurses unique insight into patient-centered needs. Although pain has been an important focus of education and research for decades, it remains a priority for individuals with cancer. A wide range in symptom reporting by telephone was evident.

  8. Outsourcing an Effective Postdischarge Call Program

    Science.gov (United States)

    Meek, Kevin L.; Williams, Paula; Unterschuetz, Caryn J.

    2018-01-01

    To improve patient satisfaction ratings and decrease readmissions, many organizations utilize internal staff to complete postdischarge calls to recently released patients. Developing, implementing, monitoring, and sustaining an effective call program can be challenging and have eluded some of the renowned medical centers in the country. Using collaboration with an outsourced vendor to bring state-of-the-art call technology and staffed with specially trained callers, health systems can achieve elevated levels of engagement and satisfaction for their patients postdischarge. PMID:29494453

  9. The Value of Step-by-Step Risk Assessment for Unmanned Aircraft

    DEFF Research Database (Denmark)

    La Cour-Harbo, Anders

    2018-01-01

    The new European legislation expected in 2018 or 2019 will introduce a step-by-step process for conducting risk assessments for unmanned aircraft flight operations. This is a relatively simple approach to a very complex challenge. This work compares this step-by-step process to high fidelity risk...... modeling, and shows that at least for a series of example flight missions there is reasonable agreement between the two very different methods....

  10. EMERGENCY CALLS

    CERN Multimedia

    Medical Service

    2001-01-01

    IN URGENT NEED OF A DOCTOR
    GENEVA
    EMERGENCY SERVICES GENEVA AND VAUD: 144
    FIRE BRIGADE: 118
    POLICE: 117
    CERN FIREMEN: 767-44-44
    ANTI-POISONS CENTRE (open 24h/24h): 01-251-51-51
    Patient not fit to be moved, call family doctor, or:
    GP AT HOME (open 24h/24h): 748-49-50
    Association of Geneva Doctors, emergency doctors at home (07h-23h): 322 20 20
    Patient fit to be moved:
    HOPITAL CANTONAL CENTRAL, 24 Micheli-du-Crest: 372-33-11 or 382-33-11
    EMERGENCIES: 382-33-11 or 372-33-11
    CHILDREN'S HOSPITAL, 6 rue Willy-Donzé: 372-33-11
    MATERNITY, 32 bvd. de la Cluse: 382-68-16 or 382-33-11
    OPHTHALMOLOGY, 22 Alcide Jentzer: 382-33-11 or 372-33-11
    MEDICAL CENTRE CORNAVIN, 1-3 rue du Jura: 345 45 50
    HOPITAL DE LA TOUR, Meyrin
    EMERGENCIES: 719-61-11
    PAEDIATRIC EMERGENCIES: 719-61-00
    LA TOUR MEDICAL CENTRE: 719-74-00
    European Emergency Call: 112
    FRANCE
    EMERGENCY SERVICES: 15
    FIRE BRIGADE: 18
    POLICE: 17
    CERN FIREMEN AT HOME: 00-41-22-767-44-44
    ANTI-POISONS CENTRE (open 24h/24h): 04-72-11-69-11
    All doctors ...

  11. WATERSHED ALGORITHM BASED SEGMENTATION FOR HANDWRITTEN TEXT IDENTIFICATION

    Directory of Open Access Journals (Sweden)

    P. Mathivanan

    2014-02-01

    Full Text Available In this paper we develop a system for writer identification which involves four processing steps: preprocessing, segmentation, feature extraction, and writer identification using a neural network. In the preprocessing phase the handwritten text is subjected to slant removal in preparation for segmentation and feature extraction. After this step the text image undergoes noise removal and gray-level conversion. The preprocessed image is then segmented using the morphological watershed algorithm, where text lines are segmented into single words and then into single letters. Features are extracted from the segmented image by the Daubechies 5/3 integer wavelet transform to reduce training complexity [1, 6]. This process is lossless and reversible [10], [14]. The extracted features are given as input to our neural network for writer identification, and a target image is selected for each training process in the 2-layer neural network. The several trained outputs obtained from different targets help in text identification. It is a multilingual text analysis which provides simple and efficient text segmentation.
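
    The authors' code is not included in the record; a minimal watershed-segmentation sketch in the spirit of the described pipeline (file name hypothetical, thresholds illustrative) using OpenCV might look like:

```python
import cv2
import numpy as np

# Hypothetical input: a grayscale scan of handwritten text ("page.png").
img = cv2.imread("page.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Binarize (text as foreground), then estimate sure foreground/background.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
sure_bg = cv2.dilate(binary, np.ones((3, 3), np.uint8), iterations=3)
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.4 * dist.max(), 255, 0)
sure_fg = np.uint8(sure_fg)
unknown = cv2.subtract(sure_bg, sure_fg)

# Label markers and flood with the watershed transform; each basin is a
# candidate letter/word region.
_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1          # reserve 0 for the "unknown" region
markers[unknown == 255] = 0
markers = cv2.watershed(img, markers)
print("segments found:", markers.max() - 1)
```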

  12. Hello, Who is Calling?: Can Words Reveal the Social Nature of Conversations?

    Science.gov (United States)

    Stark, Anthony; Shafran, Izhak; Kaye, Jeffrey

    2012-01-01

    This study aims to infer the social nature of conversations from their content automatically. To place this work in context, our motivation stems from the need to understand how social disengagement affects cognitive decline or depression among older adults. For this purpose, we collected a comprehensive and naturalistic corpus comprising all the incoming and outgoing telephone calls from 10 subjects over the duration of a year. As a first step, we learned a binary classifier to filter out business-related conversations, achieving an accuracy of about 85%. This classification task provides a convenient tool to probe the nature of telephone conversations. We evaluated the utility of openings and closings in differentiating personal calls, and find that empirical results on a large corpus do not support the hypotheses by Schegloff and Sacks that personal conversations are marked by unique closing structures. For classifying different types of social relationships such as family vs other, we investigated features related to language use (entropy), a hand-crafted dictionary (LIWC) and topics learned using unsupervised latent Dirichlet models (LDA). Our results show that the posteriors over topics from LDA provide consistently higher accuracy (60-81%) compared to LIWC or language use features in distinguishing different types of conversations.
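
    As a hedged sketch of the LDA-based approach described (toy transcripts and labels, not the study's corpus), per-document topic posteriors can serve as features for a conversation-type classifier:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

# Toy transcripts with hypothetical labels (1 = personal, 0 = business).
transcripts = [
    "hi grandma how are the kids see you sunday",
    "your account balance is due please confirm payment",
    "love you too talk to your sister soon",
    "the appointment is confirmed for tuesday at nine",
]
labels = [1, 0, 1, 0]

# Unsupervised LDA topics as features, then a simple classifier on the
# per-document posterior over topics.
counts = CountVectorizer().fit_transform(transcripts)
topics = LatentDirichletAllocation(n_components=2, random_state=0)
features = topics.fit_transform(counts)   # rows = topic posteriors
clf = LogisticRegression().fit(features, labels)
print(clf.predict(features))
```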

  13. TPMG Northern California appointments and advice call center.

    Science.gov (United States)

    Conolly, Patricia; Levine, Leslie; Amaral, Debra J; Fireman, Bruce H; Driscoll, Tom

    2005-08-01

    Kaiser Permanente (KP) has been developing its use of call centers as a way to provide an expansive set of healthcare services to KP members efficiently and cost effectively. Since 1995, when The Permanente Medical Group (TPMG) began to consolidate primary care phone services into three physical call centers, the TPMG Appointments and Advice Call Center (AACC) has become the "front office" for primary care services across approximately 89% of Northern California. The AACC provides primary care phone service for approximately 3 million Kaiser Foundation Health Plan members in Northern California and responds to approximately 1 million calls per month across the three AACC sites. A database records each caller's identity as well as the day, time, and duration of each call; reason for calling; services provided to callers as a result of calls; and clinical outcomes of calls. We here summarize this information for the period 2000 through 2003.

  14. Evolution of advertisement calls in African clawed frogs

    Science.gov (United States)

    Tobias, Martha L.; Evans, Ben J.; Kelley, Darcy B.

    2014-01-01

    For most frogs, advertisement calls are essential for reproductive success, conveying information on species identity, male quality, sexual state and location. While the evolutionary divergence of call characters has been examined in a number of species, the relative impacts of genetic drift or natural and sexual selection remain unclear. Insights into the evolutionary trajectory of vocal signals can be gained by examining how advertisement calls vary in a phylogenetic context. Evolution by genetic drift would be supported if more closely related species express more similar songs. Conversely, a poor correlation between evolutionary history and song expression would suggest evolution shaped by natural or sexual selection. Here, we measure seven song characters in 20 described and two undescribed species of African clawed frogs (genera Xenopus and Silurana) and four populations of X. laevis. We identify three call types — click, burst and trill — that can be distinguished by click number, call rate and intensity modulation. A fourth type is biphasic, consisting of two of the above. Call types vary in complexity from the simplest, a click, to the most complex, a biphasic call. Maximum parsimony analysis of variation in call type suggests that the ancestral type was of intermediate complexity. Each call type evolved independently more than once and call type is typically not shared by closely related species. These results indicate that call type is homoplasious and has low phylogenetic signal. We conclude that the evolution of call type is not due to genetic drift, but is under selective pressure. PMID:24723737

  15. Step-by-Step Visual Manuals: Design and Development

    Science.gov (United States)

    Urata, Toshiyuki

    2004-01-01

    The types of handouts and manuals that are used in technology training vary. Some describe procedures in a narrative way without graphics; some employ step-by-step instructions with screen captures. According to Thirlway (1994), a training manual should be like a tutor that permits a student to learn at his own pace and gives him confidence for…

  16. Performance of an attention-demanding task during treadmill walking shifts the noise qualities of step-to-step variation in step width.

    Science.gov (United States)

    Grabiner, Mark D; Marone, Jane R; Wyatt, Marilynn; Sessoms, Pinata; Kaufman, Kenton R

    2018-06-01

    The fractal scaling evident in the step-to-step fluctuations of stepping-related time series reflects, to some degree, neuromotor noise. The primary purpose of this study was to determine the extent to which the fractal scaling of step width, step width, and step width variability are affected by performance of an attention-demanding task. We hypothesized that the attention-demanding task would shift the structure of the step width time series toward white, uncorrelated noise. Subjects performed two 10-min treadmill walking trials, a control trial of undisturbed walking and a trial during which they performed a mental arithmetic/texting task. Motion capture data were converted to step width time series, the fractal scaling of which was determined from their power spectra. Fractal scaling decreased by 22% during the texting condition (p < …). Step width and step width variability increased by 19% and 5%, respectively (p < …) … step width fractal scaling. The change of the fractal scaling of step width is consistent with increased cognitive demand and suggests a transition in the characteristics of the signal noise. This may reflect an important advance toward the understanding of the manner in which neuromotor noise contributes to some types of falls. However, further investigation of the repeatability of the results, the sensitivity of the results to progressive increases in cognitive load imposed by attention-demanding tasks, and the extent to which the results can be generalized to the gait of older adults seems warranted. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Faster algorithms for RNA-folding using the Four-Russians method.

    Science.gov (United States)

    Venkatachalam, Balaji; Gusfield, Dan; Frid, Yelena

    2014-03-06

    The secondary structure that maximizes the number of non-crossing matchings between complementary bases of an RNA sequence of length n can be computed in O(n³) time using Nussinov's dynamic programming algorithm. The Four-Russians method is a technique that reduces the running time of certain dynamic programming algorithms by a multiplicative factor after a preprocessing step in which solutions to all smaller subproblems of a fixed size are exhaustively enumerated and solved. Frid and Gusfield designed an O(n³/log n) algorithm for RNA folding using the Four-Russians technique. In their algorithm the preprocessing is interleaved with the algorithm computation. We simplify the algorithm and the analysis by doing the preprocessing once prior to the algorithm computation. We call this the two-vector method. We also show variants where, instead of exhaustive preprocessing, we only solve the subproblems encountered in the main algorithm once and memoize the results. We give a simple proof of correctness and explore the practical advantages over the earlier method. The Nussinov algorithm admits an O(n²) time parallel algorithm. We show a parallel algorithm using the two-vector idea that improves the time bound to O(n²/log n). We have implemented the parallel algorithm on graphics processing units using the CUDA platform. We discuss the organization of the data structures to exploit coalesced memory access for fast running times. The ideas used to organize the data structures also help in improving the running time of the serial algorithms. For sequences of length up to 6000 bases the parallel algorithm takes only about 2.5 seconds and the two-vector serial method takes about 57 seconds on a desktop and 15 seconds on a server. Among the serial algorithms, the two-vector and memoized versions are faster than the Frid-Gusfield algorithm by a factor of 3, and are faster than Nussinov by up to a factor of 20. The source code for the algorithms is available at http://github.com/ijalabv/FourRussiansRNAFolding.
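
    For reference, a minimal implementation of the baseline O(n³) Nussinov recurrence (without the Four-Russians speedup; the min_loop hairpin constraint is illustrative):

```python
def nussinov(seq: str, min_loop: int = 3) -> int:
    """Maximum number of non-crossing complementary base pairs (Nussinov DP).
    O(n^3) time, O(n^2) space; min_loop enforces a minimal hairpin length."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
             ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                     # j left unpaired
            for k in range(i, j - min_loop):        # j paired with k
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov("GGGAAAUCC"))  # 3 non-crossing pairs for this toy sequence
```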

  18. Effect of One-Step and Multi-Steps Polishing System on Enamel Roughness

    Directory of Open Access Journals (Sweden)

    Cynthia Sumali

    2013-07-01

    Full Text Available The final procedures of orthodontic treatment are bracket debonding and cleaning of the remaining adhesive. The multi-step polishing system is the most commonly used method; its disadvantage is a long working time, because of the number of stages involved. Dental material manufacturers have therefore improved the system by reducing several stages to a single one. This new system is known as the one-step polishing system. Objective: To compare the effect of one-step and multi-step polishing systems on enamel roughness after orthodontic bracket debonding. Methods: A randomized controlled trial was conducted on twenty-eight maxillary premolars randomized into two polishing systems: one-step OptraPol (Ivoclar, Vivadent) and multi-step AstroPol (Ivoclar, Vivadent). After bracket debonding, the remaining adhesive in each group was cleaned with the respective polishing system for ninety seconds using a low-speed handpiece. Enamel roughness was measured with a profilometer, registering two roughness parameters (Ra, Rz). An independent t-test was used to analyze the mean enamel roughness in each group. Results: There was no significant difference in enamel roughness between the one-step and multi-step polishing systems (p > 0.005). Conclusion: The one-step polishing system can produce enamel roughness similar to the multi-step system after bracket debonding and adhesive cleaning. DOI: 10.14693/jdi.v19i3.136

  19. 29 CFR 785.17 - On-call time.

    Science.gov (United States)

    2010-07-01

    ... On-call time. An employee who is required to remain on call on the employer's premises or so close... employee who is not required to remain on the employer's premises but is merely required to leave word at his home or with company officials where he may be reached is not working while on call. (Armour & Co...

  20. CALLING AQUARIUM LOVERS...

    CERN Multimedia

    2002-01-01

    CERN's anemones will soon be orphans. We are looking for someone willing to look after the aquarium in the main building, for one year. If you are interested, or if you would like more information, please call 73830. (The anemones living in the aquarium thank you in anticipation.)

  1. Pre-processing of input files for the AZTRAN code; Pre procesamiento de archivos de entrada para el codigo AZTRAN

    Energy Technology Data Exchange (ETDEWEB)

    Vargas E, S. [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Ibarra, G., E-mail: samuel.vargas@inin.gob.mx [IPN, Av. Instituto Politecnico Nacional s/n, 07738 Ciudad de Mexico (Mexico)

    2017-09-15

    The AZTRAN code began to be developed in the Nuclear Engineering Department of the Escuela Superior de Fisica y Matematicas (ESFM) of the Instituto Politecnico Nacional (IPN) with the purpose of numerically solving various models arising from the physics and engineering of nuclear reactors. The code is still under development and is part of the AZTLAN platform: Development of a Mexican platform for the analysis and design of nuclear reactors. Because it is complex to generate an input file for the code, a script based on the D language was developed to make preparing input files easier. The script is built around a new input file format with specific cards divided into two blocks, mandatory cards and optional cards. It includes pre-processing of the input file to identify possible errors within it, as well as an image generator for the specific problem based on the Python interpreter. (Author)
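
    The actual AZTRAN card set is not given in the record; as a minimal sketch of the kind of mandatory/optional card pre-processing described (all card names entirely hypothetical), an input-file checker could look like:

```python
# Hypothetical card layout: mandatory and optional blocks, as described above.
MANDATORY = {"GEOMETRY", "MATERIALS", "SOURCE"}
OPTIONAL = {"OUTPUT", "PLOT"}

def preprocess(lines):
    """Collect card names and flag unknown or missing mandatory cards."""
    seen = {ln.split()[0].upper()
            for ln in lines if ln.strip() and not ln.startswith("#")}
    unknown = seen - MANDATORY - OPTIONAL
    missing = MANDATORY - seen
    for name in sorted(unknown):
        print(f"error: unknown card '{name}'")
    for name in sorted(missing):
        print(f"error: missing mandatory card '{name}'")
    return not unknown and not missing

print(preprocess(["GEOMETRY slab 10", "MATERIALS fuel", "PLOT flux"]))
```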

  2. Calling 911! What role does the pediatrician play?

    Science.gov (United States)

    Grossman, Devin; Kunkov, Sergey; Kaplan, Carl; Crain, Ellen F

    2013-06-01

    The objective of this study was to compare admission rates and medical interventions among children whose caregivers called their child's primary care provider (PCP) before taking an ambulance to the pediatric emergency department (PED) versus those who did not. This was a prospective cohort study of patients brought to an urban, public hospital PED via the emergency medical system (EMS). Children were included if the caregiver called 911 to have them transported via EMS and was present in the PED. The main variable was whether the child's PCP was called before EMS utilization. Study outcomes were medical interventions, such as intravenous line insertion or laboratory tests, and hospital admission. The χ² test and logistic regression were used to evaluate the relationship of the main variable to the study outcomes. Six hundred fourteen patients met inclusion criteria and were enrolled. Five hundred eighty-five patients (95.3%) were reported to have a PCP. Seventy-four caregivers (12.1%) called their child's PCP before calling EMS. Two hundred seventy-seven patients (45.1%) had medical interventions performed; of these, 42 (15.2%) called their PCP (P = 0.03). Forty-two patients (6.8%) were admitted; among these, 14 (33.3%) called their PCP (P < 0.01). Adjusting for triage level, patients whose caregiver called the PCP before calling EMS were 3.2 times (95% confidence interval, 1.9-5.2 times) more likely to be admitted and 1.7 times (95% confidence interval, 1.1-2.9 times) more likely to have a medical intervention compared with patients whose caregivers did not call their child's PCP. Children were more likely to be admitted or require a medical intervention if their caregiver called their PCP before calling EMS. The availability of a PCP for telephone triage may help to optimize EMS utilization.

  3. Microsoft® Visual Basic® 2010 Step by Step

    CERN Document Server

    Halvorson, Michael

    2010-01-01

    Your hands-on, step-by-step guide to learning Visual Basic® 2010. Teach yourself the essential tools and techniques for Visual Basic® 2010-one step at a time. No matter what your skill level, you'll find the practical guidance and examples you need to start building professional applications for Windows® and the Web. Discover how to: work in the Microsoft® Visual Studio® 2010 Integrated Development Environment (IDE); master essential techniques, from managing data and variables to using inheritance and dialog boxes; create professional-looking UIs; add visual effects and print support; build com

  4. Using Aspen plus in thermodynamics instruction a step-by-step guide

    CERN Document Server

    Sandler, Stanley I

    2015-01-01

    A step-by-step guide for students (and faculty) on the use of Aspen in teaching thermodynamics Used for a wide variety of important engineering tasks, Aspen Plus software is a modeling tool used for conceptual design, optimization, and performance monitoring of chemical processes. After more than twenty years, it remains one of the most popular and powerful chemical engineering simulation programs used both industrially and academically. Using Aspen Plus in Thermodynamics Instruction: A Step by Step Guide introduces the reader to the use of Aspen Plus in courses in thermodynamics. It prov

  5. QRS Detection Based on Improved Adaptive Threshold

    Directory of Open Access Journals (Sweden)

    Xuanyu Lu

    2018-01-01

    Full Text Available Cardiovascular disease is the leading cause of death around the world. Automatic electrocardiogram (ECG) analysis algorithms play an important role in quick and accurate diagnosis, and their first step is QRS detection. The threshold algorithm for QRS complex detection is known for its high-speed computation and minimal memory storage. In the mobile era, threshold algorithms can easily be ported to portable, wearable, and wireless ECG systems. However, the detection rate of the threshold algorithm still calls for improvement. An improved adaptive threshold algorithm for QRS detection is reported in this paper. The main steps of this algorithm are preprocessing, peak finding, and adaptive-threshold QRS detection. The detection rate is 99.41%, the sensitivity (Se) is 99.72%, and the specificity (Sp) is 99.69% on the MIT-BIH Arrhythmia database. A comparison is also made with two other algorithms to demonstrate its superiority. The suspicious abnormal area is flagged at the end of the algorithm and an RR-Lorenz plot is drawn for doctors and cardiologists to use as an aid in diagnosis.
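
    The record lists the steps but not the formulas; a toy Pan-Tompkins-style sketch of adaptive-threshold QRS detection (window lengths and adaptation constants illustrative, not the paper's values):

```python
import numpy as np

def detect_qrs(ecg, fs):
    """Toy adaptive-threshold QRS detector: differencing to emphasize steep
    QRS slopes, squaring, moving-window integration, running threshold."""
    diff = np.diff(ecg)
    energy = diff ** 2
    win = int(0.15 * fs)                    # ~150 ms integration window
    mwi = np.convolve(energy, np.ones(win) / win, mode="same")

    threshold = 0.5 * mwi[: int(2 * fs)].max()  # initialize from first 2 s
    refractory = int(0.2 * fs)                  # 200 ms lockout between beats
    peaks, last = [], -refractory
    for i in range(1, len(mwi) - 1):
        if mwi[i] > threshold and mwi[i - 1] <= mwi[i] > mwi[i + 1]:
            if i - last >= refractory:
                peaks.append(i)
                last = i
                # Adapt: drift the threshold toward a fraction of this peak.
                threshold = 0.125 * mwi[i] + 0.875 * threshold
    return peaks
```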

  6. Sharing programming resources between Bio* projects through remote procedure call and native call stack strategies

    DEFF Research Database (Denmark)

    Prins, Pjotr; Goto, Naohisa; Yates, Andrew

    2012-01-01

    Open-source software (OSS) encourages computer programmers to reuse software components written by others. In evolutionary bioinformatics, OSS comes in a broad range of programming languages, including C/C++, Perl, Python, Ruby, Java, and R. To avoid writing the same functionality multiple times...... for different languages, it is possible to share components by bridging computer languages and Bio* projects, such as BioPerl, Biopython, BioRuby, BioJava, and R/Bioconductor. In this chapter, we compare the two principal approaches for sharing software between different programming languages: either by remote...... procedure call (RPC) or by sharing a local call stack. RPC provides a language-independent protocol over a network interface; examples are RSOAP and Rserve. The local call stack provides a between-language mapping not over the network interface, but directly in computer memory; examples are R bindings, RPy...

  7. Sensitive and specific peak detection for SELDI-TOF mass spectrometry using a wavelet/neural-network based approach.

    Directory of Open Access Journals (Sweden)

    Vincent A Emanuele

    Full Text Available The SELDI-TOF mass spectrometer's compact size and automated, high-throughput design have been attractive to clinical researchers, and the platform has seen steady use in biomarker studies. Despite new algorithms and preprocessing pipelines that have been developed to address reproducibility issues, visual inspection of the results of SELDI spectra preprocessing by the best algorithms still shows miscalled peaks and systematic sources of error. This suggests that there continue to be problems with SELDI preprocessing. In this work, we study the preprocessing of SELDI spectra in detail and introduce improvements. While many algorithms, including the vendor-supplied software, can identify peak clusters of specific mass (or m/z) in groups of spectra with high specificity and a low false discovery rate (FDR), the algorithms tend to underperform when estimating the exact prevalence and intensity of peaks in those clusters. Thus group differences that at first appear very strong are shown, after careful and laborious hand inspection of the spectra, to be less than significant. Here we introduce a wavelet/neural-network based algorithm which mimics what a team of expert human users would call as peaks in each of several hundred spectra in a typical SELDI clinical study. The wavelet-denoising part of the algorithm optimally smoothes the signal in each spectrum according to an improved suite of signal processing algorithms previously reported (the LibSELDI toolbox, under development). The neural-network part of the algorithm combines those results with the raw signal and a training dataset of expertly called peaks, to call peaks in a test set of spectra with approximately 95% accuracy. The new method was applied to data collected from a study of cervical mucus for the early detection of cervical cancer in HPV-infected women. The method shows promise in addressing the ongoing SELDI reproducibility issues.
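
    The LibSELDI internals are not given in the record; as a hedged sketch of the wavelet-denoising stage only (wavelet choice and threshold rule are common defaults, not necessarily the authors'):

```python
import numpy as np
import pywt

def wavelet_denoise(spectrum, wavelet="sym8", level=6):
    """Soft-threshold wavelet denoising of a 1-D spectrum (universal threshold)."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    # Estimate noise sigma from the finest detail coefficients (MAD rule).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(spectrum)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(spectrum)]

# Toy spectrum: two Gaussian peaks plus additive noise.
x = np.linspace(0, 1, 1024)
clean = np.exp(-((x - 0.3) / 0.01) ** 2) + 0.6 * np.exp(-((x - 0.7) / 0.02) ** 2)
noisy = clean + 0.05 * np.random.randn(x.size)
print("mean absolute error:", np.abs(wavelet_denoise(noisy) - clean).mean())
```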

  8. Indico CONFERENCE: Define the Call for Abstracts

    CERN Multimedia

    CERN. Geneva; Ferreira, Pedro

    2017-01-01

    In this tutorial, you will learn how to define and open a call for abstracts. When defining a call for abstracts, you will be able to define settings related to the type of questions asked during a review of an abstract, select the users who will review the abstracts, decide when to open the call for abstracts, and more.

  9. Call for Research

    International Development Research Centre (IDRC) Digital Library (Canada)

    Marie-Isabelle Beyer

    2014-10-03

    Oct 3, 2014 ... 5. Submission process. 6. Eligibility criteria. 7. Selection process. 8. Format and requirements. 9. Evaluation criteria. 10. Country clearance requirements. 11. ... It is envisaged that through this call a single consortium will undertake 6-8 projects within a total budget of up to ... principle qualify for IDRC's support.

  10. MANAGING THE INTERACTION OF RESOURCE DISTRIBUTION IN PROJECT MANAGEMENT OF IMPLEMENTATION AND FUNCTIONING OF EMERGENCY CALL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Дмитро Сергійович КОБИЛКІН

    2016-02-01

    Full Text Available We propose using a mobile module "Resources manager" and its component model-scheme for managing the distribution of resources in project management of the implementation and functioning of System 112 in Ukraine. The formalized tasks of executing the management processes of the model-scheme at all stages of the project life cycle are described. A model-scheme of the interaction between the blocks of the mobile resource-management module in the System 112 project is also developed. It describes the step-by-step interaction of the project management blocks with the project data for successful project implementation and for obtaining the project product, pointing out the environmental impact of the project on each of the project blocks. Conclusions are drawn about the expediency and efficiency of implementing the model-scheme in the management of emergency call systems with a single number.

  11. The function of migratory bird calls

    DEFF Research Database (Denmark)

    Reichl, Thomas; Andersen, Bent Bach; Larsen, Ole Næsbye

    The function of migratory bird calls: do they influence orientation and navigation?   Thomas Reichl1, Bent Bach Andersen2, Ole Naesbye Larsen2, Henrik Mouritsen1   1Institute of Biology, University of Oldenburg, Oldenburg, D-26111 Oldenburg, Germany 2Institute of Biology, University of Southern... migration and to stimulate migratory restlessness in conspecifics. We wished to test if conspecific flight calls influence the flight direction of a nocturnal migrant, the European Robin (Erithacus rubecula), i.e. if flight calls help migrants keep course. Wild-caught birds showing migratory restlessness... the experimental bird could be activated successively to simulate a migrating Robin cruising E-W, W-E, S-N or N-S at a chosen height (mostly about 40 m), at 10 m/s and emitting Robin flight calls of 80 dB(A) at 1 m. The simulated flight of a "ding" sound served as a control. During an experiment the bird was first...

  12. An empirical analysis of the corporate call decision

    International Nuclear Information System (INIS)

    Carlson, M.D.

    1998-01-01

    An economic study of the behaviour of financial managers of utility companies was presented. The study examined whether an option-pricing based model of the call decision does a better job of explaining callable preferred share prices and call decisions than other models. In this study, the Rust (1987) empirical technique was extended to use information from preferred share prices in addition to the call decisions. Reasonable estimates of the transaction costs associated with a call were obtained from share data of the Pacific Gas and Electric Company (PGE). It was concluded that the managers of PGE clearly take into account the value of the option to delay the call when making their call decisions.

  13. Perceived Calling and Work Engagement Among Nurses.

    Science.gov (United States)

    Ziedelis, Arunas

    2018-03-01

    The purpose of this study was to explore the relationship between perceived calling and work engagement in nursing, over and above major work environment factors. In all, 351 nurses from various health care institutions completed the survey. Data were collected about the most demanding aspects of nursing, major job resources, the degree to which nursing is perceived as a meaningful calling, work engagement, and main demographic information. Hierarchical linear regression was applied to assess the relation between perceived calling and work engagement while controlling for demographic and work environment factors; perceived calling was significantly related to two out of three components of nurses' work engagement. The highest association was found with the dedication component, while the vigor component was not significantly related. The results show that a perceived calling might motivate nurses to engage in their work even in a burdensome environment, although the possible implications for the occupational well-being of nurses themselves remain unclear.

  14. On-call work and health: a review

    Directory of Open Access Journals (Sweden)

    Botterill Jackie S

    2004-12-01

    Full Text Available Abstract Many professions in the fields of engineering, aviation and medicine employ this form of scheduling. However, on-call work has received significantly less research attention than other work patterns such as shift work and overtime hours. This paper reviews the current body of peer-reviewed, published research on the health effects of on-call work. Studies of the health effects of on-call work are limited to mental health, job stress, sleep disturbances and personal safety. The reviewed research suggests that on-call work scheduling can pose a risk to health, although there are critical gaps in the literature.

  15. Systems configured to distribute a telephone call, communication systems, communication methods and methods of routing a telephone call to a service representative

    Science.gov (United States)

    Harris, Scott H.; Johnson, Joel A.; Neiswanger, Jeffery R.; Twitchell, Kevin E.

    2004-03-09

    The present invention includes systems configured to distribute a telephone call, communication systems, communication methods and methods of routing a telephone call to a customer service representative. In one embodiment of the invention, a system configured to distribute a telephone call within a network includes a distributor adapted to connect with a telephone system, the distributor being configured to connect a telephone call using the telephone system and output the telephone call and associated data of the telephone call; and a plurality of customer service representative terminals connected with the distributor, a selected customer service representative terminal being configured to receive the telephone call and the associated data, and the distributor and the selected customer service representative terminal being configured to synchronize application of the telephone call and associated data from the distributor to the selected customer service representative terminal.

  16. Two-step photoionization of hydrogen atoms in interplanetary space

    International Nuclear Information System (INIS)

    Gruntman, M.A.

    1990-01-01

    Photoionization is one of the key processes which determine the properties of fluxes of neutral atoms in interplanetary space. A new two-step channel (called indirect) of photoionization of hydrogen atoms is proposed. Hydrogen atoms are at first excited to states with principal quantum number n > 2, then decay to the metastable H(2S) state, from which they can be photoionized. Competing processes due to the interaction with solar wind plasma and solar radiation are considered and the photoionization rate through the proposed indirect channel is calculated. This rate depends on distance from the Sun as ∝ 1/R⁴ at large distances (R > 1-2 a.u.) and as ∝ 1/R² at close approaches, where it is higher than the rate of direct photoionization. (author)
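
    A hedged sketch of the scaling argument implied by the abstract (an illustration consistent with the two quoted regimes, not the paper's derivation): both the excitation step and the photoionization of the metastable state are driven by solar photon fluxes that fall off as 1/R², so with a quasi-steady H(2S) population the indirect rate behaves as

```latex
% illustrative scaling only, assuming a quasi-steady H(2S) population
\nu_{\mathrm{ind}}(R) \;\propto\;
  \frac{\nu_{\mathrm{exc}}(R)\,\nu_{\mathrm{ion}}(R)}{\nu_{\mathrm{loss}}(R)},
\qquad
\nu_{\mathrm{exc}},\ \nu_{\mathrm{ion}} \;\propto\; \frac{1}{R^{2}}
```

    If the H(2S) losses are dominated by R-independent radiative decay, the product gives ν_ind ∝ 1/R⁴; if the dominant loss channel itself scales as 1/R² (e.g., quenching by solar radiation or solar-wind collisions close to the Sun), one factor cancels and ν_ind ∝ 1/R², matching the two regimes quoted above.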

  17. No Call for Action? Why There Is No Union (Yet in Philippine Call Centers

    Directory of Open Access Journals (Sweden)

    Niklas Reese

    2013-01-01

    Full Text Available This contribution presents findings from a qualitative study which focused on young urban professionals in the Philippines who work(ed) in international call centers – workplaces usually characterized by job insecurity and other forms of precarity, factory-like working conditions, and disembeddedness. Nevertheless, trade unions in these centers have not come into existence. Why collective action is not chosen by call center agents as an option to tackle the above-mentioned problems – this is what the research project this article is based on tried to understand. After outlining some work-related problems identified by Filipino call center agents, the article focuses on the strategies the agents employ to counter these problems (mainly accommodation and everyday resistance). By highlighting five objective and five subjective reasons (or reasons by circumstances and reasons by framing), we conclude that it is not repressive regulation policies, but rather the formative power and the internalization of discourses of rule within individual life strategies that are preventing the establishment of unions and other collective action structures.

  18. The NIST Step Class Library (Step Into the Future)

    Science.gov (United States)

    1990-09-01

    [The indexed record contains only OCR fragments of the report: a figure caption ("Figure 6. Excerpt from a STEP exchange file based on the Geometry model"), a running header ("The NIST STEP Class Library, Page 13"), and reference-list entries (Scheifler, R., Gettys, J., and Newman, P., X Window System: C Library and Protocol Reference, Digital Press, Bedford, Mass., 1988; Schenck, D.).]

  19. Valve cam design using numerical step-by-step method

    OpenAIRE

    Vasilyev, Aleksandr; Bakhracheva, Yuliya; Kabore, Ousman; Zelenskiy, Yuriy

    2014-01-01

    This article studies the numerical step-by-step method of cam profile design. The results of the study are used for designing the internal combustion engine valve gear. The method makes it possible to design cam profiles of peak efficiency while respecting the many constraints connected with valve gear serviceability and reliability.

  20. Reducing juvenile delinquency with automated cell phone calls.

    Science.gov (United States)

    Burraston, Bert O; Bahr, Stephen J; Cherrington, David J

    2014-05-01

    Using a sample of 70 juvenile probationers (39 treatment and 31 controls), we evaluated the effectiveness of a rehabilitation program that combined cognitive-behavioral training and automated phone calls. The cognitive-behavioral training contained six 90-min sessions, one per week, and the phone calls occurred twice per day for the year following treatment. Recidivism was measured by whether they were rearrested and the total number of rearrests during the 1st year. To test the impact of the phone calls, those who received phone calls were divided into high and low groups depending on whether they answered more or less than half of their phone calls. Those who completed the class and answered at least half of their phone calls were less likely to have been arrested and had fewer total arrests.

  1. Preprocessing of gravity gradients at the GOCE high-level processing facility

    Science.gov (United States)

    Bouman, Johannes; Rispens, Sietse; Gruber, Thomas; Koop, Radboud; Schrama, Ernst; Visser, Pieter; Tscherning, Carl Christian; Veicherts, Martin

    2009-07-01

    One of the products derived from the gravity field and steady-state ocean circulation explorer (GOCE) observations are the gravity gradients. These gravity gradients are provided in the gradiometer reference frame (GRF) and are calibrated in-flight using satellite shaking and star sensor data. To use these gravity gradients for application in Earth sciences and gravity field analysis, additional preprocessing needs to be done, including corrections for temporal gravity field signals to isolate the static gravity field part, screening for outliers, calibration by comparison with existing external gravity field information and error assessment. The temporal gravity gradient corrections consist of tidal and nontidal corrections. These are all generally below the gravity gradient error level, which is predicted to show a 1/f behaviour for low frequencies. In the outlier detection, the 1/f error is compensated for by subtracting a local median from the data, while the data error is assessed using the median absolute deviation. The local median acts as a high-pass filter and it is robust, as is the median absolute deviation. Three different methods have been implemented for the calibration of the gravity gradients. All three methods use a high-pass filter to compensate for the 1/f gravity gradient error. The baseline method uses state-of-the-art global gravity field models, and the most accurate results are obtained if star sensor misalignments are estimated along with the calibration parameters. A second calibration method uses GOCE GPS data to estimate a low-degree gravity field model as well as gravity gradient scale factors. Both methods allow estimation of gravity gradient scale factors down to the 10⁻³ level. The third calibration method uses highly accurate terrestrial gravity data in selected regions to validate the gravity gradient scale factors, focussing on the measurement band. Gravity gradient scale factors may be estimated down to the 10⁻² level with this method.
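
    As a concrete illustration of the outlier-screening step described above, a minimal sketch (not the operational GOCE processor; the window length and threshold factor are assumptions):

```python
import numpy as np

def screen_outliers(gg, window=101, k=4.0):
    """Robustly flag gravity-gradient outliers.

    A running local median acts as a high-pass filter that absorbs the
    1/f error, and the median absolute deviation (MAD) of the residuals
    provides a robust error estimate.
    """
    half = window // 2
    padded = np.pad(gg, half, mode="edge")
    # local median at each epoch (simple O(n*window) sliding window)
    med = np.array([np.median(padded[i:i + window]) for i in range(len(gg))])
    resid = gg - med                      # high-pass-filtered residuals
    mad = np.median(np.abs(resid - np.median(resid)))
    sigma = 1.4826 * mad                  # MAD -> Gaussian-equivalent sigma
    return np.abs(resid) > k * sigma      # boolean outlier mask

# usage: mask = screen_outliers(vzz_series); clean = vzz_series[~mask]
```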

  2. Direct aperture optimization: A turnkey solution for step-and-shoot IMRT

    International Nuclear Information System (INIS)

    Shepard, D.M.; Earl, M.A.; Li, X.A.; Naqvi, S.; Yu, C.

    2002-01-01

    IMRT treatment plans for step-and-shoot delivery have traditionally been produced through the optimization of intensity distributions (or maps) for each beam angle. The optimization step is followed by the application of a leaf-sequencing algorithm that translates each intensity map into a set of deliverable aperture shapes. In this article, we introduce an automated planning system in which we bypass the traditional intensity optimization and instead directly optimize the shapes and the weights of the apertures. We call this approach 'direct aperture optimization'. This technique allows the user to specify the maximum number of apertures per beam direction, and hence provides significant control over the complexity of the treatment delivery. This is possible because the machine-dependent delivery constraints imposed by the MLC are enforced within the aperture optimization algorithm rather than in a separate leaf-sequencing step. The leaf settings and the aperture intensities are optimized simultaneously using a simulated annealing algorithm. We have tested direct aperture optimization on a variety of patient cases using the EGS4/BEAM Monte Carlo package as our dose calculation engine. The results demonstrate that direct aperture optimization can produce highly conformal step-and-shoot treatment plans using only three to five apertures per beam direction. As compared with traditional optimization strategies, our studies demonstrate that direct aperture optimization can result in a significant reduction in both the number of beam segments and the number of monitor units. Direct aperture optimization therefore produces highly efficient treatment deliveries that maintain the full dosimetric benefits of IMRT.
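
    For intuition, a minimal sketch of the simulated-annealing loop at the heart of such an approach, reduced to optimizing only the aperture weights against a quadratic dose objective (the dose matrix, neighbour move, cooling schedule, and objective are illustrative assumptions; the published method also perturbs leaf positions under MLC constraints):

```python
import numpy as np

rng = np.random.default_rng(0)

def anneal_weights(D, d_target, w0, T0=1.0, cooling=0.999, iters=20000):
    """Simulated annealing over aperture weights.

    D        : (voxels x apertures) dose per unit weight of each aperture
    d_target : prescribed dose per voxel
    w0       : initial non-negative aperture weights
    """
    w = w0.copy()
    cost = np.sum((D @ w - d_target) ** 2)
    T = T0
    for _ in range(iters):
        trial = w.copy()
        j = rng.integers(len(w))
        trial[j] = max(0.0, trial[j] + rng.normal(scale=0.1))  # keep feasible
        trial_cost = np.sum((D @ trial - d_target) ** 2)
        # accept improvements always, uphill moves with Boltzmann probability
        if trial_cost < cost or rng.random() < np.exp((cost - trial_cost) / T):
            w, cost = trial, trial_cost
        T *= cooling                       # geometric cooling schedule
    return w, cost
```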

  3. "You're being paged!" outcomes of a nursing home on-call role-playing and longitudinal curriculum.

    Science.gov (United States)

    Yuasa, Misuzu; Bell, Christina L; Inaba, Michiko; Tamura, Bruce K; Ahsan, Samina; Saunders, Valisa; Masaki, Kamal

    2013-11-01

    Effectively handling telephone calls about nursing home (NH) residents is an important skill for healthcare professionals, but little formal training is typically provided. The objective of the current study was to describe and evaluate the effectiveness of a novel structured role-playing didactic session followed by an on-call NH longitudinal clinical experience. The effectiveness of the structured role-playing didactic session was compared in different learners, including geriatric medicine fellows (n = 10), family medicine residents and faculty (n = 14), nurse practitioner students (n = 31), and other learners (n = 7). The curriculum focused on common problems encountered while caring for NH residents during on-call periods. Learners rated themselves using an 18-item pre/post questionnaire including five attitude and 13 skills questions, using a 1-to-5 Likert scale. T-tests were used to compare means before and after sessions. Significant improvements were found in overall mean attitudes and skills scores. For all learners, the greatest improvements were seen in "comfort in managing residents at the NH," "managing feeding or gastrostomy tube dislodgement," "identifying different availability of medications, laboratory studies, and procedures in NH," and "describing steps to send NH residents to the emergency department." Geriatric medicine fellows' attitudes and skills improved significantly after the longitudinal clinical experience. The faculty survey demonstrated improved documentation, communication, and fellows' management of on-call problems after curriculum implementation. This novel curriculum used role-playing to provide training for on-call management of NH residents. This curriculum has been successfully disseminated on a national geriatrics educational resource website (POGOe) and is applicable to geriatric medicine fellowships, internal medicine and family medicine residency programs, and other training programs.

  4. Effectiveness of the Call in Beach Volleyball Attacking Play

    Directory of Open Access Journals (Sweden)

    Künzell Stefan

    2014-12-01

    Full Text Available In beach volleyball the setter has the opportunity to give her or his hitter a "call". The call intends that the setter suggests to her or his partner where to place the attack in the opponent's court. The effectiveness of a call is still unknown. We investigated the women's and men's Swiss National Beach Volleyball Championships in 2011 and analyzed 2185 attacks. We found large differences between female and male players. While men called in only 38.4% of attacks, women used calls in 85.5% of attacks. If the male players followed a given call, 63% of the attacks were successful. The success rate of attacks without any call was 55.8% and 47.6% when the call was ignored. These differences were not significant (χ²(2) = 4.55, p = 0.103). In women's beach volleyball, the rate of successful attacks was 61.5% when a call was followed, 35% for attacks without a call, and 42.6% when a call was ignored. The differences were highly significant (χ²(2) = 23.42, p < 0.0005). Taking into account the findings of the present study, we suggest that the call was effective in women's beach volleyball, while its effect in the men's game was unclear. Considering the quality of calls, we indicate that there is significant potential to increase the effectiveness of a call.

  5. Call for volunteers

    CERN Document Server

    2008-01-01

    CERN is calling for volunteers from all members of the Laboratory's personnel to help with the organisation of these two exceptional Open Days, for the visits of CERN personnel and their families on the Saturday and above all for the major public Open Day on the Sunday. As for the 50th anniversary in 2004, the success of the Open Days will depend on a large number of volunteers. All those working for CERN as well as retired members of the personnel can contribute to making this event a success. Many guides will be needed at the LHC points, for the activities at the surface and to man the reception and information points. The aim of these major Open Days is to give the local populations the opportunity to discover the fruits of almost 20 years of work carried out at CERN. We are hoping for some 2000 volunteers for the two Open Days, on the Saturday from 9 a.m. to ...

  6. Leading Change Step-by-Step: Tactics, Tools, and Tales

    Science.gov (United States)

    Spiro, Jody

    2010-01-01

    "Leading Change Step-by-Step" offers a comprehensive and tactical guide for change leaders. Spiro's approach has been field-tested for more than a decade and proven effective in a wide variety of public sector organizations including K-12 schools, universities, international agencies and non-profits. The book is filled with proven tactics for…

  7. Perpetual Cancellable American Call Option

    OpenAIRE

    Emmerling, Thomas J.

    2010-01-01

    This paper examines the valuation of a generalized American-style option known as a Game-style call option in an infinite time horizon setting. The specifications of this contract allow the writer to terminate the call option at any point in time for a fixed penalty amount paid directly to the holder. Valuation of a perpetual Game-style put option was addressed by Kyprianou (2004) in a Black-Scholes setting on a non-dividend paying asset. Here, we undertake a similar analysis for the perpetua...

  8. Leveraging management information in improving call centre productivity

    Directory of Open Access Journals (Sweden)

    Manthisana Mosese

    2016-04-01

    Objectives: This research explored the use of management information and its impact on two fundamental functions, namely improving productivity without compromising the quality of service, in the call centre of a well-known South African fashion retailer, Edcon. Following the implementation of the call centre technology project, the research set out to determine how Edcon can transform their call centre to improve productivity and customer service through effective utilisation of their management information. Method: Internal documents and reports were analysed to provide the basis of evaluation between the measures of productivity prior to and following the implementation of a technology project at Edcon's call centre. Semi-structured in-depth and group interviews were conducted to establish the importance and use of management information in improving productivity and customer service. Results: The results indicated that the availability of management information has indeed contributed to improved efficiency at the Edcon call centre. Although literature claims that there is a correlation between a call centre technology upgrade and improvement in performance, evident in the return on investment being realised within a year or two of implementation, it fell beyond the scope of this study to investigate the return on investment for Edcon's call centre. Conclusion: Although Edcon has begun realising benefits in improved productivity in their call centre from their available management information, information will continue to play a crucial role in supporting management with informed decisions that will improve the call centre operations.

  9. A two-step method for rapid characterization of electroosmotic flows in capillary electrophoresis.

    Science.gov (United States)

    Zhang, Wenjing; He, Muyi; Yuan, Tao; Xu, Wei

    2017-12-01

    The measurement of electroosmotic flow (EOF) is important in a capillary electrophoresis (CE) experiment in terms of performance optimization and stability improvement. Although several methods exist, there are demanding needs to accurately characterize ultra-low electroosmotic flow rates (EOF rates), such as in the coated capillaries used in protein separations. In this work, a new method, called the two-step method, was developed to accurately and rapidly measure EOF rates in a capillary, especially ultra-low EOF rates in coated capillaries. In this two-step method, the EOF rates were calculated by measuring the migration time difference of a neutral marker in two consecutive experiments, in which a pressure-driven flow was introduced to accelerate the migration and the DC voltage was reversed to switch the EOF direction. Uncoated capillaries were first characterized by both this two-step method and a conventional method to confirm the validity of the new method. Then the new method was applied in the study of coated capillaries. Results show that this new method is not only fast in speed, but also better in accuracy.
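
    A worked sketch of the arithmetic implied by the two-step scheme (the symmetric-superposition model below is an assumption made for illustration): if the neutral marker moves at v_p + v_eof when the pressure drive and the EOF act together, and at v_p - v_eof after the DC polarity is reversed, the EOF rate follows from the two migration times alone.

```python
def eof_velocity(L_eff_cm, t1_s, t2_s):
    """EOF velocity from two consecutive runs of a neutral marker.

    Run 1: pressure drive + EOF in the same direction -> v1 = v_p + v_eof
    Run 2: DC polarity reversed, EOF opposes the drive -> v2 = v_p - v_eof
    Hence v_eof = (v1 - v2) / 2.
    """
    v1 = L_eff_cm / t1_s
    v2 = L_eff_cm / t2_s
    return 0.5 * (v1 - v2)

# example: 40 cm effective length, 120 s vs 150 s migration times
print(eof_velocity(40.0, 120.0, 150.0))  # ~0.033 cm/s
```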

  10. Video-calls to reduce loneliness and social isolation within care environments for older people: an implementation study using collaborative action research.

    Science.gov (United States)

    Zamir, Sonam; Hennessy, Catherine Hagan; Taylor, Adrian H; Jones, Ray B

    2018-03-02

    Older people in care may be lonely with insufficient contact if families are unable to visit. Face-to-face contact through video-calls may help reduce loneliness, but little is known about the processes of engaging people in care environments in using video-calls. We aimed to identify the barriers to and facilitators of implementing video-calls for older people in care environments. A collaborative action research (CAR) approach was taken to implement a video-call intervention in care environments. We undertook five steps of recruitment, planning, implementation, reflection and re-evaluation, in seven care homes and one hospital in the UK. The video-call intervention 'Skype on Wheels' (SoW) comprised a wheeled device that could hold an iPad and handset, and used Skype to provide a free video-call service. Care staff were collaborators who implemented the intervention within the care setting by agreeing the intervention, recruiting older people and their family, and setting up video-calls. Field notes and reflective diaries on observations and conversations with staff, older people and family were maintained over 15 months, and analysed using thematic analysis. Four care homes implemented the intervention. Eight older people with their respective social contacts made use of video-calls. Older people were able to use SoW with assistance from staff, and enjoyed the use of video-calls to stay better connected with family. However, five barriers to implementation were identified: staff turnover, risk averseness, the SoW design, lack of family commitment, and staff attitudes regarding technology. The SoW intervention, or something similar, could help older people stay better connected with their families in care environments, but if implemented as part of a rigorous evaluation, co-production of the intervention at each recruitment site may be needed to overcome barriers and maximise engagement.

  11. Hardware design and implementation of a wavelet de-noising procedure for medical signal preprocessing.

    Science.gov (United States)

    Chen, Szi-Wen; Chen, Yuan-Ho

    2015-10-16

    In this paper, a discrete wavelet transform (DWT) based de-noising method and its application to noise reduction for medical signal preprocessing are introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT, a thresholding, and an inverse DWT (IDWT) modular circuit. We also propose a novel adaptive thresholding scheme and incorporate it into our wavelet de-noising procedure. Performance was then evaluated on the architectural designs of both the software and the hardware. In addition, the de-noising circuit was also implemented by downloading the Verilog codes to a field programmable gate array (FPGA) based platform so that its ability in noise reduction may be further validated in actual practice. Simulation results produced by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals to the de-noising circuit showed that the circuit could not only meet the requirement of real-time processing, but also achieve satisfactory noise reduction, while the sharp features of the ECG signals are well preserved. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design can achieve a clock frequency of 200 MHz with a power consumption of only 17.4 mW.
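
    For readers who want to prototype the same DWT -> threshold -> IDWT pipeline in software before committing it to hardware, a minimal sketch with PyWavelets (the wavelet choice, decomposition level, and universal threshold are assumptions, not the paper's adaptive scheme):

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=4):
    """Soft-threshold DWT de-noising of a 1-D signal (e.g., ECG)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # noise sigma estimated from the finest detail band (robust MAD)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))   # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]
```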

  12. Smart Grid Technology and Consumer Call Center Readiness

    OpenAIRE

    Schamber, Kelsey L.

    2010-01-01

    The following research project deals with utility call center readiness to address customer concerns and questions about the Smart Grid and smart meter technology. Since consumer engagement is important for the benefits of the Smart Grid to be realized, the readiness and ability of utilities to answer consumer questions is an important issue. Assessing the readiness of utility call centers to address pertinent customer concerns was accomplished by calling utility call centers with Smart Grid...

  13. Microsoft Office Word 2007 step by step

    CERN Document Server

    Cox, Joyce

    2007-01-01

    Experience learning made easy-and quickly teach yourself how to create impressive documents with Word 2007. With Step By Step, you set the pace-building and practicing the skills you need, just when you need them! Apply styles and themes to your document for a polished look; add graphics and text effects-and see a live preview; organize information with new SmartArt diagrams and charts; insert references, footnotes, indexes, a table of contents; send documents for review and manage revisions; turn your ideas into blogs, Web pages, and more. Your all-in-one learning experience includes: files for building sk...

  14. Step by Step Microsoft Office Visio 2003

    CERN Document Server

    Lemke, Judy

    2004-01-01

    Experience learning made easy-and quickly teach yourself how to use Visio 2003, the Microsoft Office business and technical diagramming program. With STEP BY STEP, you can take just the lessons you need, or work from cover to cover. Either way, you drive the instruction-building and practicing the skills you need, just when you need them! Produce computer network diagrams, organization charts, floor plans, and more; use templates to create new diagrams and drawings quickly; add text, color, and 1-D and 2-D shapes; insert graphics and pictures, such as company logos; connect shapes to create a basic f...

  15. Mobile telephones: a comparison of radiated power between 3G VoIP calls and 3G VoCS calls.

    Science.gov (United States)

    Jovanovic, Dragan; Bragard, Guillaume; Picard, Dominique; Chauvin, Sébastien

    2015-01-01

    The purpose of this study is to assess the mean RF power radiated by mobile telephones during voice calls in 3G VoIP (Voice over Internet Protocol) using an application well known to mobile Internet users, and to compare it with the mean power radiated during voice calls in 3G VoCS (Voice over Circuit Switch) on a traditional network. Knowing that the specific absorption rate (SAR) is proportional to the mean radiated power, the user's exposure could be clearly identified at the same time. Three 3G (High Speed Packet Access) smartphones from three different manufacturers, all dual-band for GSM (900 MHz, 1800 MHz) and dual-band for UMTS (900 MHz, 1950 MHz), were used between 28 July and 04 August 2011 in Paris (France) to make 220 two-minute calls on a mobile telephone network with national coverage. The places where the calls were made were selected in such a way as to describe the whole range of usage situations of the mobile telephone. The measuring equipment, called "SYRPOM", recorded the radiation power levels and the frequency bands used during the calls with a sampling rate of 20,000 per second. In the framework of this study, the mean normalised power radiated by a telephone in 3G VoIP calls was evaluated at 0.75% maximum power of the smartphone, compared with 0.22% in 3G VoCS calls. The very low average power levels associated with use of 3G devices with VoIP or VoCS support the view that RF exposure resulting from their use is far from exceeding the basic restrictions of current exposure limits in terms of SAR.

  16. A cascade reaction network mimicking the basic functional steps of acquired immune response

    Science.gov (United States)

    Han, Da; Wu, Cuichen; You, Mingxu; Zhang, Tao; Wan, Shuo; Chen, Tao; Qiu, Liping; Zheng, Zheng; Liang, Hao; Tan, Weihong

    2015-01-01

    Biological systems use complex ‘information processing cores’ composed of molecular networks to coordinate their external environment and internal states. An example of this is the acquired, or adaptive, immune system (AIS), which is composed of both humoral and cell-mediated components. Here we report the step-by-step construction of a prototype mimic of the AIS which we call Adaptive Immune Response Simulator (AIRS). DNA and enzymes are used as simple artificial analogues of the components of the AIS to create a system which responds to specific molecular stimuli in vitro. We show that this network of reactions can function in a manner which is superficially similar to the most basic responses of the vertebrate acquired immune system, including reaction sequences that mimic both humoral and cellular responses. As such, AIRS provides guidelines for the design and engineering of artificial reaction networks and molecular devices. PMID:26391084

  17. A cascade reaction network mimicking the basic functional steps of adaptive immune response.

    Science.gov (United States)

    Han, Da; Wu, Cuichen; You, Mingxu; Zhang, Tao; Wan, Shuo; Chen, Tao; Qiu, Liping; Zheng, Zheng; Liang, Hao; Tan, Weihong

    2015-10-01

    Biological systems use complex 'information-processing cores' composed of molecular networks to coordinate their external environment and internal states. An example of this is the acquired, or adaptive, immune system (AIS), which is composed of both humoral and cell-mediated components. Here we report the step-by-step construction of a prototype mimic of the AIS that we call an adaptive immune response simulator (AIRS). DNA and enzymes are used as simple artificial analogues of the components of the AIS to create a system that responds to specific molecular stimuli in vitro. We show that this network of reactions can function in a manner that is superficially similar to the most basic responses of the vertebrate AIS, including reaction sequences that mimic both humoral and cellular responses. As such, AIRS provides guidelines for the design and engineering of artificial reaction networks and molecular devices.

  18. Crowdsourcing step-by-step information extraction to enhance existing how-to videos

    OpenAIRE

    Nguyen, Phu Tran; Weir, Sarah; Guo, Philip J.; Miller, Robert C.; Gajos, Krzysztof Z.; Kim, Ju Ho

    2014-01-01

    Millions of learners today use how-to videos to master new skills in a variety of domains. But browsing such videos is often tedious and inefficient because video player interfaces are not optimized for the unique step-by-step structure of such videos. This research aims to improve the learning experience of existing how-to videos with step-by-step annotations. We first performed a formative study to verify that annotations are actually useful to learners. We created ToolScape, an interac...

  19. Stepping out: dare to step forward, step back, or just stand still and breathe.

    Science.gov (United States)

    Waisman, Mary Sue

    2012-01-01

    It is important to step out and make a difference. We have one of the most unique and diverse professions that allows for diversity in thought and practice, permitting each of us to grow in our unique niches and make significant contributions. I was frightened to 'step out' to go to culinary school at the age of 46, but it changed forever the way I look at my profession and I have since experienced the most enjoyable and innovative career. There are also times when it is important to 'step back' to relish the roots of our profession; to help bring food back into nutrition; to translate all of our wonderful science into a language of food that Canadians understand. We all need to take time to 'just stand still and breathe': to celebrate our accomplishments, reflect on our actions, ensure we are heading toward our vision, keep the profession vibrant and relevant, and cherish one another.

  20. The Wireless Nursing Call System

    DEFF Research Database (Denmark)

    Jensen, Casper Bruun

    2006-01-01

    This paper discusses a research project in which social scientists were involved both as analysts and supporters during a pilot with a new wireless nursing call system. The case thus exemplifies an attempt to participate in developing dependable health care systems and offers insight into the cha......

  1. Integrating heterogeneous healthcare call centers.

    Science.gov (United States)

    Peschel, K M; Reed, W C; Salter, K

    1998-01-01

    In a relatively short period, OHS has absorbed multiple call centers supporting different LOBs from various acquisitions, functioning with diverse standards, processes, and technologies. However, customer and employee satisfaction is predicated on OHS's ability to thoroughly integrate these heterogeneous call centers. The integration was initiated and has successfully progressed through a balanced program of focused leadership and a defined strategy which includes site consolidation, sound performance management philosophies, and enabling technology. Benefits have already been achieved with even more substantive ones to occur as the integration continues to evolve.

  2. Significant improvements of electrical discharge machining performance by step-by-step updated adaptive control laws

    Science.gov (United States)

    Zhou, Ming; Wu, Jianyang; Xu, Xiaoyi; Mu, Xin; Dou, Yunping

    2018-02-01

    In order to obtain improved electrical discharge machining (EDM) performance, we have dedicated more than a decade to correcting one essential EDM defect, the weak stability of the machining, by developing adaptive control systems. The instabilities of machining are mainly caused by complicated disturbances in discharging. To counteract the effects of the disturbances on machining, we theoretically developed three control laws, from a minimum variance (MV) control law to a minimum variance and pole placements coupled (MVPPC) control law, and then to a two-step-ahead prediction (TP) control law. Based on real-time estimation of the EDM process model parameters and the measured ratio of arcing pulses, which is also called the gap state, the electrode discharging cycle was directly and adaptively tuned so that stable machining could be achieved. To this end, we not only theoretically provide three proven control laws for a developed EDM adaptive control system, but also practically prove the TP control law to be the best in dealing with machining instability and machining efficiency, though the MVPPC control law provided much better EDM performance than the MV control law. It was also shown that the TP control law provided a burn-free machining.
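
    As a toy illustration of closing the loop on the gap state (a plain proportional regulator on the arcing ratio, named as such; it is not the MV, MVPPC, or TP law of the paper, and all constants are assumptions):

```python
def update_discharge_cycle(off_time_us, arc_ratio, target=0.1,
                           gain=50.0, lo=5.0, hi=200.0):
    """Adapt the discharge off-time from the measured gap state.

    arc_ratio : fraction of arcing (unstable) pulses in the last window
    If arcing exceeds the target the gap is too contaminated, so the
    off-time is lengthened to let the dielectric recover; otherwise it
    is shortened to raise machining efficiency.
    """
    off_time_us += gain * (arc_ratio - target)
    return min(hi, max(lo, off_time_us))   # clamp to machine limits
```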

  3. Colloidal Quantum Dot Inks for Single-Step-Fabricated Field-Effect Transistors: The Importance of Postdeposition Ligand Removal.

    Science.gov (United States)

    Balazs, Daniel M; Rizkia, Nisrina; Fang, Hong-Hua; Dirin, Dmitry N; Momand, Jamo; Kooi, Bart J; Kovalenko, Maksym V; Loi, Maria Antonietta

    2018-02-14

    Colloidal quantum dots are a class of solution-processed semiconductors with good prospects for photovoltaic and optoelectronic applications. Removal of the surfactant, the so-called ligand exchange, is a crucial step in making the solid films conductive, but performing it in the solid state introduces surface defects and cracks in the films. Hence, the formation of thick, device-grade films has only been possible through layer-by-layer processing, limiting the technological interest of quantum dot solids. Solution-phase ligand exchange before the deposition allows for the direct deposition of thick, homogeneous films suitable for device applications. In this work, fabrication of field-effect transistors in a single step is reported using blade-coating, an upscalable, industrially relevant technique. Most importantly, a postdeposition washing step results in device properties comparable to the best layer-by-layer processed devices, opening the way to large-scale fabrication and further interest from the research community.

  4. The renewal of hydroelectric concessions in competitive call

    International Nuclear Information System (INIS)

    2013-01-01

    This document discusses various issues associated with the planned competitive call for the French hydraulic power plants. The principles of this competitive call for hydroelectric concessions are first addressed: the administrative regime of concessions, the competitive call process, the criteria for selecting the concession holder, the case of 'concessions of valleys', and the potential competitors. The document then outlines and discusses the difficulties of this competitive call: France is the only country to implement such a procedure; it concerns a national asset; it puts in question whether the equipment will in the future be used to the best energy benefit of French consumers; and the competitive call is an appealing idea but extremely complex in practice. A note discusses the profitability of pumped-storage plants (plants for transfer of energy by pumping).

  5. Automatic extraction of nuclei centroids of mouse embryonic cells from fluorescence microscopy images.

    Directory of Open Access Journals (Sweden)

    Md Khayrul Bashar

    Full Text Available Accurate identification of cell nuclei and their tracking using three-dimensional (3D) microscopic images is a demanding task in many biological studies. Manual identification of nuclei centroids from images is an error-prone task, sometimes impossible to accomplish due to low contrast and the presence of noise. Nonetheless, only a few methods are available for 3D bioimaging applications, which sharply contrasts with 2D analysis, where many methods already exist. In addition, most methods essentially adopt segmentation, for which a reliable solution is still unknown, especially for 3D bio-images having juxtaposed cells. In this work, we propose a new method that can directly extract nuclei centroids from fluorescence microscopy images. This method involves three steps: (i) pre-processing, (ii) local enhancement, and (iii) centroid extraction. The first step includes two variations: the first variation (Variant-1) uses the whole 3D pre-processed image, whereas the second one (Variant-2) reduces the pre-processed image to the candidate regions or the candidate hybrid image for further processing. At the second step, multiscale cube filtering is employed in order to locally enhance the pre-processed image. Centroid extraction in the third step consists of three stages. In Stage-1, we compute a local characteristic ratio at every voxel and extract local maxima regions as candidate centroids using a ratio threshold. Stage-2 processing removes spurious centroids from the Stage-1 results by analyzing the shapes of intensity profiles from the enhanced image. An iterative procedure based on the nearest-neighborhood principle is then proposed to combine fragmented nuclei, if present. Both qualitative and quantitative analyses on a set of 100 images of 3D mouse embryo are performed. Investigations reveal a promising achievement of the technique presented in terms of average sensitivity and precision (i.e., 88.04% and 91.30% for Variant-1; 86.19% and 95.00% for Variant-2...
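
    A minimal sketch of the Stage-1 idea, candidate centroids as thresholded local maxima of a locally enhanced volume (the Gaussian enhancement, neighborhood-mean ratio, and threshold are illustrative stand-ins for the paper's multiscale cube filter and characteristic ratio):

```python
import numpy as np
from scipy import ndimage

def candidate_centroids(vol, sigma=2.0, ratio_thr=1.2, size=5):
    """Extract candidate nuclei centroids from a 3-D image stack."""
    enhanced = ndimage.gaussian_filter(vol.astype(float), sigma)
    # ratio of each voxel to its local neighborhood mean
    local_mean = ndimage.uniform_filter(enhanced, size=size) + 1e-9
    ratio = enhanced / local_mean
    # voxels that are local maxima of the enhanced image
    is_max = enhanced == ndimage.maximum_filter(enhanced, size=size)
    mask = is_max & (ratio > ratio_thr)
    return np.argwhere(mask)   # (z, y, x) candidate centroids
```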

  6. Improving the performance of streamflow forecasting model using data-preprocessing technique in Dungun River Basin

    Science.gov (United States)

    Khai Tiu, Ervin Shan; Huang, Yuk Feng; Ling, Lloyd

    2018-03-01

    An accurate streamflow forecasting model is important for the development of a flood mitigation plan to ensure sustainable development of a river basin. This study adopted the Variational Mode Decomposition (VMD) data-preprocessing technique to process and denoise the rainfall data before feeding it into the Support Vector Machine (SVM) streamflow forecasting model, in order to improve the performance of the selected model. Rainfall data and river water level data for the period 1996-2016 were used for this purpose. Homogeneity tests (Standard Normal Homogeneity Test, the Buishand Range Test, the Pettitt Test and the Von Neumann Ratio Test) and normality tests (Shapiro-Wilk Test, Anderson-Darling Test, Lilliefors Test and Jarque-Bera Test) were carried out on the rainfall series. Homogeneous and non-normally distributed data were found in all the stations. From the recorded rainfall data, it was observed that the Dungun River Basin receives higher monthly rainfall from November to February, during the Northeast Monsoon. Thus, the monthly and seasonal rainfall series of this monsoon were the main focus of this research, as floods usually happen during the Northeast Monsoon period. The water levels predicted by the SVM model were assessed against the observed water levels using non-parametric statistical tests (Biased Method, Kendall's Tau B Test and Spearman's Rho Test).
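
    A minimal sketch of such a decompose-then-regress pipeline (assumptions: the third-party vmdpy package for the VMD step, scikit-learn's SVR as the forecasting model, and a simple lag embedding for the features; none of these details come from the study):

```python
import numpy as np
from sklearn.svm import SVR
from vmdpy import VMD  # assumed third-party VMD implementation

def vmd_svr_forecast(rain, level, lags=3, K=5):
    """Denoise rainfall with VMD, then fit an SVR water-level model."""
    u, _, _ = VMD(rain, alpha=2000, tau=0.0, K=K, DC=0, init=1, tol=1e-7)
    denoised = u[:-1].sum(axis=0)   # drop the highest-frequency mode (assumed noise)
    n = len(denoised)
    # lag embedding: predict level at t from the previous `lags` rainfall values
    X = np.column_stack([denoised[i:n - lags + i] for i in range(lags)])
    y = level[lags:n]
    return SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
```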

  7. A New Hybrid Model Based on Data Preprocessing and an Intelligent Optimization Algorithm for Electrical Power System Forecasting

    Directory of Open Access Journals (Sweden)

    Ping Jiang

    2015-01-01

    Full Text Available The establishment of an electrical power system can not only benefit the reasonable distribution and management of energy resources, but also satisfy the increasing demand for electricity. Electrical power system construction is often a pivotal part of national and regional economic development plans. This paper constructs a hybrid model, known as the E-MFA-BP model, that can forecast indices in the electrical power system, including wind speed, electrical load, and electricity price. First, ensemble empirical mode decomposition is applied to eliminate the noise of the original time series data. After data preprocessing, the back propagation neural network model is applied to carry out the forecasting. Owing to the instability of its structure, the modified firefly algorithm is employed to optimize the weight and threshold values of back propagation to obtain a hybrid model with higher forecasting quality. Three experiments were carried out to verify the effectiveness of the model. Through comparison with other traditional well-known forecasting models, and with models optimized by other optimization algorithms, the experimental results demonstrate that the hybrid model has the best forecasting performance.
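
    A minimal sketch of the canonical firefly update that such a hybrid uses to search the network's weight/threshold space (textbook coefficients and a generic loss callable; not the paper's modified variant):

```python
import numpy as np

rng = np.random.default_rng(1)

def firefly_optimize(loss, dim, n=25, iters=200,
                     beta0=1.0, gamma=1.0, alpha=0.2):
    """Minimize `loss` over R^dim with the basic firefly algorithm."""
    pos = rng.uniform(-1, 1, size=(n, dim))   # candidate weight vectors
    cost = np.apply_along_axis(loss, 1, pos)
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:         # j is "brighter": move i toward j
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pos[i] += (beta * (pos[j] - pos[i])
                               + alpha * (rng.random(dim) - 0.5))
                    cost[i] = loss(pos[i])
        alpha *= 0.97                          # cool the random walk
    best = np.argmin(cost)
    return pos[best], cost[best]
```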

  8. Implementation of the LandTrendr Algorithm on Google Earth Engine

    Directory of Open Access Journals (Sweden)

    Robert E Kennedy

    2018-05-01

    Full Text Available The LandTrendr (LT) algorithm has been used widely for analysis of change in Landsat spectral time series data, but requires significant pre-processing, data management, and computational resources, and has only been accessible to the community in a proprietary programming language (IDL). Here, we introduce LT for the Google Earth Engine (GEE) platform. The GEE platform simplifies the pre-processing steps, allowing focus on the translation of the core temporal segmentation algorithm. Temporal segmentation involves a series of repeated random-access calls to each pixel's time series, resulting in a set of breakpoints ("vertices") that bound straight-line segments. The translation of the algorithm into GEE included both transliteration and code analysis, resulting in improvements and fixes to logic errors. At six study areas representing diverse land cover types across the U.S., we conducted a direct comparison of the new LT-GEE code against the heritage code (LT-IDL). The algorithms agree in most cases, and where disagreements occur, they are largely attributable to logic error fixes in the code translation process. The practical impact of these changes is minimal, as shown by an example of forest disturbance mapping. We conclude that the LT-GEE algorithm represents a faithful translation of the LT code into a platform easily accessible by the broader user community.
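
    A minimal sketch of invoking the translated algorithm from the Earth Engine Python API (the composite-building helper is hypothetical and the parameter values are illustrative; consult the LT-GEE documentation for the maintained interface):

```python
import ee
ee.Initialize()

# annual Landsat surface-reflectance composites, one image per year, with
# the spectral index of interest (e.g. NBR) as the first band
srCollection = build_annual_composites()  # hypothetical user-supplied helper

lt = ee.Algorithms.TemporalSegmentation.LandTrendr(
    timeSeries=srCollection,
    maxSegments=6,
    spikeThreshold=0.9,
    vertexCountOvershoot=3,
    preventOneYearRecovery=True,
    recoveryThreshold=0.25,
    pvalThreshold=0.05,
    bestModelProportion=0.75,
    minObservationsNeeded=6,
)
vertices = lt.select("LandTrendr")  # per-pixel segmentation vertices
```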

  9. Comparison study on mechanical properties single step and three step artificial aging on duralium

    Science.gov (United States)

    Tsamroh, Dewi Izzatus; Puspitasari, Poppy; Andoko, Sasongko, M. Ilman N.; Yazirin, Cepi

    2017-09-01

    Duralium is a kind of non-ferrous alloy that is widely used in industry because of its properties, such as low weight, high ductility, and resistance to corrosion. This study aimed to determine the mechanical properties of duralium after single-step and three-step artificial aging processes. The mechanical properties discussed in this study are the toughness value, tensile strength, and microstructure of duralium. The toughness value after single-step artificial aging was 0.082 joule/mm², and after three-step artificial aging 0.0721 joule/mm². The tensile strength of duralium after single-step artificial aging was 32.36 kgf/mm², and after three-step artificial aging 32.70 kgf/mm². The microstructure photo of duralium after single-step artificial aging showed that the precipitate (θ) was not spread evenly, indicated by black spots, which increases the toughness of the material, while the microstructure photo of duralium treated by three-step artificial aging showed more precipitate (θ) spread evenly compared with single-step artificial aging.

  10. Linking Calling Orientations to Organizational Attachment via Organizational Instrumentality

    Science.gov (United States)

    Cardador, M. Teresa; Dane, Erik; Pratt, Michael G.

    2011-01-01

    Despite an emerging interest in callings, researchers know little about whether calling orientations matter in the workplace. We explore the under-examined relationship between a calling orientation and employees' attachment to their organizations. Although some theory suggests that callings may be negatively related to organizational attachment,…

  11. Rapid decay of vacancy islands at step edges on Ag(111): step orientation dependence

    International Nuclear Information System (INIS)

    Shen, Mingmin; Thiel, P A; Jenks, Cynthia J; Evans, J W

    2010-01-01

    Previous work has established that vacancy islands or pits fill much more quickly when they are in contact with a step edge, such that the common boundary is a double step. The present work focuses on the effect of the orientation of that step, with two possibilities existing for a face centered cubic (111) surface: A- and B-type steps. We find that the following features can depend on the orientation: (1) the shapes of islands while they shrink; (2) whether the island remains attached to the step edge; and (3) the rate of filling. The first two effects can be explained by the different rates of adatom diffusion along the A- and B-steps that define the pit, enhanced by the different filling rates. The third observation-the difference in the filling rate itself-is explained within the context of the concerted exchange mechanism at the double step. This process is facile at all regular sites along B-steps, but only at kink sites along A-steps, which explains the different rates. We also observe that oxygen can greatly accelerate the decay process, although it has no apparent effect on an isolated vacancy island (i.e. an island that is not in contact with a step).

  12. Comparison of step-by-step kinematics of resisted, assisted and unloaded 20-m sprint runs.

    Science.gov (United States)

    van den Tillaar, Roland; Gamble, Paul

    2018-03-26

    This investigation examined step-by-step kinematics of sprint running acceleration. Using a randomised counterbalanced approach, 37 female team handball players (age 17.8 ± 1.6 years, body mass 69.6 ± 9.1 kg, height 1.74 ± 0.06 m) performed resisted, assisted and unloaded 20-m sprints within a single session. 20-m sprint times and step velocity, as well as step length, step frequency, contact and flight times of each step were evaluated for each condition with a laser gun and an infrared mat. Almost all measured parameters were altered for each step under the resisted and assisted sprint conditions (η² ≥ 0.28). The exception was step frequency, which did not differ between assisted and normal sprints. Contact time, flight time and step frequency at almost each step differed between the 'fast' and 'slow' sub-groups (η² ≥ 0.22). Nevertheless, overall both groups responded similarly to the respective sprint conditions. No significant differences in step length were observed between groups for the respective conditions. It is possible that continued exposure to assisted sprinting might allow the female team-sports players studied to adapt their coordination to the 'over-speed' condition and increase step frequency. It is notable that step-by-step kinematics in these sprints were easy to obtain using relatively inexpensive equipment with the possibility of direct feedback.
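
    The per-step quantities named above are linked by simple identities; a worked sketch with illustrative numbers (not data from the study):

```python
def step_kinematics(contact_s, flight_s, step_length_m):
    """Derive step frequency and step velocity from timing and length."""
    step_time = contact_s + flight_s          # duration of one step
    frequency = 1.0 / step_time               # steps per second (Hz)
    velocity = step_length_m * frequency      # metres per second
    return frequency, velocity

# e.g. 0.12 s contact, 0.11 s flight, 1.80 m step length
print(step_kinematics(0.12, 0.11, 1.80))      # ~(4.35 Hz, 7.83 m/s)
```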

  13. The Influence of Judgment Calls on Meta-Analytic Findings.

    Science.gov (United States)

    Tarrahi, Farid; Eisend, Martin

    2016-01-01

    Previous research has suggested that judgment calls (i.e., methodological choices made in the process of conducting a meta-analysis) have a strong influence on meta-analytic findings and question their robustness. However, prior research applies case study comparison or reanalysis of a few meta-analyses with a focus on a few selected judgment calls. These studies neglect the fact that different judgment calls are related to each other and simultaneously influence the outcomes of a meta-analysis, and that meta-analytic findings can vary due to non-judgment call differences between meta-analyses (e.g., variations of effects over time). The current study analyzes the influence of 13 judgment calls in 176 meta-analyses in marketing research by applying a multivariate, multilevel meta-meta-analysis. The analysis considers simultaneous influences from different judgment calls on meta-analytic effect sizes and controls for alternative explanations based on non-judgment call differences between meta-analyses. The findings suggest that judgment calls have only a minor influence on meta-analytic findings, whereas non-judgment call differences between meta-analyses are more likely to explain differences in meta-analytic findings. The findings support the robustness of meta-analytic results and conclusions.

  14. Optimal scheduling in call centers with a callback option

    OpenAIRE

    Legros , Benjamin; Jouini , Oualid; Koole , Ger

    2016-01-01

    We consider a call center model with a callback option, which allows an inbound call to be transformed into an outbound one. A delayed call, with a long anticipated waiting time, receives the option to be called back. We assume a probabilistic customer reaction to the callback offer (option). The objective of the system manager is to characterize the optimal call scheduling that minimizes the expected waiting and abandonment costs. For the single-server case, we prove that ...

  15. Flight calls and orientation

    DEFF Research Database (Denmark)

    Larsen, Ole Næsbye; Andersen, Bent Bach; Kropp, Wibke

    2008-01-01

    flight calls was simulated by sequential computer controlled activation of five loudspeakers placed in a linear array perpendicular to the bird's migration course. The bird responded to this stimulation by changing its migratory course in the direction of that of the ‘flying conspecifics' but after about......  In a pilot experiment a European Robin, Erithacus rubecula, expressing migratory restlessness with a stable orientation, was video filmed in the dark with an infrared camera and its directional migratory activity was recorded. The flight overhead of migrating conspecifics uttering nocturnal...... 30 minutes it drifted back to its original migration course. The results suggest that songbirds migrating alone at night can use the flight calls from conspecifics as additional cues for orientation and that they may compare this information with other cues to decide what course to keep....

  16. When a Step Is Not a Step! Specificity Analysis of Five Physical Activity Monitors.

    Science.gov (United States)

    O'Connell, Sandra; ÓLaighin, Gearóid; Quinlan, Leo R

    2017-01-01

    Physical activity is an essential aspect of a healthy lifestyle for both physical and mental health. As step count is one of the most utilized measures for quantifying physical activity, it is important that activity-monitoring devices be both sensitive and specific in recording actual steps taken and disregard non-stepping body movements. The objective of this study was to assess the specificity of five activity monitors during a variety of prescribed non-stepping activities. Participants wore five activity monitors simultaneously for a variety of prescribed activities including deskwork, taking an elevator, taking a bus journey, automobile driving, washing and drying dishes, a functional reaching task, indoor cycling, outdoor cycling, and indoor rowing. Each task was carried out for either a specific duration of time or over a specific distance. The activity monitors tested were the ActivPAL micro™, NL-2000™ pedometer, Withings Smart Activity Monitor Tracker (Pulse O2)™, Fitbit One™ and Jawbone UP™. Participants were video-recorded while carrying out the prescribed activities, and the false positive step count registered on each activity monitor was obtained and compared to the video. All activity monitors registered a significant number of false positive steps per minute during one or more of the prescribed activities. The Withings™ monitor performed best, registering a significant number of false positive steps per minute during the outdoor cycling activity only (P = 0.025). The Jawbone™ registered a significant number of false positive steps during the functional reaching task and while washing and drying dishes, which involved arm and hand movement, and false positive steps were also registered during the cycling exercises. As false positive steps were registered on the activity monitors during the non-stepping activities, the authors conclude that non-stepping physical activities can result in the false detection of steps. This can negatively affect the quantification of physical activity.

  17. Automatic Semantic Orientation of Adjectives for Indonesian Language Using PMI-IR and Clustering

    Science.gov (United States)

    Riyanti, Dewi; Arif Bijaksana, M.; Adiwijaya

    2018-03-01

    We present our work in the area of sentiment analysis for the Indonesian language. We focus on building automatic semantic orientation using available resources in Indonesian. In this research we used an Indonesian corpus that contains 9 million words from kompas.txt and tempo.txt, manually tagged and annotated with a part-of-speech tagset. We then constructed a dataset by taking all the adjectives from the corpus and removing the adjectives with no orientation; the resulting set contained 923 adjective words. The system includes several steps, such as text pre-processing and clustering. The text pre-processing aims to increase the accuracy, and the clustering method then classifies each word into the related sentiment, positive or negative. With improvements to the text pre-processing, an accuracy of 72% can be achieved.
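
    For reference, a minimal sketch of the PMI-IR scoring that this family of methods builds on (Turney-style semantic orientation; the Indonesian seed words "baik"/"buruk" and the hit-count callable are illustrative assumptions, not details from the paper):

```python
import math

def so_pmi(word, hits, pos_seed="baik", neg_seed="buruk", eps=0.01):
    """Semantic orientation of `word` via PMI-IR.

    hits(q) returns the co-occurrence/hit count for a query, e.g. from a
    search index or corpus window counts; eps smooths zero counts.
    SO > 0 suggests positive orientation, SO < 0 negative.
    """
    return math.log2(
        ((hits(f"{word} NEAR {pos_seed}") + eps) * (hits(neg_seed) + eps)) /
        ((hits(f"{word} NEAR {neg_seed}") + eps) * (hits(pos_seed) + eps))
    )
```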

  18. On the Convexity of Step out - Step in Sequencing Games

    NARCIS (Netherlands)

    Musegaas, Marieke; Borm, Peter; Quant, Marieke

    2016-01-01

    The main result of this paper is the convexity of Step out - Step in (SoSi) sequencing games, a class of relaxed sequencing games first analyzed by Musegaas, Borm, and Quant (2015). The proof makes use of a polynomial time algorithm determining the value and an optimal processing order for an

  19. Lunar phases and crisis center telephone calls.

    Science.gov (United States)

    Wilson, J E; Tobacyk, J J

    1990-02-01

    The lunar hypothesis, that is, the notion that lunar phases can directly affect human behavior, was tested by time-series analysis of 4,575 crisis center telephone calls (all calls recorded for a 6-month interval). As expected, the lunar hypothesis was not supported. The 28-day lunar cycle accounted for less than 1% of the variance of the frequency of crisis center calls. Also, as hypothesized from an attribution theory framework, crisis center workers reported significantly greater belief in lunar effects than a non-crisis-center-worker comparison group.

  20. Correlates of Gay-Related Name-Calling in Schools

    Science.gov (United States)

    Slaatten, Hilde; Hetland, Jørn; Anderssen, Norman

    2015-01-01

    The aim of this study was to examine whether attitudes about gay-related name-calling, social norms concerning gay-related name-calling among co-students, teacher intervention, and school-related support would predict whether secondary school pupils had called another pupil a gay-related name during the last month. A total of 921 ninth-grade…

  1. 47 CFR 90.241 - Radio call box operations.

    Science.gov (United States)

    2010-10-01

    ... remains on for a period in excess of three minutes. The automatic cutoff system must be designed so the... Public Safety Pool for highway call box systems subject to the following requirements: (1) Call box... effective radiated power (ERP). (3) The height of a call box antenna may not exceed 6.1 meters (20 feet...

  2. Microsoft® Office Access™ 2007 Step by Step

    CERN Document Server

    Lambert, Steve; Lambert, Joan

    2009-01-01

    Experience learning made easy-and quickly teach yourself how to build database solutions with Access 2007. With Step By Step, you set the pace-building and practicing the skills you need, just when you need them! Build databases from scratch or from templates; exchange data with other databases and Office documents; create forms to simplify data entry; use filters and queries to find and analyze information; design rich reports that help make your data meaningful; help prevent data corruption and unauthorized access. Your all-in-one learning experience includes: Files for building skills and practic

  3. Flexible Method for the Automated Offline-Detection of Artifacts in Multi-Channel Electroencephalogram Recordings

    DEFF Research Database (Denmark)

    Waser, Markus; Garn, Heinrich; Benke, Thomas

    2017-01-01

    However, these preprocessing steps do not allow for complete artifact correction. We propose a method for the automated offline-detection of remaining artifacts after preprocessing in multi-channel EEG recordings. In contrast to existing methods it requires neither adaptive parameters varying between recordings nor a topography template. It is suited for short EEG segments and is flexible with regard to target applications. The algorithm was developed and tested on 60 clinical EEG samples of 20 seconds each that were recorded both in resting state and during cognitive activation to gain a realistic...

  4. Coaching "Callings" throughout the Adult Life Cycle.

    Science.gov (United States)

    Hudson, Frederic M.

    2001-01-01

    The process of "callings" continues throughout life. Coaching can connect the present to the future in a meaningful way. Callings represent a value shift requiring revision of the nature and scope of one's central purpose in life and meaningful activities. (JOW)

  5. Role of step stiffness and kinks in the relaxation of vicinal (001) with zigzag [110] steps

    Science.gov (United States)

    Mahjoub, B.; Hamouda, Ajmi BH.; Einstein, TL.

    2017-08-01

    We present a kinetic Monte Carlo study of the relaxation dynamics and steady state configurations of 〈110〉 steps on a vicinal (001) simple cubic surface. This system is interesting because 〈110〉 (fully kinked) steps have different elementary excitation energetics and favor step diffusion more than 〈100〉 (nominally straight) steps. In this study we show how this leads to different relaxation dynamics as well as to different steady state configurations, including that 2-bond breaking processes are rate determining for 〈110〉 steps, in contrast to 3-bond breaking processes for 〈100〉 steps found in previous work [Surface Sci. 602, 3569 (2008)]. The analysis of the terrace-width distribution (TWD) shows a significant role of kink-generation-annihilation processes during the relaxation of steps: the kinetics of relaxation toward the steady state are much faster in the case of 〈110〉 zigzag steps, with a higher standard deviation of the TWD, in agreement with a decrease of step stiffness due to orientation. We conclude that smaller step stiffness leads inexorably to faster step dynamics towards the steady state. The step-edge anisotropy slows the relaxation of steps and increases the strength of step-step effective interactions.

  6. Call to Action: The Case for Advancing Disaster Nursing Education in the United States.

    Science.gov (United States)

    Veenema, Tener Goodwin; Lavin, Roberta Proffitt; Griffin, Anne; Gable, Alicia R; Couig, Mary Pat; Dobalian, Aram

    2017-11-01

    Climate change, human conflict, and emerging infectious diseases are inexorable actors in our rapidly evolving healthcare landscape that are triggering an ever-increasing number of disaster events. A global nursing workforce is needed that possesses the knowledge, skills, and abilities to respond to any disaster or large-scale public health emergency in a timely and appropriate manner. The purpose of this article is to articulate a compelling mandate for the advancement of disaster nursing education within the United States with clear action steps in order to contribute to the achievement of this vision. A national panel of invited disaster nursing experts was convened through a series of monthly semistructured conference calls to work collectively towards the achievement of a national agenda for the future of disaster nursing education. National nursing education experts have developed consensus recommendations for the advancement of disaster nursing education in the United States. This article proposes next steps and action items to achieve the desired vision of national nurse readiness. Novel action steps for expanding disaster educational opportunities across the continuum of nursing are proposed in response to the current compelling need to prepare for, respond to, and mitigate the impact of disasters on human health. U.S. educational institutions and health and human service organizations that employ nurses must commit to increasing access to a variety of quality disaster-related educational programs for nurses and nurse leaders. Opportunities exist to strengthen disaster readiness and enhance national health security by expanding educational programming and training for nurses. © 2017 Sigma Theta Tau International.

  7. Performance indicators for call centers with impatience

    NARCIS (Netherlands)

    Jouini, O.; Koole, G.M.; Roubos, A.

    2013-01-01

    An important feature of call center modeling is the presence of impatient customers. This article considers single-skill call centers including customer abandonments. A number of different service-level definitions are structured, including all those used in practice, and the explicit computation of

  8. Calling, is there anything special about it?

    African Journals Online (AJOL)

    2016-07-15

    Jul 15, 2016 ... when a pastor is installed or a new candidate is ordained, 'The one who calls you is faithful .... extension to secular work of the dignity of a calling' (Fowler ... For Luther, therefore, the private life of devotion exercised in the.

  9. Ultrasound call detection in capybara

    Directory of Open Access Journals (Sweden)

    Selene S.C. Nogueira

    2012-07-01

    Full Text Available The vocal repertoire of some animal species has been considered a non-invasive tool to predict distress reactivity. In rats, ultrasound emissions were reported as a distress indicator. Capybaras' vocal repertoire was reported recently and seems to include ultrasound calls, but this has not yet been confirmed. Thus, in order to check if a poor state of welfare was linked to ultrasound calls in the capybara vocal repertoire, the aim of this study was to track the presence of ultrasound emissions in 11 animals under three conditions: (1) unrestrained; (2) intermediately restrained; and (3) highly restrained. The ultrasound track identified frequencies in the range of 31.8±3.5 kHz in adults and 33.2±8.5 kHz in juveniles. These ultrasound frequencies occurred only when animals were highly restrained, physically restrained or injured during handling. We concluded that these calls with ultrasound components are related to pain and restraint because they did not occur when animals were free of restraint. Thus we suggest that this vocalization may be used as an additional tool to assess capybaras' welfare.

  10. Call for participation in the neurogenetics consortium within the Human Variome Project.

    Science.gov (United States)

    Haworth, Andrea; Bertram, Lars; Carrera, Paola; Elson, Joanna L; Braastad, Corey D; Cox, Diane W; Cruts, Marc; den Dunnen, Johann T; Farrer, Matthew J; Fink, John K; Hamed, Sherifa A; Houlden, Henry; Johnson, Dennis R; Nuytemans, Karen; Palau, Francesc; Rayan, Dipa L Raja; Robinson, Peter N; Salas, Antonio; Schüle, Birgitt; Sweeney, Mary G; Woods, Michael O; Amigo, Jorge; Cotton, Richard G H; Sobrido, Maria-Jesus

    2011-08-01

    The rate of DNA variation discovery has accelerated the need to collate, store and interpret the data in a standardised coherent way and is becoming a critical step in maximising the impact of discovery on the understanding and treatment of human disease. This particularly applies to the field of neurology as neurological function is impaired in many human disorders. Furthermore, the field of neurogenetics has been proven to show remarkably complex genotype-to-phenotype relationships. To facilitate the collection of DNA sequence variation pertaining to neurogenetic disorders, we have initiated the "Neurogenetics Consortium" under the umbrella of the Human Variome Project. The Consortium's founding group consisted of basic researchers, clinicians, informaticians and database creators. This report outlines the strategic aims established at the preliminary meetings of the Neurogenetics Consortium and calls for the involvement of the wider neurogenetic community in enabling the development of this important resource.

  11. Hardware Design and Implementation of a Wavelet De-Noising Procedure for Medical Signal Preprocessing

    Directory of Open Access Journals (Sweden)

    Szi-Wen Chen

    2015-10-01

    Full Text Available In this paper, a discrete wavelet transform (DWT) based de-noising with its applications into the noise reduction for medical signal preprocessing is introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT, a thresholding, and an inverse DWT (IDWT) modular circuit. We also proposed a novel adaptive thresholding scheme and incorporated it into our wavelet de-noising procedure. Performance was then evaluated on the architectural designs of both the software and the hardware. In addition, the de-noising circuit was also implemented by downloading the Verilog codes to a field programmable gate array (FPGA) based platform so that its ability in noise reduction may be further validated in actual practice. Simulation experiment results produced by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals to the de-noising circuit showed that the circuit could not only desirably meet the requirement of real-time processing, but also achieve satisfactory performance for noise reduction, while the sharp features of the ECG signals can be well preserved. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design can achieve a clock frequency of 200 MHz with a power consumption of only 17.4 mW when operated at that frequency.
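
    As a software-side reference for the DWT, thresholding, and IDWT pipeline described above, the following is a minimal sketch using PyWavelets. The paper's novel adaptive thresholding scheme is not specified in the abstract, so the classical universal threshold of Donoho and Johnstone is used as a stand-in; the wavelet choice and decomposition level are likewise assumptions:

        import numpy as np
        import pywt  # PyWavelets

        def wavelet_denoise(signal, wavelet="db4", level=4):
            # Forward DWT.
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            # Noise level estimated from the finest detail coefficients (MAD).
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            # Universal threshold (stand-in for the paper's adaptive scheme).
            thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
            # Soft-threshold the detail coefficients, keep the approximation.
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                    for c in coeffs[1:]]
            # Inverse DWT; trim possible padding to the original length.
            return pywt.waverec(coeffs, wavelet)[:len(signal)]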

  12. Fast data preprocessing for chromatographic fingerprints of tomato cell wall polysaccharides using chemometric methods.

    Science.gov (United States)

    Quéméner, Bernard; Bertrand, Dominique; Marty, Isabelle; Causse, Mathilde; Lahaye, Marc

    2007-02-02

    The variability in the chemistry of cell wall polysaccharides in pericarp tissue of red-ripe tomato fruit (Solanum lycopersicon Mill.) was characterized by chemical methods and enzymatic degradations coupled to high performance anion exchange chromatography (HPAEC) and mass spectrometry analysis. The large-fruited line Levovil (LEV), carrying introgressed chromosome fragments from the cherry tomato line Cervil (CER) on chromosomes 4 (LC4), 9 (LC9), or on chromosomes 1, 2, 4 and 9 (LCX), and containing quantitative trait loci (QTLs) for texture traits, was studied. In order to differentiate cell wall polysaccharide modifications in the tomato fruit collection by multivariate analysis, chromatograms were corrected for baseline drift and shift of the component elution time using an approach derived from image analysis and mathematical morphology. The baseline was first corrected by using a "moving window" approach, while the peak-matching method developed was based upon location of peaks as local maxima within a window of a definite size. The fast chromatographic data preprocessing proposed was a prerequisite for the different chemometric treatments, such as variance and principal component analysis, applied herein to the analysis. Applied to the tomato collection, the combined enzymatic degradations and HPAEC analyses revealed that the firm LCX and CER genotypes showed a higher proportion of glucuronoxylans and pectic arabinan side chains, while the mealy LC9 genotype demonstrated the highest content of pectic galactan side chains. QTLs on tomato chromosomes 1, 2, 4 and 9 contain important genes controlling glucuronoxylan and pectic neutral side chain biosynthesis and/or metabolism.
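
    The two preprocessing ideas named above (a "moving window" baseline correction, and peak matching as local maxima within a fixed-size window) can be sketched as follows. Window sizes are illustrative assumptions, not the authors' values; the running minimum is the morphological erosion that the "mathematical morphology" phrasing suggests:

        import numpy as np
        from scipy.ndimage import minimum_filter1d
        from scipy.signal import argrelmax

        def correct_baseline(chromatogram, window=101):
            """Subtract a running-minimum estimate of the baseline drift."""
            baseline = minimum_filter1d(chromatogram, size=window)
            return chromatogram - baseline

        def locate_peaks(chromatogram, window=15):
            """Indices of points that are local maxima within +/- `window` samples."""
            return argrelmax(chromatogram, order=window)[0]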

  13. When a Step Is Not a Step! Specificity Analysis of Five Physical Activity Monitors.

    Directory of Open Access Journals (Sweden)

    Sandra O'Connell

    Full Text Available Physical activity is an essential aspect of a healthy lifestyle for both physical and mental health states. As step count is one of the most utilized measures for quantifying physical activity it is important that activity-monitoring devices be both sensitive and specific in recording actual steps taken and disregard non-stepping body movements. The objective of this study was to assess the specificity of five activity monitors during a variety of prescribed non-stepping activities. Participants wore five activity monitors simultaneously for a variety of prescribed activities including deskwork, taking an elevator, taking a bus journey, automobile driving, washing and drying dishes; functional reaching task; indoor cycling; outdoor cycling; and indoor rowing. Each task was carried out for either a specific duration of time or over a specific distance. Activity monitors tested were the ActivPAL micro™, NL-2000™ pedometer, Withings Smart Activity Monitor Tracker (Pulse O2)™, Fitbit One™ and Jawbone UP™. Participants were video-recorded while carrying out the prescribed activities and the false positive step count registered on each activity monitor was obtained and compared to the video. All activity monitors registered a significant number of false positive steps per minute during one or more of the prescribed activities. The Withings™ activity monitor performed best, registering a significant number of false positive steps per minute during the outdoor cycling activity only (P = 0.025). The Jawbone™ registered a significant number of false positive steps during the functional reaching task and while washing and drying dishes, which involved arm and hand movement (P < 0.01 for both). The ActivPAL™ registered a significant number of false positive steps during the cycling exercises (P < 0.001 for both). As a number of false positive steps were registered on the activity monitors during the non-stepping activities, the authors conclude that non-stepping physical activities can result in the false detection of steps. This can negatively affect the quantification of physical activity.
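
    The specificity metric reported above reduces to a simple rate: any step a monitor registers during a prescribed non-stepping task is a false positive, normalized by task duration. A minimal sketch (variable names are illustrative):

        def false_positives_per_minute(monitor_steps, video_steps, minutes):
            # For the prescribed non-stepping activities, video_steps is 0,
            # so every registered step counts as a false positive.
            return max(monitor_steps - video_steps, 0) / minutes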

  14. Relabeling the Medications We Call Antidepressants

    Directory of Open Access Journals (Sweden)

    David Antonuccio

    2012-01-01

    Full Text Available This paper raises the question about whether the data on the medications we call antidepressants justify the label of antidepressant. The authors argue that a true antidepressant should be clearly superior to placebo, should offer a risk/benefit balance that exceeds that of alternative treatments, should not increase suicidality, should not increase anxiety and agitation, should not interfere with sexual functioning, and should not increase depression chronicity. Unfortunately, these medications appear to fall short on all of these dimensions. Many of the “side effects” of these medications have larger effect sizes than the antidepressant effect size. To call these medications antidepressants may make sense from a marketing standpoint but may be misleading from a scientific perspective. Consumers deserve a label that more accurately reflects the data on the largest effects and helps them understand the range of effects from these medications. In other words, it may make just as much sense to call these medications antiaphrodisiacs as antidepressants because the negative effects on libido and sexual functioning are so common. It can be argued that a misleading label may interfere with our commitment to informed consent. Therefore, it may be time to stop calling these medications antidepressants.

  15. From raw material to dish: pasta quality step by step.

    Science.gov (United States)

    Sicignano, Angelo; Di Monaco, Rossella; Masi, Paolo; Cavella, Silvana

    2015-10-01

    Pasta is a traditional Italian cereal-based food that is popular worldwide because of its convenience, versatility, sensory and nutritional value. The aim of this review is to present a step-by-step guide to facilitate the understanding of the most important events that can affect pasta characteristics, directing the reader to the appropriate production steps. Owing to its unique flavor, color, composition and rheological properties, durum wheat semolina is the best raw material for pasta production. Although pasta is traditionally made from only two ingredients, sensory quality and chemical/physical characteristics of the final product may vary greatly. Starting from the same ingredients, there are a lot of different events in each step of pasta production that can result in the development of varieties of pasta with different characteristics. In particular, numerous studies have demonstrated the importance of temperature and humidity conditions of the pasta drying operation as well as the significance of the choice of raw material and operating conditions on pasta quality. © 2015 Society of Chemical Industry.

  16. Step by step parallel programming method for molecular dynamics code

    International Nuclear Information System (INIS)

    Orii, Shigeo; Ohta, Toshio

    1996-07-01

    Parallel programming for a numerical simulation program of molecular dynamics is carried out with a step-by-step programming technique using the two-phase method. As a result, within a certain range of computing parameters, parallel performance is obtained by using do-loop-level parallel programming, which decomposes the calculation according to the indices of do-loops across processors, on the vector parallel computer VPP500 and the scalar parallel computer Paragon. It is also found that VPP500 shows parallel performance over a wider range of computing parameters. The reason is that the time cost of the program parts that cannot be reduced by do-loop-level parallel programming can be reduced to a negligible level by vectorization. After that, the time-consuming parts of the program are concentrated in fewer parts that can be accelerated by do-loop-level parallel programming. This report shows the step-by-step parallel programming method and the parallel performance of the molecular dynamics code on VPP500 and Paragon. (author)

  17. Testing a stepped care model for binge-eating disorder: a two-step randomized controlled trial.

    Science.gov (United States)

    Tasca, Giorgio A; Koszycki, Diana; Brugnera, Agostino; Chyurlia, Livia; Hammond, Nicole; Francis, Kylie; Ritchie, Kerri; Ivanova, Iryna; Proulx, Genevieve; Wilson, Brian; Beaulac, Julie; Bissada, Hany; Beasley, Erin; Mcquaid, Nancy; Grenon, Renee; Fortin-Langelier, Benjamin; Compare, Angelo; Balfour, Louise

    2018-05-24

    A stepped care approach involves patients first receiving low-intensity treatment followed by higher intensity treatment. This two-step randomized controlled trial investigated the efficacy of a sequential stepped care approach for the psychological treatment of binge-eating disorder (BED). In the first step, all participants with BED (n = 135) received unguided self-help (USH) based on a cognitive-behavioral therapy model. In the second step, participants who remained in the trial were randomized either to 16 weeks of group psychodynamic-interpersonal psychotherapy (GPIP) (n = 39) or to a no-treatment control condition (n = 46). Outcomes were assessed for USH in step 1, and then for step 2 up to 6-months post-treatment using multilevel regression slope discontinuity models. In the first step, USH resulted in large and statistically significant reductions in the frequency of binge eating. Statistically significant moderate to large reductions in eating disorder cognitions were also noted. In the second step, there was no difference in change in frequency of binge eating between GPIP and the control condition. Compared with controls, GPIP resulted in significant and large improvement in attachment avoidance and interpersonal problems. The findings indicated that a second step of a stepped care approach did not significantly reduce binge-eating symptoms beyond the effects of USH alone. The study provided some evidence for the second step potentially to reduce factors known to maintain binge eating in the long run, such as attachment avoidance and interpersonal problems.

  18. The Barbados Emergency Ambulance Service: High Frequency of Nontransported Calls

    Directory of Open Access Journals (Sweden)

    Sherwin E. Phillips

    2012-01-01

    Full Text Available Objectives. There are no published studies on the Barbados Emergency Ambulance Service and no assessment of the calls that end in nontransported individuals. We describe reasons for the nontransport of potential clients. Methods. We used the Emergency Medical Dispatch (Medical Priority Dispatch System) instrument, augmented with five local call types, to collect information on types of calls. The calls were categorised under 7 headings. Correlations between call types and response time were calculated. Results. Most calls were from the category medical (54%). Nineteen percent (19%) of calls were in the non-transported category. Calls from call type Cancelled accounted for most of these, and this was related to response time, while Refused service was inversely related. Conclusions. The Barbados Ambulance Service is mostly used by people with a known illness and for trauma cases. One-fifth of calls fall into a category where the ambulance is not used, often due to cancellation, which is related to response time. Other factors such as the use of alternative transport are also important. Further study to identify factors that contribute to the non-transported category of calls is necessary if improvements in service quality are to be made.

  19. Quality maternity care for every woman, everywhere: a call to action.

    Science.gov (United States)

    Koblinsky, Marjorie; Moyer, Cheryl A; Calvert, Clara; Campbell, James; Campbell, Oona M R; Feigl, Andrea B; Graham, Wendy J; Hatt, Laurel; Hodgins, Steve; Matthews, Zoe; McDougall, Lori; Moran, Allisyn C; Nandakumar, Allyala K; Langer, Ana

    2016-11-05

    To improve maternal health requires action to ensure quality maternal health care for all women and girls, and to guarantee access to care for those outside the system. In this paper, we highlight some of the most pressing issues in maternal health and ask: what steps can be taken in the next 5 years to catalyse action toward achieving the Sustainable Development Goal target of less than 70 maternal deaths per 100 000 livebirths by 2030, with no single country exceeding 140? What steps can be taken to ensure that high-quality maternal health care is prioritised for every woman and girl everywhere? We call on all stakeholders to work together in securing a healthy, prosperous future for all women. National and local governments must be supported by development partners, civil society, and the private sector in leading efforts to improve maternal-perinatal health. This effort means dedicating needed policies and resources, and sustaining implementation to address the many factors influencing maternal health-care provision and use. Five priority actions emerge for all partners: prioritise quality maternal health services that respond to the local specificities of need, and meet emerging challenges; promote equity through universal coverage of quality maternal health services, including for the most vulnerable women; increase the resilience and strength of health systems by optimising the health workforce, and improve facility capability; guarantee sustainable finances for maternal-perinatal health; and accelerate progress through evidence, advocacy, and accountability. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Step-by-step phacoemulsification training program for ophthalmology residents

    Directory of Open Access Journals (Sweden)

    Wang Yulan

    2013-01-01

    Full Text Available Aims: The aim was to analyze the learning curve of phacoemulsification (phaco) performed by residents without experience in performing extra-capsular cataract extraction (ECCE) in a step-by-step training program (SBSTP). Materials and Methods: Consecutive surgical records of phaco performed from March 2009 to Sept 2011 by four residents without previous ECCE experience were retrospectively reviewed. The completion rate of the first 30 procedures by each resident was calculated. The main intraoperative phaco parameter records for the first 30 surgeries by each resident were compared with those for their last 30 surgeries. Intraoperative complications in the residents' procedures were also recorded and analyzed. Results: A total of 1013 surgeries were performed by residents. The completion rate for the first 30 phaco procedures was 79.2 ± 5.8%. The main reasons for halting the procedure were as follows: anterior capsule tear, inability to crack the nucleus, and posterior capsular rupture during phaco or cortex removal. Cumulative dissipated energy of phaco power used during the surgeries was significantly less in the last 30 cases compared with the first 30 cases (30.10 ± 17.58 vs. 55.41 ± 37.59, P = 0.021). Posterior capsular rupture rate was 2.5 ± 1.2% in total (10.8 ± 4.2% in the first 30 cases and 1.7 ± 1.9% in the last 30 cases, P = 0.008); a statistically significant difference. Conclusion: The step-by-step training program might be a necessary process for a resident to transition from dependence to a self-supported operator. It is also an essential middle step between wet lab training and performing the entire phaco procedure on the patient both effectively and safely.

  1. Make a 21st century phone call

    CERN Multimedia

    Katarina Anthony

    2014-01-01

    Want to avoid roaming charges? Click to call anyone at CERN? How about merging your CERN landline with your existing smartphone? That's all easily done with Lync, CERN's new opt-in service that can take your calls to the next level.   The Lync application on Windows (left) and iPhone (right). Lync unites CERN's traditional telephone service with the digital sphere. "Lync gives you the gift of mobility, by letting you access your CERN landline on the go," explains Pawel Grzywaczewski, service manager of the Lync system. "Once you've registered your CERN telephone with the service, you can run the Lync application and make calls from a range of supported devices. No matter where you are in the world - be it simply out to lunch or off at an international conference - you can make a CERN call as though you were in the office. All you need is an Internet connection!" Following a recent upgrade, CERN's Lync service now has...

  2. Resting-state test-retest reliability of a priori defined canonical networks over different preprocessing steps

    NARCIS (Netherlands)

    Varikuti, D.P.; Hoffstaedter, F.; Genon, S.; Schwender, H.; Reid, A.T.; Eickhoff, S.B.

    2017-01-01

    Resting-state functional connectivity analysis has become a widely used method for the investigation of human brain connectivity and pathology. The measurement of neuronal activity by functional MRI, however, is impeded by various nuisance signals that reduce the stability of functional

  3. Resting-state test-retest reliability of a priori defined canonical networks over different preprocessing steps.

    Science.gov (United States)

    Varikuti, Deepthi P; Hoffstaedter, Felix; Genon, Sarah; Schwender, Holger; Reid, Andrew T; Eickhoff, Simon B

    2017-04-01

    Resting-state functional connectivity analysis has become a widely used method for the investigation of human brain connectivity and pathology. The measurement of neuronal activity by functional MRI, however, is impeded by various nuisance signals that reduce the stability of functional connectivity. Several methods exist to address this predicament, but little consensus has yet been reached on the most appropriate approach. Given the crucial importance of reliability for the development of clinical applications, we here investigated the effect of various confound removal approaches on the test-retest reliability of functional-connectivity estimates in two previously defined functional brain networks. Our results showed that gray matter masking improved the reliability of connectivity estimates, whereas denoising based on principal components analysis reduced it. We additionally observed that refraining from using any correction for global signals provided the best test-retest reliability, but failed to reproduce anti-correlations between what have been previously described as antagonistic networks. This suggests that improved reliability can come at the expense of potentially poorer biological validity. Consistent with this, we observed that reliability was proportional to the retained variance, which presumably included structured noise, such as reliable nuisance signals (for instance, noise induced by cardiac processes). We conclude that compromises are necessary between maximizing test-retest reliability and removing variance that may be attributable to non-neuronal sources.
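
    The confound-removal approaches compared above share a common core: nuisance time courses (motion parameters, tissue signals, or the global signal) are regressed out of the ROI time series before connectivity is computed. A minimal numpy sketch of that core, with the choice of regressors deliberately left open since it is exactly the choice under study:

        import numpy as np

        def remove_confounds(timeseries, confounds):
            """timeseries: (T, R) ROI signals; confounds: (T, C) nuisance regressors."""
            X = np.column_stack([np.ones(len(confounds)), confounds])
            beta, *_ = np.linalg.lstsq(X, timeseries, rcond=None)
            return timeseries - X @ beta

        def connectivity(timeseries):
            """Pearson correlation matrix between ROI time courses."""
            return np.corrcoef(timeseries, rowvar=False)

    Test-retest reliability can then be quantified by correlating (or computing an intraclass correlation over) the connectivity estimates obtained from repeated sessions under each cleaning variant.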

  4. Resting-state test-retest reliability of a priori defined canonical networks over different preprocessing steps

    Science.gov (United States)

    Varikuti, Deepthi P.; Hoffstaedter, Felix; Genon, Sarah; Schwender, Holger; Reid, Andrew T.; Eickhoff, Simon B.

    2016-01-01

    Resting-state functional connectivity analysis has become a widely used method for the investigation of human brain connectivity and pathology. The measurement of neuronal activity by functional MRI, however, is impeded by various nuisance signals that reduce the stability of functional connectivity. Several methods exist to address this predicament, but little consensus has yet been reached on the most appropriate approach. Given the crucial importance of reliability for the development of clinical applications, we here investigated the effect of various confound removal approaches on the test-retest reliability of functional-connectivity estimates in two previously defined functional brain networks. Our results showed that grey matter masking improved the reliability of connectivity estimates, whereas de-noising based on principal components analysis reduced it. We additionally observed that refraining from using any correction for global signals provided the best test-retest reliability, but failed to reproduce anti-correlations between what have been previously described as antagonistic networks. This suggests that improved reliability can come at the expense of potentially poorer biological validity. Consistent with this, we observed that reliability was proportional to the retained variance, which presumably included structured noise, such as reliable nuisance signals (for instance, noise induced by cardiac processes). We conclude that compromises are necessary between maximizing test-retest reliability and removing variance that may be attributable to non-neuronal sources. PMID:27550015

  5. Sleep Quality of Call Handlers Employed in International Call Centers in National Capital Region of Delhi, India

    Directory of Open Access Journals (Sweden)

    JD Raja

    2016-10-01

    suspicion of insomnia or suspected insomnia; the rest had no sleep problem. Smoking, poor social support, heavy workload, lack of relaxation facility at office, and prolonged travel time to office were independent predictors of sleep quality (p<0.05). Conclusion: Call handlers have to compromise upon their sleep owing to the contemporary work settings in call centers. Safeguarding their health becomes an occupational health challenge to public health specialists.

  6. Consumer Experiences Calling Toll-Free Corporate Hotlines.

    Science.gov (United States)

    Martin, Charles L.; Smart, Denise T.

    1994-01-01

    Finds that dimensions that contribute to caller satisfaction (of toll-free corporate hotlines) included operator characteristics such as knowledge, courtesy, and interest; specific behaviors such as apologizing for a problem, thanking the consumer for calling, and encouraging them to call again; and reducing time placed on "hold." (SR)

  7. Call center performance with direct response advertising

    NARCIS (Netherlands)

    M. Kiygi Calli (Meltem); M. Weverbergh (Marcel); Ph.H.B.F. Franses (Philip Hans)

    2017-01-01

    This study investigates the manpower planning and the performance of a national call center dealing with car repairs and on the road interventions. We model the impact of advertising on the capacity required. The starting point is a forecasting model for the incoming calls, where we take

  8. Joint Preprocesser-Based Detectors for One-Way and Two-Way Cooperative Communication Networks

    KAUST Repository

    Abuzaid, Abdulrahman I.

    2014-05-01

    Efficient receiver designs for cooperative communication networks are becoming increasingly important. In previous work, cooperative networks communicated with the use of L relays. As the receiver is constrained, channel shortening and reduced-rank techniques were employed to design the preprocessing matrix that reduces the length of the received vector from L to U. In the first part of the work, a receiver structure is proposed which combines our proposed threshold selection criteria with the joint iterative optimization (JIO) algorithm that is based on the mean square error (MSE). Our receiver assists in determining the optimal U. Furthermore, this receiver provides the freedom to choose U for each frame depending on the tolerable difference allowed for MSE. Our study and simulation results show that by choosing an appropriate threshold, it is possible to gain in terms of complexity savings while having no or minimal effect on the BER performance of the system. Furthermore, the effect of channel estimation on the performance of the cooperative system is investigated. In the second part of the work, a joint preprocessor-based detector for cooperative communication networks is proposed for one-way and two-way relaying. This joint preprocessor-based detector operates on the principles of minimizing the symbol error rate (SER) instead of minimizing MSE. For a realistic assessment, pilot symbols are used to estimate the channel. From our simulations, it can be observed that our proposed detector achieves the same SER performance as that of the maximum likelihood (ML) detector with all participating relays. Additionally, our detector outperforms selection combining (SC), channel shortening (CS) scheme and reduced-rank techniques when using the same U. Finally, our proposed scheme has the lowest computational complexity.

  9. Comparing the efficacy of metronome beeps and stepping stones to adjust gait: steps to follow!

    Science.gov (United States)

    Bank, Paulina J M; Roerdink, Melvyn; Peper, C E

    2011-03-01

    Acoustic metronomes and visual targets have been used in rehabilitation practice to improve pathological gait. In addition, they may be instrumental in evaluating and training instantaneous gait adjustments. The aim of this study was to compare the efficacy of two cue types in inducing gait adjustments, viz. acoustic temporal cues in the form of metronome beeps and visual spatial cues in the form of projected stepping stones. Twenty healthy elderly (aged 63.2 ± 3.6 years) were recruited to walk on an instrumented treadmill at preferred speed and cadence, paced by either metronome beeps or projected stepping stones. Gait adaptations were induced using two manipulations: by perturbing the sequence of cues and by imposing switches from one cueing type to the other. Responses to these manipulations were quantified in terms of step-length and step-time adjustments, the percentage correction achieved over subsequent steps, and the number of steps required to restore the relation between gait and the beeps or stepping stones. The results showed that perturbations in a sequence of stepping stones were overcome faster than those in a sequence of metronome beeps. In switching trials, switching from metronome beeps to stepping stones was achieved faster than vice versa, indicating that gait was influenced more strongly by the stepping stones than the metronome beeps. Together these results revealed that, in healthy elderly, the stepping stones induced gait adjustments more effectively than did the metronome beeps. Potential implications for the use of metronome beeps and stepping stones in gait rehabilitation practice are discussed.

  10. Seven steps to raise world security. Op-Ed, published in the Finanical Times

    International Nuclear Information System (INIS)

    ElBaradei, M.

    2005-01-01

    In recent years, three phenomena have radically altered the security landscape. They are the emergence of a nuclear black market, the determined efforts by more countries to acquire technology to produce the fissile material usable in nuclear weapons, and the clear desire of terrorists to acquire weapons of mass destruction. The IAEA has been trying to solve these new problems with existing tools. But for every step forward, we have exposed vulnerabilities in the system. The system itself - the regime that implements the non-proliferation treaty - needs reinforcement. Some of the necessary remedies can be taken in New York at the meeting to be held in May, but only if governments are ready to act. With seven straightforward steps, and without amending the treaty, this conference could reach a milestone in strengthening world security. The first step: put a five-year hold on additional facilities for uranium enrichment and plutonium separation. Second, speed up existing efforts, led by the US global threat reduction initiative and others, to modify the research reactors worldwide operating with highly enriched uranium - particularly those with metal fuel that could be readily employed as bomb material. Third, raise the bar for inspection standards by establishing the 'additional protocol' as the norm for verifying compliance with the NPT. Fourth, call on the United Nations Security Council to act swiftly and decisively in the case of any country that withdraws from the NPT, in terms of the threat the withdrawal poses to international peace and security. Fifth, urge states to act on the Security Council's recent resolution 1540, to pursue and prosecute any illicit trading in nuclear material and technology. Sixth, call on the five nuclear weapon states party to the NPT to accelerate implementation of their 'unequivocal commitment' to nuclear disarmament, building on efforts such as the 2002 Moscow treaty between Russia and the US. Last, acknowledge the volatility of

  11. Meta-analysis is not an exact science: Call for guidance on quantitative synthesis decisions.

    Science.gov (United States)

    Haddaway, Neal R; Rytwinski, Trina

    2018-05-01

    Meta-analysis is becoming increasingly popular in the field of ecology and environmental management. It increases the effective power of analyses relative to single studies, and allows researchers to investigate effect modifiers and sources of heterogeneity that could not be easily examined within single studies. Many systematic reviewers will set out to conduct a meta-analysis as part of their synthesis, but meta-analysis requires a niche set of skills that are not widely held by the environmental research community. Each step in the process of carrying out a meta-analysis requires decisions that have both scientific and statistical implications. Reviewers are likely to be faced with a plethora of decisions over which effect size to choose, how to calculate variances, and how to build statistical models. Some of these decisions may be simple based on appropriateness of the options. At other times, reviewers must choose between equally valid approaches given the information available to them. This presents a significant problem when reviewers are attempting to conduct a reliable synthesis, such as a systematic review, where subjectivity is minimised and all decisions are documented and justified transparently. We propose three urgent, necessary developments within the evidence synthesis community. Firstly, we call on quantitative synthesis experts to improve guidance on how to prepare data for quantitative synthesis, providing explicit detail to support systematic reviewers. Secondly, we call on journal editors and evidence synthesis coordinating bodies (e.g. CEE) to ensure that quantitative synthesis methods are adequately reported in a transparent and repeatable manner in published systematic reviews. Finally, where faced with two or more broadly equally valid alternative methods or actions, reviewers should conduct multiple analyses, presenting all options, and discussing the implications of the different analytical approaches. We believe it is vital to tackle

  12. A Probabilistic Approach to Network Event Formation from Pre-Processed Waveform Data

    Science.gov (United States)

    Kohl, B. C.; Given, J.

    2017-12-01

    The current state of the art for seismic event detection still largely depends on signal detection at individual sensor stations, including picking accurate arrival times and correctly identifying phases, and relying on fusion algorithms to associate individual signal detections to form event hypotheses. But increasing computational capability has enabled progress toward the objective of fully utilizing body-wave recordings in an integrated manner to detect events without the necessity of previously recorded ground truth events. In 2011-2012 Leidos (then SAIC) operated a seismic network to monitor activity associated with geothermal field operations in western Nevada. We developed a new association approach for detecting and quantifying events by probabilistically combining pre-processed waveform data to deal with noisy data and clutter at local distance ranges. The ProbDet algorithm maps continuous waveform data into continuous conditional probability traces using a source model (e.g. Brune earthquake or Mueller-Murphy explosion) to map frequency content and an attenuation model to map amplitudes. Event detection and classification is accomplished by combining the conditional probabilities from the entire network using a Bayesian formulation. This approach was successful in producing a high-Pd, low-Pfa automated bulletin for a local network, and preliminary tests with regional and teleseismic data show that it has promise for global seismic and nuclear monitoring applications. The approach highlights several features that we believe are essential to achieving low-threshold automated event detection: it minimizes the utilization of individual seismic phase detections (in traditional techniques, errors in signal detection, timing, feature measurement and initial phase ID compound and propagate into errors in event formation); it has a formalized framework that utilizes information from non-detecting stations; and it has a formalized framework that utilizes source information, in
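
    The fusion step described above can be illustrated with a naive-Bayes-style combination of per-station probability traces. The actual ProbDet source and attenuation mappings are not reproduced here, and the station-independence assumption is ours:

        import numpy as np

        def combine_station_probabilities(p_stations):
            """p_stations: (N_stations, T) per-station event-probability traces.
            Returns a (T,) network trace via log-odds summation."""
            p = np.clip(p_stations, 1e-9, 1.0 - 1e-9)
            log_odds = np.log(p / (1.0 - p)).sum(axis=0)
            return 1.0 / (1.0 + np.exp(-log_odds))

    Note how stations with traces near 0.5 contribute little, while confident non-detections (values near 0) actively suppress an event hypothesis, which is one way information from non-detecting stations can enter the formulation.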

  13. Influence of step complexity and presentation style on step performance of computerized emergency operating procedures

    Energy Technology Data Exchange (ETDEWEB)

    Xu Song [Department of Industrial Engineering, Tsinghua University, Beijing 100084 (China); Li Zhizhong [Department of Industrial Engineering, Tsinghua University, Beijing 100084 (China)], E-mail: zzli@tsinghua.edu.cn; Song Fei; Luo Wei; Zhao Qianyi; Salvendy, Gavriel [Department of Industrial Engineering, Tsinghua University, Beijing 100084 (China)

    2009-02-15

    With the development of information technology, computerized emergency operating procedures (EOPs) are taking the place of paper-based ones. However, ergonomics issues of computerized EOPs have not been studied adequately since industrial practice is still quite limited. This study examined the influence of step complexity and presentation style of EOPs on step performance. A simulated computerized EOP system was developed in two presentation styles: Style A: one- and two-dimensional flowcharts combination; Style B: two-dimensional flowchart and success logic tree combination. Step complexity was quantified by a complexity measure model based on an entropy concept. Forty subjects participated in the experiment of EOP execution using the simulated system. The results of data analysis on the experiment data indicate that step complexity and presentation style could significantly influence step performance (both step error rate and operation time). Regression models were also developed. The regression analysis results imply that the operation time of a step can be well predicted by step complexity, while step error rate can only be partly predicted by it. The result of a questionnaire investigation implies that step error rate was influenced not only by the operation task itself but also by other human factors. These findings may be useful for the design and assessment of computerized EOPs.
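
    The abstract quantifies step complexity with an entropy-based measure but does not reproduce the model. Purely as an illustration of the concept, a Shannon-entropy score over the action elements making up a step behaves in the required direction: steps mixing many distinct action types score as more complex.

        import math
        from collections import Counter

        def step_entropy(actions):
            """actions: action labels in one EOP step, e.g. ['check', 'verify', 'adjust'].
            Hypothetical stand-in for the paper's entropy-based complexity model."""
            n = len(actions)
            counts = Counter(actions)
            return -sum((c / n) * math.log2(c / n) for c in counts.values())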

  14. Advertisement call of Scinax camposseabrai (Bokermann, 1968) (Anura: Hylidae), with comments on the call of three species of the Scinax ruber clade.

    Science.gov (United States)

    Novaes, Gabriel; Zina, Juliana

    2016-02-25

    Scinax camposseabrai was allocated to the Scinax ruber clade by Caramaschi & Cardoso (2006) on the basis of overall similarities such as a snout that is not pointed, breeding in open areas, and an advertisement call with multipulsed notes. This assumption about the call was based solely on an onomatopoeia provided by Bokermann (1968). Herein we provide a formal description of the advertisement call of S. camposseabrai and compare it with described calls of other S. ruber clade species. Additionally, we provide descriptions of the advertisement calls of three sympatric species of the S. ruber clade: S. eurydice (Bokermann), S. pachycrus (Miranda-Ribeiro) and S. cf. x-signatus.

  15. Diabetes PSA (:30) Step By Step

    Centers for Disease Control (CDC) Podcasts

    2009-10-24

    First steps to preventing diabetes. For Hispanic and Latino American audiences.  Created: 10/24/2009 by National Diabetes Education Program (NDEP), a joint program of the Centers for Disease Control and Prevention and the National Institutes of Health.   Date Released: 10/24/2009.

  16. Diabetes PSA (:60) Step By Step

    Centers for Disease Control (CDC) Podcasts

    2009-10-24

    First steps to preventing diabetes. For Hispanic and Latino American audiences.  Created: 10/24/2009 by National Diabetes Education Program (NDEP), a joint program of the Centers for Disease Control and Prevention and the National Institutes of Health.   Date Released: 10/24/2009.

  17. Environmental constraints and call evolution in torrent-dwelling frogs.

    Science.gov (United States)

    Goutte, Sandra; Dubois, Alain; Howard, Samuel D; Marquez, Rafael; Rowley, Jodi J L; Dehling, J Maximilian; Grandcolas, Philippe; Rongchuan, Xiong; Legendre, Frédéric

    2016-04-01

    Although acoustic signals are important for communication in many taxa, signal propagation is affected by environmental properties. Strong environmental constraints should drive call evolution, favoring signals with greater transmission distance and content integrity in a given calling habitat. Yet, few empirical studies have verified this prediction, possibly due to a shortcoming in habitat characterization, which is often too broad. Here we assess the potential impact of environmental constraints on the evolution of advertisement call in four groups of torrent-dwelling frogs in the family Ranidae. We reconstruct the evolution of calling site preferences, both broadly categorized and at a finer scale, onto a phylogenetic tree for 148 species with five markers (∼3600 bp). We test models of evolution for six call traits for 79 species with regard to the reconstructed history of calling site preferences and estimate their ancestral states. We find that in spite of existing morphological constraints, vocalizations of torrent-dwelling species are most probably constrained by the acoustic specificities of torrent habitats and particularly their high level of ambient noise. We also show that a fine-scale characterization of calling sites allows a better perception of the impact of environmental constraints on call evolution. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.

  18. Attitude of Farmers towards Kisan Call Centres

    Directory of Open Access Journals (Sweden)

    Shely Mary Koshy

    2017-09-01

    Full Text Available The present study was conducted to measure the attitude of farmers in Kerala, India towards Kisan Call Centre (KCC. Kisan Call Centre provides free agricultural advisory services to every citizen involved in agriculture through a toll free number. One hundred and fifty farmers who have utilized the Kisan Call Centre service were selected from the database of KCC. The results showed that the respondents had moderately favourable attitude towards KCC followed by highly favourable attitude. The variables digital divide, temporal awareness on KCC, satisfaction towards KCC and utilization of KCC were found to have a positive correlation with the attitude of respondents towards KCC.

  19. I2G: code for conversion of isotope-ordered cross-section libraries into group-ordered cross-section libraries

    International Nuclear Information System (INIS)

    Resnik, W.M. II; Bosler, G.E.

    1977-09-01

    Many current reactor physics codes accept cross-section libraries in an isotope-ordered form, convert them with internal preprocessing routines to a group-ordered form, and then perform calculations using these group-ordered data. Occasionally, because of storage and time limitations, the preprocessing routines in these codes cannot convert very large multigroup isotope-ordered libraries. For this reason, the I2G code, i.e., ISOTXS to GRUPXS, was written to convert externally isotope-ordered cross section libraries in the standard file format called ISOTXS to group-ordered libraries in the standard format called GRUPXS. This code uses standardized multilevel data management routines which establish a strategy for the efficient conversion of large libraries. The I2G code is exportable contingent on access to, and an intimate familiarization with, the multilevel routines. These routines are machine dependent, and therefore must be provided by the importing facility. 6 figures, 3 tables
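
    At its core the ISOTXS-to-GRUPXS conversion is a reordering: data stored as one record per isotope (each spanning all energy groups) is rewritten as one record per group (each spanning all isotopes). For an in-memory array this is just a transpose, as the sketch below shows with illustrative shapes; I2G's actual contribution is performing this reordering out of core, via its multilevel data management routines, for libraries too large to hold at once:

        import numpy as np

        n_isotopes, n_groups = 50, 30
        iso_ordered = np.random.rand(n_isotopes, n_groups)   # ISOTXS-like layout
        group_ordered = iso_ordered.T.copy()                 # GRUPXS-like layout
        assert group_ordered[7, 3] == iso_ordered[3, 7]      # same datum, reordered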

  20. Partitioning a call graph

    NARCIS (Netherlands)

    Bisseling, R.H.; Byrka, J.; Cerav-Erbas, S.; Gvozdenovic, N.; Lorenz, M.; Pendavingh, R.A.; Reeves, C.; Röger, M.; Verhoeven, A.; Berg, van den J.B.; Bhulai, S.; Hulshof, J.; Koole, G.; Quant, C.; Williams, J.F.

    2006-01-01

    Splitting a large software system into smaller and more manageable units has become an important problem for many organizations. The basic structure of a software system is given by a directed graph with vertices representing the programs of the system and arcs representing calls from one program to

  1. Too close to call

    DEFF Research Database (Denmark)

    Kurrild-Klitgaard, Peter

    2012-01-01

    a number of other frequent explanations and is found to be quite robust. When augmented with approval ratings for incumbent presidents, the explanatory power increases to 83 pct. and only incorrectly calls one of the last 15 US presidential elections. Applied to the 2012 election as a forecasting model...

  2. A call for surveys

    DEFF Research Database (Denmark)

    Bernstein, Philip A.; Jensen, Christian S.; Tan, Kian-Lee

    2012-01-01

    The database field is experiencing an increasing need for survey papers. We call on more researchers to set aside time for this important writing activity. The database field is growing in population, scope of topics covered, and the number of papers published. Each year, thousands of new papers ...

  3. Call centers with a postponed callback offer

    NARCIS (Netherlands)

    B. Legros (Benjamin); S. Ding (Sihan); R.D. van der Mei (Rob); O. Jouini (Oualid)

    2017-01-01

    We study a call center model with a postponed callback option. A customer at the head of the queue whose elapsed waiting time achieves a given threshold receives a voice message mentioning the option to be called back later. This callback option differs from the traditional ones found in

  4. Does my step look big in this? A visual illusion leads to safer stepping behaviour.

    Directory of Open Access Journals (Sweden)

    David B Elliott

    Full Text Available BACKGROUND: Tripping is a common factor in falls and a typical safety strategy to avoid tripping on steps or stairs is to increase foot clearance over the step edge. In the present study we asked whether the perceived height of a step could be increased using a visual illusion and whether this would lead to the adoption of a safer stepping strategy, in terms of greater foot clearance over the step edge. The study also addressed the controversial question of whether motor actions are dissociated from visual perception. METHODOLOGY/PRINCIPAL FINDINGS: 21 young, healthy subjects perceived the step to be higher in a configuration of the horizontal-vertical illusion compared to a reverse configuration (p = 0.01). During a simple stepping task, maximum toe elevation changed by an amount corresponding to the size of the visual illusion (p<0.001). Linear regression analyses showed highly significant associations between perceived step height and maximum toe elevation for all conditions. CONCLUSIONS/SIGNIFICANCE: The perceived height of a step can be manipulated using a simple visual illusion, leading to the adoption of a safer stepping strategy in terms of greater foot clearance over a step edge. In addition, the strong link found between perception of a visual illusion and visuomotor action provides additional support to the view that the original, controversial proposal by Goodale and Milner (1992) of two separate and distinct visual streams for perception and visuomotor action should be re-evaluated.

  5. Accuracy of Single-Step versus 2-Step Double-Mix Impression Technique

    DEFF Research Database (Denmark)

    Franco, Eduardo Batista; da Cunha, Leonardo Fernandes; Herrera, Francyle Simões

    2011-01-01

    Objective. To investigate the accuracy of dies obtained from single-step and 2-step double-mix impressions. Material and Methods. Impressions (n = 10) of a stainless steel die simulating a complete crown preparation were performed using a polyether (Impregum Soft Heavy and Light body) and a vinyl...

  6. Tools and Databases of the KOMICS Web Portal for Preprocessing, Mining, and Dissemination of Metabolomics Data

    Directory of Open Access Journals (Sweden)

    Nozomu Sakurai

    2014-01-01

    Full Text Available A metabolome—the collection of comprehensive quantitative data on metabolites in an organism—has been increasingly utilized for applications such as data-intensive systems biology, disease diagnostics, biomarker discovery, and assessment of food quality. A considerable number of tools and databases have been developed to date for the analysis of data generated by various combinations of chromatography and mass spectrometry. We report here a web portal named KOMICS (The Kazusa Metabolomics Portal), where the tools and databases that we developed are available for free to academic users. KOMICS includes the tools and databases for preprocessing, mining, visualization, and publication of metabolomics data. Improvements in the annotation of unknown metabolites and dissemination of comprehensive metabolomic data are the primary aims behind the development of this portal. For this purpose, PowerGet and FragmentAlign include a manual curation function for the results of metabolite feature alignments. A metadata-specific wiki-based database, Metabolonote, functions as a hub of web resources related to the submitters' work. This feature is expected to increase citation of the submitters' work, thereby promoting data publication. As an example of the practical use of KOMICS, a workflow for a study on Jatropha curcas is presented. The tools and databases available at KOMICS should contribute to enhanced production, interpretation, and utilization of metabolomic Big Data.

  7. Tools and databases of the KOMICS web portal for preprocessing, mining, and dissemination of metabolomics data.

    Science.gov (United States)

    Sakurai, Nozomu; Ara, Takeshi; Enomoto, Mitsuo; Motegi, Takeshi; Morishita, Yoshihiko; Kurabayashi, Atsushi; Iijima, Yoko; Ogata, Yoshiyuki; Nakajima, Daisuke; Suzuki, Hideyuki; Shibata, Daisuke

    2014-01-01

    A metabolome--the collection of comprehensive quantitative data on metabolites in an organism--has been increasingly utilized for applications such as data-intensive systems biology, disease diagnostics, biomarker discovery, and assessment of food quality. A considerable number of tools and databases have been developed to date for the analysis of data generated by various combinations of chromatography and mass spectrometry. We report here a web portal named KOMICS (The Kazusa Metabolomics Portal), where the tools and databases that we developed are available for free to academic users. KOMICS includes the tools and databases for preprocessing, mining, visualization, and publication of metabolomics data. Improvements in the annotation of unknown metabolites and dissemination of comprehensive metabolomic data are the primary aims behind the development of this portal. For this purpose, PowerGet and FragmentAlign include a manual curation function for the results of metabolite feature alignments. A metadata-specific wiki-based database, Metabolonote, functions as a hub of web resources related to the submitters' work. This feature is expected to increase citation of the submitters' work, thereby promoting data publication. As an example of the practical use of KOMICS, a workflow for a study on Jatropha curcas is presented. The tools and databases available at KOMICS should contribute to enhanced production, interpretation, and utilization of metabolomic Big Data.

  8. Perturbed Strong Stability Preserving Time-Stepping Methods For Hyperbolic PDEs

    KAUST Repository

    Hadjimichael, Yiannis

    2017-09-30

    A plethora of physical phenomena are modelled by hyperbolic partial differential equations, for which the exact solution is usually not known. Numerical methods are employed to approximate the solution to hyperbolic problems; however, in many cases it is difficult to satisfy certain physical properties while maintaining high order of accuracy. In this thesis, we develop high-order time-stepping methods that are capable of maintaining stability constraints of the solution, when coupled with suitable spatial discretizations. Such methods are called strong stability preserving (SSP) time integrators, and we mainly focus on perturbed methods that use both upwind- and downwind-biased spatial discretizations. Firstly, we introduce a new family of third-order implicit Runge–Kutta methods with arbitrarily large SSP coefficient. We investigate the stability and accuracy of these methods and we show that they perform well on hyperbolic problems with large CFL numbers. Moreover, we extend the analysis of SSP linear multistep methods to semi-discretized problems for which different terms on the right-hand side of the initial value problem satisfy different forward Euler (or circle) conditions. Optimal perturbed and additive monotonicity-preserving linear multistep methods are studied in the context of such problems. Optimal perturbed methods attain augmented monotonicity-preserving step sizes when the different forward Euler conditions are taken into account. On the other hand, we show that optimal SSP additive methods achieve a monotonicity-preserving step-size restriction no better than that of the corresponding non-additive SSP linear multistep methods. Furthermore, we develop the first SSP linear multistep methods of order two and three with variable step size, and study their optimality. We describe an optimal step-size strategy and demonstrate the effectiveness of these methods on various one- and multi-dimensional problems. Finally, we establish necessary conditions
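
    The perturbed, downwind-biased schemes developed in the thesis go beyond a short example, but the baseline SSP idea is easy to illustrate: the classic third-order explicit SSP Runge–Kutta method of Shu and Osher advances the solution through convex combinations of forward Euler steps, so any stability property of forward Euler carries over under a step-size restriction. A minimal sketch on linear advection with first-order upwind differencing (the grid size, CFL number, and initial profile are arbitrary choices here):

        import numpy as np

        def ssprk3_step(u, dt, L):
            """One step of the classic third-order SSP Runge-Kutta (Shu-Osher) method.
            Each stage is a convex combination of forward Euler steps, which is what
            preserves strong stability; the SSP coefficient of this method is 1."""
            u1 = u + dt * L(u)
            u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
            return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

        # Linear advection u_t + u_x = 0 on a periodic grid, first-order upwind in space
        n = 200
        x = np.linspace(0.0, 1.0, n, endpoint=False)
        dx = x[1] - x[0]
        upwind = lambda u: -(u - np.roll(u, 1)) / dx

        u = np.exp(-200.0 * (x - 0.3) ** 2)  # smooth initial bump
        dt = 0.8 * dx                        # CFL number 0.8 <= SSP coefficient
        for _ in range(int(0.4 / dt)):
            u = ssprk3_step(u, dt, upwind)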

  9. Can sedentary behavior be made more active? A randomized pilot study of TV commercial stepping versus walking

    Directory of Open Access Journals (Sweden)

    Steeves Jeremy A

    2012-08-01

    Full Text Available Abstract Background There is a growing problem of physical inactivity in America, and approximately a quarter of the population report being completely sedentary during their leisure time. In the U.S., TV viewing is the most common leisure-time activity. Stepping in place during TV commercials (TV Commercial Stepping) could increase physical activity. The purpose of this study was to examine the feasibility of incorporating physical activity (PA) into a traditionally sedentary activity, by comparing TV Commercial Stepping during 90 min/d of TV programming to traditional exercise (Walking). Methods A randomized controlled pilot study of the impact of 6 months of TV Commercial Stepping versus Walking 30 min/day in adults was conducted. 58 sedentary, overweight (body mass index 33.5 ± 4.8 kg/m2) adults (age 52.0 ± 8.6 y) were randomly assigned to one of two 6-mo behavioral PA programs: 1) TV Commercial Stepping; or 2) Walking 30 min/day. To help facilitate behavior changes participants received 6 monthly phone calls, attended monthly meetings for the first 3 months, and received monthly newsletters for the last 3 months. Using intent-to-treat analysis, changes in daily steps, TV viewing, diet, body weight, waist and hip circumference, and percent fat were compared at baseline, 3, and 6 mo. Data were collected in 2010–2011, and analyzed in 2011. Results Of the 58 subjects, 47 (81%) were retained for follow-up at the completion of the 6-mo program. From baseline to 6-mo, both groups significantly increased their daily steps [4611 ± 1553 steps/d vs. 7605 ± 2471 steps/d (TV Commercial Stepping); 4909 ± 1335 steps/d vs. 7865 ± 1939 steps/d (Walking); P …]. Conclusions Participants in both the TV Commercial Stepping and Walking groups had favorable changes in daily steps, TV viewing, diet, and anthropometrics. PA can be performed while viewing TV commercials and this may be a feasible alternative to traditional approaches for …

  10. The Role of Analyst Conference Calls in Capital Markets

    NARCIS (Netherlands)

    E.M. Roelofsen (Erik)

    2010-01-01

    textabstractMany firms conduct a conference call with analysts shortly after the quarterly earnings announcement. In these calls, management discusses the completed quarter, and analysts can ask questions. Due to SEC requirements, conference calls in the United States are virtually always live

  11. Parser Adaptation for Social Media by Integrating Normalization

    NARCIS (Netherlands)

    van der Goot, Rob; van Noord, Gerardus

    This work explores normalization for parser adaptation. Traditionally, normalization is used as a separate pre-processing step. We show that integrating the normalization model into the parsing algorithm is beneficial. This way, multiple normalization candidates can be leveraged, which improves

  12. [Influence of Spectral Pre-Processing on PLS Quantitative Model of Detecting Cu in Navel Orange by LIBS].

    Science.gov (United States)

    Li, Wen-bing; Yao, Lin-tao; Liu, Mu-hua; Huang, Lin; Yao, Ming-yin; Chen, Tian-bing; He, Xiu-wen; Yang, Ping; Hu, Hui-qin; Nie, Jiang-hui

    2015-05-01

    Cu in navel orange was detected rapidly by laser-induced breakdown spectroscopy (LIBS) combined with partial least squares (PLS) for quantitative analysis, and the effect of different spectral data pretreatment methods on the detection accuracy of the model was then explored. Spectral data for the 52 Gannan navel orange samples were pretreated by different combinations of data smoothing, mean centering, and standard normal variate transformation. The 319~338 nm wavelength section containing characteristic spectral lines of Cu was then selected to build PLS models, and the main evaluation indexes of the models, such as the regression coefficient (r), root mean square error of cross validation (RMSECV), and root mean square error of prediction (RMSEP), were compared and analyzed. The three indicators of the PLS model built after 13-point smoothing and mean centering reached 0.9928, 3.43, and 3.4, respectively, and the average relative error of the prediction model was only 5.55%; in short, this model gave the best calibration and prediction quality. The results show that by selecting an appropriate data pre-processing method, the prediction accuracy of PLS quantitative models for fruits and vegetables detected by LIBS can be improved effectively, providing a new method for fast and accurate detection of fruits and vegetables by LIBS.
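
    A minimal sketch of this kind of pipeline with SciPy and scikit-learn, on invented spectra (the array shapes, reference concentrations, and number of PLS components are placeholders; SNV and 13-point Savitzky-Golay smoothing stand in for the paper's pretreatments):

        import numpy as np
        from scipy.signal import savgol_filter
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        def snv(spectra):
            """Standard normal variate: center and scale each spectrum individually."""
            return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

        # Hypothetical LIBS spectra: 52 samples x wavelength channels in the 319~338 nm window
        rng = np.random.default_rng(0)
        X = rng.random((52, 200))
        y = rng.random(52)  # reference Cu concentrations (placeholder)

        X_pre = savgol_filter(snv(X), window_length=13, polyorder=2, axis=1)  # 13-point smoothing
        pls = PLSRegression(n_components=5)  # PLSRegression centers X and y internally
        y_cv = cross_val_predict(pls, X_pre, y, cv=10).ravel()
        print(f"RMSECV = {np.sqrt(np.mean((y - y_cv) ** 2)):.3f}")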

  13. Haralick texture features from apparent diffusion coefficient (ADC) MRI images depend on imaging and pre-processing parameters.

    Science.gov (United States)

    Brynolfsson, Patrik; Nilsson, David; Torheim, Turid; Asklund, Thomas; Karlsson, Camilla Thellenberg; Trygg, Johan; Nyholm, Tufve; Garpebring, Anders

    2017-06-22

    In recent years, texture analysis of medical images has become increasingly popular in studies investigating diagnosis, classification and treatment response assessment of cancerous disease. Despite numerous applications in oncology and medical imaging in general, there is no consensus regarding texture analysis workflow, or reporting of parameter settings crucial for replication of results. The aim of this study was to assess how sensitive Haralick texture features of apparent diffusion coefficient (ADC) MR images are to changes in five parameters related to image acquisition and pre-processing: noise, resolution, how the ADC map is constructed, the choice of quantization method, and the number of gray levels in the quantized image. We found that noise, resolution, choice of quantization method and the number of gray levels in the quantized images had a significant influence on most texture features, and that the effect size varied between different features. Different methods for constructing the ADC maps did not have an impact on any texture feature. Based on our results, we recommend using images with similar resolutions and noise levels, using one quantization method, and the same number of gray levels in all quantized images, to make meaningful comparisons of texture feature results between different subjects.
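
    To see why the number of gray levels matters, one can quantize the same image at several levels and recompute the co-occurrence features. A minimal sketch assuming scikit-image >= 0.19 (the random array stands in for an ADC map; the distances, angles, and two reported features are arbitrary choices):

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        rng = np.random.default_rng(1)
        image = rng.random((64, 64))  # placeholder for an ADC map slice

        # Quantize the same image with different numbers of gray levels and compare
        for levels in (8, 32, 128):
            edges = np.linspace(image.min(), image.max(), levels + 1)[1:-1]
            q = np.digitize(image, edges).astype(np.uint8)
            glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                                symmetric=True, normed=True)
            contrast = graycoprops(glcm, "contrast")[0, 0]
            homogeneity = graycoprops(glcm, "homogeneity")[0, 0]
            print(f"{levels:3d} levels: contrast={contrast:.3f}, homogeneity={homogeneity:.3f}")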

  14. Individual and contextual variation in Thomas langur male loud calls

    NARCIS (Netherlands)

    Wich, S.A.; Koski, S.; Vries, Han de; Schaik, Carel P. van

    2003-01-01

    Individual and contextual differences in male loud calls of wild Thomas langurs (Presbytis thomasi) were studied in northern Sumatra, Indonesia. Loud calls were given in the following contexts: morning calls, vocal responses to other groups, between-group encounter calls and alarm calls. Loud

  15. Preimages for Step-Reduced SHA-2

    DEFF Research Database (Denmark)

    Aoki, Kazumaro; Guo, Jian; Matusiewicz, Krystian

    2009-01-01

    In this paper, we present preimage attacks on up to 43-step SHA-256 (around 67% of the total 64 steps) and 46-step SHA-512 (around 57.5% of the total 80 steps), which significantly increases the number of attacked steps compared to the best previously published preimage attack working for 24 steps. ... The time complexities are 2^251.9, 2^509 for finding pseudo-preimages and 2^254.9, 2^511.5 compression function operations for full preimages. The memory requirements are modest, around 2^6 words for 43-step SHA-256 and 46-step SHA-512. The pseudo-preimage attack also applies to 43-step SHA-224 and SHA-384...

  16. Help Options in CALL: A Systematic Review

    Science.gov (United States)

    Cardenas-Claros, Monica S.; Gruba, Paul A.

    2009-01-01

    This paper is a systematic review of research investigating help options in the different language skills in computer-assisted language learning (CALL). In this review, emerging themes along with issues affecting help option research are identified and discussed. We argue that help options in CALL are application resources that do not only seem…

  17. The Call to Teach and Teacher Hopefulness

    Science.gov (United States)

    Bullough, Robert V., Jr.; Hall-Kenyon, Kendra M.

    2011-01-01

    The purpose of this paper is to explore teacher motivation and well-being. Our analysis focuses on two central concepts, the notion of a "calling to teach" and of teacher "hopefulness." Data from 205 preservice and inservice teachers were collected to determine teachers' sense of calling and level of hope. Results indicate that overwhelmingly,…

  18. Phase-aware echocardiogram stabilization using keyframes.

    Science.gov (United States)

    Wu, Hui; Huynh, Toan T; Souvenir, Richard

    2017-01-01

    This paper presents an echocardiogram stabilization method designed to compensate for unwanted auxiliary motion. Echocardiograms contain both deformable cardiac motion and approximately rigid motion due to a number of factors. The goal of this work is to stabilize the video, while preserving the informative deformable cardiac motion. Our approach incorporates synchronized side information, extracted from electrocardiography (ECG), which provides a proxy for cardiac phase. To avoid the computational expense of pairwise alignment, we propose an efficient strategy for keyframe selection, formulated as a submodular optimization problem. We evaluate our approach quantitatively on synthetic data and demonstrate its benefit as a preprocessing step for two common echocardiogram applications: denoising and left ventricle segmentation. In both cases, preprocessing with our method improved the performance compared to no preprocessing or other alignment approaches. Copyright © 2016 Elsevier B.V. All rights reserved.
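
    The paper formulates keyframe selection as submodular optimization; the standard tool for such problems is the greedy algorithm, which achieves a (1 - 1/e) approximation for monotone submodular objectives under a cardinality constraint. A minimal sketch using a facility-location-style coverage objective over frame similarities (an illustrative stand-in, not the paper's exact objective):

        import numpy as np

        def greedy_keyframes(similarity, k):
            """Greedily maximize F(S) = sum_i max_{j in S} similarity[i, j]."""
            n = similarity.shape[0]
            selected, cover = [], np.zeros(n)
            for _ in range(k):
                # Marginal gain of adding each candidate frame j
                gains = np.maximum(similarity, cover[:, None]).sum(axis=0) - cover.sum()
                j = int(np.argmax(gains))
                selected.append(j)
                cover = np.maximum(cover, similarity[:, j])
            return selected

        # Toy example: similarities between the frames of a 100-frame clip
        rng = np.random.default_rng(2)
        feats = rng.random((100, 16))  # placeholder per-frame feature vectors
        print(greedy_keyframes(feats @ feats.T, k=5))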

  19. Examining calling as a double-edged sword for employability

    NARCIS (Netherlands)

    Lysova, Evgenia I.; Jansen, Paul G.W.; Khapova, Svetlana N.; Plomp, Judith; Tims, Maria

    2018-01-01

    Using a two-study design (total N = 1232), this paper examines the relationship between calling and employability. We suggest that, on the one hand, calling can positively relate to employability due to individuals’ engagement in proactive professional development (PPD). On the other hand, calling

  20. Process service quality evaluation based on Dempster-Shafer theory and support vector machine.

    Science.gov (United States)

    Pei, Feng-Que; Li, Dong-Bo; Tong, Yi-Fei; He, Fei

    2017-01-01

    Traditional service quality evaluations depend heavily on human judgment, which leads to low accuracy, poor reliability, and weak predictive power. This paper proposes a method, called SVMs-DS, that employs support vector machines (SVMs) and Dempster-Shafer evidence theory to evaluate the service quality of a production process, handling a high number of input features with a small sample set. Features that can affect production quality are extracted by a large number of sensors. Preprocessing steps such as feature simplification and normalization are reduced. Basic probability assignments (BPAs) are constructed from three individual SVM models, supporting both qualitative and quantitative evaluation. The process service quality evaluation results are validated using Dempster's combination rule; the decision threshold for resolving conflicting results is generated from the three SVM models. A case study is presented to demonstrate the effectiveness of the SVMs-DS method.
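
    Dempster's rule is the core combination step here. A minimal sketch of the rule for BPAs over a two-hypothesis frame (the masses are invented for illustration; because the rule is associative, BPAs from three SVM models can be combined pairwise):

        from itertools import product

        def dempster_combine(m1, m2):
            """Combine two basic probability assignments with Dempster's rule.
            BPAs map frozenset hypotheses to masses; mass falling on an empty
            intersection is conflict, removed by renormalization."""
            combined, conflict = {}, 0.0
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb
            if conflict >= 1.0:
                raise ValueError("total conflict: BPAs cannot be combined")
            return {h: w / (1.0 - conflict) for h, w in combined.items()}

        # Hypothetical BPAs from two SVM classifiers over the frame {good, bad}
        GOOD, BAD, EITHER = frozenset({"good"}), frozenset({"bad"}), frozenset({"good", "bad"})
        m1 = {GOOD: 0.7, BAD: 0.1, EITHER: 0.2}
        m2 = {GOOD: 0.6, BAD: 0.3, EITHER: 0.1}
        print(dempster_combine(m1, m2))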