WorldWideScience

Sample records for classification response times

  1. Logical-Rule Models of Classification Response Times: A Synthesis of Mental-Architecture, Random-Walk, and Decision-Bound Approaches

    Science.gov (United States)

    Fific, Mario; Little, Daniel R.; Nosofsky, Robert M.

    2010-01-01

    We formalize and provide tests of a set of logical-rule models for predicting perceptual classification response times (RTs) and choice probabilities. The models are developed by synthesizing mental-architecture, random-walk, and decision-bound approaches. According to the models, people make independent decisions about the locations of stimuli…

  2. Material Classification Using Raw Time-of-Flight Measurements

    KAUST Repository

    Su, Shuochen; Heide, Felix; Swanson, Robin J.; Klein, Jonathan; Callenberg, Clara; Hullin, Matthias; Heidrich, Wolfgang

    2016-01-01

    We propose a material classification method using raw time-of-flight (ToF) measurements. ToF cameras capture the correlation between a reference signal and the temporal response of material to incident illumination. Such measurements encode unique

  3. Effectiveness of Multivariate Time Series Classification Using Shapelets

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2015-01-01

    Typically, time series classifiers require signal pre-processing (filtering the signal from noise, artifact removal, etc.), enhancement of signal features (amplitude, frequency, spectrum, etc.), and classification of the signal features in feature space using classical techniques and classification algorithms for multivariate data. We consider a method of classifying time series that does not require enhancement of the signal features. The method uses time series shapelets, i.e., small fragments of a series that best reflect the properties of one of its classes. Despite the significant number of publications on the theory of shapelets and their application to time series classification, evaluating the effectiveness of this technique remains a relevant task. The objective of this publication is to study the effectiveness of a number of modifications of the original shapelet method as applied to multivariate series classification, a little-studied problem. The paper states the problem of multivariate time series classification using shapelets and describes the basic shapelet-based method of binary classification, as well as various generalizations and a proposed modification of the method. It also presents software that implements the modified method and the results of computational experiments confirming the effectiveness of the algorithmic and software solutions. The paper shows that the modified method and the accompanying software reach a classification accuracy of about 85% at best. The shapelet search time increases in proportion to the dimension of the input data.
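
    The shapelet idea summarized above reduces to two operations: the minimum distance between a candidate subsequence and each series, and a threshold on that distance that best separates the two classes. A minimal illustrative sketch in Python (not the authors' implementation; the toy data and the exhaustive threshold search are assumptions):

        import numpy as np

        def min_subsequence_dist(series, shapelet):
            """Smallest Euclidean distance between the shapelet and any window of the series."""
            m = len(shapelet)
            return min(np.linalg.norm(series[i:i + m] - shapelet)
                       for i in range(len(series) - m + 1))

        def shapelet_split_accuracy(train_series, labels, shapelet):
            """Best binary accuracy obtainable by thresholding the shapelet distance."""
            d = np.array([min_subsequence_dist(s, shapelet) for s in train_series])
            y = np.asarray(labels, dtype=bool)
            best = 0.0
            for t in np.unique(d):
                pred = d <= t                          # "close to the shapelet" -> positive class
                best = max(best, (pred == y).mean(), (pred != y).mean())
            return best

        # toy usage: positive series contain a bump that the shapelet matches, negatives do not
        rng = np.random.default_rng(0)
        bump = np.exp(-np.linspace(-3, 3, 20) ** 2)
        pos = [rng.normal(0, 0.1, 100) + np.pad(bump, (i, 80 - i)) for i in range(0, 40, 10)]
        neg = [rng.normal(0, 0.1, 100) for _ in range(4)]
        print(shapelet_split_accuracy(pos + neg, [1] * 4 + [0] * 4, bump))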

  4. Time-reversal of electromagnetic scattering for small scatterer classification

    International Nuclear Information System (INIS)

    Smith, J Torquil; Berryman, James G

    2012-01-01

    Time-reversal operators, or the alternatively labelled, but equivalent, multistatic response matrix methods, are used to show how to determine the number of scatterers present in an electromagnetic scattering scenario that might be typical of UneXploded Ordnance (UXO) detection, classification and removal applications. Because the nature of the target UXO application differs from that of many other common inversion problems, emphasis is placed here on classification and enumeration rather than on detailed imaging. The main technical issues necessarily revolve around showing that it is possible to find a sufficient number of constraints via multiple measurements (i.e. using several distinct views at the target site) to solve the enumeration problem. The main results show that five measurements with antenna pairs are generally adequate to solve the classification and enumeration problems. However, these results also demonstrate a need for decreasing noise levels in the multistatic matrix as the number n of scatterers increases for the intended practical applications of the method. (paper)

  5. Material Classification Using Raw Time-of-Flight Measurements

    KAUST Repository

    Su, Shuochen

    2016-12-13

    We propose a material classification method using raw time-of-flight (ToF) measurements. ToF cameras capture the correlation between a reference signal and the temporal response of material to incident illumination. Such measurements encode unique signatures of the material, i.e. the degree of subsurface scattering inside a volume. Subsequently, it offers an orthogonal domain of feature representation compared to conventional spatial and angular reflectance-based approaches. We demonstrate the effectiveness, robustness, and efficiency of our method through experiments and comparisons of real-world materials.

  6. Response Mixture Modeling: Accounting for Heterogeneity in Item Characteristics across Response Times.

    Science.gov (United States)

    Molenaar, Dylan; de Boeck, Paul

    2018-06-01

    In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.
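
    As a rough numerical illustration of the element-wise mixture described above, the sketch below mixes two 2PL item models (a "fast" and a "slow" process) with a weight that depends on the log response time. The functional forms and parameter names are assumptions for illustration, not the authors' specification:

        import numpy as np

        def logistic(x):
            return 1.0 / (1.0 + np.exp(-x))

        def p_correct_mixture(theta, log_rt, fast, slow, mix):
            """Element-wise mixture: each response is governed by a fast or a slow process
            with different 2PL item parameters; the mixing weight depends on response time."""
            w_slow = logistic(mix["gamma0"] + mix["gamma1"] * log_rt)   # P(slow process | RT)
            p_fast = logistic(fast["a"] * (theta - fast["b"]))
            p_slow = logistic(slow["a"] * (theta - slow["b"]))
            return (1 - w_slow) * p_fast + w_slow * p_slow

        # toy usage: one examinee, three items
        theta, log_rt = 0.5, np.array([-0.2, 0.4, 1.1])
        fast = {"a": np.array([1.5, 1.2, 1.0]), "b": np.array([-0.5, 0.0, 0.5])}
        slow = {"a": np.array([0.8, 0.9, 1.1]), "b": np.array([-1.0, -0.2, 0.3])}
        print(p_correct_mixture(theta, log_rt, fast, slow, {"gamma0": 0.0, "gamma1": 1.5}))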

  7. Real-time fMRI using brain-state classification.

    Science.gov (United States)

    LaConte, Stephen M; Peltier, Scott J; Hu, Xiaoping P

    2007-10-01

    We have implemented a real-time functional magnetic resonance imaging system based on multivariate classification. This approach is distinctly different from spatially localized real-time implementations, since it does not require prior assumptions about functional localization and individual performance strategies, and has the ability to provide feedback based on intuitive translations of brain state rather than localized fluctuations. Thus this approach provides the capability for a new class of experimental designs in which real-time feedback control of the stimulus is possible: rather than using a fixed paradigm, experiments can adaptively evolve as subjects receive brain-state feedback. In this report, we describe our implementation and characterize its performance capabilities. We observed approximately 80% classification accuracy using whole brain, block-design, motor data. Within both left and right motor task conditions, important differences exist between the initial transient period produced by task switching (changing between rapid left or right index finger button presses) and the subsequent stable period during sustained activity. Further analysis revealed that very high accuracy is achievable during stable task periods, and that the responsiveness of the classifier to changes in task condition can be much faster than signal time-to-peak rates. Finally, we demonstrate the versatility of this implementation with respect to behavioral task, suggesting that our results are applicable across a spectrum of cognitive domains. Beyond basic research, this technology can complement electroencephalography-based brain computer interface research, and has potential applications in the areas of biofeedback rehabilitation, lie detection, learning studies, virtual reality-based training, and enhanced conscious awareness. Wiley-Liss, Inc.
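
    The classification step described above can be approximated offline with a linear classifier trained on whole-brain patterns from labelled blocks and then applied to each incoming volume. The sketch below uses synthetic data and scikit-learn as stand-ins; it is not the authors' real-time system:

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(1)
        n_vols, n_voxels = 200, 5000
        X_train = rng.normal(size=(n_vols, n_voxels))      # stand-in for training volumes
        y_train = rng.integers(0, 2, n_vols)               # block labels (e.g. left vs right)
        X_train[y_train == 1, :50] += 0.5                  # inject a weak "activation" pattern

        clf = make_pipeline(StandardScaler(), LinearSVC())
        clf.fit(X_train, y_train)

        def classify_incoming_volume(volume):
            """Would be called once per acquired volume in a real-time loop."""
            return int(clf.predict(volume.reshape(1, -1))[0])

        new_volume = rng.normal(size=n_voxels)
        new_volume[:50] += 0.5                             # pattern resembling class 1
        print(classify_incoming_volume(new_volume))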

  8. Real time automatic scene classification

    NARCIS (Netherlands)

    Verbrugge, R.; Israël, Menno; Taatgen, N.; van den Broek, Egon; van der Putten, Peter; Schomaker, L.; den Uyl, Marten J.

    2004-01-01

    This work has been done as part of the EU VICAR (IST) project and the EU SCOFI project (IAP). The aim of the first project was to develop a real-time video indexing, classification, annotation, and retrieval system. For our systems, we have adapted the approach of Picard and Minka [3], who categorized

  9. A Classification Scheme for Glaciological AVA Responses

    Science.gov (United States)

    Booth, A.; Emir, E.

    2014-12-01

    A classification scheme is proposed for amplitude vs. angle (AVA) responses as an aid to the interpretation of seismic reflectivity in glaciological research campaigns. AVA responses are a powerful tool in characterising the material properties of glacier ice and its substrate. However, before interpreting AVA data, careful true amplitude processing is required to constrain basal reflectivity and compensate amplitude decay mechanisms, including anelastic attenuation and spherical divergence. These fundamental processing steps can be difficult to design in cases of noisy data, e.g. where a target reflection is contaminated by surface wave energy (in the case of shallow glaciers) or by energy reflected from out of the survey plane. AVA methods have equally powerful usage in estimating the fluid fill of potential hydrocarbon reservoirs. However, such applications seldom use true amplitude data and instead consider qualitative AVA responses using a well-defined classification scheme. Such schemes are often defined in terms of the characteristics of best-fit responses to the observed reflectivity, e.g. the intercept (I) and gradient (G) of a linear approximation to the AVA data. The position of the response on a cross-plot of I and G then offers a diagnostic attribute for certain fluid types. We investigate the advantages in glaciology of emulating this practice, and develop a cross-plot based on the 3-term Shuey AVA approximation (using I, G, and a curvature term C). Model AVA curves define a clear lithification trend: AVA responses to stiff (lithified) substrates fall discretely into one quadrant of the cross-plot, with positive I and negative G, whereas those to fluid-rich substrates plot diagonally opposite (in the negative I and positive G quadrant). The remaining quadrants are unoccupied by plausible single-layer responses and may therefore be diagnostic of complex thin-layer reflectivity, and the magnitude and polarity of the C term serves as a further indicator
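
    The cross-plot classification sketched above can be reduced to fitting the three-term Shuey approximation to the observed reflectivities and reading off the sign of intercept and gradient. The quadrant labels below follow the scheme described in the abstract; the fitting step and the toy values are illustrative assumptions:

        import numpy as np

        def shuey_three_term(theta_deg, I, G, C):
            """Three-term Shuey approximation of the AVA response R(theta)."""
            th = np.radians(theta_deg)
            return I + G * np.sin(th) ** 2 + C * (np.tan(th) ** 2 - np.sin(th) ** 2)

        def crossplot_quadrant(I, G):
            """Diagnostic quadrant on the intercept-gradient cross-plot."""
            if I > 0 and G < 0:
                return "stiff (lithified) substrate"
            if I < 0 and G > 0:
                return "fluid-rich substrate"
            return "possible thin-layer / complex reflectivity"

        # toy usage: recover I, G, C from modelled reflectivities at a few incidence angles
        angles = np.array([0.0, 10.0, 20.0, 30.0])
        refl = shuey_three_term(angles, I=0.25, G=-0.4, C=0.05)
        refl = refl + 0.005 * np.random.default_rng(7).normal(size=angles.size)
        A = np.column_stack([np.ones_like(angles),
                             np.sin(np.radians(angles)) ** 2,
                             np.tan(np.radians(angles)) ** 2 - np.sin(np.radians(angles)) ** 2])
        I_fit, G_fit, C_fit = np.linalg.lstsq(A, refl, rcond=None)[0]
        print(crossplot_quadrant(I_fit, G_fit))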

  10. Improving Music Genre Classification by Short-Time Feature Integration

    DEFF Research Database (Denmark)

    Meng, Anders; Ahrendt, Peter; Larsen, Jan

    2005-01-01

    Many different short-time features, using time windows in the size of 10-30 ms, have been proposed for music segmentation, retrieval and genre classification. However, often the available time frame of the music to make the actual decision or comparison (the decision time horizon) is in the range of seconds instead of milliseconds. The problem of making new features on the larger time scale from the short-time features (feature integration) has only received little attention. This paper investigates different methods for feature integration and late information fusion for music genre classification …
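
    The feature-integration problem described above amounts to collapsing a sequence of frame-level short-time features into one vector per decision window. The sketch below uses simple stand-in frame features (energy and zero-crossing rate) and mean/variance integration; the paper itself studies richer integration and fusion schemes:

        import numpy as np

        def short_time_features(signal, frame_len=512, hop=256):
            """Frame-level features on ~10-30 ms windows: energy and zero-crossing rate."""
            feats = []
            for i in range(0, len(signal) - frame_len + 1, hop):
                frame = signal[i:i + frame_len]
                energy = float(np.mean(frame ** 2))
                zcr = float(np.mean(np.abs(np.diff(np.sign(frame))) > 0))
                feats.append([energy, zcr])
            return np.array(feats)                     # shape (n_frames, n_features)

        def integrate(frame_feats):
            """Feature integration over the decision time horizon: one fixed-length
            vector per clip (here simply the per-feature mean and standard deviation)."""
            return np.concatenate([frame_feats.mean(axis=0), frame_feats.std(axis=0)])

        # toy usage: a noisy tone standing in for a few seconds of audio at 22.05 kHz
        sr = 22050
        t = np.arange(3 * sr) / sr
        clip = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
        print(integrate(short_time_features(clip)))    # ready for any genre classifier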

  11. A Literature Survey of Early Time Series Classification and Deep Learning

    OpenAIRE

    Santos, Tiago; Kern, Roman

    2017-01-01

    This paper provides an overview of current literature on time series classification approaches, in particular of early time series classification. A very common and effective time series classification approach is the 1-Nearest Neighbor classifier, with different distance measures such as the Euclidean or dynamic time warping distances. This paper starts by reviewing these baseline methods. More recently, with the gain in popularity in the application of deep neural networks to the field of...
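
    The baseline singled out above, a 1-nearest-neighbor classifier under a dynamic time warping distance, is compact enough to sketch directly (plain O(n·m) DTW without a warping window; names and toy data are illustrative):

        import numpy as np

        def dtw(a, b):
            """Classic dynamic time warping distance between two 1-D series."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        def nn1_dtw(train_series, train_labels, query):
            """1-nearest-neighbor classification under the DTW distance."""
            dists = [dtw(query, s) for s in train_series]
            return train_labels[int(np.argmin(dists))]

        # toy usage: the noisy, differently sampled sine is matched to the "sine" prototype
        train = [np.sin(np.linspace(0, 6, 50)), np.linspace(-1, 1, 50)]
        labels = ["sine", "ramp"]
        print(nn1_dtw(train, labels, np.sin(np.linspace(0, 6, 60)) + 0.05))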

  12. Combining Blink, Pupil, and Response Time Measures in a Concealed Knowledge Test

    Directory of Open Access Journals (Sweden)

    Travis Seymour

    2013-02-01

    The response time (RT)-based Concealed Knowledge Test (CKT) has been shown to accurately detect participants’ knowledge of mock-crime-related information. Tests based on ocular measures such as pupil size and blink rate have sometimes resulted in poor classification, or lacked detailed classification analyses. The present study examines the fitness of multiple pupil- and blink-related responses in the CKT paradigm. To maximize classification efficiency, participants’ concealed knowledge was assessed using both individual test measures and combinations of test measures. Results show that individual pupil-size, pupil-slope, and pre-response blink-rate measures produce efficient classifications. Combining pupil and blink measures yielded more accurate classifications than individual ocular measures. Although RT-based tests proved efficient, combining RT with ocular measures had little incremental benefit. It is argued that covertly assessing ocular measures during RT-based tests may guard against effective countermeasure use in applied settings. A compound classification procedure was used to categorize individual participants and yielded high hit rates and low false-alarm rates without the need for adjustments between test paradigms or subject populations. We conclude that with appropriate test paradigms and classification analyses, ocular measures may prove as effective as other indices, though additional research is needed.

  13. Internal representations for face detection: an application of noise-based image classification to BOLD responses.

    Science.gov (United States)

    Nestor, Adrian; Vettel, Jean M; Tarr, Michael J

    2013-11-01

    What basic visual structures underlie human face detection and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the response of face-selective areas in the human ventral cortex. Using behaviorally and neurally-derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face selective regions in face detection and we help clarify the representational basis of this perceptual function. From a theory standpoint, our findings support the idea of simple but highly diagnostic neurally-coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification in conjunction with fMRI to help uncover the structure of high-level perceptual representations. Copyright © 2012 Wiley Periodicals, Inc.
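
    The noise-based classification-image idea above can be illustrated with the classic reverse-correlation recipe: average the noise fields that evoked strong responses and subtract the average of those that evoked weak responses. The data below are synthetic stand-ins for BOLD amplitudes; the paper's actual estimator is more elaborate:

        import numpy as np

        rng = np.random.default_rng(8)
        noise_fields = rng.normal(size=(500, 32, 32))          # white-noise stimuli
        template = np.zeros((32, 32))
        template[12:20, 12:20] = 1.0                           # hidden "face-like" structure
        # stand-in for trial-wise BOLD response amplitudes driven by the template
        responses = (noise_fields * template).sum(axis=(1, 2)) + rng.normal(0, 5, 500)

        hi = responses > np.median(responses)
        classification_image = noise_fields[hi].mean(axis=0) - noise_fields[~hi].mean(axis=0)
        print(classification_image.shape)                      # recovers the template's footprint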

  14. Real-Time Gas Identification by Analyzing the Transient Response of Capillary-Attached Conductive Gas Sensor

    Directory of Open Access Journals (Sweden)

    Behzad Bahraminejad

    2010-05-01

    In this study, the ability of the capillary-attached conductive gas sensor (CGS) to perform real-time gas identification was investigated. The structure of the prototype fabricated CGS is presented. Portions of the beginning of the CGS transient response were selected, ranging from the first 11 samples to the first 100 samples. Different feature extraction and classification methods were applied to the selected portions. The methods were validated to study the ability of an early portion of the CGS transient response to identify the target gas (TG). Experimental results proved that applying features extracted from an early part of the CGS transient response along with a classifier can distinguish short-chain alcohols from each other perfectly. Decreasing the exposure time in the interaction between the target gas and the sensing element improved the reliability of the sensor. The classification rate was also improved and the identification time was decreased. Moreover, the results indicated the optimum interval of the early CGS transient response for selecting portions to achieve the best classification rates.

  15. EEG Eye State Identification Using Incremental Attribute Learning with Time-Series Classification

    Directory of Open Access Journals (Sweden)

    Ting Wang

    2014-01-01

    Eye state identification is a common kind of time-series classification problem and a hot spot in recent research. Electroencephalography (EEG) is widely used in eye state classification to detect a person's cognitive state. Previous research has validated the feasibility of machine learning and statistical approaches for EEG eye state classification. This paper aims to propose a novel approach for EEG eye state identification using incremental attribute learning (IAL) based on neural networks. IAL is a novel machine learning strategy which gradually imports and trains features one by one. Previous studies have verified that such an approach is applicable to a number of pattern recognition problems. However, in these previous works, little research on IAL focused on its application to time-series problems. Therefore, it was still unknown whether IAL can be employed to cope with time-series problems like EEG eye state classification. Experimental results in this study demonstrate that, with proper feature extraction and feature ordering, IAL can not only efficiently cope with time-series classification problems, but also exhibit better classification performance in terms of classification error rates in comparison with conventional and some other approaches.
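
    The incremental attribute learning loop described above orders the features first and then imports them one at a time, retraining after each addition. In the sketch below, the mutual-information ranking and the scikit-learn classifier are stand-ins for illustration; the paper's formulation uses neural networks and its own ordering criteria:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import mutual_info_classif
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        X, y = make_classification(n_samples=300, n_features=10, n_informative=4, random_state=0)

        # 1) order the attributes (here by mutual information with the class label)
        order = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]

        # 2) import attributes one by one, retraining and tracking accuracy
        selected = []
        for feat in order:
            selected.append(feat)
            acc = cross_val_score(LogisticRegression(max_iter=1000),
                                  X[:, selected], y, cv=5).mean()
            print(f"{len(selected)} attributes -> CV accuracy {acc:.3f}")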

  16. REAL-TIME INTELLIGENT MULTILAYER ATTACK CLASSIFICATION SYSTEM

    Directory of Open Access Journals (Sweden)

    T. Subbhulakshmi

    2014-01-01

    Intrusion Detection Systems (IDSs) take the lion’s share of the current security infrastructure. Detection of intrusions is vital for initiating the defensive procedures. Intrusion detection has traditionally been done with statistical and distance-based methods, which use a threshold value to indicate the level of normalcy; when the network traffic crosses this level, it is flagged as anomalous. Statistical techniques, however, cannot detect new intrusion events, which are increasingly a key part of system security. To overcome this issue, learning techniques are used, which help in identifying new intrusion activities in a computer system. The objective of the system proposed in this paper is to classify intrusions using an Intelligent Multi-Layered Attack Classification System (IMLACS), which helps in detecting and classifying intrusions with improved classification accuracy. The intelligent multi-layered approach contains three intelligent layers. The first layer uses binary Support Vector Machine classification to separate normal traffic from attacks. The second layer uses neural network classification to assign attacks to attack classes. The third layer uses a fuzzy inference system to classify the attacks into various subclasses. The proposed IMLACS is able to detect intrusion behavior in networks because it combines three intelligent classification layers with a better set of rules. Feature selection is also used to improve detection time. The experimental results show that IMLACS achieves a classification rate of 97.31%.

  17. Real-time, resource-constrained object classification on a micro-air vehicle

    Science.gov (United States)

    Buck, Louis; Ray, Laura

    2013-12-01

    A real-time embedded object classification algorithm is developed through the novel combination of binary feature descriptors, a bag-of-visual-words object model and the cortico-striatal loop (CSL) learning algorithm. The BRIEF, ORB and FREAK binary descriptors are tested and compared to SIFT descriptors with regard to their respective classification accuracies, execution times, and memory requirements when used with CSL on a 12.6 g ARM Cortex embedded processor running at 800 MHz. Additionally, the effect of χ² feature mapping and opponent-color representations used with these descriptors is examined. These tests are performed on four data sets of varying sizes and difficulty, and the BRIEF descriptor is found to yield the best combination of speed and classification accuracy. Its use with CSL achieves accuracies between 67% and 95% of those achieved with SIFT descriptors and allows for the embedded classification of a 128×192-pixel image in 0.15 seconds, 60 times faster than classification with SIFT. χ² mapping is found to provide substantial improvements in classification accuracy for all of the descriptors at little cost, while opponent-color descriptors offer accuracy improvements only on colorful datasets.
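
    One ingredient above worth unpacking is the χ² feature mapping applied to the bag-of-visual-words histograms: an explicit approximate embedding of the additive χ² kernel lets a linear classifier stand in for a kernel machine. A sketch with scikit-learn and synthetic histograms (the embedded implementation in the paper is of course not scikit-learn):

        import numpy as np
        from sklearn.kernel_approximation import AdditiveChi2Sampler
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(6)
        X = rng.random((200, 64))                 # stand-in bag-of-visual-words histograms
        y = rng.integers(0, 4, 200)               # four object classes

        # approximate chi-squared feature map followed by a linear classifier
        clf = make_pipeline(AdditiveChi2Sampler(sample_steps=2), LinearSVC(max_iter=5000))
        clf.fit(X, y)
        print(clf.predict(X[:5]))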

  18. Using Response Times to Assess Learning Progress: A Joint Model for Responses and Response Times

    Science.gov (United States)

    Wang, Shiyu; Zhang, Susu; Douglas, Jeff; Culpepper, Steven

    2018-01-01

    Analyzing students' growth remains an important topic in educational research. Most recently, Diagnostic Classification Models (DCMs) have been used to track skill acquisition in a longitudinal fashion, with the purpose of providing an estimate of students' learning trajectories in terms of the change of fine-grained skills over time. Response time…

  19. Classification of Animal Movement Behavior through Residence in Space and Time.

    Science.gov (United States)

    Torres, Leigh G; Orben, Rachael A; Tolkova, Irina; Thompson, David R

    2017-01-01

    Identification and classification of behavior states in animal movement data can be complex, temporally biased, time-intensive, scale-dependent, and unstandardized across studies and taxa. Large movement datasets are increasingly common and there is a need for efficient methods of data exploration that adjust to the individual variability of each track. We present the Residence in Space and Time (RST) method to classify behavior patterns in movement data based on the concept that behavior states can be partitioned by the amount of space and time occupied in an area of constant scale. Using normalized values of Residence Time and Residence Distance within a constant search radius, RST is able to differentiate behavior patterns that are time-intensive (e.g., rest), time & distance-intensive (e.g., area restricted search), and transit (short time and distance). We use grey-headed albatross (Thalassarche chrysostoma) GPS tracks to demonstrate RST's ability to classify behavior patterns and adjust to the inherent scale and individuality of each track. Next, we evaluate RST's ability to discriminate between behavior states relative to other classical movement metrics. We then temporally sub-sample albatross track data to illustrate RST's response to less resolved data. Finally, we evaluate RST's performance using datasets from four taxa with diverse ecology, functional scales, ecosystems, and data-types. We conclude that RST is a robust, rapid, and flexible method for detailed exploratory analysis and meta-analyses of behavioral states in animal movement data based on its ability to integrate distance and time measurements into one descriptive metric of behavior groupings. Given the increasing amount of animal movement data collected, it is timely and useful to implement a consistent metric of behavior classification to enable efficient and comparative analyses. Overall, the application of RST to objectively explore and compare behavior patterns in movement data can
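
    A simplified reduction of the Residence in Space and Time idea: for each fix, accumulate the time spent and the path distance travelled while the track stays within a constant search radius, normalize the two values, and bin them into the three behavior groupings. The normalization and thresholds below are assumptions, not the published parameterization:

        import numpy as np

        def residence_values(xy, t, radius):
            """For each fix, time spent and path distance travelled while the track stays
            within `radius` of that fix (moving backward and forward along the track)."""
            n = len(t)
            res_time, res_dist = np.zeros(n), np.zeros(n)
            step = np.linalg.norm(np.diff(xy, axis=0), axis=1)
            for i in range(n):
                j0 = i
                while j0 > 0 and np.linalg.norm(xy[j0 - 1] - xy[i]) <= radius:
                    j0 -= 1
                j1 = i
                while j1 < n - 1 and np.linalg.norm(xy[j1 + 1] - xy[i]) <= radius:
                    j1 += 1
                res_time[i] = t[j1] - t[j0]
                res_dist[i] = step[j0:j1].sum()
            return res_time, res_dist

        def classify_rst(res_time, res_dist):
            """Crude three-way grouping on normalized residence values (thresholds assumed)."""
            nt = res_time / res_time.max()
            nd = res_dist / res_dist.max()
            return np.where(nt < 0.3, "transit",
                            np.where(nd < 0.3, "rest", "area-restricted search"))

        # toy usage: a random-walk "GPS" track with one fix per time unit
        rng = np.random.default_rng(2)
        track = np.cumsum(rng.normal(0, 1, size=(200, 2)), axis=0)
        times = np.arange(200, dtype=float)
        rt, rd = residence_values(track, times, radius=3.0)
        print(classify_rst(rt, rd)[:10])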

  20. The Ecohydrological Context of Drought and Classification of Plant Responses

    Science.gov (United States)

    Feng, X.; Ackerly, D.; Dawson, T. E.; Manzoni, S.; Skelton, R. P.; Vico, G.; Thompson, S. E.

    2017-12-01

    Many recent studies on drought-induced vegetation mortality have explored how plant functional traits, and classifications of such traits along axes of, e.g., isohydry-anisohydry, might contribute to predicting drought survival and recovery. As these studies proliferate, concerns are growing about the consistency and predictive value of such classifications. Here, we outline the basis for a systematic classification of drought strategies that accounts for both environmental conditions and functional traits. We (1) identify drawbacks of existing isohydricity and trait-based metrics, (2) identify major axes of trait and environmental variation that determine drought mortality pathways (hydraulic failure and carbon starvation) using non-dimensional trait groups, and (3) demonstrate that these trait groupings predict physiological drought outcomes using both measured and synthetic data. In doing so we untangle some confounding effects of environment and trait variations that undermine current classification schemes, outline a pathway to progress towards a general classification of drought vulnerability, and advocate for more careful treatment of the environmental conditions within which plant drought responses occur.

  1. Mapping Plant Functional Types over Broad Mountainous Regions: A Hierarchical Soft Time-Space Classification Applied to the Tibetan Plateau

    Directory of Open Access Journals (Sweden)

    Danlu Cai

    2014-04-01

    Research on global climate change requires plant functional type (PFT) products. Although several PFT mapping procedures for remote sensing imagery are being used, none of them appears to be specifically designed to map and evaluate PFTs over broad mountainous areas, which are highly relevant regions for identifying and analyzing the response of natural ecosystems. We present a methodology for generating soft classifications of PFTs from remotely sensed time series based on a hierarchical strategy that integrates time-varying integrated NDVI and phenological information with topography: (i) Temporal variability: a Fourier transform of a vegetation index (MODIS NDVI, 2006 to 2010). (ii) Spatial partitioning: a primary image segmentation based on a small number of thresholds applied to the Fourier amplitude. (iii) Classification: a supervised soft classification step based on a normalized distance metric constructed from a subset of Fourier coefficients and complementary altitude data from a digital elevation model. Applicability and effectiveness are tested for the eastern Tibetan Plateau. A classification nomenclature is determined from temporally stable pixels in the MCD12Q1 time series. Overall accuracy statistics of the resulting classification reveal a gain of about 7%, from 57.7% for the MODIS PFT products to 64.4%.

  2. Automatic classification of time-variable X-ray sources

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Kitty K.; Farrell, Sean; Murphy, Tara; Gaensler, B. M. [Sydney Institute for Astronomy, School of Physics, The University of Sydney, Sydney, NSW 2006 (Australia)

    2014-05-01

    To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross-validation accuracy on the training data is ∼97% on a 7-class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7–500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source, but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.
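
    The workflow described above maps onto a standard supervised pipeline: per-source features, a Random Forest, 10-fold cross-validation, and class-membership probabilities for the unknown sources. The sketch below uses synthetic features as a stand-in for the 2XMMi-DR2 feature set:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        # synthetic stand-in for 873 labelled variable sources in 7 classes
        X, y = make_classification(n_samples=873, n_features=20, n_informative=10,
                                   n_classes=7, n_clusters_per_class=1, random_state=0)

        clf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
        print("10-fold CV accuracy:", cross_val_score(clf, X, y, cv=10).mean())

        # probabilistic classification of "unknown" sources, as in the catalogue
        clf.fit(X, y)
        unknown = np.random.default_rng(0).normal(size=(5, 20))
        print(clf.predict_proba(unknown))          # class membership probabilities per source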

  3. Automatic classification of time-variable X-ray sources

    International Nuclear Information System (INIS)

    Lo, Kitty K.; Farrell, Sean; Murphy, Tara; Gaensler, B. M.

    2014-01-01

    To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross-validation accuracy on the training data is ∼97% on a 7-class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7–500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source, but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.

  4. Decision time horizon for music genre classification using short time features

    DEFF Research Database (Denmark)

    Ahrendt, Peter; Meng, Anders; Larsen, Jan

    2004-01-01

    In this paper music genre classification has been explored with special emphasis on the decision time horizon and ranking of tapped-delay-line short-time features. Late information fusion as e.g. majority voting is compared with techniques of early information fusion such as dynamic PCA (DPCA). The most frequently suggested features in the literature were employed including mel-frequency cepstral coefficients (MFCC), linear prediction coefficients (LPC), zero-crossing rate (ZCR), and MPEG-7 features. To rank the importance of the short time features consensus sensitivity analysis is applied …

  5. CLASSIFICATION OF THE MGR EMERGENCY RESPONSE SYSTEM

    International Nuclear Information System (INIS)

    Zeigler, J.A.

    1999-01-01

    The purpose of this analysis is to document the Quality Assurance (QA) classification of the Monitored Geologic Repository (MGR) emergency response system structures, systems and components (SSCs) performed by the MGR Safety Assurance Department. This analysis also provides the basis for revision of YMP/90-55Q, Q-List (YMP 1998). The Q-List identifies those MGR SSCs subject to the requirements of DOE/RW-0333P, "Quality Assurance Requirements and Description" (QARD) (DOE 1998).

  6. Extending an emergency classification expert system to the real-time environment

    International Nuclear Information System (INIS)

    Greene, K.R.; Robinson, A.H.

    1990-01-01

    The process of determining emergency action level (EAL) during real or simulated emergencies at the Trojan nuclear power plant was automated in 1988 with development of the EM-CLASS expert system. This system serves to replace the manual flip-chart method of determining the EAL. While the task of performing the classification is more reliable when using EM-CLASS, it still takes as long to determine the appropriate EAL with EM-CLASS as it does with the flowchart tracing method currently in use. During a plant emergency, an environment will exist where there are not enough resources to complete all of the desired tasks. To change this condition, some tasks must be accomplished with greater efficiency. The EM-CLASS application may be improved by taking advantage of the fact that most of the responses to the questions in the emergency classification procedure, EP-001, are available directly from plant measurements. This information could be passed to the expert system electronically. A prototype demonstration of a real-time emergency classification expert system has been developed. It repetitively performs the consultation, acquiring the necessary data electronically when possible and from the user when electronic data are unavailable. The expert system is being tested with scenarios from the drills and graded exercises that have taken place at the Trojan nuclear power plant. The goal of this project is to install the system on the plant simulator and/or the plant computer

  7. Modeling time-to-event (survival) data using classification tree analysis.

    Science.gov (United States)

    Linden, Ariel; Yarnold, Paul R

    2017-12-01

    Time to the occurrence of an event is often studied in health research. Survival analysis differs from other designs in that follow-up times for individuals who do not experience the event by the end of the study (called censored) are accounted for in the analysis. Cox regression is the standard method for analysing censored data, but the assumptions required of these models are easily violated. In this paper, we introduce classification tree analysis (CTA) as a flexible alternative for modelling censored data. Classification tree analysis is a "decision-tree"-like classification model that provides parsimonious, transparent (ie, easy to visually display and interpret) decision rules that maximize predictive accuracy, derives exact P values via permutation tests, and evaluates model cross-generalizability. Using empirical data, we identify all statistically valid, reproducible, longitudinally consistent, and cross-generalizable CTA survival models and then compare their predictive accuracy to estimates derived via Cox regression and an unadjusted naïve model. Model performance is assessed using integrated Brier scores and a comparison between estimated survival curves. The Cox regression model best predicts average incidence of the outcome over time, whereas CTA survival models best predict either relatively high, or low, incidence of the outcome over time. Classification tree analysis survival models offer many advantages over Cox regression, such as explicit maximization of predictive accuracy, parsimony, statistical robustness, and transparency. Therefore, researchers interested in accurate prognoses and clear decision rules should consider developing models using the CTA-survival framework. © 2017 John Wiley & Sons, Ltd.

  8. Expected Classification Accuracy

    Directory of Open Access Journals (Sweden)

    Lawrence M. Rudner

    2005-08-01

    Every time we make a classification based on a test score, we should expect some number of misclassifications. Some examinees whose true ability is within a score range will have observed scores outside of that range. A procedure for providing a classification table of true and expected scores is developed for polytomously scored items under item response theory and applied to state assessment data. A simplified procedure for estimating the table entries is also presented.
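
    A simplified numerical sketch of the kind of expected classification table described above, using a normal measurement-error approximation around each examinee's estimate; the cut scores, standard errors, and the normal approximation are assumptions, and the article's IRT-based procedure is more general:

        import numpy as np
        from scipy.stats import norm

        def expected_classification_table(theta_hat, se, cuts):
            """Expected (true level x observed level) proportions under a normal
            measurement-error model: observed estimate ~ N(true ability, se)."""
            edges = np.concatenate(([-np.inf], cuts, [np.inf]))
            k = len(edges) - 1
            table = np.zeros((k, k))
            for th, s in zip(theta_hat, se):
                true_level = np.searchsorted(cuts, th)     # treat the estimate as the true level
                # probability that an observed score falls in each level, given this ability
                p_obs = norm.cdf(edges[1:], loc=th, scale=s) - norm.cdf(edges[:-1], loc=th, scale=s)
                table[true_level] += p_obs
            return table / len(theta_hat)

        theta_hat = np.array([-1.2, -0.3, 0.1, 0.8, 1.5])
        se = np.full(5, 0.3)
        cuts = np.array([-0.5, 0.5])                        # three performance levels
        tab = expected_classification_table(theta_hat, se, cuts)
        print(tab)
        print("expected accuracy:", np.trace(tab))          # probability of correct classification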

  9. Automatic parquet block sorting using real-time spectral classification

    Science.gov (United States)

    Astrom, Anders; Astrand, Erik; Johansson, Magnus

    1999-03-01

    This paper presents a real-time spectral classification system based on the PGP spectrograph and a smart image sensor. The PGP is a spectrograph which extracts the spectral information from a scene and projects the information on an image sensor, which is a method often referred to as Imaging Spectroscopy. The classification is based on linear models and categorizes a number of pixels along a line. Previous systems adopting this method have used standard sensors, which often resulted in poor performance. The new system, however, is based on a patented near-sensor classification method, which exploits analogue features on the smart image sensor. The method reduces the enormous amount of data to be processed at an early stage, thus making true real-time spectral classification possible. The system has been evaluated on hardwood parquet boards showing very good results. The color defects considered in the experiments were blue stain, white sapwood, yellow decay and red decay. In addition to these four defect classes, a reference class was used to indicate correct surface color. The system calculates a statistical measure for each parquet block, giving the pixel defect percentage. The patented method makes it possible to run at very high speeds with a high spectral discrimination ability. Using a powerful illuminator, the system can run with a line frequency exceeding 2000 line/s. This opens up the possibility to maintain high production speed and still measure with good resolution.

  10. Drunk driving detection based on classification of multivariate time series.

    Science.gov (United States)

    Li, Zhenlong; Jin, Xue; Zhao, Xiaohua

    2015-09-01

    This paper addresses the problem of detecting drunk driving based on classification of multivariate time series. First, driving performance measures were collected from a test in a driving simulator located in the Traffic Research Center, Beijing University of Technology. Lateral position and steering angle were used to detect drunk driving. Second, multivariate time series analysis was performed to extract the features. A piecewise linear representation was used to represent multivariate time series. A bottom-up algorithm was then employed to separate multivariate time series. The slope and time interval of each segment were extracted as the features for classification. Third, a support vector machine classifier was used to classify the driver's state into two classes (normal or drunk) according to the extracted features. The proposed approach achieved an accuracy of 80.0%. Drunk driving detection based on the analysis of multivariate time series is feasible and effective. The approach has implications for drunk driving detection. Copyright © 2015 Elsevier Ltd and National Safety Council. All rights reserved.
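
    The processing chain above (segment the driving signals, describe each segment by its slope and duration, classify with an SVM) can be roughed out as follows; fixed-length segments stand in for the paper's bottom-up merging, and the toy data are not driving-simulator measurements:

        import numpy as np
        from sklearn.svm import SVC

        def segment_features(series, seg_len=25):
            """Split a signal into fixed-length segments and describe each by the slope of a
            least-squares line and the segment duration (a stand-in for bottom-up merging)."""
            feats = []
            for i in range(0, len(series) - seg_len + 1, seg_len):
                seg = series[i:i + seg_len]
                slope = np.polyfit(np.arange(seg_len), seg, 1)[0]
                feats.extend([slope, seg_len])
            return feats

        def trial_vector(lateral_position, steering_angle, seg_len=25):
            """Concatenate per-segment features of the two driving measures into one vector."""
            return np.array(segment_features(lateral_position, seg_len) +
                            segment_features(steering_angle, seg_len))

        def make_trial(drift, rng):
            lateral = np.cumsum(rng.normal(drift, 0.1, 200))   # "drunk" trials drift more
            steering = rng.normal(0, 1, 200)
            return lateral, steering

        # toy usage: 10 normal and 10 "drunk" trials
        rng = np.random.default_rng(3)
        X = np.array([trial_vector(*make_trial(d, rng)) for d in [0.0] * 10 + [0.05] * 10])
        y = [0] * 10 + [1] * 10
        clf = SVC(kernel="rbf").fit(X, y)
        print(clf.predict(trial_vector(*make_trial(0.05, rng)).reshape(1, -1)))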

  11. Comparing Features for Classification of MEG Responses to Motor Imagery.

    Directory of Open Access Journals (Sweden)

    Hanna-Leena Halme

    Motor imagery (MI) with real-time neurofeedback could be a viable approach, e.g., in rehabilitation of cerebral stroke. Magnetoencephalography (MEG) noninvasively measures electric brain activity at high temporal resolution and is well-suited for recording oscillatory brain signals. MI is known to modulate 10- and 20-Hz oscillations in the somatomotor system. In order to provide accurate feedback to the subject, the most relevant MI-related features should be extracted from MEG data. In this study, we evaluated several MEG signal features for discriminating between left- and right-hand MI and between MI and rest. MEG was measured from nine healthy participants imagining either left- or right-hand finger tapping according to visual cues. Data preprocessing, feature extraction and classification were performed offline. The evaluated MI-related features were power spectral density (PSD), Morlet wavelets, short-time Fourier transform (STFT), common spatial patterns (CSP), filter-bank common spatial patterns (FBCSP), spatio-spectral decomposition (SSD), and combined SSD+CSP, CSP+PSD, CSP+Morlet, and CSP+STFT. We also compared four classifiers applied to single trials using 5-fold cross-validation for evaluating the classification accuracy and its possible dependence on the classification algorithm. In addition, we estimated the inter-session left-vs-right accuracy for each subject. The SSD+CSP combination yielded the best accuracy in both left-vs-right (mean 73.7%) and MI-vs-rest (mean 81.3%) classification. CSP+Morlet yielded the best mean accuracy in inter-session left-vs-right classification (mean 69.1%). There were large inter-subject differences in classification accuracy, and the level of the 20-Hz suppression correlated significantly with the subjective MI-vs-rest accuracy. Selection of the classification algorithm had only a minor effect on the results. We obtained good accuracy in sensor-level decoding of MI from single-trial MEG data. Feature extraction
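
    Common spatial patterns (CSP), one of the feature families compared above, comes down to a generalized eigendecomposition of the two class-covariance matrices followed by log-variance features. A compact sketch on synthetic trials (standard CSP, not the authors' full pipeline):

        import numpy as np
        from scipy.linalg import eigh

        def mean_cov(trials):
            """Average channel covariance over trials shaped (n_trials, n_channels, n_samples)."""
            return np.mean([np.cov(t) for t in trials], axis=0)

        def csp_filters(trials_a, trials_b, n_pairs=3):
            """CSP spatial filters from the generalized eigenproblem Ca w = l (Ca + Cb) w."""
            Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
            vals, vecs = eigh(Ca, Ca + Cb)                               # ascending eigenvalues
            pick = np.r_[np.arange(n_pairs), np.arange(-n_pairs, 0)]     # both extremes
            return vecs[:, pick].T                                       # (2 * n_pairs, n_channels)

        def csp_features(trial, W):
            """Log of the normalized variances of the spatially filtered trial."""
            var = (W @ trial).var(axis=1)
            return np.log(var / var.sum())

        # toy usage: 20 trials per class, 8 channels, 256 samples
        rng = np.random.default_rng(4)
        A = rng.normal(size=(20, 8, 256))
        A[:, 0] *= 3.0                        # class A: extra power on channel 0
        B = rng.normal(size=(20, 8, 256))
        B[:, 1] *= 3.0                        # class B: extra power on channel 1
        W = csp_filters(A, B)
        print(csp_features(A[0], W))
        print(csp_features(B[0], W))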

  12. Automatic multi-modal MR tissue classification for the assessment of response to bevacizumab in patients with glioblastoma

    International Nuclear Information System (INIS)

    Liberman, Gilad; Louzoun, Yoram; Aizenstein, Orna; Blumenthal, Deborah T.; Bokstein, Felix; Palmon, Mika; Corn, Benjamin W.; Ben Bashat, Dafna

    2013-01-01

    Background: Current methods for evaluation of treatment response in glioblastoma are inaccurate, limited and time-consuming. This study aimed to develop a multi-modal MRI automatic classification method to improve accuracy and efficiency of treatment response assessment in patients with recurrent glioblastoma (GB). Materials and methods: A modification of the k-Nearest-Neighbors (kNN) classification method was developed and applied to 59 longitudinal MR data sets of 13 patients with recurrent GB undergoing bevacizumab (anti-angiogenic) therapy. Changes in the enhancing tumor volume were assessed using the proposed method and compared with Macdonald's criteria and with manual volumetric measurements. The edema-like area was further subclassified into peri- and non-peri-tumoral edema, using both the kNN method and an unsupervised method, to monitor longitudinal changes. Results: Automatic classification using the modified kNN method was applicable in all scans, even when the tumors were infiltrative with unclear borders. The enhancing tumor volume obtained using the automatic method was highly correlated with manual measurements (N = 33, r = 0.96, p < 0.0001), while standard radiographic assessment based on Macdonald's criteria matched manual delineation and automatic results in only 68% of cases. A graded pattern of tumor infiltration within the edema-like area was revealed by both automatic methods, showing high agreement. All classification results were confirmed by a senior neuro-radiologist and validated using MR spectroscopy. Conclusion: This study emphasizes the important role of automatic tools based on a multi-modal view of the tissue in monitoring therapy response in patients with high grade gliomas specifically under anti-angiogenic therapy

  13. Classification of time series patterns from complex dynamic systems

    Energy Technology Data Exchange (ETDEWEB)

    Schryver, J.C.; Rao, N.

    1998-07-01

    An increasing availability of high-performance computing and data storage media at decreasing cost is making possible the proliferation of large-scale numerical databases and data warehouses. Numeric warehousing enterprises on the order of hundreds of gigabytes to terabytes are a reality in many fields such as finance, retail sales, process systems monitoring, biomedical monitoring, surveillance and transportation. Large-scale databases are becoming more accessible to larger user communities through the internet, web-based applications and database connectivity. Consequently, most researchers now have access to a variety of massive datasets. This trend will probably only continue to grow over the next several years. Unfortunately, the availability of integrated tools to explore, analyze and understand the data warehoused in these archives is lagging far behind the ability to gain access to the same data. In particular, locating and identifying patterns of interest in numerical time series data is an increasingly important problem for which there are few available techniques. Temporal pattern recognition poses many interesting problems in classification, segmentation, prediction, diagnosis and anomaly detection. This research focuses on the problem of classification or characterization of numerical time series data. Highway vehicles and their drivers are examples of complex dynamic systems (CDS) which are being used by transportation agencies for field testing to generate large-scale time series datasets. Tools for effective analysis of numerical time series in databases generated by highway vehicle systems are not yet available, or have not been adapted to the target problem domain. However, analysis tools from similar domains may be adapted to the problem of classification of numerical time series data.

  14. Machine-learning-based Brokers for Real-time Classification of the LSST Alert Stream

    Science.gov (United States)

    Narayan, Gautham; Zaidi, Tayeb; Soraisam, Monika D.; Wang, Zhe; Lochner, Michelle; Matheson, Thomas; Saha, Abhijit; Yang, Shuo; Zhao, Zhenge; Kececioglu, John; Scheidegger, Carlos; Snodgrass, Richard T.; Axelrod, Tim; Jenness, Tim; Maier, Robert S.; Ridgway, Stephen T.; Seaman, Robert L.; Evans, Eric Michael; Singh, Navdeep; Taylor, Clark; Toeniskoetter, Jackson; Welch, Eric; Zhu, Songzhe; The ANTARES Collaboration

    2018-05-01

    The unprecedented volume and rate of transient events that will be discovered by the Large Synoptic Survey Telescope (LSST) demand that the astronomical community update its follow-up paradigm. Alert-brokers—automated software systems to sift through, characterize, annotate, and prioritize events for follow-up—will be critical tools for managing alert streams in the LSST era. The Arizona-NOAO Temporal Analysis and Response to Events System (ANTARES) is one such broker. In this work, we develop a machine learning pipeline to characterize and classify variable and transient sources only using the available multiband optical photometry. We describe three illustrative stages of the pipeline, serving the three goals of early, intermediate, and retrospective classification of alerts. The first takes the form of variable versus transient categorization, the second a multiclass typing of the combined variable and transient data set, and the third a purity-driven subtyping of a transient class. Although several similar algorithms have proven themselves in simulations, we validate their performance on real observations for the first time. We quantitatively evaluate our pipeline on sparse, unevenly sampled, heteroskedastic data from various existing observational campaigns, and demonstrate very competitive classification performance. We describe our progress toward adapting the pipeline developed in this work into a real-time broker working on live alert streams from time-domain surveys.

  15. Refining Time-Activity Classification of Human Subjects Using the Global Positioning System.

    Science.gov (United States)

    Hu, Maogui; Li, Wei; Li, Lianfa; Houston, Douglas; Wu, Jun

    2016-01-01

    Detailed spatial location information is important in accurately estimating personal exposure to air pollution. The Global Positioning System (GPS) has been widely used in tracking personal paths and activities. Previous researchers have developed time-activity classification models based on GPS data, most of them developed for specific regions. An adaptive model for time-location classification can be widely applied to air pollution studies that use GPS to track individual-level time-activity patterns. Time-activity data were collected for seven days using GPS loggers and accelerometers from thirteen adult participants from Southern California under free-living conditions. We developed an automated model based on random forests to classify major time-activity patterns (i.e. indoor, outdoor-static, outdoor-walking, and in-vehicle travel). Sensitivity analysis was conducted to examine the contribution of the accelerometer data and the supplemental spatial data (i.e. roadway and tax parcel data) to the accuracy of time-activity classification. Our model was evaluated using both leave-one-fold-out and leave-one-subject-out methods. Maximum speeds in averaging time intervals of 7 and 5 minutes, and distance to primary highways with limited access, were found to be the three most important variables in the classification model. Leave-one-fold-out cross-validation showed an overall accuracy of 99.71%. Sensitivities varied from 84.62% (outdoor walking) to 99.90% (indoor). Specificities varied from 96.33% (indoor) to 99.98% (outdoor static). The exclusion of accelerometer and ambient light sensor variables caused a slight loss in sensitivity for outdoor walking, but little loss in overall accuracy. However, leave-one-subject-out cross-validation showed considerable loss in sensitivity for outdoor static and outdoor walking conditions. The random forests classification model can achieve high accuracy for the four major time-activity categories. The model also performed well

  16. Change classification in SAR time series: a functional approach

    Science.gov (United States)

    Boldt, Markus; Thiele, Antje; Schulz, Karsten; Hinz, Stefan

    2017-10-01

    Change detection represents a broad field of research in SAR remote sensing, consisting of many different approaches. Besides the simple recognition of change areas, the analysis of the type, category or class of the change areas is at least as important for creating a comprehensive result. Conventional strategies for change classification are based on supervised or unsupervised land-use / land-cover classifications. The main drawback of such approaches is that the quality of the classification result directly depends on the selection of training and reference data. Additionally, supervised processing methods require an experienced operator who capably selects the training samples. This training step is not necessary when using unsupervised strategies, but nevertheless meaningful reference data must be available for identifying the resulting classes. Consequently, an experienced operator is indispensable. In this study, an innovative concept for the classification of changes in SAR time series data is proposed. Regarding the drawbacks of traditional strategies given above, it copes without using any training data. Moreover, the method can be applied by an operator who does not yet have detailed knowledge of the available scenery; this knowledge is provided by the algorithm. The final step of the procedure, whose main aspect is the iterative optimization of an initial class scheme with respect to the categorized change objects, is the assignment of these objects to the finally resulting classes. This assignment step is the subject of this paper.

  17. Classification of time-series images using deep convolutional neural networks

    Science.gov (United States)

    Hatami, Nima; Gavet, Yann; Debayle, Johan

    2018-04-01

    Convolutional Neural Networks (CNNs) have achieved great success in image recognition tasks by automatically learning a hierarchical feature representation from raw data. While the majority of the Time-Series Classification (TSC) literature is focused on 1D signals, this paper uses Recurrence Plots (RP) to transform time series into 2D texture images and then takes advantage of a deep CNN classifier. The image representation of time series introduces different feature types that are not available for 1D signals, and therefore TSC can be treated as a texture image recognition task. The CNN model also allows learning different levels of representation together with a classifier, jointly and automatically. Therefore, using RP and CNN in a unified framework is expected to boost the recognition rate of TSC. Experimental results on the UCR time-series classification archive demonstrate competitive accuracy of the proposed approach, compared not only to the existing deep architectures, but also to state-of-the-art TSC algorithms.
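
    The transform at the core of the approach, turning a 1-D series into a 2-D recurrence-plot image that an off-the-shelf image CNN can consume, is small enough to sketch: thresholded pairwise distances between delay-embedded states. The embedding parameters and threshold below are arbitrary choices, not the paper's settings:

        import numpy as np

        def recurrence_plot(series, dim=3, tau=2, eps=None):
            """Binary recurrence plot: delay-embed the series, compute pairwise distances
            between embedded states, and threshold them at eps."""
            n = len(series) - (dim - 1) * tau
            states = np.stack([series[i:i + n] for i in range(0, dim * tau, tau)], axis=1)
            d = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
            if eps is None:
                eps = np.percentile(d, 10)         # keep the closest 10% of state pairs
            return (d <= eps).astype(np.uint8)     # 2-D "texture image" for the CNN

        x = np.sin(np.linspace(0, 8 * np.pi, 300)) + 0.05 * np.random.default_rng(5).normal(size=300)
        print(recurrence_plot(x).shape)            # one image per series, e.g. (296, 296)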

  18. Prognosis and therapeutic response according to the world health organization histological classification in advanced thymoma

    International Nuclear Information System (INIS)

    Tagawa, Tetsuzo; Kometani, Takuro; Yamazaki, Koji

    2011-01-01

    The clinical efficacy of the World Health Organization (WHO) classification of thymoma has been reported to be a prognostic factor for patients with thymomas. This study focuses on the relationship between the therapeutic response and the WHO histological classification in patients with advanced thymoma. A retrospective review was performed on 22 patients with Masaoka stage III and IV thymoma treated from 1975 to 2007. There were 1, 1, 7, 3, and 10 patients with WHO histological subtypes A, AB, B1, B2, and B3, respectively. Surgery was performed on 10 patients. There were 2 complete resections, 2 incomplete resections, and 6 exploratory thoracotomies. Of 18 patients with unresectable tumors, 8, 5, and 5 were treated with radiotherapy, chemotherapy, and chemoradiotherapy as the initial therapy, respectively. The response rate in 9 patients with type A-B2 was significantly better than that in 9 patients with type B3 regardless of treatment modality (100% vs 11.1%, P=0.0001). Only the WHO classification was significantly associated with survival, with type B3 having a worse prognosis than A-B2 (P=0.01). Type B3 thymoma showed a lower response rate to treatments and thus shorter survival. The WHO classification is a good predictive factor for therapeutic response in advanced thymoma. (author)

  19. 78 FR 68983 - Cotton Futures Classification: Optional Classification Procedure

    Science.gov (United States)

    2013-11-18

    ...-AD33 Cotton Futures Classification: Optional Classification Procedure AGENCY: Agricultural Marketing... regulations to allow for the addition of an optional cotton futures classification procedure--identified and... response to requests from the U.S. cotton industry and ICE, AMS will offer a futures classification option...

  20. Classification and overview of research in real-time imaging

    Science.gov (United States)

    Sinha, Purnendu; Gorinsky, Sergey V.; Laplante, Phillip A.; Stoyenko, Alexander D.; Marlowe, Thomas J.

    1996-10-01

    Real-time imaging has application in areas such as multimedia, virtual reality, medical imaging, and remote sensing and control. Recently, the imaging community has witnessed a tremendous growth in research and new ideas in these areas. To lend structure to this growth, we outline a classification scheme and provide an overview of current research in real-time imaging. For convenience, we have categorized references by research area and application.

  1. Volumetric response classification in metastatic solid tumors on MSCT: Initial results in a whole-body setting

    International Nuclear Information System (INIS)

    Wulff, A.M.; Fabel, M.; Freitag-Wolf, S.; Tepper, M.; Knabe, H.M.; Schäfer, J.P.; Jansen, O.; Bolte, H.

    2013-01-01

    Purpose: To examine technical parameters of measurement accuracy and differences in tumor response classification using RECIST 1.1 and volumetric assessment in three common metastasis types (lung nodules, liver lesions, lymph node metastasis) simultaneously. Materials and methods: 56 consecutive patients (32 female) aged 41–82 years with a wide range of metastatic solid tumors were examined with MSCT for baseline and follow up. Images were evaluated by three experienced radiologists using manual measurements and semi-automatic lesion segmentation. Institutional ethics review was obtained and all patients gave written informed consent. Data analysis comprised interobserver variability operationalized as coefficient of variation and categorical response classification according to RECIST 1.1 for both manual and volumetric measures. Continuous data were assessed for statistical significance with Wilcoxon signed-rank test and categorical data with Fleiss kappa. Results: Interobserver variability was 6.3% (IQR 4.6%) for manual and 4.1% (IQR 4.4%) for volumetrically obtained sum of relevant diameters (p < 0.05, corrected). 4–8 patients’ response to therapy was classified differently across observers by using volumetry compared to standard manual measurements. Fleiss kappa revealed no significant difference in categorical agreement of response classification between manual (0.7558) and volumetric (0.7623) measurements. Conclusion: Under standard RECIST thresholds there was no advantage of volumetric compared to manual response evaluation. However volumetric assessment yielded significantly lower interobserver variability. This may allow narrower thresholds for volumetric response classification in the future

  2. Volumetric response classification in metastatic solid tumors on MSCT: Initial results in a whole-body setting

    Energy Technology Data Exchange (ETDEWEB)

    Wulff, A.M., E-mail: a.wulff@rad.uni-kiel.de [Klinik für Diagnostische Radiologie, Arnold-Heller-Straße 3, Haus 23, 24105 Kiel (Germany); Fabel, M. [Klinik für Diagnostische Radiologie, Arnold-Heller-Straße 3, Haus 23, 24105 Kiel (Germany); Freitag-Wolf, S., E-mail: freitag@medinfo.uni-kiel.de [Institut für Medizinische Informatik und Statistik, Brunswiker Str. 10, 24105 Kiel (Germany); Tepper, M., E-mail: m.tepper@rad.uni-kiel.de [Klinik für Diagnostische Radiologie, Arnold-Heller-Straße 3, Haus 23, 24105 Kiel (Germany); Knabe, H.M., E-mail: h.knabe@rad.uni-kiel.de [Klinik für Diagnostische Radiologie, Arnold-Heller-Straße 3, Haus 23, 24105 Kiel (Germany); Schäfer, J.P., E-mail: jp.schaefer@rad.uni-kiel.de [Klinik für Diagnostische Radiologie, Arnold-Heller-Straße 3, Haus 23, 24105 Kiel (Germany); Jansen, O., E-mail: o.jansen@neurorad.uni-kiel.de [Klinik für Diagnostische Radiologie, Arnold-Heller-Straße 3, Haus 23, 24105 Kiel (Germany); Bolte, H., E-mail: hendrik.bolte@ukmuenster.de [Klinik für Nuklearmedizin, Albert-Schweitzer-Campus 1, Gebäude A1, 48149 Münster (Germany)

    2013-10-01

    Purpose: To examine technical parameters of measurement accuracy and differences in tumor response classification using RECIST 1.1 and volumetric assessment in three common metastasis types (lung nodules, liver lesions, lymph node metastasis) simultaneously. Materials and methods: 56 consecutive patients (32 female) aged 41–82 years with a wide range of metastatic solid tumors were examined with MSCT for baseline and follow up. Images were evaluated by three experienced radiologists using manual measurements and semi-automatic lesion segmentation. Institutional ethics review was obtained and all patients gave written informed consent. Data analysis comprised interobserver variability operationalized as coefficient of variation and categorical response classification according to RECIST 1.1 for both manual and volumetric measures. Continuous data were assessed for statistical significance with Wilcoxon signed-rank test and categorical data with Fleiss kappa. Results: Interobserver variability was 6.3% (IQR 4.6%) for manual and 4.1% (IQR 4.4%) for volumetrically obtained sum of relevant diameters (p < 0.05, corrected). The responses to therapy of 4–8 patients were classified differently across observers when volumetry was used instead of standard manual measurements. Fleiss kappa revealed no significant difference in categorical agreement of response classification between manual (0.7558) and volumetric (0.7623) measurements. Conclusion: Under standard RECIST thresholds there was no advantage of volumetric compared to manual response evaluation. However, volumetric assessment yielded significantly lower interobserver variability. This may allow narrower thresholds for volumetric response classification in the future.

  3. MSCT follow-up in malignant lymphoma. Comparison of manual linear measurements with semi-automated lymph node analysis for therapy response classification

    International Nuclear Information System (INIS)

    Wessling, J.; Puesken, M.; Kohlhase, N.; Persigehl, T.; Mesters, R.; Heindel, W.; Buerke, B.; Koch, R.

    2012-01-01

    Purpose: To compare semi-automated lymph node analysis with manual measurements for therapy response classification of malignant lymphoma in MSCT. Materials and Methods: MSCT scans of 63 malignant lymphoma patients before and after 2 cycles of chemotherapy (307 target lymph nodes) were evaluated. The long axis diameter (LAD), short axis diameter (SAD) and bi-dimensional WHO were determined manually and semi-automatically. The time for manual and semi-automatic segmentation was evaluated. The reference standard response was defined as the mean relative change across all manual and semi-automatic measurements (mean manual/semi-automatic LAD, SAD, semi-automatic volume). Statistical analysis encompassed t-test and McNemar's test for clustered data. Results: Response classification per lymph node revealed semi-automated volumetry and bi-dimensional WHO to be significantly more accurate than manual linear metric measurements. Response classification per patient based on RECIST revealed more patients to be correctly classified by semi-automatic measurements, e.g. 96.0 %/92.9 % (WHO bi-dimensional/volume) compared to 85.7/84.1 % for manual LAD and SAD, respectively (mean reduction in misclassified patients of 9.95 %). Considering the use of correction tools, the time expenditure for lymph node segmentation (29.7 ± 17.4 sec) was the same as with the manual approach (29.1 ± 14.5 sec). Conclusion: Semi-automatically derived 'lymph node volume' and 'bi-dimensional WHO' significantly reduce the number of misclassified patients in the CT follow-up of malignant lymphoma by at least 10 %. However, lymph node volumetry does not outperform bi-dimensional WHO. (orig.)

  4. MSCT follow-up in malignant lymphoma. Comparison of manual linear measurements with semi-automated lymph node analysis for therapy response classification

    Energy Technology Data Exchange (ETDEWEB)

    Wessling, J.; Puesken, M.; Kohlhase, N.; Persigehl, T.; Mesters, R.; Heindel, W.; Buerke, B. [Muenster Univ. (Germany). Dept. of Clinical Radiology; Koch, R. [Muenster Univ. (Germany). Inst. of Biostatistics and Clinical Research

    2012-09-15

    Purpose: To compare semi-automated lymph node analysis with manual measurements for therapy response classification of malignant lymphoma in MSCT. Materials and Methods: MSCT scans of 63 malignant lymphoma patients before and after 2 cycles of chemotherapy (307 target lymph nodes) were evaluated. The long axis diameter (LAD), short axis diameter (SAD) and bi-dimensional WHO were determined manually and semi-automatically. The time for manual and semi-automatic segmentation was evaluated. The reference standard response was defined as the mean relative change across all manual and semi-automatic measurements (mean manual/semi-automatic LAD, SAD, semi-automatic volume). Statistical analysis encompassed t-test and McNemar's test for clustered data. Results: Response classification per lymph node revealed semi-automated volumetry and bi-dimensional WHO to be significantly more accurate than manual linear metric measurements. Response classification per patient based on RECIST revealed more patients to be correctly classified by semi-automatic measurements, e.g. 96.0 %/92.9 % (WHO bi-dimensional/volume) compared to 85.7/84.1 % for manual LAD and SAD, respectively (mean reduction in misclassified patients of 9.95 %). Considering the use of correction tools, the time expenditure for lymph node segmentation (29.7 ± 17.4 sec) was the same as with the manual approach (29.1 ± 14.5 sec). Conclusion: Semi-automatically derived 'lymph node volume' and 'bi-dimensional WHO' significantly reduce the number of misclassified patients in the CT follow-up of malignant lymphoma by at least 10 %. However, lymph node volumetry does not outperform bi-dimensional WHO. (orig.)

  5. CLASSIFICATION OF CROPLANDS THROUGH FUSION OF OPTICAL AND SAR TIME SERIES DATA

    Directory of Open Access Journals (Sweden)

    S. Park

    2016-06-01

    Full Text Available Many satellite sensors, including the Landsat series, have been extensively used for land cover classification. Studies have been conducted to mitigate classification problems associated with the use of single data sources (e.g., cloud contamination) through multi-sensor data fusion and the use of time series data. This study investigated two areas with different environmental and climate conditions: one in South Korea and the other in the US. Cropland classification was conducted by using multi-temporal Landsat 5, Radarsat-1 and digital elevation models (DEM) based on two machine learning approaches (i.e., random forest and support vector machines). Seven classification scenarios were examined and evaluated through accuracy assessment. Results show that SVM produced the best performance (overall accuracy of 93.87%) when using all temporal and spectral data as input variables. The Normalized Difference Water Index (NDWI), SAR backscattering, and the Normalized Difference Vegetation Index (NDVI) were identified as contributing more to cropland classification than the other variables.
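
    The scenario-style comparison described above can be sketched as follows: per-pixel features from the optical, SAR and DEM layers are stacked into one matrix and fed to a random forest and an SVM. The feature layout, class labels and hyperparameters below are illustrative assumptions, not the study's configuration.

```python
# Minimal sketch of a multi-sensor cropland classification comparison
# (random forest vs. SVM on stacked per-pixel features). Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_pixels, n_features = 2000, 25          # e.g., NDVI/NDWI/SAR backscatter/DEM per date
X = rng.normal(size=(n_pixels, n_features))
y = rng.integers(0, 4, size=n_pixels)    # hypothetical crop classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("RF", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("SVM", SVC(kernel="rbf", C=10.0, gamma="scale"))]:
    clf.fit(X_tr, y_tr)
    print(name, "overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```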

  6. Pornography classification: The hidden clues in video space-time.

    Science.gov (United States)

    Moreira, Daniel; Avila, Sandra; Perez, Mauricio; Moraes, Daniel; Testoni, Vanessa; Valle, Eduardo; Goldenstein, Siome; Rocha, Anderson

    2016-11-01

    As web technologies and social networks become part of the general public's life, the problem of automatically detecting pornography is on every parent's mind - nobody feels completely safe when their children go online. In this paper, we focus on video-pornography classification, a hard problem in which traditional methods often employ still-image techniques - labeling frames individually prior to a global decision. Frame-based approaches, however, ignore significant cogent information brought by motion. Here, we introduce a space-temporal interest point detector and descriptor called Temporal Robust Features (TRoF). TRoF was custom-tailored for efficient (low processing time and memory footprint) and effective (high classification accuracy and low false negative rate) motion description, particularly suited to the task at hand. We aggregate local information extracted by TRoF into a mid-level representation using Fisher Vectors, the state-of-the-art model of Bags of Visual Words (BoVW). We evaluate our original strategy, contrasting it both to commercial pornography detection solutions, and to BoVW solutions based upon other space-temporal features from the scientific literature. The performance is assessed using the Pornography-2k dataset, a new challenging pornographic benchmark, comprising 2000 web videos and 140h of video footage. The dataset is also a contribution of this work and is very assorted, including both professional and amateur content, and it depicts several genres of pornography, from cartoon to live action, with diverse behavior and ethnicity. The best approach, based on a dense application of TRoF, yields a classification error reduction of almost 79% when compared to the best commercial classifier. A sparse description relying on the TRoF detector is also noteworthy, for yielding a classification error reduction of over 69%, with 19× less memory footprint than the dense solution, and yet it can also be implemented to meet real-time requirements.

  7. Predict or classify: The deceptive role of time-locking in brain signal classification

    Science.gov (United States)

    Rusconi, Marco; Valleriani, Angelo

    2016-06-01

    Several experimental studies claim to be able to predict the outcome of simple decisions from brain signals measured before subjects are aware of their decision. Often, these studies use multivariate pattern recognition methods with the underlying assumption that the ability to classify the brain signal is equivalent to predict the decision itself. Here we show instead that it is possible to correctly classify a signal even if it does not contain any predictive information about the decision. We first define a simple stochastic model that mimics the random decision process between two equivalent alternatives, and generate a large number of independent trials that contain no choice-predictive information. The trials are first time-locked to the time point of the final event and then classified using standard machine-learning techniques. The resulting classification accuracy is above chance level long before the time point of time-locking. We then analyze the same trials using information theory. We demonstrate that the high classification accuracy is a consequence of time-locking and that its time behavior is simply related to the large relaxation time of the process. We conclude that when time-locking is a crucial step in the analysis of neural activity patterns, both the emergence and the timing of the classification accuracy are affected by structural properties of the network that generates the signal.
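
    The effect described above is easy to reproduce with a toy simulation: a driftless random walk carries no decision-predictive information, yet time-locking trials to the final sample and classifying the "decision" (the sign of the last value) from earlier samples yields above-chance accuracy, purely because of the slow relaxation of the process. The sketch below uses a trivial sign rule instead of a machine-learning classifier.

```python
# Toy illustration of the time-locking effect: the "decision" is defined as
# the sign of the final, time-locked sample of a driftless random walk, and
# it is "classified" from samples observed earlier in the trial.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_steps = 5000, 200
walks = np.cumsum(rng.standard_normal((n_trials, n_steps)), axis=1)
decision = walks[:, -1] > 0                   # decision at the time-locked final sample

for lag in (150, 100, 50, 10, 1):
    pred = walks[:, -lag] > 0                 # classify from the signal `lag` samples earlier
    acc = np.mean(pred == decision)
    print(f"{lag:4d} samples before time-lock: accuracy = {acc:.2f}")
```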

  8. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
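
    The quantity being maximized by the regularizer can be illustrated with a plug-in estimate of the mutual information between discretized classifier responses and true labels. The snippet below only computes that quantity from toy label vectors; the paper optimizes it jointly with the classification loss, which is not reproduced here.

```python
# Sketch of the regularization target: mutual information between a
# classifier's (discretized) responses and the true labels, estimated
# from their contingency table.
import numpy as np
from sklearn.metrics import mutual_info_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_resp = np.array([0, 0, 1, 1, 0, 0, 1, 1])   # hypothetical classifier responses

print("I(response; label) =", mutual_info_score(y_true, y_resp), "nats")
```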

  9. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin

    2014-01-01

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  10. Joint Probability-Based Neuronal Spike Train Classification

    Directory of Open Access Journals (Sweden)

    Yan Chen

    2009-01-01

    Full Text Available Neuronal spike trains are used by the nervous system to encode and transmit information. Euclidean distance-based methods (EDBMs) have been applied to quantify the similarity between temporally-discretized spike trains and model responses. In this study, using the same discretization procedure, we developed and applied a joint probability-based method (JPBM) to classify individual spike trains of slowly adapting pulmonary stretch receptors (SARs). The activity of individual SARs was recorded in anaesthetized, paralysed adult male rabbits, which were artificially ventilated at a constant rate and one of three different volumes. Two-thirds of the responses to the 600 stimuli presented at each volume were used to construct three response models (one for each stimulus volume) consisting of a series of time bins, each with spike probabilities. The remaining one-third of the responses were used as test responses to be classified into one of the three model responses. This was done by computing the joint probability of observing the same series of events (spikes or no spikes, dictated by the test response) in a given model and determining which probability of the three was highest. The JPBM generally produced better classification accuracy than the EDBM, and both performed well above chance. Both methods were similarly affected by variations in discretization parameters, response epoch duration, and two different response alignment strategies. Increasing bin widths increased classification accuracy, which also improved with increased observation time, but primarily during periods of increasing lung inflation. Thus, the JPBM is a simple and effective method for performing spike train classification.
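
    The decision rule of the JPBM can be sketched as follows, assuming independent Bernoulli bins: each response model is a vector of per-bin spike probabilities, and a binarized test spike train is assigned to the model under which its joint (log) probability is highest. The numbers below are illustrative, not recorded data.

```python
# Minimal sketch of the joint-probability decision rule for spike trains.
import numpy as np

def classify_spike_train(test_bins, models, eps=1e-9):
    """test_bins: 0/1 array of length n_bins; models: (n_models, n_bins) spike probabilities."""
    p = np.clip(models, eps, 1 - eps)
    loglik = (test_bins * np.log(p) + (1 - test_bins) * np.log(1 - p)).sum(axis=1)
    return int(np.argmax(loglik)), loglik

models = np.array([[0.1, 0.8, 0.7, 0.2],      # hypothetical per-bin spike probabilities
                   [0.6, 0.2, 0.1, 0.7],
                   [0.3, 0.3, 0.3, 0.3]])
test = np.array([0, 1, 1, 0])                  # binarized test response
best, ll = classify_spike_train(test, models)
print("assigned to model", best, "log-likelihoods:", ll)
```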

  11. Automated Feature Design for Time Series Classification by Genetic Programming

    OpenAIRE

    Harvey, Dustin Yewell

    2014-01-01

    Time series classification (TSC) methods discover and exploit patterns in time series and other one-dimensional signals. Although many accurate, robust classifiers exist for multivariate feature sets, general approaches are needed to extend machine learning techniques to make use of signal inputs. Numerous applications of TSC can be found in structural engineering, especially in the areas of structural health monitoring and non-destructive evaluation. Additionally, the fields of process contr...

  12. Object-Based Classification of Grasslands from High Resolution Satellite Image Time Series Using Gaussian Mean Map Kernels

    Directory of Open Access Journals (Sweden)

    Mailys Lopes

    2017-07-01

    Full Text Available This paper deals with the classification of grasslands using high resolution satellite image time series. Grasslands considered in this work are semi-natural elements in fragmented landscapes, i.e., they are heterogeneous and small elements. The first contribution of this study is to account for grassland heterogeneity while working at the object level by modeling the distribution of its pixels with a Gaussian distribution. To measure the similarity between two grasslands, a new kernel is proposed as a second contribution: the α-Gaussian mean kernel. It allows one to weight the influence of the covariance matrix when comparing two Gaussian distributions. This kernel is introduced in support vector machines for the supervised classification of grasslands from southwest France. A dense intra-annual multispectral time series of the Formosat-2 satellite is used for the classification of grasslands’ management practices, while an inter-annual NDVI time series of Formosat-2 is used for old and young grasslands’ discrimination. Results are compared to other existing pixel- and object-based approaches in terms of classification accuracy and processing time. The proposed method is shown to be a good compromise between processing speed and classification accuracy. It can adapt to the classification constraints, and it encompasses several similarity measures known in the literature. It is appropriate for the classification of small and heterogeneous objects such as grasslands.
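
    As a rough illustration of object-level kernels between Gaussian-summarized grasslands, the sketch below uses the expected-likelihood (probability product) kernel between per-object Gaussians and plugs it into an SVM with a precomputed Gram matrix. The paper's α-Gaussian mean kernel additionally weights the covariance term, which is not reproduced here; data and labels are synthetic.

```python
# Illustrative object-level kernel between objects summarized as Gaussians
# (mean + covariance of their pixels), used with a precomputed-kernel SVM.
import numpy as np
from sklearn.svm import SVC

def gaussian_pp_kernel(mu1, S1, mu2, S2):
    # Expected-likelihood (probability product) kernel between two Gaussians.
    d = mu1.size
    S = S1 + S2
    diff = mu1 - mu2
    return np.exp(-0.5 * diff @ np.linalg.solve(S, diff)) / np.sqrt(((2 * np.pi) ** d) * np.linalg.det(S))

rng = np.random.default_rng(2)
objects = [(rng.normal(size=3), np.eye(3) * rng.uniform(0.5, 2.0)) for _ in range(40)]
y = rng.integers(0, 2, size=40)           # hypothetical management-practice labels

K = np.array([[gaussian_pp_kernel(m1, S1, m2, S2) for (m2, S2) in objects]
              for (m1, S1) in objects])
clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```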

  13. Time Series of Images to Improve Tree Species Classification

    Science.gov (United States)

    Miyoshi, G. T.; Imai, N. N.; de Moraes, M. V. A.; Tommaselli, A. M. G.; Näsi, R.

    2017-10-01

    Tree species classification provides valuable information for forest monitoring and management. The high floristic variation of tree species is a challenging issue in tree species classification because vegetation characteristics change with the season. To help monitor this complex environment, imaging spectroscopy has been widely applied since the development of miniaturized sensors attached to Unmanned Aerial Vehicles (UAV). Considering the seasonal changes in forests and the higher spectral and spatial resolution acquired with sensors attached to UAVs, we present the use of time series of images to classify four tree species. The study area is an Atlantic Forest area located in the western part of São Paulo State. Images were acquired in August 2015 and August 2016, generating three data sets: one with only the image spectra of 2015, one with only the image spectra of 2016, and one with the layer stacking of images from 2015 and 2016. Four tree species were classified using the Spectral Angle Mapper (SAM), Spectral Information Divergence (SID) and Random Forest (RF). The results showed that SAM and SID caused an overfitting of the data, whereas RF showed better results, and the use of the layer stacking improved the classification, achieving a kappa coefficient of 18.26%.

  14. Leveraging Long-term Seismic Catalogs for Automated Real-time Event Classification

    Science.gov (United States)

    Linville, L.; Draelos, T.; Pankow, K. L.; Young, C. J.; Alvarez, S.

    2017-12-01

    We investigate the use of labeled event types available through reviewed seismic catalogs to produce automated event labels on new incoming data from the crustal region spanned by the cataloged events. Using events cataloged by the University of Utah Seismograph Stations between October, 2012 and June, 2017, we calculate the spectrogram for a time window that spans the duration of each event as seen on individual stations, resulting in 110k event spectrograms (50% local earthquake examples, 50% quarry blast examples). Using 80% of the randomized example events (~90k), a classifier is trained to distinguish between local earthquakes and quarry blasts. We explore variations of deep learning classifiers, incorporating elements of convolutional and recurrent neural networks. Using a single-layer Long Short Term Memory recurrent neural network, we achieve 92% accuracy on the classification task on the remaining 20k test examples. Leveraging the decisions from a group of stations that detected the same event by using the median of all classifications in the group increases the model accuracy to 96%. Additional data with equivalent processing from 500 more recently cataloged events (July, 2017) achieve the same accuracy as our test data on both single-station examples and multi-station medians, suggesting that the model can maintain accurate and stable classification rates on real-time automated events local to the University of Utah Seismograph Stations, with potentially minimal levels of re-training through time.
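
    A minimal sketch of the classifier component is given below: a single-layer LSTM reads spectrogram frames and a linear head produces earthquake-versus-quarry-blast logits. The input shape, hidden size and framework (PyTorch) are illustrative assumptions, not the study's configuration.

```python
# Minimal single-layer LSTM classifier over spectrogram frames
# (earthquake vs. quarry blast). Shapes and hyperparameters are illustrative.
import torch
import torch.nn as nn

class SpectrogramLSTM(nn.Module):
    def __init__(self, n_freq_bins=64, hidden=128, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_freq_bins, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time_frames, freq_bins)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])          # logits from the final hidden state

model = SpectrogramLSTM()
dummy = torch.randn(8, 100, 64)            # batch of 8 spectrograms, 100 frames x 64 bins
print(model(dummy).shape)                   # torch.Size([8, 2])
```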

  15. Non-Hodgkin lymphoma response evaluation with MRI texture classification

    Directory of Open Access Journals (Sweden)

    Heinonen Tomi T

    2009-06-01

    Full Text Available Abstract Background To show magnetic resonance imaging (MRI) texture appearance change in non-Hodgkin lymphoma (NHL) during treatment, with response controlled by quantitative volume analysis. Methods A total of 19 patients having NHL with an evaluable lymphoma lesion were scanned at three imaging timepoints with a 1.5T device during clinical treatment evaluation. Texture characteristics of the images were analyzed and classified with the MaZda application and statistical tests. Results NHL tissue MRI texture imaged before treatment and under chemotherapy was classified within several subgroups, showing best discrimination with 96% correct classification in non-linear discriminant analysis of T2-weighted images. Texture parameters of the MRI data were successfully tested with statistical tests to assess the impact of the separability of the parameters in evaluating chemotherapy response in lymphoma tissue. Conclusion Texture characteristics of MRI data were classified successfully; this proved texture analysis to be a potential quantitative means of representing lymphoma tissue changes during chemotherapy response monitoring.

  16. Spectrophotometric Rapid-Response Classification of Near-Earth Objects

    Science.gov (United States)

    Mommert, Michael; Trilling, David; Butler, Nat; Axelrod, Tim; Moskovitz, Nick; Jedicke, Robert; Pichardo, Barbara; Reyes-Ruiz, Mauricio

    2015-08-01

    Small NEOs are, as a whole, poorly characterized, and we know nothing about the physical properties of the majority of all NEOs. The rate of NEO discoveries is increasing each year, and projects to determine the physical properties of NEOs are lagging behind. NEOs are faint, and generally even fainter by the time that follow-up characterizations can be made days or weeks after their discovery. There is a need for a high-throughput, high-efficiency physical characterization strategy in which hundreds of faint NEOs can be characterized each year. Broadband photometry in the near-infrared is sufficiently diagnostic to assign taxonomic types, and hence constrain both the individual and ensemble properties of NEOs. We present results from our rapid response near-infrared spectrophotometric characterization program of NEOs. We are using UKIRT (on Mauna Kea) and the RATIR instrument on the 1.5m telescope at the San Pedro Martir Observatory (Mexico) to allow us to make observations most nights of the year in robotic/queue mode. We derive taxonomic classifications for our targets using machine-learning techniques that are trained on a large sample of measured asteroid spectra. For each target we assign a probability for it to belong to a number of different taxa. Target selection, observation, data reduction, and analysis are highly automated, requiring only a minimum of user interaction, making this technique powerful and fast. Our targets are NEOs that are generally too faint for other characterization techniques, or would require many hours of large telescope time.

  17. Classifying Classifications

    DEFF Research Database (Denmark)

    Debus, Michael S.

    2017-01-01

    This paper critically analyzes seventeen game classifications. The classifications were chosen on the basis of diversity, ranging from pre-digital classification (e.g. Murray 1952), over game studies classifications (e.g. Elverdam & Aarseth 2007) to classifications of drinking games (e.g. LaBrie et al. 2013). The analysis aims at three goals: The classifications’ internal consistency, the abstraction of classification criteria and the identification of differences in classification across fields and/or time. Especially the abstraction of classification criteria can be used in future endeavors into the topic of game classifications.

  18. Mid-level image representations for real-time heart view plane classification of echocardiograms.

    Science.gov (United States)

    Penatti, Otávio A B; Werneck, Rafael de O; de Almeida, Waldir R; Stein, Bernardo V; Pazinato, Daniel V; Mendes Júnior, Pedro R; Torres, Ricardo da S; Rocha, Anderson

    2015-11-01

    In this paper, we explore mid-level image representations for real-time heart view plane classification of 2D echocardiogram ultrasound images. The proposed representations rely on bags of visual words, successfully used by the computer vision community in visual recognition problems. An important element of the proposed representations is the image sampling with large regions, drastically reducing the execution time of the image characterization procedure. Throughout an extensive set of experiments, we evaluate the proposed approach against different image descriptors for classifying four heart view planes. The results show that our approach is effective and efficient for the target problem, making it suitable for use in real-time setups. The proposed representations are also robust to different image transformations (e.g., downsampling and noise filtering) and to different machine learning classifiers, keeping classification accuracy above 90%. Feature extraction can be performed at 30 fps, or 60 fps in some cases. This paper also includes an in-depth review of the literature in the area of automatic echocardiogram view classification, giving the reader a thorough comprehension of this field of study. Copyright © 2015 Elsevier Ltd. All rights reserved.
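
    The bag-of-visual-words backbone of such mid-level representations can be sketched as below: local descriptors (their extraction is abstracted away here) are quantized against a k-means vocabulary, and each image becomes a normalized word histogram fed to a linear SVM. All sizes and labels are synthetic placeholders.

```python
# Sketch of a bag-of-visual-words pipeline: local descriptors -> k-means
# vocabulary -> per-image word histogram -> linear SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
train_descs = [rng.normal(size=(50, 32)) for _ in range(60)]   # 60 images, 50 local descriptors each
train_labels = rng.integers(0, 4, size=60)                     # hypothetical view-plane labels

vocab = KMeans(n_clusters=16, n_init=10, random_state=0).fit(np.vstack(train_descs))

def bovw_histogram(descs, vocab):
    words = vocab.predict(descs)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / hist.sum()

X = np.array([bovw_histogram(d, vocab) for d in train_descs])
clf = LinearSVC().fit(X, train_labels)
print("training accuracy:", clf.score(X, train_labels))
```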

  19. Time series classification using k-Nearest neighbours, Multilayer Perceptron and Learning Vector Quantization algorithms

    Directory of Open Access Journals (Sweden)

    Jiří Fejfar

    2012-01-01

    Full Text Available In this paper we present a comparison of three artificial intelligence algorithms for the classification of time series derived from musical excerpts. The algorithms were chosen to represent different principles of classification: a statistical approach, neural networks, and competitive learning. The first algorithm is the classical k-Nearest neighbours algorithm, the second is the Multilayer Perceptron (MLP), an example of an artificial neural network, and the third is the Learning Vector Quantization (LVQ) algorithm, the supervised counterpart to the unsupervised Self-Organizing Map (SOM). After our earlier experiments with unlabelled data, we moved on to using data labels, which generally led to better classification accuracy. As we need a large data set of labelled time series (a priori knowledge of the correct class to which each time series instance belongs), we used musical excerpts, with which we have had good experience in former studies, as a source of real-world time series. We use the standard deviation of the sound signal as a descriptor of a musical excerpt's volume level. We briefly describe the principle of each algorithm as well as its implementation, giving links for further research. The classification results of each algorithm are presented in a confusion matrix showing the numbers of misclassifications and allowing the overall accuracy of the algorithm to be evaluated. The results are compared and particular misclassifications are discussed for each algorithm. Finally, the best solution is chosen and further research goals are given.
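
    The simplest of the three pipelines can be sketched as follows: the frame-wise standard deviation serves as a volume-level descriptor of each excerpt, and the resulting descriptor vectors are classified with k-nearest neighbours. The frame length, k and the synthetic signals below are illustrative assumptions.

```python
# Sketch: frame-wise standard deviation as a volume-level descriptor,
# classified with k-nearest neighbours. Signals and labels are synthetic.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def volume_descriptor(signal, frame_len=1024):
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    return frames.std(axis=1)              # one std value per frame

rng = np.random.default_rng(4)
excerpts = [rng.normal(scale=rng.uniform(0.5, 2.0), size=32768) for _ in range(30)]
labels = rng.integers(0, 3, size=30)       # hypothetical class labels

X = np.array([volume_descriptor(s) for s in excerpts])
clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```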

  20. Joint Time-Frequency-Space Classification of EEG in a Brain-Computer Interface Application

    Directory of Open Access Journals (Sweden)

    Molina Gary N Garcia

    2003-01-01

    Full Text Available Brain-computer interfaces are a growing field of interest in human-computer interaction with diverse applications ranging from medicine to entertainment. In this paper, we present a system which allows for classification of mental tasks based on a joint time-frequency-space decorrelation, in which mental tasks are measured via electroencephalogram (EEG) signals. The efficiency of this approach was evaluated by means of real-time experiments on two subjects performing three different mental tasks. To do so, a number of protocols for visualization, as well as training with and without feedback, were also developed. The results show that it is possible to obtain good classification of simple mental tasks, in view of command and control, after a relatively small amount of training, with accuracies around 80%, and in real time.

  1. Pros and cons of conjoint analysis of discrete choice experiments to define classification and response criteria in rheumatology.

    Science.gov (United States)

    Taylor, William J

    2016-03-01

    Conjoint analysis of choice or preference data has been used in marketing for over 40 years but has appeared in healthcare settings much more recently. It may be a useful technique for applications within the rheumatology field. Conjoint analysis in rheumatology contexts has mainly used the approaches implemented by 1000Minds Ltd (Dunedin, New Zealand) and Sawtooth Software (Orem, UT, USA). Examples include classification criteria, composite response criteria, service prioritization tools and utilities assessment. Limitations imposed by very many attributes can be managed using new techniques. Conjoint analysis studies of classification and response criteria suggest that the assumption of equal weighting of attributes cannot be met, which challenges traditional approaches to composite criteria construction. Weights elicited through choice experiments with experts can derive more accurate classification criteria than unweighted criteria. Studies that find significant variation in attribute weights for composite response criteria for gout make construction of such criteria problematic. Better understanding of various multiattribute phenomena is likely to increase with increased use of conjoint analysis, especially when the attributes concern individual perceptions or opinions. In addition to classification criteria, some applications for conjoint analysis that are emerging in rheumatology include prioritization tools, remission criteria, and utilities for life areas.

  2. Classification of Physical Activity: Information to Artificial Pancreas Control Systems in Real Time.

    Science.gov (United States)

    Turksoy, Kamuran; Paulino, Thiago Marques Luz; Zaharieva, Dessi P; Yavelberg, Loren; Jamnik, Veronica; Riddell, Michael C; Cinar, Ali

    2015-10-06

    Physical activity has a wide range of effects on glucose concentrations in type 1 diabetes (T1D) depending on the type (ie, aerobic, anaerobic, mixed) and duration of activity performed. This variability in glucose responses to physical activity makes the development of artificial pancreas (AP) systems challenging. Automatic detection of exercise type and intensity, and its classification as aerobic or anaerobic would provide valuable information to AP control algorithms. This can be achieved by using a multivariable AP approach where biometric variables are measured and reported to the AP at high frequency. We developed a classification system that identifies, in real time, the exercise intensity and its reliance on aerobic or anaerobic metabolism and tested this approach using clinical data collected from 5 persons with T1D and 3 individuals without T1D in a controlled laboratory setting using a variety of common types of physical activity. The classifier had an average sensitivity of 98.7% for physiological data collected over a range of exercise modalities and intensities in these subjects. The classifier will be added as a new module to the integrated multivariable adaptive AP system to enable the detection of aerobic and anaerobic exercise for enhancing the accuracy of insulin infusion strategies during and after exercise. © 2015 Diabetes Technology Society.

  3. Classification of brain tumours using short echo time 1H MR spectra

    Science.gov (United States)

    Devos, A.; Lukas, L.; Suykens, J. A. K.; Vanhamme, L.; Tate, A. R.; Howe, F. A.; Majós, C.; Moreno-Torres, A.; van der Graaf, M.; Arús, C.; Van Huffel, S.

    2004-09-01

    The purpose was to objectively compare the application of several techniques and the use of several input features for brain tumour classification using Magnetic Resonance Spectroscopy (MRS). Short echo time 1H MRS signals from patients with glioblastomas (n = 87), meningiomas (n = 57), metastases (n = 39), and astrocytomas grade II (n = 22) were provided by six centres in the European Union funded INTERPRET project. Linear discriminant analysis, least squares support vector machines (LS-SVM) with a linear kernel and LS-SVM with radial basis function kernel were applied and evaluated over 100 stratified random splittings of the dataset into training and test sets. The area under the receiver operating characteristic curve (AUC) was used to measure the performance of binary classifiers, while the percentage of correct classifications was used to evaluate the multiclass classifiers. The influence of several factors on the classification performance has been tested: L2- vs. water normalization, magnitude vs. real spectra and baseline correction. The effect of input feature reduction was also investigated by using only the selected frequency regions containing the most discriminatory information, and peak integrated values. Using L2-normalized complete spectra the automated binary classifiers reached a mean test AUC of more than 0.95, except for glioblastomas vs. metastases. Similar results were obtained for all classification techniques and input features except for water normalized spectra, where classification performance was lower. This indicates that data acquisition and processing can be simplified for classification purposes, excluding the need for separate water signal acquisition, baseline correction or phasing.
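
    The evaluation protocol can be sketched as follows: binary classifiers are compared by their mean test AUC over repeated stratified random splits. In the sketch, scikit-learn's SVC stands in for the LS-SVM used in the study, and the spectra are synthetic.

```python
# Sketch of the evaluation protocol: mean test AUC over 100 stratified
# random splits for two binary classifiers. Data are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 200))            # 120 spectra x 200 points (synthetic)
y = rng.integers(0, 2, size=120)           # e.g., meningioma vs. glioblastoma

splitter = StratifiedShuffleSplit(n_splits=100, test_size=0.3, random_state=0)
for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM-RBF", SVC(kernel="rbf", probability=True))]:
    aucs = []
    for tr, te in splitter.split(X, y):
        clf.fit(X[tr], y[tr])
        aucs.append(roc_auc_score(y[te], clf.predict_proba(X[te])[:, 1]))
    print(name, "mean test AUC:", np.mean(aucs))
```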

  4. The family and family structure classification redefined for the current times

    Directory of Open Access Journals (Sweden)

    Rahul Sharma

    2013-01-01

    Full Text Available The family is a basic unit of study in many medical and social science disciplines. Definitions of family have varied from country to country, and also within country. Because of this and the changing realities of the current times, there is a felt need for redefining the family and the common family structure types, for the purpose of study of the family as a factor in health and other variables of interest. A redefinition of a "family" has been proposed and various nuances of the definition are also discussed in detail. A classification scheme for the various types of family has also been put forward. A few exceptional case scenarios have been envisaged and their classification as per the new scheme is discussed, in a bid to clarify the classification scheme further. The proposed scheme should prove to be of use across various countries and cultures, for broadly classifying the family structure. The unique scenarios of particular cultures can be taken into account by defining region or culture-specific subtypes of the overall types of family structure.

  5. Real-time classification of humans versus animals using profiling sensors and hidden Markov tree model

    Science.gov (United States)

    Hossen, Jakir; Jacobs, Eddie L.; Chari, Srikant

    2015-07-01

    Linear pyroelectric array sensors have enabled useful classifications of objects such as humans and animals to be performed with relatively low-cost hardware in border and perimeter security applications. Ongoing research has sought to improve the performance of these sensors through signal processing algorithms. In the research presented here, we introduce the use of hidden Markov tree (HMT) models for object recognition in images generated by linear pyroelectric sensors. HMTs are trained to statistically model the wavelet features of individual objects through an expectation-maximization learning process. Human versus animal classification for a test object is made by evaluating its wavelet features against the trained HMTs using the maximum-likelihood criterion. The classification performance of this approach is compared to two other techniques: a texture, shape, and spectral component features (TSSF) based classifier and a speeded-up robust feature (SURF) classifier. The evaluation indicates that among the three techniques, the wavelet-based HMT model works well, is robust, and has improved classification performance compared to a SURF-based algorithm in equivalent computation time. When compared to the TSSF-based classifier, the HMT model has a slightly degraded performance but almost an order of magnitude improvement in computation time, enabling real-time implementation.

  6. Real-Time Subject-Independent Pattern Classification of Overt and Covert Movements from fNIRS Signals.

    Directory of Open Access Journals (Sweden)

    Neethu Robinson

    Full Text Available Recently, studies have reported the use of Near Infrared Spectroscopy (NIRS) for developing Brain-Computer Interfaces (BCI) by applying online pattern classification of brain states from subject-specific fNIRS signals. The purpose of the present study was to develop and test a real-time method for subject-specific and subject-independent classification of multi-channel fNIRS signals using support-vector machines (SVM), so as to determine its feasibility as an online neurofeedback system. Towards this goal, we used left versus right hand movement execution and movement imagery as study paradigms in a series of experiments. In the first two experiments, activations in the motor cortex during movement execution and movement imagery were used to develop subject-dependent models that obtained high classification accuracies, thereby indicating the robustness of our classification method. In the third experiment, a generalized classifier model was developed from the data of the first two experiments and then applied for subject-independent neurofeedback training. Application of this method in new participants showed a mean classification accuracy of 63% for movement imagery tasks and 80% for movement execution tasks. These results, and their corresponding offline analysis reported in this study, demonstrate that SVM-based real-time subject-independent classification of fNIRS signals is feasible. This method has important applications in the field of hemodynamic BCIs and neuro-rehabilitation, where patients can be trained to learn spatio-temporal patterns of healthy brain activity.

  7. A Study of Time Response for Safety-Related Operator Actions in Non-LOCA Safety Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Min Seok; Lee, Sang Seob; Park, Min Soo; Lee, Gyu Cheon; Kim, Shin Whan [KEPCO E and C Company, Daejeon (Korea, Republic of)

    2014-10-15

    Initiating events for safety analysis report (SAR) chapter 15 are categorized into moderate frequency events (MF), infrequent events (IF), and limiting faults (LF) depending on the frequency of their occurrence. For non-LOCA safety analyses performed to obtain a construction or operating license, however, it is assumed that the operator response action to mitigate the event starts 30 minutes after the initiation of the transient, regardless of the event categorization. Such an assumption about the operator response time may be overly conservative for the MF and IF events and results in a decrease in the safety margin relative to the acceptance criteria. In this paper, the plant conditions (PC) are categorized according to the definitions in SAR 15 and ANS 51.1. Then, the consequence of the response for the safety-related operator action time is determined based on the PC in ANSI 58.8. The operator response times for safety analysis with respect to PC are reviewed and suggested. A clarifying alarm response procedure would be required as a guideline to reduce the operator response time when the alarms indicate the occurrence of the transient.

  8. Using Hierarchical Time Series Clustering Algorithm and Wavelet Classifier for Biometric Voice Classification

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2012-01-01

    Full Text Available Voice biometrics has a long history in biosecurity applications such as verification and identification based on characteristics of the human voice. The other application, voice classification, which plays an important role in grouping unlabelled voice samples, has however not been widely studied. Lately, voice classification has been found useful in phone monitoring, classifying speakers’ gender, ethnicity and emotional states, and so forth. In this paper, a collection of computational algorithms is proposed to support voice classification; the algorithms are a combination of hierarchical clustering, the dynamic time warp transform, the discrete wavelet transform, and decision trees. The proposed algorithms are relatively more transparent and interpretable than existing ones, though many techniques such as Artificial Neural Networks, Support Vector Machines, and Hidden Markov Models (which inherently function like black boxes) have been applied for voice verification and voice identification. Two datasets, one generated synthetically and the other collected empirically from a past voice recognition experiment, are used to verify and demonstrate the effectiveness of our proposed voice classification algorithm.
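
    The clustering half of such a pipeline can be sketched with a dynamic-time-warping distance between feature sequences fed to agglomerative hierarchical clustering; the wavelet and decision-tree stages are omitted, and the sequences below are synthetic stand-ins for voice features.

```python
# Sketch: DTW distance matrix between feature sequences, fed to
# hierarchical clustering. Sequences are synthetic stand-ins.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(6)
samples = [np.cumsum(rng.normal(size=rng.integers(40, 60))) for _ in range(12)]

dist = np.array([[dtw_distance(a, b) for b in samples] for a in samples])
Z = linkage(squareform(dist, checks=False), method="average")
print("cluster labels:", fcluster(Z, t=3, criterion="maxclust"))
```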

  9. Ensemble Deep Learning for Biomedical Time Series Classification

    Directory of Open Access Journals (Sweden)

    Lin-peng Jin

    2016-01-01

    Full Text Available Ensemble learning has been proven to improve generalization ability effectively in both theory and practice. In this paper, we first briefly outline the current status of research on ensemble learning. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database containing a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost.

  10. Real-time classification of auditory sentences using evoked cortical activity in humans

    Science.gov (United States)

    Moses, David A.; Leonard, Matthew K.; Chang, Edward F.

    2018-06-01

    Objective. Recent research has characterized the anatomical and functional basis of speech perception in the human auditory cortex. These advances have made it possible to decode speech information from activity in brain regions like the superior temporal gyrus, but no published work has demonstrated this ability in real-time, which is necessary for neuroprosthetic brain-computer interfaces. Approach. Here, we introduce a real-time neural speech recognition (rtNSR) software package, which was used to classify spoken input from high-resolution electrocorticography signals in real-time. We tested the system with two human subjects implanted with electrode arrays over the lateral brain surface. Subjects listened to multiple repetitions of ten sentences, and rtNSR classified what was heard in real-time from neural activity patterns using direct sentence-level and HMM-based phoneme-level classification schemes. Main results. We observed single-trial sentence classification accuracies of 90% or higher for each subject with less than 7 minutes of training data, demonstrating the ability of rtNSR to use cortical recordings to perform accurate real-time speech decoding in a limited vocabulary setting. Significance. Further development and testing of the package with different speech paradigms could influence the design of future speech neuroprosthetic applications.

  11. Detecting lies about consumer attitudes using the timed antagonistic response alethiometer.

    Science.gov (United States)

    Gregg, Aiden P; Mahadevan, Nikhila; Edwards, Sonja E; Klymowsky, James

    2014-09-01

    The Timed Antagonistic Response Alethiometer (TARA) is a true-false statement classification task that diagnoses lying on the basis of slower average response speeds. Previous research (Gregg in Applied Cognitive Psychology, 21, 621-647, 2007) showed that a computer-based TARA was about 80 % accurate when its statements conveyed demographic facts or religious views. Here, we tested the TARA's diagnostic potential when its statements conveyed attitudes-here, toward both branded and generic consumer products-across different versions of the TARA (Exps. 1a, 1b, and 1c), as well as across consecutive administrations (Exp. 2). The results generalized well across versions, and maximal accuracy rates exceeding 80 % were obtained, although accuracy declined somewhat upon readministration. Overall, the TARA shows promise as a comparatively cheap, convenient, and diagnostic index of lying about attitudes.

  12. A new patient classification for laser resurfacing and peels: predicting responses, risks, and results.

    Science.gov (United States)

    Fanous, Nabil

    2002-01-01

    Traditional classifications for skin treatment modalities are based on skin characteristics, the most important being skin color. Other factors are considered as well, such as oiliness, thickness, pathology, and sensitivity. While useful, these classifications are occasionally inadequate in predicting and explaining the outcome of some peels, dermabrasions, or laser resurfacing procedures. Why, for example, would a Korean patient with a light white skin inadvertently develop more hyperpigmentation than his darker skinned French counterpart? The new classification introduced here is based on the racial and genetic origins of patients. It suggests that racial genetic predisposition is the determining factor in human response to skin injury, including skin treatments. This classification takes into account both skin and features, rather than skin alone. It offers a new approach in evaluating patients scheduled for skin peels or laser resurfacing, in the hope of helping physicians to better predict reactions, select the appropriate type and intensity of the skin treatment and, ultimately, better control the outcome. Six categories (sub-races) are described: Nordics, Europeans, Mediterraneans, Indo-Pakistanis, Africans, and Asians. The reaction of each sub-race to peels, laser resurfacing, or dermabrasion is analyzed. The risks associated with each group are noted. This new classification provides physicians with a practical way to evaluate patients prior to treatment, with a view to determining each patient's suitability, postoperative reaction, the likelihood of complications, and likely result.

  13. Automated classification of Permanent Scatterers time-series based on statistical characterization tests

    Science.gov (United States)

    Berti, Matteo; Corsini, Alessandro; Franceschini, Silvia; Iannacone, Jean Pascal

    2013-04-01

    The application of spaceborne synthetic aperture radar interferometry has progressed, over the last two decades, from the pioneering use of single interferograms for analyzing changes on the earth's surface to the development of advanced multi-interferogram techniques to analyze any sort of natural phenomenon that involves movements of the ground. The success of multi-interferogram techniques in the analysis of natural hazards such as landslides and subsidence is widely documented in the scientific literature and demonstrated by the consensus among end-users. Despite the great potential of this technique, radar interpretation of slope movements is generally based on the sole analysis of average displacement velocities, while the information embraced in multi-interferogram time series is often overlooked if not completely neglected. The underuse of PS time series is probably due to the detrimental effect of residual atmospheric errors, which leave the PS time series characterized by erratic, irregular fluctuations that are often difficult to interpret, and also to the difficulty of performing a visual, supervised analysis of the time series for a large dataset. In this work we present a procedure for automatic classification of PS time series based on a series of statistical characterization tests. The procedure allows the time series to be classified into six distinctive target trends (0=uncorrelated; 1=linear; 2=quadratic; 3=bilinear; 4=discontinuous without constant velocity; 5=discontinuous with change in velocity) and retrieves for each trend a series of descriptive parameters which can be efficiently used to characterize the temporal changes of ground motion. The classification algorithms were developed and tested using an ENVISAT dataset available in the frame of the EPRS-E project (Extraordinary Plan of Environmental Remote Sensing) of the Italian Ministry of Environment (track "Modena", Northern Apennines). This dataset was generated using standard processing, then the

  14. Machine learning algorithms for mode-of-action classification in toxicity assessment.

    Science.gov (United States)

    Zhang, Yile; Wong, Yau Shu; Deng, Jian; Anton, Cristina; Gabos, Stephan; Zhang, Weiping; Huang, Dorothy Yu; Jin, Can

    2016-01-01

    Real Time Cell Analysis (RTCA) technology is used to monitor cellular changes continuously over the entire exposure period. Combined with different testing concentrations, the profiles have potential for probing the mode of action (MOA) of the tested substances. In this paper, we present machine learning approaches for MOA assessment. Computational tools based on artificial neural networks (ANN) and support vector machines (SVM) are developed to analyze the time-concentration response curves (TCRCs) of human cell lines responding to tested chemicals. The techniques are capable of learning from given TCRCs with known MOA information and then making MOA classifications for unknown toxicity. A novel data processing step based on the wavelet transform is introduced to extract important features from the original TCRC data. From the dose response curves, the time interval leading to a higher classification success rate can be selected as input to enhance the performance of the machine learning algorithm. This is particularly helpful when handling cases with limited and imbalanced data. The validation of the proposed method is demonstrated by the supervised learning algorithm applied to the exposure data of the HepG2 cell line to 63 chemicals with 11 concentrations in each test case. Classification success rates in the range of 85 to 95% are obtained using SVM for MOA classification with two to four clusters. The wavelet transform is capable of capturing important features of TCRCs for MOA classification. The proposed SVM scheme incorporating the wavelet transform has great potential for large-scale MOA classification and high-throughput chemical screening.
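
    The feature step can be sketched as follows: a discrete wavelet transform compresses each time-concentration response curve (TCRC) into per-sub-band energy features, which are then classified with an SVM. The wavelet choice, decomposition level, curves and labels below are illustrative assumptions.

```python
# Sketch: wavelet sub-band energies of each TCRC as features for an SVM.
# Curves and labels are synthetic.
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_features(curve, wavelet="db4", level=3):
    coeffs = pywt.wavedec(curve, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])   # energy per sub-band

rng = np.random.default_rng(7)
curves = rng.normal(size=(60, 128))        # 60 synthetic TCRCs of 128 time points
labels = rng.integers(0, 2, size=60)       # hypothetical MOA clusters

X = np.array([wavelet_features(c) for c in curves])
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```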

  15. Adaptation or Resistance: a classification of responses to sea-level rise

    Science.gov (United States)

    Cooper, J. A.

    2016-02-01

    Societal responses to sea level rise and associated coastal change are apparently diverse in nature and motivation. Most are commonly referred to as 'adaptation'. Based on a review of current practice, however, it is argued that many of these responses do not involve adaptation, but rather resist change. There are several instances where formerly adaptive initiatives involving human adaptability are being replaced by initiatives that resist change. A classification is presented that recognises a continuum of responses ranging from adaptation to resistance, depending upon the willingness to change human activities to accommodate environmental change. In many cases climate change adaptation resources are being used for projects that are purely resistant and which foreclose future adaptation options. It is argued that a more concise definition of adaptation is needed if coastal management is to move beyond the current position of holding the shoreline, other than in a few showcase examples.

  16. Predicting decisions in human social interactions using real-time fMRI and pattern classification.

    Science.gov (United States)

    Hollmann, Maurice; Rieger, Jochem W; Baecke, Sebastian; Lützkendorf, Ralf; Müller, Charles; Adolf, Daniela; Bernarding, Johannes

    2011-01-01

    Negotiation and trade typically require a mutual interaction while simultaneously resting in uncertainty which decision the partner ultimately will make at the end of the process. Assessing already during the negotiation in which direction one's counterpart tends would provide a tremendous advantage. Recently, neuroimaging techniques combined with multivariate pattern classification of the acquired data have made it possible to discriminate subjective states of mind on the basis of their neuronal activation signature. However, to enable an online-assessment of the participant's mind state both approaches need to be extended to a real-time technique. By combining real-time functional magnetic resonance imaging (fMRI) and online pattern classification techniques, we show that it is possible to predict human behavior during social interaction before the interacting partner communicates a specific decision. Average accuracy reached approximately 70% when we predicted online the decisions of volunteers playing the ultimatum game, a well-known paradigm in economic game theory. Our results demonstrate the successful online analysis of complex emotional and cognitive states using real-time fMRI, which will enable a major breakthrough for social fMRI by providing information about mental states of partners already during the mutual interaction. Interestingly, an additional whole brain classification across subjects confirmed the online results: anterior insula, ventral striatum, and lateral orbitofrontal cortex, known to act in emotional self-regulation and reward processing for adjustment of behavior, appeared to be strong determinants of later overt behavior in the ultimatum game. Using whole brain classification we were also able to discriminate between brain processes related to subjective emotional and motivational states and brain processes related to the evaluation of objective financial incentives.

  18. Classification of frequency response areas in the inferior colliculus reveals continua not discrete classes.

    Science.gov (United States)

    Palmer, Alan R; Shackleton, Trevor M; Sumner, Christian J; Zobay, Oliver; Rees, Adrian

    2013-08-15

    A differential response to sound frequency is a fundamental property of auditory neurons. Frequency analysis in the cochlea gives rise to V-shaped tuning functions in auditory nerve fibres, but by the level of the inferior colliculus (IC), the midbrain nucleus of the auditory pathway, neuronal receptive fields display diverse shapes that reflect the interplay of excitation and inhibition. The origin and nature of these frequency receptive field types is still open to question. One proposed hypothesis is that the frequency response class of any given neuron in the IC is predominantly inherited from one of three major afferent pathways projecting to the IC, giving rise to three distinct receptive field classes. Here, we applied subjective classification, principal component analysis, cluster analysis, and other objective statistical measures, to a large population (2826) of frequency response areas from single neurons recorded in the IC of the anaesthetised guinea pig. Subjectively, we recognised seven frequency response classes (V-shaped, non-monotonic Vs, narrow, closed, tilt down, tilt up and double-peaked), that were represented at all frequencies. We could identify similar classes using our objective classification tools. Importantly, however, many neurons exhibited properties intermediate between these classes, and none of the objective methods used here showed evidence of discrete response classes. Thus receptive field shapes in the IC form continua rather than discrete classes, a finding consistent with the integration of afferent inputs in the generation of frequency response areas. The frequency disposition of inhibition in the response areas of some neurons suggests that across-frequency inputs originating at or below the level of the IC are involved in their generation.

  19. Site classification of Indian strong motion network using response spectra ratios

    Science.gov (United States)

    Chopra, Sumer; Kumar, Vikas; Choudhury, Pallabee; Yadav, R. B. S.

    2018-03-01

    In the present study, we tried to classify the Indian strong motion sites spread all over the Himalaya and adjoining region, located on varied geological formations, based on response spectral ratios. A total of 90 sites were classified based on 395 strong motion records from 94 earthquakes recorded at these sites. The magnitudes of these earthquakes are between 2.3 and 7.7, and the hypocentral distance in most cases is less than 50 km. The predominant period obtained from response spectral ratios is used to classify these sites. It was found that the shape and predominant peaks of the spectra at these sites match those in Japan, Italy, Iran, and at some of the sites in Europe, and that the same classification scheme can be applied to the Indian strong motion network. We found that the earlier schemes based on description of near-surface geology, geomorphology, and topography were not able to capture the effect of sediment thickness. The sites are classified into seven classes (CL-I to CL-VII) with varying predominant periods and ranges as proposed by Alessandro et al. (Bull Seismol Soc Am 102:680-695, 2012). The effect of magnitudes and hypocentral distances on the shape and predominant peaks was also studied and found to be very small. The classification scheme is robust and cost-effective and can be used in region-specific attenuation relationships to account for local site effects.

  20. Linear Discriminant Analysis achieves high classification accuracy for the BOLD fMRI response to naturalistic movie stimuli.

    Directory of Open Access Journals (Sweden)

    Hendrik eMandelkow

    2016-03-01

    Full Text Available Naturalistic stimuli like movies evoke complex perceptual processes, which are of great interest in the study of human cognition by functional MRI (fMRI). However, conventional fMRI analysis based on statistical parametric mapping (SPM) and the general linear model (GLM) is hampered by a lack of accurate parametric models of the BOLD response to complex stimuli. In this situation, statistical machine-learning methods, a.k.a. multivariate pattern analysis (MVPA), have received growing attention for their ability to generate stimulus response models in a data-driven fashion. However, machine-learning methods typically require large amounts of training data as well as computational resources. In the past this has largely limited their application to fMRI experiments involving small sets of stimulus categories and small regions of interest in the brain. By contrast, the present study compares several classification algorithms known as Nearest Neighbour (NN), Gaussian Naïve Bayes (GNB), and (regularised) Linear Discriminant Analysis (LDA) in terms of their classification accuracy in discriminating the global fMRI response patterns evoked by a large number of naturalistic visual stimuli presented as a movie. Results show that LDA regularised by principal component analysis (PCA) achieved high classification accuracies, above 90% on average for single fMRI volumes acquired 2 s apart during a 300 s movie (chance level 0.7% = 2 s/300 s). The largest source of classification errors was autocorrelation in the BOLD signal compounded by the similarity of consecutive stimuli. All classifiers performed best when given input features from a large region of interest comprising around 25% of the voxels that responded significantly to the visual stimulus. Consistent with this, the most informative principal components represented widespread distributions of co-activated brain regions that were similar between subjects and may represent functional networks. In light of these

  1. Reconstruction of road defects and road roughness classification using vehicle responses with artificial neural networks simulation

    CSIR Research Space (South Africa)

    Ngwangwa, HM

    2010-04-01

    Full Text Available Journal of Terramechanics, Volume 47, Issue 2, April 2010, Pages 97-111. Reconstruction of road defects and road roughness classification using vehicle responses with artificial neural networks simulation. H.M. Ngwangwa, P.S. Heyns, F...

  2. Low-cost real-time automatic wheel classification system

    Science.gov (United States)

    Shabestari, Behrouz N.; Miller, John W. V.; Wedding, Victoria

    1992-11-01

    This paper describes the design and implementation of a low-cost machine vision system for identifying various types of automotive wheels which are manufactured in several styles and sizes. In this application, a variety of wheels travel on a conveyor in random order through a number of processing steps. One of these processes requires the identification of the wheel type which was performed manually by an operator. A vision system was designed to provide the required identification. The system consisted of an annular illumination source, a CCD TV camera, frame grabber, and 386-compatible computer. Statistical pattern recognition techniques were used to provide robust classification as well as a simple means for adding new wheel designs to the system. Maintenance of the system can be performed by plant personnel with minimal training. The basic steps for identification include image acquisition, segmentation of the regions of interest, extraction of selected features, and classification. The vision system has been installed in a plant and has proven to be extremely effective. The system correctly identifies the wheels at rates of up to 30 wheels per minute, regardless of rotational orientation in the camera's field of view. Correct classification can even be achieved if a portion of the wheel is blocked off from the camera. Significant cost savings have been achieved by a reduction in scrap associated with incorrect manual classification as well as a reduction of labor in a tedious task.

  3. A New Methodology Based on Imbalanced Classification for Predicting Outliers in Electricity Demand Time Series

    Directory of Open Access Journals (Sweden)

    Francisco Javier Duque-Pintor

    2016-09-01

    Full Text Available The occurrence of outliers in real-world phenomena is quite usual. If these anomalous data are not properly treated, unreliable models can be generated. Many approaches in the literature are focused on a posteriori detection of outliers. However, a new methodology to a priori predict the occurrence of such data is proposed here. Thus, the main goal of this work is to predict the occurrence of outliers in time series, by using, for the first time, imbalanced classification techniques. In this sense, the problem of forecasting outlying data has been transformed into a binary classification problem, in which the positive class represents the occurrence of outliers. Given that the number of outliers is much lower than the number of common values, the resultant classification problem is imbalanced. To create training and test sets, robust statistical methods have been used to detect outliers in both sets. Once the outliers have been detected, the instances of the dataset are labeled accordingly. Namely, if any of the samples composing the next instance are detected as an outlier, the label is set to one. As a study case, the methodology has been tested on electricity demand time series in the Spanish electricity market, in which most of the outliers were properly forecast.
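
    The transformation described above (forecast outliers by labelling them with a robust rule and training a class-weighted classifier on lagged values) can be sketched as follows; the median-absolute-deviation labelling rule, the 24-lag feature window and the random forest are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: recast outlier forecasting as imbalanced binary classification on lagged demand values.
# The MAD labelling rule, lag window and random forest are illustrative, not the paper's procedure.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
demand = 100 + 10 * np.sin(np.arange(2000) * 2 * np.pi / 24) + rng.normal(0, 2, 2000)
spikes = rng.choice(2000, size=40, replace=False)
demand[spikes] += rng.normal(40, 5, size=40)               # inject rare anomalous values

med = np.median(demand)
mad = np.median(np.abs(demand - med))
is_outlier = (np.abs(demand - med) > 5 * 1.4826 * mad).astype(int)   # robust labelling

lags = 24
X = np.array([demand[t - lags:t] for t in range(lags, len(demand))]) # previous 24 values
y = is_outlier[lags:]                                                # is the next value an outlier?

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```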

  4. Graph-based semi-supervised learning with genomic data integration using condition-responsive genes applied to phenotype classification.

    Science.gov (United States)

    Doostparast Torshizi, Abolfazl; Petzold, Linda R

    2018-01-01

    Data integration methods that combine data from different molecular levels such as genome, epigenome, transcriptome, etc., have received a great deal of interest in the past few years. It has been demonstrated that the synergistic effects of different biological data types can boost learning capabilities and lead to a better understanding of the underlying interactions among molecular levels. In this paper we present a graph-based semi-supervised classification algorithm that incorporates latent biological knowledge in the form of biological pathways with gene expression and DNA methylation data. The process of graph construction from biological pathways is based on detecting condition-responsive genes, where 3 sets of genes are finally extracted: all condition responsive genes, high-frequency condition-responsive genes, and P-value-filtered genes. The proposed approach is applied to ovarian cancer data downloaded from The Cancer Genome Atlas. Extensive numerical experiments demonstrate superior performance of the proposed approach compared to other state-of-the-art algorithms, including the latest graph-based classification techniques. Simulation results demonstrate that integrating various data types enhances classification performance and leads to a better understanding of interrelations between diverse omics data types. The proposed approach outperforms many of the state-of-the-art data integration algorithms. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  5. [Correlation coefficient-based principle and method for the classification of jump degree in hydrological time series].

    Science.gov (United States)

    Wu, Zi Yi; Xie, Ping; Sang, Yan Fang; Gu, Hai Ting

    2018-04-01

    The phenomenon of jump is one of the important external forms of hydrological variability under environmental changes, representing the adaption of hydrological nonlinear systems to the influence of external disturbances. Presently, the related studies mainly focus on the methods for identifying the jump positions and jump times in hydrological time series. In contrast, few studies have focused on the quantitative description and classification of jump degree in hydrological time series, which makes it difficult to understand the environmental changes and evaluate their potential impacts. Here, we proposed a theoretically reliable and easy-to-apply method for the classification of jump degree in hydrological time series, using the correlation coefficient as a basic index. The statistical tests verified the accuracy, reasonability, and applicability of this method. The relationship between the correlation coefficient and the jump degree of a series was described by a mathematical equation obtained by derivation. After that, several thresholds of correlation coefficients under different statistical significance levels were chosen, based on which the jump degree could be classified into five levels: no, weak, moderate, strong and very strong. Finally, our method was applied to five different observed hydrological time series, with diverse geographic and hydrological conditions in China. The results of the classification of jump degrees in those series accorded closely with their physical hydrological mechanisms, indicating the practicability of our method.
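
    A minimal sketch of the grading idea (correlate the series with a two-level step signal at the candidate jump point and map the correlation coefficient to a degree); the threshold values below are illustrative only, whereas the paper derives them from statistical significance levels.

```python
# Sketch: grade the jump degree of a hydrological series by the correlation coefficient between the
# series and a two-level step signal at a candidate jump point. Thresholds are illustrative only.
import numpy as np

def jump_degree(series, jump_index):
    step = np.where(np.arange(len(series)) < jump_index,
                    series[:jump_index].mean(), series[jump_index:].mean())
    r = abs(np.corrcoef(series, step)[0, 1])
    for label, threshold in [("very strong", 0.8), ("strong", 0.6),
                             ("moderate", 0.4), ("weak", 0.2)]:
        if r >= threshold:
            return r, label
    return r, "no jump"

rng = np.random.default_rng(2)
runoff = np.concatenate([rng.normal(50, 5, 60), rng.normal(70, 5, 60)])  # synthetic annual runoff
print(jump_degree(runoff, 60))
```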

  6. Is overall similarity classification less effortful than single-dimension classification?

    Science.gov (United States)

    Wills, Andy J; Milton, Fraser; Longmore, Christopher A; Hester, Sarah; Robinson, Jo

    2013-01-01

    It is sometimes argued that the implementation of an overall similarity classification is less effortful than the implementation of a single-dimension classification. In the current article, we argue that the evidence securely in support of this view is limited, and report additional evidence in support of the opposite proposition--overall similarity classification is more effortful than single-dimension classification. Using a match-to-standards procedure, Experiments 1A, 1B and 2 demonstrate that concurrent load reduces the prevalence of overall similarity classification, and that this effect is robust to changes in the concurrent load task employed, the level of time pressure experienced, and the short-term memory requirements of the classification task. Experiment 3 demonstrates that participants who produced overall similarity classifications from the outset have larger working memory capacities than those who produced single-dimension classifications initially, and Experiment 4 demonstrates that instructions to respond meticulously increase the prevalence of overall similarity classification.

  7. Non-invasive classification of severe sepsis and systemic inflammatory response syndrome using a nonlinear support vector machine: a preliminary study

    International Nuclear Information System (INIS)

    Tang, Collin H H; Savkin, Andrey V; Chan, Gregory S H; Middleton, Paul M; Bishop, Sarah; Lovell, Nigel H

    2010-01-01

    Sepsis has been defined as the systemic response to infection in critically ill patients, with severe sepsis and septic shock representing increasingly severe stages of the same disease. Based on the non-invasive cardiovascular spectrum analysis, this paper presents a pilot study on the potential use of the nonlinear support vector machine (SVM) in the classification of the sepsis continuum into severe sepsis and systemic inflammatory response syndrome (SIRS) groups. 28 consecutive eligible patients attending the emergency department with presumptive diagnoses of sepsis syndrome have participated in this study. Through principal component analysis (PCA), the first three principal components were used to construct the SVM feature space. The SVM classifier with a fourth-order polynomial kernel was found to have a better overall performance compared with the other SVM classifiers, showing the following classification results: sensitivity = 94.44%, specificity = 62.50%, positive predictive value = 85.00%, negative predictive value = 83.33% and accuracy = 84.62%. Our classification results suggested that the combinatory use of cardiovascular spectrum analysis and the proposed SVM classification of autonomic neural activity is a potentially useful clinical tool to classify the sepsis continuum into two distinct pathological groups of varying sepsis severity
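
    The classifier configuration reported above (first three principal components feeding a fourth-order polynomial-kernel SVM) can be sketched as follows with scikit-learn; the feature matrix is a random placeholder standing in for the cardiovascular spectral features of the 28 patients.

```python
# Sketch: first three principal components of cardiovascular spectral features feeding a
# fourth-order polynomial-kernel SVM (severe sepsis vs. SIRS). Features and labels are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(28, 20))            # placeholder spectral features for 28 patients
y = rng.integers(0, 2, size=28)          # 1 = severe sepsis, 0 = SIRS (placeholder labels)

model = make_pipeline(StandardScaler(),
                      PCA(n_components=3),
                      SVC(kernel="poly", degree=4, C=1.0))
acc = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
print(f"Leave-one-out accuracy: {acc:.2f}")
```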

  8. FACET CLASSIFICATIONS OF E-LEARNING TOOLS

    Directory of Open Access Journals (Sweden)

    Olena Yu. Balalaieva

    2013-12-01

    Full Text Available The article deals with the classification of e-learning tools based on the facet method, which suggests the separation of a parallel set of objects into independent classification groups; a rigid classification structure with pre-built finite groups is not assumed, and classification groups are formed by combining values taken from the relevant facets. An attempt to systematize the existing classifications of e-learning tools from the standpoint of classification theory is made for the first time. Modern Ukrainian and foreign facet classifications of e-learning tools are described; their positive and negative features compared to classifications based on a hierarchical method are analyzed. The original author's facet classification of e-learning tools is proposed.

  9. A real-time classification algorithm for EEG-based BCI driven by self-induced emotions.

    Science.gov (United States)

    Iacoviello, Daniela; Petracca, Andrea; Spezialetti, Matteo; Placidi, Giuseppe

    2015-12-01

    The aim of this paper is to provide an efficient, parametric, general, and completely automatic real-time classification method for electroencephalography (EEG) signals obtained from self-induced emotions. The particular characteristics of the considered low-amplitude signals (a self-induced emotion produces a signal whose amplitude is about 15% of a really experienced emotion) require exploring and adapting strategies like the Wavelet Transform, the Principal Component Analysis (PCA) and the Support Vector Machine (SVM) for signal processing, analysis and classification. Moreover, the method is intended for use in a multi-emotion based Brain Computer Interface (BCI) and, for this reason, an ad hoc shrewdness is assumed. The peculiarity of the brain activation requires ad-hoc signal processing by wavelet decomposition, and the definition of a set of features for signal characterization in order to discriminate different self-induced emotions. The proposed method is a two-stage, completely parameterized algorithm aiming at multi-class classification, and may be considered in the framework of machine learning. The first stage, the calibration, is off-line and is devoted to signal processing, the determination of the features and the training of a classifier. The second stage, the real-time one, is the test on new data. The PCA theory is applied to avoid redundancy in the set of features whereas the classification of the selected features, and therefore of the signals, is obtained by the SVM. Some experimental tests have been conducted on EEG signals proposing a binary BCI, based on the self-induced disgust produced by remembering an unpleasant odor. Since it has been shown in the literature that this emotion mainly involves the right hemisphere and in particular the T8 channel, the classification procedure is tested by using just T8, though the average accuracy is calculated and reported also for the whole set of the measured channels. The obtained

  10. A study on the development of a real-time intelligent system for ultrasonic flaw classification

    International Nuclear Information System (INIS)

    Song, Sung Jin; Kim, Hak Joon; Lee, Hyun; Lee, Seung Seok

    1998-01-01

    In spite of significant progress in research on ultrasonic pattern recognition, it is not widely used in practical field inspections of weldments. For the convenience of field application of this methodology, the following four key issues have to be suitably addressed: 1) a software where the ultrasonic pattern recognition algorithm is efficiently implemented, 2) a real-time ultrasonic testing system which can capture the digitized ultrasonic flaw signal so the pattern recognition software can be applied in a real-time fashion, 3) a database of ultrasonic flaw signals in weldments, which serves as a foundation of the ultrasonic pattern recognition algorithm, and finally, 4) ultrasonic features which should be invariant to operational variables of the ultrasonic test system. Presented here is the recent progress in the development of real-time ultrasonic flaw classification by the novel combination of the following: an intelligent software for ultrasonic flaw classification in weldments, a computer-based real-time ultrasonic nondestructive evaluation system, a database of ultrasonic flaw signals, and invariant ultrasonic features called 'normalized features.'

  11. Analysis of Time-Frequency EEG Feature Extraction Methods for Mental Task Classification

    Directory of Open Access Journals (Sweden)

    Caglar Uyulan

    2017-01-01

    Full Text Available Many endogenous and external components may affect the physiological, mental and behavioral states in humans. Monitoring tools are required to evaluate biomarkers, identify biological events, and predict their outcomes. Being one of the valuable indicators, brain biomarkers derived from temporal or spectral electroencephalography (EEG) signal processing allow for the classification of mental disorders and mental tasks. An EEG signal has a nonstationary nature and individual frequency features, hence it can be concluded that each subject has peculiar timing and data from which to extract unique features. In order to classify data, which were collected while performing four mental tasks (reciting the alphabet backwards, imagination of rotation of a cube, imagination of right hand movements (open/close) and performing mathematical operations), discriminative features were extracted using four competitive time-frequency techniques: Wavelet Packet Decomposition (WPD), Morlet Wavelet Transform (MWT), Short Time Fourier Transform (STFT) and Wavelet Filter Bank (WFB), respectively. The extracted features, using both time and frequency domain information, were then reduced using a principal component analysis for subset reduction. Finally, the reduced subsets were fed into a multi-layer perceptron neural network (MP-NN) trained with a back propagation (BP) algorithm to generate a predictive model. This study mainly focuses on comparing the relative performance of time-frequency feature extraction methods that are used to classify mental tasks. The real-time (RT) experimental results underlined that the WPD feature extraction method outperforms the three other aforementioned methods, with 92% classification accuracy for four different mental tasks.
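
    As one example of the compared pipelines (wavelet packet decomposition energies, PCA reduction, and a back-propagation MLP), the sketch below assumes Python with pywt and scikit-learn; the epoch length, the use of node energies as features and the network size are illustrative choices, not the study's exact settings.

```python
# Sketch: wavelet packet decomposition (WPD) node energies per epoch, PCA reduction, and a
# back-propagation MLP, mirroring one of the compared pipelines. Data are synthetic stand-ins.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def wpd_energy(signal, wavelet="db4", maxlevel=4):
    """Energy of each terminal wavelet-packet node as a feature vector."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=maxlevel)
    return np.array([np.sum(np.square(node.data)) for node in wp.get_level(maxlevel, order="freq")])

rng = np.random.default_rng(4)
n_trials, n_samples = 80, 512
epochs = rng.normal(size=(n_trials, n_samples))   # placeholder single-channel EEG epochs
labels = rng.integers(0, 4, size=n_trials)        # four mental tasks (placeholder labels)

X = np.array([wpd_energy(trial) for trial in epochs])
clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0))
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```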

  12. Hidden Markov Item Response Theory Models for Responses and Response Times.

    Science.gov (United States)

    Molenaar, Dylan; Oberski, Daniel; Vermunt, Jeroen; De Boeck, Paul

    2016-01-01

    Current approaches to model responses and response times to psychometric tests solely focus on between-subject differences in speed and ability. Within subjects, speed and ability are assumed to be constants. Violations of this assumption are generally absorbed in the residual of the model. As a result, within-subject departures from the between-subject speed and ability level remain undetected. These departures may be of interest to the researcher as they reflect differences in the response processes adopted on the items of a test. In this article, we propose a dynamic approach for responses and response times based on hidden Markov modeling to account for within-subject differences in responses and response times. A simulation study is conducted to demonstrate acceptable parameter recovery and acceptable performance of various fit indices in distinguishing between different models. In addition, both a confirmatory and an exploratory application are presented to demonstrate the practical value of the modeling approach.

  13. Artificial Neural Network classification of operator workload with an assessment of time variation and noise-enhancement to increase performance

    Directory of Open Access Journals (Sweden)

    Alexander James Casson

    2014-12-01

    Full Text Available Workload classification---the determination of whether a human operator is in a high or low workload state to allow their working environment to be optimized---is an emerging application of passive Brain-Computer Interface (BCI) systems. Practical systems must not only accurately detect the current workload state, but also have good temporal performance: requiring little time to set up and train the classifier, and ensuring that the reported performance level is consistent and predictable over time. This paper investigates the temporal performance of an Artificial Neural Network based classification system. For networks trained on little EEG data good classification accuracies (86%) are achieved over very short time frames, but substantial decreases in accuracy are found as the time gap between the network training and the actual use is increased. Noise-enhanced processing, where artificially generated noise is deliberately added to the testing signals, is investigated as a potential technique to mitigate this degradation without requiring the network to be re-trained using more data. Small stochastic resonance effects are demonstrated whereby the classification process gets better in the presence of more noise. The effect is small and does not eliminate the need for re-training, but it is consistent, and this is the first demonstration of such effects for non-evoked/free-running EEG signals suitable for passive BCI.
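
    The noise-enhancement experiment described above can be sketched as follows: a classifier trained on features from one session is tested on drifted features from a later session, with increasing amounts of artificial Gaussian noise added at test time. The data, the drift model and the network size are synthetic assumptions.

```python
# Sketch: test-time noise enhancement. A network trained on features from one session is evaluated on
# drifted features from a later session, with increasing Gaussian noise added to the test inputs.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
n, d = 300, 12
X_train = rng.normal(size=(n, d))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)   # synthetic "workload" label
X_test = rng.normal(size=(n, d)) + 0.4                      # simulated session-to-session drift
y_test = (X_test[:, 0] + X_test[:, 1] > 0.8).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_train, y_train)

for sigma in [0.0, 0.1, 0.2, 0.5, 1.0]:
    noisy = X_test + rng.normal(scale=sigma, size=X_test.shape)
    print(f"noise sigma = {sigma:.1f}  accuracy = {accuracy_score(y_test, clf.predict(noisy)):.3f}")
```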

  14. Response moderation models for conditional dependence between response time and response accuracy.

    Science.gov (United States)

    Bolsinova, Maria; Tijmstra, Jesper; Molenaar, Dylan

    2017-05-01

    It is becoming more feasible and common to register response times in the application of psychometric tests. Researchers thus have the opportunity to jointly model response accuracy and response time, which provides users with more relevant information. The most common choice is to use the hierarchical model (van der Linden, 2007, Psychometrika, 72, 287), which assumes conditional independence between response time and accuracy, given a person's speed and ability. However, this assumption may be violated in practice if, for example, persons vary their speed or differ in their response strategies, leading to conditional dependence between response time and accuracy and confounding measurement. We propose six nested hierarchical models for response time and accuracy that allow for conditional dependence, and discuss their relationship to existing models. Unlike existing approaches, the proposed hierarchical models allow for various forms of conditional dependence in the model and allow the effect of continuous residual response time on response accuracy to be item-specific, person-specific, or both. Estimation procedures for the models are proposed, as well as two information criteria that can be used for model selection. Parameter recovery and usefulness of the information criteria are investigated using simulation, indicating that the procedure works well and is likely to select the appropriate model. Two empirical applications are discussed to illustrate the different types of conditional dependence that may occur in practice and how these can be captured using the proposed hierarchical models. © 2016 The British Psychological Society.
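
    For readers who prefer the baseline in symbols, the following LaTeX sketch (notation chosen here for illustration, in the spirit of van der Linden's hierarchical framework) writes down the conditionally independent model for accuracy and log response time, and one simple way of letting the standardized residual log response time enter the success probability, as in the item-specific dependence discussed above.

```latex
% Sketch (requires amsmath); \sigma(\cdot) denotes the logistic function.
% Baseline hierarchical model: conditional independence given ability \theta_p and speed \zeta_p.
\begin{align}
  P(X_{pi}=1 \mid \theta_p) &= \sigma\bigl(a_i(\theta_p - b_i)\bigr), &
  \ln T_{pi} \mid \zeta_p &\sim \mathcal{N}\bigl(\lambda_i - \zeta_p,\ \sigma_i^2\bigr).
\end{align}
% One way to allow conditional dependence: the standardized residual log response time e_{pi}
% shifts the item intercept with an item-specific (or person-specific) effect \delta_i.
\begin{align}
  e_{pi} &= \frac{\ln T_{pi} - (\lambda_i - \zeta_p)}{\sigma_i}, &
  P(X_{pi}=1 \mid \theta_p, e_{pi}) &= \sigma\bigl(a_i(\theta_p - b_i) + \delta_i\, e_{pi}\bigr).
\end{align}
```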

  15. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    We propose a cluster based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases the accuracy at the same time. The test example is classified using a simpler and smaller model. The training examples in a particular cluster share the common vocabulary. At the time of clustering, we do not take into account the labels of the training examples. After the clusters have been created, the classifier is trained on each cluster having reduced dimensionality and a smaller number of examples. The experimental results show that the proposed model outperforms the existing classification models for the task of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups...
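
    A sketch of the cluster-then-classify idea with scikit-learn (k-means on TF-IDF vectors, one small classifier per cluster, test documents routed to the nearest cluster's classifier); the corpus call downloads the 20 Newsgroups data on first use, and the choice of k and of naive Bayes as the base classifier are illustrative, not the paper's exact setup.

```python
# Sketch: k-means clusters of TF-IDF vectors, one small classifier per cluster, test documents
# routed to their nearest cluster's classifier. Corpus, k and the base classifier are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

data = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos", "talk.politics.misc"])
txt_tr, txt_te, y_tr, y_te = train_test_split(data.data, data.target, test_size=0.3, random_state=0)

vec = TfidfVectorizer(max_features=5000, stop_words="english")
X_tr, X_te = vec.fit_transform(txt_tr), vec.transform(txt_te)

k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_tr)       # unsupervised: labels not used
models = {c: MultinomialNB().fit(X_tr[km.labels_ == c], y_tr[km.labels_ == c]) for c in range(k)}

clusters_te = km.predict(X_te)
y_pred = np.array([models[c].predict(X_te[i])[0] for i, c in enumerate(clusters_te)])
print("Accuracy:", accuracy_score(y_te, y_pred))
```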

  16. Sound insulation and reverberation time for classrooms - Criteria in regulations and classification schemes in the Nordic countries

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2016-01-01

    Acoustic regulations or guidelines for schools exist in all five Nordic countries. The acoustic criteria depend on room uses and deal with airborne and impact sound insulation, reverberation time, sound absorption, traffic noise, service equipment noise and other acoustic performance...... have become more extensive and stricter during the last two decades. The paper focuses on comparison of sound insulation and reverberation time criteria for classrooms in regulations and classification schemes in the Nordic countries. Limit values and changes over time will be discussed as well as how...... not identical. The national criteria for quality level C correspond to the national regulations or recommendations for new-build. The quality levels A and B are intended to define better acoustic performance than C, and D lower performance. Typically, acoustic regulations and classification criteria for schools...

  17. Creating high-resolution time series land-cover classifications in rapidly changing forested areas with BULC-U in Google Earth Engine

    Science.gov (United States)

    Cardille, J. A.; Lee, J.

    2017-12-01

    With the opening of the Landsat archive, there is a dramatically increased potential for creating high-quality time series of land use/land-cover (LULC) classifications derived from remote sensing. Although LULC time series are appealing, their creation is typically challenging in two fundamental ways. First, there is a need to create maximally correct LULC maps for consideration at each time step; and second, there is a need to have the elements of the time series be consistent with each other, without pixels that flip improbably between covers due only to unavoidable, stray classification errors. We have developed the Bayesian Updating of Land Cover - Unsupervised (BULC-U) algorithm to address these challenges simultaneously, and introduce and apply it here for two related but distinct purposes. First, with minimal human intervention, we produced an internally consistent, high-accuracy LULC time series in rapidly changing Mato Grosso, Brazil for a time interval (1986-2000) in which cropland area more than doubled. The spatial and temporal resolution of the 59 LULC snapshots allows users to witness the establishment of towns and farms at the expense of forest. The new time series could be used by policy-makers and analysts to unravel important considerations for conservation and management, including the timing and location of past development, the rate and nature of changes in forest connectivity, the connection with road infrastructure, and more. The second application of BULC-U is to sharpen the well-known GlobCover 2009 classification from 300m to 30m, while improving accuracy measures for every class. The greatly improved resolution and accuracy permits a better representation of the true LULC proportions, the use of this map in models, and quantification of the potential impacts of changes. Given that there may easily be thousands and potentially millions of images available to harvest for an LULC time series, it is imperative to build useful algorithms

  18. Tree Species Classification in Temperate Forests Using Formosat-2 Satellite Image Time Series

    Directory of Open Access Journals (Sweden)

    David Sheeren

    2016-09-01

    Full Text Available Mapping forest composition is a major concern for forest management, biodiversity assessment and for understanding the potential impacts of climate change on tree species distribution. In this study, the suitability of a dense high spatial resolution multispectral Formosat-2 satellite image time-series (SITS) to discriminate tree species in temperate forests is investigated. Based on a 17-date SITS acquired across one year, thirteen major tree species (8 broadleaves and 5 conifers) are classified in a study area of southwest France. The performance of parametric (GMM) and nonparametric (k-NN, RF, SVM) methods is compared at three class hierarchy levels for different versions of the SITS: (i) a smoothed noise-free version based on the Whittaker smoother; (ii) a non-smoothed cloudy version including all the dates; (iii) a non-smoothed noise-free version including only 14 dates. Noise refers to pixels contaminated by clouds and cloud shadows. The results of the 108 distinct classifications show a very high suitability of the SITS to identify the forest tree species based on phenological differences (average κ = 0.93, estimated by cross-validation based on 1235 field-collected plots). SVM is found to be the best classifier with very close results from the other classifiers. No clear benefit of removing noise by smoothing can be observed. Classification accuracy is even improved using the non-smoothed cloudy version of the SITS compared to the 14 cloud-free image time series. However, conclusions of the results need to be considered with caution because of possible overfitting. Disagreements also appear between the maps produced by the classifiers for complex mixed forests, suggesting a higher classification uncertainty in these contexts. Our findings suggest that time-series data can be a good alternative to hyperspectral data for mapping forest types. It also demonstrates the potential contribution of the recently launched Sentinel-2 satellite for
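
    The Whittaker smoother mentioned above balances fidelity to the observed per-pixel series against the roughness of the smoothed series; in weighted matrix form the solution is z = (W + λDᵀD)⁻¹W y, with D a difference operator and zero weights on cloudy dates. A minimal sketch (the weights, λ and the synthetic 17-date series are illustrative):

```python
# Sketch: weighted Whittaker smoother for one pixel's 17-date series; cloudy dates get zero weight.
# Solves (W + lam * D'D) z = W y with a second-order difference matrix D.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def whittaker_smooth(y, weights, lam=10.0):
    n = len(y)
    E = sparse.eye(n, format="csc")
    D = E[2:] - 2 * E[1:-1] + E[:-2]          # second-order difference operator, shape (n-2, n)
    W = sparse.diags(weights, 0, format="csc")
    return spsolve(W + lam * (D.T @ D), W @ y)

t = np.arange(17)                              # 17 acquisition dates, as in the Formosat-2 series
clean = 0.5 + 0.3 * np.sin(2 * np.pi * t / 17)
noisy = clean + np.random.default_rng(6).normal(0, 0.05, size=17)
w = np.ones(17)
w[[3, 9]] = 0.0                                # two dates flagged as cloudy
print(np.round(whittaker_smooth(noisy, w), 3))
```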

  19. SB certification handout material requirements, test methods, responsibilities, and minimum classification levels for mixture-based specification for flexible base.

    Science.gov (United States)

    2012-10-01

    A handout with tables presenting the material requirements, test methods, responsibilities, and minimum classification levels for the mixture-based specification for flexible base, and details on aggregate and test methods employed, along with agency and co...

  20. Understanding about the classification of pulp inflammation

    Directory of Open Access Journals (Sweden)

    Trijoedani Widodo

    2007-03-01

    Full Text Available Most authors now use the reversible pulpitis and irreversible pulpitis classification; however, many dentists still do not implement these new classifications. Research was made using a descriptive method by proposing a questionnaire to dentists from various dental clinics. The number of dentists participating in this research was 22. All respondents use a diagnosis sheet during their examinations of patients. Nonetheless, it could not be determined which diagnosis card was used, and most of the dentists are still using the old classification. Concerning responses given towards the new classification: a) the new classification had been heard of, but it was not clear (36.3%); b) the new classification had never been heard of at all (63.6%). Then, concerning whether a new development is important to follow up: a) those who think that information concerning new development is very important (27.2%); b) those who feel that it is important to have new information (68.3%); c) those who think that new information is not important (8%). It is concluded that information concerning the development of the classification of pulp inflammation did not reach the dentists.

  1. Supply chain planning classification

    Science.gov (United States)

    Hvolby, Hans-Henrik; Trienekens, Jacques; Bonde, Hans

    2001-10-01

    Industry experiences a need to shift focus from internal production planning towards planning in the supply network. In this respect customer oriented thinking becomes almost a common good amongst companies in the supply network. An increase in the use of information technology is needed to enable companies to better tune their production planning with customers and suppliers. Information technology opportunities and supply chain planning systems facilitate companies to monitor and control their supplier network. In spite of these developments, most links in today's supply chains make individual plans, because the real demand information is not available throughout the chain. The current systems and processes of the supply chains are not designed to meet the requirements now placed upon them. For long term relationships with suppliers and customers, an integrated decision-making process is needed in order to obtain a satisfactory result for all parties, especially when customized production and short lead times are in focus. An effective value chain makes inventory available and visible among the value chain members, minimizes response time and optimizes total inventory value held throughout the chain. In this paper a supply chain planning classification grid is presented based on current manufacturing classifications and supply chain planning initiatives.

  2. Yarn-dyed fabric defect classification based on convolutional neural network

    Science.gov (United States)

    Jing, Junfeng; Dong, Amei; Li, Pengfei; Zhang, Kaibing

    2017-09-01

    Considering that manual inspection of the yarn-dyed fabric can be time consuming and inefficient, we propose a yarn-dyed fabric defect classification method by using a convolutional neural network (CNN) based on a modified AlexNet. CNN shows powerful ability in performing feature extraction and fusion by simulating the learning mechanism of human brain. The local response normalization layers in AlexNet are replaced by the batch normalization layers, which can enhance both the computational efficiency and classification accuracy. In the training process of the network, the characteristics of the defect are extracted step by step and the essential features of the image can be obtained from the fusion of the edge details with several convolution operations. Then the max-pooling layers, the dropout layers, and the fully connected layers are employed in the classification model to reduce the computation cost and extract more precise features of the defective fabric. Finally, the results of the defect classification are predicted by the softmax function. The experimental results show promising performance with an acceptable average classification rate and strong robustness on yarn-dyed fabric defect classification.
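
    A sketch of the architectural change described above, written here in PyTorch: an AlexNet-style convolutional stage in which the local response normalization layers are replaced by batch normalization. Layer sizes, the number of defect classes and the classifier head are illustrative, not the authors' exact configuration.

```python
# Sketch (PyTorch): AlexNet-style convolutional stage with BatchNorm in place of local response
# normalization (LRN). Layer sizes and the number of defect classes are illustrative.
import torch
import torch.nn as nn

class FabricDefectNet(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
            nn.BatchNorm2d(64),                 # replaces nn.LocalResponseNorm
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2),
            nn.BatchNorm2d(192),                # replaces nn.LocalResponseNorm
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),
            nn.Linear(192 * 13 * 13, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, num_classes),        # softmax is applied by the loss / at inference
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = FabricDefectNet()(torch.randn(4, 3, 224, 224))
print(logits.shape)                             # torch.Size([4, 5])
```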

  3. 32 CFR 2001.21 - Original classification.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Original classification. 2001.21 Section 2001.21... Markings § 2001.21 Original classification. (a) Primary markings. At the time of original classification... authority. The name and position, or personal identifier, of the original classification authority shall...

  4. Falls classification using tri-axial accelerometers during the five-times-sit-to-stand test.

    Science.gov (United States)

    Doheny, Emer P; Walsh, Cathal; Foran, Timothy; Greene, Barry R; Fan, Chie Wei; Cunningham, Clodagh; Kenny, Rose Anne

    2013-09-01

    The five-times-sit-to-stand test (FTSS) is an established assessment of lower limb strength, balance dysfunction and falls risk. Clinically, the time taken to complete the task is recorded with longer times indicating increased falls risk. Quantifying the movement using tri-axial accelerometers may provide a more objective and potentially more accurate falls risk estimate. 39 older adults, 19 with a history of falls, performed four repetitions of the FTSS in their homes. A tri-axial accelerometer was attached to the lateral thigh and used to identify each sit-stand-sit phase and sit-stand and stand-sit transitions. A second tri-axial accelerometer, attached to the sternum, captured torso acceleration. The mean and variation of the root-mean-squared amplitude, jerk and spectral edge frequency of the acceleration during each section of the assessment were examined. The test-retest reliability of each feature was examined using intra-class correlation analysis, ICC(2,k). A model was developed to classify participants according to falls status. Only features with ICC>0.7 were considered during feature selection. Sequential forward feature selection within leave-one-out cross-validation resulted in a model including four reliable accelerometer-derived features, providing 74.4% classification accuracy, 80.0% specificity and 68.7% sensitivity. An alternative model using FTSS time alone resulted in significantly reduced classification performance. Results suggest that the described methodology could provide a robust and accurate falls risk assessment. Copyright © 2013 Elsevier B.V. All rights reserved.
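
    The three signal features named above (root-mean-squared amplitude, jerk and spectral edge frequency) can be computed per segment roughly as follows; the sampling rate and the 90% spectral-edge definition are assumptions for illustration, not necessarily those used in the study.

```python
# Sketch: RMS amplitude, jerk (RMS of the derivative) and spectral edge frequency for one axis of a
# tri-axial accelerometer segment. Sampling rate and the 90% edge definition are assumed.
import numpy as np

def segment_features(acc, fs=100.0, edge=0.90):
    rms = np.sqrt(np.mean(acc ** 2))                       # root-mean-squared amplitude
    jerk = np.sqrt(np.mean(np.diff(acc) ** 2)) * fs        # RMS of the first derivative
    freqs = np.fft.rfftfreq(len(acc), d=1.0 / fs)
    power = np.abs(np.fft.rfft(acc - acc.mean())) ** 2
    cumulative = np.cumsum(power) / np.sum(power)
    sef = freqs[np.searchsorted(cumulative, edge)]         # spectral edge frequency
    return rms, jerk, sef

rng = np.random.default_rng(7)
acc_z = np.sin(2 * np.pi * 1.5 * np.arange(0, 2, 0.01)) + 0.1 * rng.normal(size=200)
print(segment_features(acc_z))
```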

  5. Statistical Analysis of Categorical Time Series of Atmospheric Elementary Circulation Mechanisms - Dzerdzeevski Classification for the Northern Hemisphere.

    Science.gov (United States)

    Brenčič, Mihael

    2016-01-01

    Northern hemisphere elementary circulation mechanisms, defined with the Dzerdzeevski classification and published on a daily basis from 1899-2012, are analysed with statistical methods as continuous categorical time series. Classification consists of 41 elementary circulation mechanisms (ECM), which are assigned to calendar days. Empirical marginal probabilities of each ECM were determined. Seasonality and the periodicity effect were investigated with moving dispersion filters and randomisation procedure on the ECM categories as well as with the time analyses of the ECM mode. The time series were determined as being non-stationary with strong time-dependent trends. During the investigated period, periodicity interchanges with periods when no seasonality is present. In the time series structure, the strongest division is visible at the milestone of 1986, showing that the atmospheric circulation pattern reflected in the ECM has significantly changed. This change is result of the change in the frequency of ECM categories; before 1986, the appearance of ECM was more diverse, and afterwards fewer ECMs appear. The statistical approach applied to the categorical climatic time series opens up new potential insight into climate variability and change studies that have to be performed in the future.

  6. Model-based Clustering of Categorical Time Series with Multinomial Logit Classification

    Science.gov (United States)

    Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea

    2010-09-01

    A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance-measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership we present an extension to the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule by using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme representing a (full) Gibbs-type sampler which involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented which leads to an interesting segmentation of the Austrian labour market.

  7. A Procedure for Classification of Cup-Anemometers

    DEFF Research Database (Denmark)

    Friis Pedersen, Troels; Paulsen, Uwe Schmidt

    1997-01-01

    The paper proposes a classification procedure for cup-anemometers based on similar principles as for power converters. A range of operational parameters are established within which the response of the cup-anemometer is evaluated. The characteristics of real cup-anemometers are fitted to a realistic 3D cup-anemometer model. Afterwards, the model is used to calculate the response under the range of operational conditions which are set up for the classification. Responses are compared to the normal linear calibration relationship, derived from wind tunnel calibrations. Results of the 3D cup...

  8. Ionic classification of Xe laser lines: A new approach through time resolved spectroscopy

    International Nuclear Information System (INIS)

    Schinca, D.; Duchowicz, R.; Gallardo, M.

    1992-01-01

    Visible and UV laser emission from a highly ionized pulsed Xe plasma was studied in relation to the ionic assignment of the laser lines. Time-resolved spectroscopy was used to determine the ionic origin of the studied lines. The results are in agreement with an intensity versus pressure analysis performed over the same wavelength range. From the temporal behaviour of the spontaneous emission, a probable classification can be obtained. (author). 7 refs, 7 figs, 1 tab

  9. Classification of coronary artery bifurcation lesions and treatments: Time for a consensus!

    DEFF Research Database (Denmark)

    Louvard, Yves; Thomas, Martyn; Dzavik, Vladimir

    2007-01-01

    by intention to treat, it is necessary to clearly define which vessel is the distal main branch and which is (are) the side branch(es) and give each branch a distinct name. Each segment of the bifurcation has been named following the same pattern as the Medina classification. The classification......, heterogeneity, and inadequate description of techniques implemented. Methods: The aim is to propose a consensus established by the European Bifurcation Club (EBC), on the definition and classification of bifurcation lesions and treatments implemented with the purpose of allowing comparisons between techniques...... in various anatomical and clinical settings. Results: A bifurcation lesion is a coronary artery narrowing occurring adjacent to, and/or involving, the origin of a significant side branch. The simple lesion classification proposed by Medina has been adopted. To analyze the outcomes of different techniques...

  10. New technique for real-time distortion-invariant multiobject recognition and classification

    Science.gov (United States)

    Hong, Rutong; Li, Xiaoshun; Hong, En; Wang, Zuyi; Wei, Hongan

    2001-04-01

    A real-time hybrid distortion-invariant OPR system was established to perform 3D multiobject distortion-invariant automatic pattern recognition. A wavelet transform technique was used for digital preprocessing of the input scene, to depress the noisy background and enhance the recognized object. A three-layer backpropagation artificial neural network was used in correlation signal post-processing to perform multiobject distortion-invariant recognition and classification. The C-80 and NOA real-time processing ability and multithread programming technology were used to perform high speed parallel multitask processing and speed up the post-processing rate for ROIs. The reference filter library was constructed for the distortion versions of the 3D object model images based on the distortion parameter tolerances, measured as rotation, azimuth and scale. The real-time optical correlation recognition testing of this OPR system demonstrates that, using the preprocessing, post-processing, the nonlinear algorithm of optimum filtering, the RFL construction technique and the multithread programming technology, a high probability of recognition and a high recognition rate were obtained for the real-time multiobject distortion-invariant OPR system. The recognition reliability and rate were improved greatly. These techniques are very useful for automatic target recognition.

  11. Data Mining and Machine Learning in Time-Domain Discovery and Classification

    Science.gov (United States)

    Bloom, Joshua S.; Richards, Joseph W.

    2012-03-01

    -domain aspect of the data and the objects of interest presents some unique challenges. First, any collection, storage, transport, and computational framework for processing the streaming data must be able to keep up with the dataflow. This is not necessarily true, for instance, with static sky science, where metrics of interest can be computed off-line and on a timescale much longer than the time required to obtain the data. Second, many types of transient (one-off) events evolve quickly in time and require more observations to fully understand the nature of the events. This demands that time-changing events are quickly discovered, classified, and broadcast to other follow-up facilities. All of this must happen robustly with, in some cases, very limited data. Last, the process of discovery and classification must be calibrated to the available resources for computation and follow-up. That is, the precision of classification must be weighed against the computational cost of producing that level of precision. Likewise, the cost of being wrong about the classification of some sorts of sources must be balanced against the scientific gains about being right about the classification of other types of sources. Quantifying these trade-offs, especially in the presence of a limited amount of follow-up resources (such as the availability of larger telescope observations) is not straightforward and inheres domain-specific imperatives that will, in general, differ from astronomer to astronomer. This chapter presents an overview of the current directions in ML and data-mining techniques in the context of time-domain astronomy. Ultimately the goal - if not just the necessity given the data rates and the diversity of questions to be answered - is to abstract the traditional role of astronomer in the entire scientific process. In some sense, this takes us full circle from the pre modern view of the scientific pursuit presented in Vermeer's "The Astronomer" (Figure 6.2): in broad daylight, he

  12. Phase information of time-frequency transforms as a key feature for classification of atrial fibrillation episodes

    International Nuclear Information System (INIS)

    Ortigosa, Nuria; Fernández, Carmen; Galbis, Antonio; Cano, Óscar

    2015-01-01

    Patients suffering from atrial fibrillation can be classified into different subtypes, according to the temporal pattern of the arrhythmia and its recurrence. Nowadays, clinicians cannot differentiate a priori between the different subtypes, and patient classification is done afterwards, when its clinical course is available. In this paper we present a comparison of classification performances when differentiating paroxysmal and persistent atrial fibrillation episodes by means of support vector machines. We analyze short surface electrocardiogram recordings by extracting modulus and phase features from several time-frequency transforms: short-time Fourier transform, Wigner–Ville, Choi–Williams, Stockwell transform, and general Fourier-family transform. Overall, accuracy higher than 81% is obtained when classifying phase information features of real test ECGs from a heterogeneous cohort of patients (in terms of progression of the arrhythmia and antiarrhythmic treatment) recorded in a tertiary center. Therefore, phase features can facilitate the clinicians’ choice of the most appropriate treatment for each patient by means of a non-invasive technique (the surface ECG). (paper)
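
    A sketch of the phase-feature idea for one of the transforms (the short-time Fourier transform, via scipy): extract the phase of the time-frequency coefficients of each short ECG strip, summarize it per frequency bin, and classify with an SVM. Signal lengths, the summary statistics and the labels are synthetic placeholders for the study's data.

```python
# Sketch: phase features from the short-time Fourier transform of short ECG strips, classified with an
# SVM. Sampling rate, strip length, summary statistics and labels are synthetic placeholders.
import numpy as np
from scipy.signal import stft
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

fs = 250.0
rng = np.random.default_rng(8)
n_records, n_samples = 60, int(8 * fs)                  # 8-second surface ECG strips (assumed)
ecg = rng.normal(size=(n_records, n_samples))           # placeholder signals
labels = rng.integers(0, 2, size=n_records)             # 0 = paroxysmal, 1 = persistent (placeholder)

def phase_features(x):
    _, _, Z = stft(x, fs=fs, nperseg=256)
    phase = np.angle(Z)                                  # phase of the time-frequency coefficients
    return np.concatenate([phase.mean(axis=1), phase.std(axis=1)])   # per-bin summaries

X = np.array([phase_features(x) for x in ecg])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```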

  13. The effect of time on EMG classification of hand motions in able-bodied and transradial amputees

    DEFF Research Database (Denmark)

    Waris, Asim; Niazi, Imran Khan; Jamil, Mohsin

    2018-01-01

    While several studies have demonstrated the short-term performance of pattern recognition systems, long-term investigations are very limited. In this study, we investigated changes in classification performance over time. Ten able-bodied individuals and six amputees took part in this study. EMG s... difference between training and testing day increases. Furthermore, for iEMG, performance in amputees was directly proportional to the size of the residual limb. ... EMG... was computed for all possible combinations between the days. For all subjects, surface sEMG (7.2 ± 7.6%), iEMG (11.9 ± 9.1%) and cEMG (4.6 ± 4.8%) were significantly different (P

  14. Real-time classification and sensor fusion with a spiking deep belief network.

    Science.gov (United States)

    O'Connor, Peter; Neil, Daniel; Liu, Shih-Chii; Delbruck, Tobi; Pfeiffer, Michael

    2013-01-01

    Deep Belief Networks (DBNs) have recently shown impressive performance on a broad range of classification problems. Their generative properties allow better understanding of the performance, and provide a simpler solution for sensor fusion tasks. However, because of their inherent need for feedback and parallel update of large numbers of units, DBNs are expensive to implement on serial computers. This paper proposes a method based on the Siegert approximation for Integrate-and-Fire neurons to map an offline-trained DBN onto an efficient event-driven spiking neural network suitable for hardware implementation. The method is demonstrated in simulation and by a real-time implementation of a 3-layer network with 2694 neurons used for visual classification of MNIST handwritten digits with input from a 128 × 128 Dynamic Vision Sensor (DVS) silicon retina, and sensory-fusion using additional input from a 64-channel AER-EAR silicon cochlea. The system is implemented through the open-source software in the jAER project and runs in real-time on a laptop computer. It is demonstrated that the system can recognize digits in the presence of distractions, noise, scaling, translation and rotation, and that the degradation of recognition performance by using an event-based approach is less than 1%. Recognition is achieved in an average of 5.8 ms after the onset of the presentation of a digit. By cue integration from both silicon retina and cochlea outputs we show that the system can be biased to select the correct digit from otherwise ambiguous input.

  15. Real-time ultrasound image classification for spine anesthesia using local directional Hadamard features.

    Science.gov (United States)

    Pesteie, Mehran; Abolmaesumi, Purang; Ashab, Hussam Al-Deen; Lessoway, Victoria A; Massey, Simon; Gunka, Vit; Rohling, Robert N

    2015-06-01

    Injection therapy is a commonly used solution for back pain management. This procedure typically involves percutaneous insertion of a needle between or around the vertebrae, to deliver anesthetics near nerve bundles. Most frequently, spinal injections are performed either blindly using palpation or under the guidance of fluoroscopy or computed tomography. Recently, due to the drawbacks of the ionizing radiation of such imaging modalities, there has been a growing interest in using ultrasound imaging as an alternative. However, the complex spinal anatomy with different wave-like structures, affected by speckle noise, makes the accurate identification of the appropriate injection plane difficult. The aim of this study was to propose an automated system that can identify the optimal plane for epidural steroid injections and facet joint injections. A multi-scale and multi-directional feature extraction system to provide automated identification of the appropriate plane is proposed. Local Hadamard coefficients are obtained using the sequency-ordered Hadamard transform at multiple scales. Directional features are extracted from local coefficients which correspond to different regions in the ultrasound images. An artificial neural network is trained based on the local directional Hadamard features for classification. The proposed method yields distinctive features for classification which successfully classified 1032 images out of 1090 for epidural steroid injection and 990 images out of 1052 for facet joint injection. In order to validate the proposed method, a leave-one-out cross-validation was performed. The average classification accuracy for leave-one-out validation was 94 % for epidural and 90 % for facet joint targets. Also, the feature extraction time for the proposed method was 20 ms for a native 2D ultrasound image. A real-time machine learning system based on the local directional Hadamard features extracted by the sequency-ordered Hadamard transform for
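
    A stripped-down version of the feature idea is sketched below: sequency-ordered Hadamard coefficients are computed for local image patches and fed to a small neural network. The patch size, normalization, and random stand-in images are assumptions, and the sketch omits the clinical preprocessing described above.

```python
# Minimal sketch: local sequency-ordered Hadamard features from image patches
# fed to a small neural network classifier. Illustrative, not the clinical pipeline.
import numpy as np
from scipy.linalg import hadamard
from sklearn.neural_network import MLPClassifier

def sequency_hadamard(n):
    H = hadamard(n)
    sign_changes = (np.diff(np.sign(H), axis=1) != 0).sum(axis=1)
    return H[np.argsort(sign_changes)]          # reorder rows by sequency

def patch_features(img, patch=8):
    H = sequency_hadamard(patch)
    feats = []
    for r in range(0, img.shape[0] - patch + 1, patch):
        for c in range(0, img.shape[1] - patch + 1, patch):
            block = img[r:r + patch, c:c + patch]
            coeffs = H @ block @ H.T / patch     # 2-D Hadamard coefficients
            feats.append(np.abs(coeffs).ravel())
    return np.concatenate(feats)

# hypothetical toy data standing in for ultrasound frames
rng = np.random.default_rng(2)
imgs = rng.random((60, 64, 64))
y = rng.integers(0, 2, size=60)                  # target injection plane vs. not
X = np.array([patch_features(im) for im in imgs])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
print(clf.score(X, y))
```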

  16. Featureless classification of light curves

    Science.gov (United States)

    Kügler, S. D.; Gianniotis, N.; Polsterer, K. L.

    2015-08-01

    In the era of rapidly increasing amounts of time series data, classification of variable objects has become the main objective of time-domain astronomy. Classification of irregularly sampled time series is particularly difficult because the data cannot be represented naturally as a vector which can be directly fed into a classifier. In the literature, various statistical features serve as vector representations. In this work, we represent time series by a density model. The density model captures all the information available, including measurement errors. Hence, we view this model as a generalization of the static features which can be derived directly, e.g. as moments, from the density. Similarity between each pair of time series is quantified by the distance between their respective models. Classification is performed on the obtained distance matrix. In the numerical experiments, we use data from the OGLE (Optical Gravitational Lensing Experiment) and ASAS (All Sky Automated Survey) surveys and demonstrate that the proposed representation performs on a par with the best currently used feature-based approaches. The density representation preserves all static information present in the observational data, in contrast to a less complete description by features. The density representation is an upper boundary in terms of information made available to the classifier. Consequently, the predictive power of the proposed classification depends only on the choice of similarity measure and classifier. Due to its principled nature, we advocate that this new approach of representing time series has potential in tasks beyond classification, e.g. unsupervised learning.
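
    The density-model idea can be sketched as follows, with a Gaussian kernel density estimate standing in for the paper's density model and a k-nearest-neighbour classifier operating on the pairwise model distances; the evaluation grid, bandwidth, and toy light curves are illustrative assumptions.

```python
# Minimal sketch: represent each irregularly sampled light curve by a density
# model, compare models pairwise, and classify on the distance matrix.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.neighbors import KNeighborsClassifier

def density_on_grid(times, mags, grid):
    kde = gaussian_kde(np.vstack([times, mags]))
    return kde(grid)                              # density evaluated on a fixed grid

rng = np.random.default_rng(3)
curves = [(np.sort(rng.uniform(0, 1, 80)), rng.normal(0, 1, 80)) for _ in range(30)]
labels = rng.integers(0, 2, size=30)              # hypothetical variability classes

gx, gy = np.meshgrid(np.linspace(0, 1, 20), np.linspace(-3, 3, 20))
grid = np.vstack([gx.ravel(), gy.ravel()])
densities = np.array([density_on_grid(t, m, grid) for t, m in curves])

# pairwise L2 distances between the density models
D = np.linalg.norm(densities[:, None, :] - densities[None, :, :], axis=-1)
clf = KNeighborsClassifier(n_neighbors=3, metric="precomputed").fit(D, labels)
print(clf.score(D, labels))                       # training-set score, for brevity
```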

  17. Asynchronous data-driven classification of weapon systems

    International Nuclear Information System (INIS)

    Jin, Xin; Mukherjee, Kushal; Gupta, Shalabh; Ray, Asok; Phoha, Shashi; Damarla, Thyagaraju

    2009-01-01

    This communication addresses real-time weapon classification by analysis of asynchronous acoustic data, collected from microphones on a sensor network. The weapon classification algorithm consists of two parts: (i) feature extraction from time-series data using symbolic dynamic filtering (SDF), and (ii) pattern classification based on the extracted features using the language measure (LM) and support vector machine (SVM). The proposed algorithm has been tested on field data, generated by firing of two types of rifles. The results of analysis demonstrate high accuracy and fast execution of the pattern classification algorithm with low memory requirements. Potential applications include simultaneous shooter localization and weapon classification with soldier-wearable networked sensors. (rapid communication)
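
    A minimal sketch of SDF-style feature extraction is given below: each signal is symbolized by amplitude partitioning, a symbol-transition probability matrix is estimated, and its flattened entries are classified with an SVM. The partitioning scheme and synthetic signals are assumptions; the fielded system and the language-measure step are not reproduced.

```python
# Minimal sketch of symbolic dynamic filtering (SDF) style features: symbolize
# each acoustic time series, estimate symbol-transition probabilities, and
# classify the flattened matrix with an SVM.
import numpy as np
from sklearn.svm import SVC

def sdf_features(x, n_symbols=8):
    # partition the amplitude range into equal-probability bins (symbols)
    edges = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
    s = np.digitize(x, edges)
    P = np.zeros((n_symbols, n_symbols))
    for a, b in zip(s[:-1], s[1:]):
        P[a, b] += 1
    P /= max(P.sum(), 1)                     # transition probability matrix
    return P.ravel()

rng = np.random.default_rng(4)
signals = [rng.standard_normal(4000) for _ in range(50)]   # stand-in acoustic data
y = rng.integers(0, 2, size=50)                            # rifle type A vs. B
X = np.array([sdf_features(s) for s in signals])
print(SVC(kernel="linear").fit(X, y).score(X, y))
```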

  18. GLOBAL LAND COVER CLASSIFICATION USING MODIS SURFACE REFLECTANCE PRODUCTS

    Directory of Open Access Journals (Sweden)

    K. Fukue

    2016-06-01

    Full Text Available The objective of this study is to develop a high-accuracy land cover classification algorithm for the global scale by using multi-temporal MODIS land reflectance products. In this study, a time-domain co-occurrence matrix was introduced as a classification feature which provides the time-series signature of land covers. Further, a non-parametric minimum distance classifier was introduced for the time-domain co-occurrence matrix, which performs multi-dimensional pattern matching between the time-domain co-occurrence matrices of a classification target pixel and each classification class. The global land cover classification experiments were conducted by applying the proposed classification method using 46 multi-temporal (within one year) SR (Surface Reflectance) and NBAR (Nadir BRDF-Adjusted Reflectance) products, respectively. The IGBP 17 land cover categories were used in our classification experiments. As a result, the SR and NBAR products showed a similar classification accuracy of 99%.

  19. 14 CFR 1203.407 - Duration of classification.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Duration of classification. 1203.407... PROGRAM Guides for Original Classification § 1203.407 Duration of classification. (a) Information shall be... date or event for declassification shall be set by the original classification authority at the time...

  20. Artificial neural network classification using a minimal training set - Comparison to conventional supervised classification

    Science.gov (United States)

    Hepner, George F.; Logan, Thomas; Ritter, Niles; Bryant, Nevin

    1990-01-01

    Recent research has shown an artificial neural network (ANN) to be capable of pattern recognition and the classification of image data. This paper examines the potential for the application of neural network computing to satellite image processing. A second objective is to provide a preliminary comparison of conventional and ANN classification. An artificial neural network can be trained to do land-cover classification of satellite imagery using selected sites representative of each class in a manner similar to conventional supervised classification. One of the major problems associated with recognition and classification of patterns from remotely sensed data is the time and cost of developing a set of training sites. This research compares the use of an ANN back-propagation classification procedure with a conventional supervised maximum likelihood classification procedure using a minimal training set. When using a minimal training set, the neural network is able to provide a land-cover classification superior to the classification derived from the conventional classification procedure. This research is the foundation for developing application parameters for further prototyping of software and hardware implementations for artificial neural networks in satellite image and geographic information processing.

  1. Classification and description of world formation types

    Science.gov (United States)

    D. Faber-Langendoen; T. Keeler-Wolf; D. Meidinger; C. Josse; A. Weakley; D. Tart; G. Navarro; B. Hoagland; S. Ponomarenko; G. Fults; Eileen Helmer

    2016-01-01

    An ecological vegetation classification approach has been developed in which a combination of vegetation attributes (physiognomy, structure, and floristics) and their response to ecological and biogeographic factors are used as the basis for classifying vegetation types. This approach can help support international, national, and subnational classification efforts. The...

  2. Feature Selection as a Time and Cost-Saving Approach for Land Suitability Classification (Case Study of Shavur Plain, Iran)

    Directory of Open Access Journals (Sweden)

    Saeid Hamzeh

    2016-10-01

    Full Text Available Land suitability classification is important in planning and managing sustainable land use. Most approaches to land suitability analysis combine a large number of land and soil parameters, and are time-consuming and costly. In this study, a potentially useful technique (combined feature selection and fuzzy-AHP method) to increase the efficiency of land suitability analysis is presented. To this end, three different feature selection algorithms (random search, best search and genetic methods) were used to determine the most effective parameters for land suitability classification for the cultivation of barley in the Shavur Plain, southwest Iran. Next, land suitability classes were calculated for all methods by using the fuzzy-AHP approach. Salinity (electrical conductivity, EC), alkalinity (exchangeable sodium percentage, ESP), wetness and soil texture were selected using the random search method. Gypsum, EC, ESP, and soil texture were selected using both the best search and genetic methods. The results show strong agreement between the standard fuzzy-AHP method and the methods presented in this study. The Kappa coefficients were 0.82, 0.79 and 0.79 for the random search, best search and genetic methods, respectively, compared with the standard fuzzy-AHP method. Our results indicate that EC, ESP, soil texture and wetness are the most effective features for land suitability classification for the cultivation of barley in the study area, and that using these parameters, together with their appropriate weights as obtained from fuzzy-AHP, can produce good results for land suitability classification. The combined feature selection and fuzzy-AHP approach presented here therefore has the potential to save time and money in land suitability classification.

  3. Artificial intelligence in label-free microscopy biological cell classification by time stretch

    CERN Document Server

    Mahjoubfar, Ata; Jalali, Bahram

    2017-01-01

    This book introduces time-stretch quantitative phase imaging (TS-QPI), a high-throughput label-free imaging flow cytometer developed for big data acquisition and analysis in phenotypic screening. TS-QPI is able to capture quantitative optical phase and intensity images simultaneously, enabling high-content cell analysis, cancer diagnostics, personalized genomics, and drug development. The authors also demonstrate a complete machine learning pipeline that performs optical phase measurement, image processing, feature extraction, and classification, enabling high-throughput quantitative imaging that achieves record high accuracy in label-free cellular phenotypic screening and opens up a new path to data-driven diagnosis. • Demonstrates how machine learning is used in high-speed microscopy imaging to facilitate medical diagnosis; • Provides a systematic and comprehensive illustration of time stretch technology; • Enables multidisciplinary application, including industrial, biomedical, and artificial intell...

  4. 47 CFR 64.2345 - Primary advertising classification.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Primary advertising classification. 64.2345 Section 64.2345 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES... advertising classification. A primary advertising classification is assigned at the time of the establishment...

  5. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling.

    Science.gov (United States)

    Zhou, Fuqun; Zhang, Aining

    2016-10-25

    Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA, and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle the two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites with the help of Random Forests' features: variable importance, outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about a half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.
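
    The variable-selection step can be sketched with scikit-learn's Random Forest feature importances; the toy feature matrix standing in for the 10-day composites and the choice to keep the top half of the variables are illustrative assumptions.

```python
# Minimal sketch: rank time-series MODIS variables with Random Forest feature
# importances and keep roughly the top half, as in the band-subset idea above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
X = rng.random((500, 36))            # e.g., 36 ten-day composites per pixel (toy data)
y = rng.integers(0, 5, size=500)     # land cover classes

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
order = np.argsort(rf.feature_importances_)[::-1]
top_half = order[: X.shape[1] // 2]

rf_small = RandomForestClassifier(n_estimators=200, random_state=0)
rf_small.fit(X[:, top_half], y)      # reduced model trained on the selected composites
print("selected composites:", sorted(top_half.tolist()))
```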

  6. Real-time network traffic classification technique for wireless local area networks based on compressed sensing

    Science.gov (United States)

    Balouchestani, Mohammadreza

    2017-05-01

    Network traffic or data traffic in a Wireless Local Area Network (WLAN) is the amount of network packets moving across a wireless network from each wireless node to another wireless node, which provide the load of sampling in a wireless network. WLAN's Network traffic is the main component for network traffic measurement, network traffic control and simulation. Traffic classification technique is an essential tool for improving the Quality of Service (QoS) in different wireless networks in the complex applications such as local area networks, wireless local area networks, wireless personal area networks, wireless metropolitan area networks, and wide area networks. Network traffic classification is also an essential component in the products for QoS control in different wireless network systems and applications. Classifying network traffic in a WLAN allows to see what kinds of traffic we have in each part of the network, organize the various kinds of network traffic in each path into different classes in each path, and generate network traffic matrix in order to Identify and organize network traffic which is an important key for improving the QoS feature. To achieve effective network traffic classification, Real-time Network Traffic Classification (RNTC) algorithm for WLANs based on Compressed Sensing (CS) is presented in this paper. The fundamental goal of this algorithm is to solve difficult wireless network management problems. The proposed architecture allows reducing False Detection Rate (FDR) to 25% and Packet Delay (PD) to 15 %. The proposed architecture is also increased 10 % accuracy of wireless transmission, which provides a good background for establishing high quality wireless local area networks.

  7. Real-time classification of signals from three-component seismic sensors using neural nets

    Science.gov (United States)

    Bowman, B. C.; Dowla, F.

    1992-05-01

    Adaptive seismic data acquisition systems with capabilities of signal discrimination and event classification are important in treaty monitoring, proliferation, and earthquake early detection systems. Potential applications include monitoring underground chemical explosions, as well as other military, cultural, and natural activities where characteristics of signals change rapidly and without warning. In these applications, the ability to detect and interpret events rapidly without falling behind the influx of data is critical. We developed a system for real-time data acquisition, analysis, learning, and classification of recorded events employing some of the latest technology in computer hardware, software, and artificial neural network methods. The system is able to train dynamically, and updates its knowledge based on new data. The software is modular and hardware-independent; i.e., the front-end instrumentation is transparent to the analysis system. The software is designed to take advantage of the multiprocessing environment of the Unix operating system. The Unix System V shared memory and static RAM protocols for data access and the semaphore mechanism for interprocess communications were used. As the three-component sensor detects a seismic signal, it is displayed graphically on a color monitor using X11/Xlib graphics with interactive screening capabilities. For interesting events, the triaxial signal polarization is computed, a fast Fourier Transform (FFT) algorithm is applied, and the normalized power spectrum is transmitted to a backpropagation neural network for event classification. The system is currently capable of handling three data channels with a sampling rate of 500 Hz, which covers the bandwidth of most seismic events. The system has been tested in a laboratory setting with artificial events generated in the vicinity of a three-component sensor.
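
    The classification back end (normalized power spectrum of a three-component event fed to a backpropagation network) can be sketched as below; the acquisition, polarization analysis, and shared-memory machinery are omitted, and the event data are random placeholders.

```python
# Minimal sketch of the back-end idea: normalized power spectrum of a detected
# three-component event fed to a small backpropagation network.
import numpy as np
from sklearn.neural_network import MLPClassifier

def spectrum_features(event, fs=500):
    """event: array of shape (3, n) for a three-component sensor."""
    spec = np.abs(np.fft.rfft(event, axis=1)) ** 2
    spec /= spec.sum(axis=1, keepdims=True)       # normalized power spectrum per channel
    return spec.ravel()

rng = np.random.default_rng(6)
events = rng.standard_normal((80, 3, 1024))        # toy detected events
y = rng.integers(0, 3, size=80)                    # hypothetical event classes
X = np.array([spectrum_features(e) for e in events])
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)
print(clf.score(X, y))
```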

  8. Hydrologic-Process-Based Soil Texture Classifications for Improved Visualization of Landscape Function

    Science.gov (United States)

    Groenendyk, Derek G.; Ferré, Ty P.A.; Thorp, Kelly R.; Rice, Amy K.

    2015-01-01

    Soils lie at the interface between the atmosphere and the subsurface and are a key component that control ecosystem services, food production, and many other processes at the Earth’s surface. There is a long-established convention for identifying and mapping soils by texture. These readily available, georeferenced soil maps and databases are used widely in environmental sciences. Here, we show that these traditional soil classifications can be inappropriate, contributing to bias and uncertainty in applications from slope stability to water resource management. We suggest a new approach to soil classification, with a detailed example from the science of hydrology. Hydrologic simulations based on common meteorological conditions were performed using HYDRUS-1D, spanning textures identified by the United States Department of Agriculture soil texture triangle. We consider these common conditions to be: drainage from saturation, infiltration onto a drained soil, and combined infiltration and drainage events. Using a k-means clustering algorithm, we created soil classifications based on the modeled hydrologic responses of these soils. The hydrologic-process-based classifications were compared to those based on soil texture and a single hydraulic property, Ks. Differences in classifications based on hydrologic response versus soil texture demonstrate that traditional soil texture classification is a poor predictor of hydrologic response. We then developed a QGIS plugin to construct soil maps combining a classification with georeferenced soil data from the Natural Resource Conservation Service. The spatial patterns of hydrologic response were more immediately informative, much simpler, and less ambiguous, for use in applications ranging from trafficability to irrigation management to flood control. The ease with which hydrologic-process-based classifications can be made, along with the improved quantitative predictions of soil responses and visualization of landscape
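
    The clustering step can be sketched as follows, with random vectors standing in for the HYDRUS-1D response curves; the number of clusters and the standardization are illustrative assumptions rather than the study's settings.

```python
# Minimal sketch: cluster soils by their simulated hydrologic response curves
# (random stand-ins for HYDRUS-1D output here) rather than by texture.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
# rows: soils spanning the texture triangle; columns: simulated drainage and
# infiltration time series concatenated into one response vector (toy data)
responses = rng.random((120, 60))

Z = StandardScaler().fit_transform(responses)
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(Z)
print(np.bincount(labels))            # size of each hydrologic-response class
```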

  9. Reverberation time in class rooms – Comparison of regulations and classification criteria in the Nordic countries

    DEFF Research Database (Denmark)

    Rasmussen, Birgit; Brunskog, Jonas; Hoffmeyer, Dan

    2012-01-01

    Regulatory requirements or guidelines for classroom reverberation time exist in all five Nordic countries and in most of Europe – as well as other acoustic criteria for schools, e.g. concerning airborne and impact sound insulation, facade sound insulation and installation noise. There are several reasons for having such requirements: improving learning efficiency for pupils and work conditions for teachers and reducing noise levels, thus increasing comfort for everyone. Instead of including acoustic regulatory requirements for schools directly in the building regulations, Iceland, Norway and Sweden have introduced acoustic quality classes A, B, C and D in national standards, with class C referred to as the regulatory requirement. These national classification standards deal with acoustic classes for several types of buildings. A classification scheme also exists in Finland...

  10. Response time patterns in a stated choice experiment

    DEFF Research Database (Denmark)

    Börjesson, Maria; Fosgerau, Mogens

    2015-01-01

    This paper studies how response times vary between unlabelled binary choice occasions in a stated choice (SC) experiment, with alternatives differing with respect to in-vehicle travel time and travel cost. The pattern of response times is interpreted as an indicator of the cognitive processes employed by the respondents when making their choices. We find clear signs of reference-dependence in response times in the form of a strong gain–loss asymmetry. Moreover, different patterns of response times for travel time and travel cost indicate that these attributes are processed in different ways...

  11. Hydrological Climate Classification: Can We Improve on Köppen-Geiger?

    Science.gov (United States)

    Knoben, W.; Woods, R. A.; Freer, J. E.

    2017-12-01

    Classification is essential in the study of complex natural systems, yet hydrology so far has no formal way to structure the climate forcing which underlies hydrologic response. Various climate classification systems can be borrowed from other disciplines but these are based on different organizing principles than a hydrological classification might use. From gridded global data we calculate a gridded aridity index, an aridity seasonality index and a rain-vs-snow index, which we use to cluster global locations into climate groups. We then define the membership degree of nearly 1100 catchments to each of our climate groups based on each catchment's climate and investigate the extent to which streamflow responses within each climate group are similar. We compare this climate classification approach with the often-used Köppen-Geiger classification, using statistical tests based on streamflow signature values. We find that three climate indices are sufficient to distinguish 18 different climate types world-wide. Climates tend to change gradually in space and catchments can thus belong to multiple climate groups, albeit with different degrees of membership. Streamflow responses within a climate group tend to be similar, regardless of the catchments' geographical proximity. A Wilcoxon two-sample test based on streamflow signature values for each climate group shows that the new classification can distinguish different flow regimes using this classification scheme. The Köppen-Geiger approach uses 29 climate classes but is less able to differentiate streamflow regimes. Climate forcing exerts a strong control on typical hydrologic response and both change gradually in space. This makes arbitrary hard boundaries in any classification scheme difficult to defend. Any hydrological classification should thus acknowledge these gradual changes in forcing. Catchment characteristics (soil or vegetation type, land use, etc) can vary more quickly in space than climate does, which

  12. 28 CFR 524.76 - Appeals of CIM classification.

    Science.gov (United States)

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Appeals of CIM classification. 524.76..., CLASSIFICATION, AND TRANSFER CLASSIFICATION OF INMATES Central Inmate Monitoring (CIM) System § 524.76 Appeals of CIM classification. An inmate may at any time appeal (through the Administrative Remedy Program) the...

  13. Assessing nonchoosers' eyewitness identification accuracy from photographic showups by using confidence and response times.

    Science.gov (United States)

    Sauerland, Melanie; Sagana, Anna; Sporer, Siegfried L

    2012-10-01

    While recent research has shown that the accuracy of positive identification decisions can be assessed via confidence and decision times, gauging lineup rejections has been less successful. The current study focused on 2 different aspects which are inherent in lineup rejections. First, we hypothesized that decision times and confidence ratings should be postdictive of identification rejections if they refer to a single lineup member only. Second, we hypothesized that dividing nonchoosers according to the reasons they provided for their decisions can serve as a useful postdictor for nonchoosers' accuracy. To test these assumptions, we used (1) 1-person lineups (showups) in order to obtain confidence and response time measures referring to a single lineup member, and (2) asked nonchoosers about their reasons for making a rejection. Three hundred and eighty-four participants were asked to identify 2 different persons after watching 1 of 2 stimulus films. The results supported our hypotheses. Nonchoosers' postdecision confidence ratings were well-calibrated. Likewise, we successfully established optimum time and confidence boundaries for nonchoosers. Finally, combinations of postdictors increased the number of accurate classifications compared with individual postdictors. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  14. Non-linear dynamical classification of short time series of the Rössler system in high noise regimes.

    Science.gov (United States)

    Lainscsek, Claudia; Weyhenmeyer, Jonathan; Hernandez, Manuel E; Poizner, Howard; Sejnowski, Terrence J

    2013-01-01

    Time series analysis with delay differential equations (DDEs) reveals non-linear properties of the underlying dynamical system and can serve as a non-linear time-domain classification tool. Here global DDE models were used to analyze short segments of simulated time series from a known dynamical system, the Rössler system, in high noise regimes. In a companion paper, we apply the DDE model developed here to classify short segments of electroencephalographic (EEG) data recorded from patients with Parkinson's disease and healthy subjects. Nine simulated subjects in each of two distinct classes were generated by varying the bifurcation parameter b and keeping the other two parameters (a and c) of the Rössler system fixed. All choices of b were in the chaotic parameter range. We diluted the simulated data using white noise at signal-to-noise ratios (SNR) ranging from 10 to -30 dB. Structure selection was supervised by selecting the number of terms, delays, and order of non-linearity of the DDE model that best linearly separated the two classes of data. The distance d from the linear dividing hyperplane was then used to assess the classification performance by computing the area A' under the ROC curve. The selected model was tested on untrained data using repeated random sub-sampling validation. DDEs were able to accurately distinguish the two dynamical conditions and, moreover, to quantify the changes in the dynamics. There was a significant correlation between the dynamical bifurcation parameter b of the simulated data and the classification parameter d from our analysis. This correlation still held for new simulated subjects with new dynamical parameters selected from each of the two dynamical regimes. Furthermore, the correlation was robust to added noise, being significant even when the noise was greater than the signal. We conclude that DDE models may be used as a generalizable and reliable classification tool for even small segments of noisy data.
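
    The overall recipe (fit a small DDE model to noisy Rössler segments, classify the fitted coefficients, and read off the distance from the separating hyperplane) can be sketched as below; the particular model form, delays, noise level, and segment length are assumptions, not the structure selected in the study.

```python
# Minimal sketch: fit a small delay differential equation (DDE) model to short,
# noisy segments of the Rössler x-variable and classify the fitted coefficients.
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.svm import SVC

def rossler_x(b, a=0.2, c=5.7, dt=0.05, n=2000):
    f = lambda t, s: [-s[1] - s[2], s[0] + a * s[1], b + s[2] * (s[0] - c)]
    sol = solve_ivp(f, (0, n * dt), [1.0, 1.0, 1.0], t_eval=np.arange(n) * dt)
    return sol.y[0]

def dde_coeffs(x, d1=3, d2=7):
    """Least-squares fit of dx/dt ~ a1*x(t-d1) + a2*x(t-d2) + a3*x(t-d1)*x(t-d2)."""
    dxdt = np.gradient(x)
    lag = max(d1, d2)
    X1, X2 = x[lag - d1:-d1], x[lag - d2:-d2]
    A = np.column_stack([X1, X2, X1 * X2])
    coef, *_ = np.linalg.lstsq(A, dxdt[lag:], rcond=None)
    return coef

rng = np.random.default_rng(8)
segments, labels = [], []
for cls, b in enumerate([0.2, 0.4]):                # two dynamical regimes (different b)
    x = rossler_x(b)
    for start in range(0, 1500, 100):
        seg = x[start:start + 300]
        seg = seg + rng.standard_normal(seg.size) * seg.std()   # roughly 0 dB SNR noise
        segments.append(dde_coeffs(seg))
        labels.append(cls)

clf = SVC(kernel="linear").fit(segments, labels)
d = clf.decision_function(segments)                  # distance-like classification score
print(clf.score(segments, labels), d[:3])
```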

  15. Identification of pests and diseases of Dalbergia hainanensis based on EVI time series and classification of decision tree

    Science.gov (United States)

    Luo, Qiu; Xin, Wu; Qiming, Xiong

    2017-06-01

    In the extraction of vegetation information by remote sensing, phenological features and the low performance of remote sensing analysis algorithms are often not taken into account. To solve this problem, a method for extracting remote sensing vegetation information based on EVI time series and a multi-source decision-tree classification using branch similarity is proposed. Firstly, to improve the time-series stability of recognition accuracy, the seasonal features of vegetation are extracted based on the fitting span range of the time series. Secondly, decision-tree similarity is assessed by adaptively selecting a path or a probability parameter of component prediction; it serves as an index to evaluate the degree of task association, to decide whether to perform migration of the multi-source decision tree, and to ensure the speed of migration. Finally, the accuracy of classification and recognition of pests and diseases reaches 87%–98% for commercial Dalbergia hainanensis forest, which is significantly better than the 80%–96% accuracy obtained from MODIS coverage in this area. The validity of the proposed method is thus verified.

  16. Cognitive Reflection, Decision Biases, and Response Times.

    Science.gov (United States)

    Alós-Ferrer, Carlos; Garagnani, Michele; Hügelschäfer, Sabine

    2016-01-01

    We present novel evidence on response times and personality traits in standard questions from the decision-making literature where responses are relatively slow (medians around half a minute or above). To this end, we measured response times in a number of incentivized, framed items (decisions from description) including the Cognitive Reflection Test, two additional questions following the same logic, and a number of classic questions used to study decision biases in probability judgments (base-rate neglect, the conjunction fallacy, and the ratio bias). All questions create a conflict between an intuitive process and more deliberative thinking. For each item, we then created a non-conflict version by either making the intuitive impulse correct (resulting in an alignment question), shutting it down (creating a neutral question), or making it dominant (creating a heuristic question). For CRT questions, the differences in response times are as predicted by dual-process theories, with alignment and heuristic variants leading to faster responses and neutral questions to slower responses than the original, conflict questions. For decision biases (where responses are slower), evidence is mixed. To explore the possible influence of personality factors on both choices and response times, we used standard personality scales including the Rational-Experiential Inventory and the Big Five, and used them as controls in regression analysis.

  17. What nephropathologists need to know about antiphospholipid syndrome-associated nephropathy: Is it time for formulating a classification for renal morphologic lesions?

    Science.gov (United States)

    Mubarak, Muhammed; Nasri, Hamid

    2014-01-01

    Antiphospholipid syndrome (APS) is a systemic autoimmune disorder which commonly affects the kidneys. The Directory of Open Access Journals (DOAJ), Google Scholar, PubMed (NLM), LISTA (EBSCO) and Web of Science were searched. There is sufficient epidemiological, clinical and histopathological evidence to show that APS-associated nephropathy (APSN) is a distinctive lesion caused by antiphospholipid antibodies in patients with different forms of APS. Now that the morphological lesions of APSN are sufficiently well characterized, it is time to devise a classification that is of diagnostic and prognostic utility in this disease.

  18. The Effect of Sports and Physical Activity on Elderly Reaction Time and Response Time

    Directory of Open Access Journals (Sweden)

    Abdolrahman Khezri

    2014-07-01

    Full Text Available Objectives: Physical activity improves elderly motor and cognitive performance. The aim of this research is to study the effect of sport and physical activity on elderly reaction time and response time. Methods & Materials: The research method is causal-comparative and the statistical population consists of 60 active and non-active males over 60 years of age residing in Mahabad city. Reaction time was measured with a reaction timer made by Takei Company (model YB1000). Response time was measured via Nelson's Choice-Response Movement Test. Reaction time was measured first, followed by response time. For data analysis, descriptive statistics, the K-S test and the one-sample t-test were used. Results: The K-S test showed that the research data were parametric. Physical activity affected reaction time and response time: the t-test showed that the reaction time (P=0.000) and response time (P=0.000) of the active group were significantly shorter than those of the non-active group. Conclusion: The results of the current study demonstrate that sport and physical activity decrease reaction and response times through positive psychomotor and physiological changes.

  19. Object-based Dimensionality Reduction in Land Surface Phenology Classification

    Directory of Open Access Journals (Sweden)

    Brian E. Bunker

    2016-11-01

    Full Text Available Unsupervised classification or clustering of multi-decadal land surface phenology provides a spatio-temporal synopsis of natural and agricultural vegetation response to environmental variability and anthropogenic activities. Notwithstanding the detailed temporal information available in calibrated bi-monthly normalized difference vegetation index (NDVI) and comparable time series, typical pre-classification workflows average a pixel's bi-monthly index within the larger multi-decadal time series. While this process is one practical way to reduce the dimensionality of time series with many hundreds of image epochs, it effectively dampens temporal variation from both intra- and inter-annual observations related to land surface phenology. Through a novel application of object-based segmentation aimed at spatial (not temporal) dimensionality reduction, all 294 image epochs from a Moderate Resolution Imaging Spectroradiometer (MODIS) bi-monthly NDVI time series covering the northern Fertile Crescent were retained (in homogeneous landscape units) as unsupervised classification inputs. Given the inherent challenges of in situ or manual image interpretation of land surface phenology classes, a cluster validation approach based on transformed divergence enabled comparison between traditional and novel techniques. Improved intra-annual contrast was clearly manifest in rain-fed agriculture, and inter-annual trajectories showed increased cluster cohesion, reducing the overall number of classes identified in the Fertile Crescent study area from 24 to 10. Given careful segmentation parameters, this spatial dimensionality reduction technique augments the value of unsupervised learning to generate homogeneous land surface phenology units. By combining recent scalable computational approaches to image segmentation, future work can pursue new global land surface phenology products based on the high temporal resolution signatures of vegetation index time series.

  20. Radar transmitter classification using non-stationary signal classifier

    CSIR Research Space (South Africa)

    Du Plessis, MC

    2009-07-01

    Full Text Available support vector machine which is applied to the radar pulse's time-frequency representation. The time-frequency representation is refined using particle swarm optimization to increase the classification accuracy. The classification accuracy is tested...

  1. Real-time detection and classification of anomalous events in streaming data

    Science.gov (United States)

    Ferragut, Erik M.; Goodall, John R.; Iannacone, Michael D.; Laska, Jason A.; Harrison, Lane T.

    2016-04-19

    A system is described for receiving a stream of events and scoring the events based on anomalousness and maliciousness (or other classification). The events can be displayed to a user in user-defined groupings in an animated fashion. The system can include a plurality of anomaly detectors that together implement an algorithm to identify low probability events and detect atypical traffic patterns. The atypical traffic patterns can then be classified as being of interest or not. In one particular example, in a network environment, the classification can be whether the network traffic is malicious or not.

  2. Classification of operational characteristics of commercial cup-anemometers

    Energy Technology Data Exchange (ETDEWEB)

    Friis Pedersen, T; Schmidt Paulsen, U [Risoe National Lab., Wind Energy and Atmospheric Physics Dept., Roskilde (Denmark)

    1999-03-01

    The present classification of cup-anemometers is based on a procedure for classification of operational characteristics of cup-anemometers that was proposed at the EWEC '97 conference in Dublin, 1997. Three definitions of wind speed are considered: the average longitudinal wind speed (1D), the average horizontal wind speed (2D) and the average vector wind speed (3D). The classification is provided in these terms and, additionally, for the turbulence intensities, which are defined from the same wind speed definitions. The commercial cup-anemometers have all been calibrated in a wind tunnel for the normal calibrations and angular characteristics. Friction was measured by flywheel testing, where the surrounding temperatures were varied over a wide range. The characteristics of the cup-anemometers have been fitted to the heuristic dynamic model, and the response has been calculated in the time domain for prescribed ranges of external operational conditions. The results are presented as ranges of maximum deviations of 'measured' average wind speed. For each definition of wind speed and turbulence intensity, the cup-anemometers are ranked according to the most precise instrument. Finally, the most important systematic error sources are commented on. (au)

  3. A Box-Cox normal model for response times.

    Science.gov (United States)

    Klein Entink, R H; van der Linden, W J; Fox, J-P

    2009-11-01

    The log-transform has been a convenient choice in response time modelling on test items. However, motivated by a dataset from the Medical College Admission Test where the lognormal model violated the normality assumption, the possibilities of the broader class of Box-Cox transformations for response time modelling are investigated. After an introduction and an outline of a broader framework for analysing responses and response times simultaneously, the performance of a Box-Cox normal model for describing response times is investigated using simulation studies and a real data example. A transformation-invariant implementation of the deviance information criterion (DIC) is developed that allows for comparing model fit between models with different transformation parameters. Showing an enhanced description of the shape of the response time distributions, its application in an educational measurement context is discussed at length.
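
    The transformation step itself is easy to illustrate: the sketch below estimates a Box-Cox lambda for simulated response times and compares the resulting normality with a plain log-transform. The shifted-lognormal data are an assumption; the hierarchical response-time model itself is not reproduced.

```python
# Minimal sketch: estimate a Box-Cox transformation for a set of response times
# and compare it with the fixed log-transform (lambda = 0). Transformation only,
# not the full response/response-time model discussed above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
rt = rng.lognormal(mean=3.0, sigma=0.6, size=1000) + 2.0   # shifted RTs in seconds

transformed, lam = stats.boxcox(rt)          # ML estimate of the Box-Cox lambda
w_boxcox, _ = stats.shapiro(transformed[:500])
w_log, _ = stats.shapiro(np.log(rt[:500]))
print(f"estimated lambda = {lam:.2f}")
print("normality (Shapiro W), Box-Cox:", w_boxcox)
print("normality (Shapiro W), log:   ", w_log)
```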

  4. Bioelectric signal classification using a recurrent probabilistic neural network with time-series discriminant component analysis.

    Science.gov (United States)

    Hayashi, Hideaki; Shima, Keisuke; Shibanoki, Taro; Kurita, Yuichi; Tsuji, Toshio

    2013-01-01

    This paper outlines a probabilistic neural network developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower-dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model that incorporates a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into a neural network so that parameters can be obtained appropriately as network coefficients according to a backpropagation-through-time-based training algorithm. The network is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. In the experiments conducted during the study, the validity of the proposed network was demonstrated for EEG signals.

  5. Improving Student Question Classification

    Science.gov (United States)

    Heiner, Cecily; Zachary, Joseph L.

    2009-01-01

    Students in introductory programming classes often articulate their questions and information needs incompletely. Consequently, the automatic classification of student questions to provide automated tutorial responses is a challenging problem. This paper analyzes 411 questions from an introductory Java programming course by reducing the natural…

  6. Specific classification of financial analysis of enterprise activity

    Directory of Open Access Journals (Sweden)

    Synkevych Nadiia I.

    2014-01-01

    Full Text Available Although a wide variety of classifications of types of financial analysis of enterprise activity can be found in the modern scientific literature, differing in their approach to classification and in the number and content of classification features, a comprehensive comparison and analysis of the existing classifications has not been carried out. This explains the relevance of this study. The article reviews classifications of types of financial analysis proposed by various scholars and presents its own approach to this problem. Based on the results of the analysis, the article refines and builds a specific classification of financial analysis of enterprise activity and proposes classification by the following features: objects, subjects, goals of study, automation level, time period of the analytical base, scope of study, organisation system, classification features of the subject, spatial belonging, sufficiency, information sources, periodicity, criterial base, method of data selection for analysis and time direction. All types of financial analysis differ significantly in their inherent properties and parameters depending on the goals of the financial analysis. The developed specific classification provides subjects of financial analysis of enterprise activity with the possibility to identify the specific type of financial analysis that would correctly meet the set goals.

  7. Time response for sensor sensed to actuator response for mobile robotic system

    Science.gov (United States)

    Amir, N. S.; Shafie, A. A.

    2017-11-01

    Time and performance of a mobile robot are very important in completing the tasks given to achieve its ultimate goal. Tasks may need to be done within a time constraint to ensure smooth operation of a mobile robot and can result in better performance. The main purpose of this research was to improve the performance of a mobile robot so that it can complete the given tasks within the time constraint. The problem to be solved is to minimize the time interval between sensor detection and actuator response. The research objective is to analyse the real-time operating system performance of sensors and actuators on one microcontroller and on two microcontrollers for a mobile robot. The task for the mobile robot in this research is line following with obstacle avoidance. Three runs were carried out for the task, and the time from sensor detection to actuator response was recorded. Overall, the results show that the two-microcontroller system has a better response time than the one-microcontroller system. For this research, the average difference in response time is very important for improving the internal performance between the occurrence of a task, sensor detection, decision making and actuator response of a mobile robot. This research helped to develop a mobile robot with better performance that can complete tasks within the time constraint.

  8. Impact of job classification on employment of seasonal workers

    Directory of Open Access Journals (Sweden)

    Zoran Pandža

    2011-07-01

    Full Text Available The paper aims to improve the existing work organization, thus improving the success of the business process and ultimately reducing company costs. A change in organizational structure is proposed with the objective of achieving better and more efficient use of the resources available within the company. Since the existing organization and classification of jobs does not meet the requirements of the age we live in, there is a need for a new classification which would address the many changes that have taken place over the years, including changes that are yet to be made for the purpose of further development of the company. Organization and management of the company, as well as reorganization and implementation of a new classification, are necessary to make it possible for the company to adjust its business activities regularly, because the conditions in which the company operates are changing fast. The new classification would not actually change the number of sectors; rather, existing personnel would be allocated in a better way, which would result in a reduced need for seasonal workers. In the process of defining the new organizational structure, one should consider the type and way of doing business and the structural variables (division of labour, unity of command, authority and responsibility, span of control, division into business units, etc.). Expected results include improved organization and classification of jobs and improved quality, speed and efficiency of work. It should result in a company organized according to standards that are adjusted to modern times.

  9. Classification and Mapping of Paddy Rice by Combining Landsat and SAR Time Series Data

    Directory of Open Access Journals (Sweden)

    Seonyoung Park

    2018-03-01

    Full Text Available Rice is an important food resource, and the demand for rice has increased as population has expanded. Therefore, accurate paddy rice classification and monitoring are necessary to identify and forecast rice production. Satellite data have often been used to produce paddy rice maps with a more frequent update cycle (e.g., every year) than field surveys. Many satellite data, including both optical and SAR sensor data (e.g., Landsat, MODIS, and ALOS PALSAR), have been employed to classify paddy rice. In the present study, time series data from Landsat, RADARSAT-1, and ALOS PALSAR satellite sensors were synergistically used to classify paddy rice through machine learning approaches over two different climate regions (sites A and B). Six schemes considering the composition of various combinations of input data by sensor and collection date were evaluated. Scheme 6, which fused optical and SAR sensor time series data at the decision level, yielded the highest accuracy (98.67% for site A and 93.87% for site B). Performance of paddy rice classification was better in site A than site B, which consists of heterogeneous land cover and has low data availability due to a high cloud cover rate. This study also proposed a Paddy Rice Mapping Index (PMI) considering spectral and phenological characteristics of paddy rice. PMI represented well the spatial distribution of paddy rice in both regions. Google Earth Engine was adopted to produce paddy rice maps over larger areas using the proposed PMI-based approach.
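
    Decision-level fusion in the spirit of scheme 6 can be sketched as below: separate classifiers are trained on optical and SAR feature stacks and their class probabilities are averaged. The random feature matrices and the choice of Random Forests are placeholders, not the study's configuration.

```python
# Minimal sketch of decision-level fusion: train separate classifiers on optical
# and SAR feature stacks, then average their class probabilities per pixel.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(10)
n = 1000
X_optical = rng.random((n, 12))     # stand-in for Landsat time-series features
X_sar = rng.random((n, 8))          # stand-in for RADARSAT-1 / ALOS PALSAR features
y = rng.integers(0, 2, size=n)      # paddy rice vs. other

rf_opt = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_optical, y)
rf_sar = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_sar, y)

fused = (rf_opt.predict_proba(X_optical) + rf_sar.predict_proba(X_sar)) / 2
pred = fused.argmax(axis=1)
print("fused training accuracy:", (pred == y).mean())
```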

  10. Multi-Functional Sensing for Swarm Robots Using Time Sequence Classification: HoverBot, an Example

    Directory of Open Access Journals (Sweden)

    Markus P. Nemitz

    2018-05-01

    Full Text Available Scaling up robot swarms to collectives of hundreds or even thousands without sacrificing sensing, processing, and locomotion capabilities is a challenging problem. Low-cost robots are potentially scalable, but the majority of existing systems have limited capabilities, and these limitations substantially constrain the type of experiments that could be performed by robotics researchers. Instead of adding functionality by adding more components and therefore increasing the cost, we demonstrate how low-cost hardware can be used beyond its standard functionality. We systematically review 15 swarm robotic systems and analyse their sensing capabilities by applying a general sensor model from the sensing and measurement community. This work is based on the HoverBot system. A HoverBot is a levitating circuit board that manoeuvres by pulling itself towards magnetic anchors that are embedded into the robot arena. We show that HoverBot’s magnetic field readouts from its Hall-effect sensor can be associated to successful movement, robot rotation and collision measurands. We build a time series classifier based on these magnetic field readouts. We modify and apply signal processing techniques to enable the online classification of the time-variant magnetic field measurements on HoverBot’s low-cost microcontroller. We enabled HoverBot with successful movement, rotation, and collision sensing capabilities by utilising its single Hall-effect sensor. We discuss how our classification method could be applied to other sensors to increase a robot’s functionality while retaining its cost.

  11. The Value of Response Times in Item Response Modeling

    Science.gov (United States)

    Molenaar, Dylan

    2015-01-01

    A new and very interesting approach to the analysis of responses and response times is proposed by Goldhammer (this issue). In his approach, differences in the speed-ability compromise within respondents are considered to confound the differences in ability between respondents. These confounding effects of speed on the inferences about ability can…

  12. Dynamic species classification of microorganisms across time, abiotic and biotic environments-A sliding window approach.

    Directory of Open Access Journals (Sweden)

    Frank Pennekamp

    Full Text Available The development of video-based monitoring methods allows for rapid, dynamic and accurate monitoring of individuals or communities, compared to slower traditional methods, with far-reaching ecological and evolutionary applications. Large amounts of data are generated using video-based methods, which can be effectively processed using machine learning (ML) algorithms into meaningful ecological information. ML uses user-defined classes (e.g. species), derived from a subset (i.e. training data) of video-observed quantitative features (e.g. phenotypic variation), to infer classes in subsequent observations. However, phenotypic variation often changes due to environmental conditions, which may lead to poor classification if environmentally induced variation in phenotypes is not accounted for. Here we describe a framework for classifying species under changing environmental conditions based on random forest classification. A sliding window approach was developed that restricts the temporal and environmental conditions used for training in order to improve the classification. We tested our approach by applying the classification framework to experimental data. The experiment used a set of six ciliate species to monitor changes in community structure and behavior over hundreds of generations, in dozens of species combinations and across a temperature gradient. Differences in biotic and abiotic conditions caused simplistic classification approaches to be unsuccessful. In contrast, the sliding window approach allowed classification to be highly successful, as phenotypic differences driven by environmental change could be captured by the classifier. Importantly, classification using the random forest algorithm showed comparable success when validated against traditional, slower, manual identification. Our framework allows for reliable classification in dynamic environments, and may help to improve strategies for long-term monitoring of species in changing environments. Our
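
    The sliding-window idea can be sketched as follows: for each frame to classify, the random forest is retrained only on labelled data from a window of similar sampling times, so environmentally driven drift in the features is tracked. The window width, features, and drift model are illustrative assumptions.

```python
# Minimal sketch of the sliding-window idea: when classifying observations from
# time t, train the random forest only on labelled data from a window around t,
# so the classifier tracks environmentally induced drift in the features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def sliding_window_predict(times, X, y, query_times, X_query, half_width=2.0):
    preds = np.empty(len(X_query), dtype=int)
    for k, (tq, xq) in enumerate(zip(query_times, X_query)):
        mask = np.abs(times - tq) <= half_width          # restrict training data in time
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[mask], y[mask])
        preds[k] = clf.predict(xq.reshape(1, -1))[0]
    return preds

rng = np.random.default_rng(11)
times = rng.uniform(0, 20, 600)                    # sampling day of each labelled individual
X = rng.random((600, 5)) + 0.02 * times[:, None]   # features drifting with time/temperature
y = rng.integers(0, 3, size=600)                   # species labels
print(sliding_window_predict(times, X, y, times[:5], X[:5]))
```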

  13. Sensor response time monitoring using noise analysis

    International Nuclear Information System (INIS)

    Hashemian, H.M.; Thie, J.A.; Upadhyaya, B.R.; Holbert, K.E.

    1988-01-01

    Random noise techniques in nuclear power plants have been developed for system surveillance and for analysis of reactor core dynamics. The noise signals also contain information about sensor dynamics, and this can be extracted using frequency, amplitude and time domain analyses. Even though noise analysis has been used for sensor response time testing in some nuclear power plants, an adequate validation of this method has never been carried out. This paper presents the results of limited work recently performed to examine the validity of the noise analysis for sensor response time testing in nuclear power plants. The conclusion is that noise analysis has the potential for detecting gross changes in sensor response but it cannot be used for reliable measurement of response time until more laboratory and field experience is accumulated. The method is more advantageous for testing pressure sensors than it is for temperature sensors. This is because: 1) for temperature sensors, a method called Loop Current Step Response test is available which is quantitatively more exact than noise analysis, 2) no method currently exists for on-line testing of pressure transmitters other than the Power-Interrupt test which is applicable only to force balance pressure transmitters, and 3) pressure sensor response time is affected by sensing line degradation which is inherently taken into account by testing with noise analysis. (author)

  14. Incorporating Response Times in Item Response Theory Models of Reading Comprehension Fluency

    Science.gov (United States)

    Su, Shiyang

    2017-01-01

    With the online assessment becoming mainstream and the recording of response times becoming straightforward, the importance of response times as a measure of psychological constructs has been recognized and the literature of modeling times has been growing during the last few decades. Previous studies have tried to formulate models and theories to…

  15. An edit script for taxonomic classifications

    Directory of Open Access Journals (Sweden)

    Valiente Gabriel

    2005-08-01

    Full Text Available Abstract. Background: The NCBI taxonomy provides one of the most powerful ways to navigate sequence databases, but currently users are forced to formulate queries according to a single taxonomic classification. Given that there is no universal agreement on the classification of organisms, providing a single classification places constraints on the questions biologists can ask. However, maintaining multiple classifications is burdensome in the face of a constantly growing NCBI classification. Results: In this paper, we present a solution to the problem of generating modifications of the NCBI taxonomy, based on the computation of an edit script that summarises the differences between two classification trees. Our algorithms find the shortest possible edit script based on the identification of all shared subtrees, and only take time quasi-linear in the size of the trees because classification trees have unique node labels. Conclusion: These algorithms have been recently implemented, and the software is freely available for download from http://darwin.zoology.gla.ac.uk/~rpage/forest/.
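
    Under the same unique-label assumption, the flavour of such an edit script can be sketched with simple child-to-parent maps; the published algorithm's shared-subtree machinery is not reproduced, and the taxa used are arbitrary examples.

```python
# Minimal sketch under the unique-node-label assumption: represent each
# classification as a child -> parent map and emit add / delete / move
# operations that turn one tree into the other.
def edit_script(parent_old, parent_new):
    ops = []
    for node in parent_old.keys() - parent_new.keys():
        ops.append(("delete", node))
    for node in parent_new.keys() - parent_old.keys():
        ops.append(("add", node, parent_new[node]))
    for node in parent_old.keys() & parent_new.keys():
        if parent_old[node] != parent_new[node]:
            ops.append(("move", node, parent_new[node]))
    return ops

# toy example: one taxon is moved under a newly added intermediate rank
old = {"Metazoa": "Eukaryota", "Chordata": "Metazoa", "Pan": "Chordata"}
new = {"Metazoa": "Eukaryota", "Chordata": "Metazoa", "Pan": "Hominidae",
       "Hominidae": "Chordata"}
print(edit_script(old, new))
```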

  16. The last classification of vasculitis

    NARCIS (Netherlands)

    Kallenberg, Cees G. M.

    2008-01-01

    Systemic vasculitides are a group of diverse conditions characterized by inflammation of the blood vessels. To obtain homogeneity in clinical characteristics, prognosis, and response to treatment, patients with vasculitis should be classified into defined disease categories. Many classification

  17. Sow-activity classification from acceleration patterns

    DEFF Research Database (Denmark)

    Escalante, Hugo Jair; Rodriguez, Sara V.; Cordero, Jorge

    2013-01-01

    This paper describes a supervised learning approach to sow-activity classification from accelerometer measurements. In the proposed methodology, pairs of accelerometer measurements and activity types are considered as labeled instances of a usual supervised classification task. Under this scenario, sow-activity classification can be approached with standard machine learning methods for pattern classification. Individual predictions for elements of time series of arbitrary length are combined to classify the series as a whole. An extensive comparison of representative learning algorithms, including neural networks, support vector machines, and ensemble methods, is presented. Experimental results are reported using a data set for sow-activity classification collected in a real production herd. The data set, which has been widely used in related works, includes measurements from active (feeding...
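
    As a rough sketch of the combination step described above (and only under the assumption that a per-window classifier has already been trained), individual predictions for the elements of a time series can be merged by majority vote to label the series as a whole; the classifier and variable names here are illustrative.

      from collections import Counter
      from sklearn.ensemble import RandomForestClassifier   # stand-in for any of the compared learners

      def classify_series(clf, windows):
          """Label a whole accelerometer time series from per-window predictions (majority vote)."""
          votes = clf.predict(windows)            # one activity label per window
          return Counter(votes).most_common(1)[0][0]

      # Hypothetical usage: X_train holds per-window features, y_train the activity labels.
      # clf = RandomForestClassifier().fit(X_train, y_train)
      # activity = classify_series(clf, windows_of_new_series)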

  18. Morphological classification of plant cell deaths.

    Science.gov (United States)

    van Doorn, W G; Beers, E P; Dangl, J L; Franklin-Tong, V E; Gallois, P; Hara-Nishimura, I; Jones, A M; Kawai-Yamada, M; Lam, E; Mundy, J; Mur, L A J; Petersen, M; Smertenko, A; Taliansky, M; Van Breusegem, F; Wolpert, T; Woltering, E; Zhivotovsky, B; Bozhkov, P V

    2011-08-01

    Programmed cell death (PCD) is an integral part of plant development and of responses to abiotic stress or pathogens. Although the morphology of plant PCD is, in some cases, well characterised and molecular mechanisms controlling plant PCD are beginning to emerge, there is still confusion about the classification of PCD in plants. Here we suggest a classification based on morphological criteria. According to this classification, the use of the term 'apoptosis' is not justified in plants, but at least two classes of PCD can be distinguished: vacuolar cell death and necrosis. During vacuolar cell death, the cell contents are removed by a combination of autophagy-like process and release of hydrolases from collapsed lytic vacuoles. Necrosis is characterised by early rupture of the plasma membrane, shrinkage of the protoplast and absence of vacuolar cell death features. Vacuolar cell death is common during tissue and organ formation and elimination, whereas necrosis is typically found under abiotic stress. Some examples of plant PCD cannot be ascribed to either major class and are therefore classified as separate modalities. These are PCD associated with the hypersensitive response to biotrophic pathogens, which can express features of both necrosis and vacuolar cell death, PCD in starchy cereal endosperm and during self-incompatibility. The present classification is not static, but will be subject to further revision, especially when specific biochemical pathways are better defined.

  19. A definition and classification of status epilepticus--Report of the ILAE Task Force on Classification of Status Epilepticus.

    Science.gov (United States)

    Trinka, Eugen; Cock, Hannah; Hesdorffer, Dale; Rossetti, Andrea O; Scheffer, Ingrid E; Shinnar, Shlomo; Shorvon, Simon; Lowenstein, Daniel H

    2015-10-01

    The Commission on Classification and Terminology and the Commission on Epidemiology of the International League Against Epilepsy (ILAE) have charged a Task Force to revise concepts, definition, and classification of status epilepticus (SE). The proposed new definition of SE is as follows: Status epilepticus is a condition resulting either from the failure of the mechanisms responsible for seizure termination or from the initiation of mechanisms which lead to abnormally prolonged seizures (after time point t1). It is a condition which can have long-term consequences (after time point t2), including neuronal death, neuronal injury, and alteration of neuronal networks, depending on the type and duration of seizures. This definition is conceptual, with two operational dimensions: the first is the length of the seizure and the time point (t1) beyond which the seizure should be regarded as "continuous seizure activity." The second time point (t2) is the time of ongoing seizure activity after which there is a risk of long-term consequences. In the case of convulsive (tonic-clonic) SE, both time points (t1 at 5 min and t2 at 30 min) are based on animal experiments and clinical research. This evidence is incomplete, and there is furthermore considerable variation, so these time points should be considered as the best estimates currently available. Data are not yet available for other forms of SE, but as knowledge and understanding increase, time points can be defined for specific forms of SE based on scientific evidence and incorporated into the definition, without changing the underlying concepts. A new diagnostic classification system of SE is proposed, which will provide a framework for clinical diagnosis, investigation, and therapeutic approaches for each patient. There are four axes: (1) semiology; (2) etiology; (3) electroencephalography (EEG) correlates; and (4) age. Axis 1 (semiology) lists different forms of SE divided into those with prominent motor

  20. ACCUWIND - Methods for classification of cup anemometers

    DEFF Research Database (Denmark)

    Dahlberg, J.-Å.; Friis Pedersen, Troels; Busche, P.

    2006-01-01

    the errors associated with the use of cup anemometers, and to develop a classification system for quantification of systematic errors of cup anemometers. This classification system has now been implemented in the IEC 61400-12-1 standard on power performance measurements in annexes I and J. The classification of cup anemometers requires general external climatic operational ranges to be applied for the analysis of systematic errors. A Class A category classification is connected to reasonably flat sites, and another Class B category is connected to complex terrain. General classification indices are the result... developed in the CLASSCUP project and earlier. A number of approaches, including the use of two cup anemometer models, two methods of torque coefficient measurement, two angular response measurements, and inclusion and exclusion of the influence of friction, have been implemented in the classification process...

  1. Response times of operators in a control room

    International Nuclear Information System (INIS)

    Platz, O.; Rasmussen, J.; Skanborg, P.Z.

    1982-12-01

    A statistical analysis was made of operator response times recorded in the control room of a research reactor during the years 1972-1974. A homogeneity test revealed that the data consist of a mixture of populations. A small but statistically significant difference is found between day and night response times. Lognormal distributions are found to provide the best fit of the day and the night response times. (author)
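
    As a small illustration of the distributional claim (not a reproduction of the original analysis), the sketch below fits lognormal distributions to day and night response-time samples with SciPy and applies a nonparametric two-sample test; the data are synthetic placeholders.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      # Synthetic placeholder data in seconds; a real analysis would use control-room logs.
      day_rt = rng.lognormal(mean=2.0, sigma=0.5, size=200)
      night_rt = rng.lognormal(mean=2.2, sigma=0.5, size=200)

      for label, rt in (("day", day_rt), ("night", night_rt)):
          shape, loc, scale = stats.lognorm.fit(rt, floc=0)   # lognormal fit with location fixed at 0
          print(label, "median =", round(scale, 2), "sigma =", round(shape, 2))

      # Two-sample test for a day/night difference (the paper used a homogeneity test).
      print(stats.mannwhitneyu(day_rt, night_rt))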

  2. Solid waste bin detection and classification using Dynamic Time Warping and MLP classifier

    Energy Technology Data Exchange (ETDEWEB)

    Islam, Md. Shafiqul, E-mail: shafique@eng.ukm.my [Dept. of Electrical, Electronic and Systems Engineering, Universiti Kebangsaan Malaysia, Bangi 43600, Selangore (Malaysia); Hannan, M.A., E-mail: hannan@eng.ukm.my [Dept. of Electrical, Electronic and Systems Engineering, Universiti Kebangsaan Malaysia, Bangi 43600, Selangore (Malaysia); Basri, Hassan [Dept. of Civil and Structural Engineering, Universiti Kebangsaan Malaysia, Bangi 43600, Selangore (Malaysia); Hussain, Aini; Arebey, Maher [Dept. of Electrical, Electronic and Systems Engineering, Universiti Kebangsaan Malaysia, Bangi 43600, Selangore (Malaysia)

    2014-02-15

    Highlights: • Solid waste bin level detection using Dynamic Time Warping (DTW). • Gabor wavelet filter is used to extract the solid waste image features. • Multi-Layer Perceptron classifier network is used for bin image classification. • The classification performance evaluated by ROC curve analysis. - Abstract: The increasing requirement for Solid Waste Management (SWM) has become a significant challenge for municipal authorities. A number of integrated systems and methods have been introduced to overcome this challenge. Many researchers have aimed to develop an ideal SWM system, including approaches involving software-based routing, Geographic Information Systems (GIS), Radio-frequency Identification (RFID), or sensor-based intelligent bins. Image processing solutions for Solid Waste (SW) collection have also been developed; however, when capturing the bin image it is challenging to position the camera so that the bin area is centred in the image. As yet, there is no ideal system which can correctly estimate the amount of SW. This paper briefly discusses an efficient image processing solution to overcome these problems. Dynamic Time Warping (DTW) was used for detecting and cropping the bin area, and Gabor wavelets (GW) were introduced for feature extraction from the waste bin image. Image features were used to train the classifier. A Multi-Layer Perceptron (MLP) classifier was used to classify the waste bin level and estimate the amount of waste inside the bin. The area under the Receiver Operating Characteristic (ROC) curve was used to statistically evaluate classifier performance. The results of this developed system are comparable to those of previous image-processing-based systems. The system demonstration using DTW with GW for feature extraction and an MLP classifier led to promising results with respect to the accuracy of waste level estimation (98.50%). The application can be used to optimize the routing of waste collection based on the estimated bin level.
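
    For readers unfamiliar with DTW, the sketch below shows the textbook dynamic-programming recurrence for the DTW distance between two 1-D sequences; it is a generic illustration, not the authors' implementation, and the image-profile inputs are invented.

      import numpy as np

      def dtw_distance(a, b):
          """Classic O(len(a)*len(b)) Dynamic Time Warping distance between two 1-D sequences."""
          n, m = len(a), len(b)
          cost = np.full((n + 1, m + 1), np.inf)
          cost[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  d = abs(a[i - 1] - b[j - 1])
                  cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                       cost[i, j - 1],      # deletion
                                       cost[i - 1, j - 1])  # match
          return cost[n, m]

      # Hypothetical usage: compare a column-intensity profile of a new image against a
      # reference bin template to decide where to crop the bin area.
      template = np.array([0, 1, 4, 9, 4, 1, 0], dtype=float)
      profile = np.array([0, 0, 1, 5, 9, 5, 1, 0], dtype=float)
      print(dtw_distance(template, profile))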

  3. A study for Unsafe Act classification under crew interaction during procedure-driven operation

    International Nuclear Information System (INIS)

    Choi, Sun Yeong; Park, Jinkyun; Kim, Yochan; Kim, Seunghwan; Jung, Wondea

    2016-01-01

    Highlights: • The procedure-driven operation was divided into four stages by considering crew relations such as instructions and responses. • Ten patterns of UA occurrence paths and the related operators per path were identified. • The UA type classification scheme was proposed based on the ten patterns of UA occurrence paths. • A case study to implement the UA type classification and to define the related operators per UA was performed. • The UA type classification scheme can be practical in that it prevents bias from subjective judgment. - Abstract: In this study, a method for UA (Unsafe Act) classification under a simulated procedure-driven operation was proposed. To this end, a procedure-driven operation was divided into four stages by considering crew relations such as instructions and responses. Based on the four stages of a procedure-driven operation, ten patterns of UA occurrence paths and the related operators per path were identified. From the ten types of UA occurrence paths, including the related operators, it is practicable to trace when and by whom a UA is initiated during a procedure-driven operation, and the interaction or causality among the crew after the UA is initiated. Therefore, the types of UAs were classified into ‘Instruction UA’, ‘Reporting UA’, and ‘Execution UA’ by considering the initiation time and initiator of the UA. A case study to implement the UA type classification and to define the related operators per UA was performed with the ISLOCA scenario simulator training data. The UA classification scheme proposed in this paper can be practical in that it requires relatively little expertise in human performance analysis, and it prevents bias from subjective judgment because it is based on an observation-based approach.

  4. Phylogenetic classification of the world’s tropical forests

    OpenAIRE

    Slik, J. W. Ferry; Franklin, Janet; Arroyo-Rodríguez, Víctor; Field, Richard; Aguilar, Salomon; Aguirre, Nikolay; Ahumada, Jorge; Aiba, Shin-Ichiro; Alves, Luciana F.; K, Anitha; Avella, Andres; Mora, Francisco; Aymard C., Gerardo A.; Báez, Selene; Balvanera, Patricia

    2018-01-01

    Identifying and explaining regional differences in tropical forest dynamics, structure, diversity, and composition are critical for anticipating region-specific responses to global environmental change. Floristic classifications are of fundamental importance for these efforts. Here we provide a global tropical forest classification that is explicitly based on community evolutionary similarity, resulting in identification of five major tropical forest regions and their relationships: (i) Indo-...

  5. A Comparison of Response Rate, Response Time, and Costs of Mail and Electronic Surveys.

    Science.gov (United States)

    Shannon, David M.; Bradshaw, Carol C.

    2002-01-01

    Compared response rates, response time, and costs of mail and electronic surveys using a sample of 377 college faculty members. Mail surveys yielded a higher response rate and a lower rate of undeliverable surveys, but response time was longer and costs were higher than for electronic surveys. (SLD)

  6. Contaminant classification using cosine distances based on multiple conventional sensors.

    Science.gov (United States)

    Liu, Shuming; Che, Han; Smith, Kate; Chang, Tian

    2015-02-01

    Emergent contamination events have a significant impact on water systems. After contamination detection, it is important to classify the type of contaminant quickly to provide support for remediation attempts. Conventional methods generally either rely on laboratory-based analysis, which requires a long analysis time, or on multivariable-based geometry analysis and sequence analysis, which is prone to being affected by the contaminant concentration. This paper proposes a new contaminant classification method, which discriminates contaminants in real time, independently of the contaminant concentration. The proposed method quantifies the similarities or dissimilarities between sensors' responses to different types of contaminants. The performance of the proposed method was evaluated using data from contaminant injection experiments in a laboratory and compared with a Euclidean distance-based method. The robustness of the proposed method was evaluated using an uncertainty analysis. The results show that the proposed method performed better in identifying the type of contaminant than the Euclidean distance-based method and that it could classify the type of contaminant in minutes without significantly compromising the correct classification rate (CCR).
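
    A minimal sketch of the cosine-distance idea: each contaminant class is represented by a reference direction of multi-sensor response changes, and a new event is assigned to the class whose direction it is closest to. Because cosine distance ignores vector length, the decision is insensitive to concentration. The signatures and names below are invented for illustration.

      import numpy as np

      def cosine_distance(u, v):
          return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

      # Hypothetical reference signatures: per-sensor response direction for each contaminant.
      references = {
          "contaminant_A": np.array([0.9, -0.1, 0.4]),
          "contaminant_B": np.array([-0.2, 0.8, 0.5]),
      }

      def classify(sensor_deltas):
          """Pick the class with the smallest cosine distance to the observed sensor changes."""
          return min(references, key=lambda c: cosine_distance(sensor_deltas, references[c]))

      # A scaled copy of signature A (the same contaminant at a higher concentration)
      # is still classified as contaminant_A.
      print(classify(np.array([1.8, -0.2, 0.8])))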

  7. Social and ethical implications of psychiatric classification for low ...

    African Journals Online (AJOL)

    Classification of Diseases, currently 10th edition, it is timely to consider the wider societal implications of evolving psychiatric classification, especially within low- and middle-income countries (LMICs). The author reviewed developments in psychiatric classification, especially the move from categorical to dimensional ...

  8. Increasing accuracy of vehicle detection from conventional vehicle detectors - counts, speeds, classification, and travel time.

    Science.gov (United States)

    2014-09-01

    Vehicle classification is an important traffic parameter for transportation planning and infrastructure management. Length-based vehicle classification from dual loop detectors is among the lowest cost technologies commonly used for collecting th...

  9. Classification of ion mobility spectra by functional groups using neural networks

    Science.gov (United States)

    Bell, S.; Nazarov, E.; Wang, Y. F.; Eiceman, G. A.

    1999-01-01

    Neural networks were trained using whole ion mobility spectra from a standardized database of 3137 spectra for 204 chemicals at various concentrations. Performance of the network was measured by the success of classification into ten chemical classes. Eleven stages for evaluation of spectra and of spectral pre-processing were employed and minimums established for response thresholds and spectral purity. After optimization of the database, network, and pre-processing routines, the fraction of successful classifications by functional group was 0.91 throughout a range of concentrations. Network classification relied on a combination of features, including drift times, number of peaks, relative intensities, and other factors apparently including peak shape. The network was opportunistic, exploiting different features within different chemical classes. Application of neural networks in a two-tier design where chemicals were first identified by class and then individually eliminated all but one false positive out of 161 test spectra. These findings establish that ion mobility spectra, even with low resolution instrumentation, contain sufficient detail to permit the development of automated identification systems.

  10. Classification of Teleparallel Homothetic Vector Fields in Cylindrically Symmetric Static Space-Times in Teleparallel Theory of Gravitation

    International Nuclear Information System (INIS)

    Shabbir, Ghulam; Khan, Suhail

    2010-01-01

    In this paper we classify cylindrically symmetric static space-times according to their teleparallel homothetic vector fields using direct integration technique. It turns out that the dimensions of the teleparallel homothetic vector fields are 4, 5, 7 or 11, which are the same in numbers as in general relativity. In case of 4, 5 or 7 proper teleparallel homothetic vector fields exist for the special choice to the space-times. In the case of 11 teleparallel homothetic vector fields the space-time becomes Minkowski with all the zero torsion components. Teleparallel homothetic vector fields in this case are exactly the same as in general relativity. It is important to note that this classification also covers the plane symmetric static space-times. (general)

  11. Classification of visual and linguistic tasks using eye-movement features.

    Science.gov (United States)

    Coco, Moreno I; Keller, Frank

    2014-03-07

    The role of the task has received special attention in visual-cognition research because it can provide causal explanations of goal-directed eye-movement responses. The dependency between visual attention and task suggests that eye movements can be used to classify the task being performed. A recent study by Greene, Liu, and Wolfe (2012), however, fails to achieve accurate classification of visual tasks based on eye-movement features. In the present study, we hypothesize that tasks can be successfully classified when they differ with respect to the involvement of other cognitive domains, such as language processing. We extract the eye-movement features used by Greene et al. as well as additional features from the data of three different tasks: visual search, object naming, and scene description. First, we demonstrated that eye-movement responses make it possible to characterize the goals of these tasks. Then, we trained three different types of classifiers and predicted the task participants performed with an accuracy well above chance (a maximum of 88% for visual search). An analysis of the relative importance of features for classification accuracy reveals that just one feature, i.e., initiation time, is sufficient for above-chance performance (a maximum of 79% accuracy in object naming). Crucially, this feature is independent of task duration, which differs systematically across the three tasks we investigated. Overall, the best task classification performance was obtained with a set of seven features that included both spatial information (e.g., entropy of attention allocation) and temporal components (e.g., total fixation on objects) of the eye-movement record. This result confirms the task-dependent allocation of visual attention and extends previous work by showing that task classification is possible when tasks differ in the cognitive processes involved (purely visual tasks such as search vs. communicative tasks such as scene description).

  12. Bookseller’s Classification: Classification Examples and Criteria of Croatian Booksellers in Sales Catalogs and Book Lists from the Beginning of the 20th Century

    Directory of Open Access Journals (Sweden)

    Nada Topić

    2012-12-01

    The aim of the paper is to investigate how Croatian bookstores classified their sales offerings at the beginning of the 20th century. Through content analysis of 17 sales lists/catalogs of books from Dubrovnik, Split, Zadar, Karlovac, Zagreb and Osijek, the classification structure has been reconstructed, and the criteria according to which booksellers' offerings were classified in the early 20th century have been determined. The analysis established the following criteria of bookstore classification: topic/content, form/type of work, type of corpus, genre, language, purpose, publishing series, publisher, time of publication, (new) edition, time of publication/purchase, customer's specific interests, number, letter and author. The order of enumeration within specific categories is mostly alphabetic, numeric or according to order of publication. Unlike library classification and classification systems in general, the problem of bookstore classification is not well covered in existing sources. Research studies that focus on the history of bookselling, even when they reveal how booksellers' offerings were classified, remain at a descriptive level without any deeper analysis of the criteria or possible reasons for such classification. The contribution of this paper is therefore a detailed analysis of a larger sample of bookstore sales catalogs, as well as an attempt to illuminate the criteria and reasons behind bookstore classification systems in a defined historical, spatial and temporal context.

  13. Characterisation and classification of RISOe P2546 cup anemometer

    Energy Technology Data Exchange (ETDEWEB)

    Friis Pedersen, T.

    2003-04-01

    The characteristics of the RISOe P2546 cup anemometer were investigated in detail by wind tunnel and laboratory tests. The characteristics include an accredited calibration, tilt response measurements for tilt angles between -40° and 40°, gust response measurements at 8 m/s and turbulence intensities of 10%, 16% and 23%, step response measurements at step wind speeds of 3.7, 8, 11.9 and 15.2 m/s, measurement of torque characteristics at 8 m/s, rotor inertia measurements, and measurements of bearing friction at temperatures from -20 degC to 40 degC. The characteristics were fitted to a time domain cup anemometer model. The characteristics were transformed into the CLASSCUP classification scheme, and were related to the cup anemometer requirements in the Danish certification system and in the IEC 61400-121 Committee Draft. (au)

  14. Revised Soil Classification System for Coarse-Fine Mixtures

    KAUST Repository

    Park, Junghee; Santamarina, Carlos

    2017-01-01

    Soil classification systems worldwide capture great physical insight and enable geotechnical engineers to anticipate the properties and behavior of soils by grouping them into similar response categories based on their index properties. Yet gravimetric analysis and data trends summarized from published papers reveal critical limitations in soil group boundaries adopted in current systems. In particular, current classification systems fail to capture the dominant role of fines on the mechanical and hydraulic properties of soils. A revised soil classification system (RSCS) for coarse-fine mixtures is proposed herein. Definitions of classification boundaries use the low and high void ratios that gravel, sand, and fines may attain. This research adopts emax and emin for gravels and sands, and three distinctive void ratio values for fines: soft eF|10 kPa and stiff eF|1 MPa for mechanical response (at effective stress 10 kPa and 1 MPa, respectively), and viscous λ⋅eF|LL for fluid flow control, where λ=2log(LL−25) and eF|LL is the void ratio at the liquid limit. For classification purposes, these void ratios can be estimated from index properties such as particle shape, the coefficient of uniformity, and the liquid limit. Analytically computed and data-adjusted boundaries are soil-specific, in contrast with the Unified Soil Classification System (USCS). Threshold fractions for mechanical control and for flow control are quite distinct in the proposed system. Therefore, the RSCS uses a two-name nomenclature whereby the first letters identify the component(s) that controls mechanical properties, followed by a letter (shown in parentheses) that identifies the component that controls fluid flow. Sample charts in this paper and a Microsoft Excel spreadsheet facilitate the implementation of this revised classification system.
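
    As a small numerical illustration of the fines descriptors quoted above, the sketch below evaluates λ = 2·log(LL − 25) and the flow-control term λ·eF|LL. It assumes a base-10 logarithm (the abstract does not state the base) and uses invented index values; it does not reproduce the full RSCS boundary charts.

      import math

      def rscs_fines_parameters(liquid_limit, e_f_ll):
          """Fines descriptors used in the revised soil classification system (illustrative sketch)."""
          lam = 2.0 * math.log10(liquid_limit - 25.0)   # assumed base-10 logarithm
          viscous_void_ratio = lam * e_f_ll             # lambda * e_F|LL, the fluid-flow control term
          return lam, viscous_void_ratio

      # Invented example: a fines fraction with LL = 60 and e_F|LL = 1.5.
      lam, e_flow = rscs_fines_parameters(60.0, 1.5)
      print(f"lambda = {lam:.2f}, viscous void ratio = {e_flow:.2f}")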

  16. A semi-parametric within-subject mixture approach to the analyses of responses and response times.

    Science.gov (United States)

    Molenaar, Dylan; Bolsinova, Maria; Vermunt, Jeroen K

    2018-05-01

    In item response theory, modelling the item response times in addition to the item responses may improve the detection of possible between- and within-subject differences in the process that resulted in the responses. For instance, if respondents rely on rapid guessing on some items but not on all, the joint distribution of the responses and response times will be a multivariate within-subject mixture distribution. Suitable parametric methods to detect these within-subject differences have been proposed. In these approaches, a distribution needs to be assumed for the within-class response times. In this paper, it is demonstrated that these parametric within-subject approaches may produce false positives and biased parameter estimates if the assumption concerning the response time distribution is violated. A semi-parametric approach is proposed which resorts to categorized response times. This approach is shown to hardly produce false positives and parameter bias. In addition, the semi-parametric approach results in approximately the same power as the parametric approach. © 2017 The British Psychological Society.

  17. A Hidden Markov Models Approach for Crop Classification: Linking Crop Phenology to Time Series of Multi-Sensor Remote Sensing Data

    Directory of Open Access Journals (Sweden)

    Sofia Siachalou

    2015-03-01

    Vegetation monitoring and mapping based on multi-temporal imagery has recently received much attention due to the plethora of medium-high spatial resolution satellites and the improved classification accuracies attained compared to uni-temporal approaches. Efficient image processing strategies are needed to exploit the phenological information present in temporal image sequences and to limit data redundancy and computational complexity. Within this framework, we implement the theory of Hidden Markov Models in crop classification, based on the time-series analysis of phenological states, inferred by a sequence of remote sensing observations. More specifically, we model the dynamics of vegetation over an agricultural area of Greece, characterized by spatio-temporal heterogeneity and small-sized fields, using RapidEye and Landsat ETM+ imagery. In addition, the classification performance of image sequences with variable spatial and temporal characteristics is evaluated and compared. The classification model considering one RapidEye and four pan-sharpened Landsat ETM+ images was found superior, resulting in a conditional kappa from 0.77 to 0.94 per class and an overall accuracy of 89.7%. The results highlight the potential of the method for operational crop mapping in Euro-Mediterranean areas and provide some hints for optimal image acquisition windows regarding major crop types in Greece.
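
    The decoding step at the heart of such a model can be illustrated with a toy Viterbi pass that infers a sequence of phenological states from image-derived observations. The state names, observation symbols, and probabilities below are invented for illustration and do not reproduce the study's model or data.

      import numpy as np

      states = ["emergence", "growth", "maturity"]
      obs_symbols = ["low_ndvi", "mid_ndvi", "high_ndvi"]

      start = np.array([0.8, 0.15, 0.05])               # invented initial state probabilities
      trans = np.array([[0.6, 0.4, 0.0],                # invented state transition matrix
                        [0.0, 0.7, 0.3],
                        [0.0, 0.0, 1.0]])
      emit = np.array([[0.7, 0.2, 0.1],                 # invented emission probabilities
                       [0.2, 0.5, 0.3],
                       [0.1, 0.3, 0.6]])

      def viterbi(observations):
          """Most likely phenological state sequence for an observed image time series."""
          obs_idx = [obs_symbols.index(o) for o in observations]
          n_states, T = len(states), len(obs_idx)
          delta = np.zeros((T, n_states))
          psi = np.zeros((T, n_states), dtype=int)
          delta[0] = start * emit[:, obs_idx[0]]
          for t in range(1, T):
              for j in range(n_states):
                  scores = delta[t - 1] * trans[:, j]
                  psi[t, j] = np.argmax(scores)
                  delta[t, j] = scores[psi[t, j]] * emit[j, obs_idx[t]]
          path = [int(np.argmax(delta[-1]))]
          for t in range(T - 1, 0, -1):
              path.append(int(psi[t, path[-1]]))
          return [states[i] for i in reversed(path)]

      print(viterbi(["low_ndvi", "mid_ndvi", "mid_ndvi", "high_ndvi"]))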

  18. The Importance of Responsibility in Times of Crisis

    Directory of Open Access Journals (Sweden)

    Jacob Dahl Rendtorff

    2014-06-01

    In this paper I would like to show the importance of the concept of responsibility as the foundation of ethics in times of crisis, in particular in the fields of politics and economics, in a modern civilisation marked by globalization and technological progress. I consider the concept of responsibility as the key notion for understanding ethical duty in a modern technological civilisation. We can indeed observe a moralization of the concept of responsibility going beyond a strict legal definition in terms of imputability. The paper begins by discussing the humanistic foundations of such a concept of responsibility. It treats the historical origins of responsibility and relates this concept to the concept of accountability. On the basis of this historical determination of the concept, I present the definition of the concept of responsibility as a fundamental ethical principle that has increasing importance as the foundation of the principles of governance in modern welfare states. In this context the paper discusses the extension of the concept of responsibility towards institutional or corporate responsibility, where responsibility does not only concern the responsibility of individuals but also deals with the responsibility of institutional collectivities. The paper is thus structured as follows: (1) the ethical foundation of the concept of responsibility; (2) responsibility in technological civilisation; (3) political responsibility for good governance in the welfare state; (4) social responsibility of business corporations in times of globalization; (5) conclusion and discussion: changed conditions of responsibility in modern times.

  19. Classification and Analysis of Computer Network Traffic

    DEFF Research Database (Denmark)

    Bujlow, Tomasz

    2014-01-01

    various classification modes (decision trees, rulesets, boosting, softening thresholds) regarding the classification accuracy and the time required to create the classifier. We showed how to use our VBS tool to obtain per-flow, per-application, and per-content statistics of traffic in computer networks...

  20. 42 CFR 84.52 - Respiratory hazards; classification.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Respiratory hazards; classification. 84.52 Section... SAFETY AND HEALTH RESEARCH AND RELATED ACTIVITIES APPROVAL OF RESPIRATORY PROTECTIVE DEVICES Classification of Approved Respirators; Scope of Approval; Atmospheric Hazards; Service Time § 84.52 Respiratory...

  1. Study on time characteristics of fast time response inorganic scintillator CeF3

    International Nuclear Information System (INIS)

    Hu Mengchun; Zhou Dianzhong; Guo Cun; Ye Wenying

    2003-01-01

    Cerium fluoride (CeF3) is a new kind of fast-time-response inorganic scintillator. The physical characteristics of CeF3 are well suited to the detection of domestic pulsed γ-rays. The time response of a detector composed of a phototube coupled to CeF3 was measured using a pulsed radiation source with a rise time of about 0.8 ns and a FWHM of 1.5-2.2 ns. The experimental results show that, for CeF3, the rise time is less than 2 ns, the FWHM is about 10 ns, the fall time is about 60 ns, and the average decay time constant is 20-30 ns.

  2. Conceptual question response times in Peer Instruction classrooms

    Directory of Open Access Journals (Sweden)

    Kelly Miller

    2014-08-01

    Classroom response systems are widely used in interactive teaching environments as a way to engage students by asking them questions. Previous research on the time taken by students to respond to conceptual questions has yielded insights on how students think and change conceptions. We measure the amount of time students take to respond to in-class conceptual questions [ConcepTests (CTs)] in two introductory physics courses taught using Peer Instruction and use item response theory to determine the difficulty of the CTs. We examine response time differences between correct and incorrect answers both before and after the peer discussion for CTs of varying difficulty. We also determine the relationship between response time and student performance on a standardized test of incoming physics knowledge, precourse self-efficacy, and gender. Our data reveal three results of interest. First, response time for correct answers is significantly faster than for incorrect answers, both before and after peer discussion, especially for easy CTs. Second, students with greater incoming physics knowledge and higher self-efficacy respond faster in both rounds. Third, there is no gender difference in response rate after controlling for incoming physics knowledge scores, although males register significantly more attempts before committing to a final answer than do female students. These results provide insight into effective CT pacing during Peer Instruction. In particular, in order to maintain a pace that keeps everyone engaged, students should not be given too much time to respond. When around 80% of the answers are in, the ratio of correct to incorrect responses rapidly approaches levels indicating random guessing and instructors should close the poll.

  3. A time-domain method to generate artificial time history from a given reference response spectrum

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Gang Sik [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of); Song, Oh Seop [Dept. of Mechanical Engineering, Chungnam National University, Daejeon (Korea, Republic of)

    2016-06-15

    Seismic qualification by test is widely used as a way to show the integrity and functionality of equipment that is related to the overall safety of nuclear power plants. Another means of seismic qualification is by direct integration analysis. Both approaches require a series of time histories as an input. However, in most cases, the possibility of using real earthquake data is limited. Thus, artificial time histories are widely used instead. In many cases, however, response spectra are given. Thus, most of the artificial time histories are generated from the given response spectra. Obtaining the response spectrum from a given time history is straightforward. However, the procedure for generating artificial time histories from a given response spectrum is difficult and complex to understand. Thus, this paper presents a simple time-domain method for generating a time history from a given response spectrum; the method was shown to satisfy conditions derived from nuclear regulatory guidance.
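
    To make the "straightforward" direction concrete, the sketch below computes an absolute-acceleration response spectrum from a given ground-acceleration record by stepping a damped single-degree-of-freedom oscillator through the record with Newmark average-acceleration integration. The damping ratio, sampling rate, and white-noise input are assumptions chosen only for illustration; this is not the qualification procedure itself.

      import numpy as np

      def response_spectrum(acc, dt, periods, damping=0.05):
          """Peak absolute acceleration of a damped SDOF oscillator, for each natural period."""
          beta, gamma = 0.25, 0.5                      # Newmark average-acceleration parameters
          spectrum = []
          for T in periods:
              wn = 2.0 * np.pi / T
              m, c, k = 1.0, 2.0 * damping * wn, wn ** 2
              keff = k + gamma / (beta * dt) * c + m / (beta * dt ** 2)
              u = v = 0.0
              a = -acc[0]                              # relative acceleration at t = 0
              peak = abs(a + acc[0])
              for ag in acc[1:]:
                  p = (-m * ag
                       + m * (u / (beta * dt ** 2) + v / (beta * dt) + a * (1 / (2 * beta) - 1))
                       + c * (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                              + dt * (gamma / (2 * beta) - 1) * a))
                  u_new = p / keff
                  v_new = (gamma / (beta * dt) * (u_new - u) + (1 - gamma / beta) * v
                           + dt * (1 - gamma / (2 * beta)) * a)
                  a_new = (u_new - u) / (beta * dt ** 2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
                  u, v, a = u_new, v_new, a_new
                  peak = max(peak, abs(a + ag))        # absolute acceleration of the oscillator mass
              spectrum.append(peak)
          return np.array(spectrum)

      # Invented input: 10 s of white-noise "ground motion" sampled at 100 Hz.
      rng = np.random.default_rng(1)
      ground_acc = rng.normal(0.0, 1.0, 1000)
      periods = np.linspace(0.05, 2.0, 40)
      print(response_spectrum(ground_acc, dt=0.01, periods=periods)[:5])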

  5. The Potential of Time Series Merged from Landsat-5 TM and HJ-1 CCD for Crop Classification: A Case Study for Bole and Manas Counties in Xinjiang, China

    Directory of Open Access Journals (Sweden)

    Pengyu Hao

    2014-08-01

    Time series data capture crop growth dynamics and are some of the most effective data sources for crop mapping. However, a drawback of precise crop classification at medium resolution (30 m) using multi-temporal data is that some images at crucial time periods are absent from a single sensor. In this research, a medium-resolution, 15-day time series was obtained by merging Landsat-5 TM and HJ-1 CCD data (which have similar radiometric performance in the multi-spectral bands). Subsequently, optimal temporal windows for accurate crop mapping were evaluated using an extension of the Jeffries–Matusita (JM) distance computed from the merged time series. A support vector machine (SVM) was then used to compare the classification accuracy of the optimal temporal windows and the entire time series. In addition, different training sample sizes (10% to 90% of the entire training sample in 10% increments; five repetitions for each sample size) were used to investigate the stability of the optimal temporal windows. The results showed that time series in optimal temporal windows can achieve high classification accuracies. The optimal temporal windows were robust when the training sample size was sufficiently large. However, they were not stable when the sample size was too small (i.e., less than 300) and may shift in different agro-ecosystems because of different classes. In addition, merged time series had higher temporal resolution and were more likely to comprise the optimal temporal periods than time series from single-sensor data. Therefore, the use of merged time series increased the possibility of precise crop classification.
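
    For reference, the Jeffries–Matusita separability between two classes modelled as multivariate Gaussians is commonly computed as JM = 2(1 − e^(−B)), with B the Bhattacharyya distance. The sketch below uses that textbook form to show how candidate temporal windows might be ranked; the class statistics are invented and the extension used in the paper is not reproduced here.

      import numpy as np

      def jeffries_matusita(mu1, cov1, mu2, cov2):
          """JM distance between two Gaussian class models (ranges from 0 to 2)."""
          dm = (mu1 - mu2).reshape(-1, 1)
          cov_avg = 0.5 * (cov1 + cov2)
          b = (0.125 * float(dm.T @ np.linalg.inv(cov_avg) @ dm)
               + 0.5 * np.log(np.linalg.det(cov_avg)
                              / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2))))
          return 2.0 * (1.0 - np.exp(-b))

      # Invented class statistics for two crops in a two-band temporal window.
      mu_a, cov_a = np.array([0.3, 0.6]), np.diag([0.01, 0.02])
      mu_b, cov_b = np.array([0.5, 0.4]), np.diag([0.02, 0.02])
      print(round(jeffries_matusita(mu_a, cov_a, mu_b, cov_b), 3))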

  6. A possibilistic approach for transient identification with 'don't know' response capability optimized by genetic algorithm

    International Nuclear Information System (INIS)

    Almeida, Jose Carlos S. de; Schirru, Roberto; Pereira, Claudio M.N.A.; Universidade Federal, Rio de Janeiro, RJ

    2002-01-01

    This work describes a possibilistic approach for transient identification based on the minimum centroids set method, proposed in previous work, optimized by a genetic algorithm. The idea behind this method is to split the complex classification problem into small and simple ones, so that the classification performance can be increased. In order to accomplish that, a genetic algorithm is used to learn, from realistic simulated data, the optimized time partitions for which the robustness and correctness of the classification are maximized. The use of a possibilistic classification approach yields natural and consistent classification rules, leading naturally to a good heuristic for handling the 'don't know' response in case of an unrecognized transient, which is fairly desirable in transient classification systems where safety is critical. Application of the proposed approach to a nuclear transient identification problem reveals a good capability of the genetic algorithm to learn optimized possibilistic classification rules for efficient diagnosis, including the 'don't know' response. The results obtained are shown and commented on. (author)

  7. Use of Response Time for Measuring Cognitive Ability

    Directory of Open Access Journals (Sweden)

    Patrick C. Kyllonen

    2016-11-01

    The purpose of this paper is to review some of the key literature on response time as it has played a role in cognitive ability measurement, providing a historical perspective as well as covering current research. We discuss the speed-level distinction, dimensions of speed and level in cognitive abilities frameworks, speed–accuracy tradeoff, approaches to addressing speed–accuracy tradeoff, analysis methods (particularly item response theory-based), response time models from cognitive psychology (the ex-Gaussian function and the diffusion model), and other uses of response time in testing besides ability measurement. We discuss several new methods that can be used to provide greater insight into the speed and level aspects of cognitive ability and speed–accuracy tradeoff decisions. These include item-level time limits, the use of feedback (e.g., CUSUMs), explicit scoring rules that combine speed and accuracy information (e.g., count-down timing), and cognitive psychology models. We also review some of the key psychometric advances in modeling speed and level, which combine speed and ability measurement, address speed–accuracy tradeoff, allow for distinctions between response times on items responded to correctly and incorrectly, and integrate psychometrics with information-processing modeling. We suggest that the application of these models and tools is likely to advance both the science and measurement of human abilities for theory and applications.

  8. Tongue Images Classification Based on Constrained High Dispersal Network

    Directory of Open Access Journals (Sweden)

    Dan Meng

    2017-01-01

    Computer aided tongue diagnosis has a great potential to play important roles in traditional Chinese medicine (TCM). However, the majority of the existing tongue image analyses and classification methods are based on the low-level features, which may not provide a holistic view of the tongue. Inspired by deep convolutional neural network (CNN), we propose a novel feature extraction framework called constrained high dispersal neural networks (CHDNet) to extract unbiased features and reduce human labor for tongue diagnosis in TCM. Previous CNN models have mostly focused on learning convolutional filters and adapting weights between them, but these models have two major issues: redundancy and insufficient capability in handling unbalanced sample distribution. We introduce high dispersal and local response normalization operation to address the issue of redundancy. We also add multiscale feature analysis to avoid the problem of sensitivity to deformation. Our proposed CHDNet learns high-level features and provides more classification information during training time, which may result in higher accuracy when predicting testing samples. We tested the proposed method on a set of 267 gastritis patients and a control group of 48 healthy volunteers. Test results show that CHDNet is a promising method in tongue image classification for the TCM study.

  9. A heteroscedastic generalized linear model with a non-normal speed factor for responses and response times.

    Science.gov (United States)

    Molenaar, Dylan; Bolsinova, Maria

    2017-05-01

    In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.

  10. Refinement of diagnosis and disease classification in psychiatry.

    Science.gov (United States)

    Lecrubier, Yves

    2008-03-01

    Knowledge concerning the classification of mental disorders progressed substantially with the use of DSM III-IV and ICD-10 because it was based on observed data, with precise definitions. These classifications a priori avoided generating definitions related to etiology or treatment response. They are based on a categorical approach where diagnostic entities share common phenomenological features. The modifications proposed or discussed are related to the weak validity of the classification strategy described above. (a) Disorders are supposed to be independent, but the coexistence of two or more disorders is the rule; (b) they are also supposed to be stable, yet anxiety disorders most of the time precede major depression, and for GAD the age at onset, family history, biology and symptomatology are close to those of depression. As a consequence, broader entities such as a depression-GAD spectrum, a panic-phobias spectrum and an OCD spectrum including eating disorders and pathological gambling are taken into consideration; (c) diagnostic categories use thresholds to delimit a border with normals. This creates "subthreshold" conditions. The relevance of such conditions is well documented. Measuring the presence and severity of different dimensions, independently of a threshold, will improve the relevance of the description of patients' pathology. In addition, this dimensional approach will address the problems posed by mutually exclusive diagnoses (depression and GAD, schizophrenia and depression); (d) some disorders are based on the coexistence of different dimensions. Patients may present only one set of symptoms and have different characteristics, evolution and response to treatment. An example would be negative symptoms in schizophrenia; (e) because no etiological model is available and most measures are subjective, objective measures (cognitive, biological) and progress in genetics have created important hopes. None of these measures is pathognomonic and most appear

  11. Improved prognostic classification of breast cancer defined by antagonistic activation patterns of immune response pathway modules

    International Nuclear Information System (INIS)

    Teschendorff, Andrew E; Gomez, Sergio; Arenas, Alex; El-Ashry, Dorraya; Schmidt, Marcus; Gehrmann, Mathias; Caldas, Carlos

    2010-01-01

    Elucidating the activation pattern of molecular pathways across a given tumour type is a key challenge necessary for understanding the heterogeneity in clinical response and for developing novel more effective therapies. Gene expression signatures of molecular pathway activation derived from perturbation experiments in model systems as well as structural models of molecular interactions ('model signatures') constitute an important resource for estimating corresponding activation levels in tumours. However, relatively few strategies for estimating pathway activity from such model signatures exist, and only a few studies have used activation patterns of pathways to refine molecular classifications of cancer. Here we propose a novel network-based method for estimating pathway activation in tumours from model signatures. We find that, although the pathway networks inferred from cancer expression data are highly consistent with the prior information contained in the model signatures, they also exhibit a highly modular structure, and that estimation of pathway activity is dependent on this modular structure. We apply our methodology to a panel of 438 estrogen receptor negative (ER-) and 785 estrogen receptor positive (ER+) breast cancers to infer activation patterns of important cancer related molecular pathways. We show that in ER negative basal and HER2+ breast cancer, gene expression modules reflecting T-cell helper-1 (Th1) and T-cell helper-2 (Th2) mediated immune responses play antagonistic roles as major risk factors for distant metastasis. Using Boolean interaction Cox-regression models to identify non-linear pathway combinations associated with clinical outcome, we show that simultaneous high activation of Th1 and low activation of a TGF-beta pathway module defines a subtype of particularly good prognosis and that this classification provides a better prognostic model than those based on the individual pathways. In ER+ breast cancer, we find that

  12. A new analytical method for the classification of time-location data obtained from the global positioning system (GPS).

    Science.gov (United States)

    Kim, Taehyun; Lee, Kiyoung; Yang, Wonho; Yu, Seung Do

    2012-08-01

    Although the global positioning system (GPS) has been suggested as an alternative way to determine time-location patterns, its use has been limited. The purpose of this study was to evaluate a new analytical method of classifying time-location data obtained by GPS. A field technician carried a GPS device while simulating various scripted activities and recorded all movements by the second in an activity diary. The GPS device recorded geological data once every 15 s. The daily monitoring was repeated 18 times. The time-location data obtained by the GPS were compared with the activity diary to determine selection criteria for the classification of the GPS data. The GPS data were classified into four microenvironments (residential indoors, other indoors, transit, and walking outdoors); the selection criteria used were the number of satellites used (used-NSAT), speed, and distance from the residence. The GPS data were classified as indoors when the used-NSAT was below 9. Data classified as indoors were further classified as residential indoors when the distance from the residence was less than 40 m; otherwise, they were classified as other indoors. Data classified as outdoors were further classified as being in transit when the speed exceeded 2.5 m s(-1); otherwise, they were classified as walking outdoors. The average simple percentage agreement between the time-location classifications and the activity diary was 84.3 ± 12.4%, and the kappa coefficient was 0.71. The average differences between the time diary and the GPS results were 1.6 ± 2.3 h for the time spent in residential indoors, 0.9 ± 1.7 h for the time spent in other indoors, 0.4 ± 0.4 h for the time spent in transit, and 0.8 ± 0.5 h for the time spent walking outdoors. This method can be used to determine time-activity patterns in exposure-science studies.
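
    The decision rules quoted above translate almost directly into code. The sketch below mirrors them (used-NSAT below 9 means indoors; distance from the residence below 40 m means residential indoors; speed above 2.5 m/s means in transit; otherwise walking outdoors); the record format and example values are assumptions for illustration.

      def classify_gps_point(used_nsat, speed_m_s, dist_from_home_m):
          """Assign a 15-s GPS record to one of four microenvironments (rules from the abstract)."""
          if used_nsat < 9:                            # weak satellite visibility -> indoors
              return "residential indoors" if dist_from_home_m < 40 else "other indoors"
          return "in transit" if speed_m_s > 2.5 else "walking outdoors"

      # Hypothetical records: (used-NSAT, speed in m/s, distance from residence in m).
      for record in [(6, 0.1, 12), (7, 0.3, 250), (11, 8.4, 900), (12, 1.2, 300)]:
          print(record, "->", classify_gps_point(*record))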

  13. Parallel exploitation of a spatial-spectral classification approach for hyperspectral images on RVC-CAL

    Science.gov (United States)

    Lazcano, R.; Madroñal, D.; Fabelo, H.; Ortega, S.; Salvador, R.; Callicó, G. M.; Juárez, E.; Sanz, C.

    2017-10-01

    Hyperspectral Imaging (HI) assembles high resolution spectral information from hundreds of narrow bands across the electromagnetic spectrum, thus generating 3D data cubes in which each pixel gathers the spectral information of the reflectance of every spatial pixel. As a result, each image is composed of large volumes of data, which turns its processing into a challenge, as performance requirements have been continuously tightened. For instance, new HI applications demand real-time responses. Hence, parallel processing becomes a necessity to achieve this requirement, so the intrinsic parallelism of the algorithms must be exploited. In this paper, a spatial-spectral classification approach has been implemented using a dataflow language known as RVC-CAL. This language represents a system as a set of functional units, and its main advantage is that it simplifies the parallelization process by mapping the different blocks over different processing units. The spatial-spectral classification approach aims at refining the classification results previously obtained by using a K-Nearest Neighbors (KNN) filtering process, in which both the pixel spectral value and the spatial coordinates are considered. To do so, KNN needs two inputs: a one-band representation of the hyperspectral image and the classification results provided by a pixel-wise classifier. Thus, the spatial-spectral classification algorithm is divided into three different stages: a Principal Component Analysis (PCA) algorithm for computing the one-band representation of the image, a Support Vector Machine (SVM) classifier, and the KNN-based filtering algorithm. The parallelization of these algorithms shows promising results in terms of computational time, as mapping them over different cores yields a speedup of 2.69x when using 3 cores. Consequently, experimental results demonstrate that real-time processing of hyperspectral images is achievable.

  14. A New Classification Approach Based on Multiple Classification Rules

    OpenAIRE

    Zhongmei Zhou

    2014-01-01

    A good classifier can correctly predict new data for which the class label is unknown, so it is important to construct a high accuracy classifier. Hence, classification techniques are much useful in ubiquitous computing. Associative classification achieves higher classification accuracy than some traditional rule-based classification approaches. However, the approach also has two major deficiencies. First, it generates a very large number of association classification rules, especially when t...

  15. Time response of temperature sensors using neural networks

    International Nuclear Information System (INIS)

    Santos, Roberto Carlos dos

    2010-01-01

    In a PWR nuclear power plant, the primary coolant temperature and feedwater temperature are measured using RTDs (Resistance Temperature Detectors). These RTDs typically feed the plant's control and safety systems and must, therefore, be very accurate and have good dynamic performance. The response time of RTDs is characterized by a single parameter called the plunge time constant, defined as the time it takes the sensor output to achieve 63.2 percent of its final value after a step change in temperature. Nuclear reactor service conditions are difficult to reproduce in the laboratory, and an in-situ test method called the LCSR (Loop Current Step Response) test was developed to measure the response time of RTDs remotely. From this test, the time constant of the sensor is identified by means of the LCSR transformation, which involves determining the modal time constants of the dynamic response using a nodal heat-transfer model. This calculation is not simple and requires specialized personnel. For this reason, an Artificial Neural Network has been developed to predict the time constant of an RTD from the LCSR test transient, eliminating the transformations involved in the LCSR application. A series of LCSR tests on RTDs generates the response transients of the sensors, the input data of the networks. Plunge tests are used to determine the time constants of the RTDs, the desired output of the ANN, which is trained using these sets of input/output data. This methodology was first applied to theoretical data simulating 10 RTDs with different time constant values, resulting in an average error of about 0.74%. Experimental data from three different RTDs were then used to predict the time constant, resulting in a maximum error of 3.34%. The time constant values predicted by the ANN were compared with those obtained in the traditional way, resulting in an average error of about 18%, and this shows that the network is able to predict the sensor time constant accurately. (author)

  16. Critical Evaluation of Headache Classifications.

    Science.gov (United States)

    Özge, Aynur

    2013-08-01

    Transforming a subjective sensation like headache into an objective state, and establishing a common language for a complaint that can be both a symptom and a disease in itself, have kept investigators busy for years. Each proposed recommendation has brought along a set of patients who do not meet the criteria. Even as work continued toward an almost ideal and comprehensive classification, criticisms that the systems were withdrawing from daily practice came to the fore. In this article, the classification adventure of scientists who work in the area of headache is summarized. More specifically, the two classifications made by the International Headache Society (IHS) and the point reached in relation to the third classification, which is still being worked on, are discussed together with headache subtypes. It is presented with the wish and belief that it will contribute to readers and young investigators who are interested in this subject.

  17. Progressive Classification Using Support Vector Machines

    Science.gov (United States)

    Wagstaff, Kiri; Kocurek, Michael

    2009-01-01

    An algorithm for progressive classification of data, analogous to progressive rendering of images, makes it possible to compromise between speed and accuracy. This algorithm uses support vector machines (SVMs) to classify data. An SVM is a machine learning algorithm that builds a mathematical model of the desired classification concept by identifying the critical data points, called support vectors. Coarse approximations to the concept require only a few support vectors, while precise, highly accurate models require far more support vectors. Once the model has been constructed, the SVM can be applied to new observations. The cost of classifying a new observation is proportional to the number of support vectors in the model. When computational resources are limited, an SVM of the appropriate complexity can be produced. However, if the constraints are not known when the model is constructed, or if they can change over time, a method for adaptively responding to the current resource constraints is required. This capability is particularly relevant for spacecraft (or any other real-time systems) that perform onboard data analysis. The new algorithm enables the fast, interactive application of an SVM classifier to a new set of data. The classification process achieved by this algorithm is characterized as progressive because a coarse approximation to the true classification is generated rapidly and thereafter iteratively refined. The algorithm uses two SVMs: (1) a fast, approximate one and (2) a slow, highly accurate one. New data are initially classified by the fast SVM, producing a baseline approximate classification. For each classified data point, the algorithm calculates a confidence index that indicates the likelihood that it was classified correctly in the first pass. Next, the data points are sorted by their confidence indices and progressively reclassified by the slower, more accurate SVM, starting with the items most likely to be incorrectly classified. The user
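
    A minimal sketch of the two-classifier idea described above (a fast approximate SVM plus a slower, more accurate one, with low-confidence items reclassified first), assuming scikit-learn; the choice of kernels, the confidence index based on decision-function magnitude and the fixed reclassification budget are illustrative assumptions, not the original implementation.

      # Sketch: progressive classification with a fast SVM and a slower, more accurate SVM.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.svm import LinearSVC, SVC

      X_train, y_train = make_classification(n_samples=500, n_features=20, random_state=0)
      X_new, _ = make_classification(n_samples=200, n_features=20, random_state=1)

      fast = LinearSVC().fit(X_train, y_train)                       # coarse, cheap model
      slow = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)  # accurate, expensive model

      # First pass: baseline labels plus a confidence index (distance from the margin).
      baseline = fast.predict(X_new)
      confidence = np.abs(fast.decision_function(X_new))

      # Progressive refinement: reclassify the least confident items first.
      order = np.argsort(confidence)
      refined = baseline.copy()
      budget = 50                                 # e.g. stop after 50 expensive evaluations
      refined[order[:budget]] = slow.predict(X_new[order[:budget]])
      print(np.mean(refined != baseline), "fraction of labels changed")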

  18. Talar Fractures and Dislocations: A Radiologist's Guide to Timely Diagnosis and Classification.

    Science.gov (United States)

    Melenevsky, Yulia; Mackey, Robert A; Abrahams, R Brad; Thomson, Norman B

    2015-01-01

    The talus, the second largest tarsal bone, has distinctive imaging characteristics and injury patterns. The predominantly extraosseous vascular supply of the talus predisposes it to significant injury in the setting of trauma. In addition, the lack of muscular attachments and absence of a secondary blood supply can lead to subsequent osteonecrosis. Although talar fractures account for less than 1% of all fractures, they commonly result from high-energy trauma and may lead to complications and long-term morbidity if not recognized and managed appropriately. While initial evaluation is with foot and ankle radiographs, computed tomography (CT) is often performed to evaluate the extent of the fracture, displacement, comminution, intra-articular extension, and associated injuries. Talar fractures are divided by anatomic region: head, neck, and body. Talar head fractures can be treated conservatively if nondisplaced, warranting careful radiographic and CT evaluation to assess rotation, displacement, and extension into the neck. The modified Hawkins-Canale classification of talar neck fractures is most commonly used due to its simplicity, usefulness in guiding treatment, and prognostic value, as it correlates associated malalignment with risk of subsequent osteonecrosis. Isolated talar body fractures may be more common than previously thought. The Sneppen classification further divides talar body fractures into osteochondral talar dome, lateral and posterior process, and shear and crush comminuted central body fractures. Crush comminuted central body fractures carry a poor prognosis due to nonanatomic reduction, bone loss, and subsequent osteonecrosis. Lateral process fractures can be radiographically occult and require a higher index of suspicion for successful diagnosis. Subtalar dislocations are often accompanied by fractures, necessitating postreduction CT. Familiarity with the unique talar anatomy and injury patterns is essential for radiologists to facilitate

  19. Texture classification by texton: statistical versus binary.

    Directory of Open Access Journals (Sweden)

    Zhenhua Guo

    Full Text Available Using statistical textons for texture classification has shown great success recently. The maximal response 8 (Statistical_MR8), image patch (Statistical_Joint) and locally invariant fractal (Statistical_Fractal) are typical statistical texton algorithms and state-of-the-art texture classification methods. However, there are two limitations when using these methods. First, a training stage is needed to build a texton library, so the recognition accuracy is highly dependent on the training samples; second, during feature extraction, a local feature is assigned to a texton by searching for the nearest texton in the whole library, which is time consuming when the library is large and the feature dimension is high. To address these two issues, in this paper three binary texton counterpart methods are proposed: Binary_MR8, Binary_Joint, and Binary_Fractal. These methods do not require any training step but encode local features into a binary representation directly. The experimental results on the CUReT, UIUC and KTH-TIPS databases show that binary textons can achieve sound results with fast feature extraction, especially when the image size is not large and the image quality is not poor.
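
    To make the contrast between statistical and binary textons concrete, a small sketch under assumed settings: statistical textons are learned by clustering filter responses and assigned by nearest-neighbour search, while the binary variant simply thresholds each response; the 8-dimensional responses and the k-means library are illustrative stand-ins, not the exact MR8 pipeline.

      # Sketch: statistical texton assignment (nearest cluster centre) vs. binary encoding.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      responses = rng.normal(size=(10000, 8))    # pretend 8 filter responses per pixel

      # Statistical textons: learn a library, then assign each pixel to its nearest texton.
      library = KMeans(n_clusters=32, n_init=4, random_state=0).fit(responses)
      statistical_codes = library.predict(responses)           # needs training + search

      # Binary textons: threshold each response at zero and pack the bits into a code.
      bits = (responses > 0).astype(np.uint8)
      binary_codes = bits.dot(1 << np.arange(8))                # training-free, direct encoding

      print(statistical_codes[:5], binary_codes[:5])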

  20. Functional classification of pulmonary hypertension in children: Report from the PVRI pediatric taskforce, Panama 2011.

    Science.gov (United States)

    Lammers, Astrid E; Adatia, Ian; Cerro, Maria Jesus Del; Diaz, Gabriel; Freudenthal, Alexandra Heath; Freudenthal, Franz; Harikrishnan, S; Ivy, Dunbar; Lopes, Antonio A; Raj, J Usha; Sandoval, Julio; Stenmark, Kurt; Haworth, Sheila G

    2011-08-02

    The members of the Pediatric Task Force of the Pulmonary Vascular Research Institute (PVRI) were aware of the need to develop a functional classification of pulmonary hypertension in children. The proposed classification follows the same pattern and uses the same criteria as the Dana Point pulmonary hypertension specific classification for adults. Modifications were necessary for children, since age, physical growth and maturation influences the way in which the functional effects of a disease are expressed. It is essential to encapsulate a child's clinical status, to make it possible to review progress with time as he/she grows up, as consistently and as objectively as possible. Particularly in younger children we sought to include objective indicators such as thriving, need for supplemental feeds and the record of school or nursery attendance. This helps monitor the clinical course of events and response to treatment over the years. It also facilitates the development of treatment algorithms for children. We present a consensus paper on a functional classification system for children with pulmonary hypertension, discussed at the Annual Meeting of the PVRI in Panama City, February 2011.

  1. Investigations on response time of magnetorheological elastomer under compression mode

    Science.gov (United States)

    Zhu, Mi; Yu, Miao; Qi, Song; Fu, Jie

    2018-05-01

    For efficient, fast control of a vibration system with a magnetorheological elastomer (MRE)-based smart device, the response time of the MRE material is the key parameter that directly affects the control performance of the vibration system. For a step coil-current excitation, this paper proposes a Maxwell behavior model with time constant λ to describe the normal force response of the MRE, and the response time of the MRE was extracted by separating out the coil response time. In addition, the transient responses of the MRE under compression mode were experimentally investigated, and the effects of (i) applied current, (ii) particle distribution and (iii) compressive strain on the response time of the MRE were addressed. The results revealed that the three factors can affect the response characteristic of the MRE quite significantly. Besides its intrinsic importance for response evaluation and the effective design of MRE devices, this study may contribute to the optimal design of controllers for MRE control systems.
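
    A small sketch of extracting a response time constant from a measured step response by fitting a first-order relaxation model; the synthetic normal-force data, noise level and model form are assumptions for illustration, not the paper's separation of the coil dynamics.

      # Sketch: fit a first-order rise F(t) = F_inf * (1 - exp(-t/lambda)) to a step response.
      import numpy as np
      from scipy.optimize import curve_fit

      def step_model(t, f_inf, lam):
          return f_inf * (1.0 - np.exp(-t / lam))

      rng = np.random.default_rng(1)
      t = np.linspace(0.0, 0.5, 500)             # 0.5 s of data, assumed sampling
      f_measured = step_model(t, 12.0, 0.04) + rng.normal(scale=0.1, size=t.size)

      (f_inf_hat, lam_hat), _ = curve_fit(step_model, t, f_measured, p0=[10.0, 0.05])
      print(f"steady-state force ~ {f_inf_hat:.2f} N, time constant ~ {lam_hat*1000:.1f} ms")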

  2. Rice-planted area extraction by time series analysis of ENVISAT ASAR WS data using a phenology-based classification approach: A case study for Red River Delta, Vietnam

    Science.gov (United States)

    Nguyen, D.; Wagner, W.; Naeimi, V.; Cao, S.

    2015-04-01

    Recent studies have shown the potential of Synthetic Aperture Radar (SAR) for mapping rice fields and some other vegetation types. For rice field classification, conventional classification techniques have mostly been used, including manual threshold-based and supervised classification approaches. The challenge of the threshold-based approach is to find acceptable thresholds to be used for each individual SAR scene. Furthermore, the influence of the local incidence angle on backscatter hinders using a single threshold for the entire scene. Similarly, the supervised classification approach requires different training samples for different output classes. In the case of rice crop, supervised classification using temporal data requires different training datasets to perform the classification procedure, which might lead to inconsistent mapping results. In this study we present an automatic method to identify rice crop areas by extracting phenological parameters after performing an empirical regression-based normalization of the backscatter to a reference incidence angle. The method is evaluated in the Red River Delta (RRD), Vietnam using the time series of ENVISAT Advanced SAR (ASAR) Wide Swath (WS) mode data. The results of the rice mapping algorithm compared to the reference data indicate Completeness (User accuracy), Correctness (Producer accuracy) and Quality (Overall accuracy) of 88.8%, 92.5% and 83.9%, respectively. The total area of the classified rice fields corresponds to the total rice cultivation area given by the official statistics in Vietnam (R2 of 0.96). The results indicate that applying a phenology-based classification approach to backscatter time series normalized to an optimal reference incidence angle can achieve high classification accuracies. In addition, the method is not only useful for large-scale early mapping of rice fields in the Red River Delta using current and future C-band Sentinel-1A&B backscatter data but also might be applied for other rice
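
    A sketch of the empirical, regression-based incidence-angle normalization mentioned above: fit a linear dependence of backscatter (dB) on local incidence angle and shift each observation to a reference angle; the slope, the 30-degree reference angle and the synthetic data are assumptions for illustration.

      # Sketch: normalize SAR backscatter (dB) to a reference incidence angle via linear regression.
      import numpy as np

      rng = np.random.default_rng(2)
      incidence = rng.uniform(20.0, 45.0, size=1000)                  # local incidence angles (deg)
      sigma0_db = -8.0 - 0.15 * incidence + rng.normal(0, 0.5, 1000)  # synthetic backscatter (dB)

      slope, intercept = np.polyfit(incidence, sigma0_db, deg=1)      # empirical angular dependence
      theta_ref = 30.0                                                # assumed reference angle
      sigma0_norm = sigma0_db - slope * (incidence - theta_ref)       # shift to the reference angle

      print(f"fitted slope: {slope:.3f} dB/deg; normalized mean: {sigma0_norm.mean():.2f} dB")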

  3. Real-time odor discrimination using a bioelectronic sensor array based on the insect electroantennogram

    International Nuclear Information System (INIS)

    Myrick, A J; Hetling, J R; Park, K-C; Baker, T C

    2008-01-01

    Current trends in artificial nose research are strongly influenced by knowledge of biological olfactory systems. Insects have evolved over millions of years to detect and maneuver toward a food source or mate, or away from predators. The insect olfactory system is able to identify volatiles on a time scale that matches their ability to maneuver. Here, biological olfactory sense organs, insect antennae, have been exploited in a hybrid-device biosensor, demonstrating the ability to identify individual strands of odor in a plume passing over the sensor on a sub-second time scale. A portable system was designed to utilize the electrophysiological responses recorded from a sensor array composed of male or female antennae from four or eight different species of insects (a multi-channel electroantennogram, EAG). A computational analysis strategy that allows discrimination between odors in real time is described in detail. Following a training period, both semi-parametric and k-nearest neighbor (k-NN) classifiers with the ability to discard ambiguous responses are applied toward the classification of up to eight odors. EAG responses to individual strands in an odor plume are classified or discarded as ambiguous with a delay (sensor response to classification report) on the order of 1 s. The dependence of classification error rate on several parameters is described. Finally, the performance of the approach is compared to that of a minimal conditional risk classifier
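
    A minimal sketch of a k-NN classifier that discards ambiguous responses, in the spirit of the reject option described above; the feature vectors, neighbour count and agreement threshold are assumptions for illustration, not the EAG processing chain.

      # Sketch: k-NN odor classification with an ambiguity reject option.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.neighbors import KNeighborsClassifier

      X_train, y_train = make_classification(n_samples=400, n_features=16, n_classes=4,
                                             n_informative=8, random_state=0)
      X_new, _ = make_classification(n_samples=100, n_features=16, n_classes=4,
                                     n_informative=8, random_state=1)

      knn = KNeighborsClassifier(n_neighbors=7).fit(X_train, y_train)
      proba = knn.predict_proba(X_new)           # fraction of the 7 neighbours in each class
      labels = knn.classes_[proba.argmax(axis=1)].astype(object)

      threshold = 5 / 7                          # require at least 5 of 7 neighbours to agree
      ambiguous = proba.max(axis=1) < threshold
      labels[ambiguous] = "rejected"
      print(f"{ambiguous.sum()} of {len(labels)} responses discarded as ambiguous")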

  4. The impact of weight classification on safety: timing steps to adapt to external constraints

    Science.gov (United States)

    Gill, S.V.

    2015-01-01

    Objectives: The purpose of the current study was to evaluate how weight classification influences safety by examining adults' ability to meet a timing constraint: walking to the pace of an audio metronome. Methods: With a cross-sectional design, walking parameters were collected as 55 adults with normal (n=30) and overweight (n=25) body mass index scores walked to slow, normal, and fast audio metronome paces. Results: Between-group comparisons showed that at the fast pace, those with overweight body mass index (BMI) had longer double limb support and stance times and slower cadences than the normal weight group (all ps<0.05). Comparisons across the metronome paces revealed that participants who were overweight had higher cadences at the slow and fast paces (all ps<0.05). Conclusions: Findings suggest that those with overweight BMI alter their gait to maintain biomechanical stability. Understanding how excess weight influences gait adaptation can inform interventions to improve safety for individuals with obesity. PMID:25730658

  5. Cognitive Reflection, Decision Biases, and Response Times

    Directory of Open Access Journals (Sweden)

    Carlos Alos-Ferrer

    2016-09-01

    Full Text Available We present novel evidence on decision times and personality traits in standard questions from the decision-making literature where responses are relatively slow (medians around half a minute or above). To this end, we measured decision times in a number of incentivized, framed items (decisions from description) including the Cognitive Reflection Test, two additional questions following the same logic, and a number of classic questions used to study decision biases in probability judgments (base-rate neglect, the conjunction fallacy, and the ratio bias). All questions create a conflict between an intuitive process and more deliberative thinking. For each item, we then created a non-conflict version by either making the intuitive impulse correct (resulting in an alignment question), shutting it down (creating a neutral question), or making it dominant (creating a heuristic question). For CRT questions, the differences in decision times are as predicted by dual-process theories, with alignment and heuristic variants leading to faster responses and neutral questions to slower responses than the original, conflict questions. For decision biases (where responses are slower), evidence is mixed. To explore the possible influence of personality factors on both choices and decision times, we used standard personality scales including the Rational-Experiential Inventory and the Big Five, and used them as controls in regression analysis.

  6. Time for a paradigm shift in the classification of muscle injuries

    Directory of Open Access Journals (Sweden)

    Bruce Hamilton

    2017-09-01

    Full Text Available Muscle injuries remain one of the most common injuries in sport, yet despite this, there is little consensus on how to either effectively describe or determine the prognosis of a specific muscle injury. Numerous approaches to muscle injury classification and grading have been applied over the last century, but over the last decade the limitations of historic approaches have been recognized. As a consequence, in the past 10 years, clinical research groups have begun to question the historic approaches and reconsider the way muscle injuries are classified and described. Using a narrative approach, this manuscript describes several of the most recent attempts to classify and grade muscle injuries and highlights the relative strengths and weaknesses of each system. While each of the new classification and grading systems has strengths, there remains little consensus on a system that is both comprehensive and evidence based. Few of the currently identified features within the grading systems have relevance to accurately determining prognosis.

  7. A Phenology-Based Classification of Time-Series MODIS Data for Rice Crop Monitoring in Mekong Delta, Vietnam

    Directory of Open Access Journals (Sweden)

    Nguyen-Thanh Son

    2013-12-01

    Full Text Available Rice crop monitoring is an important activity for crop management. This study aimed to develop a phenology-based classification approach for the assessment of rice cropping systems in the Mekong Delta, Vietnam, using Moderate Resolution Imaging Spectroradiometer (MODIS) data. The data were processed from December 2000 to December 2012, using empirical mode decomposition (EMD), in three main steps: (1) data pre-processing to construct the smooth MODIS enhanced vegetation index (EVI) time-series data; (2) rice crop classification; and (3) accuracy assessment. The comparisons between the classification maps and the ground reference data indicated overall accuracies and Kappa coefficients, respectively, of 81.4% and 0.75 for 2002, 80.6% and 0.74 for 2006, and 85.5% and 0.81 for 2012. Comparisons between the MODIS-derived rice area and rice area statistics showed slight overestimation, with a relative error in area (REA) of 0.9–15.9%. There was, however, a close correlation between the two datasets (R2 ≥ 0.89). From 2001 to 2012, the area of triple-cropped rice increased approximately 31.6%, while those of single-cropped rain-fed rice, double-cropped irrigated rice and double-cropped rain-fed rice decreased roughly 5.0%, 19.2% and 7.4%, respectively. This study demonstrates the validity of such an approach for rice-crop monitoring with MODIS data and could be transferable to other regions.
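
    A short sketch of the accuracy-assessment step reported above, computing overall accuracy and the Kappa coefficient from a confusion matrix; the example matrix is invented for illustration and does not reproduce the study's figures.

      # Sketch: overall accuracy and Cohen's kappa from a classification confusion matrix.
      import numpy as np

      # rows = reference classes, columns = mapped classes (invented counts)
      cm = np.array([[80,  5,  3],
                     [ 6, 70,  4],
                     [ 2,  8, 60]], dtype=float)

      n = cm.sum()
      overall_accuracy = np.trace(cm) / n
      expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
      kappa = (overall_accuracy - expected) / (1.0 - expected)
      print(f"overall accuracy = {overall_accuracy:.3f}, kappa = {kappa:.3f}")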

  8. Timing criteria for supplemental BWR emergency response equipment

    International Nuclear Information System (INIS)

    Bickel, John H.

    2015-01-01

    The Great Tohoku Earthquake and subsequent tsunami represented a double failure event which destroyed offsite power connections to the Fukushima-Daiichi site and then destroyed the on-site electrical systems needed to run decay heat removal systems. The accident could have been mitigated had there been supplemental portable battery chargers, supplemental pumps, and in-place piping connections to provide alternate decay heat removal. In response to this event, two national response centers in the USA, one in Memphis, Tennessee, and another in Phoenix, Arizona, will begin operation. They will be able to dispatch supplemental emergency response equipment to any nuclear plant in the U.S. within 24 hours. In order to define requirements for supplemental nuclear power plant emergency response equipment maintained onsite vs. in a regional support center it is necessary to confirm: (a) the earliest time such equipment might be needed depending on the specific scenario, (b) the nominal time to move the equipment from a storage location either on-site or within the region of a nuclear power plant, and (c) the time required to connect the supplemental equipment in order to use it. This paper describes an evaluation process for a BWR-4 with a Mark I containment starting with: (a) severe accident simulation to define best estimate times available for recovery based on the specific scenario, (b) identification of the key supplemental response equipment needed at specific times to accomplish recovery of key safety functions, and (c) evaluation of what types of equipment should be warehoused on-site vs. in regional response centers. (authors)

  9. Characterisation and classification of RISØ P2546 cup anemometer

    DEFF Research Database (Denmark)

    Friis Pedersen, Troels

    2003-01-01

    The characteristics of the RISØ P2546 cup anemometer were investigated in detail by wind tunnel and laboratory tests. The characteristics include accredited calibration, tilt response measurements for tilt angles between -40° and 40°, gust response measurements at 8 m/s and turbulence intensities of 10%, 16% and 23%, step response measurements at step wind speeds of 3.7, 8, 11.9 and 15.2 m/s, measurement of torque characteristics at 8 m/s, rotor inertia measurements and measurements of friction of bearings at temperatures from -20°C to 40°C. Characteristics were fitted to a time domain cup anemometer model. The characteristics were transformed into the CLASSCUP classification scheme, and were related to the cup anemometer requirements in the Danish certification system and in the IEC 61400-121 Committee Draft.

  10. Characterisation and classification of RISØ P2546 cup anemometer

    DEFF Research Database (Denmark)

    Friis Pedersen, Troels

    2004-01-01

    The characteristics of the RISØ P2546 cup anemometer were investigated in detail by wind tunnel and laboratory tests. The characteristics include accredited calibration, tilt response measurements for tilt angles between -40° and 40°, gust response measurements at 8 m/s and turbulence intensities of 10%, 16% and 23%, step response measurements at step wind speeds of 3.7, 8, 11.9 and 15.2 m/s, measurement of torque characteristics at 8 m/s, rotor inertia measurements and measurements of friction of bearings at temperatures from -20°C to 40°C. Characteristics were fitted to a time domain cup anemometer model. The characteristics were transformed into the CLASSCUP classification scheme, and were related to the cup anemometer requirements in the Danish certification system and in the IEC 61400-121 Committee Draft.

  11. Formalization of the classification pattern: survey of classification modeling in information systems engineering.

    Science.gov (United States)

    Partridge, Chris; de Cesare, Sergio; Mitchell, Andrew; Odell, James

    2018-01-01

    Formalization is becoming more common in all stages of the development of information systems, as a better understanding of its benefits emerges. Classification systems are ubiquitous, no more so than in domain modeling. The classification pattern that underlies these systems provides a good case study of the move toward formalization in part because it illustrates some of the barriers to formalization, including the formal complexity of the pattern and the ontological issues surrounding the "one and the many." Powersets are a way of characterizing the (complex) formal structure of the classification pattern, and their formalization has been extensively studied in mathematics since Cantor's work in the late nineteenth century. One can use this formalization to develop a useful benchmark. There are various communities within information systems engineering (ISE) that are gradually working toward a formalization of the classification pattern. However, for most of these communities, this work is incomplete, in that they have not yet arrived at a solution with the expressiveness of the powerset benchmark. This contrasts with the early smooth adoption of powerset by other information systems communities to, for example, formalize relations. One way of understanding the varying rates of adoption is recognizing that the different communities have different historical baggage. Many conceptual modeling communities emerged from work done on database design, and this creates hurdles to the adoption of the high level of expressiveness of powersets. Another relevant factor is that these communities also often feel, particularly in the case of domain modeling, a responsibility to explain the semantics of whatever formal structures they adopt. This paper aims to make sense of the formalization of the classification pattern in ISE and surveys its history through the literature, starting from the relevant theoretical works of the mathematical literature and gradually shifting focus
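
    As a concrete aside on the powerset benchmark mentioned above, a tiny sketch that enumerates the powerset of a small set of individuals, the structure used to characterize how a classification pattern groups individuals into classes; the example set is invented.

      # Sketch: enumerate the powerset of a small set of individuals.
      from itertools import chain, combinations

      def powerset(items):
          """All subsets of items, from the empty set up to the full set."""
          items = list(items)
          return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

      individuals = {"alpha", "beta", "gamma"}          # invented individuals
      classes = list(powerset(individuals))             # every possible class over them
      print(len(classes), "candidate classes")          # 2**3 = 8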

  12. Learning classification models with soft-label information.

    Science.gov (United States)

    Nguyen, Quang; Valizadegan, Hamed; Hauskrecht, Milos

    2014-01-01

    Learning of classification models in medicine often relies on data labeled by a human expert. Since labeling of clinical data may be time-consuming, finding ways of alleviating the labeling costs is critical for our ability to automatically learn such models. In this paper we propose a new machine learning approach that is able to learn improved binary classification models more efficiently by refining the binary class information in the training phase with soft labels that reflect how strongly the human expert feels about the original class labels. Two types of methods that can learn improved binary classification models from soft labels are proposed. The first relies on probabilistic/numeric labels, the other on ordinal categorical labels. We study and demonstrate the benefits of these methods for learning an alerting model for heparin-induced thrombocytopenia. The experiments are conducted on data from 377 patient instances labeled by three different human experts. The methods are compared using the area under the receiver operating characteristic curve (AUC) score. Our AUC results show that the new approach is capable of learning classification models more efficiently compared to traditional learning methods. The improvement in AUC is most remarkable when the number of examples we learn from is small. A new classification learning framework that lets us learn from auxiliary soft-label information provided by a human expert is a promising new direction for learning classification models from expert labels, reducing the time and cost needed to label data.
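
    A minimal sketch of one way to use probabilistic soft labels when training a binary classifier: treat each expert confidence as a sample weight on both possible hard labels. This weighting scheme and the logistic regression model are assumptions for illustration, not the specific methods proposed in the record.

      # Sketch: learn a binary classifier from probabilistic soft labels via sample weighting.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression

      X, y_hard = make_classification(n_samples=300, n_features=10, random_state=0)
      rng = np.random.default_rng(0)
      # Soft label = expert's probability that the instance is positive (simulated here).
      soft = np.clip(y_hard * 0.8 + rng.normal(0.1, 0.1, size=y_hard.size), 0.0, 1.0)

      # Duplicate every instance with both labels, weighted by the soft label mass.
      X_dup = np.vstack([X, X])
      y_dup = np.concatenate([np.ones_like(y_hard), np.zeros_like(y_hard)])
      w_dup = np.concatenate([soft, 1.0 - soft])

      clf = LogisticRegression(max_iter=1000).fit(X_dup, y_dup, sample_weight=w_dup)
      print("training accuracy vs. hard labels:", clf.score(X, y_hard))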

  13. Validation of a simple response-time measure of listening effort

    NARCIS (Netherlands)

    Pals, Carina; Sarampalis, Anastasios; van Rijn, Hedderik; Başkent, Deniz

    This study compares two response-time measures of listening effort that can be combined with a clinical speech test for a more comprehensive evaluation of the total listening experience; verbal response times to auditory stimuli (RTaud) and response times to a visual task (RTsvis) in a dual-task

  14. Nurses' perception about risk classification in an emergency service

    Directory of Open Access Journals (Sweden)

    Cristiane Chaves de Souza

    2014-04-01

    Full Text Available Objective. To understand how nurses perceive the performance of risk classification in an emergency service. Methodology. In this qualitative study, 11 nurses were included who had at least two months of experience in the risk classification of patients visiting the emergency service. Semistructured interviews were used to collect the information. The data were collected between August and December 2011. For data analysis, Bardin's theoretical framework was used. Results. The nurses in the study consider risk classification a work organization instrument that permits closer contact between nurses and patients. The nursing skills needed for risk classification were identified: knowledge about the scale used, clinical perspective, patience and agility. The availability of risk classification scales was the main facilitator of this work. The main difficulties were the disorganization of the care network and the health team's lack of knowledge of the protocol. Conclusion. Risk classification offers an opportunity for professional autonomy to the extent that the nurse is the main party responsible for regulating care at the entry door of the emergency services.

  15. Value of real-time shear wave elastography in evaluating classification of liver fibrosis: a Meta-analysis

    Directory of Open Access Journals (Sweden)

    WU Yue

    2017-09-01

    Full Text Available Objective: To investigate the diagnostic value of the real-time shear wave elastography (SWE) technique in evaluating the classification of liver fibrosis. Methods: PubMed, CNKI, CBM, VIP, and Wanfang Data were searched for Chinese and English articles on SWE for evaluating the classification of liver fibrosis published from January 2010 to December 2016, and these articles were screened and evaluated. Meta-disc 1.4 software was used for the meta-analysis of the data in the articles included. Results: A total of 11 English articles were included, with 1560 cases in total. In the ≥F2 group, SWE had a pooled sensitivity of 0.85 (95% confidence interval [CI]: 0.82-0.87), a specificity of 0.79 (95% CI: 0.76-0.82), and a diagnostic odds ratio (DOR) of 30.81 (95% CI: 16.55-57.34). In the ≥F3 group, SWE had a pooled sensitivity of 0.87 (95% CI: 0.84-0.91), a specificity of 0.84 (95% CI: 0.82-0.87), and a DOR of 41.45 (95% CI: 18.25-94.45). In the F4 group, SWE had a pooled sensitivity of 0.88 (95% CI: 0.83-0.91), a specificity of 0.91 (95% CI: 0.89-0.92), and a DOR of 67.18 (95% CI: 30.03-150.31). The areas under the receiver operating characteristic curve for these three groups were 0.9147, 0.9223, and 0.9520, respectively. Conclusion: SWE has a high diagnostic value in evaluating the classification of liver fibrosis and can be used to determine liver fibrosis stage in clinical practice.
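
    A small numerical aside on the diagnostic odds ratio (DOR) used above: it combines sensitivity and specificity as the ratio of the positive to the negative likelihood ratio. Note that in a meta-analysis the pooled DOR is estimated across studies and need not equal the value implied by the pooled sensitivity and specificity; the sketch below simply applies the formula to a single illustrative study.

      # Sketch: diagnostic odds ratio from the sensitivity and specificity of a single study.
      def diagnostic_odds_ratio(sensitivity: float, specificity: float) -> float:
          positive_lr = sensitivity / (1.0 - specificity)   # likelihood ratio of a positive test
          negative_lr = (1.0 - sensitivity) / specificity   # likelihood ratio of a negative test
          return positive_lr / negative_lr

      print(round(diagnostic_odds_ratio(0.85, 0.79), 1))    # illustrative values, not pooled results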

  16. Quantitative Classification of Quartz by Laser Induced Breakdown Spectroscopy in Conjunction with Discriminant Function Analysis

    Directory of Open Access Journals (Sweden)

    A. Ali

    2016-01-01

    Full Text Available A responsive laser induced breakdown spectroscopic system was developed and improved for use as a sensor for the classification of quartz samples on the basis of the trace elements present in the acquired samples. Laser induced breakdown spectroscopy (LIBS) in conjunction with discriminant function analysis (DFA) was applied for the classification of five different types of quartz samples. The quartz plasmas were produced at ambient pressure using an Nd:YAG laser at its fundamental harmonic mode (1064 nm). We optimized the detection system by finding a suitable delay time for the laser excitation. This is the first study where the developed technique (LIBS+DFA) was successfully employed to probe and confirm the elemental composition of quartz samples.
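
    A small sketch of the discriminant-function step: linear discriminant analysis applied to emission-line intensities to separate sample classes; the synthetic five-class "spectra" and scikit-learn's LinearDiscriminantAnalysis are illustrative stand-ins for the paper's DFA of LIBS spectra.

      # Sketch: discriminant function analysis (LDA) on synthetic emission-line intensities.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)
      n_per_class, n_lines, n_classes = 40, 12, 5           # five quartz types, 12 line intensities
      means = rng.normal(0, 1, size=(n_classes, n_lines))   # class-specific mean intensities
      X = np.vstack([rng.normal(m, 0.5, size=(n_per_class, n_lines)) for m in means])
      y = np.repeat(np.arange(n_classes), n_per_class)

      dfa = LinearDiscriminantAnalysis()
      scores = cross_val_score(dfa, X, y, cv=5)
      print(f"cross-validated accuracy: {scores.mean():.2f}")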

  17. Mapping of the Universe of Knowledge in Different Classification Schemes

    Directory of Open Access Journals (Sweden)

    M. P. Satija

    2017-06-01

    Full Text Available Given the variety of approaches to mapping the universe of knowledge that have been presented and discussed in the literature, the purpose of this paper is to systematize their main principles and their applications in the major general modern library classification schemes. We conducted an analysis of the literature on classification and the main classification systems, namely Dewey/Universal Decimal Classification, Cutter's Expansive Classification, Subject Classification of J.D. Brown, Colon Classification, Library of Congress Classification, Bibliographic Classification, Rider's International Classification, Bibliothecal Bibliographic Klassification (BBK), and Broad System of Ordering (BSO). We conclude that the arrangement of the main classes can be done following four principles that are not mutually exclusive: ideological principle, social purpose principle, scientific order, and division by discipline. The paper provides examples and analysis of each system. We also conclude that as knowledge is ever-changing, classifications also change and present a different structure of knowledge depending upon the society and time of their design.

  18. Hazard classification of environmental restoration activities at the INEL

    International Nuclear Information System (INIS)

    Peatross, R.G.

    1996-04-01

    The following documents require that a hazard classification be prepared for all activities for which the US Department of Energy (DOE) has assumed environmental, safety, and health responsibility: DOE Order 5481.1B, Safety Analysis and Review System, and DOE Order 5480.23, Nuclear Safety Analysis Reports. A hazard classification defines the level of hazard posed by an operation or activity, assuming an unmitigated release of radioactive and nonradioactive hazardous material. For environmental restoration activities, the release threshold criteria presented in Hazard Baseline Documentation (DOE-EM-STD-5502-94) are used to determine classifications, such as Radiological, Nonnuclear, and Other Industrial facilities. Based upon DOE-EM-STD-5502-94, environmental restoration activities in all but one of the sites addressed by the scope of this classification (see Section 2) can be classified as "Other Industrial Facility". DOE-EM-STD-5502-94 states that a Health and Safety Plan and compliance with the applicable Occupational Safety and Health Administration (OSHA) standards are sufficient safety controls for this classification.

  19. Classification and Monitoring of Reed Belts Using Dual-Polarimetric TerraSAR-X Time Series

    Directory of Open Access Journals (Sweden)

    Iris Heine

    2016-06-01

    Full Text Available Synthetic aperture radar polarimetry (PolSAR) and polarimetric decomposition techniques have proven to be useful tools for wetland mapping. In this study we classify reed belts and monitor their phenological changes at a natural lake in northeastern Germany using dual-co-polarized (HH, VV) TerraSAR-X time series. The time series comprises 19 images, acquired between August 2014 and May 2015, in ascending and descending orbit. We calculated different polarimetric indices using the HH and VV intensities, the dual-polarimetric coherency matrix including dominant and mean alpha scattering angles, and entropy and anisotropy (normalized eigenvalue difference), as well as combinations of entropy and anisotropy, for the analysis of the scattering scenarios. The image classifications were performed with the random forest classifier and validated with high-resolution digital orthophotos. The time series analysis of the reed belts revealed significant seasonal changes for the double-bounce-sensitive parameters (intensity ratio HH/VV and intensity difference HH-VV), the co-polarimetric coherence phase, the dominant and mean alpha scattering angles, and the dual-polarimetric coherence (amplitude), anisotropy, entropy, and anisotropy-entropy combinations; whereas in summer dense leaves cause volume scattering, in winter, after the leaves have fallen, the reed stems cause predominantly double-bounce scattering. Our study showed that the five most important parameters for the classification of reed are the intensity difference HH-VV, the mean alpha scattering angle, the intensity ratio HH/VV, and the coherence (phase). Due to the better separation of reed and other vegetation (deciduous forest, coniferous forest, meadow), winter acquisitions are preferred for the mapping of reed. Multi-temporal stacks of winter images performed better than summer ones. The combination of ascending and descending images also improved the result, as it reduces the influence of the sensor
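
    A compact sketch of the classification step: a random forest trained on multi-temporal dual-pol features such as the HH/VV ratio and HH-VV difference; the synthetic feature stack and the invented class labels are illustrative assumptions and do not reproduce the TerraSAR-X processing.

      # Sketch: random forest classification of pixels from multi-temporal dual-pol features.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(4)
      n_pixels, n_dates = 5000, 19
      hh = rng.gamma(2.0, 1.0, size=(n_pixels, n_dates))   # synthetic HH intensities
      vv = rng.gamma(2.0, 1.0, size=(n_pixels, n_dates))   # synthetic VV intensities
      features = np.hstack([hh / vv, hh - vv])             # ratio and difference per date
      labels = rng.integers(0, 4, size=n_pixels)           # invented classes (reed, forest, meadow, water)

      X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=0)
      rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
      # With random labels the held-out accuracy is near chance; real reference data
      # (e.g. digitized orthophotos) would replace `labels` in practice.
      print("held-out accuracy:", round(rf.score(X_te, y_te), 3))
      print("most important feature index:", int(np.argmax(rf.feature_importances_)))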

  20. Classification of dry-cured hams according to the maturation time using near infrared spectra and artificial neural networks.

    Science.gov (United States)

    Prevolnik, M; Andronikov, D; Žlender, B; Font-i-Furnols, M; Novič, M; Škorjanc, D; Čandek-Potokar, M

    2014-01-01

    An attempt was made to classify dry-cured hams according to maturation time on the basis of near infrared (NIR) spectra. The study comprised 128 samples of biceps femoris (BF) muscle from dry-cured hams matured for 10 (n=32), 12 (n=32), 14 (n=32) or 16 months (n=32). Samples were minced and scanned in the wavelength range from 400 to 2500 nm using a NIR System model 6500 spectrometer (Silver Spring, MD, USA). Spectral data were used for i) splitting the samples into training and test sets using 2D Kohonen artificial neural networks (ANN) and ii) construction of classification models using counter-propagation ANN (CP-ANN). Different models were tested, and the one selected was based on the lowest percentage of misclassified test samples (external validation). Overall correctness of the classification was 79.7%, which demonstrates the practical relevance of using NIR spectroscopy and ANN for dry-cured ham processing control. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Characterisation and classification of RISOe P2546 cup anemometer

    Energy Technology Data Exchange (ETDEWEB)

    Friis Pedersen, T.

    2004-03-01

    The characteristics of the RISOe P2546 cup anemometer were investigated in detail, and all data are presented in figures and tables. The characteristics include: wind tunnel calibrations, including an accredited calibration; tilt response measurements for tilt angles from -40 deg. to 40 deg.; gust response measurements at 8 m/s, 10.5 m/s and 13 m/s and turbulence intensities of 10%, 16% and 23%; step response measurements at step wind speeds of 4, 8, 12 and 15 m/s; measurement of torque characteristics at 8 m/s; rotor inertia measurements and measurements of friction of bearings at temperatures from -20 deg. C to 40 deg. C. The characteristics are fitted to a time domain cup anemometer model, and the cup anemometer is put into the CLASSCUP classification scheme. The characteristics are also compared to the requirements for cup anemometers in the Danish wind turbine certification system and the CD and CDV of the revision of the standard IEC 61400-12. (au)

  2. ACCUWIND - Methods for classification of cup anemometers

    Energy Technology Data Exchange (ETDEWEB)

    Dahlberg, J.Aa.; Friis Pedersen, T.; Busche, P.

    2006-05-15

    Errors associated with the measurement of wind speed are the major sources of uncertainty in power performance testing of wind turbines. Field comparisons of well-calibrated anemometers show significant and unacceptable differences. The European CLASSCUP project set the objectives of quantifying the errors associated with the use of cup anemometers and of developing a classification system for quantification of systematic errors of cup anemometers. This classification system has now been implemented in the IEC 61400-12-1 standard on power performance measurements, in Annexes I and J. The classification of cup anemometers requires general external climatic operational ranges to be applied for the analysis of systematic errors. A Class A category classification is connected to reasonably flat sites, and another Class B category is connected to complex terrain. General classification indices are the result of the assessment of systematic deviations. The present report focuses on methods that can be applied for the assessment of such systematic deviations. A new alternative method for torque coefficient measurements under inclined flow has been developed, which has then been applied and compared to the existing methods developed in the CLASSCUP project and earlier. A number of approaches, including the use of two cup anemometer models, two methods of torque coefficient measurement, two angular response measurements, and the inclusion and exclusion of the influence of friction, have been implemented in the classification process in order to assess the robustness of the methods. The results of the analysis are presented as classification indices, which are compared and discussed. (au)

  3. A Study of Improving Response Time Verification Method for Pressure Transmitters

    International Nuclear Information System (INIS)

    Lee, Jungyang; Ha, Jaehong; Jung, Insoo; Jo, Junghee; Kim, Hangbae

    2007-01-01

    Technical Specifications (TS) of OPR1000-type nuclear power plants in Korea require pressure sensor response time testing (RTT) to ensure sensor performance per the assumptions in plant safety analyses. However, the need for pressure sensor response time testing is not clear, because the nominal sensor response times are on the order of milliseconds while the overall loop response time limits range from several seconds to tens of seconds. Additionally, response time testing does not appear to identify response time degradation or failures. Consequently, the need for this testing has been questioned; a study to determine whether response time testing is necessary to justify the assumptions in plant safety analyses has been conducted in the United States, and the NRC has approved removing the test requirements. A similar study was conducted for OPR1000-type nuclear power plants and the results are presented here.

  4. Trends and concepts in fern classification

    Science.gov (United States)

    Christenhusz, Maarten J. M.; Chase, Mark W.

    2014-01-01

    Background and Aims Throughout the history of fern classification, familial and generic concepts have been highly labile. Many classifications and evolutionary schemes have been proposed during the last two centuries, reflecting different interpretations of the available evidence. Knowledge of fern structure and life histories has increased through time, providing more evidence on which to base ideas of possible relationships, and classification has changed accordingly. This paper reviews previous classifications of ferns and presents ideas on how to achieve a more stable consensus. Scope An historical overview is provided from the first to the most recent fern classifications, from which conclusions are drawn on past changes and future trends. The problematic concept of family in ferns is discussed, with a particular focus on how this has changed over time. The history of molecular studies and the most recent findings are also presented. Key Results Fern classification generally shows a trend from highly artificial, based on an interpretation of a few extrinsic characters, via natural classifications derived from a multitude of intrinsic characters, towards more evolutionary circumscriptions of groups that do not in general align well with the distribution of these previously used characters. It also shows a progression from a few broad family concepts to systems that recognized many more narrowly and highly controversially circumscribed families; currently, the number of families recognized is stabilizing somewhere between these extremes. Placement of many genera was uncertain until the arrival of molecular phylogenetics, which has rapidly been improving our understanding of fern relationships. As a collective category, the so-called ‘fern allies’ (e.g. Lycopodiales, Psilotaceae, Equisetaceae) were unsurprisingly found to be polyphyletic, and the term should be abandoned. Lycopodiaceae, Selaginellaceae and Isoëtaceae form a clade (the lycopods) that is

  5. Trends and concepts in fern classification.

    Science.gov (United States)

    Christenhusz, Maarten J M; Chase, Mark W

    2014-03-01

    Throughout the history of fern classification, familial and generic concepts have been highly labile. Many classifications and evolutionary schemes have been proposed during the last two centuries, reflecting different interpretations of the available evidence. Knowledge of fern structure and life histories has increased through time, providing more evidence on which to base ideas of possible relationships, and classification has changed accordingly. This paper reviews previous classifications of ferns and presents ideas on how to achieve a more stable consensus. An historical overview is provided from the first to the most recent fern classifications, from which conclusions are drawn on past changes and future trends. The problematic concept of family in ferns is discussed, with a particular focus on how this has changed over time. The history of molecular studies and the most recent findings are also presented. Fern classification generally shows a trend from highly artificial, based on an interpretation of a few extrinsic characters, via natural classifications derived from a multitude of intrinsic characters, towards more evolutionary circumscriptions of groups that do not in general align well with the distribution of these previously used characters. It also shows a progression from a few broad family concepts to systems that recognized many more narrowly and highly controversially circumscribed families; currently, the number of families recognized is stabilizing somewhere between these extremes. Placement of many genera was uncertain until the arrival of molecular phylogenetics, which has rapidly been improving our understanding of fern relationships. As a collective category, the so-called 'fern allies' (e.g. Lycopodiales, Psilotaceae, Equisetaceae) were unsurprisingly found to be polyphyletic, and the term should be abandoned. Lycopodiaceae, Selaginellaceae and Isoëtaceae form a clade (the lycopods) that is sister to all other vascular plants, whereas

  6. Transportation Modes Classification Using Sensors on Smartphones

    Directory of Open Access Journals (Sweden)

    Shih-Hau Fang

    2016-08-01

    Full Text Available This paper investigates transportation and vehicular mode classification using big data from smartphone sensors. The three types of sensors used in this paper are the accelerometer, magnetometer, and gyroscope. This study proposes improved features and uses three machine learning algorithms, including decision trees, K-nearest neighbor, and support vector machine, to classify the user's transportation and vehicular modes. In the experiments, we discussed and compared the performance from different perspectives, including the accuracy for both modes, the execution time, and the model size. Results show that the proposed features enhance the accuracy; the support vector machine provides the best classification accuracy, whereas it consumes the largest prediction time. This paper also investigates the vehicular classification mode and compares the results with those of the transportation modes.
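
    A brief sketch comparing the three classifier families named above (decision tree, k-nearest neighbour, SVM) on simple accelerometer-style features; the synthetic windows, the mean/std/magnitude features and the timing comparison are illustrative assumptions rather than the paper's feature set.

      # Sketch: compare decision tree, k-NN and SVM on windowed accelerometer features.
      import time
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(5)
      n_windows, window_len = 2000, 128
      acc = rng.normal(size=(n_windows, window_len, 3))            # synthetic 3-axis windows
      feats = np.hstack([acc.mean(axis=1), acc.std(axis=1),        # per-axis mean and std
                         np.linalg.norm(acc, axis=2).mean(axis=1, keepdims=True)])
      modes = rng.integers(0, 4, size=n_windows)                   # invented labels (walk, bike, bus, car)

      X_tr, X_te, y_tr, y_te = train_test_split(feats, modes, test_size=0.3, random_state=0)
      # Labels are random here, so accuracy is near chance; real labelled trips would replace `modes`.
      for name, clf in [("tree", DecisionTreeClassifier(random_state=0)),
                        ("knn", KNeighborsClassifier(n_neighbors=5)),
                        ("svm", SVC(kernel="rbf", gamma="scale"))]:
          clf.fit(X_tr, y_tr)
          t0 = time.perf_counter()
          acc_score = clf.score(X_te, y_te)
          print(f"{name}: accuracy={acc_score:.2f}, prediction time={time.perf_counter() - t0:.3f}s")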

  7. Real-Time Classification of Patients with Balance Disorders vs. Normal Subjects Using a Low-Cost Small Wireless Wearable Gait Sensor

    Directory of Open Access Journals (Sweden)

    Bhargava Teja Nukala

    2016-11-01

    Full Text Available Gait analysis using wearable wireless sensors can be an economical, convenient and effective way to provide diagnostic and clinical information for various health-related issues. In this work, our custom-designed low-cost wireless gait analysis sensor, which contains a basic inertial measurement unit (IMU), was used to collect gait data for four patients diagnosed with balance disorders and three normal subjects, each performing the Dynamic Gait Index (DGI) tests while wearing the custom wireless gait analysis sensor (WGAS). The small WGAS includes a tri-axial accelerometer integrated circuit (IC), two gyroscope ICs and a Texas Instruments (TI) MSP430 microcontroller, and is worn by each subject at the T4 position during the DGI tests. The raw gait data are wirelessly transmitted from the WGAS to a nearby PC for real-time gait data collection and analysis. In order to perform successful classification of patients vs. normal subjects, we used several different classification algorithms, such as the back propagation artificial neural network (BP-ANN), support vector machine (SVM), k-nearest neighbors (KNN) and binary decision trees (BDT), based on features extracted from the raw gait data of the gyroscopes and accelerometers. When the range was used as the input feature, the overall classification accuracy obtained is 100% with BP-ANN, 98% with SVM, 96% with KNN and 94% using BDT. Similarly high classification accuracy results were also achieved when the standard deviation or other values were used as input features to these classifiers. These results show that gait data collected from our very low-cost wearable wireless gait sensor can effectively differentiate patients with balance disorders from normal subjects in real time using various classifiers, the success of which may eventually lead to accurate and objective diagnosis of abnormal human gaits and their underlying etiologies in the future, as more patient data are being collected.

  8. Regional Landslide Mapping Aided by Automated Classification of SqueeSAR™ Time Series (Northern Apennines, Italy)

    Science.gov (United States)

    Iannacone, J.; Berti, M.; Allievi, J.; Del Conte, S.; Corsini, A.

    2013-12-01

    Spaceborne InSAR has proven to be very valuable for landslide detection. In particular, extremely slow landslides (Cruden and Varnes, 1996) can now be clearly identified, thanks to the millimetric precision reached by recent multi-interferometric algorithms. The typical approach in radar interpretation for landslide mapping is based on the average annual velocity of the deformation, which is calculated over the entire time series. The Hotspot and Cluster Analysis (Lu et al., 2012) and the PSI-based matrix approach (Cigna et al., 2013) are examples of landslide mapping techniques based on average annual velocities. However, slope movements can be affected by non-linear deformation trends (i.e. reactivation of dormant landslides, deceleration due to natural or man-made slope stabilization, seasonal activity, etc.). Therefore, analyzing deformation time series is crucial in order to fully characterize slope dynamics. While this is relatively simple to carry out manually when dealing with a small dataset, time series analysis over a regional-scale dataset requires automated classification procedures. Berti et al. (2013) developed an automatic procedure for the analysis of InSAR time series based on a sequence of statistical tests. The analysis allows the time series to be classified into six distinctive target trends (0=uncorrelated; 1=linear; 2=quadratic; 3=bilinear; 4=discontinuous without constant velocity; 5=discontinuous with change in velocity) which are likely to represent different slope processes. The analysis also provides a series of descriptive parameters which can be used to characterize the temporal changes of ground motion. All the classification algorithms were integrated into a Graphical User Interface called PSTime. We investigated an area of about 2000 km2 in the Northern Apennines of Italy by using the SqueeSAR™ algorithm (Ferretti et al., 2011). Two Radarsat-1 data stacks, comprising 112 scenes in descending orbit and 124 scenes in ascending orbit
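
    A reduced sketch of the idea of classifying a displacement time series by the trend it follows: fit competing models (here uncorrelated, linear, quadratic) and keep the one with the lowest BIC. The model set, the BIC criterion and the synthetic series are assumptions for illustration, not the statistical test sequence implemented in PSTime.

      # Sketch: classify a displacement time series by comparing polynomial trend models with BIC.
      import numpy as np

      def bic_for_degree(t, y, degree):
          coeffs = np.polyfit(t, y, degree)
          resid = y - np.polyval(coeffs, t)
          n, k = len(y), degree + 1
          return n * np.log(np.mean(resid**2)) + k * np.log(n)

      rng = np.random.default_rng(6)
      t = np.arange(100, dtype=float)                  # e.g. acquisition index
      y = -0.3 * t + rng.normal(0, 2.0, size=t.size)   # synthetic, linearly subsiding target

      candidates = {"uncorrelated": 0, "linear": 1, "quadratic": 2}
      scores = {name: bic_for_degree(t, y, d) for name, d in candidates.items()}
      print("best-fitting trend:", min(scores, key=scores.get))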

  9. SAW Classification Algorithm for Chinese Text Classification

    OpenAIRE

    Xiaoli Guo; Huiyu Sun; Tiehua Zhou; Ling Wang; Zhaoyang Qu; Jiannan Zang

    2015-01-01

    Considering the explosive growth of data, the increasing volume of text data places higher demands on the performance of text categorization, demands that existing classification methods cannot satisfy. Based on the study of existing text classification technology and semantics, this paper puts forward a SAW (Structural Auxiliary Word) algorithm oriented to Chinese text classification. The algorithm uses the special space effect of Chinese text where words...

  10. Classification in context

    DEFF Research Database (Denmark)

    Mai, Jens Erik

    2004-01-01

    This paper surveys classification research literature, discusses various classification theories, and shows that the focus has traditionally been on establishing a scientific foundation for classification research. This paper argues that a shift has taken place, and suggests that contemporary classification research focus on contextual information as the guide for the design and construction of classification schemes.

  11. Bianchi like classification of cosmologies in conformally flat space-times

    International Nuclear Information System (INIS)

    Tauber, G.E.

    1989-01-01

    Solutions of Killing's equations for a conformally flat line element have been found, which are seen to correspond to the conformal group of transformations consisting of the pure conformal group, the Lorentz group, translations and dilations. A classification of the line element has been carried out, singly and by combining several of them. Upon comparison with expanding universes it has been found that the Friedmann universes form a subclass, with other cosmologies resulting in wider subclasses. (orig.)
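
    For orientation, a hedged sketch (in LaTeX, with assumed notation) of the objects behind this abstract: a conformally flat line element with conformal factor Ω, and the Killing equation whose solutions generate the symmetries that the classification organizes. The Killing vectors of such a metric are those conformal Killing vectors of flat space-time, a 15-parameter group, that remain exact isometries for the chosen Ω.

      % Assumed notation: eta is the flat (Minkowski) metric, Omega a positive conformal factor.
      \[
        ds^{2} = \Omega^{2}(x)\,\eta_{\mu\nu}\,dx^{\mu}dx^{\nu},
        \qquad
        \nabla_{\mu}\xi_{\nu} + \nabla_{\nu}\xi_{\mu} = 0 .
      \]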

  12. An automated cirrus classification

    Science.gov (United States)

    Gryspeerdt, Edward; Quaas, Johannes; Goren, Tom; Klocke, Daniel; Brueck, Matthias

    2018-05-01

    Cirrus clouds play an important role in determining the radiation budget of the earth, but many of their properties remain uncertain, particularly their response to aerosol variations and to warming. Part of the reason for this uncertainty is the dependence of cirrus cloud properties on the cloud formation mechanism, which itself is strongly dependent on the local meteorological conditions. In this work, a classification system (Identification and Classification of Cirrus or IC-CIR) is introduced to identify cirrus clouds by the cloud formation mechanism. Using reanalysis and satellite data, cirrus clouds are separated into four main types: orographic, frontal, convective and synoptic. Through a comparison to convection-permitting model simulations and back-trajectory-based analysis, it is shown that these observation-based regimes can provide extra information on the cloud-scale updraughts and the frequency of occurrence of liquid-origin ice, with the convective regime having higher updraughts and a greater occurrence of liquid-origin ice compared to the synoptic regimes. Despite having different cloud formation mechanisms, the radiative properties of the regimes are not distinct, indicating that retrieved cloud properties alone are insufficient to completely describe them. This classification is designed to be easily implemented in GCMs, helping improve future model-observation comparisons and leading to improved parametrisations of cirrus cloud processes.

  13. Improving the Computational Performance of Ontology-Based Classification Using Graph Databases

    Directory of Open Access Journals (Sweden)

    Thomas J. Lampoltshammer

    2015-07-01

    Full Text Available The increasing availability of very high-resolution remote sensing imagery (i.e., from satellites, airborne laser scanning, or aerial photography) represents both a blessing and a curse for researchers. The manual classification of these images, or of other similar geo-sensor data, is time-consuming and leads to subjective and non-deterministic results. Due to this fact, (semi-)automated classification approaches are in high demand in the affected research areas. Ontologies provide a proper way of automated classification for various kinds of sensor data, including remotely sensed data. However, the processing of data entities—so-called individuals—is one of the most cost-intensive computational operations within ontology reasoning. Therefore, an approach based on graph databases is proposed to overcome the issue of high time consumption in the classification task. The introduced approach shifts the classification task from the classical Protégé environment and its common reasoners to the proposed graph-based approaches. For the validation, the authors tested the approach on a simulation scenario based on a real-world example. The results demonstrate a quite promising improvement of classification speed—up to 80,000 times faster than the Protégé-based approach.

  14. Classification

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2017-01-01

    This article presents and discusses definitions of the term "classification" and the related concepts "concept/conceptualization," "categorization," "ordering," "taxonomy" and "typology." It further presents and discusses theories of classification including the influences of Aristotle and Wittgenstein. It presents different views on forming classes, including logical division, numerical taxonomy, historical classification, hermeneutical and pragmatic/critical views. Finally, issues related to artificial versus natural classification and taxonomic monism versus taxonomic pluralism are briefly...

  15. Peripheral visual response time and visual display layout

    Science.gov (United States)

    Haines, R. F.

    1974-01-01

    Experiments were performed on a group of 42 subjects in a study of their peripheral visual response time to visual signals under positive acceleration, during prolonged bedrest, at passive 70 deg headup body lift, under exposures to high air temperatures and high luminance levels, and under normal stress-free laboratory conditions. Diagrams are plotted for mean response times to white, red, yellow, green, and blue stimuli under different conditions.

  16. T-ray relevant frequencies for osteosarcoma classification

    Science.gov (United States)

    Withayachumnankul, W.; Ferguson, B.; Rainsford, T.; Findlay, D.; Mickan, S. P.; Abbott, D.

    2006-01-01

    We investigate the classification of the T-ray response of normal human bone cells and human osteosarcoma cells grown in culture. Given the magnitude and phase responses within a reliable spectral range as features for the input vectors, a trained support vector machine can correctly classify the two cell types to some extent. Performance of the support vector machine is degraded by the curse of dimensionality, resulting from the comparatively large number of features in the input vectors. Feature subset selection methods are therefore used to select only an optimal number of relevant features as inputs. As a result, an improvement in generalization performance is attainable, and the selected frequencies can be used to further describe the different mechanisms by which the cells respond to T-rays. We demonstrate a consistent classification accuracy of 89.6% while only one fifth of the original features are retained in the data set.
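
    The pipeline described above (feature subset selection followed by an SVM) can be sketched roughly as follows. This is not the authors' code: the data are random placeholders for the magnitude/phase features, and univariate ranking with scikit-learn stands in for whichever subset-selection method was actually used.

        # Illustrative sketch: shrink a high-dimensional spectral feature vector before
        # training an SVM, to counter the curse of dimensionality. X, y are placeholders.
        import numpy as np
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 200))          # 200 spectral features per sample
        y = rng.integers(0, 2, size=120)         # two cell classes

        # Keep roughly one fifth of the features, mirroring the reduction reported above.
        clf = make_pipeline(SelectKBest(f_classif, k=40), SVC(kernel="rbf", C=1.0))
        scores = cross_val_score(clf, X, y, cv=5)
        print("mean CV accuracy:", scores.mean())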

  17. Collateral in Loan Classification and Provisioning

    OpenAIRE

    In W Song

    2002-01-01

    Adequate loan classification practices are an essential part of a sound and effective credit risk-management process in a bank. Failure to identify deterioration in credit quality in a timely manner can aggravate and prolong the problem. Two key issues arise with regard to the use of collateral in the context of loan classification and provisioning. In particular, the questions arise whether collateral should be taken into account in classifying a collateralized loan, and whether it should be...

  18. 78 FR 54970 - Cotton Futures Classification: Optional Classification Procedure

    Science.gov (United States)

    2013-09-09

    ... Service 7 CFR Part 27 [AMS-CN-13-0043] RIN 0581-AD33 Cotton Futures Classification: Optional Classification Procedure AGENCY: Agricultural Marketing Service, USDA. ACTION: Proposed rule. SUMMARY: The... optional cotton futures classification procedure--identified and known as ``registration'' by the U.S...

  19. Lymphoma classification update: B-cell non-Hodgkin lymphomas.

    Science.gov (United States)

    Jiang, Manli; Bennani, N Nora; Feldman, Andrew L

    2017-05-01

    Lymphomas are classified based on the normal counterpart, or cell of origin, from which they arise. Because lymphocytes have physiologic immune functions that vary both by lineage and by stage of differentiation, the classification of lymphomas arising from these normal lymphoid populations is complex. Recent genomic data have contributed additional complexity. Areas covered: Lymphoma classification follows the World Health Organization (WHO) system, which reflects international consensus and is based on pathological, genetic, and clinical factors. A 2016 revision to the WHO classification of lymphoid neoplasms was recently reported. The present review focuses on B-cell non-Hodgkin lymphomas, the most common group of lymphomas, and summarizes recent changes most relevant to hematologists and other clinicians who care for lymphoma patients. Expert commentary: Lymphoma classification is a continually evolving field that needs to be responsive to new clinical, pathological, and molecular understanding of lymphoid neoplasia. Among the entities covered in this review, the 2016 revision of the WHO classification particularly impacts the subclassification and genetic stratification of diffuse large B-cell lymphoma and high-grade B-cell lymphomas, and reflects evolving criteria and nomenclature for indolent B-cell lymphomas and lymphoproliferative disorders.

  20. An interobserver reliability comparison between the Orthopaedic Trauma Association's open fracture classification and the Gustilo and Anderson classification.

    Science.gov (United States)

    Ghoshal, A; Enninghorst, N; Sisak, K; Balogh, Z J

    2018-02-01

    To evaluate interobserver reliability of the Orthopaedic Trauma Association's open fracture classification system (OTA-OFC). Patients of any age with a first presentation of an open long bone fracture were included. Standard radiographs, wound photographs, and a short clinical description were given to eight orthopaedic surgeons, who independently evaluated the injury using both the Gustilo and Anderson (GA) and OTA-OFC classifications. The responses were compared for variability using Cohen's kappa. The overall interobserver agreement was ĸ = 0.44 for the GA classification and ĸ = 0.49 for OTA-OFC, which reflects moderate agreement (0.41 to 0.60) for both classifications. The agreement in the five categories of OTA-OFC was: for skin, ĸ = 0.55 (moderate); for muscle, ĸ = 0.44 (moderate); for arterial injury, ĸ = 0.74 (substantial); for contamination, ĸ = 0.35 (fair); and for bone loss, ĸ = 0.41 (moderate). Although the OTA-OFC, with similar interobserver agreement to GA, offers a more detailed description of open fractures, further development may be needed to make it a reliable and robust tool. Cite this article: Bone Joint J 2018;100-B:242-6. ©2018 The British Editorial Society of Bone & Joint Surgery.

  1. A Just-in-Time Learning based Monitoring and Classification Method for Hyper/Hypocalcemia Diagnosis.

    Science.gov (United States)

    Peng, Xin; Tang, Yang; He, Wangli; Du, Wenli; Qian, Feng

    2017-01-20

    This study focuses on the classification and pathological status monitoring of hyper/hypocalcemia in the calcium regulatory system. Using an Independent Component Analysis (ICA) mixture model, samples from healthy patients are collected, diagnosed, and subsequently classified according to their underlying behaviors, characteristics, and mechanisms. A Just-in-Time Learning (JITL) scheme is then employed to estimate the diseased status dynamically. Because JITL requires an appropriate similarity index to identify relevant datasets, a novel similarity index based on the ICA mixture model is proposed in this paper to improve online model quality. The validity and effectiveness of the proposed approach are demonstrated by applying it to the calcium regulatory system under various hypocalcemic and hypercalcemic diseased conditions.
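
    A minimal just-in-time learning loop, assuming a plain Euclidean similarity index in place of the ICA-mixture-based index proposed in the paper, might look like the following; X_hist, y_hist and the local Ridge model are illustrative placeholders.

        # Just-in-time learning sketch: for each query, retrieve the most similar
        # historical samples and fit a local model on the fly.
        import numpy as np
        from sklearn.linear_model import Ridge

        def jitl_predict(X_hist, y_hist, x_query, k=20):
            """Fit a local model on the k historical samples most similar to x_query."""
            dist = np.linalg.norm(X_hist - x_query, axis=1)   # similarity index (assumed Euclidean)
            idx = np.argsort(dist)[:k]                        # relevant local dataset
            local_model = Ridge(alpha=1.0).fit(X_hist[idx], y_hist[idx])
            return local_model.predict(x_query.reshape(1, -1))[0]

        rng = np.random.default_rng(1)
        X_hist = rng.normal(size=(500, 6))                    # historical measurements
        y_hist = X_hist @ rng.normal(size=6) + rng.normal(scale=0.1, size=500)
        print(jitl_predict(X_hist, y_hist, rng.normal(size=6)))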

  2. Analysis of MODIS 250 m Time Series Product for LULC Classification and Retrieval of Crop Biophysical Parameter

    Science.gov (United States)

    Verma, A. K.; Garg, P. K.; Prasad, K. S. H.; Dadhwal, V. K.

    2016-12-01

    Agriculture is a backbone of the Indian economy, providing livelihood to about 70% of the population. The primary objective of this research is to investigate the general applicability of time-series MODIS 250m Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI) data for the classification of various Land use/Land cover (LULC) types. The other objective is the retrieval of crop biophysical parameters using MODIS 250m resolution data. The Uttar Pradesh state of India was selected for this research work. A field study of 38 farms was conducted during the entire crop season of the year 2015 to evaluate the applicability of MODIS 8-day, 250m resolution composite images for assessment of crop condition. A spectroradiometer was used to measure ground reflectance, and the AccuPAR LP-80 ceptometer was used to measure the Leaf Area Index (LAI) of the agricultural crops. The AccuPAR measures Photosynthetically Active Radiation (PAR) and can invert these readings to give LAI for a plant canopy. Ground-based canopy reflectance and LAI were used to calibrate a radiative transfer model and create a look-up table (LUT) that was used to simulate LAI. The seasonal trend of MODIS-derived LAI was used to find crop parameters by adjusting the LAI simulated from a climate-based crop yield model. Cloud-free MODIS images of 250m resolution (16-day composite period) were downloaded from the LP-DAAC website over a period of 12 months (Jan to Dec 2015). Both MODIS VI products were found to have sufficient spectral, spatial and temporal resolution to detect unique signatures for each class (water, fallow land, urban, dense vegetation, orchard, sugarcane and other crops). Ground truth data were collected using a JUNO GPS. Multi-temporal VI signatures for vegetation classes were consistent with their general phenological characteristics and were spectrally separable at some point during the growing season. The MODIS NDVI and EVI multi-temporal images tracked similar seasonal responses for all croplands and were

  3. The Improvement of Land Cover Classification by Thermal Remote Sensing

    Directory of Open Access Journals (Sweden)

    Liya Sun

    2015-06-01

    Land cover classification has been widely investigated in remote sensing for agricultural, ecological and hydrological applications. Landsat images with multispectral bands are commonly used to study numerous classification methods in order to improve the classification accuracy. Thermal remote sensing provides valuable information for investigating the effectiveness of the thermal bands in extracting land cover patterns. k-NN and Random Forest algorithms were applied to both a single Landsat 8 image and a time series of Landsat 4/5 images for the Attert catchment in the Grand Duchy of Luxembourg, trained and validated with ground-truth reference data following the three-level classification scheme from the COoRdination of INformation on the Environment (CORINE), using the 10-fold cross-validation method. The accuracy assessment showed that, compared to the visible and near-infrared (VIS/NIR) bands, the time series of thermal images alone can produce comparatively reliable land cover maps, with the best overall accuracy of 98.7% to 99.1% for Level 1 classification and 93.9% to 96.3% for Level 2 classification. In addition, the combination with the thermal band improves the overall accuracy by 5% and 6% for the single Landsat 8 image in the Level 2 and Level 3 categories and provides the best classification results with all seven bands for the time series of Landsat TM images.
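
    The band-combination comparison reported above can be mimicked with a simple cross-validation experiment; the sketch below uses synthetic features rather than the Attert imagery and is only meant to show the with/without-thermal accuracy comparison under 10-fold cross-validation.

        # Hedged sketch: train a Random Forest with and without an extra "thermal" feature
        # and compare 10-fold cross-validated accuracy. Data are synthetic placeholders.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(2)
        n = 600
        vis_nir = rng.normal(size=(n, 6))                     # VIS/NIR reflectance bands
        labels = rng.integers(0, 4, size=n)                   # land cover classes
        thermal = labels[:, None] + rng.normal(scale=0.5, size=(n, 1))  # informative thermal band

        rf = RandomForestClassifier(n_estimators=200, random_state=0)
        acc_visnir = cross_val_score(rf, vis_nir, labels, cv=10).mean()
        acc_all = cross_val_score(rf, np.hstack([vis_nir, thermal]), labels, cv=10).mean()
        print(f"VIS/NIR only: {acc_visnir:.3f}, with thermal band: {acc_all:.3f}")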

  4. Automatic Genre Classification of Musical Signals

    Science.gov (United States)

    Barbedo, Jayme Garcia Arnal; Lopes, Amauri

    2006-12-01

    We present a strategy to perform automatic genre classification of musical signals. The technique divides the signals into 21.3-millisecond frames, from which 4 features are extracted. The values of each feature are treated over 1-second analysis segments. Some statistical results of the features along each analysis segment are used to determine a vector of summary features that characterizes the respective segment. Next, a classification procedure uses those vectors to differentiate between genres. The classification procedure has two main characteristics: (1) a very wide and deep taxonomy, which allows a very meticulous comparison between different genres, and (2) a wide pairwise comparison of genres, which emphasizes the differences between each pair of genres. The procedure points out the genre that best fits the characteristics of each segment. The final classification of the signal is given by the genre that appears most often across all signal segments. The approach has shown very good accuracy even for the lowest layers of the hierarchical structure.
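
    The frame/segment/majority-vote logic can be sketched as below. The per-frame features and the segment classifier are placeholders (the paper's actual features and taxonomy are not reproduced); frame and segment lengths follow the abstract, assuming a 48 kHz sampling rate.

        # Sketch of the frame -> segment summary -> majority vote pipeline.
        import numpy as np
        from collections import Counter

        def frame_features(frame):
            # Placeholder for the 4 per-frame features used in the paper.
            return np.array([frame.mean(), frame.std(), np.abs(frame).max(), (frame ** 2).sum()])

        def classify_segment(summary_vector):
            # Placeholder classifier; any trained model could be plugged in here.
            return "rock" if summary_vector[1] > 0.5 else "classical"

        def classify_signal(signal, sr=48000, frame_len=1024):
            seg_len = sr                                   # 1-second analysis segments
            votes = []
            for s in range(0, len(signal) - seg_len + 1, seg_len):
                segment = signal[s:s + seg_len]
                n_frames = len(segment) // frame_len       # ~21.3 ms frames at 48 kHz
                frames = segment[:n_frames * frame_len].reshape(n_frames, frame_len)
                feats = np.array([frame_features(f) for f in frames])
                summary = np.concatenate([feats.mean(axis=0), feats.std(axis=0)])
                votes.append(classify_segment(summary))
            return Counter(votes).most_common(1)[0][0]     # genre voted by most segments

        print(classify_signal(np.random.default_rng(3).normal(size=48000 * 5)))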

  5. Affective stress responses during leisure time: Validity evaluation of a modified version of the Stress-Energy Questionnaire.

    Science.gov (United States)

    Hadžibajramović, Emina; Ahlborg, Gunnar; Håkansson, Carita; Lundgren-Nilsson, Åsa; Grimby-Ekman, Anna

    2015-12-01

    Psychosocial stress at work is one of the most important factors behind increasing sick-leave rates. In addition to work stressors, it is important to account for non-work-related stressors when assessing stress responses. In this study, a modified version of the Stress-Energy Questionnaire (SEQ), the SEQ during leisure time (SEQ-LT), was introduced for assessing the affective stress response during leisure time. The aim of this study was to investigate the internal construct validity of the SEQ-LT. A second aim was to define the cut-off points for the scales, which could indicate high and low levels of leisure-time stress and energy, respectively. Internal construct validity of the SEQ-LT was evaluated using a Rasch analysis. We examined the unidimensionality and other psychometric properties of the scale by the fit to the Rasch model. A criterion-based approach was used for classification into high and low stress/energy levels. The psychometric properties of the stress and energy scales of the SEQ-LT were satisfactory after accommodating for local dependency. The cut-off point for low stress was proposed to be in the interval between 2.45 and 3.02 on the Rasch metric score, while for high stress it was between 3.65 and 3.90. The suggested cut-off points for the low and high energy levels were values between 1.73-1.97 and 2.66-3.08, respectively. The stress and energy scales of the SEQ-LT satisfied the measurement criteria defined by the Rasch analysis and provided a useful tool for non-work-related assessment of stress responses. We provide guidelines on how to interpret the scale values. © 2015 the Nordic Societies of Public Health.

  6. Calibration of the time response functions of a quenched plastic scintillator for neutron time of flight

    CERN Document Server

    Chen, J B; Peng, H S; Tang, C H; Zhang, B H; Ding, Y K; Chen, M; Chen, H S; Li, C G; Wen, T S; Yu, R Z

    2002-01-01

    The time response functions of an ultrafast quenched plastic scintillation detector used to measure neutron time of flight spectra were calibrated by utilizing cosmic rays and implosion neutrons from DT-filled capsules at the Shenguang II laser facility. These sources could be regarded as delta function pulses due to their much narrower time widths than those of the time response functions of the detection system. The results showed that the detector responses to DT neutrons and to cosmic rays were 1.18 and 0.96 ns FWHM, respectively.

  7. Clever Toolbox - the Art of Automated Genre Classification

    DEFF Research Database (Denmark)

    2005-01-01

    Automatic musical genre classification can be defined as the science of finding computer algorithms that take a digitized sound clip as input and yield a musical genre as output. The goal of automated genre classification is, of course, that the musical genre should agree with the human classification....... This demo illustrates an approach to the problem that first extracts frequency-based sound features followed by a "linear regression" classifier. The basic features are the so-called mel-frequency cepstral coefficients (MFCCs), which are extracted on a time-scale of 30 msec. From these MFCC features, auto......) is subsequently used for classification. This classifier is rather simple; current research investigates more advanced methods of classification....

  8. Inter-laboratory agreement on embryo classification and clinical decision: Conventional morphological assessment vs. time lapse.

    Science.gov (United States)

    Martínez-Granados, Luis; Serrano, María; González-Utor, Antonio; Ortíz, Nereyda; Badajoz, Vicente; Olaya, Enrique; Prados, Nicolás; Boada, Montse; Castilla, Jose A

    2017-01-01

    There was almost-perfect inter-laboratory agreement among conventional morphological assessment (CMA), EmbryoScope™ and Primo Vision™, except for false divisions, vacuoles and asymmetry (users of all methods) and multinucleation (users of Primo Vision™), where the degree of agreement was lower. The inter-laboratory agreement on embryo classification according to the ASEBIR criteria was moderate-substantial (Gwet 0.41-0.80) for the laboratories using CMA and EmbryoScope™, and fair-moderate (Gwet 0.21-0.60) for those using Primo Vision™. The inter-laboratory agreement for clinical decision was moderate (Gwet 0.41-0.60) on day 5 for CMA users and almost perfect (Gwet 0.81-1) for time-lapse users. In conclusion, time-lapse technology does not improve inter-laboratory agreement on embryo classification or the analysis of each morphological variable. Moreover, depending on the time-lapse platform used, inter-laboratory agreement may be lower than that obtained by CMA. However, inter-laboratory agreement on clinical decisions is improved with the use of time lapse, regardless of the platform used.

  9. Inter-laboratory agreement on embryo classification and clinical decision: Conventional morphological assessment vs. time lapse.

    Directory of Open Access Journals (Sweden)

    Luis Martínez-Granados

    There was almost-perfect inter-laboratory agreement among conventional morphological assessment (CMA), EmbryoScope™ and Primo Vision™, except for false divisions, vacuoles and asymmetry (users of all methods) and multinucleation (users of Primo Vision™), where the degree of agreement was lower. The inter-laboratory agreement on embryo classification according to the ASEBIR criteria was moderate-substantial (Gwet 0.41-0.80) for the laboratories using CMA and EmbryoScope™, and fair-moderate (Gwet 0.21-0.60) for those using Primo Vision™. The inter-laboratory agreement for clinical decision was moderate (Gwet 0.41-0.60) on day 5 for CMA users and almost perfect (Gwet 0.81-1) for time-lapse users. In conclusion, time-lapse technology does not improve inter-laboratory agreement on embryo classification or the analysis of each morphological variable. Moreover, depending on the time-lapse platform used, inter-laboratory agreement may be lower than that obtained by CMA. However, inter-laboratory agreement on clinical decisions is improved with the use of time lapse, regardless of the platform used.

  10. A Bivariate Generalized Linear Item Response Theory Modeling Framework to the Analysis of Responses and Response Times.

    Science.gov (United States)

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-01-01

    A generalized linear modeling framework for the analysis of responses and response times is outlined. In this framework, referred to as bivariate generalized linear item response theory (B-GLIRT), separate generalized linear measurement models are specified for the responses and the response times that are subsequently linked by cross-relations. The cross-relations can take various forms. Here, we focus on cross-relations with a linear or interaction term for ability tests, and cross-relations with a curvilinear term for personality tests. In addition, we discuss how popular existing models from the psychometric literature are special cases in the B-GLIRT framework depending on restrictions in the cross-relation. This allows us to compare existing models conceptually and empirically. We discuss various extensions of the traditional models motivated by practical problems. We also illustrate the applicability of our approach using various real data examples, including data on personality and cognitive ability.
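
    A rough simulation of this kind of bivariate model, assuming a 2PL-style measurement model for responses, a lognormal model for response times, and a simple linear cross-relation (not necessarily the authors' exact parameterization), could look like this:

        # Simulation sketch of a bivariate response / response-time model with a linear linkage.
        import numpy as np

        rng = np.random.default_rng(4)
        n_persons, n_items = 1000, 20
        theta = rng.normal(size=n_persons)              # ability
        tau = rng.normal(size=n_persons)                # speed
        a, b = rng.uniform(0.8, 2.0, n_items), rng.normal(size=n_items)        # item parameters
        alpha, beta = rng.uniform(1.0, 2.0, n_items), rng.normal(size=n_items)  # time parameters

        # Responses: 2PL-style measurement model.
        p_correct = 1 / (1 + np.exp(-(a * (theta[:, None] - b))))
        responses = rng.uniform(size=p_correct.shape) < p_correct

        # Response times: lognormal measurement model; the log-RT mean shifts with ability
        # through a linear cross-relation with strength rho.
        rho = 0.4
        log_rt = beta - alpha * (tau[:, None] + rho * theta[:, None]) \
                 + rng.normal(scale=0.3, size=p_correct.shape)
        rt = np.exp(log_rt)
        print(responses.mean(), rt.mean())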

  11. Establishment of land model at the Shika Nuclear Power Plant. Mainly, on rock board classification

    International Nuclear Information System (INIS)

    Katagawa, Hideki; Hashimoto, Toru; Hirano, Shuji

    1999-01-01

    In order to grasp the engineering properties of the foundation ground of structures, rock classification is used as a method for dividing the whole rock mass into groups considered to have nearly equal properties. Various classification methods have been devised according to their purpose and the characteristics of the site; for the classification of hard rock, the Denken-type rock classification, which takes the degree of weathering as its main element, is well known. The foundation rock of the Shika Nuclear Power Plant is composed of medium-hard and hard rock, and its weathering is limited to the shallow portion, most of the rock remaining in fresh condition. For such ground, a new classification standard suited to the characteristics of the site was established. This report introduces the process of establishing the new classification standard, the results of its application, and the rock properties. (G.K.)

  12. On the Relationship Between Transfer Function-derived Response Times and Hydrograph Analysis Timing Parameters: Are there Similarities?

    Science.gov (United States)

    Bansah, S.; Ali, G.; Haque, M. A.; Tang, V.

    2017-12-01

    The proportion of precipitation that becomes streamflow is a function of internal catchment characteristics - which include geology, landscape characteristics and vegetation - and influence overall storage dynamics. The timing and quantity of water discharged by a catchment are indeed embedded in event hydrographs. Event hydrograph timing parameters, such as the response lag and time of concentration, are important descriptors of how long it takes the catchment to respond to input precipitation and how long it takes the latter to filter through the catchment. However, the extent to which hydrograph timing parameters relate to average response times derived from fitting transfer functions to annual hydrographs is unknown. In this study, we used a gamma transfer function to determine catchment average response times as well as event-specific hydrograph parameters across a network of eight nested prairie catchments, ranging from 0.19 km2 to 74.6 km2, located in south central Manitoba (Canada). Various statistical analyses were then performed to correlate average response times - estimated using the parameters of the fitted gamma transfer function - to event-specific hydrograph parameters. Preliminary results show significant interannual variations in response times and hydrograph timing parameters: the former were in the order of a few hours to days, while the latter ranged from a few days to weeks. Some statistically significant relationships were detected between response times and event-specific hydrograph parameters. Future analyses will involve the comparison of statistical distributions of event-specific hydrograph parameters with that of runoff response times and baseflow transit times in order to quantify catchment storage dynamics across a range of temporal scales.
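
    One way to picture the transfer-function side of this analysis is to fit a gamma-shaped impulse response that maps precipitation to streamflow and read the average response time off the fitted parameters. The sketch below uses synthetic data and a plain least-squares fit; it is an illustration of the idea, not the study's workflow.

        # Fit a gamma transfer function to a precipitation-streamflow pair; the mean of the
        # fitted gamma (shape * scale) serves as the average response-time estimate.
        import numpy as np
        from scipy.stats import gamma
        from scipy.optimize import minimize

        def simulate_flow(precip, shape, scale, dt=1.0):
            t = np.arange(len(precip)) * dt
            irf = gamma.pdf(t, a=shape, scale=scale) * dt      # gamma transfer function
            return np.convolve(precip, irf)[: len(precip)]

        rng = np.random.default_rng(5)
        precip = rng.exponential(1.0, size=365) * (rng.uniform(size=365) < 0.3)
        flow_obs = simulate_flow(precip, shape=2.0, scale=3.0) + rng.normal(scale=0.01, size=365)

        def loss(params):
            return np.sum((simulate_flow(precip, *params) - flow_obs) ** 2)

        fit = minimize(loss, x0=[1.0, 1.0], bounds=[(0.1, 20), (0.1, 50)])
        shape_hat, scale_hat = fit.x
        print("average response time (days):", shape_hat * scale_hat)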

  13. Planck 2013 results. VII. HFI time response and beams

    CERN Document Server

    Ade, P A R; Armitage-Caplan, C; Arnaud, M; Ashdown, M; Atrio-Barandela, F; Aumont, J; Baccigalupi, C; Banday, A J; Barreiro, R B; Battaner, E; Benabed, K; Benoît, A; Benoit-Lévy, A; Bernard, J -P; Bersanelli, M; Bielewicz, P; Bobin, J; Bock, J J; Bond, J R; Borrill, J; Bouchet, F R; Bowyer, J W; Bridges, M; Bucher, M; Burigana, C; Cardoso, J -F; Catalano, A; Challinor, A; Chamballu, A; Chary, R -R; Chiang, L -Y; Chiang, H C; Christensen, P R; Church, S; Clements, D L; Colombi, S; Colombo, L P L; Couchot, F; Coulais, A; Crill, B P; Curto, A; Cuttaia, F; Danese, L; Davies, R D; de Bernardis, P; de Rosa, A; de Zotti, G; Delabrouille, J; Delouis, J -M; Désert, F -X; Diego, J M; Dole, H; Donzelli, S; Doré, O; Douspis, M; Dunkley, J; Dupac, X; Efstathiou, G; Enßlin, T A; Eriksen, H K; Finelli, F; Forni, O; Frailis, M; Fraisse, A A; Franceschi, E; Galeotta, S; Ganga, K; Giard, M; Giraud-Héraud, Y; González-Nuevo, J; Górski, K M; Gratton, S; Gregorio, A; Gruppuso, A; Gudmundsson, J E; Haissinski, J; Hansen, F K; Hanson, D; Harrison, D; Henrot-Versillé, S; Hernández-Monteagudo, C; Herranz, D; Hildebrandt, S R; Hivon, E; Hobson, M; Holmes, W A; Hornstrup, A; Hou, Z; Hovest, W; Huffenberger, K M; Jaffe, T R; Jaffe, A H; Jones, W C; Juvela, M; Keihänen, E; Keskitalo, R; Kisner, T S; Kneissl, R; Knoche, J; Knox, L; Kunz, M; Kurki-Suonio, H; Lagache, G; Lamarre, J -M; Lasenby, A; Laureijs, R J; Lawrence, C R; Leonardi, R; Leroy, C; Lesgourgues, J; Liguori, M; Lilje, P B; Linden-Vørnle, M; López-Caniego, M; Lubin, P M; Macías-Pérez, J F; MacTavish, C J; Maffei, B; Mandolesi, N; Maris, M; Marshall, D J; Martin, P G; Martínez-González, E; Masi, S; Matarrese, S; Matsumura, T; Matthai, F; Mazzotta, P; McGehee, P; Melchiorri, A; Mendes, L; Mennella, A; Migliaccio, M; Mitra, S; Miville-Deschênes, M -A; Moneti, A; Montier, L; Morgante, G; Mortlock, D; Munshi, D; Murphy, J A; Naselsky, P; Nati, F; Natoli, P; Netterfield, C B; Nørgaard-Nielsen, H U; Noviello, F; Novikov, D; Novikov, I; Osborne, S; Oxborrow, C A; Paci, F; Pagano, L; Pajot, F; Paoletti, D; Pasian, F; Patanchon, G; Perdereau, O; Perotto, L; Perrotta, F; Piacentini, F; Piat, M; Pierpaoli, E; Pietrobon, D; Plaszczynski, S; Pointecouteau, E; Polegre, A M; Polenta, G; Ponthieu, N; Popa, L; Poutanen, T; Pratt, G W; Prézeau, G; Prunet, S; Puget, J -L; Rachen, J P; Reinecke, M; Remazeilles, M; Renault, C; Ricciardi, S; Riller, T; Ristorcelli, I; Rocha, G; Rosset, C; Roudier, G; Rowan-Robinson, M; Rusholme, B; Sandri, M; Santos, D; Sauvé, A; Savini, G; Shellard, E P S; Spencer, L D; Starck, J -L; Stolyarov, V; Stompor, R; Sudiwala, R; Sureau, F; Sutton, D; Suur-Uski, A -S; Sygnet, J -F; Tauber, J A; Tavagnacco, D; Terenzi, L; Tomasi, M; Tristram, M; Tucci, M; Umana, G; Valenziano, L; Valiviita, J; Van Tent, B; Vielva, P; Villa, F; Vittorio, N; Wade, L A; Wandelt, B D; Yvon, D; Zacchei, A; Zonca, A

    2014-01-01

    This paper characterizes the effective beams, the effective beam window functions and the associated errors for the Planck HFI detectors. The effective beam is the angular response including the effect of the optics, detectors, data processing and the scan strategy. The window function is the representation of this beam in the harmonic domain, which is required to recover an unbiased measurement of the CMB angular power spectrum. The HFI is a scanning instrument and its effective beams are the convolution of: (a) the optical response of the telescope and feeds; (b) the processing of the time-ordered data and deconvolution of the bolometric and electronic time response; and (c) the merging of several surveys to produce maps. The time response functions are measured using observations of Jupiter and Saturn and by minimizing survey difference residuals. The scanning beam is the post-deconvolution angular response of the instrument, and is characterized with observations of Mars. The main beam solid angles are determin...

  14. Classification of refrigerants; Classification des fluides frigorigenes

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-07-01

    This document is based on the US standard ANSI/ASHRAE 34, published in 2001 and entitled 'Designation and safety classification of refrigerants'. This classification makes it possible to organize, in an internationally consistent way, all the refrigerants used in the world, thanks to a codification of the refrigerants according to their chemical composition. This note explains this codification: prefix, suffixes (hydrocarbons and derived fluids, azeotropic and non-azeotropic mixtures, various organic compounds, non-organic compounds), and safety classification (toxicity, flammability, case of mixtures). (J.S.)

  15. [Headache: classification and diagnosis].

    Science.gov (United States)

    Carbaat, P A T; Couturier, E G M

    2016-11-01

    There are many types of headache and, moreover, many people have different types of headache at the same time. Adequate treatment is possible only on the basis of the correct diagnosis. Technically and in terms of content, the current diagnostic process for headache is based on the 'International Classification of Headache Disorders' (ICHD-3-beta) that was produced under the auspices of the International Headache Society. This classification is based on a distinction between primary and secondary headaches. The most common primary headache types are tension-type headache, migraine and cluster headache. Application of uniform diagnostic concepts is essential to arrive at the most appropriate treatment of the various types of headache.

  16. A deep learning architecture for temporal sleep stage classification using multivariate and multimodal time series

    OpenAIRE

    Chambon, Stanislas; Galtier, Mathieu; Arnal, Pierrick; Wainrib, Gilles; Gramfort, Alexandre

    2017-01-01

    Sleep stage classification constitutes an important preliminary exam in the diagnosis of sleep disorders. It is traditionally performed by a sleep expert who assigns to each 30s of signal a sleep stage, based on the visual inspection of signals such as electroencephalograms (EEG), electrooculograms (EOG), electrocardiograms (ECG) and electromyograms (EMG). We introduce here the first deep learning approach for sleep stage classification that learns end-to-end without computing spectrograms or...

  17. Land Cover and Land Use Classification with TWOPAC: towards Automated Processing for Pixel- and Object-Based Image Classification

    Directory of Open Access Journals (Sweden)

    Stefan Dech

    2012-09-01

    We present a novel and innovative automated processing environment for the derivation of land cover (LC) and land use (LU) information. This processing framework, named TWOPAC (TWinned Object and Pixel based Automated classification Chain), enables the standardized, independent, user-friendly, and comparable derivation of LC and LU information, with minimized manual classification labor. TWOPAC allows classification of multi-spectral and multi-temporal remote sensing imagery from different sensor types. TWOPAC enables not only pixel-based classification, but also allows classification based on object-based characteristics. Classification is based on a Decision Tree (DT) approach, for which the well-known C5.0 code has been implemented, which builds decision trees based on the concept of information entropy. TWOPAC enables automatic generation of the decision tree classifier based on a C5.0-retrieved ASCII file, as well as fully automatic validation of the classification output via sample-based accuracy assessment. Envisaging the automated generation of standardized land cover products, as well as area-wide classification of large amounts of data in preferably a short processing time, standardized interfaces for process control, Web Processing Services (WPS), as introduced by the Open Geospatial Consortium (OGC), are utilized. TWOPAC’s functionality to process geospatial raster or vector data via web resources (server, network) enables TWOPAC’s usability independent of any commercial client or desktop software and allows for large-scale data processing on servers. Furthermore, the components of TWOPAC were built using open source code components and are implemented as a plug-in for Quantum GIS software for easy handling of the classification process from the user’s perspective.
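
    The decision-tree step alone can be illustrated as follows, with scikit-learn's entropy-based tree standing in for the C5.0 code that TWOPAC wraps; the features, labels, and train/test split are synthetic placeholders.

        # Entropy-based decision tree classification with sample-based accuracy assessment.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(6)
        X = rng.normal(size=(1000, 8))                      # spectral/object features
        y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)       # two land cover classes

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        tree = DecisionTreeClassifier(criterion="entropy", max_depth=6).fit(X_tr, y_tr)
        print("sample-based accuracy assessment:", accuracy_score(y_te, tree.predict(X_te)))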

  18. Comparison of real-time classification systems for arrhythmia detection on Android-based mobile devices.

    Science.gov (United States)

    Leutheuser, Heike; Gradl, Stefan; Kugler, Patrick; Anneken, Lars; Arnold, Martin; Achenbach, Stephan; Eskofier, Bjoern M

    2014-01-01

    The electrocardiogram (ECG) is a key diagnostic tool in heart disease and may serve to detect ischemia, arrhythmias, and other conditions. Automatic, low-cost monitoring of the ECG signal could be used to provide instantaneous analysis in case of symptoms and may trigger the presentation to the emergency department. Currently, since mobile devices (smartphones, tablets) are an integral part of daily life, they could form an ideal basis for an automatic and low-cost monitoring solution for the ECG signal. In this work, we aim for a real-time classification system for arrhythmia detection that is able to run on Android-based mobile devices. Our analysis is based on 70% of the MIT-BIH Arrhythmia and on 70% of the MIT-BIH Supraventricular Arrhythmia databases. The remaining 30% are reserved for the final evaluation. We detected the R-peaks with a QRS detection algorithm and, based on the detected R-peaks, we calculated 16 features (statistical, heartbeat, and template-based). With these features and four different feature subsets we trained 8 classifiers using the Embedded Classification Software Toolbox (ECST) and compared the computational costs for each classification decision and the memory demand for each classifier. We conclude that the C4.5 classifier is best for our two-class classification problem (distinction of normal and abnormal heartbeats) with an accuracy of 91.6%. This classifier still needs a detailed feature selection evaluation. Our next steps are implementing the C4.5 classifier for Android-based mobile devices and evaluating the final system using the remaining 30% of the two used databases.
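
    The beat-level pipeline outlined above (R-peak detection, RR-interval features, tree-based classification) can be sketched roughly as follows; the ECG trace, features, and labels are synthetic, and a scikit-learn decision tree stands in for C4.5.

        # Toy heartbeat pipeline: R-peak detection -> RR features -> decision tree.
        import numpy as np
        from scipy.signal import find_peaks
        from sklearn.tree import DecisionTreeClassifier

        fs = 360                                            # MIT-BIH sampling rate (Hz)
        rng = np.random.default_rng(7)
        t = np.arange(fs * 60) / fs
        ecg = np.sin(2 * np.pi * 1.2 * t) ** 63 + rng.normal(scale=0.05, size=t.size)  # ~72 bpm spikes

        peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))   # R-peak detection
        rr = np.diff(peaks) / fs                                         # RR intervals (s)

        # Per-beat features: current RR, previous RR, and local RR mean.
        X = np.column_stack([rr[1:], rr[:-1], np.convolve(rr, np.ones(5) / 5, "same")[1:]])
        y = (np.abs(rr[1:] - rr[1:].mean()) > 0.15).astype(int)          # toy "abnormal" label

        clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
        print("training accuracy:", clf.score(X, y))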

  19. Modeling operators' emergency response time for chemical processing operations.

    Science.gov (United States)

    Murray, Susan L; Harputlu, Emrah; Mentzer, Ray A; Mannan, M Sam

    2014-01-01

    Operators have a crucial role during emergencies at a variety of facilities such as chemical processing plants. When an abnormality occurs in the production process, the operator often has limited time to either take corrective actions or evacuate before the situation becomes deadly. It is crucial that system designers and safety professionals can estimate the time required for a response before procedures and facilities are designed and operations are initiated. There are existing industrial engineering techniques to establish time standards for tasks performed at a normal working pace. However, it is reasonable to expect the time required to take action in emergency situations will be different than working at a normal production pace. It is possible that in an emergency, operators will act faster compared to a normal pace. It would be useful for system designers to be able to establish a time range for operators' response times for emergency situations. This article develops a modeling approach to estimate the time standard range for operators taking corrective actions or following evacuation procedures in emergency situations. This will aid engineers and managers in establishing time requirements for operators in emergency situations. The methodology used for this study combines a well-established industrial engineering technique for determining time requirements (predetermined time standard system) and adjustment coefficients for emergency situations developed by the authors. Numerous videos of workers performing well-established tasks at a maximum pace were studied. As an example, one of the tasks analyzed was pit crew workers changing tires as quickly as they could during a race. The operations in these videos were decomposed into basic, fundamental motions (such as walking, reaching for a tool, and bending over) by studying the videos frame by frame. A comparison analysis was then performed between the emergency pace and the normal working pace operations

  20. Spectral-spatial classification of hyperspectral data with mutual information based segmented stacked autoencoder approach

    Science.gov (United States)

    Paul, Subir; Nagesh Kumar, D.

    2018-04-01

    Hyperspectral (HS) data comprise continuous spectral responses of hundreds of narrow spectral bands with very fine spectral resolution or bandwidth, which offer feature identification and classification with high accuracy. In the present study, a Mutual Information (MI) based Segmented Stacked Autoencoder (S-SAE) approach for spectral-spatial classification of HS data is proposed to reduce the complexity and computational time compared to Stacked Autoencoder (SAE) based feature extraction. A non-parametric dependency measure (MI) based spectral segmentation is proposed instead of a linear and parametric dependency measure to take care of both linear and nonlinear inter-band dependency for spectral segmentation of the HS bands. Then morphological profiles are created corresponding to the segmented spectral features to assimilate the spatial information in the spectral-spatial classification approach. Two non-parametric classifiers, Support Vector Machine (SVM) with Gaussian kernel and Random Forest (RF), are used for classification of the three most popularly used HS datasets. Results of the numerical experiments carried out in this study show that SVM with a Gaussian kernel provides better results for the Pavia University and Botswana datasets, whereas RF performs better for the Indian Pines dataset. The experiments performed with the proposed methodology provide encouraging results compared to numerous existing approaches.
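
    The spectral-segmentation idea can be illustrated with a simplified sketch: mutual information between neighbouring bands marks the segment boundaries, and each segment is reduced separately before classification. PCA is used here as a lightweight stand-in for the per-segment stacked autoencoders, and the data are random placeholders.

        # MI-based band segmentation, per-segment reduction, then SVM classification.
        import numpy as np
        from sklearn.feature_selection import mutual_info_regression
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC

        rng = np.random.default_rng(8)
        X = rng.normal(size=(500, 100))                     # pixels x hyperspectral bands
        y = rng.integers(0, 5, size=500)                    # land cover labels

        # Mutual information between each band and its neighbour; low-MI points mark segment cuts.
        mi = np.array([mutual_info_regression(X[:, [b]], X[:, b + 1])[0] for b in range(99)])
        cuts = [0] + list(np.where(mi < np.quantile(mi, 0.1))[0] + 1) + [100]

        # Reduce each contiguous band segment separately, then stack the reduced features.
        feats = np.hstack([PCA(n_components=2).fit_transform(X[:, s:e])
                           for s, e in zip(cuts[:-1], cuts[1:]) if e - s >= 2])
        clf = SVC(kernel="rbf").fit(feats, y)
        print("number of segments:", len(cuts) - 1, "reduced dims:", feats.shape[1])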

  1. The Net Enabled Waste Management Database in the context of radioactive waste classification

    International Nuclear Information System (INIS)

    Csullog, G.W.; Burcl, R.; Tonkay, D.; Petoe, A.

    2002-01-01

    There is an emerging, international consensus that a common, comprehensive radioactive waste classification system is needed, which derives from the fact that the implementation of radioactive waste classification within countries is highly diverse. Within IAEA Member States, implementation ranges from none to complex systems that vary a great deal from one another. Both the IAEA and the European Commission have recommended common classification schemes but only for the purpose of facilitating communication with the public and national- and international-level organizations and to serve as the basis for developing comprehensive, national waste classification schemes. In the context described above, the IAEA's newly developed Net Enabled Waste Management Database (NEWMDB) contains a feature, the Waste Class Matrix, that Member States use to describe the waste classification schemes they use and to compare them with the IAEA's proposed waste classification scheme. Member States then report waste inventories to the NEWMDB according to their own waste classification schemes, allowing traceability back to nationally based reports. The IAEA uses the information provided in the Waste Class Matrix to convert radioactive waste inventory data reported according to a wide variety of classifications into an single inventory according to the IAEA's proposed scheme. This approach allows the international community time to develop a comprehensive, common classification scheme and allows Member States time to develop and implement effective, operational waste classification schemes while, at the same time, the IAEA can collect the information needed to compile a comprehensive, international radioactive waste inventory. (author)

  2. Land Cover Classification Using Integrated Spectral, Temporal, and Spatial Features Derived from Remotely Sensed Images

    Directory of Open Access Journals (Sweden)

    Yongguang Zhai

    2018-03-01

    Obtaining accurate and timely land cover information is an important topic in many remote sensing applications. Using satellite image time-series data should make it possible to achieve high-accuracy land cover classification. However, most satellite image time-series classification methods do not fully exploit the available data for mining the effective features to identify different land cover types. Therefore, a classification method that can take full advantage of the rich information provided by time-series data to improve the accuracy of land cover classification is needed. In this paper, a novel method for time-series land cover classification using spectral, temporal, and spatial information at an annual scale was introduced. Based on all the available data from time-series remote sensing images, a refined nonlinear dimensionality reduction method was used to extract the spectral and temporal features, and a modified graph segmentation method was used to extract the spatial features. The proposed classification method was applied in three study areas with complex land cover, including Illinois, South Dakota, and Texas. All the available Landsat time series data in 2014 were used, and the different study areas have different amounts of invalid data. A series of comparative experiments were conducted on the annual time-series images using training data generated from the Cropland Data Layer. The results demonstrated higher overall and per-class classification accuracies and kappa index values using the proposed spectral-temporal-spatial method compared to spectral-temporal classification methods. We also discuss the implications of this study and possibilities for future applications and developments of the method.

  3. Deep-learnt classification of light curves

    DEFF Research Database (Denmark)

    Mahabal, Ashish; Gieseke, Fabian; Pai, Akshay Sadananda Uppinakudru

    2017-01-01

    Astronomy light curves are sparse, gappy, and heteroscedastic. As a result, standard time series methods regularly used for financial and similar datasets are of little help, and astronomers are usually left to their own instruments and techniques to classify light curves. A common approach is to derive statistical features from the time series and to use machine learning methods, generally supervised, to separate objects into a few of the standard classes. In this work, we transform the time series to two-dimensional light curve representations in order to classify them using modern deep learning techniques. In particular, we show that convolutional neural networks based classifiers work well for broad characterization and classification. We use labeled datasets of periodic variables from the CRTS survey and show how this opens doors for a quick classification of diverse classes with several...

  4. The Effect of Police Response Time on Crime Detection

    DEFF Research Database (Denmark)

    Blanes i Vidal, Jordi; Kirchmaier, Tom

    According to our preferred estimate, a 10% increase in response time leads to a 4.6 percentage point decrease in the likelihood of detection. A faster response time also decreases the number of days that it takes for the police to detect a crime, conditional on eventual detection. We find stronger effects for thefts than...

  5. Response times in a two-node queueing network with feedback

    NARCIS (Netherlands)

    van der Mei, R.D.; Gijsen, B.M.M.; in 't Veld, N.; van den Berg, J.L.

    2002-01-01

    The study presented in this paper is motivated by the performance analysis of response times in distributed information systems, where transactions are handled by iterative server and database actions. We model system response times as sojourn times in a two-node open queueing network with a

  6. Response times in a two-node queueing network with feedback

    NARCIS (Netherlands)

    van der Mei, R.D.; Gijsen, B.M.M.; Gijsen, B.M.M.; in 't Veld, N.; van den Berg, Hans Leo

    The study presented in this paper is motivated by the performance analysis of response times in distributed information systems, where transactions are handled by iterative server and database actions. We model system response times as sojourn times in a two-node open queueing network with a

  7. A Framework for Real-Time Collection, Analysis, and Classification of Ubiquitous Infrasound Data

    Science.gov (United States)

    Christe, A.; Garces, M. A.; Magana-Zook, S. A.; Schnurr, J. M.

    2015-12-01

    Traditional infrasound arrays are generally expensive to install and maintain. There are ~10^3 infrasound channels on Earth today. The amount of data currently provided by legacy architectures can be processed on a modest server. However, the growing availability of low-cost, ubiquitous, and dense infrasonic sensor networks presents a substantial increase in the volume, velocity, and variety of data flow. Initial data from a prototype ubiquitous global infrasound network is already pushing the boundaries of traditional research server and communication systems, in particular when serving data products over heterogeneous, international network topologies. We present a scalable, cloud-based approach for capturing and analyzing large amounts of dense infrasonic data (>10^6 channels). We utilize Akka actors with WebSockets to maintain data connections with infrasound sensors. Apache Spark provides streaming, batch, machine learning, and graph processing libraries which will permit signature classification, cross-correlation, and other analytics in near real time. This new framework and approach provide significant advantages in scalability and cost.

  8. Involvement of Machine Learning for Breast Cancer Image Classification: A Survey

    OpenAIRE

    Nahid, Abdullah-Al; Kong, Yinan

    2017-01-01

    Breast cancer is one of the largest causes of women’s death in the world today. Advanced engineering of natural image classification techniques and Artificial Intelligence methods have largely been used for the breast-image classification task. The involvement of digital image classification allows the doctor and the physicians a second opinion, and it saves the doctors’ and physicians’ time. Despite the various publications on breast image classification, very few review papers are available w...

  9. Improvement in MFTF data base system response times

    International Nuclear Information System (INIS)

    Lang, N.C.; Nelson, B.C.

    1983-01-01

    The Supervisory Control and Diagnostic System for the Mirror Fusion Test Facility (MFTF) has been designed as an event driven system. To this end we have designed a data base notification facility in which a task can request that it be loaded and started whenever an element in the data base is changed beyond some user defined range. Our initial implementation of the notify facility exhibited marginal response times whenever a data base table with a large number of outstanding notifies was written into. In this paper we discuss the sources of the slow response and describe in detail a new structure for the list of notifies which minimizes search time resulting in significantly faster response

  10. Uncertainty analysis of accident notification time and emergency medical service response time in work zone traffic accidents.

    Science.gov (United States)

    Meng, Qiang; Weng, Jinxian

    2013-01-01

    Taking into account the uncertainty caused by exogenous factors, the accident notification time (ANT) and emergency medical service (EMS) response time were modeled as two random variables following the lognormal distribution. Their mean values and standard deviations were respectively formulated as functions of environmental variables including crash time, road type, weekend, holiday, light condition, weather, and work zone type. Work zone traffic accident data from the Fatality Analysis Reporting System between 2002 and 2009 were utilized to determine the distributions of the ANT and the EMS arrival time in the United States. A mixed logistic regression model, taking into account the uncertainty associated with the ANT and the EMS response time, was developed to estimate the risk of death. The results showed that the uncertainty of the ANT was primarily influenced by crash time and road type, whereas the uncertainty of the EMS response time was greatly affected by road type, weather, and light conditions. In addition, work zone accidents occurring during a holiday and in poor light conditions were found to be statistically associated with a longer mean ANT and longer EMS response time. The results also show that shortening the ANT was a more effective approach to reducing the risk of death than shortening the EMS response time in work zones. To shorten the ANT and the EMS response time, work zone activities are suggested to be undertaken during non-holidays, during the daytime, and in good weather and light conditions.
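
    The modeling structure described above can be sketched schematically: covariate-dependent lognormal distributions for the two times, feeding a logistic model for the risk of death. All coefficients below are invented for illustration and are not the values estimated from the FARS data.

        # Schematic sketch: covariate-dependent lognormal ANT/EMS times and a logistic risk model.
        import numpy as np

        rng = np.random.default_rng(9)
        n = 5000
        night = rng.integers(0, 2, size=n)                 # crash at night
        rural = rng.integers(0, 2, size=n)                 # rural road type

        # Lognormal accident notification time (ANT) and EMS response time, in minutes.
        mu_ant, sd_ant = 1.2 + 0.3 * night, 0.5 + 0.1 * rural
        mu_ems, sd_ems = 2.0 + 0.4 * rural, 0.4 + 0.2 * night
        ant = rng.lognormal(mean=mu_ant, sigma=sd_ant)
        ems = rng.lognormal(mean=mu_ems, sigma=sd_ems)

        # Logistic risk-of-death model in which ANT carries the larger coefficient,
        # echoing the finding that shortening the ANT matters more than the EMS time.
        logit = -4.0 + 0.08 * ant + 0.03 * ems + 0.5 * night
        p_death = 1 / (1 + np.exp(-logit))
        print("mean simulated death risk:", p_death.mean())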

  11. Active Learning of Classification Models with Likert-Scale Feedback.

    Science.gov (United States)

    Xue, Yanbing; Hauskrecht, Milos

    2017-01-01

    Annotation of classification data by humans can be a time-consuming and tedious process. Finding ways of reducing the annotation effort is critical for building the classification models in practice and for applying them to a variety of classification tasks. In this paper, we develop a new active learning framework that combines two strategies to reduce the annotation effort. First, it relies on label uncertainty information obtained from the human in terms of the Likert-scale feedback. Second, it uses active learning to annotate examples with the greatest expected change. We propose a Bayesian approach to calculate the expectation and an incremental SVM solver to reduce the time complexity of the solvers. We show the combination of our active learning strategy and the Likert-scale feedback can learn classification models more rapidly and with a smaller number of labeled instances than methods that rely on either Likert-scale labels or active learning alone.
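
    A conceptual sketch of such a loop is given below: the learner queries the most uncertain pool example and converts the annotator's Likert rating into a sample weight. The Bayesian expected-change criterion and the incremental SVM solver from the paper are not reproduced; this is plain uncertainty sampling with a standard SVM.

        # Uncertainty-driven active learning with Likert-scale confidence as sample weights.
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(10)
        X_pool = rng.normal(size=(400, 5))
        true_w = rng.normal(size=5)
        oracle = lambda x: int(x @ true_w > 0)                   # hidden labeling rule

        # Seed set guaranteed to contain both classes.
        scores = X_pool @ true_w
        labeled_idx = list(np.where(scores > 0)[0][:5]) + list(np.where(scores <= 0)[0][:5])
        labels = {i: oracle(X_pool[i]) for i in labeled_idx}
        weights = {i: 1.0 for i in labeled_idx}                  # treat seed labels as Likert 5/5

        for _ in range(20):
            clf = SVC(kernel="linear").fit(
                X_pool[labeled_idx], [labels[i] for i in labeled_idx],
                sample_weight=[weights[i] for i in labeled_idx])
            margins = np.abs(clf.decision_function(X_pool))      # uncertainty of every pool item
            margins[labeled_idx] = np.inf                        # never re-query labeled items
            q = int(np.argmin(margins))
            likert = int(rng.integers(3, 6))                     # simulated 1-5 confidence rating
            labeled_idx.append(q)
            labels[q], weights[q] = oracle(X_pool[q]), likert / 5.0

        print("labeled examples after active learning:", len(labeled_idx))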

  12. Stream classification of the Apalachicola-Chattahoochee-Flint River System to support modeling of aquatic habitat response to climate change

    Science.gov (United States)

    Elliott, Caroline M.; Jacobson, Robert B.; Freeman, Mary C.

    2014-01-01

    A stream classification and associated datasets were developed for the Apalachicola-Chattahoochee-Flint River Basin to support biological modeling of species response to climate change in the southeastern United States. The U.S. Geological Survey and the Department of the Interior’s National Climate Change and Wildlife Science Center established the Southeast Regional Assessment Project (SERAP) which used downscaled general circulation models to develop landscape-scale assessments of climate change and subsequent effects on land cover, ecosystems, and priority species in the southeastern United States. The SERAP aquatic and hydrologic dynamics modeling efforts involve multiscale watershed hydrology, stream-temperature, and fish-occupancy models, which all are based on the same stream network. Models were developed for the Apalachicola-Chattahoochee-Flint River Basin and subbasins in Alabama, Florida, and Georgia, and for the Upper Roanoke River Basin in Virginia. The stream network was used as the spatial scheme through which information was shared across the various models within SERAP. Because these models operate at different scales, coordinated pair versions of the network were delineated, characterized, and parameterized for coarse- and fine-scale hydrologic and biologic modeling. The stream network used for the SERAP aquatic models was extracted from a 30-meter (m) scale digital elevation model (DEM) using standard topographic analysis of flow accumulation. At the finer scale, reaches were delineated to represent lengths of stream channel with fairly homogenous physical characteristics (mean reach length = 350 m). Every reach in the network is designated with geomorphic attributes including upstream drainage basin area, channel gradient, channel width, valley width, Strahler and Shreve stream order, stream power, and measures of stream confinement. The reach network was aggregated from tributary junction to tributary junction to define segments for the

  13. Distinguishing Fast and Slow Processes in Accuracy - Response Time Data.

    Directory of Open Access Journals (Sweden)

    Frederik Coomans

    We investigate the relation between speed and accuracy within problem solving in its simplest non-trivial form. We consider tests with only two items and code the item responses in two binary variables: one indicating the response accuracy, and one indicating the response speed. Despite being a very basic setup, it enables us to study item pairs stemming from a broad range of domains such as basic arithmetic, first language learning, intelligence-related problems, and chess, with large numbers of observations for every pair of problems under consideration. We carry out a survey over a large number of such item pairs and compare three types of psychometric accuracy-response time models present in the literature: two 'one-process' models, the first of which models accuracy and response time as conditionally independent and the second of which models accuracy and response time as conditionally dependent, and a 'two-process' model which models accuracy contingent on response time. We find that the data clearly violates the restrictions imposed by both one-process models and requires additional complexity which is parsimoniously provided by the two-process model. We supplement our survey with an analysis of the erroneous responses for an example item pair and demonstrate that there are very significant differences between the types of errors in fast and slow responses.

  14. Interval prediction for graded multi-label classification

    CERN Document Server

    Lastra, Gerardo; Bahamonde, Antonio

    2014-01-01

    Multi-label classification was introduced as an extension of multi-class classification. The aim is to predict a set of classes (called labels in this context) instead of a single one, namely the set of relevant labels. If membership to the set of relevant labels is defined to a certain degree, the learning task is called graded multi-label classification. These learning tasks can be seen as a set of ordinal classifications. Hence, recommender systems can be considered as multi-label classification tasks. In this paper, we present a new type of nondeterministic learner that, for each instance, tries to predict at the same time the true grade for each label. When the classification is uncertain for a label, however, the hypotheses predict a set of consecutive grades, i.e., an interval. The goal is to keep the set of predicted grades as small as possible, while still containing the true grade. We shall see that these classifiers take advantage of the interrelations of labels. The result is that, with quite narrow intervals, i...

  15. Classification and Characteristics of Pain Associated with Parkinson’s Disease

    Directory of Open Access Journals (Sweden)

    Marcelo Rezende Young Blood

    2016-01-01

    Neuropsychiatric symptoms and pain are among the most common nonmotor symptoms of Parkinson’s disease (PD). The correlation between pain and PD has been recognized since its classic descriptions. Pain occurs in about 60% of PD patients, two to three times more frequent in this population than in age matched healthy individuals. It is an early and potentially disabling symptom that can precede motor symptoms by several years. The lower back and lower extremities are the most commonly affected areas. The most used classification for pain in PD defines musculoskeletal, dystonic, central, or neuropathic/radicular forms. Its different clinical characteristics, variable relationship with motor symptoms, and inconsistent response to dopaminergic drugs suggest that the mechanism underlying pain in PD is complex and multifaceted, involving the peripheral nervous system, generation and amplification of pain by motor symptoms, and neurodegeneration of areas related to pain modulation. Although pain in PD is common and a significant source of disability, its clinical characteristics, pathophysiology, classification, and management remain to be defined.

  16. Measuring older adults' sedentary time: reliability, validity, and responsiveness.

    Science.gov (United States)

    Gardiner, Paul A; Clark, Bronwyn K; Healy, Genevieve N; Eakin, Elizabeth G; Winkler, Elisabeth A H; Owen, Neville

    2011-11-01

    With evidence that prolonged sitting has deleterious health consequences, decreasing sedentary time is a potentially important preventive health target. High-quality measures, particularly for use with older adults, who are the most sedentary population group, are needed to evaluate the effect of sedentary behavior interventions. We examined the reliability, validity, and responsiveness to change of a self-report sedentary behavior questionnaire that assessed time spent in behaviors common among older adults: watching television, computer use, reading, socializing, transport and hobbies, and a summary measure (total sedentary time). In the context of a sedentary behavior intervention, nonworking older adults (n = 48, age = 73 ± 8 yr (mean ± SD)) completed the questionnaire on three occasions during a 2-wk period (7 d between administrations) and wore an accelerometer (ActiGraph model GT1M) for two periods of 6 d. Test-retest reliability (for the individual items and the summary measure) and validity (self-reported total sedentary time compared with accelerometer-derived sedentary time) were assessed during the 1-wk preintervention period, using Spearman (ρ) correlations and 95% confidence intervals (CI). Responsiveness to change after the intervention was assessed using the responsiveness statistic (RS). Test-retest reliability was excellent for television viewing time (ρ (95% CI) = 0.78 (0.63-0.89)), computer use (ρ (95% CI) = 0.90 (0.83-0.94)), and reading (ρ (95% CI) = 0.77 (0.62-0.86)); acceptable for hobbies (ρ (95% CI) = 0.61 (0.39-0.76)); and poor for socializing and transport (ρ < 0.45). Total sedentary time had acceptable test-retest reliability (ρ (95% CI) = 0.52 (0.27-0.70)) and validity (ρ (95% CI) = 0.30 (0.02-0.54)). Self-report total sedentary time was similarly responsive to change (RS = 0.47) as accelerometer-derived sedentary time (RS = 0.39). The summary measure of total sedentary time has good repeatability and modest validity and is

  17. NEW CLASSIFICATION OF ECOPOLICES

    Directory of Open Access Journals (Sweden)

    VOROBYOV V. V.

    2016-09-01

    Full Text Available Problem statement. Ecopolises represent the newest stage of urban planning. They should be treated as material, energy, and informational structures embedded in the dynamic-evolutionary matrix networks of exchange processes within ecosystems. However, no ecopolis classifications have yet been developed on the basis of such an approach, and this determines the topicality of the article. An analysis of publications on the theoretical and applied aspects of ecopolis formation shows that work on them is driven mainly by the latest scientific and technological achievements in various fields of knowledge. Such settlements are technocratic: they are tied to the morphology of space and to the network structures of regional and local natural ecosystems, lack independent stability, and cannot exist without continuous human support. In other words, they do not fulfil the idea of an ecopolis. An objective, symbiotic search for the ecopolis concept, together with the development of its classifications, has therefore become urgent. Purpose statement. To develop an objective rationale for ecopolises and to propose their new classification. Conclusion. The classification of ecopolises should rest on the correlation between the elements of their general plans and the types of human activity, in accordance with the natural mechanisms of receiving, processing, and transmitting matter, energy, and information between geo-ecosystems, the planet, humans, the material fabric of the ecopolis, and the Cosmos. A new ecopolis classification should be based on the principles of multidimensional, time-spaced symbiotic coherence with ecosystem exchange networks. With this approach, the function of an ecopolis derives not from a subjective anthropocentric economy but from a holistic paradigm of Genesis; in other words, not from the Consequence but from the Cause.

  18. Free radicals, reactive oxygen species, oxidative stress and its classification.

    Science.gov (United States)

    Lushchak, Volodymyr I

    2014-12-05

    Reactive oxygen species (ROS), initially considered only as damaging agents in living organisms, were later found to play positive roles as well. This paper describes ROS homeostasis, the principles of its investigation, and technical approaches to studying ROS-related processes. Special attention is paid to complications in the experimental documentation of these processes: their diversity, spatiotemporal distribution, and relationships with the physiological state of the organism. An imbalance between ROS generation and elimination in favor of the former, with certain consequences for cell physiology, has been called "oxidative stress". Although almost 30 years have passed since Helmut Sies introduced the first definition of oxidative stress, to date there is no accepted classification of oxidative stress. To fill this gap, a classification of oxidative stress based on its intensity is proposed here. On this basis, oxidative stress may be classified as basal oxidative stress (BOS), low intensity oxidative stress (LOS), intermediate intensity oxidative stress (IOS), and high intensity oxidative stress (HOS). Another classification of potential interest differentiates three categories: mild oxidative stress (MOS), temperate oxidative stress (TOS), and finally severe (strong) oxidative stress (SOS). Promising directions of investigation in the field include the development of a more sophisticated classification of oxidative stress, accurate identification of cellular ROS targets and their orchestrated responses to ROS influence, the real in situ functions and operation of so-called "antioxidants", the intracellular spatiotemporal distribution and effects of ROS, deciphering of the molecular mechanisms responsible for the cellular response to ROS attacks, and ROS involvement in the realization of normal cellular functions in cellular homeostasis. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  19. Hierarchical Bayes Models for Response Time Data

    Science.gov (United States)

    Craigmile, Peter F.; Peruggia, Mario; Van Zandt, Trisha

    2010-01-01

    Human response time (RT) data are widely used in experimental psychology to evaluate theories of mental processing. Typically, the data constitute the times taken by a subject to react to a succession of stimuli under varying experimental conditions. Because of the sequential nature of the experiments there are trends (due to learning, fatigue,…

  20. Classification of hydrocephalus: critical analysis of classification categories and advantages of "Multi-categorical Hydrocephalus Classification" (Mc HC).

    Science.gov (United States)

    Oi, Shizuo

    2011-10-01

    Hydrocephalus is a complex pathophysiology with disturbed cerebrospinal fluid (CSF) circulation. Numerous classification schemes have been published, focusing on various criteria such as associated anomalies/underlying lesions, CSF circulation/intracranial pressure patterns, clinical features, and other categories. However, no definitive classification exists that comprehensively covers all of these aspects. The new classification of hydrocephalus, "Multi-categorical Hydrocephalus Classification" (Mc HC), was invented and developed to cover the entire spectrum of hydrocephalus with all relevant classification items and categories. The ten "Mc HC" categories are I: onset (age, phase), II: cause, III: underlying lesion, IV: symptomatology, V: pathophysiology 1-CSF circulation, VI: pathophysiology 2-ICP dynamics, VII: chronology, VIII: post-shunt, IX: post-endoscopic third ventriculostomy, and X: others. From a 100-year search of publications related to the classification of hydrocephalus, 14 representative publications were reviewed and divided into the 10 categories. The Baumkuchen classification graph made from the round-o'clock classification demonstrated the historical tendency of deviation toward the categories of pathophysiology, either CSF or ICP dynamics. In a preliminary clinical application, it was concluded that "Mc HC" is extremely effective in expressing the individual state with various categories in the past and present condition, and among compatible cases of hydrocephalus, along with possible chronological change in the future.

  1. Classification

    Science.gov (United States)

    Clary, Renee; Wandersee, James

    2013-01-01

    In this article, Renee Clary and James Wandersee describe the beginnings of "Classification," which lies at the very heart of science and depends upon pattern recognition. Clary and Wandersee approach patterns by first telling the story of the "Linnaean classification system," introduced by Carl Linnaeus (1707-1778), who is…

  2. [Construction of the Time Management Scale and examination of the influence of time management on psychological stress response].

    Science.gov (United States)

    Imura, Tomoya; Takamura, Masahiro; Okazaki, Yoshihiro; Tokunaga, Satoko

    2016-10-01

    We developed a scale to measure time management and assessed its reliability and validity. We then used this scale to examine the impact of time management on psychological stress response. In Study 1-1, we developed the scale and assessed its internal consistency and criterion-related validity. Findings from a factor analysis revealed three elements of time management, “time estimation,” “time utilization,” and “taking each moment as it comes.” In Study 1-2, we assessed the scale’s test-retest reliability. In Study 1-3, we assessed the validity of the constructed scale. The results indicate that the time management scale has good reliability and validity. In Study 2, we performed a covariance structural analysis to verify our model that hypothesized that time management influences perceived control of time and psychological stress response, and perceived control of time influences psychological stress response. The results showed that time estimation increases the perceived control of time, which in turn decreases stress response. However, we also found that taking each moment as it comes reduces perceived control of time, which in turn increases stress response.

  3. Seizure classification in EEG signals utilizing Hilbert-Huang transform.

    Science.gov (United States)

    Oweis, Rami J; Abdulhay, Enas W

    2011-05-24

    Methods capable of recognizing abnormal brain activity rely on either brain imaging or brain signal analysis. The abnormal activity of interest in this study is characterized by a disturbance caused by changes in neuronal electrochemical activity that results in abnormal synchronous discharges. The method aims at helping physicians discriminate between healthy and seizure electroencephalographic (EEG) signals. Discrimination in this work is achieved by analyzing EEG signals obtained from freely accessible databases. MATLAB has been used to implement and test the proposed classification algorithm. The analysis presents a classification of normal and ictal activities using features based on the Hilbert-Huang transform. Through this method, information related to the intrinsic mode functions contained in the EEG signal is extracted to track the local amplitude and frequency of the signal. Based on this local information, weighted frequencies are calculated and a comparison between ictal and seizure-free determinant intrinsic functions is then performed. The comparison methods used are the t-test and Euclidean clustering; the t-test was favoured for its fast response and ease of use. This paper presents an original tool for EEG signal processing that gives physicians the possibility to diagnose abnormalities of brain function. The proposed system bears the potential of providing several credible benefits such as fast diagnosis, high accuracy, good sensitivity and specificity, time saving, and user friendliness. Furthermore, the classification of mode mixing can be achieved using the extracted instantaneous information of every IMF, but it would most likely be a hard task if only the average value were used. Extra benefits of the proposed system include low cost and ease of interfacing. All of this indicates the usefulness of the tool and its use as an efficient diagnostic tool.
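    The abstract describes extracting instantaneous amplitude and frequency from intrinsic mode functions and comparing a weighted-frequency feature between ictal and seizure-free recordings with a t-test. The sketch below illustrates that idea under stated assumptions: the IMFs are stood in for by synthetic signals (a real pipeline would obtain them by empirical mode decomposition), the amplitude-squared weighting is an illustrative choice, and the 173.61 Hz sampling rate is only an assumption borrowed from a common public EEG dataset.

```python
# Illustrative weighted-frequency feature from an IMF's analytic signal,
# compared between two groups with a t-test. The IMFs here are synthetic
# stand-ins; a real pipeline would obtain them via empirical mode decomposition.
import numpy as np
from scipy.signal import hilbert
from scipy.stats import ttest_ind

def weighted_frequency(imf: np.ndarray, fs: float) -> float:
    """Amplitude-weighted mean instantaneous frequency (Hz) of one IMF."""
    analytic = hilbert(imf)
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * fs / (2.0 * np.pi)   # Hz, length N - 1
    weights = amplitude[:-1] ** 2                     # assumed weighting scheme
    return float(np.sum(weights * inst_freq) / np.sum(weights))

fs = 173.61                                           # assumed EEG sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
seizure_free = [weighted_frequency(np.sin(2 * np.pi * 10 * t)
                                   + 0.1 * rng.standard_normal(t.size), fs)
                for _ in range(20)]
ictal = [weighted_frequency(np.sin(2 * np.pi * 4 * t) * (1 + 0.5 * np.sin(2 * np.pi * 0.5 * t))
                            + 0.1 * rng.standard_normal(t.size), fs)
         for _ in range(20)]
t_stat, p_val = ttest_ind(seizure_free, ictal)        # group comparison, as in the t-test step
print(f"t = {t_stat:.2f}, p = {p_val:.3g}")
```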

  4. Classification of Targets and Distractors Present in Visual Hemifields Using Time-Frequency Domain EEG Features

    Directory of Open Access Journals (Sweden)

    Sweeti

    2018-01-01

    Full Text Available This paper presents a classification system to classify the cognitive load corresponding to targets and distractors present in opposite visual hemifields. The approach includes the study of EEG (electroencephalogram) signal features acquired in a spatial attention task. The process comprises EEG feature selection based on the feature distribution, followed by stepwise discriminant analysis (SDA)-based channel selection. Repeated-measures analysis of variance (rANOVA) is applied to test the statistical significance of the selected features. Classifiers are developed and compared using the selected features to classify the target and distractor present in the visual hemifields. The results provide a maximum classification accuracy of 87.2% and 86.1% and an average classification accuracy of 76.5 ± 4% and 76.2 ± 5.3% over the thirteen subjects for the two task conditions. These correlates present a step towards building a feature-based neurofeedback system for visual attention.
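    As a rough illustration of the selection-then-classification pipeline, the sketch below uses greedy forward feature selection with a linear discriminant classifier as a stand-in for the stepwise discriminant analysis (SDA) step; the data, feature counts, and selector settings are synthetic placeholders rather than the study's configuration.

```python
# Forward feature selection with a linear discriminant classifier, as a stand-in
# for SDA-based channel selection, followed by cross-validated classification.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((260, 64))   # trials x (time-frequency features over channels), synthetic
y = rng.integers(0, 2, 260)          # 1 = target condition, 0 = distractor condition

lda = LinearDiscriminantAnalysis()
selector = SequentialFeatureSelector(lda, n_features_to_select=10,
                                     direction="forward", cv=5)
model = make_pipeline(selector, lda)
print(f"mean CV accuracy: {cross_val_score(model, X, y, cv=5).mean():.3f}")
```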

  5. IRIS COLOUR CLASSIFICATION SCALES--THEN AND NOW.

    Science.gov (United States)

    Grigore, Mariana; Avram, Alina

    2015-01-01

    Eye colour is one of the most obvious phenotypic traits of an individual. Since the first documented classification scale developed in 1843, there have been numerous attempts to classify iris colour. In past centuries, iris colour classification scales have had various colour categories and mostly relied on comparison of an individual's eye with painted glass eyes. Once photography techniques were refined, standard iris photographs replaced painted eyes, but this did not solve the problem of painted/printed colour variability over time. Early clinical scales were easy to use, but lacked objectivity and were not standardised or statistically tested for reproducibility. The era of automated iris colour classification systems came with technological development. Spectrophotometry, digital analysis of high-resolution iris images, hyperspectral analysis of the real human iris and dedicated iris colour analysis software have all accomplished objective, accurate iris colour classification, but they are quite expensive and their use is limited to research environments. Iris colour classification systems have evolved continuously due to their use in a wide range of studies, especially in the fields of anthropology, epidemiology and genetics. Despite the wide range of existing scales, up until the present there has been no generally accepted iris colour classification scale.

  6. Dynamic Latent Classification Model

    DEFF Research Database (Denmark)

    Zhong, Shengtong; Martínez, Ana M.; Nielsen, Thomas Dyhre

    as possible. Motivated by this problem setting, we propose a generative model for dynamic classification in continuous domains. At each time point the model can be seen as combining a naive Bayes model with a mixture of factor analyzers (FA). The latent variables of the FA are used to capture the dynamics...

  7. Better physical activity classification using smartphone acceleration sensor.

    Science.gov (United States)

    Arif, Muhammad; Bilal, Mohsin; Kattan, Ahmed; Ahamed, S Iqbal

    2014-09-01

    Obesity is becoming one of the serious problems for the health of the worldwide population. Social interaction on mobile phones and computers via internet-based social networks is one of the major causes of the lack of physical activity. For the health specialist, it is important to track the record of physical activities of obese or overweight patients to supervise weight loss control. In this study, the acceleration sensor present in a smartphone is used to monitor the physical activity of the user. Physical activities including Walking, Jogging, Sitting, Standing, Walking upstairs and Walking downstairs are classified. Time domain features are extracted from the acceleration data recorded by the smartphone during different physical activities. The time and space complexity of the whole framework is reduced by optimal feature subset selection and pruning of instances. Classification results for the six physical activities are reported in this paper. Using simple time domain features, 99% classification accuracy is achieved. Furthermore, attribute subset selection is used to remove redundant features and to minimize the time complexity of the algorithm. A subset of 30 features produced more than 98% classification accuracy for the six physical activities.
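    A hedged sketch of the kind of pipeline described: simple time-domain features computed per window of triaxial accelerometer data, fed to an off-the-shelf classifier. The specific features, window length, and classifier are illustrative assumptions, not the paper's exact setup.

```python
# Time-domain features per accelerometer window, then a standard classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def time_domain_features(window: np.ndarray) -> np.ndarray:
    """window: (n_samples, 3) triaxial accelerometer readings for one segment."""
    feats = []
    for axis in range(window.shape[1]):
        x = window[:, axis]
        feats += [x.mean(), x.std(), np.abs(np.diff(x)).mean(), x.min(), x.max()]
    feats.append(np.sqrt((window ** 2).sum(axis=1)).mean())  # mean resultant magnitude
    return np.array(feats)

rng = np.random.default_rng(0)
windows = rng.standard_normal((200, 128, 3))     # synthetic stand-in for labelled windows
labels = rng.integers(0, 6, 200)                 # six activity classes
X = np.vstack([time_domain_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(f"cross-validated accuracy: {cross_val_score(clf, X, labels, cv=5).mean():.2f}")
```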

  8. Comparison of LMFBR piping response obtained using response spectrum and time history methods

    International Nuclear Information System (INIS)

    Hulbert, G.M.

    1981-04-01

    The dynamic response to a seismic event is calculated for a piping system using a response spectrum analysis method and two time history analysis methods. The results from the analytical methods are compared to identify causes for the differences between the sets of analytical results. Comparative methods are also presented which help to gain confidence in the accuracy of the analytical methods in predicting piping system structure response during seismic events

  9. Automatic Classification of Attacks on IP Telephony

    Directory of Open Access Journals (Sweden)

    Jakub Safarik

    2013-01-01

    Full Text Available This article proposes an algorithm for automatic analysis of attack data in an IP telephony network with a neural network. Data for the analysis are gathered from various monitoring applications running in the network. Such monitoring systems are a typical part of today's networks, but the information they provide is usually used only after an attack. Automatic classification of IP telephony attacks makes near-real-time classification possible, enabling counter-attack measures or mitigation of potential attacks. The classification uses the proposed neural network, and the article covers the design of the neural network and its practical implementation. It also contains methods for neural network learning and data-gathering functions from a honeypot application.
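    Purely as an illustration of the classification step, the sketch below trains a small feed-forward neural network on feature vectors that might be derived from monitoring or honeypot records; the feature set, class labels, and network shape are assumptions, not the article's implementation.

```python
# Small feed-forward network classifying per-event feature vectors (assumed features).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 8))   # e.g. call rate, failed registrations, source entropy, ...
y = rng.integers(0, 4, 500)         # e.g. scan / flood / SPIT / benign (placeholder labels)

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000,
                                    random_state=0))
model.fit(X, y)
print(model.predict(X[:5]))         # near-real-time classification of incoming events
```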

  10. Modeling an Application's Theoretical Minimum and Average Transactional Response Times

    Energy Technology Data Exchange (ETDEWEB)

    Paiz, Mary Rose [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-04-01

    The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that are results of unsuccessful transactions. On the converse, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
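    A sketch of the two estimators under stated assumptions: a generalized extreme value fit for the lower threshold of daily minimum response times and a weekly-seasonal ARIMA forecast interval for the upper threshold of daily averages. The report uses a non-stationary GEV; a stationary fit, arbitrary model orders, and synthetic data are used here for brevity.

```python
# Lower threshold from a GEV fit to daily minima; upper threshold from a
# weekly-seasonal ARIMA forecast interval on daily averages (synthetic data).
import numpy as np
from scipy import stats
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
daily_min = stats.genextreme.rvs(c=0.1, loc=120, scale=15, size=365, random_state=0)
daily_avg = 200 + 20 * np.sin(2 * np.pi * np.arange(365) / 7) + rng.normal(0, 10, 365)

c, loc, scale = stats.genextreme.fit(daily_min)               # stationary GEV for brevity
lower_threshold = stats.genextreme.ppf(0.01, c, loc=loc, scale=scale)

sarima = SARIMAX(daily_avg, order=(1, 0, 1), seasonal_order=(1, 0, 1, 7)).fit(disp=False)
upper_threshold = sarima.get_forecast(steps=7).conf_int(alpha=0.05)[:, 1]  # upper 95% bound

print(f"lower threshold (ms): {lower_threshold:.1f}")
print("upper thresholds (ms), next 7 days:", np.round(upper_threshold, 1))
```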

  11. Visualization and classification in biomedical terahertz pulsed imaging

    International Nuclear Information System (INIS)

    Loeffler, Torsten; Siebert, Karsten; Czasch, Stephanie; Bauer, Tobias; Roskos, Hartmut G

    2002-01-01

    'Visualization' in imaging is the process of extracting useful information from raw data in such a way that meaningful physical contrasts are developed. 'Classification' is the subsequent process of defining parameter ranges which allow us to identify elements of images such as different tissues or different objects. In this paper, we explore techniques for visualization and classification in terahertz pulsed imaging (TPI) for biomedical applications. For archived (formalin-fixed, alcohol-dehydrated and paraffin-mounted) test samples, we investigate both time- and frequency-domain methods based on bright- and dark-field TPI. Successful tissue classification is demonstrated

  12. Hot complaint intelligent classification based on text mining

    Directory of Open Access Journals (Sweden)

    XIA Haifeng

    2013-10-01

    Full Text Available The complaint recognizer system plays an important role in ensuring the correct classification of hot complaints and in improving the service quality of the telecommunications industry. Customer complaints in the telecommunications industry have the particularity that they must be handled within a limited time, which causes errors in the classification of hot complaints. The paper presents a model of intelligent hot complaint classification based on text mining, which can classify a hot complaint at the correct level of the complaint navigation. The examples show that the model can classify complaint texts efficiently.

  13. Time-based MRPC detector response simulations for the CBM time-of-flight system

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Christian; Herrmann, Norbert [Physikalisches Institut und Fakultaet fuer Physik und Astronomie, Ruprecht-Karls-Universitaet Heidelberg (Germany); Collaboration: CBM-Collaboration

    2016-07-01

    The design goal of the future Compressed Baryonic Matter (CBM) experiment is to measure rare probes of dense strongly interacting matter with unprecedented accuracy. Target interaction rates of up to 10 MHz need to be processed by the detector. The time-of-flight (TOF) wall of CBM, which should provide hadron identification at particle fluxes of up to a few tens of kHz/cm², is composed of high-resolution timing multi-gap resistive plate chambers (MRPCs). Due to the self-triggered digitization and readout scheme of CBM, which comprises online event reconstruction, preparatory Monte Carlo (MC) transport and response simulations including the MRPC array need to be carried out in a time-based fashion. While in an event-based simulation mode interference between MC tracks in a detector volume owing to rate effects or electronics dead time is confined to a single event, time-based response simulations need to take into account track pile-up and interference across events. A proposed time-based digitizer class for CBM-TOF within the CbmRoot software framework is presented.

  14. A Box-Cox normal model for response times

    NARCIS (Netherlands)

    Klein Entink, R.H.; Fox, J.P.; Linden, W.J. van der

    2009-01-01

    The log-transform has been a convenient choice in response time modelling on test items. However, motivated by a dataset of the Medical College Admission Test where the lognormal model violated the normality assumption, the possibilities of the broader class of Box–Cox transformations for response
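    For orientation, a minimal sketch of the Box-Cox idea applied to response times: estimate the transformation parameter and check normality of the transformed values. The cited model embeds the transformation in a full item response framework, which this marginal fit does not reproduce.

```python
# Estimate the Box-Cox parameter for skewed response times and check normality.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rt = rng.lognormal(mean=0.0, sigma=0.4, size=1000) + 0.3   # skewed response times (s)

transformed, lam = stats.boxcox(rt)      # lambda = 0 would recover the log-transform
print(f"estimated lambda = {lam:.2f}")
print(stats.shapiro(transformed[:500]))  # normality check on the transformed RTs
```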

  15. Involvement of Machine Learning for Breast Cancer Image Classification: A Survey.

    Science.gov (United States)

    Nahid, Abdullah-Al; Kong, Yinan

    2017-01-01

    Breast cancer is one of the largest causes of women's death in the world today. Advance engineering of natural image classification techniques and Artificial Intelligence methods has largely been used for the breast-image classification task. The involvement of digital image classification allows the doctor and the physicians a second opinion, and it saves the doctors' and physicians' time. Despite the various publications on breast image classification, very few review papers are available which provide a detailed description of breast cancer image classification techniques, feature extraction and selection procedures, classification measuring parameterizations, and image classification findings. We have put a special emphasis on the Convolutional Neural Network (CNN) method for breast image classification. Along with the CNN method we have also described the involvement of the conventional Neural Network (NN), Logic Based classifiers such as the Random Forest (RF) algorithm, Support Vector Machines (SVM), Bayesian methods, and a few of the semisupervised and unsupervised methods which have been used for breast image classification.

  16. Involvement of Machine Learning for Breast Cancer Image Classification: A Survey

    Directory of Open Access Journals (Sweden)

    Abdullah-Al Nahid

    2017-01-01

    Full Text Available Breast cancer is one of the largest causes of women’s death in the world today. Advance engineering of natural image classification techniques and Artificial Intelligence methods has largely been used for the breast-image classification task. The involvement of digital image classification allows the doctor and the physicians a second opinion, and it saves the doctors’ and physicians’ time. Despite the various publications on breast image classification, very few review papers are available which provide a detailed description of breast cancer image classification techniques, feature extraction and selection procedures, classification measuring parameterizations, and image classification findings. We have put a special emphasis on the Convolutional Neural Network (CNN method for breast image classification. Along with the CNN method we have also described the involvement of the conventional Neural Network (NN, Logic Based classifiers such as the Random Forest (RF algorithm, Support Vector Machines (SVM, Bayesian methods, and a few of the semisupervised and unsupervised methods which have been used for breast image classification.

  17. Next-Generation Library Catalogs and the Problem of Slow Response Time

    Directory of Open Access Journals (Sweden)

    Margaret Brown-Sica

    2010-12-01

    Full Text Available Response time as defined for this study is the time that it takes for all files that constitute a single webpage to travel across the Internet from a Web server to the end user’s browser. In this study, the authors tested response times on queries for identical items in five different library catalogs, one of them a next-generation (NextGen catalog. The authors also discuss acceptable response time and how it may affect the discovery process. They suggest that librarians and vendors should develop standards for acceptable response time and use it in the product selection and development processes.
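    A rough probe in the spirit of the study might time how long a single catalog query takes from request to fully downloaded response body, as sketched below; the URLs are placeholders, and a faithful measurement would also fetch every asset the page references (images, CSS, JavaScript), for example via browser automation.

```python
# Time a catalog query from request to fully downloaded response body.
import time
import requests

CATALOGS = {
    "catalog_a": "https://example.org/search?q=hamlet",   # placeholder URLs
    "catalog_b": "https://example.net/search?q=hamlet",
}

for name, url in CATALOGS.items():
    start = time.perf_counter()
    response = requests.get(url, timeout=30)
    _ = response.content                    # force the full body to download
    elapsed = time.perf_counter() - start
    print(f"{name}: {elapsed:.2f} s (HTTP {response.status_code})")
```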

  18. Deep Recurrent Neural Networks for Supernovae Classification

    Science.gov (United States)

    Charnock, Tom; Moss, Adam

    2017-03-01

    We apply deep recurrent neural networks, which are capable of learning complex sequential information, to classify supernovae (code available at https://github.com/adammoss/supernovae). The observational time and filter fluxes are used as inputs to the network, but since the inputs are agnostic, additional data such as host galaxy information can also be included. Using the Supernovae Photometric Classification Challenge (SPCC) data, we find that deep networks are capable of learning about light curves, however the performance of the network is highly sensitive to the amount of training data. For a training size of 50% of the representational SPCC data set (around 10^4 supernovae) we obtain a type-Ia versus non-type-Ia classification accuracy of 94.7%, an area under the Receiver Operating Characteristic curve AUC of 0.986 and an SPCC figure-of-merit F1 = 0.64. When using only the data for the early-epoch challenge defined by the SPCC, we achieve a classification accuracy of 93.1%, AUC of 0.977, and F1 = 0.58, results almost as good as with the whole light curve. By employing bidirectional neural networks, we can acquire impressive classification results between supernovae types I, II and III at an accuracy of 90.4% and AUC of 0.974. We also apply a pre-trained model to obtain classification probabilities as a function of time and show that it can give early indications of supernovae type. Our method is competitive with existing algorithms and has applications for future large-scale photometric surveys.
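    A hedged sketch of the general approach, not the authors' implementation (their code is linked above): a bidirectional recurrent network over padded (time, flux) sequences for type-Ia versus non-Ia classification, with layer sizes and synthetic data chosen purely for illustration.

```python
# Bidirectional LSTM over light-curve sequences (synthetic data, illustrative sizes).
import numpy as np
import tensorflow as tf

n_sne, max_obs, n_features = 1000, 60, 5   # time since first observation + 4 filter fluxes
rng = np.random.default_rng(0)
X = rng.standard_normal((n_sne, max_obs, n_features)).astype("float32")
y = rng.integers(0, 2, n_sne)              # 1 = type Ia, 0 = non-Ia (synthetic labels)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(max_obs, n_features)),
    tf.keras.layers.Masking(mask_value=0.0),            # zero-padded epochs are ignored
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(16)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.fit(X, y, epochs=2, batch_size=32, validation_split=0.2, verbose=0)
```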

  19. The effect of person misfit on classification decisions

    NARCIS (Netherlands)

    Hendrawan, I.; Glas, Cornelis A.W.; Meijer, R.R.

    2001-01-01

    The effect of person misfit to an item response theory (IRT) model on a mastery/nonmastery decision was investigated. Also investigated was whether the classification precision can be improved by identifying misfitting respondents using person-fit statistics. A simulation study was conducted to

  20. Classification of light sources and their interaction with active and passive environments

    Science.gov (United States)

    El-Dardiry, Ramy G. S.; Faez, Sanli; Lagendijk, Ad

    2011-03-01

    Emission from a molecular light source depends on its optical and chemical environment. This dependence is different for various sources. We present a general classification in terms of constant-amplitude and constant-power sources. Using this classification, we have described the response to both changes in the local density of states and stimulated emission. The unforeseen consequences of this classification are illustrated for photonic studies by random laser experiments and are in good agreement with our correspondingly developed theory. Our results require a revision of studies on sources in complex media.

  1. Classification of light sources and their interaction with active and passive environments

    International Nuclear Information System (INIS)

    El-Dardiry, Ramy G. S.; Faez, Sanli; Lagendijk, Ad

    2011-01-01

    Emission from a molecular light source depends on its optical and chemical environment. This dependence is different for various sources. We present a general classification in terms of constant-amplitude and constant-power sources. Using this classification, we have described the response to both changes in the local density of states and stimulated emission. The unforeseen consequences of this classification are illustrated for photonic studies by random laser experiments and are in good agreement with our correspondingly developed theory. Our results require a revision of studies on sources in complex media.

  2. The United Nations Framework Classification for World Petroleum Resources

    Science.gov (United States)

    Ahlbrandt, T.S.; Blystad, P.; Young, E.D.; Slavov, S.; Heiberg, S.

    2003-01-01

    The United Nations has developed an international framework classification for solid fuels and minerals (UNFC). This is now being extended to petroleum by building on the joint classification of the Society of Petroleum Engineers (SPE), the World Petroleum Congresses (WPC) and the American Association of Petroleum Geologists (AAPG). The UNFC is a 3-dimensional classification. This is necessary in order to migrate accounts of resource quantities that are developed on only one or two of the axes to the common basis, and it provides for more precise reporting and analysis, which is particularly useful in analyses of contingent resources. The characteristics of the SPE/WPC/AAPG classification have been preserved and enhanced to facilitate improved international and national petroleum resource management, corporate business process management and financial reporting. A UN intergovernmental committee responsible for extending the UNFC to extractive energy resources (coal, petroleum and uranium) will meet in Geneva on October 30th and 31st to review experiences gained and comments received during 2003. A recommended classification will then be delivered for consideration to the United Nations through the Committee on Sustainable Energy of the Economic Commission for Europe (UN ECE).

  3. Integrating human and machine intelligence in galaxy morphology classification tasks

    Science.gov (United States)

    Beck, Melanie R.; Scarlata, Claudia; Fortson, Lucy F.; Lintott, Chris J.; Simmons, B. D.; Galloway, Melanie A.; Willett, Kyle W.; Dickinson, Hugh; Masters, Karen L.; Marshall, Philip J.; Wright, Darryl

    2018-06-01

    Quantifying galaxy morphology is a challenging yet scientifically rewarding task. As the scale of data continues to increase with upcoming surveys, traditional classification methods will struggle to handle the load. We present a solution through an integration of visual and automated classifications, preserving the best features of both human and machine. We demonstrate the effectiveness of such a system through a re-analysis of visual galaxy morphology classifications collected during the Galaxy Zoo 2 (GZ2) project. We reprocess the top-level question of the GZ2 decision tree with a Bayesian classification aggregation algorithm dubbed SWAP, originally developed for the Space Warps gravitational lens project. Through a simple binary classification scheme, we increase the classification rate nearly 5-fold classifying 226 124 galaxies in 92 d of GZ2 project time while reproducing labels derived from GZ2 classification data with 95.7 per cent accuracy. We next combine this with a Random Forest machine learning algorithm that learns on a suite of non-parametric morphology indicators widely used for automated morphologies. We develop a decision engine that delegates tasks between human and machine and demonstrate that the combined system provides at least a factor of 8 increase in the classification rate, classifying 210 803 galaxies in just 32 d of GZ2 project time with 93.1 per cent accuracy. As the Random Forest algorithm requires a minimal amount of computational cost, this result has important implications for galaxy morphology identification tasks in the era of Euclid and other large-scale surveys.
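    The machine half of the system can be illustrated with a random forest trained on non-parametric morphology indicators (for example concentration, asymmetry, Gini, M20); the sketch below uses synthetic stand-ins for those features and does not reproduce SWAP, the Bayesian aggregation of volunteer votes.

```python
# Random forest on (synthetic) non-parametric morphology indicators.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 4))   # stand-ins for concentration, asymmetry, Gini, M20
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.5, 5000) > 0).astype(int)  # smooth vs. featured

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"hold-out accuracy: {accuracy_score(y_te, forest.predict(X_te)):.3f}")
```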

  4. Experience with RTD response time testing in nuclear power plants

    International Nuclear Information System (INIS)

    Hashemian, H.M.; Kerlin, T.W.

    1985-01-01

    The reactor coolant temperatures in pressurized water reactors are measured with platinum resistance temperature detectors (RTDs). The information furnished by these RTDs is used for plant protection as well as control. As a part of the plant protection system, the RTDs must respond to temperature changes in a timely fashion. The RTD response time requirements are different for the various plant types. These requirements are specified in the plant technical specifications in terms of an RTD time constant. The current time constant requirements for nuclear plant RTDs vary from 0.5 seconds to 13.0 seconds depending on the type of the plant. Therefore, different types of RTDs are used in different plants to achieve the required time constants. In addition, in-situ response time tests are periodically performed on protective system RTDs to ensure that the in-service time constants are within acceptable limits as the plant is operating. The periodic testing is important because response time degradation may occur while the RTD ages in the process. Recent response time tests in operating plants revealed unacceptable time constants for several protection system RTDs. As a result, these plants had to be shut down to resolve the problem, which in one case was due to improper installation and in another case was caused by degradation of a thermal compound used in the thermowell.

  5. Overeating at dinner time among Japanese workers: Is overeating related to stress response and late dinner times?

    Science.gov (United States)

    Suzuki, Akiko; Sakurazawa, Hirofumi; Fujita, Takanori; Akamatsu, Rie

    2016-06-01

    There are several known risk factors for overeating, including negative feelings and hunger. It was hypothesized that overtime work is associated with stress responses and later dinner times, leading to longer periods of time without eating, and that this, in turn, leads to a strong experience of hunger and consequent overeating at dinner. The aim of this study was to examine relationships among overeating at dinner, stress responses (e.g., fatigue, anxiety, and depression), and dinner times in Japanese male workers. In December 2012, 255 Japanese male workers at a leasing company completed a self-report questionnaire about overeating at dinner, psychological stress responses, physical stress responses, and dinner times. Each worker was sent an email with a link to the questionnaire website, where his answers were collected. Relationships between overeating at dinner and lifestyle issues were investigated using multiple linear regression analysis treating overeating as a dependent variable. Factors related to overeating at dinner included psychological stress response (β = 0.251 p overeating at dinner is related to dinner time in men and to stress responses. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. The response time distribution in a real-time database with optimistic concurrency control

    NARCIS (Netherlands)

    Sassen, S.A.E.; Wal, van der J.

    1996-01-01

    For a real-time shared-memory database with optimistic concurrency control, an approximation for the transaction response time distribution is obtained. The model assumes that transactions arrive at the database according to a Poisson process, that every transaction uses an equal number of

  7. Potential of turbidity monitoring for real time control of pollutant discharge in sewers during rainfall events

    OpenAIRE

    LACOUR, Céline; JOANNIS, Claude; GROMAIRE, MC; CHEBBO, Ghassan

    2009-01-01

    Turbidity sensors can be used to continuously monitor the evolution of pollutant mass discharge. For two sites within the Paris combined sewer system, continuous turbidity, conductivity and flow data were recorded at one-minute time intervals over a one-year period. This paper is intended to highlight the variability in turbidity dynamics during wet weather. For each storm event, turbidity response aspects were analysed through different classifications. The correlation between classification...

  8. Integrated response and transit time distributions of watersheds by combining hydrograph separation and long-term transit time modeling

    Directory of Open Access Journals (Sweden)

    M. C. Roa-García

    2010-08-01

    Full Text Available We present a new modeling approach analyzing and predicting the Transit Time Distribution (TTD and the Response Time Distribution (RTD from hourly to annual time scales as two distinct hydrological processes. The model integrates Isotope Hydrograph Separation (IHS and the Instantaneous Unit Hydrograph (IUH approach as a tool to provide a more realistic description of transit and response time of water in catchments. Individual event simulations and parameterizations were combined with long-term baseflow simulation and parameterizations; this provides a comprehensive picture of the catchment response for a long time span for the hydraulic and isotopic processes. The proposed method was tested in three Andean headwater catchments to compare the effects of land use on hydrological response and solute transport. Results show that the characteristics of events and antecedent conditions have a significant influence on TTD and RTD, but in general the RTD of the grassland dominated catchment is concentrated in the shorter time spans and has a higher cumulative TTD, while the forest dominated catchment has a relatively higher response distribution and lower cumulative TTD. The catchment where wetlands concentrate shows a flashier response, but wetlands also appear to prolong transit time.
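    The response-time side of the model can be illustrated by convolving effective rainfall with an instantaneous unit hydrograph; the two-parameter gamma IUH and the parameter values in the sketch below are illustrative assumptions, not the paper's calibrated functions.

```python
# Quick-flow response as the convolution of effective rainfall with a gamma IUH.
import numpy as np
from scipy.stats import gamma

dt = 1.0                                  # hourly time step
t = np.arange(0, 72, dt)                  # hours
iuh = gamma.pdf(t, a=2.0, scale=4.0)      # assumed IUH, mean response time a * scale = 8 h
iuh /= iuh.sum() * dt                     # normalise to unit volume

rain = np.zeros_like(t)
rain[5:8] = [4.0, 10.0, 3.0]              # effective rainfall pulse (mm/h)

quickflow = np.convolve(rain, iuh)[: t.size] * dt   # response hydrograph (mm/h)
print(f"peak response {quickflow.max():.2f} mm/h at t = {t[quickflow.argmax()]:.0f} h")
```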

  9. A specialist-generalist classification of the arable flora and its response to changes in agricultural practices

    Science.gov (United States)

    2010-01-01

    Background: Theory in ecology points out the potential link between the degree of specialisation of organisms and their responses to disturbances, and suggests that this could be a key element for understanding the assembly of communities. We evaluated this question for the arable weed flora, as this group has scarcely been the focus of ecological studies so far and because weeds are restricted to habitats characterised by very high degrees of disturbance. As such, weeds offer a case study to ask how specialisation relates to the abundance and distribution of species in relation to the varying disturbance regimes occurring in arable crops. Results: We used data derived from an extensive national monitoring network of approximately 700 arable fields scattered across France to quantify the degree of specialisation of 152 weed species using six different ecological methods. We then explored the impact of the level of disturbance occurring in arable fields by comparing the degree of specialisation of weed communities in contrasting field situations. The classification of species as specialist or generalist was consistent between different ecological indices. When applied to a large-scale data set across France, this classification highlighted that monocultures harbour significantly more specialists than crop rotations, suggesting that crop rotation increases the abundance of generalist species rather than sets of species that are each specialised to the individual crop types grown in the rotation. Applied to a diachronic dataset, the classification also shows that the proportion of specialist weed species has significantly decreased in cultivated fields over the last 30 years, which suggests a biotic homogenization of agricultural landscapes. Conclusions: This study shows that the concept of generalist/specialist species is particularly relevant to understanding the effect of anthropogenic disturbances on the evolution of plant community composition and that ecological theories

  10. Hand eczema classification

    DEFF Research Database (Denmark)

    Diepgen, T L; Andersen, Klaus Ejner; Brandao, F M

    2008-01-01

    of the disease is rarely evidence based, and a classification system for different subdiagnoses of hand eczema is not agreed upon. Randomized controlled trials investigating the treatment of hand eczema are called for. For this, as well as for clinical purposes, a generally accepted classification system … A classification system for hand eczema is proposed. Conclusions: It is suggested that this classification be used in clinical work and in clinical trials.

  11. Classification for longevity potential: the use of novel biomarkers

    Directory of Open Access Journals (Sweden)

    Marian Beekman

    2016-10-01

    Full Text Available Background: In older people, chronological age may not be the best predictor of residual lifespan and mortality, because with age the heterogeneity in health increases. Biomarkers for biological age and residual lifespan are being developed to predict disease and mortality better at an individual level than chronological age. In the current paper we aim to classify a group of older people into those with longevity potential and controls. Methods: The Leiden Longevity Study included 1671 offspring of nonagenarian siblings, as the group with longevity potential, and 744 similarly aged controls. Using known risk factors for cardiovascular disease, previously reported markers for human longevity and other physiological measures as predictors, classification models for longevity potential were constructed with multiple logistic regression of the offspring-control status. Results: The Framingham Risk Score is predictive of longevity potential (AUC = 64.7). Physiological parameters involved in immune responses and in glucose, lipid and energy metabolism further improve the prediction performance for longevity potential (AUC males = 71.4, AUC females = 68.7). Conclusion: Using the Framingham Risk Score, the classification of older people into groups with longevity potential and controls is moderate, but it can be improved to a reasonably good classification in combination with markers of immune response and of glucose, lipid and energy metabolism. We show that individual classification of older people for longevity potential may be feasible using biomarkers from a wide variety of different biological processes.
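    A minimal sketch of the classification step: logistic regression of offspring-versus-control status, evaluated by AUC, first on a Framingham-style score alone and then with additional biomarkers. The data are synthetic stand-ins, so the printed AUCs will not match the study's values.

```python
# Logistic regression of offspring-vs-control status, evaluated by cross-validated AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 1671 + 744
y = np.r_[np.ones(1671), np.zeros(744)]                  # 1 = offspring (longevity potential)
framingham = rng.normal(0.3 * y, 1.0)                    # synthetic, weakly informative score
biomarkers = rng.normal(0.4 * y[:, None], 1.0, (n, 5))   # glucose, lipids, immune markers, ...

X_base = framingham[:, None]
X_full = np.column_stack([X_base, biomarkers])
for name, X in [("Framingham only", X_base), ("Framingham + biomarkers", X_full)]:
    proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                              cv=5, method="predict_proba")[:, 1]
    print(f"{name}: AUC = {roc_auc_score(y, proba):.3f}")
```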

  12. A neurally inspired musical instrument classification system based upon the sound onset.

    Science.gov (United States)

    Newton, Michael J; Smith, Leslie S

    2012-06-01

    Physiological evidence suggests that sound onset detection in the auditory system may be performed by specialized neurons as early as the cochlear nucleus. Psychoacoustic evidence shows that the sound onset can be important for the recognition of musical sounds. Here the sound onset is used in isolation to form tone descriptors for a musical instrument classification task. The task involves 2085 isolated musical tones from the McGill dataset across five instrument categories. A neurally inspired tone descriptor is created using a model of the auditory system's response to sound onset. A gammatone filterbank and spiking onset detectors, built from dynamic synapses and leaky integrate-and-fire neurons, create parallel spike trains that emphasize the sound onset. These are coded as a descriptor called the onset fingerprint. Classification uses a time-domain neural network, the echo state network. Reference strategies, based upon mel-frequency cepstral coefficients, evaluated either over the whole tone or only during the sound onset, provide context to the method. Classification success rates for the neurally-inspired method are around 75%. The cepstral methods perform between 73% and 76%. Further testing with tones from the Iowa MIS collection shows that the neurally inspired method is considerably more robust when tested with data from an unrelated dataset.
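    The cepstral reference strategy mentioned above can be sketched as MFCCs computed only over the sound onset, averaged into a tone descriptor and fed to a standard classifier; the synthetic tones, the 100 ms onset window, and the k-nearest-neighbour classifier are assumptions, and the neurally inspired onset fingerprint itself is not reproduced.

```python
# MFCC descriptor restricted to the onset, then a k-NN classifier (synthetic tones).
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

sr = 22050
onset_len = int(0.1 * sr)                          # assumed 100 ms onset window

def onset_mfcc_descriptor(tone: np.ndarray) -> np.ndarray:
    mfcc = librosa.feature.mfcc(y=tone[:onset_len], sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)                       # average over onset frames

rng = np.random.default_rng(0)
t = np.arange(onset_len * 4) / sr
tones, labels = [], []
for label, f0, attack in [("brass", 233.0, 50.0), ("string", 440.0, 200.0)]:
    for _ in range(10):
        env = 1.0 - np.exp(-t * attack)            # different attack speeds per "instrument"
        tones.append(env * np.sin(2 * np.pi * f0 * t) + 0.01 * rng.standard_normal(t.size))
        labels.append(label)

X = np.vstack([onset_mfcc_descriptor(tone) for tone in tones])
clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print(clf.predict(X[:2]))
```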

  13. Integrated tracking, classification, and sensor management theory and applications

    CERN Document Server

    Krishnamurthy, Vikram; Vo, Ba-Ngu

    2012-01-01

    A unique guide to the state of the art of tracking, classification, and sensor management. This book addresses the tremendous progress made over the last few decades in algorithm development and mathematical analysis for filtering, multi-target multi-sensor tracking, sensor management and control, and target classification. It provides for the first time an integrated treatment of these advanced topics, complete with careful mathematical formulation, clear description of the theory, and real-world applications. Written by experts in the field, Integrated Tracking, Classification, and Sensor Management provides readers with easy access to key Bayesian modeling and filtering methods, multi-target tracking approaches, target classification procedures, and large scale sensor management problem-solving techniques.

  14. Classification of the web

    DEFF Research Database (Denmark)

    Mai, Jens Erik

    2004-01-01

    This paper discusses the challenges faced by investigations into the classification of the Web and outlines inquiries that are needed to use principles for bibliographic classification to construct classifications of the Web. This paper suggests that the classification of the Web meets challenges...... that call for inquiries into the theoretical foundation of bibliographic classification theory....

  15. Security classification of information

    Energy Technology Data Exchange (ETDEWEB)

    Quist, A.S.

    1993-04-01

    This document is the second of a planned four-volume work that comprehensively discusses the security classification of information. The main focus of Volume 2 is on the principles for classification of information. Included herein are descriptions of the two major types of information that governments classify for national security reasons (subjective and objective information), guidance to use when determining whether information under consideration for classification is controlled by the government (a necessary requirement for classification to be effective), information disclosure risks and benefits (the benefits and costs of classification), standards to use when balancing information disclosure risks and benefits, guidance for assigning classification levels (Top Secret, Secret, or Confidential) to classified information, guidance for determining how long information should be classified (classification duration), classification of associations of information, classification of compilations of information, and principles for declassifying and downgrading information. Rules or principles of certain areas of our legal system (e.g., trade secret law) are sometimes mentioned to provide added support to some of those classification principles.

  16. Proposal of a new classification scheme for periocular injuries

    Directory of Open Access Journals (Sweden)

    Devi Prasad Mohapatra

    2017-01-01

    Full Text Available Background: Eyelids are important structures that play a role in protecting the globe from trauma and brightness, in maintaining the integrity of the tear film and moving tears towards the lacrimal drainage system, and in the aesthetic appearance of the face. Ophthalmic trauma is an important cause of morbidity among individuals and is also responsible for additional healthcare costs. Periocular trauma involving the eyelids and adjacent structures has been found to have increased recently, probably due to the increased pace of life and increased dependence on machinery. A comprehensive classification of periocular trauma would help in stratifying these injuries as well as in studying outcomes. Material and Methods: This study was carried out at our institute from June 2015 to Dec 2015. We searched multiple English language databases for existing classification systems for periocular trauma. We designed a system of classification of periocular soft tissue injuries based on clinico-anatomical presentations. This classification was applied prospectively to patients presenting with periocular soft tissue injuries to our department. Results: A comprehensive classification scheme was designed consisting of five types of periocular injuries. A total of 38 eyelid injuries in 34 patients were evaluated in this study. According to the System for Peri-Ocular Trauma (SPOT) classification, Type V injuries were most common. SPOT Type II injuries were the most common isolated injuries among all zones. Discussion: Classification systems are necessary in order to provide a framework in which to scientifically study the etiology, pathogenesis, and treatment of diseases in an orderly fashion. The SPOT classification takes into account periocular soft tissue injuries, i.e., upper eyelid, lower eyelid, and medial and lateral canthus injuries, based on observed clinico-anatomical patterns of eyelid injuries. Conclusion: The SPOT classification seems to be a reliable

  17. Aircraft Fault Detection Using Real-Time Frequency Response Estimation

    Science.gov (United States)

    Grauer, Jared A.

    2016-01-01

    A real-time method for estimating time-varying aircraft frequency responses from input and output measurements was demonstrated. The Bat-4 subscale airplane was used with NASA Langley Research Center's AirSTAR unmanned aerial flight test facility to conduct flight tests and collect data for dynamic modeling. Orthogonal phase-optimized multisine inputs, summed with pilot stick and pedal inputs, were used to excite the responses. The aircraft was tested in its normal configuration and with emulated failures, which included a stuck left ruddervator and an increased command path latency. No prior knowledge of a dynamic model was used or available for the estimation. The longitudinal short period dynamics were investigated in this work. Time-varying frequency responses and stability margins were tracked well using a 20 second sliding window of data, as compared to a post-flight analysis using output error parameter estimation and a low-order equivalent system model. This method could be used in a real-time fault detection system, or for other applications of dynamic modeling such as real-time verification of stability margins during envelope expansion tests.
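    The core computation can be sketched as follows: over a sliding window, estimate the frequency response at the multisine excitation frequencies as the ratio of output to input Fourier coefficients. The window length, excitation frequencies, and the toy first-order system standing in for flight data are assumptions.

```python
# Sliding-window frequency response estimate at the multisine excitation frequencies.
import numpy as np
from scipy.signal import TransferFunction, lsim

fs, window_s = 50.0, 20.0                        # sample rate (Hz), window length (s)
t = np.arange(0, 60, 1 / fs)
freqs = np.array([0.4, 0.8, 1.6, 3.2])           # multisine excitation frequencies (Hz)
u = sum(np.sin(2 * np.pi * f * t + k) for k, f in enumerate(freqs))   # summed input

# Toy first-order system standing in for the measured aircraft response, plus noise.
_, y, _ = lsim(TransferFunction([2.0], [0.5, 1.0]), U=u, T=t)
y = y + 0.05 * np.random.default_rng(0).standard_normal(t.size)

n_win = int(window_s * fs)
for start in range(0, t.size - n_win + 1, n_win // 2):       # 50% overlapping windows
    tw, uw, yw = t[start:start + n_win], u[start:start + n_win], y[start:start + n_win]
    basis = np.exp(-2j * np.pi * np.outer(freqs, tw))        # Fourier basis at excitation freqs
    H = (basis @ yw) / (basis @ uw)                          # frequency response estimate
    print(np.round(20 * np.log10(np.abs(H)), 1))             # gain (dB) per frequency
```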

  18. The response-time distribution in a real-time database with optimistic concurrency control and constant execution times

    NARCIS (Netherlands)

    Sassen, S.A.E.; Wal, van der J.

    1997-01-01

    For a real-time shared-memory database with optimistic concurrency control, an approximation for the transaction response-time distribution is obtained. The model assumes that transactions arrive at the database according to a Poisson process, that every transaction uses an equal number of

  19. The response-time distribution in a real-time database with optimistic concurrency control and exponential execution times

    NARCIS (Netherlands)

    Sassen, S.A.E.; Wal, van der J.

    1997-01-01

    For a real-time shared-memory database with optimistic concurrency control, an approximation for the transaction response-time distribution is obtained. The model assumes that transactions arrive at the database according to a Poisson process, that every transaction takes an exponential execution

  20. Cancer classification using the Immunoscore: a worldwide task force.

    Science.gov (United States)

    Galon, Jérôme; Pagès, Franck; Marincola, Francesco M; Angell, Helen K; Thurin, Magdalena; Lugli, Alessandro; Zlobec, Inti; Berger, Anne; Bifulco, Carlo; Botti, Gerardo; Tatangelo, Fabiana; Britten, Cedrik M; Kreiter, Sebastian; Chouchane, Lotfi; Delrio, Paolo; Arndt, Hartmann; Asslaber, Martin; Maio, Michele; Masucci, Giuseppe V; Mihm, Martin; Vidal-Vanaclocha, Fernando; Allison, James P; Gnjatic, Sacha; Hakansson, Leif; Huber, Christoph; Singh-Jasuja, Harpreet; Ottensmeier, Christian; Zwierzina, Heinz; Laghi, Luigi; Grizzi, Fabio; Ohashi, Pamela S; Shaw, Patricia A; Clarke, Blaise A; Wouters, Bradly G; Kawakami, Yutaka; Hazama, Shoichi; Okuno, Kiyotaka; Wang, Ena; O'Donnell-Tormey, Jill; Lagorce, Christine; Pawelec, Graham; Nishimura, Michael I; Hawkins, Robert; Lapointe, Réjean; Lundqvist, Andreas; Khleif, Samir N; Ogino, Shuji; Gibbs, Peter; Waring, Paul; Sato, Noriyuki; Torigoe, Toshihiko; Itoh, Kyogo; Patel, Prabhu S; Shukla, Shilin N; Palmqvist, Richard; Nagtegaal, Iris D; Wang, Yili; D'Arrigo, Corrado; Kopetz, Scott; Sinicrope, Frank A; Trinchieri, Giorgio; Gajewski, Thomas F; Ascierto, Paolo A; Fox, Bernard A

    2012-10-03

    Prediction of clinical outcome in cancer is usually achieved by histopathological evaluation of tissue samples obtained during surgical resection of the primary tumor. Traditional tumor staging (AJCC/UICC-TNM classification) summarizes data on tumor burden (T), presence of cancer cells in draining and regional lymph nodes (N) and evidence for metastases (M). However, it is now recognized that clinical outcome can significantly vary among patients within the same stage. The current classification provides limited prognostic information, and does not predict response to therapy. Recent literature has alluded to the importance of the host immune system in controlling tumor progression. Thus, evidence supports the notion to include immunological biomarkers, implemented as a tool for the prediction of prognosis and response to therapy. Accumulating data, collected from large cohorts of human cancers, has demonstrated the impact of immune-classification, which has a prognostic value that may add to the significance of the AJCC/UICC TNM-classification. It is therefore imperative to begin to incorporate the 'Immunoscore' into traditional classification, thus providing an essential prognostic and potentially predictive tool. Introduction of this parameter as a biomarker to classify cancers, as part of routine diagnostic and prognostic assessment of tumors, will facilitate clinical decision-making including rational stratification of patient treatment. Equally, the inherent complexity of quantitative immunohistochemistry, in conjunction with protocol variation across laboratories, analysis of different immune cell types, inconsistent region selection criteria, and variable ways to quantify immune infiltration, all underline the urgent requirement to reach assay harmonization. In an effort to promote the Immunoscore in routine clinical settings, an international task force was initiated. This review represents a follow-up of the announcement of this initiative, and of the J

  1. Cancer classification using the Immunoscore: a worldwide task force

    Directory of Open Access Journals (Sweden)

    Galon Jérôme

    2012-10-01

    Full Text Available Abstract Prediction of clinical outcome in cancer is usually achieved by histopathological evaluation of tissue samples obtained during surgical resection of the primary tumor. Traditional tumor staging (AJCC/UICC-TNM classification summarizes data on tumor burden (T, presence of cancer cells in draining and regional lymph nodes (N and evidence for metastases (M. However, it is now recognized that clinical outcome can significantly vary among patients within the same stage. The current classification provides limited prognostic information, and does not predict response to therapy. Recent literature has alluded to the importance of the host immune system in controlling tumor progression. Thus, evidence supports the notion to include immunological biomarkers, implemented as a tool for the prediction of prognosis and response to therapy. Accumulating data, collected from large cohorts of human cancers, has demonstrated the impact of immune-classification, which has a prognostic value that may add to the significance of the AJCC/UICC TNM-classification. It is therefore imperative to begin to incorporate the ‘Immunoscore’ into traditional classification, thus providing an essential prognostic and potentially predictive tool. Introduction of this parameter as a biomarker to classify cancers, as part of routine diagnostic and prognostic assessment of tumors, will facilitate clinical decision-making including rational stratification of patient treatment. Equally, the inherent complexity of quantitative immunohistochemistry, in conjunction with protocol variation across laboratories, analysis of different immune cell types, inconsistent region selection criteria, and variable ways to quantify immune infiltration, all underline the urgent requirement to reach assay harmonization. In an effort to promote the Immunoscore in routine clinical settings, an international task force was initiated. This review represents a follow-up of the announcement of

  2. Towards Automatic Classification of Wikipedia Content

    Science.gov (United States)

    Szymański, Julian

    Wikipedia - the Free Encyclopedia encounters the problem of proper classification of new articles every day. The process of assigning articles to categories is performed manually and is a time-consuming task. It requires knowledge about the Wikipedia structure, which is beyond typical editor competence, and this leads to human-caused mistakes - omitted or wrong assignments of articles to categories. The article presents an application of an SVM classifier for automatic classification of documents from The Free Encyclopedia. The classifier has been tested using two text representations: inter-document connections (hyperlinks) and word content. The results of the experiments, evaluated on hand-crafted data, show that the Wikipedia classification process can be partially automated. The proposed approach can be used for building a decision support system which suggests to editors the best categories that fit new content entered into Wikipedia.
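
    As a rough illustration of the word-content route described above (not the authors' implementation), the following Python sketch trains a linear SVM on TF-IDF features; the toy articles and category labels are hypothetical stand-ins.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        # Hypothetical toy corpus: article texts and their (single) category labels.
        articles = [
            "Quantum mechanics describes nature at atomic scales.",
            "The midfielder scored twice in the league final.",
            "Photosynthesis converts light energy into chemical energy.",
        ]
        categories = ["Physics", "Sport", "Biology"]

        # Word-content representation (TF-IDF) feeding a linear SVM.
        model = make_pipeline(TfidfVectorizer(stop_words="english"), LinearSVC())
        model.fit(articles, categories)

        print(model.predict(["The striker was transferred before the cup final."]))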

  3. Reducing Spatial Data Complexity for Classification Models

    International Nuclear Information System (INIS)

    Ruta, Dymitr; Gabrys, Bogdan

    2007-01-01

    Intelligent data analytics is gradually becoming a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling up to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is a proposition of a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC) we demonstrate a simulatory data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result we achieved a model that reduces the labelled datasets much further than any competitive approaches, yet with the maximum retention of the original class densities and hence the classification performance. PLDC leaves the reduced dataset with soft accumulative class weights allowing for efficient online updates, and as shown in a series of experiments, if coupled with the Parzen Density Classifier (PDC) it significantly outperforms competitive data condensation methods in terms of classification performance at the

  4. Reducing Spatial Data Complexity for Classification Models

    Science.gov (United States)

    Ruta, Dymitr; Gabrys, Bogdan

    2007-11-01

    Intelligent data analytics is gradually becoming a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling up to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is a proposition of a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC) we demonstrate a simulatory data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result we achieved a model that reduces the labelled datasets much further than any competitive approaches, yet with the maximum retention of the original class densities and hence the classification performance. PLDC leaves the reduced dataset with soft accumulative class weights allowing for efficient online updates, and as shown in a series of experiments, if coupled with the Parzen Density Classifier (PDC) it significantly outperforms competitive data condensation methods in terms of classification performance at the

  5. A supervised learning rule for classification of spatiotemporal spike patterns.

    Science.gov (United States)

    Lilin Guo; Zhenzhong Wang; Adjouadi, Malek

    2016-08-01

    This study introduces a novel supervised algorithm for spiking neurons that takes into consideration synaptic delays and axonal delays associated with weights. It can be utilized for both classification and association and uses several biologically influenced properties, such as axonal and synaptic delays. This algorithm also takes into consideration spike-timing-dependent plasticity, as in the Remote Supervised Method (ReSuMe). This paper focuses on the classification aspect alone. Spiking neurons trained according to the proposed learning rule are capable of classifying different categories by the associated sequences of precisely timed spikes. Simulation results have shown that the proposed learning method greatly improves classification accuracy when compared to the Spike Pattern Association Neuron (SPAN) and the Tempotron learning rule.

  6. Search for an optimum time response of spark counters

    International Nuclear Information System (INIS)

    Devismes, A.; Finck, Ch.; Kress, T.; Gobbi, A.; Eschke, J.; Herrmann, N.; Hildenbrand, K.D.; Koczon, P.; Petrovici, M.

    2002-01-01

    A spark counter of the type developed by Pestov has been tested with the aim of searching for an optimum time response function, changing the voltage, the content of noble and quencher gases, the pressure and the energy loss. Replacing the usual argon by neon has brought an improvement of the resolution and a significant reduction of the tails in the time response function. It has been proven that a counter as long as 90 cm can deliver, using a neon gas mixture, a time resolution σ<60 ps with about 1% absolute tail and an efficiency of about 90%.

  7. [Evaluation of new and emerging health technologies. Proposal for classification].

    Science.gov (United States)

    Prados-Torres, J D; Vidal-España, F; Barnestein-Fonseca, P; Gallo-García, C; Irastorza-Aldasoro, A; Leiva-Fernández, F

    2011-01-01

    Review and develop a proposal for the classification of health technologies (HT) evaluated by Health Technology Assessment Agencies (HTAA). Peer review by the AETS of the previously proposed classification of HT. Analysis of their input and suggestions for amendments. Construction of a new classification. Pilot study with physicians. Andalusian Public Health System. Spanish HTAA. Experts from HTAA. Tutors of family medicine residents. Update of the HT classification previously made by the research team. Peer review by the Spanish HTAA. Qualitative and quantitative analysis of responses. Construction of a new classification and a pilot study based on 12 evaluation reports of the HTAA. We obtained 11 thematic categories that are classified into 6 major groups: 1, prevention technology; 2, diagnostic technology; 3, therapeutic technologies; 4, diagnostic and therapeutic technologies; 5, organizational technology; and 6, knowledge management and quality of care. In the pilot study there was good concordance in the classification of 8 of the 12 reports reviewed by physicians. Experts agreed on 11 thematic categories of HT. A new classification of HT with double entry (nature and purpose of the HT) is proposed. APPLICABILITY: According to the experts, this classification of the work of the HTAA may represent a useful tool to transfer and manage knowledge. Moreover, an adequate classification of the HTAA reports would help clinicians and other potential users to locate them, and this can facilitate their dissemination. Copyright © 2010 SECA. Published by Elsevier Espana. All rights reserved.

  8. An Improved Rotation Forest for Multi-Feature Remote-Sensing Imagery Classification

    Directory of Open Access Journals (Sweden)

    Yingchang Xiu

    2017-11-01

    Full Text Available Multi-feature, especially multi-temporal, remote-sensing data have the potential to improve land cover classification accuracy. However, sometimes it is difficult to utilize all the features efficiently. To enhance classification performance based on multi-feature imagery, an improved rotation forest, combining Principal Component Analysis (PCA) and a boosting naïve Bayesian tree (NBTree), is proposed. First, feature extraction was carried out with PCA. The feature set was randomly split into several disjoint subsets; then, PCA was applied to each subset, and new training data for linear extracted features based on original training data were obtained. These steps were repeated several times. Second, based on the new training data, a boosting naïve Bayesian tree was constructed as the base classifier, which aims to achieve lower prediction error than a decision tree in the original rotation forest. At the classification phase, the improved rotation forest has two-layer voting. It first obtains several predictions through weighted voting in a boosting naïve Bayesian tree; then, the first-layer vote predicts by majority to obtain the final result. To examine the classification performance, the improved rotation forest was applied to multi-feature remote-sensing images, including MODIS Enhanced Vegetation Index (EVI) imagery time series, MODIS Surface Reflectance products and ancillary data in Shandong Province for 2013. The EVI imagery time series was preprocessed using harmonic analysis of time series (HANTS) to reduce the noise effects. The overall accuracy of the final classification result was 89.17%, and the Kappa coefficient was 0.71, which outperforms the original rotation forest and other classifier ensemble results, as well as the NASA land cover product. However, this new algorithm requires more computational time, meaning the efficiency needs to be further improved. Generally, the improved rotation forest has a potential advantage in
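
    A minimal sketch of the rotation-forest idea underlying the approach above, simplified to plain PCA rotations over random disjoint feature subsets and a decision-tree base learner instead of the boosted naïve Bayesian tree; the iris data set is only a stand-in for the multi-feature imagery.

        import numpy as np
        from sklearn.datasets import load_iris
        from sklearn.decomposition import PCA
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        X, y = load_iris(return_X_y=True)
        n_estimators, n_subsets = 10, 2
        members = []

        for _ in range(n_estimators):
            # Randomly split the feature indices into disjoint subsets and fit one PCA per subset.
            idx = rng.permutation(X.shape[1])
            subsets = np.array_split(idx, n_subsets)
            pcas = [PCA().fit(X[:, s]) for s in subsets]
            X_rot = np.hstack([p.transform(X[:, s]) for p, s in zip(pcas, subsets)])
            clf = DecisionTreeClassifier(random_state=0).fit(X_rot, y)
            members.append((subsets, pcas, clf))

        def predict(X_new):
            votes = []
            for subsets, pcas, clf in members:
                X_rot = np.hstack([p.transform(X_new[:, s]) for p, s in zip(pcas, subsets)])
                votes.append(clf.predict(X_rot))
            # Majority vote across the ensemble members.
            votes = np.array(votes)
            return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

        print((predict(X) == y).mean())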

  9. Hazard classification methodology

    International Nuclear Information System (INIS)

    Brereton, S.J.

    1996-01-01

    This document outlines the hazard classification methodology used to determine the hazard classification of the NIF LTAB, OAB, and the support facilities on the basis of radionuclides and chemicals. The hazard classification determines the safety analysis requirements for a facility

  10. Progress in the diagnosis and classification of pituitary adenomas

    Directory of Open Access Journals (Sweden)

    Luis V Syro

    2015-06-01

    Full Text Available Pituitary adenomas are common neoplasms. Their classification is based upon size, invasion of adjacent structures, sporadic or familial cases, biochemical activity, clinical manifestations, morphological characteristics, response to treatment and recurrence. Although they are considered benign tumors, some of them are difficult to treat due to their tendency to recur, despite standardized treatment. Functional tumors present other challenges for normalizing their biochemical activity. Novel approaches for early diagnosis as well as different perspectives on classification may help to identify subgroups of patients with similar characteristics, creating opportunities to match each patient with the best personalized treatment option. In this paper we present the progress in the diagnosis and classification of different subgroups of patients with pituitary tumors that may be managed with specific considerations according to their tumor subtype.

  11. Accurate and efficient calculation of response times for groundwater flow

    Science.gov (United States)

    Carr, Elliot J.; Simpson, Matthew J.

    2018-03-01

    We study measures of the amount of time required for transient flow in heterogeneous porous media to effectively reach steady state, also known as the response time. Here, we develop a new approach that extends the concept of mean action time. Previous applications of the theory of mean action time to estimate the response time use the first two central moments of the probability density function associated with the transition from the initial condition, at t = 0, to the steady state condition that arises in the long time limit, as t → ∞. This previous approach leads to a computationally convenient estimation of the response time, but the accuracy can be poor. Here, we outline a powerful extension using the first k raw moments, showing how to produce an extremely accurate estimate by making use of asymptotic properties of the cumulative distribution function. Results are validated using an existing laboratory-scale data set describing flow in a homogeneous porous medium. In addition, we demonstrate how the results also apply to flow in heterogeneous porous media. Overall, the new method is: (i) extremely accurate; and (ii) computationally inexpensive. In fact, the computational cost of the new method is orders of magnitude less than the computational effort required to study the response time by solving the transient flow equation. Furthermore, the approach provides a rigorous mathematical connection with the heuristic argument that the response time for flow in a homogeneous porous medium is proportional to L²/D, where L is a relevant length scale, and D is the aquifer diffusivity. Here, we extend such heuristic arguments by providing a clear mathematical definition of the proportionality constant.
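
    As a toy numerical illustration of the moment-based idea (not the paper's algorithm), the sketch below estimates a response time as the first temporal moment of a synthetic transition to steady state and compares it with the L²/D heuristic; the length scale, diffusivity and single-mode decay are assumptions.

        import numpy as np

        L, D = 10.0, 1.0                       # length scale and diffusivity (assumed units)
        t = np.linspace(0.0, 500.0, 20001)
        dt = t[1] - t[0]

        # Synthetic transient: a single diffusion mode decaying at rate D*(pi/L)^2,
        # so F(t) rises from 0 to its steady-state value of 1.
        rate = D * (np.pi / L) ** 2
        F = 1.0 - np.exp(-rate * t)

        # Mean-action-time style estimate: first moment of the transition density dF/dt.
        f_density = np.gradient(F, t)
        T_moment = np.sum(t * f_density) * dt

        print(f"moment-based response time: {T_moment:.2f}")
        print(f"L^2/D scale: {L**2 / D:.2f}, proportionality constant ~ {T_moment * D / L**2:.3f}")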

  12. Deep learning for hybrid EEG-fNIRS brain–computer interface: application to motor imagery classification

    Science.gov (United States)

    Chiarelli, Antonio Maria; Croce, Pierpaolo; Merla, Arcangelo; Zappasodi, Filippo

    2018-06-01

    Objective. Brain–computer interface (BCI) refers to procedures that link the central nervous system to a device. BCI was historically performed using electroencephalography (EEG). In recent years, encouraging results were obtained by combining EEG with other neuroimaging technologies, such as functional near infrared spectroscopy (fNIRS). A crucial step of BCI is brain state classification from recorded signal features. Deep artificial neural networks (DNNs) recently reached unprecedented complex classification outcomes. These performances were achieved through increased computational power, efficient learning algorithms, valuable activation functions, and restricted or back-fed neuron connections. Expecting significant overall BCI performance gains, we investigated the capabilities of combining EEG and fNIRS recordings with state-of-the-art deep learning procedures. Approach. We performed a guided left and right hand motor imagery task on 15 subjects with a fixed classification response time of 1 s and an overall experiment length of 10 min. Left versus right classification accuracy of a DNN in the multi-modal recording modality was estimated and compared to standalone EEG and fNIRS and other classifiers. Main results. At the group level we obtained a significant increase in performance when considering multi-modal recordings and the DNN classifier, with a synergistic effect. Significance. BCI performances can be significantly improved by employing multi-modal recordings that provide electrical and hemodynamic brain activity information, in combination with advanced non-linear deep learning classification procedures.

  13. Automated classification of mouse pup isolation syllables: from cluster analysis to an Excel based ‘mouse pup syllable classification calculator’

    Directory of Open Access Journals (Sweden)

    Jasmine eGrimsley

    2013-01-01

    Full Text Available Mouse pups vocalize at high rates when they are cold or isolated from the nest. The proportions of each syllable type produced carry information about disease state and are being used as behavioral markers for the internal state of animals. Manual classifications of these vocalizations identified ten syllable types based on their spectro-temporal features. However, manual classification of mouse syllables is time consuming and vulnerable to experimenter bias. This study uses an automated cluster analysis to identify acoustically distinct syllable types produced by CBA/CaJ mouse pups, and then compares the results to prior manual classification methods. The cluster analysis identified two syllable types, based on their frequency bands, that have continuous frequency-time structure, and two syllable types featuring abrupt frequency transitions. Although cluster analysis computed fewer syllable types than manual classification, the clusters represented well the probability distributions of the acoustic features within syllables. These probability distributions indicate that some of the manually classified syllable types are not statistically distinct. The characteristics of the four classified clusters were used to generate a Microsoft Excel-based mouse syllable classifier that rapidly categorizes syllables, with over a 90% match, into the syllable types determined by cluster analysis.
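
    The unsupervised step described above can be mimicked with a generic k-means clustering of simple spectro-temporal features; the feature table below (start frequency, end frequency, duration) is entirely made up for illustration and does not reproduce the authors' analysis.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(5)
        # Hypothetical feature table: [start kHz, end kHz, duration ms] for three syllable shapes.
        flat = np.column_stack([rng.normal(70, 2, 50), rng.normal(70, 2, 50), rng.normal(40, 5, 50)])
        down = np.column_stack([rng.normal(85, 2, 50), rng.normal(65, 2, 50), rng.normal(30, 5, 50)])
        jump = np.column_stack([rng.normal(60, 2, 50), rng.normal(90, 2, 50), rng.normal(20, 5, 50)])
        X = StandardScaler().fit_transform(np.vstack([flat, down, jump]))

        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
        for k in range(3):
            print(f"cluster {k}: {np.sum(labels == k)} syllables")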

  14. Predictive ability of the Society for Vascular Surgery Wound, Ischemia, and foot Infection (WIfI) classification system after first-time lower extremity revascularizations

    OpenAIRE

    Darling, Jeremy; McCallum, John C.; Soden, Peter A.; Guzman, R.J. (Raul J.); Wyers, M.C. (Mark C.); Hamdan, A.D. (Allen D.); Verhagen, Hence; Schermerhorn, Marc

    2017-01-01

    Objective: The Society for Vascular Surgery (SVS) Wound, Ischemia and foot Infection (WIfI) classification system was proposed to predict 1-year amputation risk and potential benefit from revascularization. Our goal was to evaluate the predictive ability of this scale in a real-world selection of patients undergoing a first-time lower extremity revascularization for chronic limb-threatening ischemia (CLTI). Methods: From 2005 to 2014, 1336 limbs underwent a first-time ...

  15. A neural network for noise correlation classification

    Science.gov (United States)

    Paitz, Patrick; Gokhberg, Alexey; Fichtner, Andreas

    2018-02-01

    We present an artificial neural network (ANN) for the classification of ambient seismic noise correlations into two categories, suitable and unsuitable for noise tomography. By using only a small manually classified data subset for network training, the ANN allows us to classify large data volumes with low human effort and to encode the valuable subjective experience of data analysts that cannot be captured by a deterministic algorithm. Based on a new feature extraction procedure that exploits the wavelet-like nature of seismic time series, we efficiently reduce the dimensionality of noise correlation data, still keeping the relevant features needed for automated classification. Using global- and regional-scale data sets, we show that classification errors of 20 per cent or less can be achieved when the network training is performed with as little as 3.5 per cent and 16 per cent of the data sets, respectively. Furthermore, the ANN trained on the regional data can be applied to the global data, and vice versa, without a significant increase of the classification error. An experiment where four students manually classified the data revealed that the classification error they would assign to each other is substantially larger than the classification error of the ANN (>35 per cent). This indicates that reproducibility would be hampered more by human subjectivity than by imperfections of the ANN.

  16. Congestion Service Facilities Location Problem with Promise of Response Time

    Directory of Open Access Journals (Sweden)

    Dandan Hu

    2013-01-01

    Full Text Available In many services, a promise of a specific response time is advertised as a commitment by the service provider for customer satisfaction. Congestion at service facilities can delay the delivery of services and hurt overall satisfaction. In this paper, the congestion service facilities location problem with a promise of response time is studied, and a mixed integer nonlinear programming model with a budget constraint is presented. The facilities are modeled as M/M/c queues. The decision variables of the model are the locations of the service facilities and the number of servers at each facility. The objective function is to maximize the demand served within the specific response time promised by the service provider. To solve this problem, we propose an algorithm that combines greedy and genetic algorithms. In order to verify the proposed algorithm, a large set of computational experiments was run, and the results demonstrate that response time has a significant impact on location decisions.
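
    For a single M/M/c facility, the fraction of demand served within a promised response time τ can be obtained from the Erlang-C delay probability; the sketch below computes that building block (it is not the paper's optimization model, and the arrival/service rates are assumed values).

        from math import exp, factorial

        def erlang_c(c, a):
            """Probability that an arriving customer must wait (a = lambda/mu, requires a < c)."""
            top = a**c / factorial(c) * c / (c - a)
            bottom = sum(a**k / factorial(k) for k in range(c)) + top
            return top / bottom

        def prob_served_within(lmbda, mu, c, tau):
            """P(queueing delay <= tau) in an M/M/c queue."""
            a = lmbda / mu
            return 1.0 - erlang_c(c, a) * exp(-(c * mu - lmbda) * tau)

        # Example: 5 arrivals/hour, service rate 2/hour per server, 4 servers, 15-minute promise.
        print(prob_served_within(lmbda=5.0, mu=2.0, c=4, tau=0.25))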

  17. Elucidation of time-dependent systems biology cell response patterns with time course network enrichment

    DEFF Research Database (Denmark)

    Wiwie, Christian; Rauch, Alexander; Haakonsson, Anders

    2018-01-01

    No methods exist to integrate time series data with networks, thus preventing the identification of time-dependent systems biology responses. We close this gap with Time Course Network Enrichment (TiCoNE). It combines a new kind of human-augmented clustering with a novel approach to network enrichment...

  18. The Importance of Responsibility in Times of Crisis

    OpenAIRE

    Jacob Dahl Rendtorff

    2014-01-01

    In this paper I would like to show the importance of the concept of responsibility as the foundation of ethics in times of crisis, in particular in the fields of politics and economics, in a modern civilisation marked by globalization and technological progress. I consider the concept of responsibility as the key notion for understanding ethical duty in a modern technological civilisation. We can indeed observe a moralization of the concept of responsibility going beyond a strict lega...

  19. Classification of brain tumors by means of proton nuclear magnetic resonance (NMR) spectroscopy

    International Nuclear Information System (INIS)

    Sottile, V.S.; Zanchi, D.E.

    2017-01-01

    In the present work, at the request of health professionals, a computer application named “ViDa” was developed. The aim of this study is to differentiate brain lesions according to whether or not they are tumors, and subsequently to classify them into different tumor types, using magnetic resonance spectroscopy (SVS) with an echo time of 30 milliseconds. For this development, different areas of knowledge were integrated, among them artificial intelligence, physics, programming, physiopathology and medical imaging. The processing can be divided into two stages: pre-processing, performed by the scanner, and post-processing, performed by the ViDa software for the interpretation of the data. This application falls within the Medical Informatics area, as it provides assistance for clinical decision making. The role of the biomedical engineer is fulfilled by developing a health technology in response to a manifest real-life problem. The tool developed shows promising results, achieving 100% sensitivity, 73% specificity, 77% positive predictive value and 100% negative predictive value in the 21 cases tested. Correct classification of the tumor’s origin reaches 70%; classification of non-astrocytic lesions achieves 67% correct classifications, while grading of astrocytomas agrees with biopsy in 57% of cases, with 43% slight errors. It was possible to develop an application to assist diagnosis which, together with other medical tests, will make it possible to sharpen the diagnosis of brain tumors. (authors)

  20. Deep convolutional neural networks for automatic classification of gastric carcinoma using whole slide images in digital histopathology.

    Science.gov (United States)

    Sharma, Harshita; Zerbe, Norman; Klempert, Iris; Hellwich, Olaf; Hufnagl, Peter

    2017-11-01

    Deep learning using convolutional neural networks is an actively emerging field in histological image analysis. This study explores deep learning methods for computer-aided classification in H&E stained histopathological whole slide images of gastric carcinoma. An introductory convolutional neural network architecture is proposed for two computerized applications, namely, cancer classification based on immunohistochemical response and necrosis detection based on the existence of tumor necrosis in the tissue. Classification performance of the developed deep learning approach is quantitatively compared with traditional image analysis methods in digital histopathology requiring prior computation of handcrafted features, such as statistical measures using gray level co-occurrence matrix, Gabor filter-bank responses, LBP histograms, gray histograms, HSV histograms and RGB histograms, followed by random forest machine learning. Additionally, the widely known AlexNet deep convolutional framework is comparatively analyzed for the corresponding classification problems. The proposed convolutional neural network architecture reports favorable results, with an overall classification accuracy of 0.6990 for cancer classification and 0.8144 for necrosis detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
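
    A hedged sketch of the kind of handcrafted-feature baseline mentioned above: grey-level co-occurrence (GLCM) statistics fed to a random forest. The patches are synthetic stand-ins for tissue images, and function names follow scikit-image >= 0.19 (older releases spell them greycomatrix/greycoprops).

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.ensemble import RandomForestClassifier

        def glcm_features(patch, levels=64):
            # Quantise the patch and compute co-occurrence statistics at a few offsets.
            q = (patch / patch.max() * (levels - 1)).astype(np.uint8)
            glcm = graycomatrix(q, distances=[1, 2], angles=[0, np.pi / 2],
                                levels=levels, symmetric=True, normed=True)
            props = ["contrast", "homogeneity", "energy", "correlation"]
            return np.hstack([graycoprops(glcm, p).ravel() for p in props])

        rng = np.random.default_rng(0)
        # Hypothetical stand-in data: smooth vs noisy patches playing the role of two tissue classes.
        smooth = [rng.normal(128, 5, (64, 64)).clip(0, 255) for _ in range(20)]
        noisy = [rng.normal(128, 60, (64, 64)).clip(0, 255) for _ in range(20)]
        X = np.array([glcm_features(p) for p in smooth + noisy])
        y = np.array([0] * 20 + [1] * 20)

        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
        print(clf.score(X, y))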

  1. Virtual Sensor of Surface Electromyography in a New Extensive Fault-Tolerant Classification System.

    Science.gov (United States)

    de Moura, Karina de O A; Balbinot, Alexandre

    2018-05-01

    A few prosthetic control systems in the scientific literature employ pattern recognition algorithms adapted to the changes that occur in the myoelectric signal over time and, frequently, such systems are not natural and intuitive. These are some of the several challenges for myoelectric prostheses for everyday use. The concept of the virtual sensor, whose fundamental objective is to estimate unavailable measures based on other available measures, is being used in other fields of research. The virtual sensor technique applied to surface electromyography can help to minimize these problems, which are typically related to the degradation of the myoelectric signal that usually leads to a decrease in the classification accuracy of the movements characterized by computational intelligent systems. This paper presents a virtual sensor in a new extensive fault-tolerant classification system to maintain the classification accuracy after the occurrence of the following contaminants: ECG interference, electrode displacement, movement artifacts, power line interference, and saturation. The Time-Varying Autoregressive Moving Average (TVARMA) and Time-Varying Kalman filter (TVK) models are compared to define the most robust model for the virtual sensor. Results of movement classification are presented, comparing the usual classification techniques with the method of degraded-signal replacement and classifier retraining. The experimental results were evaluated for these five noise types in 16 surface electromyography (sEMG) channel degradation case studies. Without using classifier retraining techniques, the proposed system recovered between 4% and 38% of mean classification accuracy for electrode displacement, movement artifacts, and saturation noise. The best mean classification considering all signal contaminants and channel combinations evaluated was obtained using the retraining method, replacing the degraded channel with the virtual sensor TVARMA model. This method

  2. Classification, disease, and diagnosis.

    Science.gov (United States)

    Jutel, Annemarie

    2011-01-01

    Classification shapes medicine and guides its practice. Understanding classification must be part of the quest to better understand the social context and implications of diagnosis. Classifications are part of the human work that provides a foundation for the recognition and study of illness: deciding how the vast expanse of nature can be partitioned into meaningful chunks, stabilizing and structuring what is otherwise disordered. This article explores the aims of classification, their embodiment in medical diagnosis, and the historical traditions of medical classification. It provides a brief overview of the aims and principles of classification and their relevance to contemporary medicine. It also demonstrates how classifications operate as social framing devices that enable and disable communication, assert and refute authority, and are important items for sociological study.

  3. Transformation Algorithm of Dielectric Response in Time-Frequency Domain

    Directory of Open Access Journals (Sweden)

    Ji Liu

    2014-01-01

    Full Text Available A transformation algorithm of dielectric response from the time domain to the frequency domain is presented. In order to shorten the measuring time of low or ultralow frequency dielectric response characteristics, the transformation algorithm is used in this paper to transform the time-domain relaxation current into the frequency-domain current for calculating the low-frequency dielectric dissipation factor. Comparison of the calculation results with actual test data shows that the two coincide over a wide range of low frequencies. Meanwhile, the time-domain test data of depolarization currents in dry and moist pressboards are converted into frequency-domain results on the basis of the transformation. The frequency-domain curves of complex capacitance and dielectric dissipation factor in the low-frequency range are obtained. Test results of polarization and depolarization current (PDC) in pressboards are also given at different voltages and polarization times. The experimental results demonstrate that polarization and depolarization currents are affected significantly by the moisture content of the test pressboards, and that the transformation algorithm is effective down to an ultralow frequency of 10⁻³ Hz. Data analysis and interpretation of the test results lead to the conclusion that time-frequency domain dielectric response analysis can be used for assessing the insulation system of a power transformer.
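
    One widely used shortcut for this kind of time-to-frequency conversion is the Hamon approximation, which maps the depolarization current at time t to the dielectric loss at f ≈ 0.1/t. The sketch below illustrates that approximation only; it is not necessarily the transformation algorithm of the paper, and the voltage, capacitance and current decay are assumed values.

        import numpy as np

        U0 = 1000.0      # charging voltage in volts (assumed)
        C0 = 200e-12     # geometric capacitance in farads (assumed)

        # Hypothetical depolarization current following a Curie-von Schweidler decay i(t) ~ t^-n.
        t = np.logspace(0, 4, 50)             # 1 s .. 10^4 s
        i_depol = 1e-9 * t ** (-0.8)          # amperes

        # Hamon approximation: the dielectric loss at f ~ 0.1/t is proportional to i(t).
        f = 0.1 / t
        eps_loss = i_depol / (2.0 * np.pi * f * C0 * U0)   # imaginary part of relative permittivity

        for fk, ek in list(zip(f, eps_loss))[::10]:
            print(f"f = {fk:.2e} Hz, eps'' ~ {ek:.3e}")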

  4. Three-class classification in computer-aided diagnosis of breast cancer by support vector machine

    Science.gov (United States)

    Sun, Xuejun; Qian, Wei; Song, Dansheng

    2004-05-01

    The design of the classifier in a computer-aided diagnosis (CAD) scheme for breast cancer plays an important role in its overall performance in sensitivity and specificity. Classification of a detected object as malignant lesion, benign lesion, or normal tissue on a mammogram is a typical three-class pattern recognition problem. This paper presents a three-class classification approach using a two-stage classifier combined with a support vector machine (SVM) learning algorithm for classification of breast cancer on mammograms. The first classification stage is used to detect abnormal areas and normal breast tissues, and the second stage classifies detected abnormal objects as malignant or benign. A series of spatial, morphology and texture features have been extracted from the detected object areas. Using a genetic algorithm (GA), different feature groups for the different classification stages have been investigated. Computerized free-response receiver operating characteristic (FROC) and receiver operating characteristic (ROC) analyses have been employed in the different classification stages. Results have shown that an obvious performance improvement in both sensitivity and specificity was observed with the proposed classification approach compared with conventional two-class classification approaches, indicating its effectiveness in the classification of breast cancer on mammograms.
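
    A minimal sketch of the two-stage idea, with stage 1 separating abnormal from normal tissue and stage 2 labelling abnormal findings as malignant or benign; the feature vectors and labels are random stand-ins, not mammographic data, and the kernels are assumptions.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        X = rng.normal(size=(300, 10))                     # stand-in feature vectors
        y = rng.integers(0, 3, size=300)                   # 0 = normal, 1 = benign, 2 = malignant

        # Stage 1: normal tissue vs any abnormality.
        stage1 = SVC(kernel="rbf").fit(X, (y > 0).astype(int))
        # Stage 2: among abnormal objects, benign vs malignant.
        abnormal = y > 0
        stage2 = SVC(kernel="rbf").fit(X[abnormal], (y[abnormal] == 2).astype(int))

        def classify(x):
            x = x.reshape(1, -1)
            if stage1.predict(x)[0] == 0:
                return "normal tissue"
            return "malignant lesion" if stage2.predict(x)[0] == 1 else "benign lesion"

        print(classify(X[0]))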

  5. Woven fabric defects detection based on texture classification algorithm

    International Nuclear Information System (INIS)

    Ben Salem, Y.; Nasri, S.

    2011-01-01

    In this paper we have compared two well-known texture classification methods to solve the problem of recognition and classification of defects occurring in textile manufacturing. We have compared the local binary patterns (LBP) method with the co-occurrence matrix. The classifier used is the support vector machine (SVM). The system has been tested using the TILDA database. The results obtained are interesting and show that LBP is a good method for the problems of recognition and classification of defects; it also gives a good running time, especially for real-time applications.
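
    A hedged sketch of the LBP-plus-SVM route discussed above, using scikit-image on synthetic 'fabric' patches with a simulated streak defect instead of the TILDA images.

        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.svm import SVC

        P, R = 8, 1                                       # LBP neighbourhood parameters

        def lbp_histogram(img):
            img8 = (np.clip(img, 0.0, 1.0) * 255).astype(np.uint8)
            codes = local_binary_pattern(img8, P, R, method="uniform")
            hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
            return hist

        rng = np.random.default_rng(2)

        def fabric(defect=False):
            img = rng.normal(0.5, 0.05, (64, 64))
            if defect:
                img[:, 30:34] += 0.4                      # simulated weaving fault (a bright streak)
            return img

        X = np.array([lbp_histogram(fabric(d)) for d in [False] * 30 + [True] * 30])
        y = np.array([0] * 30 + [1] * 30)                 # 0 = defect-free, 1 = defective
        clf = SVC(kernel="linear").fit(X, y)
        print(clf.score(X, y))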

  6. Current Trends in the Molecular Classification of Renal Neoplasms

    Directory of Open Access Journals (Sweden)

    Andrew N. Young

    2006-01-01

    Full Text Available Renal cell carcinoma (RCC) is the most common form of kidney cancer in adults. RCC is a significant challenge for pathologic diagnosis and clinical management. The primary approach to diagnosis is by light microscopy, using the World Health Organization (WHO) classification system, which defines histopathologic tumor subtypes with distinct clinical behavior and underlying genetic mutations. However, light microscopic diagnosis of RCC subtypes is often difficult due to variable histology. In addition, the clinical behavior of RCC is highly variable and therapeutic response rates are poor. Few clinical assays are available to predict outcome in RCC or correlate behavior with histology. Therefore, novel RCC classification systems based on gene expression should be useful for diagnosis, prognosis, and treatment. Recent microarray studies have shown that renal tumors are characterized by distinct gene expression profiles, which can be used to discover novel diagnostic and prognostic biomarkers. Here, we review clinical features of kidney cancer, the WHO classification system, and the growing role of molecular classification for diagnosis, prognosis, and therapy of this disease.

  7. Integrated inertial sensors and mobile computing for real-time cycling performance guidance via pedaling profile classification.

    Science.gov (United States)

    Xu, James Y; Nan, Xiaomeng; Ebken, Victor; Wang, Yan; Pottie, Greg J; Kaiser, William J

    2015-03-01

    Today, the bicycle is utilized as a daily commute tool, a physical rehabilitation asset, and sporting equipment, prompting studies into the biomechanics of cycling. Of the number of important parameters that affect cycling efficiency, the foot angle profile is one of the most important as it correlates directly with the effective force applied to the bike. However, there has been no compact and portable solution for measuring the foot angle and for providing the cyclist with real-time feedback due to a number of difficulties of the current tracking and sensing technologies and the myriad types of bikes available. This paper presents a novel sensing and mobile computing system for classifying the foot angle profiles during cycling and for providing real-time guidance to the user to achieve the correct profile. Continuous foot angle tracking is firstly converted into a discrete problem requiring only recognition of acceleration profiles of the foot using a single shoe mounted tri-axial accelerometer during each pedaling cycle. A classification method is then applied to identify the pedaling profile. Finally, a mobile solution is presented to provide real-time signal processing and guidance.

  8. Processing of recognition information and additional cues: A model-based analysis of choice, confidence, and response time

    Directory of Open Access Journals (Sweden)

    Andreas Glockner

    2011-02-01

    Full Text Available Research on the processing of recognition information has focused on testing the recognition heuristic (RH). On the aggregate, the noncompensatory use of recognition information postulated by the RH was rejected in several studies, while the RH could still account for a considerable proportion of choices. These results can be explained if either (a) a part of the subjects used the RH or (b) nobody used it but its choice predictions were accidentally in line with the predictions of the strategy used. In the current study, which exemplifies a new approach to model testing, we determined individuals' decision strategies based on a maximum-likelihood classification method, taking into account choices, response times and confidence ratings simultaneously. Unlike most previous studies of the RH, our study tested the RH under conditions in which we provided information about cue values of unrecognized objects (which we argue is fairly common and thus of some interest). For 77.5% of the subjects, overall behavior was best explained by a compensatory parallel constraint satisfaction (PCS) strategy. The proportion of subjects using an enhanced RH heuristic (RHe) was negligible (up to 7.5%); 15% of the subjects seemed to use a take-the-best strategy (TTB). A more fine-grained analysis of the supplemental behavioral parameters conditional on strategy use supports PCS but calls into question process assumptions for apparent users of RH, RHe, and TTB within our experimental context. Our results are consistent with previous literature highlighting the importance of individual strategy classification as compared to aggregated analyses.

  9. Psychophysiological Sensing and State Classification for Attention Management in Commercial Aviation

    Science.gov (United States)

    Harrivel, Angela R.; Liles, Charles; Stephens, Chad L.; Ellis, Kyle K.; Prinzel, Lawrence J.; Pope, Alan T.

    2016-01-01

    Attention-related human performance limiting states (AHPLS) can cause pilots to lose airplane state awareness (ASA), and their detection is important to improving commercial aviation safety. The Commercial Aviation Safety Team found that the majority of recent international commercial aviation accidents attributable to loss of control inflight involved flight crew loss of airplane state awareness, and that distraction of various forms was involved in all of them. Research on AHPLS, including channelized attention, diverted attention, startle / surprise, and confirmation bias, has been recommended in a Safety Enhancement (SE) entitled "Training for Attention Management." To accomplish the detection of such cognitive and psychophysiological states, a broad suite of sensors has been implemented to simultaneously measure their physiological markers during high fidelity flight simulation human subject studies. Pilot participants were asked to perform benchmark tasks and experimental flight scenarios designed to induce AHPLS. Pattern classification was employed to distinguish the AHPLS induced by the benchmark tasks. Unimodal classification using pre-processed electroencephalography (EEG) signals as input features to extreme gradient boosting, random forest and deep neural network multiclass classifiers was implemented. Multi-modal classification using galvanic skin response (GSR) in addition to the same EEG signals and using the same types of classifiers produced increased accuracy with respect to the unimodal case (90 percent vs. 86 percent), although only via the deep neural network classifier. These initial results are a first step toward the goal of demonstrating simultaneous real time classification of multiple states using multiple sensing modalities in high-fidelity flight simulators. This detection is intended to support and inform training methods under development to mitigate the loss of ASA and thus reduce accidents and incidents.

  10. Standard classification: Physics

    International Nuclear Information System (INIS)

    1977-01-01

    This is a draft standard classification of physics. The conception is based on the physics part of the systematic catalogue of the Bayerische Staatsbibliothek and on the classification given in standard textbooks. The ICSU-AB classification now used worldwide by physics information services was not taken into account. (BJ)

  11. Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds

    Science.gov (United States)

    Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.

    2018-05-01

    Deep Learning has been massively used for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been recently studied. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22 % of total error, 4.10 % of type I error, and 15.07 % of type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while type II error is slightly higher). The method was also tested on a very high point density LIDAR point clouds resulting in 4.02 % of total error, 2.15 % of type I error and 6.14 % of type II error.

  12. Detection of advance item knowledge using response times in computer adaptive testing

    NARCIS (Netherlands)

    Meijer, R.R.; Sotaridona, Leonardo

    2006-01-01

    We propose a new method for detecting item preknowledge in a CAT based on an estimate of “effective response time” for each item. Effective response time is defined as the time required for an individual examinee to answer an item correctly. An unusually short response time relative to the expected
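
    An illustrative sketch only: flagging suspiciously fast correct answers with a per-item lognormal response-time model. This mirrors the spirit of an effective-response-time check rather than the authors' exact estimator; the simulated timings and the z-score cut-off are assumptions.

        import numpy as np

        rng = np.random.default_rng(3)
        # Simulated honest response times for one item (lognormal around ~30 s).
        log_rt = np.log(30.0) + 0.4 * rng.normal(size=500)
        mu, sigma = log_rt.mean(), log_rt.std()

        def flag_preknowledge(rt_seconds, correct, z_cut=-2.0):
            """Flag a correct response that is far faster than the item's typical time."""
            z = (np.log(rt_seconds) - mu) / sigma
            return bool(correct) and z < z_cut

        print(flag_preknowledge(4.0, correct=True))    # unusually fast correct answer
        print(flag_preknowledge(28.0, correct=True))   # typical timing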

  13. Classifications of Acute Scaphoid Fractures: A Systematic Literature Review.

    Science.gov (United States)

    Ten Berg, Paul W; Drijkoningen, Tessa; Strackee, Simon D; Buijze, Geert A

    2016-05-01

    Background: In the absence of consensus, surgeon preference determines how acute scaphoid fractures are classified. There is a great variety of classification systems with considerable controversies. Purposes: The purpose of this study was to provide an overview of the different classification systems, clarifying their subgroups and analyzing their popularity by comparing citation indexes. The intention was to improve data comparison between studies using heterogeneous fracture descriptions. Methods: We performed a systematic review of the literature based on a search of medical literature from 1950 to 2015, and a manual search using the reference lists in relevant book chapters. Only original descriptions of classifications of acute scaphoid fractures in adults were included. Popularity was based on citation index as reported in the databases of Web of Science (WoS) and Google Scholar. Articles that were cited <10 times in WoS were excluded. Results: Our literature search resulted in 308 potentially eligible descriptive reports, of which 12 reports met the inclusion criteria. We distinguished 13 different (sub)classification systems based on (1) fracture location, (2) fracture plane orientation, and (3) fracture stability/displacement. Based on citation numbers, the Herbert classification was most popular, followed by the Russe and Mayo classifications. All classification systems were based on plain radiography. Conclusions: Most classification systems were based on fracture location, displacement, or stability. Based on the controversy and limited reliability of current classification systems, suggested research areas for an updated classification include three-dimensional fracture pattern etiology and fracture fragment mobility assessed by dynamic imaging.

  14. A Deep Learning Architecture for Temporal Sleep Stage Classification Using Multivariate and Multimodal Time Series.

    Science.gov (United States)

    Chambon, Stanislas; Galtier, Mathieu N; Arnal, Pierrick J; Wainrib, Gilles; Gramfort, Alexandre

    2018-04-01

    Sleep stage classification constitutes an important preliminary exam in the diagnosis of sleep disorders. It is traditionally performed by a sleep expert who assigns a sleep stage to each 30 s of the signal, based on the visual inspection of signals such as electroencephalograms (EEGs), electrooculograms (EOGs), electrocardiograms, and electromyograms (EMGs). We introduce here the first deep learning approach for sleep stage classification that learns end-to-end without computing spectrograms or extracting handcrafted features, that exploits all multivariate and multimodal polysomnography (PSG) signals (EEG, EMG, and EOG), and that can exploit the temporal context of each 30-s window of data. For each modality, the first layer learns linear spatial filters that exploit the array of sensors to increase the signal-to-noise ratio, and the last layer feeds the learnt representation to a softmax classifier. Our model is compared to alternative automatic approaches based on convolutional networks or decision trees. Results obtained on 61 publicly available PSG records with up to 20 EEG channels demonstrate that our network architecture yields state-of-the-art performance. Our study reveals a number of insights on the spatiotemporal distribution of the signal of interest: a good tradeoff for optimal classification performance measured with balanced accuracy is to use 6 EEG with 2 EOG (left and right) and 3 EMG chin channels. Also, exploiting 1 min of data before and after each data segment offers the strongest improvement when a limited number of channels are available. Like sleep experts, our system exploits the multivariate and multimodal nature of PSG signals in order to deliver state-of-the-art classification performance with a small computational cost.

  15. Response probability and response time: a straight line, the Tagging/Retagging interpretation of short term memory, an operational definition of meaningfulness and short term memory time decay and search time.

    Science.gov (United States)

    Tarnow, Eugen

    2008-12-01

    The functional relationship between correct response probability and response time is investigated in data sets from Rubin, Hinton and Wenzel, J Exp Psychol Learn Mem Cogn 25:1161-1176, 1999 and Anderson, J Exp Psychol [Hum Learn] 7:326-343, 1981. The two measures are linearly related through stimulus presentation lags from 0 to 594 s in the former experiment and for repeated learning of words in the latter. The Tagging/Retagging interpretation of short term memory is introduced to explain this linear relationship. At stimulus presentation the words are tagged. This tagging level drops slowly with time. When a probe word is reintroduced the tagging level has to increase for the word to be properly identified leading to a delay in response time. The tagging time is related to the meaningfulness of the words used-the more meaningful the word the longer the tagging time. After stimulus presentation the tagging level drops in a logarithmic fashion to 50% after 10 s and to 20% after 240 s. The incorrect recall and recognition times saturate in the Rubin et al. data set (they are not linear for large time lags), suggesting a limited time to search the short term memory structure: the search time for recall of unusual words is 1.7 s. For recognition of nonsense words the corresponding time is about 0.4 s, similar to the 0.243 s found in Cavanagh (1972).
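
    A small numerical illustration of the two regularities described above: a logarithmic decay of the tagging level anchored at the reported values (50% at 10 s, 20% at 240 s), and a straight-line relation between correct-response probability and response time. All other numbers are synthetic.

        import numpy as np

        # Logarithmic decay of the tagging level fitted through (10 s, 0.5) and (240 s, 0.2).
        b = (0.5 - 0.2) / (np.log(240.0) - np.log(10.0))
        a = 0.5 + b * np.log(10.0)

        def tagging_level(t_seconds):
            return a - b * np.log(t_seconds)

        print(tagging_level(10.0), tagging_level(240.0))   # ~0.50 and ~0.20

        # A linear accuracy <-> response-time relationship (slope and intercept assumed).
        accuracy = np.array([0.95, 0.85, 0.70, 0.55])
        response_time = 0.6 + 1.2 * (1.0 - accuracy)       # seconds
        slope, intercept = np.polyfit(accuracy, response_time, 1)
        print(slope, intercept)                            # recovers the assumed straight line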

  16. A new time-frequency method for identification and classification of ball bearing faults

    Science.gov (United States)

    Attoui, Issam; Fergani, Nadir; Boutasseta, Nadir; Oudjani, Brahim; Deliou, Adel

    2017-06-01

    For fault diagnosis of ball bearings, which are among the most critical components of rotating machinery, this paper presents a time-frequency procedure incorporating a new feature extraction step that combines the classical wavelet packet decomposition energy distribution technique with a new feature extraction technique based on the selection of the most impulsive frequency bands. In the proposed procedure, firstly, as a pre-processing step, the most impulsive frequency bands are selected for different bearing conditions using a combination of the Fast Fourier Transform (FFT) and Short-Frequency Energy (SFE) algorithms. Secondly, once the most impulsive frequency bands are selected, the measured machinery vibration signals are decomposed into different frequency sub-bands by using the discrete Wavelet Packet Decomposition (WPD) technique to maximize the detection of their frequency contents, and subsequently the most useful sub-bands are represented in the time-frequency domain by using the Short Time Fourier Transform (STFT) algorithm to determine exactly which frequency components are present in those sub-bands. Once the proposed feature vector is obtained, three feature dimensionality reduction techniques are employed: Linear Discriminant Analysis (LDA), a feedback wrapper method, and Locality Sensitive Discriminant Analysis (LSDA). Lastly, the Adaptive Neuro-Fuzzy Inference System (ANFIS) algorithm is used for instantaneous identification and classification of bearing faults. In order to evaluate the performance of the proposed method, different testing data sets were applied to the trained ANFIS model, covering healthy and faulty bearing conditions under various load levels, fault severities and rotating speeds. The experimental results show that the proposed method can serve as an intelligent bearing fault diagnosis system.
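
    A sketch of the wavelet-packet energy-distribution feature step described above, using PyWavelets on synthetic vibration signals; the sampling rate and signal content are assumptions, and the ANFIS classifier itself is outside the scope of the snippet (a generic classifier would be substituted downstream).

        import numpy as np
        import pywt

        def wpd_energy_features(signal, wavelet="db4", level=3):
            wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
            nodes = wp.get_level(level, order="freq")        # frequency-ordered sub-bands
            energies = np.array([np.sum(np.square(node.data)) for node in nodes])
            return energies / energies.sum()                 # relative energy per sub-band

        fs = 12000                                           # Hz, assumed sampling rate
        t = np.arange(0, 1.0, 1.0 / fs)
        rng = np.random.default_rng(4)
        healthy = np.sin(2 * np.pi * 30 * t) + 0.05 * rng.normal(size=t.size)
        faulty = healthy + 0.5 * np.sin(2 * np.pi * 3200 * t)   # added high-frequency content

        print(np.round(wpd_energy_features(healthy), 3))
        print(np.round(wpd_energy_features(faulty), 3))          # energy shifts toward higher bands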

  17. Toward functional classification of neuronal types.

    Science.gov (United States)

    Sharpee, Tatyana O

    2014-09-17

    How many types of neurons are there in the brain? This basic neuroscience question remains unsettled despite many decades of research. Classification schemes have been proposed based on anatomical, electrophysiological, or molecular properties. However, different schemes do not always agree with each other. This raises the question of whether one can classify neurons based on their function directly. For example, among sensory neurons, can a classification scheme be devised that is based on their role in encoding sensory stimuli? Here, theoretical arguments are outlined for how this can be achieved using information theory by looking at optimal numbers of cell types and paying attention to two key properties: correlations between inputs and noise in neural responses. This theoretical framework could help to map the hierarchical tree relating different neuronal classes within and across species. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Effectiveness Analysis of a Part-Time Rapid Response System During Operation Versus Nonoperation.

    Science.gov (United States)

    Kim, Youlim; Lee, Dong Seon; Min, Hyunju; Choi, Yun Young; Lee, Eun Young; Song, Inae; Park, Jong Sun; Cho, Young-Jae; Jo, You Hwan; Yoon, Ho Il; Lee, Jae Ho; Lee, Choon-Taek; Do, Sang Hwan; Lee, Yeon Joo

    2017-06-01

    To evaluate the effect of a part-time rapid response system on the occurrence rate of cardiopulmonary arrest by comparing the times of rapid response system operation versus nonoperation. Retrospective cohort study. A 1,360-bed tertiary care hospital. Adult patients admitted to the general ward were screened. Data were collected over 36 months from rapid response system implementation (October 2012 to September 2015) and more than 45 months before rapid response system implementation (January 2009 to September 2012). None. The rapid response system operates from 7 AM to 10 PM on weekdays and from 7 AM to 12 PM on Saturdays. Primary outcomes were the difference in cardiopulmonary arrest incidence between pre-rapid response system and post-rapid response system periods and whether the rapid response system operating time affects the cardiopulmonary arrest incidence. The overall cardiopulmonary arrest incidence (per 1,000 admissions) was 1.43. Although the number of admissions per month and the case-mix index increased (3,555.18 vs 4,564.72), the cardiopulmonary arrest incidence decreased during rapid response system operating times (0.82 vs 0.49/1,000 admissions; p = 0.001) but remained similar during rapid response system nonoperating times (0.77 vs 0.73/1,000 admissions; p = 0.729). The implementation of a part-time rapid response system reduced the cardiopulmonary arrest incidence based on the reduction of cardiopulmonary arrest during rapid response system operating times. Further analysis of the cost effectiveness of a part-time rapid response system is needed.

  19. IRIS COLOUR CLASSIFICATION SCALES – THEN AND NOW

    Science.gov (United States)

    Grigore, Mariana; Avram, Alina

    2015-01-01

    Eye colour is one of the most obvious phenotypic traits of an individual. Since the first documented classification scale developed in 1843, there have been numerous attempts to classify the iris colour. In the past centuries, iris colour classification scales have had various colour categories and mostly relied on comparison of an individual’s eye with painted glass eyes. Once photography techniques were refined, standard iris photographs replaced painted eyes, but this did not solve the problem of painted/printed colour variability over time. Early clinical scales were easy to use, but lacked objectivity and were not standardised or statistically tested for reproducibility. The era of automated iris colour classification systems came with technological development. Spectrophotometry, digital analysis of high-resolution iris images, hyperspectral analysis of the real human iris and dedicated iris colour analysis software have all accomplished objective, accurate iris colour classification, but are quite expensive and limited in use to the research environment. Iris colour classification systems have evolved continuously due to their use in a wide range of studies, especially in the fields of anthropology, epidemiology and genetics. Despite the wide range of existing scales, up to the present there has been no generally accepted iris colour classification scale. PMID:27373112

  20. Phenotype classification of zebrafish embryos by supervised learning.

    Directory of Open Access Journals (Sweden)

    Nathalie Jeanray

    Full Text Available Zebrafish is increasingly used to assess biological properties of chemical substances and thus is becoming a specific tool for toxicological and pharmacological studies. The effects of chemical substances on embryo survival and development are generally evaluated manually through microscopic observation by an expert and documented by several typical photographs. Here, we present a methodology to automatically classify brightfield images of wild-type zebrafish embryos according to their defects by using an image analysis approach based on supervised machine learning. We show that, compared to manual classification, automatic classification results in 90 to 100% agreement with consensus voting of biological experts for nine out of eleven considered defects in 3-day-old zebrafish larvae. Automation of the analysis and classification of zebrafish embryo pictures reduces the workload and time required of the biological expert and increases the reproducibility and objectivity of this classification.

  1. Challenges to the Use of Artificial Neural Networks for Diagnostic Classifications with Student Test Data

    Science.gov (United States)

    Briggs, Derek C.; Circi, Ruhan

    2017-01-01

    Artificial Neural Networks (ANNs) have been proposed as a promising approach for the classification of students into different levels of a psychological attribute hierarchy. Unfortunately, because such classifications typically rely upon internally produced item response patterns that have not been externally validated, the instability of ANN…

  2. Multispectral Image classification using the theories of neural networks

    International Nuclear Information System (INIS)

    Ardisasmita, M.S.; Subki, M.I.R.

    1997-01-01

    Image classification is one of the important parts of digital image analysis. The objective of image classification is to identify and group the features occurring in an image into one or several classes in terms of the objects they represent. Basic to the understanding of multispectral classification is the concept of the spectral response of an object as a function of the electromagnetic radiation and the wavelength of the spectrum. New approaches to classification have been developed to improve the results of analysis; these state-of-the-art classifiers are based upon the theories of neural networks. Neural network classifiers are algorithms which mimic the computational abilities of the human brain. Artificial neurons are simple emulations of biological neurons; they take in information from sensors or other artificial neurons, perform very simple operations on these data, and pass the results to other artificial neurons, allowing the network to recognize the spectral signature of each image pixel. Neural network image classification can be divided into supervised and unsupervised training procedures. In the supervised approach, examples of each cover type are located and the computer computes spectral signatures that are used to categorize all pixels in a digital image into several land cover classes. In unsupervised classification, spectral signatures are generated by mathematical grouping and no analyst-specified training data are required. Thus, in the supervised approach we define useful information categories and then examine their spectral separability; in the unsupervised approach the computer determines spectrally separable classes and then we define their information value.
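
    The supervised workflow summarized above (derive a spectral signature for each land-cover class from training pixels, then assign every pixel to the closest signature) can be illustrated with a minimum-distance-to-means classifier. The following Python sketch uses invented data and scikit-learn's NearestCentroid as a stand-in; it is not the classifier described in the record.

      # Minimal sketch of supervised multispectral classification by spectral signature.
      # Hypothetical data: a 100x100 image with 4 spectral bands and a few labelled pixels.
      import numpy as np
      from sklearn.neighbors import NearestCentroid

      rng = np.random.default_rng(0)
      image = rng.random((100, 100, 4))            # rows x cols x bands
      train_pixels = rng.random((60, 4))           # spectral vectors of labelled pixels
      train_labels = rng.integers(0, 3, size=60)   # 3 land-cover classes

      # "Spectral signature" of each class = mean band vector of its training pixels;
      # NearestCentroid assigns every pixel to the closest signature.
      clf = NearestCentroid().fit(train_pixels, train_labels)
      class_map = clf.predict(image.reshape(-1, 4)).reshape(100, 100)
      print(np.bincount(class_map.ravel()))        # pixels per land-cover class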

  3. Frequency-dependent effects of background noise on subcortical response timing.

    Science.gov (United States)

    Tierney, A; Parbery-Clark, A; Skoe, E; Kraus, N

    2011-12-01

    The addition of background noise to an auditory signal delays brainstem response timing. This effect has been extensively documented using manual peak selection. Peak picking, however, is impractical for large-scale studies of spectrotemporally complex stimuli, and leaves open the question of whether noise-induced delays are frequency-dependent or occur across the frequency spectrum. Here we use an automated, objective method to examine phase shifts between auditory brainstem responses to a speech sound (/da/) presented with and without background noise. We predicted that shifts in neural response timing would also be reflected in frequency-specific phase shifts. Our results indicate that the addition of background noise causes phase shifts across the subcortical response spectrum (70-1000 Hz). However, this noise-induced delay is not uniform such that some frequency bands show greater shifts than others: low-frequency phase shifts (300-500 Hz) are largest during the response to the consonant-vowel formant transition (/d/), while high-frequency shifts (720-1000 Hz) predominate during the response to the steady-state vowel (/a/). Most importantly, phase shifts occurring in specific frequency bands correlate strongly with shifts in the latencies of the predominant peaks in the auditory brainstem response, while phase shifts in other frequency bands do not. This finding confirms the validity of phase shift detection as an objective measure of timing differences and reveals that this method detects noise-induced shifts in timing that may not be captured by traditional peak latency measurements. Copyright © 2011 Elsevier B.V. All rights reserved.
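
    As a generic illustration of band-limited phase-shift estimation (not the authors' exact procedure), the sketch below compares a quiet and a noise-delayed synthetic response via their cross-spectrum within a chosen frequency band; all signals and parameters are invented.

      # Sketch: estimate the phase (latency) shift between two brainstem-like responses
      # in a chosen frequency band via the cross-spectrum. Synthetic signals only.
      import numpy as np

      fs = 10000                                   # sampling rate (Hz), assumed
      t = np.arange(0, 0.2, 1 / fs)
      quiet = np.sin(2 * np.pi * 400 * t)          # response without background noise
      noisy = np.sin(2 * np.pi * 400 * t - 0.3)    # same response, delayed in phase

      def band_phase_shift(x, y, fs, band):
          X, Y = np.fft.rfft(x), np.fft.rfft(y)
          freqs = np.fft.rfftfreq(len(x), 1 / fs)
          sel = (freqs >= band[0]) & (freqs <= band[1])
          cross = X[sel] * np.conj(Y[sel])         # cross-spectrum in the band
          return np.angle(cross.sum())             # net phase shift (radians)

      shift = band_phase_shift(quiet, noisy, fs, (300, 500))
      print(f"phase shift in 300-500 Hz band: {shift:.3f} rad")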

  4. Transforaminal epidural steroid injections influence Mechanical Diagnosis and Therapy (MDT) pain response classification in candidates for lumbar herniated disc surgery.

    Science.gov (United States)

    van Helvoirt, Hans; Apeldoorn, Adri T; Knol, Dirk L; Arts, Mark P; Kamper, Steven J; van Tulder, Maurits W; Ostelo, Raymond W

    2016-04-27

    Prospective cohort study. Although lumbar radiculopathy is regarded as a specific diagnosis, the most effective treatment strategy is unclear. Commonly used treatments include transforaminal epidural steroid injections (TESIs) and Mechanical Diagnosis & Therapy (MDT), but no studies have investigated the effectiveness of this combination. MDT differentiates pain centralization (C) from non-centralization (NC), which indicate good and poor prognostic validity, respectively. The main aims were 1) to determine changes in Mechanical Diagnosis and Therapy (MDT) pain response classifications after transforaminal epidural steroid injections (TESIs) in candidates for lumbar herniated disc surgery and 2) to evaluate differences in short- and long-term outcomes for patients with different pain response classifications. Candidates for lumbar herniated disc surgery were assessed with an MDT protocol and their pain response classified as centralizing or peripheralizing. For this study, only patients who showed a peripheralizing pain response at intake were eligible. All patients then received TESIs and were reassessed and classified using the MDT protocol into groups according to pain response (resolved, centralizing, peripheralizing with less pain, and peripheralizing with severe pain). After receiving targeted treatment based on pain response after TESIs, ranging from advice to MDT or surgery, follow-up assessments were completed at discharge and at 12 months. The primary outcomes were disability (Roland-Morris Disability Questionnaire [RMDQ] for Sciatica), pain severity in the leg (visual analogue scale [VAS], 0-100) and global perceived effect (GPE). Linear mixed models were used to determine between-group differences in outcome. A total of 77 patients with lumbar disc herniation and peripheralizing symptoms were included. Patients received an average of 2 (SD 0.7) TESIs. After TESIs, 17 patients (22%) were classified as peripheralizing with continuing severe pain. These patients

  5. Building the United States National Vegetation Classification

    Science.gov (United States)

    Franklin, S.B.; Faber-Langendoen, D.; Jennings, M.; Keeler-Wolf, T.; Loucks, O.; Peet, R.; Roberts, D.; McKerrow, A.

    2012-01-01

    The Federal Geographic Data Committee (FGDC) Vegetation Subcommittee, the Ecological Society of America Panel on Vegetation Classification, and NatureServe have worked together to develop the United States National Vegetation Classification (USNVC). The current standard was accepted in 2008 and fosters consistency across Federal agencies and non-federal partners for the description of each vegetation concept and its hierarchical classification. The USNVC is structured as a dynamic standard, where changes to types at any level may be proposed at any time as new information comes in. But, because much information already exists from previous work, the NVC partners first established methods for screening existing types to determine their acceptability with respect to the 2008 standard. Current efforts include a screening process to assign confidence to Association and Group level descriptions, and a review of the upper three levels of the classification. For the upper levels especially, the expectation is that the review process includes international scientists. Immediate future efforts include the review of remaining levels and the development of a proposal review process.

  6. The paradox of atheoretical classification

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2016-01-01

    A distinction can be made between “artificial classifications” and “natural classifications,” where artificial classifications may adequately serve some limited purposes, but natural classifications are overall most fruitful by allowing inference and thus many different purposes. There is strong...... support for the view that a natural classification should be based on a theory (and, of course, that the most fruitful theory provides the most fruitful classification). Nevertheless, atheoretical (or “descriptive”) classifications are often produced. Paradoxically, atheoretical classifications may...... be very successful. The best example of a successful “atheoretical” classification is probably the prestigious Diagnostic and Statistical Manual of Mental Disorders (DSM) since its third edition from 1980. Based on such successes one may ask: Should the claim that classifications ideally are natural...

  7. Integrating Human and Machine Intelligence in Galaxy Morphology Classification Tasks

    Science.gov (United States)

    Beck, Melanie Renee

    The large flood of data flowing from observatories presents significant challenges to astronomy and cosmology, challenges that will only be magnified by projects currently under development. Growth in both the volume and velocity of astrophysics data is accelerating: whereas the Sloan Digital Sky Survey (SDSS) has produced 60 terabytes of data in the last decade, the upcoming Large Synoptic Survey Telescope (LSST) plans to register 30 terabytes per night starting in the year 2020. Additionally, the Euclid Mission will acquire imaging for 5 × 10^7 resolvable galaxies. The field of galaxy evolution faces a particularly challenging future as complete understanding often cannot be reached without analysis of detailed morphological galaxy features. Historically, morphological analysis has relied on visual classification by astronomers, accessing the human brain's capacity for advanced pattern recognition. However, this accurate but inefficient method falters when confronted with many thousands (or millions) of images. In the SDSS era, efforts to automate morphological classifications of galaxies (e.g., Conselice et al., 2000; Lotz et al., 2004) have been reasonably successful and can distinguish between elliptical and disk-dominated galaxies with accuracies of 80%. While this is statistically very useful, a key problem with these methods is that they often cannot say which 80% of their samples are accurate. Furthermore, when confronted with the more complex task of identifying key substructure within galaxies, automated classification algorithms begin to fail. The Galaxy Zoo project uses a highly innovative approach to solving the scalability problem of visual classification. Displaying images of SDSS galaxies to volunteers via a simple and engaging web interface, www.galaxyzoo.org asks people to classify images by eye. Within the first year, hundreds of thousands of members of the general public had classified each of the 1 million SDSS galaxies an average of 40 times. Galaxy Zoo

  8. Transporter Classification Database (TCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Transporter Classification Database details a comprehensive classification system for membrane transport proteins known as the Transporter Classification (TC)...

  9. Can Previewing Sport-Specific Video Influence Reactive-Agility Response Time?

    Science.gov (United States)

    Holding, Ryan; Meir, Rudi; Zhou, Shi

    2017-02-01

    The purpose of this study was to examine whether a video-based warm-up could provide an acute performance benefit to response time for athletes in a sport-specific agility task. In addition, 2 learning strategies, explicit and implicit, were compared for their effectiveness in facilitating an improvement in sport-specific agility. Thirty representative male junior rugby union players (age 14-16 y, mean age 14.6 ± 1.09 y) were placed in 3 experimental groups (explicit, implicit, and control) and completed 2 intervention sessions. Testing sessions included preintervention testing, completion of the video-based warm-up intervention, and postintervention testing. A 3D motion-analysis system was used to assess response time in the testing battery. The athletes' response times on the pre- and postintervention tests were compared to determine the effectiveness of the video-based warm-up. A 2-way general linear model with repeated-measures analysis indicated that both the explicit (P = .030, d = 0.28) and implicit (P = .049, d = 0.33) groups significantly improved their response time as a result of the intervention, compared with the control group (P = .367, d = 0.08). The mean postintervention response time for the explicit group improved by 19.1% (from 0.246 s pre to 0.199 s post), and the implicit group improved by 15.7% (from 0.268 s to 0.226 s). Findings suggest that a video-based warm-up may provide an acute benefit to sport-specific agility performance for junior athletes.

  10. Excited-state absorption in tetrapyridyl porphyrins: comparing real-time and quadratic-response time-dependent density functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Bowman, David N. [Department of Chemistry; Supercomputing Institute and Chemical Theory Center; University of Minnesota; Minneapolis; USA; Asher, Jason C. [Department of Chemistry; Supercomputing Institute and Chemical Theory Center; University of Minnesota; Minneapolis; USA; Fischer, Sean A. [William R. Wiley Environmental Molecular Sciences Laboratory; Pacific Northwest National Laboratory; P.O. Box 999; Richland; USA; Cramer, Christopher J. [Department of Chemistry; Supercomputing Institute and Chemical Theory Center; University of Minnesota; Minneapolis; USA; Govind, Niranjan [William R. Wiley Environmental Molecular Sciences Laboratory; Pacific Northwest National Laboratory; P.O. Box 999; Richland; USA

    2017-01-01

    Three meso-substituted tetrapyridyl porphyrins (free base, Ni(II), and Cu(II)) were investigated for their optical limiting (OL) capabilities using real-time (RT-), linear-response (LR-), and quadratic-response (QR-) time-dependent density functional theory (TDDFT) methods.

  11. A comparison of two procedures for verbal response time fractionation

    Directory of Open Access Journals (Sweden)

    Lotje van der Linden

    2014-10-01

    Full Text Available To describe the mental architecture between stimulus and response, cognitive models often divide the stimulus-response (SR) interval into stages or modules. Predictions derived from such models are typically tested by focusing on the moment of response emission, through the analysis of response time (RT) distributions. To go beyond the single response event, we recently proposed a method to fractionate verbal RTs into two physiologically defined intervals that are assumed to reflect different processing stages. The analysis of the durations of these intervals can be used to study the interaction between cognitive and motor processing during speech production. Our method is inspired by studies on decision making that used manual responses, in which RTs were fractionated into a premotor time (PMT), assumed to reflect cognitive processing, and a motor time (MT), assumed to reflect motor processing. In these studies, surface EMG activity was recorded from participants' response fingers. EMG onsets, reflecting the initiation of a motor response, were used as the point of fractionation. We adapted this method to speech-production research by measuring verbal responses in combination with EMG activity from facial muscles involved in articulation. However, in contrast to button-press tasks, the complex task of producing speech often resulted in multiple EMG bursts within the SR interval. This observation forced us to decide how to operationalize the point of fractionation: as the first EMG burst after stimulus onset (the stimulus-locked approach), or as the EMG burst that is coupled to the vocal response (the response-locked approach). The point of fractionation has direct consequences for how much of the overall task effect is captured by either interval. Therefore, the purpose of the current paper was to compare both onset-detection procedures in order to make an informed decision about which of the two is preferable. We concluded in favour of the response-locked approach.
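
    A minimal sketch of the fractionation idea follows: detect EMG onsets between stimulus and vocal response and split the RT into a premotor time (PMT) and a motor time (MT), taking either the first onset after the stimulus (stimulus-locked) or the last onset, coupled to the response (response-locked). The threshold rule and all numbers are assumptions for illustration, not the authors' procedure.

      # Sketch: fractionate a verbal RT into premotor time (PMT) and motor time (MT)
      # using EMG onsets. Threshold rule and data are illustrative assumptions.
      import numpy as np

      fs = 2000                                     # EMG sampling rate (Hz), assumed
      rt_ms = 600                                   # vocal response time after stimulus (ms)
      rng = np.random.default_rng(1)
      emg = rng.normal(0, 1, int(fs * rt_ms / 1000))
      emg[500:700] += rng.normal(0, 6, 200)         # spurious early burst
      emg[900:] += rng.normal(0, 8, len(emg) - 900) # burst coupled to the vocal response

      def emg_onsets(signal, fs, k=3.0, win_ms=20):
          n = int(fs * win_ms / 1000)                   # smoothing window in samples
          env = np.convolve(np.abs(signal), np.ones(n) / n, "same")
          above = env > k * np.median(env)
          return np.flatnonzero(above[1:] & ~above[:-1]) / fs * 1000   # onset times (ms)

      onsets = emg_onsets(emg, fs)
      pmt_stim_locked = onsets[0]                   # first burst after the stimulus
      pmt_resp_locked = onsets[-1]                  # burst coupled to the response
      print(pmt_stim_locked, rt_ms - pmt_stim_locked)   # PMT, MT (stimulus-locked)
      print(pmt_resp_locked, rt_ms - pmt_resp_locked)   # PMT, MT (response-locked)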

  12. Improvement of the classification accuracy in discriminating diabetic retinopathy by multifocal electroretinogram analysis

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The multifocal electroretinogram (mfERG) is a newly developed electrophysiological technique. In this paper, a classification method is proposed for early diagnosis of diabetic retinopathy using mfERG data. MfERG records were obtained from the eyes of healthy individuals and patients with diabetes at different stages. For each mfERG record, 103 local responses were extracted. The amplitude value of each point on all the mfERG local responses was taken as a potential feature for classifying the experimental subjects. Feature subsets were selected from the feature space by comparing the inter-intra distance. Based on the selected feature subset, Fisher's linear classifiers were trained, and the final classification decision for each record was made by voting over all the classifiers' outputs. When the method was applied to classify all experimental subjects, very low error rates were achieved. Some crucial properties of the diabetic retinopathy classification method are also discussed.
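
    The ensemble idea in this record (several Fisher linear classifiers trained on selected feature subsets, combined by majority vote) can be sketched as follows; the data are synthetic and the inter-intra-distance subset selection is replaced by a simple fixed partition of the 103 features.

      # Sketch: several Fisher (linear discriminant) classifiers trained on different
      # feature subsets, combined by majority vote. Data and subset choice are illustrative.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(2)
      X = rng.normal(size=(80, 103))                # 103 amplitude features per mfERG record
      y = rng.integers(0, 2, size=80)               # healthy vs diabetic (synthetic labels)

      subsets = [np.arange(i, i + 10) for i in range(0, 100, 10)]   # stand-in for inter-intra selection
      classifiers = [LinearDiscriminantAnalysis().fit(X[:, s], y) for s in subsets]

      votes = np.array([clf.predict(X[:, s]) for clf, s in zip(classifiers, subsets)])
      majority = (votes.mean(axis=0) > 0.5).astype(int)             # final decision by voting
      print("training error:", np.mean(majority != y))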

  13. Rapid hyperspectral image classification to enable autonomous search systems

    Directory of Open Access Journals (Sweden)

    Raj Bridgelal

    2016-11-01

    Full Text Available The emergence of lightweight full-frame hyperspectral cameras is destined to enable autonomous search vehicles in the air, on the ground and in water. Self-contained and long-endurance systems will yield important new applications, for example, in emergency response and the timely identification of environmental hazards. One missing capability is rapid classification of hyperspectral scenes so that search vehicles can immediately take actions to verify potential targets. Onsite verifications minimise false positives and preclude the expense of repeat missions. Verifications will require enhanced image quality, which is achievable by either moving closer to the potential target or by adjusting the optical system. Such a solution, however, is currently impractical for small mobile platforms with finite energy sources. Rapid classifications with current methods demand large computing capacity that will quickly deplete the on-board battery or fuel. To develop the missing capability, the authors propose a low-complexity hyperspectral image classifier that approaches the performance of prevalent classifiers. This research determines that the new method will require at least 19-fold less computing capacity than the prevalent classifier. To assess relative performances, the authors developed a benchmark that compares a statistic of library endmember separability in their respective feature spaces.

  14. Indexing Density Models for Incremental Learning and Anytime Classification on Data Streams

    DEFF Research Database (Denmark)

    Seidl, Thomas; Assent, Ira; Kranen, Philipp

    2009-01-01

    Classification of streaming data faces three basic challenges: it has to deal with huge amounts of data, the varying time between two stream data items must be used best possible (anytime classification) and additional training data must be incrementally learned (anytime learning) for applying...... to the individual object to be classified) a hierarchy of mixture densities that represent kernel density estimators at successively coarser levels. Our probability density queries together with novel classification improvement strategies provide the necessary information for very effective classification at any...... point of interruption. Moreover, we propose a novel evaluation method for anytime classification using Poisson streams and demonstrate the anytime learning performance of the Bayes tree....

  15. MO-DE-207B-03: Improved Cancer Classification Using Patient-Specific Biological Pathway Information Via Gene Expression Data

    Energy Technology Data Exchange (ETDEWEB)

    Young, M; Craft, D [Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States)

    2016-06-15

    Purpose: To develop an efficient, pathway-based classification system using network biology statistics to assist in patient-specific response predictions to radiation and drug therapies across multiple cancer types. Methods: We developed PICS (Pathway Informed Classification System), a novel two-step cancer classification algorithm. In PICS, a matrix m of mRNA expression values for a patient cohort is collapsed into a matrix p of biological pathways. The entries of p, which we term pathway scores, are obtained from either principal component analysis (PCA), normal tissue centroid (NTC), or gene expression deviation (GED). The pathway score matrix is clustered using both k-means and hierarchical clustering, and a clustering is judged by how well it groups patients into distinct survival classes. The most effective pathway scoring/clustering combination, per clustering p-value, thus generates various ‘signatures’ for conventional and functional cancer classification. Results: PICS successfully regularized large dimension gene data, separated normal and cancerous tissues, and clustered a large patient cohort spanning six cancer types. Furthermore, PICS clustered patient cohorts into distinct, statistically-significant survival groups. For a suboptimally-debulked ovarian cancer set, the pathway-classified Kaplan-Meier survival curve (p = .00127) showed significant improvement over that of a prior gene expression-classified study (p = .0179). For a pancreatic cancer set, the pathway-classified Kaplan-Meier survival curve (p = .00141) showed significant improvement over that of a prior gene expression-classified study (p = .04). Pathway-based classification confirmed biomarkers for the pyrimidine, WNT-signaling, glycerophosphoglycerol, beta-alanine, and panthothenic acid pathways for ovarian cancer. Despite its robust nature, PICS requires significantly less run time than current pathway scoring methods. Conclusion: This work validates the PICS method to improve
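
    A minimal sketch of the two-step PICS-style idea, under the assumption of the PCA scoring option: collapse a patients-by-genes expression matrix into pathway scores (first principal component of each pathway's genes) and cluster patients on the score matrix. Pathway gene sets and data are synthetic placeholders, and the survival comparison (e.g. a log-rank test) is omitted.

      # Sketch of the two-step idea: genes -> pathway scores (PCA variant),
      # then cluster patients on the pathway-score matrix. All data are synthetic.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(3)
      expr = rng.normal(size=(120, 5000))           # patients x genes (mRNA expression)
      pathways = {f"pathway_{i}": rng.choice(5000, size=50, replace=False) for i in range(30)}

      # Pathway score = projection of each patient onto the first principal component
      # of that pathway's genes (one of the scoring options mentioned in the abstract).
      scores = np.column_stack([
          PCA(n_components=1).fit_transform(expr[:, genes]).ravel()
          for genes in pathways.values()
      ])                                            # patients x pathways

      clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
      print(np.bincount(clusters))                  # candidate survival groups to compare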

  16. Monitoring nanotechnology using patent classifications: an overview and comparison of nanotechnology classification schemes

    Energy Technology Data Exchange (ETDEWEB)

    Jürgens, Björn, E-mail: bjurgens@agenciaidea.es [Agency of Innovation and Development of Andalusia, CITPIA PATLIB Centre (Spain); Herrero-Solana, Victor, E-mail: victorhs@ugr.es [University of Granada, SCImago-UGR (SEJ036) (Spain)

    2017-04-15

    Patents are an essential information source used to monitor, track, and analyze nanotechnology. When it comes to searching nanotechnology-related patents, a keyword search is often incomplete and struggles to cover such an interdisciplinary discipline. Patent classification schemes can reveal far better results since they are assigned by experts who classify the patent documents according to their technology. In this paper, we present the most important classifications to search nanotechnology patents and analyze how nanotechnology is covered in the main patent classification systems used in search systems nowadays: the International Patent Classification (IPC), the United States Patent Classification (USPC), and the Cooperative Patent Classification (CPC). We conclude that nanotechnology has significantly better patent coverage in the CPC, since considerably more nanotechnology documents were retrieved than by using the other classifications, and we thus recommend its use for all professionals involved in nanotechnology patent searches.

  17. Monitoring nanotechnology using patent classifications: an overview and comparison of nanotechnology classification schemes

    International Nuclear Information System (INIS)

    Jürgens, Björn; Herrero-Solana, Victor

    2017-01-01

    Patents are an essential information source used to monitor, track, and analyze nanotechnology. When it comes to searching nanotechnology-related patents, a keyword search is often incomplete and struggles to cover such an interdisciplinary discipline. Patent classification schemes can reveal far better results since they are assigned by experts who classify the patent documents according to their technology. In this paper, we present the most important classifications to search nanotechnology patents and analyze how nanotechnology is covered in the main patent classification systems used in search systems nowadays: the International Patent Classification (IPC), the United States Patent Classification (USPC), and the Cooperative Patent Classification (CPC). We conclude that nanotechnology has significantly better patent coverage in the CPC, since considerably more nanotechnology documents were retrieved than by using the other classifications, and we thus recommend its use for all professionals involved in nanotechnology patent searches.

  18. Co-occurrence Models in Music Genre Classification

    DEFF Research Database (Denmark)

    Ahrendt, Peter; Goutte, Cyril; Larsen, Jan

    2005-01-01

    Music genre classification has been investigated using many different methods, but most of them build on probabilistic models of feature vectors x\\_r which only represent the short time segment with index r of the song. Here, three different co-occurrence models are proposed which instead consider...... genre data set with a variety of modern music. The basis was a so-called AR feature representation of the music. Besides the benefit of having proper probabilistic models of the whole song, the lowest classification test errors were found using one of the proposed models....

  19. Novelty detection for breast cancer image classification

    Science.gov (United States)

    Cichosz, Pawel; Jagodziński, Dariusz; Matysiewicz, Mateusz; Neumann, Łukasz; Nowak, Robert M.; Okuniewski, Rafał; Oleszkiewicz, Witold

    2016-09-01

    Using classification learning algorithms for medical applications may require not only refined model creation techniques and careful unbiased model evaluation, but also detecting the risk of misclassification at the time of model application. This is addressed by novelty detection, which identifies instances for which the training set is not sufficiently representative and for which it may be safer to restrain from classification and request a human expert diagnosis. The paper investigates two techniques for isolated instance identification, based on clustering and one-class support vector machines, which represent two different approaches to multidimensional outlier detection. The prediction quality for isolated instances in breast cancer image data is evaluated using the random forest algorithm and found to be substantially inferior to the prediction quality for non-isolated instances. Each of the two techniques is then used to create a novelty detection model which can be combined with a classification model and used at the time of prediction to detect instances for which the latter cannot be reliably applied. Novelty detection is demonstrated to improve random forest prediction quality and argued to deserve further investigation in medical applications.
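
    The combination described above, a novelty detector gating a classifier at prediction time, can be sketched with a one-class SVM and a random forest: instances flagged as isolated with respect to the training set are referred to a human expert instead of being classified. Data and parameters are illustrative placeholders.

      # Sketch: one-class SVM novelty detection gating a random-forest classifier,
      # so that isolated (poorly represented) cases are referred to an expert.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.svm import OneClassSVM

      rng = np.random.default_rng(4)
      X_train = rng.normal(size=(300, 20))          # image features of training lesions
      y_train = rng.integers(0, 2, size=300)        # benign vs malignant (synthetic)
      X_new = np.vstack([rng.normal(size=(5, 20)),
                         rng.normal(loc=6.0, size=(3, 20))])   # last 3 lie far from the training data

      forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
      novelty = OneClassSVM(nu=0.05, gamma="scale").fit(X_train)

      for x in X_new:
          if novelty.predict(x.reshape(1, -1))[0] == -1:        # -1 = outlier w.r.t. training set
              print("refer to human expert")
          else:
              print("predicted class:", forest.predict(x.reshape(1, -1))[0])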

  20. [The importance of classifications in psychiatry].

    Science.gov (United States)

    Lempérière, T

    1995-12-01

    The classifications currently used in psychiatry have different aims: to facilitate communication between researchers and clinicians at national and international levels through the use of a common language, or at least a clearly and precisely defined nomenclature; to provide a nosographical reference system which can be used in practice (diagnosis, prognosis, treatment); to optimize research by ensuring that sample cases are as homogeneous as possible; to facilitate statistical records for public health institutions. A classification is of practical interest only if it is reliable, valid and acceptable to all potential users. In recent decades, there has been a considerable systematic and coordinated effort to improve the methodological approach to classification and categorization in the field of psychiatry, including attempts to create operational definitions, field trials of inter-assessor reliability, attempts to validate the selected nosological categories by analysis of correlation between progression, treatment response, family history and additional examinations. The introduction of glossaries, and particularly of diagnostic criteria, marked a decisive step in this new approach. The key problem remains that of the validity of diagnostic criteria. Ideally, these should be based on demonstrable etiologic or pathogenic data, but such information is rarely available in psychiatry. Current classifications rely on the use of extremely diverse elements in differing degrees: descriptive criteria, evolutive criteria, etiopathogenic criteria, psychopathogenic criteria, etc. Certain syndrome-based classifications such as DSM III and its successors aim to be atheoretical and pragmatic. Others, such as ICD-10, while more eclectic than the different versions of DSM, follow suit by abandoning the terms "disease" and "illness" in favor of the more consensual "disorder". The legitimacy of classifications in the field of psychiatry has been fiercely contested, being

  1. Small-scale classification schemes

    DEFF Research Database (Denmark)

    Hertzum, Morten

    2004-01-01

    Small-scale classification schemes are used extensively in the coordination of cooperative work. This study investigates the creation and use of a classification scheme for handling the system requirements during the redevelopment of a nation-wide information system. This requirements...... classification inherited a lot of its structure from the existing system and rendered requirements that transcended the framework laid out by the existing system almost invisible. As a result, the requirements classification became a defining element of the requirements-engineering process, though its main...... effects remained largely implicit. The requirements classification contributed to constraining the requirements-engineering process by supporting the software engineers in maintaining some level of control over the process. This way, the requirements classification provided the software engineers...

  2. Developing a New Zealand casemix classification for mental health services.

    Science.gov (United States)

    Eagar, Kathy; Gaines, Phillipa; Burgess, Philip; Green, Janette; Bower, Alison; Buckingham, Bill; Mellsop, Graham

    2004-10-01

    This study aimed to develop a casemix classification of characteristics of New Zealand mental health services users. Over a six month period, patient information, staff time and service costs were collected from 8 district health boards. This information was analysed seeking the classification of service user characteristics which best predicted the cost drivers of the services provided. A classification emerged which explained more than two thirds of the variance in service user costs. It can be used to inform service management and funding, but it is premature to have it determine funding.

  3. FULLY CONVOLUTIONAL NETWORKS FOR GROUND CLASSIFICATION FROM LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    A. Rizaldy

    2018-05-01

    Full Text Available Deep Learning has been widely used for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been studied recently. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22 % total error, 4.10 % type I error, and 15.07 % type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while type II error is slightly higher). The method was also tested on very high point density LIDAR point clouds, resulting in 4.02 % total error, 2.15 % type I error and 6.14 % type II error.
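
    The key conversion step, rasterizing the whole point cloud into a single image that a pixel-wise network can classify, can be sketched as a simple binning of LIDAR returns into a 2-D grid; the grid size and the minimum-height feature are assumptions, not the paper's exact conversion.

      # Sketch: rasterize a whole LIDAR point cloud into one image (minimum height per cell),
      # the kind of single-image input a pixel-wise FCN could classify. Parameters are assumed.
      import numpy as np

      rng = np.random.default_rng(5)
      points = rng.uniform([0, 0, 0], [100, 100, 30], size=(100000, 3))  # x, y, z in metres

      cell = 1.0                                           # raster resolution (m)
      cols = np.floor(points[:, 0] / cell).astype(int)
      rows = np.floor(points[:, 1] / cell).astype(int)
      h, w = rows.max() + 1, cols.max() + 1

      min_z = np.full((h, w), np.inf)
      np.minimum.at(min_z, (rows, cols), points[:, 2])     # lowest return per cell
      min_z[np.isinf(min_z)] = np.nan                      # empty cells

      print(min_z.shape, np.nanmean(min_z))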

  4. A simplified immunohistochemical classification of skeletal muscle fibres in mouse

    Directory of Open Access Journals (Sweden)

    M. Kammoun

    2014-06-01

    Full Text Available The classification of muscle fibres is of particular interest for the study of the skeletal muscle properties in a wide range of scientific fields, especially animal phenotyping. It is therefore important to define a reliable method for classifying fibre types. The aim of this study was to establish a simplified method for the immunohistochemical classification of fibres in mouse. To carry it out, we first tested a combination of several anti myosin heavy chain (MyHC antibodies in order to choose a minimum number of antibodies to implement a semi-automatic classification. Then, we compared the classification of fibres to the MyHC electrophoretic pattern on the same samples. Only two anti MyHC antibodies on serial sections with the fluorescent labeling of the Laminin were necessary to classify properly fibre types in Tibialis Anterior and Soleus mouse muscles in normal physiological conditions. This classification was virtually identical to the classification realized by the electrophoretic separation of MyHC. This immunohistochemical classification can be applied to the total area of Tibialis Anterior and Soleus mouse muscles. Thus, we provide here a useful, simple and time-efficient method for immunohistochemical classification of fibres, applicable for research in mouse

  5. Free classification of regional dialects of American English

    Science.gov (United States)

    Clopper, Cynthia G.; Pisoni, David B.

    2011-01-01

    Recent studies have found that naïve listeners perform poorly in forced-choice dialect categorization tasks. However, the listeners' error patterns in these tasks reveal systematic confusions between phonologically similar dialects. In the present study, a free classification procedure was used to measure the perceptual similarity structure of regional dialect variation in the United States. In two experiments, participants listened to a set of short English sentences produced by male talkers only (Experiment 1) and by male and female talkers (Experiment 2). The listeners were instructed to group the talkers by regional dialect into as many groups as they wanted with as many talkers in each group as they wished. Multidimensional scaling analyses of the data revealed three primary dimensions of perceptual similarity (linguistic markedness, geography, and gender). In addition, a comparison of the results obtained from the free classification task to previous results using the same stimulus materials in six-alternative forced-choice categorization tasks revealed that response biases in the six-alternative task were reduced or eliminated in the free classification task. Thus, the results obtained with the free classification task in the current study provided further evidence that the underlying structure of perceptual dialect category representations reflects important linguistic and sociolinguistic factors. PMID:21423862

  6. Free classification of regional dialects of American English.

    Science.gov (United States)

    Clopper, Cynthia G; Pisoni, David B

    2007-07-01

    Recent studies have found that naïve listeners perform poorly in forced-choice dialect categorization tasks. However, the listeners' error patterns in these tasks reveal systematic confusions between phonologically similar dialects. In the present study, a free classification procedure was used to measure the perceptual similarity structure of regional dialect variation in the United States. In two experiments, participants listened to a set of short English sentences produced by male talkers only (Experiment 1) and by male and female talkers (Experiment 2). The listeners were instructed to group the talkers by regional dialect into as many groups as they wanted with as many talkers in each group as they wished. Multidimensional scaling analyses of the data revealed three primary dimensions of perceptual similarity (linguistic markedness, geography, and gender). In addition, a comparison of the results obtained from the free classification task to previous results using the same stimulus materials in six-alternative forced-choice categorization tasks revealed that response biases in the six-alternative task were reduced or eliminated in the free classification task. Thus, the results obtained with the free classification task in the current study provided further evidence that the underlying structure of perceptual dialect category representations reflects important linguistic and sociolinguistic factors.

  7. Reclaiming Spare Capacity and Improving Aperiodic Response Times in Real-Time Environments

    Directory of Open Access Journals (Sweden)

    Liu Xue

    2011-01-01

    Full Text Available Abstract Scheduling recurring task sets that allow some instances of the tasks to be skipped produces holes in the schedule which are nonuniformly distributed. Similarly, when the recurring tasks are not strictly periodic but are sporadic, there is extra processor bandwidth arising because of irregular job arrivals. The additional computation capacity that results from skips or sporadic tasks can be reclaimed to service aperiodic task requests efficiently and quickly. We present techniques for improving the response times of aperiodic tasks by identifying nonuniformly distributed spare capacity—because of skips or sporadic tasks—in the schedule and adding such extra capacity to the capacity queue of a BASH server. These gaps can account for a significant portion of aperiodic capacity, and their reclamation results in considerable improvement to aperiodic response times. We present two schemes: NCLB-CBS, which performs well in periodic real-time environments with firm tasks, and NCLB-CUS, which can be deployed when the basic task set to schedule is sporadic. Evaluation via simulations and implementation suggests that performance improvements for aperiodic tasks can be obtained with limited additional overhead.

  8. Time response measurements of LASL diagnostic detectors

    International Nuclear Information System (INIS)

    Hocker, L.P.

    1970-07-01

    The measurement and data analysis techniques developed under the Los Alamos Scientific Laboratory's detector improvement program were used to characterize the time and frequency response of selected LASL Compton, fluor-photodiode (NPD), and fluor-photomultiplier (NPM) diagnostic detectors. Data acquisition procedures and analysis methods presently in use are summarized, and detector time and frequency data obtained using the EG and G/AEC electron linear accelerator fast pulse (approximately 50 psec FWHM) as the incident radiation driving function are presented. (U.S.)

  9. Music genre classification using temporal domain features

    Science.gov (United States)

    Shiu, Yu; Kuo, C.-C. Jay

    2004-10-01

    Music genre provides an efficient way to index songs in a music database and can be used as an effective means to retrieve music of a similar type, i.e. content-based music retrieval. In addition to other features, the temporal domain features of a music signal are exploited to increase the classification rate in this research. Three temporal techniques are examined in depth. First, the hidden Markov model (HMM) is used to emulate the time-varying properties of music signals. Second, to further increase the classification rate, we propose another feature set that focuses on the residual part of music signals. Third, the overall classification rate is enhanced by classifying smaller segments from a test material individually and making the decision via majority voting. Experimental results are given to demonstrate the performance of the proposed techniques.
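
    The third technique, classifying short segments of a test song individually and deciding the song-level genre by majority vote, is sketched below with placeholder features and a generic per-segment classifier.

      # Sketch: song-level genre decision by majority vote over per-segment predictions.
      # Features and classifier are placeholders for the paper's temporal-domain features.
      import numpy as np
      from collections import Counter
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(6)
      X_train = rng.normal(size=(500, 40))          # feature vectors of training segments
      y_train = rng.integers(0, 4, size=500)        # 4 genres
      segment_clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

      test_song_segments = rng.normal(size=(30, 40))            # 30 segments from one test song
      per_segment = segment_clf.predict(test_song_segments)
      song_genre = Counter(per_segment).most_common(1)[0][0]    # majority vote
      print(per_segment[:10], "->", song_genre)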

  10. Recurrent neural networks for breast lesion classification based on DCE-MRIs

    Science.gov (United States)

    Antropova, Natasha; Huynh, Benjamin; Giger, Maryellen

    2018-02-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays a significant role in breast cancer screening, cancer staging, and monitoring response to therapy. Recently, deep learning methods are being rapidly incorporated into image-based breast cancer diagnosis and prognosis. However, most of the current deep learning methods make clinical decisions based on 2-dimensional (2D) or 3D images and are not well suited for temporal image data. In this study, we develop a deep learning methodology that enables integration of clinically valuable temporal components of DCE-MRIs into deep learning-based lesion classification. Our work is performed on a database of 703 DCE-MRI cases for the task of distinguishing benign and malignant lesions, and uses the area under the ROC curve (AUC) as the performance metric in conducting that task. We train a recurrent neural network, specifically a long short-term memory network (LSTM), on sequences of image features extracted from the dynamic MRI sequences. These features are extracted with VGGNet, a convolutional neural network pre-trained on ImageNet, a large dataset of natural images. The features are obtained from various levels of the network, to capture low-, mid-, and high-level information about the lesion. Compared to a classification method that takes as input only images at a single time-point (yielding an AUC = 0.81 (se = 0.04)), our LSTM method improves lesion classification with an AUC of 0.85 (se = 0.03).
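
    A minimal PyTorch sketch of the sequence-level idea: feed a sequence of per-time-point feature vectors (stand-ins for the VGGNet features of the DCE-MRI time points) through an LSTM and classify the final hidden state as benign or malignant. Dimensions, data, and the single training step are illustrative only.

      # Sketch: LSTM over a sequence of per-time-point feature vectors (stand-ins for
      # VGGNet features of DCE-MRI time points), final hidden state -> benign/malignant.
      import torch
      import torch.nn as nn

      class LesionLSTM(nn.Module):
          def __init__(self, feat_dim=512, hidden=64):
              super().__init__()
              self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
              self.head = nn.Linear(hidden, 1)

          def forward(self, x):                      # x: (batch, time_points, feat_dim)
              _, (h_n, _) = self.lstm(x)
              return self.head(h_n[-1]).squeeze(-1)  # one malignancy logit per lesion

      model = LesionLSTM()
      features = torch.randn(8, 5, 512)              # 8 lesions, 5 DCE time points, 512-dim features
      labels = torch.randint(0, 2, (8,)).float()

      loss = nn.BCEWithLogitsLoss()(model(features), labels)
      loss.backward()                                # one illustrative training step (no optimizer shown)
      print(float(loss))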

  11. Normal spectrum of pulmonary parametric response map to differentiate lung collapsibility: distribution of densitometric classifications in healthy adult volunteers

    International Nuclear Information System (INIS)

    Silva, Mario; Nemec, Stefan F.; Dufresne, Valerie; Occhipinti, Mariaelena; Heidinger, Benedikt H.; Bankier, Alexander A.; Chamberlain, Ryan

    2016-01-01

    Pulmonary parametric response map (PRM) was proposed for quantitative densitometric phenotypization of chronic obstructive pulmonary disease. However, little is known about this technique in healthy subjects. The purpose of this study was to describe the normal spectrum of densitometric classification of pulmonary PRM in a group of healthy adults. 15 healthy volunteers underwent spirometrically monitored chest CT at total lung capacity (TLC) and functional residual capacity (FRC). The paired CT scans were analyzed by PRM for voxel-by-voxel characterization of lung parenchyma according to 4 densitometric classifications: normal lung (TLC ≥ -950 HU, FRC ≥ -856 HU); expiratory low attenuation area (LAA) (TLC ≥ -950 HU, FRC < -856 HU); dual LAA (TLC<-950 HU, FRC < -856 HU); uncharacterized (TLC < -950 HU, FRC ≥ -856 HU). PRM spectrum was 78 % ± 10 % normal lung, 20 % ± 8 % expiratory LAA, and 1 % ± 1 % dual LAA. PRM was similar between genders, there was moderate correlation between dual LAA and spirometrically assessed TLC (R = 0.531; p = 0.042), and between expiratory LAA and Vol Exp/Insp ratio (R = -0.572; p = 0.026). PRM reflects the predominance of normal lung parenchyma in a group of healthy volunteers. However, PRM also confirms the presence of physiological expiratory LAA seemingly related to air trapping and a minimal amount of dual LAA likely reflecting emphysema. (orig.)
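
    Because the four densitometric classes are joint thresholds on the co-registered inspiratory (TLC) and expiratory (FRC) attenuation values quoted above, the voxel-wise labelling can be sketched directly; the arrays below are synthetic placeholders for registered CT volumes (in practice the labelling is restricted to a lung mask).

      # Sketch: PRM voxel labelling from co-registered TLC/FRC attenuation (HU) maps,
      # using the four threshold-based classes listed in the abstract. Data are synthetic.
      import numpy as np

      rng = np.random.default_rng(7)
      tlc = rng.uniform(-1000, -700, size=(64, 64, 64))   # inspiratory CT (HU), registered
      frc = tlc + rng.uniform(50, 200, size=tlc.shape)    # expiratory CT (HU), registered

      labels = np.empty(tlc.shape, dtype="U16")
      labels[(tlc >= -950) & (frc >= -856)] = "normal"
      labels[(tlc >= -950) & (frc < -856)] = "expiratory LAA"
      labels[(tlc < -950) & (frc < -856)] = "dual LAA"
      labels[(tlc < -950) & (frc >= -856)] = "uncharacterized"

      classes, counts = np.unique(labels, return_counts=True)
      print(dict(zip(classes, 100 * counts / labels.size)))   # percentage of voxels per class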

  12. Gynecomastia Classification for Surgical Management: A Systematic Review and Novel Classification System.

    Science.gov (United States)

    Waltho, Daniel; Hatchell, Alexandra; Thoma, Achilleas

    2017-03-01

    Gynecomastia is a common deformity of the male breast, where certain cases warrant surgical management. There are several surgical options, which vary depending on the breast characteristics. To guide surgical management, several classification systems for gynecomastia have been proposed. A systematic review was performed to (1) identify all classification systems for the surgical management of gynecomastia, and (2) determine the adequacy of these classification systems to appropriately categorize the condition for surgical decision-making. The search yielded 1012 articles, and 11 articles were included in the review. Eleven classification systems in total were ascertained, and a total of 10 unique features were identified: (1) breast size, (2) skin redundancy, (3) breast ptosis, (4) tissue predominance, (5) upper abdominal laxity, (6) breast tuberosity, (7) nipple malposition, (8) chest shape, (9) absence of sternal notch, and (10) breast skin elasticity. On average, classification systems included two or three of these features. Breast size and ptosis were the most commonly included features. Based on their review of the current classification systems, the authors believe the ideal classification system should be universal and cater to all causes of gynecomastia; be surgically useful and easy to use; and should include a comprehensive set of clinically appropriate patient-related features, such as breast size, breast ptosis, tissue predominance, and skin redundancy. None of the current classification systems appears to fulfill these criteria.

  13. Unsupervised classification of variable stars

    Science.gov (United States)

    Valenzuela, Lucas; Pichara, Karim

    2018-03-01

    During the past 10 years, a considerable amount of effort has been made to develop algorithms for the automatic classification of variable stars. That has been achieved primarily by applying machine learning methods to photometric data sets in which objects are represented as light curves. Classifiers require training sets to learn the underlying patterns that allow the separation among classes. Unfortunately, building training sets is an expensive process that demands a lot of human effort. Every time data come from new surveys, the only available training instances are the ones that have a cross-match with previously labelled objects, consequently generating insufficient training sets compared with the large amounts of unlabelled sources. In this work, we present an algorithm that performs unsupervised classification of variable stars, relying only on the similarity among light curves. We tackle the unsupervised classification problem by proposing an untraditional approach. Instead of trying to match classes of stars with clusters found by a clustering algorithm, we propose a query-based method where astronomers can find groups of variable stars ranked by similarity. We also develop a fast similarity function specific for light curves, based on a novel data structure that allows scaling the search over the entire data set of unlabelled objects. Experiments show that our unsupervised model achieves high accuracy in the classification of different types of variable stars and that the proposed algorithm scales up to massive amounts of light curves.

  14. Comparison study of time history and response spectrum responses for multiply supported piping systems

    International Nuclear Information System (INIS)

    Wang, Y.K.; Subudhi, M.; Bezler, P.

    1983-01-01

    In the past decade, several investigators have studied the problem of independent support excitation of a multiply supported piping system to identify the real need for such an analysis. This approach offers an increase in accuracy at a small increase in computational cost. To assess the method, studies based on the response spectrum approach using independent support motions for each group of commonly connected supports were performed. The results obtained from this approach were compared with the conventional envelope spectrum and time history solutions. The present study includes a mathematical formulation of the independent support motion analysis method suitable for implementation into an existing all-purpose piping code, PSAFE2, and a comparison of the solutions for some typical piping systems using both Time History and Response Spectrum Methods. The results obtained from the Response Spectrum Methods represent the upper-bound solution at most points in the piping system. Similarly, the Seismic Anchor Movement analysis based on the SRP method overpredicts the responses near the support points and underpredicts them at points away from the supports.

  15. Information gathering for CLP classification

    Directory of Open Access Journals (Sweden)

    Ida Marcello

    2011-01-01

    Full Text Available Regulation 1272/2008 includes provisions for two types of classification: harmonised classification and self-classification. The harmonised classification of substances is decided at Community level, and a list of harmonised classifications is included in Annex VI of the classification, labelling and packaging (CLP) Regulation. If a chemical substance is not included in the harmonised classification list, it must be self-classified, based on available information, according to the requirements of Annex I of the CLP Regulation. CLP specifies that harmonised classification will be performed for substances that are carcinogenic, mutagenic or toxic to reproduction (CMR substances) and for respiratory sensitisers category 1, and for other hazard classes on a case-by-case basis. The first step of classification is the gathering of available and relevant information. This paper presents the procedure for gathering information and obtaining data. Data quality is also discussed.

  16. Wavelet transform and ANNs for detection and classification of power signal disturbances

    International Nuclear Information System (INIS)

    Memon, A.P.; Uqaili, M.A.; Memon, Z.A.

    2012-01-01

    This article proposes a WT (Wavelet Transform) and ANN (Artificial Neural Network) based approach for the detection and classification of EPQDs (Electrical Power Quality Disturbances). A modified WT known as the ST (Stockwell Transform) is suggested for feature extraction and a PNN (Probabilistic Neural Network) for pattern classification. The ST possesses outstanding time-frequency resolution characteristics, and its phase correction technique references the phase of the WT to the zero time point. The feature vectors for the input of the PNN are extracted using the ST technique, and these features are discrete, logical, and unaffected by the noisy data of distorted signals. The model data required to generate the distorted EPQ (Electrical Power Quality) signals are obtained within the ranges specified by IEEE 1159-1995. The feature vectors, extracted with the ST from noisy time-varying data under steady-state or transient conditions, are used to train the PNN for pattern classification. The simulation results demonstrate that the proposed methodology is successful and can classify EPQDs very efficiently, even in a noisy environment, with an average classification accuracy of 96%. (author)

  17. Estimation of different data compositions for early-season crop type classification.

    Science.gov (United States)

    Hao, Pengyu; Wu, Mingquan; Niu, Zheng; Wang, Li; Zhan, Yulin

    2018-01-01

    Timely and accurate crop type distribution maps are important inputs for crop yield estimation and production forecasting, as multi-temporal images can capture phenological differences among crops. Therefore, time series remote sensing data are essential for crop type mapping, and image composition has commonly been used to improve the quality of the image time series. However, the optimal composition period is unclear, as long composition periods (such as compositions lasting half a year) are less informative and short composition periods lead to information redundancy and missing pixels. In this study, we initially acquired daily 30 m Normalized Difference Vegetation Index (NDVI) time series by fusing MODIS, Landsat, Gaofen and Huanjing (HJ) NDVI, and then composited the NDVI time series using four strategies (daily, 8-day, 16-day, and 32-day). We used Random Forest to identify crop types and evaluated the classification performance of the NDVI time series generated from the four composition strategies in two study regions in Xinjiang, China. Results indicated that crop classification performance improved as crop separabilities and classification accuracies increased and classification uncertainties dropped in the green-up stage of the crops. When using the daily NDVI time series, overall accuracies saturated at 113 days and 116 days in Bole and Luntai, and the saturated overall accuracies (OAs) were 86.13% and 91.89%, respectively. Cotton could be identified 40∼60 days and 35∼45 days earlier than the harvest in Bole and Luntai when using the daily, 8-day and 16-day composition NDVI time series, since both producer's accuracies (PAs) and user's accuracies (UAs) were higher than 85%. Among the four compositions, the daily NDVI time series generated the highest classification accuracies. Although the 8-day, 16-day and 32-day compositions had similar saturated overall accuracies (around 85% in Bole and 83% in Luntai), the 8-day and 16-day compositions achieved these
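
    One of the composition strategies, an 8-day composite of a daily NDVI series feeding a Random Forest crop-type classifier, is sketched below with synthetic data; the maximum-value compositing rule, series length, and labels are assumptions for illustration.

      # Sketch: 8-day maximum-value composites of a daily NDVI time series,
      # then a random-forest crop-type classifier. All data are synthetic placeholders.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(8)
      n_pixels, n_days = 2000, 184                   # one growing season of daily NDVI
      ndvi_daily = rng.uniform(0.1, 0.9, size=(n_pixels, n_days))
      crop_type = rng.integers(0, 3, size=n_pixels)  # e.g. cotton / maize / other

      def composite(ndvi, period):
          n = ndvi.shape[1] // period * period       # drop any incomplete last window
          return ndvi[:, :n].reshape(ndvi.shape[0], -1, period).max(axis=2)

      ndvi_8day = composite(ndvi_daily, 8)           # (pixels, 23) composited series
      forest = RandomForestClassifier(n_estimators=300, random_state=0)
      forest.fit(ndvi_8day[:1500], crop_type[:1500])
      print("hold-out accuracy:", forest.score(ndvi_8day[1500:], crop_type[1500:]))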

  18. Modeling and evaluating repeatability and reproducibility of ordinal classifications

    NARCIS (Netherlands)

    de Mast, J.; van Wieringen, W.N.

    2010-01-01

    This paper argues that currently available methods for the assessment of the repeatability and reproducibility of ordinal classifications are not satisfactory. The paper aims to study whether we can modify a class of models from Item Response Theory, well established for the study of the reliability

  19. 10 CFR 603.1000 - Contracting officer's responsibilities at time of award.

    Science.gov (United States)

    2010-01-01

    Title 10 (Energy), Department of Energy (Continued), Assistance Regulations, Technology Investment Agreements, Executing the Award, § 603.1000: Contracting officer's responsibilities at time of award...

  20. Experimental Study of Real-Time Classification of 17 Voluntary Movements for Multi-Degree Myoelectric Prosthetic Hand

    Directory of Open Access Journals (Sweden)

    Trongmun Jiralerspong

    2017-11-01

    Full Text Available The myoelectric prosthetic hand is a powerful tool developed to help people with upper limb loss restore the functions of a biological hand. Recognizing multiple hand motions from only a few electromyography (EMG) sensors is one of the requirements for the development of prosthetic hands with a high level of usability. This task is highly challenging because both the classification rate and the misclassification rate worsen with additional hand motions. This paper presents a signal processing technique that uses spectral features and an artificial neural network to classify 17 voluntary movements from EMG signals. The main highlight is the use of a small set of low-cost EMG sensors for the classification of a reasonably large number of hand movements. The aim of this work is to extend the capabilities to recognize and produce multiple movements beyond what is currently feasible. This work also shows and discusses how tailoring the number of hand motions to a specific task can help develop a more reliable prosthetic hand system. Online classification experiments were conducted on seven male and five female participants to evaluate the validity of the proposed method. The proposed algorithm achieves an overall correct classification rate of up to 83%, demonstrating the potential to classify 17 movements from 6 EMG sensors. Furthermore, classifying 9 motions using this method could achieve an accuracy of up to 92%. These results show that if the prosthetic hand is intended for a specific task, limiting the number of motions can significantly increase performance and usability.
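
    A rough sketch of the pipeline, spectral features from multichannel EMG windows feeding a small neural network that labels one of 17 motions, is given below; the channel count, window length, feature bins, and the scikit-learn MLP stand-in are assumptions, not the authors' network.

      # Sketch: spectral features from 6-channel EMG windows feeding a small neural
      # network that labels one of 17 hand motions. Window length, bins and labels are assumed.
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(9)
      n_windows, n_channels, window_len = 3000, 6, 256
      emg_windows = rng.normal(size=(n_windows, n_channels, window_len))
      motions = rng.integers(0, 17, size=n_windows)

      # Feature vector = low-frequency magnitude-spectrum bins of every channel, concatenated.
      spectra = np.abs(np.fft.rfft(emg_windows, axis=2))[:, :, :32]
      features = spectra.reshape(n_windows, -1)      # (windows, 6 * 32)

      net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
      net.fit(features[:2500], motions[:2500])
      print("hold-out accuracy:", net.score(features[2500:], motions[2500:]))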

  1. Evaluating perceptual integration: uniting response-time- and accuracy-based methodologies.

    Science.gov (United States)

    Eidels, Ami; Townsend, James T; Hughes, Howard C; Perry, Lacey A

    2015-02-01

    This investigation brings together a response-time system identification methodology (e.g., Townsend & Wenger Psychonomic Bulletin & Review 11, 391-418, 2004a) and an accuracy methodology, intended to assess models of integration across stimulus dimensions (features, modalities, etc.) that were proposed by Shaw and colleagues (e.g., Mulligan & Shaw Perception & Psychophysics 28, 471-478, 1980). The goal was to theoretically examine these separate strategies and to apply them conjointly to the same set of participants. The empirical phases were carried out within an extension of an established experimental design called the double factorial paradigm (e.g., Townsend & Nozawa Journal of Mathematical Psychology 39, 321-359, 1995). That paradigm, based on response times, permits assessments of architecture (parallel vs. serial processing), stopping rule (exhaustive vs. minimum time), and workload capacity, all within the same blocks of trials. The paradigm introduced by Shaw and colleagues uses a statistic formally analogous to that of the double factorial paradigm, but based on accuracy rather than response times. We demonstrate that the accuracy measure cannot discriminate between parallel and serial processing. Nonetheless, the class of models supported by the accuracy data possesses a suitable interpretation within the same set of models supported by the response-time data. The supported model, consistent across individuals, is parallel and has limited capacity, with the participants employing the appropriate stopping rule for the experimental setting.

  2. ASIST SIG/CR Classification Workshop 2000: Classification for User Support and Learning.

    Science.gov (United States)

    Soergel, Dagobert

    2001-01-01

    Reports on papers presented at the 62nd Annual Meeting of ASIST (American Society for Information Science and Technology) for the Special Interest Group in Classification Research (SIG/CR). Topics include types of knowledge; developing user-oriented classifications, including domain analysis; classification in the user interface; and automatic…

  3. Couinaud's classification v.s. Cho's classification. Their feasibility in the right hepatic lobe

    International Nuclear Information System (INIS)

    Shioyama, Yasukazu; Ikeda, Hiroaki; Sato, Motohito; Yoshimi, Fuyo; Kishi, Kazushi; Sato, Morio; Kimura, Masashi

    2008-01-01

    The objective of this study was to investigate whether the new classification system proposed by Cho is feasible for clinical use compared with the classical Couinaud classification. One hundred consecutive abdominal CT cases were studied using a 64- or 8-slice multislice CT scanner, and three-dimensional portal vein images were created for analysis on the workstation. We applied both Cho's classification and the classical Couinaud classification to each case according to their definitions. Three diagnostic radiologists assessed their feasibility from category one (unable to classify) to five (clear to classify, in full accordance with the original classification criteria). In each case, we also judged whether Cho's or the classical Couinaud classification could more easily convey anatomical information. The analyzers could classify portal veins clearly (category 5) in 77 to 80% of cases, and clearly (category 5) or almost clearly (category 4) in 86-93% of cases, with both classifications. In the feasibility of classification, there was no statistically significant difference between the two classifications. In 15 cases we felt that Couinaud's classification was more convenient than Cho's for conveying anatomical information to physicians, because in these cases we noticed two large portal veins ramifying cranially and caudally from the right main portal vein, and we could not classify P5 as a branch of the antero-ventral segment (AVS). Conversely, in 17 cases we felt Cho's classification was more convenient because we could not divide the right posterior branch into P6 and P7; in these cases the right posterior portal vein ramified into several small branches. The anterior fissure vein was clearly noticed in only 60 cases. Comparing the classical Couinaud classification and Cho's classification in feasibility of classification, there was no statistically significant difference. We propose routinely reporting hepatic anatomy with the classical Couinaud classification, and in the preoperative cases we

  4. Classification of DNA nucleotides with transverse tunneling currents

    DEFF Research Database (Denmark)

    Pedersen, Jonas Nyvold; Boynton, Paul; Ventra, Massimiliano Di

    2017-01-01

    , however. In realistic liquid environments, typical currents in tunneling devices are of the order of picoamps. This corresponds to only six electrons per microsecond, and this number affects the integration time required to do current measurements in real experiments. This limits the speed of sequencing......, though current fluctuations due to Brownian motion of the molecule average out during the required integration time. Moreover, data acquisition equipment introduces noise, and electronic filters create correlations in time-series data. We discuss how these effects must be included in the analysis of, e.g., the assignment of specific nucleobases to current signals. As the signals from different molecules overlap, unambiguous classification is impossible with a single measurement. We argue that the assignment of molecules to a signal is a standard pattern classification problem and calculation of the error rates...

  5. Classification of agricultural fields using time series of dual polarimetry TerraSAR-X images

    Directory of Open Access Journals (Sweden)

    S. Mirzaee

    2014-10-01

    Full Text Available Due to its special imaging characteristics, Synthetic Aperture Radar (SAR) has become an important source of information for a variety of remote sensing applications dealing with environmental changes. SAR images contain information about both phase and intensity in different polarization modes, making them sensitive to the geometrical structure and physical properties of the targets, such as dielectric properties and plant water content. In this study we investigate multi-temporal changes occurring to different crop types due to phenological changes using high-resolution TerraSAR-X imagery. The dataset includes 17 dual-polarimetry TSX acquisitions from June 2012 to August 2013 in Lorestan province, Iran. Several features are extracted from the polarized data and classified using a support vector machine (SVM) classifier. Training samples and the different features employed in classification are also assessed in the study. Results show a satisfactory classification accuracy, with a kappa coefficient of about 0.91.
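
    The classification step described above (an SVM applied to features from multi-temporal dual-polarisation SAR data) could be prototyped as in the minimal scikit-learn sketch below; the feature matrix, class count and SVM settings are placeholders, not the study's configuration.

        # Sketch: SVM classification of per-field SAR time-series features with kappa evaluation.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import cohen_kappa_score

        rng = np.random.default_rng(0)
        X = rng.standard_normal((800, 34))        # e.g. 17 dates x 2 polarisations per field (placeholder)
        y = rng.integers(0, 5, size=800)          # crop classes (placeholder)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
        print("kappa:", cohen_kappa_score(y_te, svm.predict(X_te)))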

  6. Ultimate response time of high electron mobility transistors

    International Nuclear Information System (INIS)

    Rudin, Sergey; Rupper, Greg; Shur, Michael

    2015-01-01

    We present theoretical studies of the response time of the two-dimensional gated electron gas to femtosecond pulses. Our hydrodynamic simulations show that the device response to a short pulse or a step-function signal is either smooth or oscillating time-decay at low and high mobility, μ, values, respectively. At small gate voltage swings, U_0 = U_g − U_th, where U_g is the gate voltage and U_th is the threshold voltage, such that μU_0/L < v_s, where L is the channel length and v_s is the effective electron saturation velocity, the decay time in the low mobility samples is on the order of L^2/(μU_0), in agreement with the analytical drift model. However, the decay is preceded by a delay time on the order of L/s, where s is the plasma wave velocity. This delay is the ballistic transport signature in collision-dominated devices, which becomes important during very short time periods. In the high mobility devices, the period of the decaying oscillations is on the order of the plasma wave velocity transit time. Our analysis shows that short channel field effect transistors operating in the plasmonic regime can meet the requirements for applications as terahertz detectors, mixers, delay lines, and phase shifters in ultra high-speed wireless communication circuits
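
    The two characteristic times quoted above, a drift-like decay time of order L^2/(μU_0) and a delay of order L/s, are easy to evaluate numerically; the parameter values in the sketch below are illustrative assumptions chosen only to show orders of magnitude, not figures from the paper.

        # Back-of-envelope evaluation of the decay and delay time scales discussed above.
        L = 100e-9        # channel length (m), assumed
        mu = 0.1          # low-mobility example, 0.1 m^2/(V*s), assumed
        U0 = 0.2          # gate voltage swing U_0 (V), assumed
        s = 1e6           # plasma wave velocity (m/s), assumed

        decay_time = L**2 / (mu * U0)   # drift-model decay time ~ L^2/(mu*U0)
        delay_time = L / s              # ballistic/plasma delay ~ L/s
        print(f"decay ~ {decay_time*1e12:.2f} ps, delay ~ {delay_time*1e12:.2f} ps")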

  7. Land use/cover classification in the Brazilian Amazon using satellite images.

    Science.gov (United States)

    Lu, Dengsheng; Batistella, Mateus; Li, Guiying; Moran, Emilio; Hetrick, Scott; Freitas, Corina da Costa; Dutra, Luciano Vieira; Sant'anna, Sidnei João Siqueira

    2012-09-01

    Land use/cover classification is one of the most important applications in remote sensing. However, mapping accurate land use/cover spatial distribution is a challenge, particularly in moist tropical regions, due to the complex biophysical environment and limitations of remote sensing data per se. This paper reviews experiments related to land use/cover classification in the Brazilian Amazon for a decade. Through comprehensive analysis of the classification results, it is concluded that spatial information inherent in remote sensing data plays an essential role in improving land use/cover classification. Incorporation of suitable textural images into multispectral bands and the use of segmentation-based methods are valuable ways to improve land use/cover classification, especially for high spatial resolution images. Data fusion of multi-resolution images within optical sensor data is vital for visual interpretation, but may not improve classification performance. In contrast, integration of optical and radar data did improve classification performance when the proper data fusion method was used. Of the classification algorithms available, the maximum likelihood classifier is still an important method for providing reasonably good accuracy, but nonparametric algorithms, such as classification tree analysis, have the potential to provide better results. However, they often require more time to achieve parametric optimization. Proper use of hierarchical-based methods is fundamental for developing accurate land use/cover classification, mainly from historical remotely sensed data.

  8. It's time to set some standards: Environmental classification of freshwater wetlands in New Zealand and their protection from eutrophication

    DEFF Research Database (Denmark)

    Sorrell, Brian Keith; Clarkson, Beverly

    Most natural resource plans provide protection for lakes and rivers from catchment activities leading to eutrophication. However, they are often silent about wetlands, due to the lack of information available for setting standards, defining reference conditions, and predicting responses to nutrient...... states in New Zealand wetlands, present an environmental classification based on physico-chemical and nutrient data, compare wetlands in New Zealand with those in other temperate regions, and argue for some catchment land use standards to protect wetlands from nutrient enrichment. Our database reveals...... that New Zealand wetlands, like those in other temperate climates, are defined by specific alkalinity and nutrient gradients and that there is a wide range of fertility levels. Using regression tree analysis, we have identified environmental groups of wetlands with significantly distinct nutrient regimes...

  9. VOCAL SEGMENT CLASSIFICATION IN POPULAR MUSIC

    DEFF Research Database (Denmark)

    Feng, Ling; Nielsen, Andreas Brinch; Hansen, Lars Kai

    2008-01-01

    This paper explores the vocal and non-vocal music classification problem within popular songs. A newly built labeled database covering 147 popular songs is announced. It is designed for classifying signals from 1sec time windows. Features are selected for this particular task, in order to capture...

  10. Supervised learning for the automated transcription of spacer classification from spoligotype films

    Directory of Open Access Journals (Sweden)

    Abernethy Neil

    2009-08-01

    Full Text Available Abstract Background Molecular genotyping of bacteria has revolutionized the study of tuberculosis epidemiology, yet these established laboratory techniques typically require subjective and laborious interpretation by trained professionals. In the context of a Tuberculosis Case Contact study in The Gambia we used a reverse hybridization laboratory assay called spoligotype analysis. To facilitate processing of spoligotype images we have developed tools and algorithms to automate the classification and transcription of these data directly to a database while allowing for manual editing. Results Features extracted from each of the 1849 spots on a spoligo film were classified using two supervised learning algorithms. A graphical user interface allows manual editing of the classification, before export to a database. The application was tested on ten films of differing quality and the results of the best classifier were compared to expert manual classification, giving a median correct classification rate of 98.1% (inter quartile range: 97.1% to 99.2%, with an automated processing time of less than 1 minute per film. Conclusion The software implementation offers considerable time savings over manual processing whilst allowing expert editing of the automated classification. The automatic upload of the classification to a database reduces the chances of transcription errors.

  11. Understanding recovery: changes in the relationships of the International Classification of Functioning (ICF) components over time.

    Science.gov (United States)

    Davis, A M; Perruccio, A V; Ibrahim, S; Hogg-Johnson, S; Wong, R; Badley, E M

    2012-12-01

    The International Classification of Functioning, Disability and Health framework describes human functioning through body structure and function, activity and participation in the context of a person's social and physical environment. This work tested the temporal relationships of these components. Our hypotheses were: 1) there would be associations among physical impairment, activity limitations and participation restrictions within time; 2) prior status of a component would be associated with future status; 3) prior status of one component would influence status of a second component (e.g. prior activity limitations would be associated with current participation restrictions); and, 4) the magnitude of the within time relationships of the components would vary over time. Participants from Canada with primary hip or knee joint replacement (n = 931), an intervention with predictable improvement in pain and disability, completed standardized outcome measures pre-surgery and five times in the first year post-surgery. These included physical impairment (pain), activity limitations and participation restrictions. ICF component relationships were evaluated cross-sectionally and longitudinally using path analysis adjusting for age, sex, BMI, hip vs. knee, low back pain and mood. All component scores improved significantly over time. The path coefficients supported the hypotheses in that both within and across time, physical impairment was associated with activity limitation and activity limitation was associated with participation restriction; prior status and change in a component were associated with current status in another component; and, the magnitude of the path coefficients varied over time with stronger associations among components to three months post surgery than later in recovery with the exception of the association between impairment and participation restrictions which was of similar magnitude at all times. This work enhances understanding of the

  12. More is not Always Better: The Relation between Item Response and Item Response Time in Raven’s Matrices

    Directory of Open Access Journals (Sweden)

    Frank Goldhammer

    2015-03-01

    Full Text Available The role of response time in completing an item can have very different interpretations. Responding more slowly could be positively related to success as the item is answered more carefully. However, the association may be negative if working faster indicates higher ability. The objective of this study was to clarify the validity of each assumption for reasoning items considering the mode of processing. A total of 230 persons completed a computerized version of Raven’s Advanced Progressive Matrices test. Results revealed that response time overall had a negative effect. However, this effect was moderated by items and persons. For easy items and able persons the effect was strongly negative, for difficult items and less able persons it was less negative or even positive. The number of rules involved in a matrix problem proved to explain item difficulty significantly. Most importantly, a positive interaction effect between the number of rules and item response time indicated that the response time effect became less negative with an increasing number of rules. Moreover, exploratory analyses suggested that the error type influenced the response time effect.

  13. Diagnostic Criteria, Classification and Treatment Goals in Multiple Sclerosis: The Chronicles of Time and Space.

    Science.gov (United States)

    Ntranos, Achilles; Lublin, Fred

    2016-10-01

    Multiple sclerosis (MS) is one of the most diverse human diseases. Since its first description by Charcot in the nineteenth century, the diagnostic criteria, clinical course classification, and treatment goals for MS have been constantly revised and updated to improve diagnostic accuracy, physician communication, and clinical trial design. These changes have improved the clinical outcomes and quality of life for patients with the disease. Recent technological and research breakthroughs will almost certainly further change how we diagnose, classify, and treat MS in the future. In this review, we summarize the key events in the history of MS, explain the reasoning behind the current criteria for MS diagnosis, classification, and treatment, and provide suggestions for further improvements that will keep enhancing the clinical practice of MS.

  14. Observation versus classification in supervised category learning.

    Science.gov (United States)

    Levering, Kimery R; Kurtz, Kenneth J

    2015-02-01

    The traditional supervised classification paradigm encourages learners to acquire only the knowledge needed to predict category membership (a discriminative approach). An alternative that aligns with important aspects of real-world concept formation is learning with a broader focus to acquire knowledge of the internal structure of each category (a generative approach). Our work addresses the impact of a particular component of the traditional classification task: the guess-and-correct cycle. We compare classification learning to a supervised observational learning task in which learners are shown labeled examples but make no classification response. The goals of this work sit at two levels: (1) testing for differences in the nature of the category representations that arise from two basic learning modes; and (2) evaluating the generative/discriminative continuum as a theoretical tool for understanding learning modes and their outcomes. Specifically, we view the guess-and-correct cycle as consistent with a more discriminative approach and therefore expected it to lead to narrower category knowledge. Across two experiments, the observational mode led to greater sensitivity to distributional properties of features and correlations between features. We conclude that a relatively subtle procedural difference in supervised category learning substantially impacts what learners come to know about the categories. The results demonstrate the value of the generative/discriminative continuum as a tool for advancing the psychology of category learning and also provide a valuable constraint for formal models and associated theories.

  15. The Performance of EEG-P300 Classification using Backpropagation Neural Networks

    Directory of Open Access Journals (Sweden)

    Arjon Turnip

    2013-12-01

    Full Text Available Electroencephalogram (EEG) recordings provide an important channel for brain-computer communication, but their classification accuracy is severely limited by unforeseeable signal variations related to artifacts. In this paper, we propose a classification method for time-series EEG-P300 signals using backpropagation neural networks to predict the qualitative properties of a subject’s mental tasks by extracting useful information from the highly multivariate non-invasive recordings of brain activity. To test the improvement in EEG-P300 classification performance (i.e., classification accuracy and transfer rate) with the proposed method, comparative experiments were conducted using Bayesian Linear Discriminant Analysis (BLDA). Finally, the results of the experiment showed that the average classification accuracy was 97% and the maximum improvement of the average transfer rate was 42.4%, indicating the considerable potential of using EEG-P300 for the continuous classification of mental tasks.
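
    The sketch below illustrates the general approach of the record above, a backpropagation network classifying time-series EEG-P300 epochs, using scikit-learn's MLPClassifier; the epoch array, downsampling factor and network size are assumptions rather than the authors' settings.

        # Sketch: backpropagation network for target vs non-target P300 epoch classification.
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        epochs = rng.standard_normal((500, 8, 128))     # (epochs, channels, samples), placeholder
        labels = rng.integers(0, 2, size=500)           # 1 = target (P300 present), 0 = non-target

        X = epochs[:, :, ::4].reshape(len(epochs), -1)  # crude downsampling + flattening as features
        net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=400, random_state=1)
        print("CV accuracy:", cross_val_score(net, X, labels, cv=5).mean())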

  16. Time-Lapse and Slow-Motion Tracking of Temperature Changes: Response Time of a Thermometer

    Science.gov (United States)

    Moggio, L.; Onorato, P.; Gratton, L. M.; Oss, S.

    2017-01-01

    We propose the use of a smartphone based time-lapse and slow-motion video techniques together with tracking analysis as valuable tools for investigating thermal processes such as the response time of a thermometer. The two simple experimental activities presented here, suitable also for high school and undergraduate students, allow one to measure…

  17. Fast-Response-Time Shape-Memory-Effect Foam Actuators

    Science.gov (United States)

    Jardine, Peter

    2010-01-01

    Bulk shape memory alloys, such as Nitinol or CuAlZn, display strong recovery forces undergoing a phase transformation after being strained in their martensitic state. These recovery forces are used for actuation. As the phase transformation is thermally driven, the response time of the actuation can be slow, as the heat must be passively inserted or removed from the alloy. Shape memory alloy TiNi torque tubes have been investigated for at least 20 years and have demonstrated high actuation forces [3,000 in.-lb (approximately equal to 340 N-m) torques] and are very lightweight. However, they are not easy to attach to existing structures. Adhesives will fail in shear at low-torque loads and the TiNi is not weldable, so mechanical crimp fits have generally been used. These are not reliable, especially in vibratory environments. The TiNi is also slow to heat up, as it can only be heated indirectly using a heater, and cooling must be done passively. This has restricted their use to on-off actuators where cycle times of approximately one minute are acceptable. Self-propagating high-temperature synthesis (SHS) has been used in the past to make porous TiNi metal foams. Shape Change Technologies has been able to train SHS-derived TiNi to exhibit the shape memory effect. As it is an open-celled material, fast response times were observed when the material was heated using hot and cold fluids. A methodology was developed to make the open-celled porous TiNi foams as a tube with integrated hexagonal ends, which then becomes a torsional actuator with fast response times. Under processing developed independently, researchers were able to verify torques of 84 in.-lb (approximately equal to 9.5 N-m) using an actuator weighing 1.3 oz (approximately equal to 37 g) with very fast (less than 1/16th of a second) initial response times when hot and cold fluids were used to facilitate heat transfer. Integrated structural connections were added as part of the net shape process, eliminating

  18. Responses to interracial interactions over time.

    Science.gov (United States)

    Plant, E Ashby

    2004-11-01

    The current work tested and expanded on Plant and Devine's (2003) model of the antecedents and implications of interracial anxiety by examining people's experiences with interracial interactions at two time points. Study 1 explored non-Black people's responses to interactions with Black people and Study 2 explored Black people's responses to interactions with White people. Non-Black participants' expectancies about coming across as biased in interracial interactions and Black participants' expectancies about White people's bias predicted their interracial anxiety and whether they had positive interactions with outgroup members during the 2 weeks between assessments. Across both studies, interracial anxiety predicted the desire to avoid interactions with outgroup members. In addition, participants who were personally motivated to respond without prejudice reported more positive expectancies. The findings are discussed in terms of the implications for understanding the course and quality of interracial interactions.

  19. Geological-genetic classification for uranium deposits

    International Nuclear Information System (INIS)

    Terentiev, V.M.; Naumov, S.S.

    1997-01-01

    The paper describes a system for classifying uranium deposits based on geological and genetic characteristics. The system is based on the interrelation and interdependence of uranium ore formation processes and other geological phenomena including sedimentation, magmatism and tectonics, as well as the evolution of geotectonic structures. Using these aspects, deposits are classified in three categories: endogenic - predominantly hydrothermal and hydrothermal-metasomatic; exogenic - sedimentary diagenetic, biogenic sorption, and infiltrational; and polygenetic or composite types. The latter, composite types include sedimentary/metamorphic, metamorphic, and sedimentary/hydrothermal, where different ore-generating processes have prevailed over a rock unit at different times. The three-page classification is given in both English and Russian. (author). 3 tabs

  20. A Labor and Delivery Patient Classification System Based on Direct Nursing Care Time

    Science.gov (United States)

    1991-08-01

    equipment at bedside, position temperature probe or thermometer, assess respiratory rate, take pulse, place cuff around extremity, position stethoscope, measure blood pressure, remove cuff, record...International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM). A stratified random sample of the various types of delivery

  1. Simulations of hybrid system varying solar radiation and microturbine response time

    Directory of Open Access Journals (Sweden)

    Yolanda Fernández Ribaya

    2015-07-01

    Full Text Available Hybrid power systems, such as combinations of renewable power sources with intermittent power production and non-renewable power sources, theoretically increase the reliability and thus integration of renewable sources in the electrical system. However, a recent increase in the number of hybrid installations has sparked interest in the effects of their connection to the grid, especially in remote areas. This paper analyses a photovoltaic-gas microturbine hybrid system dimensioned to be installed in La Paz (Mexico). The research presented in this paper studies and quantifies the effects on the total electric power produced of varying both the solar radiation and the gas microturbine response time. The gas microturbine and the photovoltaic panels are modelled using Matlab/Simulink software, obtaining a platform on which different tests simulating real conditions have been executed. They consist of diverse ramps of irradiance that replicate solar radiation variations, and different microturbine response times reproduced by the time constants of a first-order transfer function that models the microturbine dynamic response. The results obtained show that when radiation varies quickly it does not produce significant differences in the power guarantee or the microturbine gas consumption, for any microturbine response time. However, these two parameters are highly variable with smooth irradiance variations. The maximum total power variation decreases greatly as the radiation variation gets lower. In addition, by decreasing the microturbine response time, it is possible to appreciably increase the power guarantee, although the maximum power variation and gas consumption increase. Only in cases of low radiation variation is there no appreciable difference in the maximum power variation obtained by the different turbine response times.
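
    The microturbine dynamics above are modelled as a first-order transfer function; the sketch below (an assumption-laden illustration in Python, not the authors' Matlab/Simulink model) integrates that first-order lag for an irradiance ramp, a constant demand and a chosen response time constant.

        # Sketch: first-order lag response of a microturbine setpoint to an irradiance ramp.
        import numpy as np

        dt = 0.1                      # time step (s), assumed
        t = np.arange(0.0, 600.0, dt)
        irradiance = np.clip(1000.0 - 1.5 * t, 200.0, 1000.0)   # falling irradiance ramp (W/m^2), assumed
        pv_power = 0.15 * irradiance                             # crude PV model (kW), assumed
        demand = 200.0                                           # constant demand (kW), assumed

        tau = 30.0                    # microturbine response time constant (s), assumed
        mt_power = np.zeros_like(t)
        for k in range(1, len(t)):
            setpoint = max(demand - pv_power[k], 0.0)            # microturbine covers the PV shortfall
            # explicit Euler step of the first-order lag dP/dt = (setpoint - P)/tau
            mt_power[k] = mt_power[k - 1] + dt * (setpoint - mt_power[k - 1]) / tau

        total_power = pv_power + mt_power
        print("worst power deficit (kW):", float(np.max(demand - total_power)))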

  2. Simulations of hybrid system varying solar radiation and microturbine response time

    Energy Technology Data Exchange (ETDEWEB)

    Fernández Ribaya, Yolanda, E-mail: fernandezryolanda@uniovi.es; Álvarez, Eduardo; Paredes Sánchez, José Pablo; Xiberta Bernat, Jorge [Department of Energy E.I.M.E.M., University of Oviedo. 13 Independencia Street, 2nd floor, 36004, Oviedo (Spain)]

    2015-07-15

    Hybrid power systems, such as combinations of renewable power sources with intermittent power production and non-renewable power sources, theoretically increase the reliability and thus integration of renewable sources in the electrical system. However, a recent increase in the number of hybrid installations has sparked interest in the effects of their connection to the grid, especially in remote areas. This paper analyses a photovoltaic-gas microturbine hybrid system dimensioned to be installed in La Paz (Mexico). The research presented in this paper studies and quantifies the effects on the total electric power produced of varying both the solar radiation and the gas microturbine response time. The gas microturbine and the photovoltaic panels are modelled using Matlab/Simulink software, obtaining a platform on which different tests simulating real conditions have been executed. They consist of diverse ramps of irradiance that replicate solar radiation variations, and different microturbine response times reproduced by the time constants of a first-order transfer function that models the microturbine dynamic response. The results obtained show that when radiation varies quickly it does not produce significant differences in the power guarantee or the microturbine gas consumption, for any microturbine response time. However, these two parameters are highly variable with smooth irradiance variations. The maximum total power variation decreases greatly as the radiation variation gets lower. In addition, by decreasing the microturbine response time, it is possible to appreciably increase the power guarantee, although the maximum power variation and gas consumption increase. Only in cases of low radiation variation is there no appreciable difference in the maximum power variation obtained by the different turbine response times.

  3. Classification with support hyperplanes

    NARCIS (Netherlands)

    G.I. Nalbantov (Georgi); J.C. Bioch (Cor); P.J.F. Groenen (Patrick)

    2006-01-01

    A new classification method is proposed, called Support Hyperplanes (SHs). To solve the binary classification task, SHs consider the set of all hyperplanes that do not make classification mistakes, referred to as semi-consistent hyperplanes. A test object is classified using

  4. Classification of Flotation Frothers

    Directory of Open Access Journals (Sweden)

    Jan Drzymala

    2018-02-01

    Full Text Available In this paper, a scheme for the classification of flotation frothers is presented. The scheme first indicates the physical system in which a frother is present; four such systems are distinguished, i.e., pure state, aqueous solution, aqueous solution/gas system and aqueous solution/gas/solid system. As a result, there are numerous classifications of flotation frothers, which can be organized into the scheme described in detail in this paper. The paper shows that a meaningful classification of frothers relies on first choosing the physical system and then the feature, trend, parameter or parameters according to which the classification is performed. The proposed classification can play a useful role in characterizing and evaluating flotation frothers.

  5. In situ response time measurements of RTD temperature sensors

    International Nuclear Information System (INIS)

    Goncalves, I.M.P.

    1985-01-01

    The loop-current-step-response test provides a means for determining the time constant of resistance thermometers. The test consists of heating the sensor a few degrees above ambient temperature by causing a step perturbation in the electric current that flows through the sensor leads. The mathematical transformation developed here permits the use of data collected during the internal heating transient to predict the sensor response to perturbations in fluid temperature. Experimental data obtained show that the time constant determined by this method is within 15 percent of the true value. The loop-current-step-response test is a remote in situ test, which can be performed with the sensor installed in the process. Consequently it takes into account the local heat transfer conditions and is appropriate for nuclear power plants, where sensors are installed in points of difficult access. (author) [pt
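
    A common way to extract a single time constant from a recorded step-response transient such as the one described above is a log-linear fit to the normalized response; the sketch below is a generic illustration on synthetic data, not the loop-current-step-response transformation itself.

        # Sketch: estimating a sensor time constant from a recorded step-response transient.
        import numpy as np

        tau_true = 4.0                                   # seconds (synthetic ground truth)
        t = np.linspace(0.0, 20.0, 200)
        response = 1.0 - np.exp(-t / tau_true)           # normalized first-order step response
        response += np.random.default_rng(0).normal(0, 0.005, t.size)   # measurement noise

        mask = (response > 0.05) & (response < 0.95)     # avoid log of values near 0 or 1
        slope, _ = np.polyfit(t[mask], np.log(1.0 - response[mask]), 1)
        print("estimated time constant (s):", -1.0 / slope)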

  6. DOE LLW classification rationale

    International Nuclear Information System (INIS)

    Flores, A.Y.

    1991-01-01

    This report describes the rationale behind the US Department of Energy's (DOE) low-level radioactive waste (LLW) classification. It is based on the Nuclear Regulatory Commission's classification system. DOE site operators met to review the qualifications and characteristics of the classification systems. They evaluated performance objectives, developed waste classification tables, and compiled dose limits on the waste. A goal of the LLW classification system was to allow each disposal site the freedom to develop limits to radionuclide inventories and concentrations according to its own site-specific characteristics. This goal was achieved with the adoption of a performance-objectives system based on a performance assessment, with site-specific environmental conditions and engineered disposal systems

  7. Asteroid taxonomic classifications

    International Nuclear Information System (INIS)

    Tholen, D.J.

    1989-01-01

    This paper reports on three taxonomic classification schemes developed and applied to the body of available color and albedo data. Asteroid taxonomic classifications according to two of these schemes are reproduced

  8. Hardware Accelerators Targeting a Novel Group Based Packet Classification Algorithm

    Directory of Open Access Journals (Sweden)

    O. Ahmed

    2013-01-01

    Full Text Available Packet classification is a ubiquitous and key building block for many critical network devices. However, it remains one of the main bottlenecks faced when designing fast network devices. In this paper, we propose a novel Group Based Search packet classification Algorithm (GBSA) that is scalable, fast, and efficient. GBSA consumes an average of 0.4 megabytes of memory for a 10 k rule set. The worst-case classification time per packet is 2 microseconds, and the preprocessing speed is 3 M rules/second based on a Xeon processor operating at 3.4 GHz. When compared with other state-of-the-art classification techniques, the results showed that GBSA outperforms the competition with respect to speed, memory usage, and processing time. Moreover, GBSA is amenable to implementation in hardware. Three different hardware implementations are also presented in this paper, including an Application Specific Instruction Set Processor (ASIP) implementation and two pure Register-Transfer Level (RTL) implementations based on Impulse-C and Handel-C flows, respectively. Speedups achieved with these hardware accelerators ranged from 9x to 18x compared with a pure software implementation running on a Xeon processor.
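
    The sketch below illustrates the general group-based idea, bucketing rules so that a packet is matched against one small group rather than the whole rule set; it is a deliberately simplified illustration and not the GBSA algorithm of the record above.

        # Illustrative sketch of group-based lookup: rules are bucketed by protocol so that a
        # packet only has to be matched against one small group instead of the full rule set.
        from collections import defaultdict

        rules = [
            # (rule_id, protocol, dst_port_range, action)
            (1, "tcp", (80, 80), "allow"),
            (2, "tcp", (1024, 65535), "deny"),
            (3, "udp", (53, 53), "allow"),
            (4, "udp", (0, 65535), "deny"),
        ]

        groups = defaultdict(list)
        for rule in rules:                      # preprocessing: O(n) grouping pass
            groups[rule[1]].append(rule)

        def classify(protocol, dst_port):
            for rule_id, _, (lo, hi), action in groups.get(protocol, []):
                if lo <= dst_port <= hi:        # first matching rule wins
                    return rule_id, action
            return None, "default-deny"

        print(classify("tcp", 80))    # -> (1, 'allow')
        print(classify("udp", 4500))  # -> (4, 'deny')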

  9. Corrective response times in a coordinated eye-head-arm countermanding task.

    Science.gov (United States)

    Tao, Gordon; Khan, Aarlenne Z; Blohm, Gunnar

    2018-06-01

    Inhibition of motor responses has been described as a race between two competing decision processes of motor initiation and inhibition, which manifest as the reaction time (RT) and the stop signal reaction time (SSRT); in the case where motor initiation wins out over inhibition, an erroneous movement occurs that usually needs to be corrected, leading to corrective response times (CRTs). Here we used a combined eye-head-arm movement countermanding task to investigate the mechanisms governing multiple effector coordination and the timing of corrective responses. We found a high degree of correlation between effector response times for RT, SSRT, and CRT, suggesting that decision processes are strongly dependent across effectors. To gain further insight into the mechanisms underlying CRTs, we tested multiple models to describe the distribution of RTs, SSRTs, and CRTs. The best-ranked model (according to 3 information criteria) extends the LATER race model governing RTs and SSRTs, whereby a second motor initiation process triggers the corrective response (CRT) only after the inhibition process completes in an expedited fashion. Our model suggests that the neural processing underpinning a failed decision has a residual effect on subsequent actions. NEW & NOTEWORTHY Failure to inhibit erroneous movements typically results in corrective movements. For coordinated eye-head-hand movements we show that corrective movements are only initiated after the erroneous movement cancellation signal has reached a decision threshold in an accelerated fashion.

  10. Non-target adjacent stimuli classification improves performance of classical ERP-based brain computer interface

    Science.gov (United States)

    Ceballos, G. A.; Hernández, L. F.

    2015-04-01

    Objective. The classical ERP-based speller, or P300 Speller, is one of the most commonly used paradigms in the field of Brain Computer Interfaces (BCI). Several alterations to the visual stimuli presentation system have been developed to avoid unfavorable effects elicited by adjacent stimuli. However, little if any attention has been paid to the useful information about the spatial location of target symbols that is contained in responses to adjacent stimuli. This paper aims to demonstrate that combining the classification of non-target adjacent stimuli with standard classification (target versus non-target) significantly improves classical ERP-based speller efficiency. Approach. Four SWLDA classifiers were trained and combined with the standard classifier: the lower row, upper row, right column and left column classifiers. This new feature extraction procedure and the classification method were carried out on three open databases: the UAM P300 database (Universidad Autonoma Metropolitana, Mexico), BCI competition II (dataset IIb) and BCI competition III (dataset II). Main results. The inclusion of the classification of non-target adjacent stimuli improves target classification in the classical row/column paradigm. A gain in mean single trial classification of 9.6% and an overall improvement of 25% in simulated spelling speed were achieved. Significance. We have provided further evidence that the ERPs produced by adjacent stimuli present discriminable features, which could provide additional information about the spatial location of intended symbols. This work encourages the search for information in peripheral stimulation responses to improve the performance of emerging visual ERP-based spellers.

  11. Attribute Learning for SAR Image Classification

    Directory of Open Access Journals (Sweden)

    Chu He

    2017-04-01

    Full Text Available This paper presents a classification approach based on attribute learning for high spatial resolution Synthetic Aperture Radar (SAR images. To explore the representative and discriminative attributes of SAR images, first, an iterative unsupervised algorithm is designed to cluster in the low-level feature space, where the maximum edge response and the ratio of mean-to-variance are included; a cross-validation step is applied to prevent overfitting. Second, the most discriminative clustering centers are sorted out to construct an attribute dictionary. By resorting to the attribute dictionary, a representation vector describing certain categories in the SAR image can be generated, which in turn is used to perform the classifying task. The experiments conducted on TerraSAR-X images indicate that those learned attributes have strong visual semantics, which are characterized by bright and dark spots, stripes, or their combinations. The classification method based on these learned attributes achieves better results.

  12. Full-motion video analysis for improved gender classification

    Science.gov (United States)

    Flora, Jeffrey B.; Lochtefeld, Darrell F.; Iftekharuddin, Khan M.

    2014-06-01

    The ability of computer systems to perform gender classification using the dynamic motion of the human subject has important applications in medicine, human factors, and human-computer interface systems. Previous works in motion analysis have used data from sensors (including gyroscopes, accelerometers, and force plates), radar signatures, and video. However, full-motion video, motion capture, and range data provide a higher-resolution temporal and spatial dataset for the analysis of dynamic motion. Works using motion capture data have been limited by small datasets in a controlled environment. In this paper, we apply machine learning techniques to a new dataset that has a larger number of subjects. Additionally, these subjects move unrestricted through a capture volume, representing a more realistic, less controlled environment. We conclude that existing linear classification methods are insufficient for gender classification on the larger dataset captured in a relatively uncontrolled environment. A method based on a nonlinear support vector machine classifier is proposed to obtain gender classification for the larger dataset. In experimental testing with a dataset consisting of 98 trials (49 subjects, 2 trials per subject), classification rates using leave-one-out cross-validation are improved from 73% using linear discriminant analysis to 88% using the nonlinear support vector machine classifier.
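
    The evaluation protocol described above, a nonlinear (RBF-kernel) support vector machine scored with leave-one-out cross-validation, can be reproduced schematically with scikit-learn as below; the 98-trial feature matrix here is a random placeholder, not the study's motion data.

        # Sketch: nonlinear (RBF) SVM with leave-one-out cross-validation.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        rng = np.random.default_rng(0)
        X = rng.standard_normal((98, 40))    # 98 trials x 40 motion features (placeholder)
        y = rng.integers(0, 2, size=98)      # gender label per trial (placeholder)

        svm = SVC(kernel="rbf", C=1.0, gamma="scale")
        scores = cross_val_score(svm, X, y, cv=LeaveOneOut())
        print("leave-one-out accuracy:", scores.mean())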

  13. Solubility classification of airborne products from uranium ores and tailings piles

    International Nuclear Information System (INIS)

    Kalkwarf, D.R.

    1979-01-01

    Airborne products generated at uranium mills were assigned solubility classifications for use in the ICRP Task Group Lung Model. No significant difference was seen between the dissolution behavior of airborne samples and sieved ground samples of the same product. If the product contained radionuclides that dissolved at different rates, composite classifications were assigned to show the solubility class of each component. If the dissolution data indicated that a radionuclide was present in two chemical forms that dissolved at different rates, a mixed classification was assigned to show the percentage of radionuclide in each solubility class. Uranium-ore dust was assigned the composite classification: (²³⁵U, ²³⁸U) W; (²²⁶Ra) 10% D, 90% Y; (²³⁰Th, ²¹⁰Pb, ²¹⁰Po) Y. Tailings-pile dust was classified: (²²⁶Ra) 10% D, 90% Y; (²³⁰Th, ²¹⁰Pb, ²¹⁰Po) Y. Uranium octoxide was classified Y, uranium tetrafluoride was also classified Y, ammonium diuranate was classified D, and yellow-cake dust was classified (²³⁵U, ²³⁸U) 60% D, 40% W. The term yellow cake, however, covers a variety of materials which differ significantly in dissolution rate. Solubility classifications based on the dissolution half-times of particular yellow-cake products should, thus, be used when available. The D, W, and Y classifications refer to biological half-times for clearance from the human respiratory tract of 0 to 10 days, 11 to 100 days, and > 100 days, respectively
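
    The D, W and Y scheme quoted at the end of the record maps biological clearance half-times to solubility classes; a trivial helper capturing that mapping might look as follows (the function name is ours, and the boundary handling is an assumption).

        # Map a clearance half-time (days) to the ICRP lung-model solubility class quoted above:
        # 0-10 days -> D, 11-100 days -> W, > 100 days -> Y.
        def solubility_class(half_time_days: float) -> str:
            if half_time_days <= 10:
                return "D"
            if half_time_days <= 100:
                return "W"
            return "Y"

        print(solubility_class(5), solubility_class(30), solubility_class(365))  # D W Y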

  14. Visual Alphabets: Video classification by end users

    NARCIS (Netherlands)

    Israël, Menno; van den Broek, Egon; van der Putten, Peter; den Uyl, Marten J.; Petrushin, Valery A.; Khan, Latifur

    2007-01-01

    The work presented here introduces a real-time automatic scene classifier within content-based video retrieval. In our envisioned approach end users like documentalists, not image processing experts, build classifiers interactively, by simply indicating positive examples of a scene. Classification

  15. Gaze Embeddings for Zero-Shot Image Classification

    NARCIS (Netherlands)

    Karessli, N.; Akata, Z.; Schiele, B.; Bulling, A.

    2017-01-01

    Zero-shot image classification using auxiliary information, such as attributes describing discriminative object properties, requires time-consuming annotation by domain experts. We instead propose a method that relies on human gaze as auxiliary information, exploiting that even non-expert users have

  16. A comparison of autonomous techniques for multispectral image analysis and classification

    Science.gov (United States)

    Valdiviezo-N., Juan C.; Urcid, Gonzalo; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso

    2012-10-01

    Multispectral imaging has given place to important applications related to classification and identification of objects from a scene. Because multispectral instruments can be used to estimate the reflectance of materials in the scene, these techniques constitute fundamental tools for materials analysis and quality control. In recent years, a variety of algorithms have been developed to work with multispectral data, whose main purpose has been to perform the correct classification of the objects in the scene. The present study presents a brief review of some classical techniques as well as a novel one that have been used for such purposes. The use of principal component analysis and K-means clustering techniques as important classification algorithms is here discussed. Moreover, a recent method based on the min-W and max-M lattice auto-associative memories, that was proposed for endmember determination in hyperspectral imagery, is introduced as a classification method. Besides a discussion of their mathematical foundation, we emphasize their main characteristics and the results achieved for two exemplar images composed of objects similar in appearance but spectrally different. The classification results show that the first components computed from principal component analysis can be used to highlight areas with different spectral characteristics. In addition, the use of lattice auto-associative memories provides good results for materials classification even in cases where spectral similarities appear in their spectral responses.
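
    Two of the classical techniques reviewed above, principal component analysis and K-means clustering, can be chained on multispectral pixels as in the sketch below; the band count, number of clusters and image data are assumptions for illustration only.

        # Sketch: PCA followed by K-means clustering of multispectral pixels.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        cube = rng.random((64, 64, 8))                      # placeholder multispectral image (H, W, bands)
        pixels = cube.reshape(-1, cube.shape[-1])

        scores = PCA(n_components=3).fit_transform(pixels)  # principal components highlight spectral variation
        labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
        label_map = labels.reshape(cube.shape[:2])          # per-pixel class map
        print(np.bincount(labels))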

  17. Classification of multiple sclerosis lesions using adaptive dictionary learning.

    Science.gov (United States)

    Deshpande, Hrishikesh; Maurel, Pierre; Barillot, Christian

    2015-12-01

    This paper presents a sparse representation and an adaptive dictionary learning based method for automated classification of multiple sclerosis (MS) lesions in magnetic resonance (MR) images. Manual delineation of MS lesions is a time-consuming task, requiring neuroradiology experts to analyze huge volumes of MR data. This, in addition to the high intra- and inter-observer variability, makes automated MS lesion classification methods necessary. Among the many image representation models and classification methods that can be used for such a purpose, we investigate the use of sparse modeling. In recent years, sparse representation has evolved as a tool for modeling data using a few basis elements of an over-complete dictionary and has found applications in many image processing tasks, including classification. We propose a supervised classification approach by learning dictionaries specific to the lesions and individual healthy brain tissues, which include white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF). The size of the dictionaries learned for each class plays a major role in data representation, but it is an even more crucial element in the case of competitive classification. Our approach adapts the size of the dictionary for each class, depending on the complexity of the underlying data. The algorithm is validated using 52 multi-sequence MR images acquired from 13 MS patients. The results demonstrate the effectiveness of our approach in MS lesion classification. Copyright © 2015 Elsevier Ltd. All rights reserved.
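
    A minimal sketch of the overall idea in the record above: learn one dictionary per tissue class, sparse-code a new sample against each, and assign the class whose dictionary reconstructs it with the smallest residual. The dictionary sizes, sparsity level and patch features are assumptions, not the adaptive scheme of the paper.

        # Sketch: per-class dictionaries + reconstruction-error classification of patch features.
        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

        rng = np.random.default_rng(0)
        classes = ["lesion", "WM", "GM", "CSF"]
        train = {c: rng.standard_normal((300, 64)) for c in classes}   # placeholder patch features

        dictionaries = {
            c: MiniBatchDictionaryLearning(n_components=30, alpha=1.0, random_state=0)
               .fit(train[c]).components_
            for c in classes
        }

        def classify_patch(x):
            errors = {}
            for c, D in dictionaries.items():
                code = sparse_encode(x[None, :], D, algorithm="omp", n_nonzero_coefs=5)
                errors[c] = np.linalg.norm(x - code @ D)   # residual of the sparse approximation
            return min(errors, key=errors.get)

        print(classify_patch(rng.standard_normal(64)))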

  18. A History of Cluster Analysis Using the Classification Society's Bibliography Over Four Decades

    Science.gov (United States)

    Murtagh, Fionn; Kurtz, Michael J.

    2016-04-01

    The Classification Literature Automated Search Service, an annual bibliography based on citation of one or more of a set of around 80 book or journal publications, ran from 1972 to 2012. We analyze here the years 1994 to 2011. The Classification Society's Service, as it was termed, has been produced by the Classification Society. In earlier decades it was distributed as a diskette or CD with the Journal of Classification. Among our findings are the following: an enormous increase in scholarly production post approximately 2000; a very major increase in quantity, coupled with work in different disciplines, from approximately 2004; and a major shift also from cluster analysis in earlier times having mathematics and psychology as disciplines of the journals published in, and affiliations of authors, contrasted with, in more recent times, a "centre of gravity" in management and engineering.

  19. Classification of COROT Exoplanet Light Curves

    NARCIS (Netherlands)

    Debosscher, J.; Aerts, C.C.; Vandenbussche, B.

    2006-01-01

    We present methodology to achieve the automated variability classification of stars based on photometric time series. Our work is done in the framework of the COROT space mission to be launched in 2006, but will also be applicable to data of the future Gaia satellite. We developed routines that are

  20. A Classification System to Guide Physical Therapy Management in Huntington Disease: A Case Series.

    Science.gov (United States)

    Fritz, Nora E; Busse, Monica; Jones, Karen; Khalil, Hanan; Quinn, Lori

    2017-07-01

    Individuals with Huntington disease (HD), a rare neurological disease, experience impairments in mobility and cognition throughout their disease course. The Medical Research Council framework provides a schema that can be applied to the development and evaluation of complex interventions, such as those provided by physical therapists. Treatment-based classifications, based on expert consensus and available literature, are helpful in guiding physical therapy management across the stages of HD. Such classifications also contribute to the development and further evaluation of well-defined complex interventions in this highly variable and complex neurodegenerative disease. The purpose of this case series was to illustrate the use of these classifications in the management of 2 individuals with late-stage HD. Two females, 40 and 55 years of age, with late-stage HD participated in this case series. Both experienced progressive declines in ambulatory function and balance as well as falls or fear of falling. Both individuals received daily care in the home for activities of daily living. Physical therapy Treatment-Based Classifications for HD guided the interventions and outcomes. Eight weeks of in-home balance training, strength training, task-specific practice of functional activities including transfers and walking tasks, and family/carer education were provided. Both individuals demonstrated improvements that met or exceeded the established minimal detectible change values for gait speed and Timed Up and Go performance. Both also demonstrated improvements on Berg Balance Scale and Physical Performance Test performance, with 1 of the 2 individuals exceeding the established minimal detectible changes for both tests. Reductions in fall risk were evident in both cases. These cases provide proof-of-principle to support use of treatment-based classifications for physical therapy management in individuals with HD. Traditional classification of early-, mid-, and late

  1. 32 CFR 2001.15 - Classification guides.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Classification guides. 2001.15 Section 2001.15..., NATIONAL ARCHIVES AND RECORDS ADMINISTRATION CLASSIFIED NATIONAL SECURITY INFORMATION Classification § 2001.15 Classification guides. (a) Preparation of classification guides. Originators of classification...

  2. Reduced computational cost in the calculation of worst case response time for real time systems

    OpenAIRE

    Urriza, José M.; Schorb, Lucas; Orozco, Javier D.; Cayssials, Ricardo

    2009-01-01

    Modern Real Time Operating Systems require reducing computational costs even though the microprocessors become more powerful each day. It is usual that Real Time Operating Systems for embedded systems have advance features to administrate the resources of the applications that they support. In order to guarantee either the schedulability of the system or the schedulability of a new task in a dynamic Real Time System, it is necessary to know the Worst Case Response Time of the Real Time tasks ...

  3. Maxillectomy defects: a suggested classification scheme.

    Science.gov (United States)

    Akinmoladun, V I; Dosumu, O O; Olusanya, A A; Ikusika, O F

    2013-06-01

    The term "maxillectomy" has been used to describe a variety of surgical procedures for a spectrum of diseases involving a diverse anatomical site. Hence, classifications of maxillectomy defects have often made communication difficult. This article highlights this problem, emphasises the need for a uniform system of classification and suggests a classification system which is simple and comprehensive. Articles related to this subject, especially those with specified classifications of maxillary surgical defects, were sourced from the internet through Google, Scopus and PubMed using the search terms "maxillectomy defects classification". A manual search through available literature was also done. The review of the materials revealed many classifications and modifications of classifications from the descriptive, reconstructive and prosthodontic perspectives. No globally acceptable classification exists among practitioners involved in the management of diseases in the mid-facial region. There were over 14 classifications of maxillary defects found in the English literature. Attempts made to address the inadequacies of previous classifications have tended to result in cumbersome and relatively complex classifications. A single classification that is based on both surgical and prosthetic considerations is most desirable and is hereby proposed.

  4. Impact of insufficient energy content in the design time history on the structure response

    International Nuclear Information System (INIS)

    Ma, D.C.; Gvildys, J.; Chang, Y.W.; Seidensticker, R.W.

    1989-01-01

    In the design of nuclear power plants, it is often desirable to use the time history method in the soil-structure interaction analysis to determine the plant floor response to seismic loads. Because many design criteria are specified in terms of design response spectra, the artificial time history needs to be generated under the requirement that the response spectra of the artificial history should envelop the given design response spectra. However, recent studies indicate that the artificial time history used in the plant design may have insufficient energy in the frequency range of interest, even though the response spectra of the design time history closely envelop the design response spectra. This paper presents an investigation of the effects of the insufficient energy content in the design time history on the response of the soil-structure system. Numerical studies were carried out. Both the real earthquake records and the artificial time histories were used as the input motions in a simple lumped-mass soil-structure interaction model. The results obtained from this study provide a better understanding of the effects of the insufficient energy content in the design time history on the structural response

  5. Improving settlement type classification of aerial images

    CSIR Research Space (South Africa)

    Mdakane, L

    2014-10-01

    Full Text Available , an automated method can be used to help identify human settlements in a fixed, repeatable and timely manner. The main contribution of this work is to improve generalisation on settlement type classification of aerial imagery. Images acquired at different dates...

  6. Constructing criticality by classification

    DEFF Research Database (Denmark)

    Machacek, Erika

    2017-01-01

    " in the bureaucratic practice of classification: Experts construct material criticality in assessments as they allot information on the materials to the parameters of the assessment framework. In so doing, they ascribe a new set of connotations to the materials, namely supply risk, and their importance to clean energy......, legitimizing a criticality discourse.Specifically, the paper introduces a typology delineating the inferences made by the experts from their produced recommendations in the classification of rare earth element criticality. The paper argues that the classification is a specific process of constructing risk....... It proposes that the expert bureaucratic practice of classification legitimizes (i) the valorisation that was made in the drafting of the assessment framework for the classification, and (ii) political operationalization when enacted that might have (non-)distributive implications for the allocation of public...

  7. Relationship Between Time Consumption and Quality of Responses to Drug-related Queries

    DEFF Research Database (Denmark)

    Amundstuen Reppe, Linda; Lydersen, Stian; Schjøtt, Jan

    2016-01-01

    in score, –0.05 per hour of work; 95% CI, –0.08 to –0.01; P = 0.005). No such associations were found for the internal experts’ assessment. Implications To our knowledge, this is the first study of the association between time consumption and quality of responses to drug-related queries in DICs......Purpose The aims of this study were to assess the quality of responses produced by drug information centers (DICs) in Scandinavia, and to study the association between time consumption processing queries and the quality of the responses. Methods We posed six identical drug-related queries to seven...... DICs in Scandinavia, and the time consumption required for processing them was estimated. Clinical pharmacologists (internal experts) and general practitioners (external experts) reviewed responses individually. We used mixed model linear regression analyses to study the associations between time...

  8. 12 CFR 403.4 - Derivative classification.

    Science.gov (United States)

    2010-01-01

    ... SAFEGUARDING OF NATIONAL SECURITY INFORMATION § 403.4 Derivative classification. (a) Use of derivative classification. (1) Unlike original classification which is an initial determination, derivative classification... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Derivative classification. 403.4 Section 403.4...

  9. Magnetic resonance imaging texture analysis classification of primary breast cancer

    International Nuclear Information System (INIS)

    Waugh, S.A.; Lerski, R.A.; Purdie, C.A.; Jordan, L.B.; Vinnicombe, S.; Martin, P.; Thompson, A.M.

    2016-01-01

    Patient-tailored treatments for breast cancer are based on histological and immunohistochemical (IHC) subtypes. Magnetic Resonance Imaging (MRI) texture analysis (TA) may be useful in non-invasive lesion subtype classification. Women with newly diagnosed primary breast cancer underwent pre-treatment dynamic contrast-enhanced breast MRI. TA was performed using co-occurrence matrix (COM) features, by creating a model on retrospective training data, then prospectively applying to a test set. Analyses were blinded to breast pathology. Subtype classifications were performed using a cross-validated k-nearest-neighbour (k = 3) technique, with accuracy relative to pathology assessed and receiver operator curve (AUROC) calculated. Mann-Whitney U and Kruskal-Wallis tests were used to assess raw entropy feature values. Histological subtype classifications were similar across training (n = 148 cancers) and test sets (n = 73 lesions) using all COM features (training: 75 %, AUROC = 0.816; test: 72.5 %, AUROC = 0.823). Entropy features were significantly different between lobular and ductal cancers (p < 0.001; Mann-Whitney U). IHC classifications using COM features were also similar for training and test data (training: 57.2 %, AUROC = 0.754; test: 57.0 %, AUROC = 0.750). Hormone receptor positive and negative cancers demonstrated significantly different entropy features. Entropy features alone were unable to create a robust classification model. Textural differences on contrast-enhanced MR images may reflect underlying lesion subtypes, which merits testing against treatment response. (orig.)

  10. Magnetic resonance imaging texture analysis classification of primary breast cancer

    Energy Technology Data Exchange (ETDEWEB)

    Waugh, S.A.; Lerski, R.A. [Ninewells Hospital and Medical School, Department of Medical Physics, Dundee (United Kingdom); Purdie, C.A.; Jordan, L.B. [Ninewells Hospital and Medical School, Department of Pathology, Dundee (United Kingdom); Vinnicombe, S. [University of Dundee, Division of Imaging and Technology, Ninewells Hospital and Medical School, Dundee (United Kingdom); Martin, P. [Ninewells Hospital and Medical School, Department of Clinical Radiology, Dundee (United Kingdom); Thompson, A.M. [University of Texas MD Anderson Cancer Center, Department of Surgical Oncology, Houston, TX (United States)

    2016-02-15

    Patient-tailored treatments for breast cancer are based on histological and immunohistochemical (IHC) subtypes. Magnetic Resonance Imaging (MRI) texture analysis (TA) may be useful in non-invasive lesion subtype classification. Women with newly diagnosed primary breast cancer underwent pre-treatment dynamic contrast-enhanced breast MRI. TA was performed using co-occurrence matrix (COM) features, by creating a model on retrospective training data, then prospectively applying to a test set. Analyses were blinded to breast pathology. Subtype classifications were performed using a cross-validated k-nearest-neighbour (k = 3) technique, with accuracy relative to pathology assessed and receiver operator curve (AUROC) calculated. Mann-Whitney U and Kruskal-Wallis tests were used to assess raw entropy feature values. Histological subtype classifications were similar across training (n = 148 cancers) and test sets (n = 73 lesions) using all COM features (training: 75 %, AUROC = 0.816; test: 72.5 %, AUROC = 0.823). Entropy features were significantly different between lobular and ductal cancers (p < 0.001; Mann-Whitney U). IHC classifications using COM features were also similar for training and test data (training: 57.2 %, AUROC = 0.754; test: 57.0 %, AUROC = 0.750). Hormone receptor positive and negative cancers demonstrated significantly different entropy features. Entropy features alone were unable to create a robust classification model. Textural differences on contrast-enhanced MR images may reflect underlying lesion subtypes, which merits testing against treatment response. (orig.)

  11. An Incremental Classification Algorithm for Mining Data with Feature Space Heterogeneity

    Directory of Open Access Journals (Sweden)

    Yu Wang

    2014-01-01

    Full Text Available Feature space heterogeneity often exists in many real world data sets so that some features are of different importance for classification over different subsets. Moreover, the pattern of feature space heterogeneity might dynamically change over time as more and more data are accumulated. In this paper, we develop an incremental classification algorithm, Supervised Clustering for Classification with Feature Space Heterogeneity (SCCFSH, to address this problem. In our approach, supervised clustering is implemented to obtain a number of clusters such that samples in each cluster are from the same class. After the removal of outliers, relevance of features in each cluster is calculated based on their variations in this cluster. The feature relevance is incorporated into distance calculation for classification. The main advantage of SCCFSH lies in the fact that it is capable of solving a classification problem with feature space heterogeneity in an incremental way, which is favorable for online classification tasks with continuously changing data. Experimental results on a series of data sets and application to a database marketing problem show the efficiency and effectiveness of the proposed approach.
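
    A rough sketch of the supervised-clustering idea described above is given below, assuming NumPy and scikit-learn; the class name, cluster counts and weighting rule are illustrative choices, not the authors' SCCFSH implementation (which is incremental and includes outlier removal).

```python
# Sketch: per-class clustering with per-cluster feature weights derived from
# within-cluster variation, used in a weighted nearest-centre classifier.
import numpy as np
from sklearn.cluster import KMeans

class SupervisedClusterClassifier:
    def __init__(self, clusters_per_class=3):
        self.clusters_per_class = clusters_per_class

    def fit(self, X, y):
        centers, weights, labels = [], [], []
        for label in np.unique(y):
            Xc = X[y == label]
            km = KMeans(n_clusters=min(self.clusters_per_class, len(Xc)),
                        n_init=10, random_state=0).fit(Xc)
            for k in range(km.n_clusters):
                members = Xc[km.labels_ == k]
                # Features that vary little within the cluster get a large weight,
                # mimicking cluster-specific feature relevance.
                weights.append(1.0 / (members.var(axis=0) + 1e-9))
                centers.append(km.cluster_centers_[k])
                labels.append(label)
        self.centers_ = np.array(centers)
        self.weights_ = np.array(weights)
        self.labels_ = np.array(labels)
        return self

    def predict(self, X):
        preds = []
        for x in X:
            d = np.sum(self.weights_ * (self.centers_ - x) ** 2, axis=1)
            preds.append(self.labels_[np.argmin(d)])
        return np.array(preds)
```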

  12. Generation of artificial time-histories, rich in all frequencies, from given response spectra

    International Nuclear Information System (INIS)

    Levy, S.; Wilkinson, J.P.D.

    1976-01-01

    In the design of nuclear power plants, it has been found desirable in certain instances to use the time-history method of dynamic analysis to determine the plant response to seismic input. In the implementation of this method, it is necessary to determine an adequate representation of the excitation as a function of time. Because many design criteria are specified in terms of design response spectra one is faced with the problem of generating a time-history whose own response spectrum approximates as far as possible to the originally specified design response spectrum. One objective of this paper is to present a method of synthesizing such time-histories from a given design response spectrum. The design response spectra may be descriptive of floor responses at a particular location in a plant, or they may be descriptive of seismic ground motions at a plant site. The method described in this paper allows the generation of time histories that are rich in all frequencies in the spectrum. This richness is achieved by choosing a large number of closely-spaced frequency points such that the half-power points of adjacent frequencies overlap. Examples are given concerning seismic design response spectra, and a number of points are discussed concerning the effect of frequency spacing on convergence. (Auth.)
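
    The iterative idea behind such generation methods can be sketched compactly: build the motion as a sum of closely spaced sinusoids under an assumed envelope, compute its response spectrum with single-degree-of-freedom oscillators, and rescale each component by the target-to-computed ratio. The SciPy-based sketch below illustrates that generic loop only; the envelope, damping, duration and iteration count are arbitrary assumptions, not the paper's procedure.

```python
# Generic sketch of iterative response-spectrum matching (not the paper's algorithm).
import numpy as np
from scipy.signal import lsim

def pseudo_sa(accel, t, f, zeta=0.05):
    """Pseudo-spectral acceleration of a damped SDOF oscillator at frequency f (Hz)."""
    wn = 2 * np.pi * f
    # Relative displacement x(t) of  x'' + 2*zeta*wn*x' + wn^2*x = -a_g(t)
    _, x, _ = lsim(([-1.0], [1.0, 2 * zeta * wn, wn ** 2]), accel, t)
    return wn ** 2 * np.max(np.abs(x))

def synth_time_history(freqs, target_sa, duration=20.0, dt=0.01, n_iter=10, zeta=0.05):
    t = np.arange(0.0, duration + dt, dt)
    rng = np.random.default_rng(0)
    phases = rng.uniform(0, 2 * np.pi, len(freqs))
    amps = np.array(target_sa, dtype=float)             # crude starting amplitudes
    # Simple build-up / decay envelope so the motion looks earthquake-like.
    envelope = np.minimum(1.0, t / 2.0) * np.exp(-np.maximum(t - 0.7 * duration, 0.0) / 3.0)
    for _ in range(n_iter):
        accel = envelope * sum(a * np.sin(2 * np.pi * f * t + p)
                               for a, f, p in zip(amps, freqs, phases))
        computed = np.array([pseudo_sa(accel, t, f, zeta) for f in freqs])
        amps *= target_sa / computed                     # push each component toward the target
    return t, accel

# Example: 20 closely spaced frequencies and a flat 0.3 g target spectrum.
freqs = np.linspace(0.5, 20.0, 20)
t, accel = synth_time_history(freqs, target_sa=np.full(20, 0.3 * 9.81))
```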

  13. Using random response input in Ibrahim Time Domain

    DEFF Research Database (Denmark)

    Olsen, Peter; Brincker, R.

    2013-01-01

    In this paper the time domain technique Ibrahim Time Domain (ITD) is used to analyze random time data. ITD is known to be a technique for identification of output-only systems. The traditional formulation of ITD is claimed to be limited when identifying closely spaced modes, because of the technique being Single Input Multiple Output (SIMO). It has earlier been shown that identification of time data with closely spaced modes is improved when ITD is modified with Toeplitz matrix averaging. In the traditional formulation of ITD the time data has to be free decays or impulse response functions. In this article it is shown that random time data can be analyzed when using the modified ITD. The application of the technique is displayed by a case study, with simulations and experimental data.

  14. Supernova Photometric Lightcurve Classification

    Science.gov (United States)

    Zaidi, Tayeb; Narayan, Gautham

    2016-01-01

    This is a preliminary report on photometric supernova classification. We first explore the properties of supernova light curves, and attempt to restructure the unevenly sampled and sparse data from assorted datasets to allow for processing and classification. The data was primarily drawn from the Dark Energy Survey (DES) simulated data, created for the Supernova Photometric Classification Challenge. This poster shows a method for producing a non-parametric representation of the light curve data, and applying a Random Forest classifier algorithm to distinguish between supernovae types. We examine the impact of Principal Component Analysis to reduce the dimensionality of the dataset, for future classification work. The classification code will be used in a stage of the ANTARES pipeline, created for use on the Large Synoptic Survey Telescope alert data and other wide-field surveys. The final figure-of-merit for the DES data in the r band was 60% for binary classification (Type I vs II). Zaidi was supported by the NOAO/KPNO Research Experiences for Undergraduates (REU) Program which is funded by the National Science Foundation Research Experiences for Undergraduates Program (AST-1262829).
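
    A minimal version of the classification stage described above, assuming scikit-learn; the feature matrix is a random placeholder standing in for the resampled light-curve representation, and the PCA and forest settings are illustrative, not the poster's pipeline.

```python
# Random Forest on light-curve features with optional PCA dimensionality reduction.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 40))       # stand-in for non-parametric light-curve features
y = rng.integers(0, 2, size=500)     # 0 = Type I, 1 = Type II (binary case)

model = make_pipeline(PCA(n_components=10),
                      RandomForestClassifier(n_estimators=300, random_state=0))
print("mean CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```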

  15. DATA CLASSIFICATION WITH NEURAL CLASSIFIER USING RADIAL BASIS FUNCTION WITH DATA REDUCTION USING HIERARCHICAL CLUSTERING

    Directory of Open Access Journals (Sweden)

    M. Safish Mary

    2012-04-01

    Full Text Available Classification of large amounts of data is a time-consuming process but crucial for analysis and decision making. Radial Basis Function (RBF) networks are widely used for classification and regression analysis. In this paper, we study the performance of RBF neural networks in classifying car sales based on demand, using a kernel density estimation algorithm that produces classification accuracy comparable to that of support vector machines. We also propose a new instance-based data selection method in which redundant instances are removed with the help of a threshold, improving time complexity along with classification accuracy. Instance-based selection of the data set helps reduce the number of clusters formed, and thereby the number of centers considered for building the RBF network. The efficiency of training is further improved by applying a hierarchical clustering technique to reduce the number of clusters formed at every step. The paper explains the algorithm used for classification and for conditioning the data, as well as the complexities involved in classifying sales data for analysis and decision making.
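
    A compact sketch of an RBF-style classifier whose centres come from a clustering step, in the spirit of the data-reduction idea above; it uses scikit-learn's agglomerative (hierarchical) clustering and a logistic-regression read-out as stand-ins, so it only approximates the paper's RBF/kernel-density scheme.

```python
# RBF features built on cluster centres, followed by a linear read-out.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.linear_model import LogisticRegression

def rbf_features(X, centers, gamma=1.0):
    # Gaussian basis functions evaluated at every centre.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def fit_rbf_classifier(X, y, n_centers=20, gamma=1.0):
    labels = AgglomerativeClustering(n_clusters=n_centers).fit_predict(X)
    centers = np.array([X[labels == k].mean(axis=0) for k in range(n_centers)])
    clf = LogisticRegression(max_iter=1000).fit(rbf_features(X, centers, gamma), y)
    return centers, clf

def predict_rbf(X, centers, clf, gamma=1.0):
    return clf.predict(rbf_features(X, centers, gamma))
```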

  16. Project implementation : classification of organic soils and classification of marls - training of INDOT personnel.

    Science.gov (United States)

    2012-09-01

    This is an implementation project for the research completed as part of the following projects: SPR3005 Classification of Organic Soils : and SPR3227 Classification of Marl Soils. The methods developed for the classification of both soi...

  17. 45 CFR 601.5 - Derivative classification.

    Science.gov (United States)

    2010-10-01

    ... CLASSIFICATION AND DECLASSIFICATION OF NATIONAL SECURITY INFORMATION § 601.5 Derivative classification. Distinct... 45 Public Welfare 3 2010-10-01 2010-10-01 false Derivative classification. 601.5 Section 601.5... classification guide, need not possess original classification authority. (a) If a person who applies derivative...

  18. An Ultrasonic Pattern Recognition Approach to Welding Defect Classification

    International Nuclear Information System (INIS)

    Song, Sung Jin

    1995-01-01

    Classification of flaws in weldments from their ultrasonic scattering signals is very important in quantitative nondestructive evaluation. This problem is ideally suited to modern ultrasonic pattern recognition techniques. Here a brief discussion of a systematic approach to this methodology is presented, including ultrasonic feature extraction, feature selection and classification. A strong emphasis is placed on probabilistic neural networks as efficient classifiers for many practical classification problems. In an example, probabilistic neural networks are applied to classify flaws in weldments into three classes: cracks, porosity and slag inclusions. Probabilistic nets are shown to match the high performance of other classifiers without any training time overhead. In addition, a forward selection scheme for sensitive features is addressed to enhance network performance
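
    A generic Parzen-window probabilistic neural network (PNN) can be written in a few lines, which is what makes the "no training time overhead" point plausible; the sketch below is a textbook PNN, not the paper's network, and the smoothing parameter and placeholder three-class labels are assumptions.

```python
# Textbook PNN: each class score is the mean Gaussian-kernel response of its exemplars.
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        scores = []
        for c in classes:                       # e.g. crack / porosity / slag inclusion
            Xc = X_train[y_train == c]
            d2 = ((Xc - x) ** 2).sum(axis=1)
            scores.append(np.mean(np.exp(-d2 / (2 * sigma ** 2))))
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)
```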

  19. Learning features for tissue classification with the classification restricted Boltzmann machine

    DEFF Research Database (Denmark)

    van Tulder, Gijs; de Bruijne, Marleen

    2014-01-01

    Performance of automated tissue classification in medical imaging depends on the choice of descriptive features. In this paper, we show how restricted Boltzmann machines (RBMs) can be used to learn features that are especially suited for texture-based tissue classification. We introduce the convo...... outperform conventional RBM-based feature learning, which is unsupervised and uses only a generative learning objective, as well as often-used filter banks. We show that a mixture of generative and discriminative learning can produce filters that give a higher classification accuracy....

  20. Mapping US Urban Extents from MODIS Data Using One-Class Classification Method

    Directory of Open Access Journals (Sweden)

    Bo Wan

    2015-08-01

    Full Text Available Urban areas are one of the most important components of human society. Their extents have been continuously growing during the last few decades. Accurate and timely measurements of the extents of urban areas can help in analyzing population densities and urban sprawls and in studying environmental issues related to urbanization. Urban extents detected from remotely sensed data are usually a by-product of land use classification results, and their interpretation requires a full understanding of land cover types. In this study, for the first time, we mapped urban extents in the continental United States using a novel one-class classification method, i.e., positive and unlabeled learning (PUL, with multi-temporal Moderate Resolution Imaging Spectroradiometer (MODIS data for the year 2010. The Defense Meteorological Satellite Program Operational Linescan System (DMSP-OLS night stable light data were used to calibrate the urban extents obtained from the one-class classification scheme. Our results demonstrated the effectiveness of the use of the PUL algorithm in mapping large-scale urban areas from coarse remote-sensing images, for the first time. The total accuracy of mapped urban areas was 92.9% and the kappa coefficient was 0.85. The use of DMSP-OLS night stable light data can significantly reduce false detection rates from bare land and cropland far from cities. Compared with traditional supervised classification methods, the one-class classification scheme can greatly reduce the effort involved in collecting training datasets, without losing predictive accuracy.
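
    Positive-and-unlabeled (PU) learning of the general kind referred to above can be illustrated with the classic Elkan-Noto recipe: fit a classifier to separate labelled positives from unlabeled samples, then rescale its probabilities by the estimated labelling rate. This is a generic sketch under those assumptions, not the PUL algorithm used in the paper, and the pixel features are placeholders.

```python
# Elkan & Noto (2008) style PU learning with a logistic-regression base classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def fit_pu(X_pos, X_unlabeled):
    X = np.vstack([X_pos, X_unlabeled])
    s = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_unlabeled))])
    X_tr, X_hold, s_tr, s_hold = train_test_split(X, s, test_size=0.2,
                                                  random_state=0, stratify=s)
    g = LogisticRegression(max_iter=1000).fit(X_tr, s_tr)
    # c = P(labelled | positive), estimated on held-out labelled positives.
    c = g.predict_proba(X_hold[s_hold == 1])[:, 1].mean()
    return g, c

def predict_proba_pu(g, c, X):
    # P(y = 1 | x) = P(s = 1 | x) / c
    return np.clip(g.predict_proba(X)[:, 1] / c, 0.0, 1.0)
```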

  1. Automated artery-venous classification of retinal blood vessels based on structural mapping method

    Science.gov (United States)

    Joshi, Vinayak S.; Garvin, Mona K.; Reinhardt, Joseph M.; Abramoff, Michael D.

    2012-03-01

    Retinal blood vessels show morphologic modifications in response to various retinopathies. However, the specific responses exhibited by arteries and veins may provide a precise diagnostic information, i.e., a diabetic retinopathy may be detected more accurately with the venous dilatation instead of average vessel dilatation. In order to analyze the vessel type specific morphologic modifications, the classification of a vessel network into arteries and veins is required. We previously described a method for identification and separation of retinal vessel trees; i.e. structural mapping. Therefore, we propose the artery-venous classification based on structural mapping and identification of color properties prominent to the vessel types. The mean and standard deviation of each of green channel intensity and hue channel intensity are analyzed in a region of interest around each centerline pixel of a vessel. Using the vector of color properties extracted from each centerline pixel, it is classified into one of the two clusters (artery and vein), obtained by the fuzzy-C-means clustering. According to the proportion of clustered centerline pixels in a particular vessel, and utilizing the artery-venous crossing property of retinal vessels, each vessel is assigned a label of an artery or a vein. The classification results are compared with the manually annotated ground truth (gold standard). We applied the proposed method to a dataset of 15 retinal color fundus images resulting in an accuracy of 88.28% correctly classified vessel pixels. The automated classification results match well with the gold standard suggesting its potential in artery-venous classification and the respective morphology analysis.

  2. Vehicle Classification Using an Imbalanced Dataset Based on a Single Magnetic Sensor

    Directory of Open Access Journals (Sweden)

    Chang Xu

    2018-05-01

    Full Text Available This paper aims to improve the accuracy of automatic vehicle classifiers for imbalanced datasets. Classification is made through utilizing a single anisotropic magnetoresistive sensor, with the models of vehicles involved being classified into hatchbacks, sedans, buses, and multi-purpose vehicles (MPVs. Using time domain and frequency domain features in combination with three common classification algorithms in pattern recognition, we develop a novel feature extraction method for vehicle classification. These three common classification algorithms are the k-nearest neighbor, the support vector machine, and the back-propagation neural network. Nevertheless, a problem remains with the original vehicle magnetic dataset collected being imbalanced, and may lead to inaccurate classification results. With this in mind, we propose an approach called SMOTE, which can further boost the performance of classifiers. Experimental results show that the k-nearest neighbor (KNN classifier with the SMOTE algorithm can reach a classification accuracy of 95.46%, thus minimizing the effect of the imbalance.
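
    The oversampling-plus-k-NN pipeline described above can be sketched with the imbalanced-learn implementation of SMOTE; the feature matrix and class sizes below are random placeholders, not the magnetic-sensor data.

```python
# SMOTE oversampling of the minority classes followed by a k-NN classifier.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 12))                       # time/frequency-domain features
y = np.repeat([0, 1, 2, 3], [300, 150, 100, 50])     # imbalanced vehicle classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # oversample minorities
knn = KNeighborsClassifier(n_neighbors=5).fit(X_bal, y_bal)
print("accuracy:", accuracy_score(y_te, knn.predict(X_te)))
```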

  3. Vehicle Classification Using an Imbalanced Dataset Based on a Single Magnetic Sensor.

    Science.gov (United States)

    Xu, Chang; Wang, Yingguan; Bao, Xinghe; Li, Fengrong

    2018-05-24

    This paper aims to improve the accuracy of automatic vehicle classifiers for imbalanced datasets. Classification is made through utilizing a single anisotropic magnetoresistive sensor, with the models of vehicles involved being classified into hatchbacks, sedans, buses, and multi-purpose vehicles (MPVs). Using time domain and frequency domain features in combination with three common classification algorithms in pattern recognition, we develop a novel feature extraction method for vehicle classification. These three common classification algorithms are the k-nearest neighbor, the support vector machine, and the back-propagation neural network. Nevertheless, a problem remains with the original vehicle magnetic dataset collected being imbalanced, and may lead to inaccurate classification results. With this in mind, we propose an approach called SMOTE, which can further boost the performance of classifiers. Experimental results show that the k-nearest neighbor (KNN) classifier with the SMOTE algorithm can reach a classification accuracy of 95.46%, thus minimizing the effect of the imbalance.

  4. Radiomic features analysis in computed tomography images of lung nodule classification.

    Directory of Open Access Journals (Sweden)

    Chia-Hung Chen

    Full Text Available Radiomics, which extracts a large number of quantitative image features from diagnostic medical images, has been widely used for prognostication, treatment response prediction and cancer detection. The treatment options for lung nodules depend on their diagnosis, benign or malignant. Conventionally, lung nodule diagnosis is based on invasive biopsy. Recently, radiomics features, a non-invasive method based on clinical images, have shown high potential in lesion classification and treatment outcome prediction. Lung nodule classification using radiomics based on Computed Tomography (CT) image data was investigated and a 4-feature signature was introduced for lung nodule classification. Retrospectively, 72 patients with 75 pulmonary nodules were collected. Radiomics feature extraction was performed on non-enhanced CT images with contours delineated by an experienced radiation oncologist. Among the 750 image features in each case, 76 features were found to have significant differences between benign and malignant lesions. A radiomics signature was composed of the best 4 features, which included Laws_LSL_min, Laws_SLL_energy, Laws_SSL_skewness and Laws_EEL_uniformity. The accuracy of the signature in benign versus malignant classification was 84%, with a sensitivity of 92.85% and a specificity of 72.73%. The classification signature based on radiomics features demonstrated very good accuracy and high potential for clinical application.

  5. Classification of smooth Fano polytopes

    DEFF Research Database (Denmark)

    Øbro, Mikkel

    A simplicial lattice polytope containing the origin in the interior is called a smooth Fano polytope, if the vertices of every facet is a basis of the lattice. The study of smooth Fano polytopes is motivated by their connection to toric varieties. The thesis concerns the classification of smooth...... Fano polytopes up to isomorphism. A smooth Fano -polytope can have at most vertices. In case of vertices an explicit classification is known. The thesis contains the classification in case of vertices. Classifications of smooth Fano -polytopes for fixed exist only for . In the thesis an algorithm...... for the classification of smooth Fano -polytopes for any given is presented. The algorithm has been implemented and used to obtain the complete classification for ....

  6. Classification of EEG signals using a genetic-based machine learning classifier.

    Science.gov (United States)

    Skinner, B T; Nguyen, H T; Liu, D K

    2007-01-01

    This paper investigates the efficacy of the genetic-based learning classifier system XCS, for the classification of noisy, artefact-inclusive human electroencephalogram (EEG) signals represented using large condition strings (108bits). EEG signals from three participants were recorded while they performed four mental tasks designed to elicit hemispheric responses. Autoregressive (AR) models and Fast Fourier Transform (FFT) methods were used to form feature vectors with which mental tasks can be discriminated. XCS achieved a maximum classification accuracy of 99.3% and a best average of 88.9%. The relative classification performance of XCS was then compared against four non-evolutionary classifier systems originating from different learning techniques. The experimental results will be used as part of our larger research effort investigating the feasibility of using EEG signals as an interface to allow paralysed persons to control a powered wheelchair or other devices.

  7. Impact of insufficient energy content in the design time history on the structure response

    International Nuclear Information System (INIS)

    Ma, D.C.; Gvildys, J.; Chang, Y.W.; Seidensticker, R.W.

    1989-01-01

    In the design of nuclear power plants, it is often desirable to use the time history method in the soil-structure interaction analysis to determine the plant floor response to seismic loads. Because many design criteria are specified in terms of design response spectra, the artificial time history needs to be generated under the requirement that the response spectra of the artificial history should envelop the given design response spectra. However, recent studies indicate that the artificial time history used in the plant design may have insufficient energy in the frequency range of interest, even though the response spectra of the design time history closely envelop the design response spectra. Therefore, the proposed changes in the NRC Standard Review Plan requires that when a single time history is used in the seismic design, it must satisfy requirements for both response spectra enveloping and matching a power spectra density (PSD) function in the frequency range of interest. The use of multiple artificial time histories (at least five time histories) in the plant design is also suggested in the new Standard Review Plan. This paper presents an investigation of the effects of the insufficient energy content in the design time history on the response of the soil-structure system. Numerical studies were carried out. Both the real earthquake records and the artificial time histories were used as the input motions in a simple lumped-mass soil-structure interaction model. The results obtained from this study provide a better understanding of the effects of the insufficient energy content in the design time history on the structural response. 5 refs., 10 figs., 1 tab

  8. Building classification trees to explain the radioactive contamination levels of the plants

    International Nuclear Information System (INIS)

    Briand, B.

    2008-04-01

    The objective of this thesis is the development of a method for identifying the factors leading to the various radioactive contamination levels of plants. The suggested methodology is based on the use of a radioecological model of radionuclide transfer through the environment (A.S.T.R.A.L. computer code) and a classification-tree method. In particular, to avoid the instability problems of classification trees and to preserve the tree structure, a node-level stabilizing technique is used. Empirical comparisons are carried out between classification trees built by this method (called the R.E.N. method) and those obtained by the C.A.R.T. method. A similarity measure is defined to compare the structure of two classification trees, and is used to study the stabilizing performance of the R.E.N. method. The suggested methodology is applied to a simplified contamination scenario. From the results obtained, we can identify the main variables responsible for the various radioactive contamination levels of four leafy vegetables (lettuce, cabbage, spinach and leek). Some rules extracted from these classification trees are usable in a post-accident context. (author)

  9. Support vector machine and principal component analysis for microarray data classification

    Science.gov (United States)

    Astuti, Widi; Adiwijaya

    2018-03-01

    Cancer is a leading cause of death worldwide, although a significant proportion of cases can be cured if detected early. In recent decades, microarray technology has taken an important role in the diagnosis of cancer. By using data mining techniques, microarray data classification can be performed to improve the accuracy of cancer diagnosis compared to traditional techniques. Microarray data are characterized by small sample sizes but huge dimensionality. Hence, the challenge for researchers is to provide solutions for microarray data classification with high performance in both accuracy and running time. This research proposes the use of Principal Component Analysis (PCA) as a dimension reduction method along with a Support Vector Machine (SVM) optimized by kernel functions as a classifier for microarray data classification. The proposed scheme was applied to seven data sets using 5-fold cross validation, and evaluation and analysis were conducted in terms of both accuracy and running time. The results showed that the scheme can obtain 100% accuracy for the Ovarian and Lung Cancer data when linear and cubic kernel functions are used. In terms of running time, PCA greatly reduced the running time for every data set.
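
    A minimal version of the described scheme, assuming scikit-learn: PCA for dimension reduction feeding an SVM (a degree-3 polynomial kernel standing in for the "cubic" kernel), evaluated with 5-fold cross-validation; the gene-expression matrix is a random placeholder, not one of the seven data sets.

```python
# PCA + SVM pipeline for few-samples, many-features data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2000))      # few samples, thousands of expression features
y = rng.integers(0, 2, size=100)

model = make_pipeline(StandardScaler(),
                      PCA(n_components=20),
                      SVC(kernel="poly", degree=3))
print("5-fold CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```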

  10. A study on HCI design strategy using emergent features and response time

    International Nuclear Information System (INIS)

    Lee, Sung Jin; Chang, Soon Heung; Park, Jin Gyun

    2001-01-01

    The existing user interface design process has the weakness that no feedback information and no quantitative information pass between its sub-processes. If such information were available in the design process, the design cycle time would decrease and user satisfaction with the HCI would be more easily achieved. In this study, a new design process with feedback information and quantitative information is proposed, using emergent features and user response time. The proposed methodology consists of three main parts. The first part calculates the distinctiveness of a user interface, or of an expanded user interface, with consideration of emergent features. The second part expands a prototype user interface with design options to meet the design requirements, using directed structure graph (or nodal graph) theory. The last part converts the non-realized value, distinctiveness, into a realized value, response time, by means of a response time database or a response time correlation in the form of the Hick-Hyman law equation. Simple validation testing indicated the usefulness of the proposed methodology. It was found that the treatment of emergent features should be improved to better reflect real user interfaces, and that extensive end-user experiments are needed for a reliable response time database. The expansion algorithm and the representation of qualitative information should also be improved for a more efficient design process.
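
    For reference, the Hick-Hyman relation invoked above is commonly written as RT = a + b*log2(N), sometimes with N + 1 to account for the no-response alternative; the coefficients in the snippet are purely illustrative, not values from the paper's response-time database or correlation.

```python
# Hick-Hyman law: choice response time grows with the log of the number of alternatives.
import numpy as np

def hick_hyman_rt(n_alternatives, a=0.2, b=0.15):
    """Predicted response time in seconds for N equally likely alternatives."""
    return a + b * np.log2(n_alternatives + 1)   # +1 variant includes the 'no response' option

for n in (1, 2, 4, 8):
    print(n, round(hick_hyman_rt(n), 3))
```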

  11. Determination of the ecological connectivity between landscape patches obtained using the knowledge engineer (expert) classification technique

    Science.gov (United States)

    Selim, Serdar; Sonmez, Namik Kemal; Onur, Isin; Coslu, Mesut

    2017-10-01

    Connecting similar landscape patches with ecological corridors supports the habitat quality of these patches, increases urban ecological quality, and constitutes an important living and expansion area for wildlife. Furthermore, the habitat connectivity provided by urban green areas supports biodiversity in urban areas. In this study, possible ecological connections between landscape patches were derived using the Expert classification technique and modeled with the probabilistic connection index. Firstly, the reflectance responses of plants in various bands are used as data in the hypotheses. One of the important features of this method is the ability to use more than one image at the same time when forming a hypothesis. For this reason, the base images are prepared before the Expert classification is applied. In addition to the main image, hypothesis conditions were also created for each class with the NDVI image, which is commonly used in vegetation research. The results of a previously conducted supervised classification were also taken into account. We applied this classification method to the raster imagery with user-defined variables. To establish ecological connections between the tree-cover patches obtained from the classification, we used the Probabilistic Connection (PC) index. The probabilistic connection model, which is used in landscape planning and conservation studies to detect and prioritize critical areas for ecological connection, characterizes the probability of a direct connection between habitats. In the accuracy assessment we obtained over 90% total accuracy. We established ecological connections with the PC index and created an interconnected green-space system, thereby offering and implementing a green infrastructure model, a topic high on the agenda in recent years.

  12. Effects of supervised Self Organising Maps parameters on classification performance.

    Science.gov (United States)

    Ballabio, Davide; Vasighi, Mahdi; Filzmoser, Peter

    2013-02-26

    Self Organising Maps (SOMs) are one of the most powerful learning strategies among neural networks algorithms. SOMs have several adaptable parameters and the selection of appropriate network architectures is required in order to make accurate predictions. The major disadvantage of SOMs is probably due to the network optimisation, since this procedure can be often time-expensive. Effects of network size, training epochs and learning rate on the classification performance of SOMs are known, whereas the effect of other parameters (type of SOMs, weights initialisation, training algorithm, topology and boundary conditions) are not so obvious. This study was addressed to analyse the effect of SOMs parameters on the network classification performance, as well as on their computational times, taking into consideration a significant number of real datasets, in order to achieve a comprehensive statistical comparison. Parameters were contemporaneously evaluated by means of an approach based on the design of experiments, which enabled the investigation of their interaction effects. Results highlighted the most important parameters which influence the classification performance and enabled the identification of the optimal settings, as well as the optimal architectures to reduce the computational time of SOMs. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Change of exposure response over time and long-term risk of silicosis among a cohort of Chinese pottery workers.

    Science.gov (United States)

    Sun, Yi; Bochmann, Frank; Morfeld, Peter; Ulm, Kurt; Liu, Yuewei; Wang, Heijiao; Yang, Lei; Chen, Weihong

    2011-07-01

    An analysis was conducted on a cohort of Chinese pottery workers to estimate the exposure-response relationship between respirable crystalline silica dust exposure and the incidence of radiographically diagnosed silicosis, and to estimate the long-term risk of developing silicosis until the age of 65. The cohort comprised 3,250 employees with a median follow-up duration of around 37 years. Incident cases of silicosis were identified via silicosis registries (Chinese X-ray stage I, similar to International Labor Organisation classification scheme profusion category 1/1). Individual exposure to respirable crystalline silica dust was estimated based on over 100,000 historical dust measurements. The association between dust exposure, incidence and long-time risk of silicosis was quantified by Poisson regression analysis adjusted for age and smoking. The risk of silicosis depended not only on the cumulative respirable crystalline silica dust exposures, but also on the time-dependent respirable crystalline silica dust exposure pattern (long-term average concentration, highest annual concentration ever experienced and time since first exposure). A long-term "excess" risk of silicosis of approximately 1.5/1,000 was estimated among workers with all annual respirable crystalline silica dust concentration estimates less than 0.1 mg/m(3), using the German measurement strategy. This study indicates the importance of proper consideration of exposure information in risk quantification in epidemiological studies.
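
    The Poisson-regression step described above can be sketched with statsmodels, using log person-years as an offset; the covariates and simulated data below are placeholders, not the cohort's actual exposure variables or model specification.

```python
# Poisson regression of incident cases on exposure, adjusted for age and smoking,
# with person-years of follow-up as an offset.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "cum_exposure": rng.gamma(2.0, 1.0, n),    # cumulative silica dust (placeholder units)
    "age": rng.uniform(30, 65, n),
    "smoker": rng.integers(0, 2, n),
    "person_years": rng.uniform(5, 40, n),
})
lam = np.exp(-6 + 0.4 * df["cum_exposure"]) * df["person_years"]
df["cases"] = rng.poisson(lam)

X = sm.add_constant(df[["cum_exposure", "age", "smoker"]])
model = sm.GLM(df["cases"], X, family=sm.families.Poisson(),
               offset=np.log(df["person_years"])).fit()
print(model.summary())
```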

  14. Change of Exposure Response over Time and Long-Term Risk of Silicosis among a Cohort of Chinese Pottery Workers

    Directory of Open Access Journals (Sweden)

    Yuewei Liu

    2011-07-01

    Full Text Available An analysis was conducted on a cohort of Chinese pottery workers to estimate the exposure-response relationship between respirable crystalline silica dust exposure and the incidence of radiographically diagnosed silicosis, and to estimate the long-term risk of developing silicosis until the age of 65. The cohort comprised 3,250 employees with a median follow-up duration of around 37 years. Incident cases of silicosis were identified via silicosis registries (Chinese X-ray stage I, similar to International Labor Organisation classification scheme profusion category 1/1. Individual exposure to respirable crystalline silica dust was estimated based on over 100,000 historical dust measurements. The association between dust exposure, incidence and long-time risk of silicosis was quantified by Poisson regression analysis adjusted for age and smoking. The risk of silicosis depended not only on the cumulative respirable crystalline silica dust exposures, but also on the time-dependent respirable crystalline silica dust exposure pattern (long-term average concentration, highest annual concentration ever experienced and time since first exposure. A long-term “excess” risk of silicosis of approximately 1.5/1,000 was estimated among workers with all annual respirable crystalline silica dust concentration estimates less than 0.1 mg/m3, using the German measurement strategy. This study indicates the importance of proper consideration of exposure information in risk quantification in epidemiological studies.

  15. Photoconductivity response time in amorphous semiconductors

    Science.gov (United States)

    Adriaenssens, G. J.; Baranovskii, S. D.; Fuhs, W.; Jansen, J.; Öktü, Ö.

    1995-04-01

    The photoconductivity response time of amorphous semiconductors is examined theoretically on the basis of standard definitions for free- and trapped-carrier lifetimes, and experimentally for a series of a-Si1-xCx:H alloys as a function of generation rate and temperature. As no satisfactory agreement between models and experiments emerges, a simple theory is developed that can account for the experimental observations on the basis of the usual multiple-trapping ideas, provided a small probability of direct free-carrier recombination is included. The theory leads to a stretched-exponential photocurrent decay.
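
    For reference, the stretched-exponential decay mentioned in the last sentence is conventionally written as follows (generic notation, not necessarily the paper's symbols):

```latex
I_{\mathrm{ph}}(t) = I_{0}\,\exp\!\left[-\left(\tfrac{t}{\tau}\right)^{\beta}\right],
\qquad 0 < \beta \le 1,
```

    with tau acting as an effective response time and beta as the dispersion (stretching) exponent.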

  16. Performance of rapid subtyping tools used for the classification of ...

    African Journals Online (AJOL)

    HIV-1 genetic diversity in sub-Saharan Africa is broad and the AIDS epidemic is driven predominantly by recombinants in Central and West Africa. The classification of HIV-1 strains is therefore necessary to understand diagnostic efficiency, individual treatment responses as well as options for designing vaccines and ...

  17. Classification of forensic autopsy reports through conceptual graph-based document representation model.

    Science.gov (United States)

    Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa; Al-Garadi, Mohammed Ali

    2018-06-01

    Text categorization has been used extensively in recent years to classify plain-text clinical reports. This study employs text categorization techniques for the classification of open narrative forensic autopsy reports. One of the key steps in text classification is document representation. In document representation, a clinical report is transformed into a format that is suitable for classification. The traditional document representation technique for text categorization is the bag-of-words (BoW) technique. In this study, the traditional BoW technique is ineffective in classifying forensic autopsy reports because it merely extracts frequent but discriminative features from clinical reports. Moreover, this technique fails to capture word inversion, as well as word-level synonymy and polysemy, when classifying autopsy reports. Hence, the BoW technique suffers from low accuracy and low robustness unless it is improved with contextual and application-specific information. To overcome the aforementioned limitations of the BoW technique, this research aims to develop an effective conceptual graph-based document representation (CGDR) technique to classify 1500 forensic autopsy reports from four (4) manners of death (MoD) and sixteen (16) causes of death (CoD). Term-based and Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT) based conceptual features were extracted and represented through graphs. These features were then used to train a two-level text classifier. The first level classifier was responsible for predicting MoD. In addition, the second level classifier was responsible for predicting CoD using the proposed conceptual graph-based document representation technique. To demonstrate the significance of the proposed technique, its results were compared with those of six (6) state-of-the-art document representation techniques. Lastly, this study compared the effects of one-level classification and two-level classification on the experimental results

  18. Classification of root canal microorganisms using electronic-nose and discriminant analysis

    Directory of Open Access Journals (Sweden)

    Özbilge Hatice

    2010-11-01

    Full Text Available Abstract Background Root canal treatment is a debridement process which disrupts and removes entire microorganisms from the root canal system. Identification of microorganisms may help clinicians decide on treatment alternatives such as using different irrigants, intracanal medicaments and antibiotics. However, the difficulty in cultivation and the complexity in isolation of predominant anaerobic microorganisms make clinicians resort to empirical medical treatments. For this reason, identification of microorganisms is not a routinely used procedure in root canal treatment. In this study, we aimed at classifying 7 different standard microorganism strains which are frequently seen in root canal infections, using odor data collected using an electronic nose instrument. Method Our microorganism odor data set consisted of 5 repeated samples from 7 different classes at 4 concentration levels. For each concentration, 35 samples were classified using 3 different discriminant analysis methods. In order to determine an optimal setting for using electronic-nose in such an application, we have tried 3 different approaches in evaluating sensor responses. Moreover, we have used 3 different sensor baseline values in normalizing sensor responses. Since the number of sensors is relatively large compared to sample size, we have also investigated the influence of two different dimension reduction methods on classification performance. Results We have found that quadratic type dicriminant analysis outperforms other varieties of this method. We have also observed that classification performance decreases as the concentration decreases. Among different baseline values used for pre-processing the sensor responses, the model where the minimum values of sensor readings in the sample were accepted as the baseline yields better classification performance. Corresponding to this optimal choice of baseline value, we have noted that among different sensor response model and
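
    The core analysis above (baseline-normalised sensor responses, dimension reduction, quadratic discriminant analysis) can be sketched with scikit-learn; the sensor readings below are random placeholders, the "divide by the per-sample minimum reading" baseline is only one of the options the authors compare, and the PCA and regularisation settings are assumptions.

```python
# Baseline-normalised e-nose style data classified with PCA + quadratic discriminant analysis.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X_raw = rng.uniform(1.0, 10.0, size=(70, 32))   # placeholder: 70 samples x 32 sensors
y = np.repeat(np.arange(7), 10)                 # 7 microorganism classes

# Baseline normalisation: divide each sample by its minimum sensor reading.
X = X_raw / X_raw.min(axis=1, keepdims=True)

# PCA copes with having more sensors than samples per class; reg_param keeps the
# class covariance estimates well conditioned.
model = make_pipeline(PCA(n_components=5),
                      QuadraticDiscriminantAnalysis(reg_param=0.1))
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```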

  19. Justification of response time testing requirements for pressure and differential pressure sensors

    International Nuclear Information System (INIS)

    Weiss, J.M.; Mayo, C.; Swisher, V.

    1991-01-01

    This paper reports on response time testing (RTT) requirements that were imposed on pressure and differential pressure sensors as a conservative approach to ensure that assumptions in the plant safety analyses were met. The purpose of this project has been to identify the need for response time testing using the bases identified in IEEE Standard 338. A combination of plant data analyses and failure modes and effects analyses (FMEAs) was performed, covering eighteen currently qualified sensor models. The results of these analyses indicate that there are only two failure modes that affect response time but not sensor output concurrently. For these failure modes, appropriate plant actions and testing techniques were identified. Safety system RTT requirements were established by IEEE Standard 338-1975, Criteria for the Periodic Testing of Class 1E Power and Protection Systems, presuming the need existed for this testing. This standard established guidelines for periodic testing to verify that loop response times of installed nuclear safety-related equipment were within the limits presumed by the design basis plant transient and accident analyses. The requirements covered all passive and active components in an instrument loop, including sensors. Individual components could be tested either in groups or separately to determine the overall loop response time.

  20. Generation of synthetic time histories compatible with multiple-damping design response spectra

    International Nuclear Information System (INIS)

    Lilhanand, K.; Tseng, W.S.

    1987-01-01

    Seismic design of nuclear power plants as currently practiced requires time history analyses be performed to generate floor response spectra for seismic qualification of piping, equipment, and components. Since design response spectra are normally prescribed in the form of smooth spectra, the generation of synthetic time histories whose response spectra closely match the ''target'' design spectra of multiple damping values, is often required for the seismic time history analysis purpose. Various methods of generation of synthetic time histories compatible with target response spectra have been proposed in the literature. Since the mathematical problem of determining a time history from a given set of response spectral values is not unique, an exact solution is not possible, and all the proposed methods resort to some forms of approximate solutions. In this paper, a new iteration scheme, is described which effectively removes the difficulties encountered by the existing methods. This new iteration scheme can not only improve the accuracy of spectrum matching for a single-damping target spectrum, but also automate the spectrum matching for multiple-damping target spectra. The applicability and limitations as well as the method adopted to improve the numerical stability of this new iteration scheme are presented. The effectiveness of this new iteration scheme is illustrated by two example applications

  1. Modern classification of neoplasms: reconciling differences between morphologic and molecular approaches

    International Nuclear Information System (INIS)

    Berman, Jules

    2005-01-01

    For over 150 years, pathologists have relied on histomorphology to classify and diagnose neoplasms. Their success has been stunning, permitting the accurate diagnosis of thousands of different types of neoplasms using only a microscope and a trained eye. In the past two decades, cancer genomics has challenged the supremacy of histomorphology by identifying genetic alterations shared by morphologically diverse tumors and by finding genetic features that distinguish subgroups of morphologically homogeneous tumors. The Developmental Lineage Classification and Taxonomy of Neoplasms groups neoplasms by their embryologic origin. The putative value of this classification is based on the expectation that tumors of a common developmental lineage will share common metabolic pathways and common responses to drugs that target these pathways. The purpose of this manuscript is to show that grouping tumors according to their developmental lineage can reconcile certain fundamental discrepancies resulting from morphologic and molecular approaches to neoplasm classification. In this study, six issues in tumor classification are described that exemplify the growing rift between morphologic and molecular approaches to tumor classification: 1) the morphologic separation between epithelial and non-epithelial tumors; 2) the grouping of tumors based on shared cellular functions; 3) the distinction between germ cell tumors and pluripotent tumors of non-germ cell origin; 4) the distinction between tumors that have lost their differentiation and tumors that arise from uncommitted stem cells; 5) the molecular properties shared by morphologically disparate tumors that have a common developmental lineage, and 6) the problem of re-classifying morphologically identical but clinically distinct subsets of tumors. The discussion of these issues in the context of describing different methods of tumor classification is intended to underscore the clinical value of a robust tumor classification. A

  2. Improving Cross-Day EEG-Based Emotion Classification Using Robust Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Yuan-Pin Lin

    2017-07-01

    Full Text Available Constructing a robust emotion-aware analytical framework using non-invasively recorded electroencephalogram (EEG signals has gained intensive attentions nowadays. However, as deploying a laboratory-oriented proof-of-concept study toward real-world applications, researchers are now facing an ecological challenge that the EEG patterns recorded in real life substantially change across days (i.e., day-to-day variability, arguably making the pre-defined predictive model vulnerable to the given EEG signals of a separate day. The present work addressed how to mitigate the inter-day EEG variability of emotional responses with an attempt to facilitate cross-day emotion classification, which was less concerned in the literature. This study proposed a robust principal component analysis (RPCA-based signal filtering strategy and validated its neurophysiological validity and machine-learning practicability on a binary emotion classification task (happiness vs. sadness using a five-day EEG dataset of 12 subjects when participated in a music-listening task. The empirical results showed that the RPCA-decomposed sparse signals (RPCA-S enabled filtering off the background EEG activity that contributed more to the inter-day variability, and predominately captured the EEG oscillations of emotional responses that behaved relatively consistent along days. Through applying a realistic add-day-in classification validation scheme, the RPCA-S progressively exploited more informative features (from 12.67 ± 5.99 to 20.83 ± 7.18 and improved the cross-day binary emotion-classification accuracy (from 58.31 ± 12.33% to 64.03 ± 8.40% as trained the EEG signals from one to four recording days and tested against one unseen subsequent day. The original EEG features (prior to RPCA processing neither achieved the cross-day classification (the accuracy was around chance level nor replicated the encouraging improvement due to the inter-day EEG variability. This result
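
    The low-rank-plus-sparse split that RPCA performs can be illustrated with a standard principal component pursuit solver (inexact augmented Lagrange multipliers). The sketch below is a generic textbook implementation applied to a placeholder channels-by-time matrix; it is not the authors' code, parameter choices, or EEG pipeline.

```python
# Robust PCA via inexact ALM: decompose M into low-rank L (background) + sparse S.
import numpy as np

def shrink(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(M, lam=None, tol=1e-7, max_iter=500):
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    norm_M = np.linalg.norm(M, "fro")
    Y = M / max(np.linalg.norm(M, 2), np.abs(M).max() / lam)
    S = np.zeros_like(M)
    mu = 1.25 / np.linalg.norm(M, 2)
    for _ in range(max_iter):
        L = svd_threshold(M - S + Y / mu, 1.0 / mu)      # singular value thresholding
        S = shrink(M - L + Y / mu, lam / mu)             # elementwise soft thresholding
        residual = M - L - S
        Y = Y + mu * residual
        mu = min(mu * 1.5, 1e7)
        if np.linalg.norm(residual, "fro") / norm_M < tol:
            break
    return L, S   # low-rank background and sparse (putatively emotion-related) parts

# Example: channels x time EEG-like matrix; S would feed the feature extraction.
eeg = np.random.default_rng(0).normal(size=(30, 1000))
L, S = rpca(eeg)
```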

  3. Study on time response properties of ionization chamber in profile gauge

    International Nuclear Information System (INIS)

    Wang Zhentao; Shen Yixiong; Wang Liqiang; Hao Pengfei

    2011-01-01

    The drift time of ions in the ionization chamber was measured by means of using a shortly pulsed X-ray device and through analyzing the voltage signals on the load resistor of the chamber recorded by a digital oscilloscope. By using this method, the time response properties of the ionization chamber in the profile gauge were studied, results of ion drift time for ionization chambers with different internal structures, different voltages and different gas pressures were introduced and the sources of error were discussed. The experiment results show that the time response of ionization chamber in profile gauge meets the requirement of on-line hot strip measuring. (authors)

  4. Review of resistance temperature detector time response characteristics. Safety evaluation report

    International Nuclear Information System (INIS)

    1981-08-01

    A Resistance Temperature Detector (RTD) is used extensively for monitoring water temperatures in nuclear reactor plants. The RTD element does not respond instantaneously to changes in water temperature, but rather there is a time delay before the element senses the temperature change, and in nuclear reactors this delay must be factored into the computation of safety setpoints. For this reason it is necessary to have an accurate description of the RTD time response. This report is a review of the current state of the art of describing and measuring this time response
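
    The delay described above is commonly characterised by a single-time-constant (first-order) model, quoted here only as the generic textbook form rather than anything taken from the report:

```latex
\tau\,\frac{dT_{\mathrm{s}}(t)}{dt} + T_{\mathrm{s}}(t) = T_{\mathrm{w}}(t),
\qquad
T_{\mathrm{s}}(t) = T_{\mathrm{w}} + \bigl(T_{\mathrm{s},0} - T_{\mathrm{w}}\bigr)\,e^{-t/\tau}
\quad \text{(step change in } T_{\mathrm{w}}\text{)},
```

    where T_s is the sensed temperature, T_w the water temperature, and tau the RTD time constant (the time to reach about 63.2 % of a step change).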

  5. Classification of subsurface objects using singular values derived from signal frames

    Science.gov (United States)

    Chambers, David H; Paglieroni, David W

    2014-05-06

    The classification system represents a detected object with a feature vector derived from the return signals acquired by an array of N transceivers operating in multistatic mode. The classification system generates the feature vector by transforming the real-valued return signals into complex-valued spectra, using, for example, a Fast Fourier Transform. The classification system then generates a feature vector of singular values for each user-designated spectral sub-band by applying a singular value decomposition (SVD) to the N.times.N square complex-valued matrix formed from sub-band samples associated with all possible transmitter-receiver pairs. The resulting feature vector of singular values may be transformed into a feature vector of singular value likelihoods and then subjected to a multi-category linear or neural network classifier for object classification.
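
    One reasonable reading of that feature construction is sketched below with NumPy: FFT each return, collapse the chosen sub-band into a single N x N transmitter-receiver matrix (here by averaging the complex sub-band samples, which is an assumption), and keep its singular values. Array sizes, sampling rate and band edges are placeholders.

```python
# Singular values of a sub-band multistatic matrix as a feature vector.
import numpy as np

def subband_singular_values(returns, fs, band, n_fft=1024):
    """returns: array of shape (N_tx, N_rx, n_samples) of real-valued return signals."""
    spectra = np.fft.rfft(returns, n=n_fft, axis=-1)        # complex spectra per pair
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    idx = (freqs >= band[0]) & (freqs < band[1])
    # Collapse the sub-band samples into one N x N multistatic matrix (assumed: mean).
    K = spectra[:, :, idx].mean(axis=-1)
    return np.linalg.svd(K, compute_uv=False)               # singular values only

# Example with synthetic data: 8 transceivers, 256 samples per return.
rng = np.random.default_rng(0)
sig = rng.normal(size=(8, 8, 256))
features = subband_singular_values(sig, fs=1e6, band=(50e3, 100e3))
```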

  6. Ontologies vs. Classification Systems

    DEFF Research Database (Denmark)

    Madsen, Bodil Nistrup; Erdman Thomsen, Hanne

    2009-01-01

    What is an ontology compared to a classification system? Is a taxonomy a kind of classification system or a kind of ontology? These are questions that we meet when working with people from industry and public authorities, who need methods and tools for concept clarification, for developing meta...... data sets or for obtaining advanced search facilities. In this paper we will present an attempt at answering these questions. We will give a presentation of various types of ontologies and briefly introduce terminological ontologies. Furthermore we will argue that classification systems, e.g. product...... classification systems and meta data taxonomies, should be based on ontologies....

  7. Combining fine texture and coarse color features for color texture classification

    Science.gov (United States)

    Wang, Junmin; Fan, Yangyu; Li, Ning

    2017-11-01

    Color texture classification plays an important role in computer vision applications because texture and color are two fundamental visual features. To classify color texture by extracting discriminative color texture features in real time, we present an approach that combines fine texture and coarse color features. First, the input image is transformed from RGB to HSV color space to separate texture and color information. Second, the scale-selective completed local binary count (CLBC) algorithm is used to extract the fine texture feature from the V component of the HSV color space. Third, both the H and S components are quantized at an optimal coarse level, and their joint histogram is computed and used as the coarse color feature. Finally, the fine texture and coarse color features are combined into the final descriptor and the nearest subspace classifier is used for classification. Experimental results on the CUReT, KTH-TIPS, and New-BarkTex databases demonstrate that the proposed method achieves state-of-the-art classification performance. Moreover, the proposed method is fast enough for real-time applications.
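
    The coarse color feature described above is essentially a joint histogram over quantized H and S channels. Below is a minimal sketch of that step, assuming an OpenCV-style BGR input image; the quantization level (8 bins per channel) is a placeholder, since the paper selects the optimal coarse level empirically, and the fine-texture (CLBC) descriptor is not reproduced here.

    ```python
    import cv2
    import numpy as np

    def coarse_color_feature(bgr_image, bins=8):
        """Joint H-S histogram used as the coarse color feature.

        bgr_image -- uint8 image in OpenCV's BGR channel order
        bins      -- quantization levels per channel (assumed value; the
                     paper tunes this 'optimal coarse level' empirically)
        """
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        h = hsv[:, :, 0].astype(np.float64)   # OpenCV hue range: 0..179
        s = hsv[:, :, 1].astype(np.float64)   # saturation range: 0..255
        hist, _, _ = np.histogram2d(
            h.ravel(), s.ravel(), bins=bins, range=[[0, 180], [0, 256]]
        )
        hist = hist.ravel()
        return hist / hist.sum()              # L1-normalized joint histogram

    if __name__ == "__main__":
        # Tiny synthetic example so the sketch runs without an image file.
        img = np.random.default_rng(0).integers(0, 256, (64, 64, 3), dtype=np.uint8)
        print(coarse_color_feature(img).shape)   # (64,) for 8 x 8 bins
        # The final descriptor would concatenate this with the CLBC texture
        # feature computed on the V channel.
    ```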

  8. Performing dynamic time history analyses by extension of the response spectrum method

    International Nuclear Information System (INIS)

    Hulbert, G.M.

    1983-01-01

    A method is presented to calculate the dynamic time history response of finite-element models using results from response spectrum analyses. The proposed modified time history method does not represent a new mathematical approach to dynamic analysis but suggests a more efficient ordering of the analytical equations and procedures. The modified time history method is considerably faster and less expensive to use than normal time history methods. This paper presents the theory and implementation of the modified time history approach along with comparisons of the modified and normal time history methods for a prototypic seismic piping design problem.

  9. 48 CFR 5.203 - Publicizing and response time.

    Science.gov (United States)

    2010-10-01

    ... development if the proposed contract action is expected to exceed the simplified acquisition threshold. (f) Nothing in this subpart prohibits officers or employees of agencies from responding to requests for... response times specified in paragraphs (a) through (d) of this section. Upon learning that a particular...

  10. Telling time in the Fourth Gospel

    Directory of Open Access Journals (Sweden)

    Jerome H. Neyrey

    2008-01-01

    Full Text Available When we begin the task of telling time in the Fourth Gospel, we bring something not found in any previous study, namely, a model of time articulated by cross-cultural anthropologists (Bourdieu, in Pitt-Rivers 1963:55-72; Ayoade, in Wright 1984:71-89). As much as we admire Davies’ study, she has no notes to her chapter on time nor any citations in her bibliography to indicate that she has any conversation partners, much less cultural experts, a deficit to be filled in this study. Learning to tell time entails three theoretical considerations: a definition of time, key classifications of it, and special attention to what the ancients meant by past, present and future. With these lenses we are prepared to do as thorough a study as we can on telling time in the Fourth Gospel. As we consider each classification, we will suggest a brief meaning of it from the experts on time, then present a body of Greco-Roman materials illustrative of the classification, and finally use it to gather and interpret data in John. Establishing that these classifications for telling time genuinely existed in antiquity is essential for readers to have a background against which to compare their usage with that of the Fourth Gospel.

  11. Parallelizing Gene Expression Programming Algorithm in Enabling Large-Scale Classification

    Directory of Open Access Journals (Sweden)

    Lixiong Xu

    2017-01-01

    Full Text Available As one of the most effective function mining algorithms, the Gene Expression Programming (GEP) algorithm has been widely used in classification, pattern recognition, prediction, and other research fields. Through self-evolution, GEP is able to mine an optimal function for dealing with complicated tasks. However, in big data research, GEP suffers from low efficiency due to its long mining process. To improve the efficiency of GEP in big data research, especially for processing large-scale classification tasks, this paper presents a parallelized GEP algorithm using the MapReduce computing model. The experimental results show that the presented algorithm is scalable and efficient for processing large-scale classification tasks.
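
    The core of this kind of parallelization is spreading the costly fitness evaluation across data partitions: each map task scores the whole population against its own chunk of training records, and the reduce step sums the partial fitness values per individual. The sketch below illustrates that map/reduce pattern with Python's multiprocessing standing in for a real MapReduce cluster; the fitness function, the toy "individuals", and the data split are illustrative assumptions, not the paper's implementation.

    ```python
    from multiprocessing import Pool
    import numpy as np

    def partial_fitness(args):
        """Map step: score every individual against one data partition."""
        population, X_part, y_part = args
        scores = []
        for individual in population:
            # Placeholder fitness: number of correctly classified records.
            predictions = individual(X_part)
            scores.append(int(np.sum(predictions == y_part)))
        return scores

    def parallel_fitness(population, X, y, n_partitions=4):
        """Reduce step: sum the partial scores from every partition."""
        X_parts = np.array_split(X, n_partitions)
        y_parts = np.array_split(y, n_partitions)
        tasks = [(population, Xp, yp) for Xp, yp in zip(X_parts, y_parts)]
        with Pool(n_partitions) as pool:
            partials = pool.map(partial_fitness, tasks)
        return [sum(col) for col in zip(*partials)]   # fitness per individual

    # Module-level stand-ins for decoded GEP expression trees (picklable).
    def rule_sum_positive(X):
        return (X[:, 0] + X[:, 1] > 0).astype(int)

    def rule_third_positive(X):
        return (X[:, 2] > 0).astype(int)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X = rng.standard_normal((10_000, 3))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)
        print(parallel_fitness([rule_sum_positive, rule_third_positive], X, y))
    ```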

  12. Sound classification of dwellings in the Nordic countries

    DEFF Research Database (Denmark)

    Rindel, Jens Holger; Turunen-Rise, Iiris

    1997-01-01

    A draft standard INSTA 122:1997 on sound classification of dwellings is up for voting as a common national standard in the Nordic countries (Denmark, Norway, Sweden, Finland, Iceland) and in Estonia. The draft standard specifies a sound classification system with four classes A, B, C and D, where class C is proposed as the future minimum requirements for new dwellings. The classes B and A define criteria for dwellings with improved or very good acoustic conditions, whereas class D may be used for older, renovated dwellings in which the acoustic quality level of a new dwelling cannot reasonably be met. The classification system is based on limit values for airborne sound insulation, impact sound pressure level, reverberation time and indoor and outdoor noise levels. The purpose of the standard is to offer a tool for specification of a standardised acoustic climate and to promote constructors ...

  13. A new method for measuring the response time of the high pressure ionization chamber

    International Nuclear Information System (INIS)

    Wang, Zhentao; Shen, Yixiong; An, Jigang

    2012-01-01

    Time response is an important performance characteristic of gas-pressurized ionization chambers. To study the time response, it is especially crucial to measure the ion drift time in high pressure ionization chambers. In this paper, a new approach to studying the ion drift time in high pressure ionization chambers is proposed. It is carried out with a short-pulsed X-ray source and a high-speed digitizer, and the ion drift time in the chamber is determined from the digitized data. By measuring the ion drift time of a 15 atm xenon test chamber, the method has been shown to be effective for time response studies of ionization chambers. Highlights: a method for measuring the response time of high pressure ionization chambers is proposed; a pulsed X-ray source and a digital oscilloscope are used; the response time of a 15 atm xenon test ionization chamber has been measured; the method has proved to be simple, feasible and effective.
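
    One plausible way to turn the digitized trace into an ion drift time is to measure how long the ion-induced signal persists after the short X-ray pulse, e.g. the interval between the first rise of the signal and the point where it settles back toward baseline. The sketch below uses simple threshold crossings on a recorded waveform; the threshold fractions, sampling rate, and synthetic pulse shape are illustrative assumptions rather than details from the paper.

    ```python
    import numpy as np

    def estimate_drift_time(t, v, start_frac=0.1, end_frac=0.1):
        """Estimate ion drift time from a digitized chamber signal.

        t -- sample times (s); v -- voltage across the load resistor (V).
        The drift time is taken as the interval between the first rise of
        the signal above start_frac of its peak and the last time it stays
        above end_frac of the peak before returning to baseline.
        """
        v = v - np.median(v[: len(v) // 20])          # crude baseline removal
        peak = np.max(np.abs(v))
        above_start = np.abs(v) >= start_frac * peak
        above_end = np.abs(v) >= end_frac * peak
        t_start = t[np.argmax(above_start)]           # first threshold crossing
        t_end = t[len(v) - 1 - np.argmax(above_end[::-1])]  # last crossing
        return t_end - t_start

    # Synthetic example: a pulse decaying over ~2 ms, sampled at 1 MS/s.
    t = np.arange(0, 5e-3, 1e-6)
    v = np.where(t > 0.5e-3, np.exp(-(t - 0.5e-3) / 0.5e-3), 0.0)
    print(f"estimated drift time: {estimate_drift_time(t, v) * 1e3:.2f} ms")
    ```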

  14. Effect of hip braces on brake response time: Repeated measures designed study.

    Science.gov (United States)

    Dammerer, Dietmar; Waidmann, Cornelia; Huber, Dennis G; Krismer, Martin; Haid, Christian; Liebensteiner, Michael C

    2017-08-01

    The question of whether a patient with a hip brace should drive a car is of obvious importance, because the advice given to patients about resuming driving is often anecdotal and few scientific data are available on this specific subject. The objective was to assess driving ability (brake response time) with commonly used hip braces, using a repeated measures design. Brake response time was assessed under six conditions: (1) without a brace (control), (2) with a typical postoperative hip brace with adjustable range of motion set to unrestricted, (3) flexion limited to 70°, (4) extension blocked at 20° hip flexion, (5) both flexion and extension limited (20°/70°) and (6) an elastic hip bandage. Brake response time was measured with a custom-made driving simulator as used in previous studies. Participants were a convenience sample of 70 able-bodied adults (35 women and 35 men) with a mean age of 31.1 years (standard deviation: 10.6; range: 21.7-66.4). A significant within-subject effect for brake response time was found (p = 0.009), but subsequent post hoc analyses revealed no significant differences between the control condition and the other settings. Based on these findings, it does not seem mandatory to recommend driving abstinence for patients wearing a hip orthosis. The results should nevertheless be interpreted with caution, because (1) an underlying pathological hip condition needs to be considered, (2) the ability to drive a car safely is multifactorial and brake response time is only one component thereof, and (3) brake response time measurements were performed only with healthy participants. Clinical relevance: Hip braces are used in the context of joint-preserving and prosthetic surgery of the hip, so clinicians are confronted with the question of whether to allow driving a car with the respective hip brace. Our data suggest that hip braces do not impair brake response time.

  15. On response time and cycle time distributions in a two-stage cyclic queue

    NARCIS (Netherlands)

    Boxma, O.J.; Donk, P.

    1982-01-01

    We consider a two-stage closed cyclic queueing model. For the case of an exponential server at each queue we derive the joint distribution of the successive response times of a customer at both queues, using a reversibility argument. This joint distribution turns out to have a product form.
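
    For readers who want to explore this model numerically, the recursion below simulates a closed two-stage cyclic FIFO network with exponential servers and records each customer's successive response times at the two queues. The service rates and population size are arbitrary illustrative values, and the simulation is an independent sketch, not the paper's analysis.

    ```python
    import numpy as np

    def simulate_cyclic_queues(n_customers=3, mu1=1.0, mu2=1.5,
                               n_passes=200_000, seed=0):
        """Closed cyclic network: queue 1 -> queue 2 -> queue 1 -> ...

        All n_customers start at queue 1 at time 0. Both queues are FIFO
        with a single exponential server (rates mu1, mu2). Returns paired
        response times (R1, R2) for each pass of a customer through the cycle.
        """
        rng = np.random.default_rng(seed)
        s1 = rng.exponential(1.0 / mu1, n_passes)   # service times, queue 1
        s2 = rng.exponential(1.0 / mu2, n_passes)   # service times, queue 2
        d1 = np.zeros(n_passes)                     # departure times, queue 1
        d2 = np.zeros(n_passes)                     # departure times, queue 2
        r1 = np.zeros(n_passes)
        r2 = np.zeros(n_passes)
        prev_d1 = prev_d2 = 0.0
        for n in range(n_passes):
            # Arrival at queue 1: the (n - K)-th departure from queue 2,
            # or time 0 for the K customers initially present.
            a1 = d2[n - n_customers] if n >= n_customers else 0.0
            d1[n] = max(a1, prev_d1) + s1[n]
            d2[n] = max(d1[n], prev_d2) + s2[n]
            r1[n] = d1[n] - a1                      # response time at queue 1
            r2[n] = d2[n] - d1[n]                   # response time at queue 2
            prev_d1, prev_d2 = d1[n], d2[n]
        return r1, r2

    r1, r2 = simulate_cyclic_queues()
    print(f"mean response times: queue 1 = {r1.mean():.3f}, queue 2 = {r2.mean():.3f}")
    print(f"correlation of successive R1, R2: {np.corrcoef(r1, r2)[0, 1]:+.3f}")
    ```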

  16. Sources, classification, and disposal of radioactive wastes: History and legal and regulatory requirements

    International Nuclear Information System (INIS)

    Kocher, D.C.

    1991-01-01

    This report discusses the following topics: (1) early definitions of different types (classes) of radioactive waste developed prior to definitions in laws and regulations; (2) sources of different classes of radioactive waste; (3) current laws and regulations addressing classification of radioactive wastes and requirements for disposal of different waste classes, with emphasis on the relationship between waste classification and requirements for permanent disposal; (4) federal and state responsibilities for radioactive wastes; and (5) distinctions between radioactive wastes produced in the civilian and defense sectors

  17. Searching for serial refreshing in working memory: Using response times to track the content of the focus of attention over time.

    Science.gov (United States)

    Vergauwe, Evie; Hardman, Kyle O; Rouder, Jeffrey N; Roemer, Emily; McAllaster, Sara; Cowan, Nelson

    2016-12-01

    One popular idea is that, to support the maintenance of a set of elements over brief periods of time, the focus of attention rotates among the different elements, thereby serially refreshing the content of working memory (WM). In the research reported here, probe letters were presented between to-be-remembered letters, and response times to these probes were used to infer the status of the different items in WM. If the focus of attention cycles from one item to the next, its content should be different at different points in time, and this should be reflected in a change in the response time patterns over time. Across a set of four experiments, we demonstrated a striking pattern of invariance in the response time patterns over time, suggesting either that the content of the focus of attention did not change over time or that response times cannot be used to infer the content of the focus of attention. We discuss how this pattern constrains models of WM, attention, and human information processing.

  18. Gender classification system in uncontrolled environments

    Science.gov (United States)

    Zeng, Pingping; Zhang, Yu-Jin; Duan, Fei

    2011-01-01

    Most face analysis systems available today operate mainly on databases of images that are restricted in terms of size, age, and illumination. In addition, it is frequently assumed that all images are frontal and unconcealed. In practice, under non-guided real-time surveillance, the face pictures taken are often partially covered and show varying degrees of head rotation. In this paper, a system intended for real-time surveillance with an uncalibrated camera and non-guided photography is described. It consists of five parts: face detection, non-face filtering, best-angle face selection, texture normalization, and gender classification. Emphasis is placed on the non-face filtering and best-angle face selection parts as well as on texture normalization. Best-angle faces are identified by PCA reconstruction, which amounts to an implicit face alignment and results in a large increase in gender classification accuracy. A dynamic skin model and a masked PCA reconstruction algorithm are applied to filter out faces detected in error. To capture both facial-texture and shape-outline features, a hybrid feature combining Gabor wavelets and PHoG (pyramid histogram of gradients) is proposed to balance inner texture and outer contour. A comparative study of the effects of different non-face filtering and texture masking methods on gender classification by SVM is reported through experiments on a set of UT (a company name) face images, a large number of internet images, and the CAS (Chinese Academy of Sciences) face database.
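
    As a rough illustration of the PCA-reconstruction idea used for non-face filtering and best-angle selection, the sketch below fits a PCA subspace to a set of aligned, near-frontal face vectors and scores candidate crops by their reconstruction error, taking the lowest-error candidate as the best-angle face. The training data, image size, and number of components are placeholders, not values from the paper, and the masking step is omitted.

    ```python
    import numpy as np

    def fit_pca(faces, n_components=20):
        """Fit a PCA subspace to row-vectorized, near-frontal face images."""
        mean = faces.mean(axis=0)
        centered = faces - mean
        # Rows of Vt are the principal directions (eigenfaces).
        _, _, Vt = np.linalg.svd(centered, full_matrices=False)
        return mean, Vt[:n_components]

    def reconstruction_error(x, mean, components):
        """Distance between a face vector and its PCA reconstruction."""
        coeffs = components @ (x - mean)
        x_hat = mean + components.T @ coeffs
        return float(np.linalg.norm(x - x_hat))

    # Toy data: 100 "training faces" and 5 candidate detections, each a
    # 32x32 grayscale crop flattened to a 1024-d vector.
    rng = np.random.default_rng(2)
    train_faces = rng.standard_normal((100, 32 * 32))
    candidates = rng.standard_normal((5, 32 * 32))

    mean, components = fit_pca(train_faces, n_components=20)
    errors = [reconstruction_error(c, mean, components) for c in candidates]
    best = int(np.argmin(errors))
    print(f"best-angle candidate: {best}, errors: {np.round(errors, 2)}")
    # The same score with a threshold can reject non-face detections
    # (a large reconstruction error suggests the crop is unlikely to be a face).
    ```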

  19. Changes in the Social Responsibility Attitudes of Engineering Students Over Time.

    Science.gov (United States)

    Bielefeldt, Angela R; Canney, Nathan E

    2016-10-01

    This research explored how engineering students' views of their responsibility toward helping individuals and society through their profession, so-called social responsibility, change over time. A survey instrument was administered in September 2012, April 2013, and March 2014 to students majoring in mechanical, civil, or environmental engineering at five institutions, most of them initially in their first year, senior year, or graduate studies. The majority of the students (57 %) did not change significantly in their social responsibility attitudes, but 23 % decreased and 20 % increased. The students who increased, decreased, or remained the same in their social responsibility attitudes over time did not differ significantly in terms of gender, academic rank, or major. Some differences were found between institutions. Students who decreased in social responsibility initially possessed more positive social responsibility attitudes, were less likely to indicate that college courses impacted their views of social responsibility, and were more likely to have decreased in the frequency with which they participated in volunteer activities, compared to students who did not change or who increased their social responsibility. Although the large percentage of engineering students who decreased their social responsibility during college was disappointing, it is encouraging that courses and participation in volunteer activities may combat this trend.

  20. Optimal time interval for induction of immunologic adaptive response

    International Nuclear Information System (INIS)

    Ju Guizhi; Song Chunhua; Liu Shuzheng

    1994-01-01

    The optimal time interval between a prior dose (D1) and a challenge dose (D2) for the induction of immunologic adaptive response was investigated. Kunming mice were exposed to 75 mGy X-rays at a dose rate of 12.5 mGy/min. At 3, 6, 12, 24 or 60 h after the prior irradiation the mice were challenged with a dose of 1.5 Gy at a dose rate of 0.33 Gy/min, and 18 h after D2 the mice were sacrificed for examination of immunological parameters. The results showed that with an interval of 6 h between D1 and D2, the adaptive response of the reaction of splenocytes to LPS was induced, and with an interval of 12 h the adaptive responses of spontaneous incorporation of ³H-TdR into thymocytes and of the reaction of splenocytes to Con A and LPS were induced with 75 mGy prior irradiation. The data suggest that the optimal time intervals between D1 and D2 for the induction of immunologic adaptive response are 6 h and 12 h with a D1 of 75 mGy and a D2 of 1.5 Gy. The mechanism of immunologic adaptation following low dose radiation is discussed.