WorldWideScience

Sample records for vector point dataset

  1. Critical Point Cancellation in 3D Vector Fields: Robustness and Discussion.

    Science.gov (United States)

    Skraba, Primoz; Rosen, Paul; Wang, Bei; Chen, Guoning; Bhatia, Harsh; Pascucci, Valerio

    2016-02-29

    Vector field topology has been successfully applied to represent the structure of steady vector fields. Critical points, one of the essential components of vector field topology, play an important role in describing the complexity of the extracted structure. Simplifying vector fields via critical point cancellation has practical merit for interpreting the behaviors of complex vector fields such as turbulence. However, there is no effective technique that allows direct cancellation of critical points in 3D. This work fills this gap and introduces the first framework to directly cancel pairs or groups of 3D critical points in a hierarchical manner with a guaranteed minimum amount of perturbation based on their robustness, a quantitative measure of their stability. In addition, our framework does not require the extraction of the entire 3D topology, which contains non-trivial separation structures, and thus is computationally effective. Furthermore, our algorithm can remove critical points in any subregion of the domain whose degree is zero and handle complex boundary configurations, making it capable of addressing challenging scenarios that may not be resolved otherwise. We apply our method to synthetic and simulation datasets to demonstrate its effectiveness.

  2. Visualizing Robustness of Critical Points for 2D Time-Varying Vector Fields

    KAUST Repository

    Wang, B.

    2013-06-01

    Analyzing critical points and their temporal evolutions plays a crucial role in understanding the behavior of vector fields. A key challenge is to quantify the stability of critical points: more stable points may represent more important phenomena or vice versa. The topological notion of robustness is a tool which allows us to quantify rigorously the stability of each critical point. Intuitively, the robustness of a critical point is the minimum amount of perturbation necessary to cancel it within a local neighborhood, measured under an appropriate metric. In this paper, we introduce a new analysis and visualization framework which enables interactive exploration of robustness of critical points for both stationary and time-varying 2D vector fields. This framework allows the end-users, for the first time, to investigate how the stability of a critical point evolves over time. We show that this depends heavily on the global properties of the vector field and that structural changes can correspond to interesting behavior. We demonstrate the practicality of our theories and techniques on several datasets involving combustion and oceanic eddy simulations and obtain some key insights regarding their stable and unstable features. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and Blackwell Publishing Ltd.
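
    The robustness notion above is directly computable on sampled data. Below is a minimal sketch, assuming a 2D field on a regular grid with critical points lying exactly on grid nodes: it labels sub-level sets of the vector magnitude and reports the smallest threshold at which the component containing a critical point reaches total degree zero. All names and the 200-level sweep are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def find_critical_points(vx, vy, tol=1e-9):
    """Interior grid nodes where both components vanish, with their degree:
    +1 for sources/sinks/centers, -1 for saddles (nondegenerate case)."""
    cps = {}
    for i in range(1, vx.shape[0] - 1):
        for j in range(1, vx.shape[1] - 1):
            if abs(vx[i, j]) < tol and abs(vy[i, j]) < tol:
                jac = ((vx[i + 1, j] - vx[i - 1, j]) * (vy[i, j + 1] - vy[i, j - 1])
                       - (vx[i, j + 1] - vx[i, j - 1]) * (vy[i + 1, j] - vy[i - 1, j]))
                cps[(i, j)] = int(np.sign(jac))
    return cps

def robustness(vx, vy, cp, n_levels=200):
    """Smallest r such that the component of {||v|| <= r} containing cp has
    total degree 0, i.e. the minimum perturbation magnitude cancelling cp."""
    mag = np.hypot(vx, vy)
    cps = find_critical_points(vx, vy)
    for r in np.linspace(0.0, mag.max(), n_levels):
        labels, _ = ndimage.label(mag <= r)
        comp = labels[cp]
        if comp == 0:            # threshold still below the field value at cp
            continue
        if sum(d for p, d in cps.items() if labels[p] == comp) == 0:
            return r
    return mag.max()

# Source/saddle pair of v(x, y) = (x^2 - 0.25, y): each has robustness
# ~0.25, the magnitude at which their sub-level components merge.
xs = np.linspace(-1, 1, 21)
X, Y = np.meshgrid(xs, xs, indexing="ij")
vx, vy = X ** 2 - 0.25, Y
for cp, deg in find_critical_points(vx, vy).items():
    print(cp, deg, robustness(vx, vy, cp))
```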

  3. Visualizing Robustness of Critical Points for 2D Time-Varying Vector Fields

    KAUST Repository

    Wang, B.; Rosen, P.; Skraba, P.; Bhatia, H.; Pascucci, V.

    2013-01-01

    Analyzing critical points and their temporal evolutions plays a crucial role in understanding the behavior of vector fields. A key challenge is to quantify the stability of critical points: more stable points may represent more important phenomena or vice versa. The topological notion of robustness is a tool which allows us to quantify rigorously the stability of each critical point. Intuitively, the robustness of a critical point is the minimum amount of perturbation necessary to cancel it within a local neighborhood, measured under an appropriate metric. In this paper, we introduce a new analysis and visualization framework which enables interactive exploration of robustness of critical points for both stationary and time-varying 2D vector fields. This framework allows the end-users, for the first time, to investigate how the stability of a critical point evolves over time. We show that this depends heavily on the global properties of the vector field and that structural changes can correspond to interesting behavior. We demonstrate the practicality of our theories and techniques on several datasets involving combustion and oceanic eddy simulations and obtain some key insights regarding their stable and unstable features. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and Blackwell Publishing Ltd.

  4. Point-based warping with optimized weighting factors of displacement vectors

    Science.gov (United States)

    Pielot, Ranier; Scholz, Michael; Obermayer, Klaus; Gundelfinger, Eckart D.; Hess, Andreas

    2000-06-01

    The accurate comparison of inter-individual 3D image brain datasets requires non-affine transformation techniques (warping) to reduce geometric variations. Constrained by the biological prerequisites, we use in this study a landmark-based warping method with weighted sums of displacement vectors, which is enhanced by an optimization process. Furthermore, we investigate fast automatic procedures for determining landmarks to improve the practicability of 3D warping. This combined approach was tested on 3D autoradiographs of gerbil brains. The autoradiographs were obtained after injecting a non-metabolized radioactive glucose derivative into the gerbil, thereby visualizing neuronal activity in the brain. Afterwards the brain was processed with standard autoradiographical methods. The landmark generator computes corresponding reference points simultaneously within a given number of datasets by Monte Carlo techniques. The warping function is a distance-weighted exponential function with a landmark-specific weighting factor. These weighting factors are optimized by a computational evolution strategy. The warping quality is quantified by several coefficients (correlation coefficient, overlap index, and registration error). The described approach combines a highly suitable procedure to automatically detect landmarks in autoradiographical brain images and an enhanced point-based warping technique, optimizing the local weighting factors. This optimization process significantly improves the similarity between the warped and the target dataset.
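
    As a concrete illustration of the warping function described above, here is a minimal sketch. The parameter name alpha and the weight normalization are assumptions for illustration; the paper additionally optimizes the landmark-specific factors with an evolution strategy, which is omitted here.

```python
import numpy as np

def warp(points, src_lm, dst_lm, alpha):
    """Move each point by a distance-weighted sum of the landmark
    displacement vectors; alpha holds one weighting factor per landmark
    (the quantities the paper optimizes with an evolution strategy)."""
    disp = dst_lm - src_lm                                    # (L, 3) displacements
    d = np.linalg.norm(points[:, None, :] - src_lm[None, :, :], axis=2)
    w = np.exp(-alpha[None, :] * d)                           # (N, L) exponential weights
    w /= w.sum(axis=1, keepdims=True)                         # normalization: a design choice
    return points + w @ disp

# Toy usage: two landmarks pulled apart along x drag nearby points along.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(100, 3))
src = np.array([[-0.5, 0.0, 0.0], [0.5, 0.0, 0.0]])
dst = src + np.array([[-0.1, 0.0, 0.0], [0.1, 0.0, 0.0]])
warped = warp(pts, src, dst, alpha=np.array([2.0, 2.0]))
```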

  5. Estimation and Forecasting in Vector Autoregressive Moving Average Models for Rich Datasets

    DEFF Research Database (Denmark)

    Dias, Gustavo Fruet; Kapetanios, George

    We address the issue of modelling and forecasting macroeconomic variables using rich datasets, by adopting the class of Vector Autoregressive Moving Average (VARMA) models. We overcome the estimation issue that arises with this class of models by implementing an iterative ordinary least squares (...
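
    The abstract is truncated, but iterative ordinary least squares estimation of VARMA models is commonly built on a Hannan-Rissanen-style regression; the sketch below shows that generic idea under stated assumptions and is not the authors' exact estimator.

```python
import numpy as np

def varma_iols(y, p=1, q=1, p_long=6, n_iter=5):
    """Iterative OLS for a VARMA(p, q) in the spirit of Hannan-Rissanen:
    (1) fit a long VAR to get preliminary residuals, (2) regress y_t on
    lagged y and lagged residuals by OLS, (3) recompute residuals, iterate."""
    T, k = y.shape

    def lagmat(z, lags, t0):
        # columns: z_{t-1}, ..., z_{t-lags}, rows aligned with z[t0:]
        return np.hstack([z[t0 - l:len(z) - l] for l in range(1, lags + 1)])

    # Stage 1: long VAR residuals.
    X = lagmat(y, p_long, p_long)
    B = np.linalg.lstsq(X, y[p_long:], rcond=None)[0]
    e = np.zeros_like(y)
    e[p_long:] = y[p_long:] - X @ B

    # Stage 2: OLS on lagged y and lagged residuals, iterated.
    t0 = p_long + q                          # so all residual lags are defined
    for _ in range(n_iter):
        X = np.hstack([lagmat(y, p, t0), lagmat(e, q, t0)])
        B = np.linalg.lstsq(X, y[t0:], rcond=None)[0]
        e[t0:] = y[t0:] - X @ B
    return B, e                              # stacked AR/MA coefficients, residuals
```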

  6. Vector Nonlinear Time-Series Analysis of Gamma-Ray Burst Datasets on Heterogeneous Clusters

    Directory of Open Access Journals (Sweden)

    Ioana Banicescu

    2005-01-01

    The simultaneous analysis of a number of related datasets using a single statistical model is an important problem in statistical computing. A parameterized statistical model is to be fitted on multiple datasets and tested for goodness of fit within a fixed analytical framework. Definitive conclusions are hopefully achieved by analyzing the datasets together. This paper proposes a strategy for the efficient execution of this type of analysis on heterogeneous clusters. Based on partitioning processors into groups for efficient communications and a dynamic loop scheduling approach for load balancing, the strategy addresses the variability of the computational loads of the datasets, as well as the unpredictable irregularities of the cluster environment. Results from preliminary tests of using this strategy to fit gamma-ray burst time profiles with vector functional coefficient autoregressive models on 64 processors of a general purpose Linux cluster demonstrate the effectiveness of the strategy.

  7. Interior point decoding for linear vector channels

    International Nuclear Information System (INIS)

    Wadayama, T

    2008-01-01

    In this paper, a novel decoding algorithm for low-density parity-check (LDPC) codes based on convex optimization is presented. The decoding algorithm, called interior point decoding, is designed for linear vector channels. The linear vector channels include many practically important channels such as inter-symbol interference channels and partial response channels. It is shown that the maximum likelihood decoding (MLD) rule for a linear vector channel can be relaxed to a convex optimization problem, which is called a relaxed MLD problem.

  8. Interior point decoding for linear vector channels

    Energy Technology Data Exchange (ETDEWEB)

    Wadayama, T [Nagoya Institute of Technology, Gokiso, Showa-ku, Nagoya, Aichi, 466-8555 (Japan)], E-mail: wadayama@nitech.ac.jp

    2008-01-15

    In this paper, a novel decoding algorithm for low-density parity-check (LDPC) codes based on convex optimization is presented. The decoding algorithm, called interior point decoding, is designed for linear vector channels. The linear vector channels include many practically important channels such as inter-symbol interference channels and partial response channels. It is shown that the maximum likelihood decoding (MLD) rule for a linear vector channel can be relaxed to a convex optimization problem, which is called a relaxed MLD problem.
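
    To make the relaxation idea concrete, here is a toy sketch that box-relaxes maximum likelihood detection for a linear vector channel into a convex program. It keeps only the convex-relaxation step: the paper's dedicated interior point solver and the LDPC parity constraints are not reproduced, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def relaxed_mld(H, y):
    """Box relaxation of ML detection for y = H s + n with s in {-1, +1}^n:
    relax to the convex problem  min ||y - H s||^2  over s in [-1, 1]^n,
    solve it with a bound-constrained quasi-Newton method, then slice."""
    n = H.shape[1]
    obj = lambda s: np.sum((y - H @ s) ** 2)
    grad = lambda s: -2.0 * H.T @ (y - H @ s)
    res = minimize(obj, np.zeros(n), jac=grad, method="L-BFGS-B",
                   bounds=[(-1.0, 1.0)] * n)
    return np.sign(res.x)

# Toy inter-symbol interference channel: each symbol leaks into the next.
rng = np.random.default_rng(1)
n = 32
H = np.eye(n) + 0.5 * np.eye(n, k=-1)
s = rng.choice([-1.0, 1.0], size=n)
y = H @ s + 0.1 * rng.standard_normal(n)
print((relaxed_mld(H, y) == s).mean())   # fraction of correctly recovered symbols
```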

  9. New fuzzy support vector machine for the class imbalance problem in medical datasets classification.

    Science.gov (United States)

    Gu, Xiaoqing; Ni, Tongguang; Wang, Hongyuan

    2014-01-01

    In medical datasets classification, support vector machine (SVM) is considered to be one of the most successful methods. However, most of the real-world medical datasets usually contain some outliers/noise and data often have class imbalance problems. In this paper, a fuzzy support vector machine (FSVM) for the class imbalance problem (called FSVM-CIP) is presented, which can be seen as a modified class of FSVM by extending manifold regularization and assigning two misclassification costs for two classes. The proposed FSVM-CIP can be used to handle the class imbalance problem in the presence of outliers/noise, and enhance the locality maximum margin. Five real-world medical datasets, breast, heart, hepatitis, BUPA liver, and pima diabetes, from the UCI medical database are employed to illustrate the method presented in this paper. Experimental results on these datasets show the superior or comparable effectiveness of FSVM-CIP.

  10. New Fuzzy Support Vector Machine for the Class Imbalance Problem in Medical Datasets Classification

    Directory of Open Access Journals (Sweden)

    Xiaoqing Gu

    2014-01-01

    In medical datasets classification, support vector machine (SVM) is considered to be one of the most successful methods. However, most of the real-world medical datasets usually contain some outliers/noise and data often have class imbalance problems. In this paper, a fuzzy support vector machine (FSVM) for the class imbalance problem (called FSVM-CIP) is presented, which can be seen as a modified class of FSVM by extending manifold regularization and assigning two misclassification costs for two classes. The proposed FSVM-CIP can be used to handle the class imbalance problem in the presence of outliers/noise, and enhance the locality maximum margin. Five real-world medical datasets, breast, heart, hepatitis, BUPA liver, and pima diabetes, from the UCI medical database are employed to illustrate the method presented in this paper. Experimental results on these datasets show the superior or comparable effectiveness of FSVM-CIP.
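
    A rough approximation of the two ingredients, fuzzy memberships against outliers plus class-dependent misclassification costs, can be expressed through scikit-learn's weighting hooks. This is a stand-in sketch with an assumed membership design, not the paper's FSVM-CIP formulation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import NearestNeighbors

def fuzzy_memberships(X, k=5):
    """Downweight likely outliers: membership shrinks with the average
    distance to the k nearest neighbours (a simple stand-in for the
    paper's membership functions)."""
    d, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    avg = d[:, 1:].mean(axis=1)              # skip column 0 (the point itself)
    return 1.0 / (1.0 + avg / (avg.mean() + 1e-12))

def fit_fsvm_cip_like(X, y):
    # Two misclassification costs via class weights inversely proportional
    # to class size, combined with fuzzy memberships to suppress noise.
    clf = SVC(kernel="rbf", C=10.0, class_weight="balanced")
    clf.fit(X, y, sample_weight=fuzzy_memberships(X))
    return clf
```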

  11. Vector boson excitations near deconfined quantum critical points.

    Science.gov (United States)

    Huh, Yejin; Strack, Philipp; Sachdev, Subir

    2013-10-18

    We show that the Néel states of two-dimensional antiferromagnets have low energy vector boson excitations in the vicinity of deconfined quantum critical points. We compute the universal damping of these excitations arising from spin-wave emission. Detection of such a vector boson will demonstrate the existence of emergent topological gauge excitations in a quantum spin system.

  12. Vector field statistical analysis of kinematic and force trajectories.

    Science.gov (United States)

    Pataky, Todd C; Robinson, Mark A; Vanrenterghem, Jos

    2013-09-27

    When investigating the dynamics of three-dimensional multi-body biomechanical systems it is often difficult to derive spatiotemporally directed predictions regarding experimentally induced effects. A paradigm of 'non-directed' hypothesis testing has emerged in the literature as a result. Non-directed analyses typically consist of ad hoc scalar extraction, an approach which substantially simplifies the original, highly multivariate datasets (many time points, many vector components). This paper describes a commensurately multivariate method as an alternative to scalar extraction. The method, called 'statistical parametric mapping' (SPM), uses random field theory to objectively identify field regions which co-vary significantly with the experimental design. We compared SPM to scalar extraction by re-analyzing three publicly available datasets: 3D knee kinematics, a ten-muscle force system, and 3D ground reaction forces. Scalar extraction was found to bias the analyses of all three datasets by failing to consider sufficient portions of the dataset, and/or by failing to consider covariance amongst vector components. SPM overcame both problems by conducting hypothesis testing at the (massively multivariate) vector trajectory level, with random field corrections simultaneously accounting for temporal correlation and vector covariance. While SPM has been widely demonstrated to be effective for analyzing 3D scalar fields, the current results are the first to demonstrate its effectiveness for 1D vector field analysis. It was concluded that SPM offers a generalized, statistically comprehensive solution to scalar extraction's over-simplification of vector trajectories, thereby making it useful for objectively guiding analyses of complex biomechanical systems. © 2013 Published by Elsevier Ltd. All rights reserved.
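
    For readers without random field theory machinery, a nonparametric permutation max-statistic test over whole vector trajectories achieves comparable family-wise error control; this is a stand-in for SPM's RFT inference, not the authors' code, and all names are illustrative. It tests two groups stored as (subjects, time, components) arrays.

```python
import numpy as np

def trajectory_test(A, B, n_perm=1000, seed=0):
    """Two-sample test over 1D vector trajectories: a Hotelling-like T^2
    at each time node, with a permutation max-statistic correction across
    time standing in for random field theory."""
    rng = np.random.default_rng(seed)

    def t2(a, b):
        d = a.mean(0) - b.mean(0)                          # (time, comp)
        ca = np.einsum('stc,std->tcd', a - a.mean(0), a - a.mean(0))
        cb = np.einsum('stc,std->tcd', b - b.mean(0), b - b.mean(0))
        Si = np.linalg.pinv((ca + cb) / (len(a) + len(b) - 2))
        return np.einsum('tc,tcd,td->t', d, Si, d)         # T^2 per time node

    obs = t2(A, B)
    pooled = np.concatenate([A, B])
    maxima = np.empty(n_perm)
    for i in range(n_perm):                                # relabel subjects
        idx = rng.permutation(len(pooled))
        maxima[i] = t2(pooled[idx[:len(A)]], pooled[idx[len(A):]]).max()
    crit = np.quantile(maxima, 0.95)                       # corrected threshold
    return obs, crit, obs > crit                           # suprathreshold nodes
```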

  13. Feature Selection Method Based on Artificial Bee Colony Algorithm and Support Vector Machines for Medical Datasets Classification

    Directory of Open Access Journals (Sweden)

    Mustafa Serter Uzer

    2013-01-01

    This paper offers a hybrid approach that uses the artificial bee colony (ABC) algorithm for feature selection and support vector machines for classification. The purpose of this paper is to test the effect of eliminating the unimportant and obsolete features of the datasets on the success of the classification, using the SVM classifier. The approach is applied to the diagnosis of liver diseases and diabetes, which are commonly observed diseases that reduce quality of life. For the diagnosis of these diseases, the hepatitis, liver disorders and diabetes datasets from the UCI database were used, and the proposed system reached classification accuracies of 94.92%, 74.81%, and 79.29%, respectively. For these datasets, the classification accuracies were obtained with the help of the 10-fold cross-validation method. The results show that the performance of the method is highly successful compared to other reported results and seems very promising for pattern recognition applications.
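
    A heavily simplified version of the ABC-plus-SVM wrapper can be written in a few lines; only the employed-bee/greedy step is kept (the onlooker and scout phases of real ABC are omitted), so treat this as a sketch of the search loop rather than the paper's full algorithm.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def abc_feature_selection(X, y, n_bees=10, n_iter=20, seed=0):
    """Search over binary feature masks ('food sources'), scored by
    10-fold CV accuracy of an SVM, with greedy bit-flip improvement."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]

    def fitness(mask):
        if not mask.any():
            return 0.0
        return cross_val_score(SVC(), X[:, mask], y, cv=10).mean()

    foods = rng.random((n_bees, n_feat)) < 0.5      # random initial masks
    scores = np.array([fitness(m) for m in foods])
    for _ in range(n_iter):
        for b in range(n_bees):                     # employed-bee phase
            cand = foods[b].copy()
            cand[rng.integers(n_feat)] ^= True      # flip one random feature
            s = fitness(cand)
            if s > scores[b]:                       # greedy selection
                foods[b], scores[b] = cand, s
    best = scores.argmax()
    return foods[best], scores[best]
```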

  14. DEVELOPMENT OF A SUPPORT VECTOR MACHINE (SVM) MODEL WITH AN ENLARGED DATASET FOR FOREX BUSINESS PREDICTION USING THE KERNEL TRICK METHOD

    Directory of Open Access Journals (Sweden)

    adi sucipto

    2017-09-01

    There are many types of investments that can be used to generate income, such as land, houses, gold and other precious metals, as well as financial assets such as stocks, mutual funds, bonds and the money or capital markets. One investment that attracts considerable attention today is capital market investment. The purpose of this study is to predict foreign exchange rates in the forex business, and to improve the accuracy of those predictions, by using the Support Vector Machine model with a larger dataset than previous research, which used 1558 records. This study uses currency exchange rate data obtained from PT. Best Profit Future Cab. Surabaya, consisting of open, high, low and close attributes for the Euro to US Dollar exchange rate at a 1-minute period from May 12, 2016 at 09:51 until May 13, 2016 at 12:30, a total of 1689 records. The Support Vector Machine model with the kernel trick method applied to this dataset yielded a considerable prediction accuracy of 97.86%, indicating that the movement of the Euro to US Dollar exchange rate on May 12-13, 2016 can be predicted precisely.
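
    The modelling step reduces to supervised learning on OHLC bars. Below is a hedged sketch: synthetic random-walk bars stand in for the proprietary feed (so the printed accuracy is near chance), the up/down target is one plausible reading of "prediction", and the RBF kernel supplies the "kernel trick".

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in for the 1-minute EUR/USD feed: (n, 4) OHLC array.
bars = np.cumsum(np.random.default_rng(2).normal(size=(1689, 4)), axis=0) + 100.0

X = bars[:-1]                                   # features: current OHLC bar
y = (bars[1:, 3] > bars[:-1, 3]).astype(int)    # target: next close up/down

split = int(0.8 * len(X))                       # time-ordered split, no shuffling
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X[:split], y[:split])
print("test accuracy:", model.score(X[split:], y[split:]))
```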

  15. An Investigation of the High Efficiency Estimation Approach of the Large-Scale Scattered Point Cloud Normal Vector

    Directory of Open Access Journals (Sweden)

    Xianglin Meng

    2018-03-01

    The normal vector estimation of the large-scale scattered point cloud (LSSPC) plays an important role in point-based shape editing. However, normal vector estimation for LSSPC cannot meet the great challenge of the sharp increase of the point cloud, mainly owing to its low computational efficiency. In this paper, a novel, fast method based on bi-linear interpolation is reported for the normal vector estimation of LSSPC. We divide the point sets into many small cubes to speed up the local point search and construct interpolation nodes on the isosurface expressed by the point cloud. On the premise of calculating the normal vectors of these interpolation nodes, a normal vector bi-linear interpolation of the points in each cube is realized. The proposed approach has the merits of accuracy, simplicity, and high efficiency, because the algorithm only needs to search neighbors and calculate normal vectors for the interpolation nodes, which are usually far fewer than the points of the cloud. The experimental results on several real and simulated point sets show that our method is over three times faster than the Elliptic Gabriel Graph-based method, and the average deviation is less than 0.01 mm.

  16. Georeferencing UAS Derivatives Through Point Cloud Registration with Archived Lidar Datasets

    Science.gov (United States)

    Magtalas, M. S. L. Y.; Aves, J. C. L.; Blanco, A. C.

    2016-10-01

    Georeferencing gathered images is a common step before performing spatial analysis and other processes on datasets acquired using unmanned aerial systems (UAS). Methods of applying spatial information to aerial images or their derivatives are through onboard GPS (Global Positioning Systems) geotagging, or through tying of models through GCPs (Ground Control Points) acquired in the field. Currently, UAS derivatives are limited to meter-levels of accuracy when their generation is unaided by points of known position on the ground. The use of ground control points established using survey-grade GPS or GNSS receivers can greatly reduce model errors to centimeter levels. However, this comes with additional costs not only in instrument acquisition and survey operations, but also in actual time spent in the field. This study uses a workflow for cloud-based post-processing of UAS data in combination with already existing LiDAR data. The georeferencing of the UAV point cloud is executed using the Iterative Closest Point (ICP) algorithm. It is applied through the open-source CloudCompare software (Girardeau-Montaut, 2006) on a 'skeleton point cloud'. This skeleton point cloud consists of manually extracted features consistent on both the LiDAR and UAV data. For this cloud, roads and buildings with minimal deviations given their differing dates of acquisition are considered consistent. Transformation parameters are computed for the skeleton cloud and can then be applied to the whole UAS dataset. In addition, a separate cloud consisting of non-vegetation features automatically derived using the CANUPO classification algorithm (Brodu and Lague, 2012) was used to generate a separate set of parameters. A ground survey was done to validate the transformed cloud. An RMSE value of around 16 centimeters was found when comparing validation data to the models georeferenced using the CANUPO cloud and the manual skeleton cloud. Cloud-to-cloud distance computations of
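
    The core registration step is plain point-to-point ICP. A minimal sketch using the open3d package (an assumption; the paper drives the same algorithm through CloudCompare) on the manually extracted skeleton clouds, assuming a rough initial alignment:

```python
import numpy as np
import open3d as o3d

def register_uav_to_lidar(uav_pts, lidar_pts, voxel=0.5, max_dist=2.0):
    """Align a UAS 'skeleton' cloud (N, 3 array) to archived LiDAR with
    point-to-point ICP; returns the 4x4 transform to apply to the whole
    UAS dataset."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(uav_pts))
    dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(lidar_pts))
    src = src.voxel_down_sample(voxel)          # thin both clouds for speed
    dst = dst.voxel_down_sample(voxel)
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```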

  17. Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud

    Science.gov (United States)

    Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.

    2018-04-01

    To address the lack of applicable analysis methods in applying three-dimensional laser scanning technology to deformation monitoring, an efficient method for extracting datum features and analysing deformation based on the normal vectors of a point cloud is proposed. Firstly, a kd-tree is used to establish the topological relations. Datum points are detected by tracking the point cloud normal vectors, which are determined from the normal vectors of local planes. Then, cubic B-spline curve fitting is performed on the datum points. Finally, the datum elevation and the inclination angle of the radial point are calculated according to the fitted curve, and the deformation information is analyzed. The proposed approach was verified on a real large-scale tank dataset captured with a terrestrial laser scanner in a chemical plant. The results show that the method can obtain the entire information of the monitored object quickly and comprehensively, and accurately reflect the datum feature deformation.
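
    The two computational ingredients, kd-tree-accelerated normal estimation and cubic B-spline fitting of the datum points, can be sketched with SciPy as follows (function names and parameters are illustrative, not the paper's implementation):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.interpolate import splprep, splev

def point_normals(pts, k=12):
    """Normal per point: eigenvector of the smallest eigenvalue of the
    local covariance over the k nearest neighbours (kd-tree accelerated)."""
    _, idx = cKDTree(pts).query(pts, k=k)
    nbrs = pts[idx] - pts[idx].mean(axis=1, keepdims=True)
    cov = np.einsum('nki,nkj->nij', nbrs, nbrs)     # (N, 3, 3) covariances
    _, vecs = np.linalg.eigh(cov)                   # eigenvalues ascending
    return vecs[:, :, 0]

def fit_datum_curve(datum_pts, n_samples=500):
    """Cubic B-spline through the detected datum points; returns a dense
    sampling from which elevation and inclination can be read off."""
    tck, _ = splprep(datum_pts.T, s=len(datum_pts), k=3)
    u = np.linspace(0.0, 1.0, n_samples)
    return np.array(splev(u, tck)).T                # (n_samples, 3)
```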

  18. Topological Vector Space-Valued Cone Metric Spaces and Fixed Point Theorems

    Directory of Open Access Journals (Sweden)

    Radenović Stojan

    2010-01-01

    We develop the theory of topological vector space valued cone metric spaces with nonnormal cones. We prove three general fixed point results in these spaces and deduce as corollaries several extensions of theorems about fixed points and common fixed points, known from the theory of (normed-valued) cone metric spaces. Examples are given to distinguish our results from the known ones.

  19. Measurement of Charmless B to Vector-Vector decays at BaBar

    International Nuclear Information System (INIS)

    Olaiya, Emmanuel

    2011-01-01

    The authors present results of B → vector-vector (VV) and B → vector-axial-vector (VA) decays: $B^0 \to \phi X$ ($X = \phi, \rho^+$ or $\rho^0$), $B^+ \to \phi K^{(*)+}$, $B^0 \to K^*K^*$, $B^0 \to \rho^+ b_1^-$ and $B^+ \to K^{*0} a_1^+$. The largest dataset used for these results is based on $465 \times 10^6$ $\Upsilon(4S) \to B\bar{B}$ decays, collected with the BABAR detector at the PEP-II B meson factory located at the Stanford Linear Accelerator Center (SLAC). Using larger datasets, the BABAR experiment has provided more precise B → VV measurements, further supporting the smaller than expected longitudinal polarization fraction of $B \to \phi K^*$. Additional B meson decays to vector-vector and vector-axial-vector final states have also been studied with a view to shedding light on the polarization anomaly. Taking into account the available errors, we find no disagreement between theory and experiment for these additional decays.

  20. Robust point matching via vector field consensus.

    Science.gov (United States)

    Ma, Jiayi; Zhao, Ji; Tian, Jinwen; Yuille, Alan L.; Tu, Zhuowen

    2014-04-01

    In this paper, we propose an efficient algorithm, called vector field consensus, for establishing robust point correspondences between two sets of points. Our algorithm starts by creating a set of putative correspondences which can contain a very large number of false correspondences, or outliers, in addition to a limited number of true correspondences (inliers). Next, we solve for correspondence by interpolating a vector field between the two point sets, which involves estimating a consensus of inlier points whose matching follows a nonparametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose nonparametric geometrical constraints on the correspondence, as a prior distribution, using Tikhonov regularizers in a reproducing kernel Hilbert space. MAP estimation is performed by the EM algorithm which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We illustrate this method on data sets in 2D and 3D and demonstrate that it is robust to a very large number of outliers (even up to 90%). We also show that in the special case where there is an underlying parametric geometrical model (e.g., the epipolar line constraint), we obtain better results than standard alternatives like RANSAC if a large number of outliers are present. This suggests a two-stage strategy, where we use our nonparametric model to reduce the size of the putative set and then apply a parametric variant of our approach to estimate the geometric parameters. Our algorithm is computationally efficient and we provide code for others to use it. In addition, our approach is general and can be applied to other problems, such as learning with a badly corrupted training data set.
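
    A compact EM loop captures the essence of the method: alternate a weighted Gaussian-kernel ridge interpolation of the displacement field (the Tikhonov-regularized RKHS estimate) with posterior inlier probabilities. This is a simplified sketch with illustrative hyperparameters, not the authors' released code.

```python
import numpy as np

def vector_field_consensus(x, y, beta=0.1, lam=3.0, a=10.0, n_iter=50):
    """Simplified VFC on putative 2D matches x -> y (both (N, 2) arrays);
    returns a boolean inlier mask."""
    d = y - x                                            # displacement samples
    n, dim = d.shape
    K = np.exp(-beta * ((x[:, None] - x[None]) ** 2).sum(-1))  # Gaussian kernel
    p = np.full(n, 0.9)                                  # inlier probabilities
    gamma, sigma2 = 0.9, (d ** 2).sum() / (n * dim)      # mixing weight, noise var
    for _ in range(n_iter):
        # M-step: weighted, regularized interpolation of the field f = K C.
        C = np.linalg.solve(K * p[:, None] + lam * sigma2 * np.eye(n),
                            d * p[:, None])
        f = K @ C
        # E-step: posterior that each match is an inlier (outliers uniform 1/a).
        e2 = ((d - f) ** 2).sum(1)
        lik = gamma * np.exp(-e2 / (2 * sigma2)) / (2 * np.pi * sigma2) ** (dim / 2)
        p = lik / (lik + (1 - gamma) / a)
        sigma2 = (p * e2).sum() / (dim * p.sum() + 1e-12)
        gamma = p.mean()
    return p > 0.5
```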

  21. Reducing and filtering point clouds with enhanced vector quantization.

    Science.gov (United States)

    Ferrari, Stefano; Ferrigno, Giancarlo; Piuri, Vincenzo; Borghese, N Alberto

    2007-01-01

    Modern scanners are able to deliver huge quantities of three-dimensional (3-D) data points sampled on an object's surface, in a short time. These data have to be filtered and their cardinality reduced to come up with a mesh manageable at interactive rates. We introduce here a novel procedure to accomplish these two tasks, which is based on an optimized version of soft vector quantization (VQ). The resulting technique has been termed enhanced vector quantization (EVQ) since it introduces several improvements with respect to the classical soft VQ approaches. These are based on computationally expensive iterative optimization; local computation is introduced here, by means of an adequate partitioning of the data space called hyperbox (HB), to reduce the computational time so as to be linear in the number of data points N, saving more than 80% of time in real applications. Moreover, the algorithm can be fully parallelized, thus leading to an implementation that is sublinear in N. The voxel side and the other parameters are automatically determined from data distribution on the basis of the Zador's criterion. This makes the algorithm completely automatic. Because the only parameter to be specified is the compression rate, the procedure is suitable even for nontrained users. Results obtained in reconstructing faces of both humans and puppets as well as artifacts from point clouds publicly available on the web are reported and discussed, in comparison with other methods available in the literature. EVQ has been conceived as a general procedure, suited for VQ applications with large data sets whose data space has relatively low dimensionality.
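
    The end-to-end effect of EVQ, a codebook of centroids replacing the raw cloud, can be imitated with ordinary k-means; the hyperbox partitioning and Zador-based parameter selection that make EVQ fast and automatic are omitted in this sketch, so treat it as a baseline, not the paper's method.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def reduce_point_cloud(pts, rate=0.05, seed=0):
    """Reduce and filter a scanned cloud (N, 3 array) by vector
    quantization: the codebook (cluster centroids) becomes the reduced,
    noise-averaged point set.  rate plays the role of EVQ's single user
    parameter, the compression rate."""
    n_codes = max(1, int(rate * len(pts)))
    vq = MiniBatchKMeans(n_clusters=n_codes, random_state=seed).fit(pts)
    return vq.cluster_centers_
```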

  22. A Hybrid Neuro-Fuzzy Model For Integrating Large Earth-Science Datasets

    Science.gov (United States)

    Porwal, A.; Carranza, J.; Hale, M.

    2004-12-01

    A GIS-based hybrid neuro-fuzzy approach to integration of large earth-science datasets for mineral prospectivity mapping is described. It implements a Takagi-Sugeno type fuzzy inference system in the framework of a four-layered feed-forward adaptive neural network. Each unique combination of the datasets is considered a feature vector whose components are derived by knowledge-based ordinal encoding of the constituent datasets. A subset of feature vectors with a known output target vector (i.e., unique conditions known to be associated with either a mineralized or a barren location) is used for the training of an adaptive neuro-fuzzy inference system. Training involves iterative adjustment of parameters of the adaptive neuro-fuzzy inference system using a hybrid learning procedure for mapping each training vector to its output target vector with minimum sum of squared error. The trained adaptive neuro-fuzzy inference system is used to process all feature vectors. The output for each feature vector is a value that indicates the extent to which a feature vector belongs to the mineralized class or the barren class. These values are used to generate a prospectivity map. The procedure is demonstrated by an application to regional-scale base metal prospectivity mapping in a study area located in the Aravalli metallogenic province (western India). A comparison of the hybrid neuro-fuzzy approach with pure knowledge-driven fuzzy and pure data-driven neural network approaches indicates that the former offers a superior method for integrating large earth-science datasets for predictive spatial mathematical modelling.

  23. Space vector modulation strategy for neutral-point voltage balancing in three-level inverter systems

    DEFF Research Database (Denmark)

    Choi, Uimin; Lee, Kyo Beum

    2013-01-01

    This study proposes a space vector modulation (SVM) strategy to balance the neutral-point voltage of three-level inverter systems. The proposed method is implemented by combining conventional symmetric SVM with nearest three-vector (NTV) modulation. The conventional SVM is converted to NTV modulation by properly adding or subtracting a minimum gate-on time. In addition, using this method, the switching frequency is reduced and a decrease in switching loss is obtained. The neutral-point voltage is balanced by the proposed SVM strategy without additional hardware or complex calculations. Simulation and experimental results are shown to verify the validity and feasibility of the proposed SVM strategy.

  24. Interactive visualization and analysis of multimodal datasets for surgical applications.

    Science.gov (United States)

    Kirmizibayrak, Can; Yim, Yeny; Wakid, Mike; Hahn, James

    2012-12-01

    Surgeons use information from multiple sources when making surgical decisions. These include volumetric datasets (such as CT, PET, MRI, and their variants), 2D datasets (such as endoscopic videos), and vector-valued datasets (such as computer simulations). Presenting all the information to the user in an effective manner is a challenging problem. In this paper, we present a visualization approach that displays the information from various sources in a single coherent view. The system allows the user to explore and manipulate volumetric datasets, display analysis of dataset values in local regions, combine 2D and 3D imaging modalities and display results of vector-based computer simulations. Several interaction methods are discussed: in addition to traditional interfaces including mouse and trackers, gesture-based natural interaction methods are shown to control these visualizations with real-time performance. An example of a medical application (medialization laryngoplasty) is presented to demonstrate how the combination of different modalities can be used in a surgical setting with our approach.

  25. An Analysis on Better Testing than Training Performances on the Iris Dataset

    NARCIS (Netherlands)

    Schutten, Marten; Wiering, Marco

    2016-01-01

    The Iris dataset is a well known dataset containing information on three different types of Iris flowers. A typical and popular method for solving classification problems on datasets such as the Iris set is the support vector machine (SVM). In order to do so the dataset is separated in a set used

  26. Selecting the Optimal Combination Model of FSSVM for the Imbalance Datasets

    Directory of Open Access Journals (Sweden)

    Chuandong Qin

    2014-01-01

    Imbalanced data learning is one of the most active and important fields in machine learning research. While the existing class imbalance learning methods can make Support Vector Machines (SVMs) less sensitive to class imbalance, they still suffer from the disturbance of outliers and noise present in the datasets. A kind of Fuzzy Smooth Support Vector Machine (FSSVM) is proposed based on the Smooth Support Vector Machine (SSVM) of O. L. Mangasarian. The SSVM can be computed easily by the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm or the Newton-Armijo algorithm. Two kinds of fuzzy memberships and three smooth functions can be chosen in the algorithms. The fuzzy memberships consider the contribution rate of each sample to the optimal separating hyperplane. The polynomial smooth functions can make the optimization problem more accurate at the inflection point. These changes have positive effects in the trials. The results of the experiments show that the FSSVMs can attain better accuracy in less time than the SSVMs and some of the other methods.
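
    The smooth-SVM core is easy to state: replace the hinge's plus function with a smooth approximation, p(x, a) = x + log(1 + exp(-a x)) / a, and hand the now-differentiable objective to BFGS. A minimal sketch assuming that smoothing and optional per-sample fuzzy memberships mu (default 1 recovers plain SSVM):

```python
import numpy as np
from scipy.optimize import minimize

def smooth_plus(x, a=5.0):
    # Numerically stable form of x + log(1 + exp(-a x)) / a ~= max(x, 0).
    return x + np.logaddexp(0.0, -a * x) / a

def fit_fssvm(X, y, C=1.0, mu=None, a=5.0):
    """SSVM-style objective solved by BFGS; y in {-1, +1}, mu are fuzzy
    memberships weighting each sample's squared smoothed slack."""
    n, d = X.shape
    mu = np.ones(n) if mu is None else mu

    def objective(wb):
        w, b = wb[:d], wb[d]
        slack = smooth_plus(1.0 - y * (X @ w + b), a)
        return 0.5 * (w @ w + b * b) + 0.5 * C * np.sum(mu * slack ** 2)

    res = minimize(objective, np.zeros(d + 1), method="BFGS")
    return res.x[:d], res.x[d]           # weight vector and bias
```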

  27. Generation of Ground Truth Datasets for the Analysis of 3d Point Clouds in Urban Scenes Acquired via Different Sensors

    Science.gov (United States)

    Xu, Y.; Sun, Z.; Boerner, R.; Koch, T.; Hoegner, L.; Stilla, U.

    2018-04-01

    In this work, we report a novel way of generating a ground truth dataset for analyzing point clouds from different sensors and validating algorithms. Instead of directly labeling a large amount of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid for the testing site is generated. Then, with the help of a set of basic labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, by which all the points are organized in 3D grids of multiple resolutions. When automatically annotating new testing point clouds, a voting-based approach is applied to the labeled points within the multi-resolution voxels, in order to assign a semantic label to the 3D space represented by each voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained via various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds of different sensors of the same scene directly by considering the corresponding labels of the 3D space in which the points are located, which is convenient for the validation and evaluation of algorithms related to point cloud interpretation and semantic segmentation.
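
    The voxel voting idea is straightforward to sketch at a single resolution (the paper uses an octree over several resolutions); the voxel size and all names below are illustrative.

```python
import numpy as np
from collections import Counter

def transfer_labels(ref_pts, ref_labels, new_pts, voxel=0.5):
    """Annotate a new cloud from a labeled reference via voxel voting:
    each reference point votes its label into the voxel it falls in, and
    new points inherit the majority label of their voxel."""
    def keys(pts):
        return [tuple(k) for k in np.floor(pts / voxel).astype(int)]

    votes = {}
    for k, lab in zip(keys(ref_pts), ref_labels):
        votes.setdefault(k, Counter())[lab] += 1

    out = np.full(len(new_pts), -1)              # -1 = voxel never labeled
    for i, k in enumerate(keys(new_pts)):
        if k in votes:
            out[i] = votes[k].most_common(1)[0][0]
    return out
```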

  28. GAP Land Cover - Vector

    Data.gov (United States)

    Minnesota Department of Natural Resources — This vector dataset is a detailed (1-acre minimum), hierarchically organized vegetation cover map produced by computer classification of combined two-season pairs of...

  29. Feature Vector Construction Method for IRIS Recognition

    Science.gov (United States)

    Odinokikh, G.; Fartukov, A.; Korobkin, M.; Yoo, J.

    2017-05-01

    One of the basic stages of the iris recognition pipeline is the iris feature vector construction procedure. The procedure represents the extraction of iris texture information relevant to its subsequent comparison. Thorough investigation of the feature vectors obtained from the iris showed that not all the vector elements are equally relevant. There are two characteristics which determine the utility of a vector element: fragility and discriminability. Conventional iris feature extraction methods consider the concept of fragility as feature vector instability without respect to the nature of such instability. This work separates the sources of instability into natural and encoding-induced, which makes it possible to investigate each source of instability independently. Based on this separation, a novel approach to iris feature vector construction is proposed. The approach consists of two steps: iris feature extraction using Gabor filtering with optimal parameters, and quantization with separate, preliminarily optimized fragility thresholds. The proposed method has been tested on two different datasets of iris images captured under changing environmental conditions. The testing results show that the proposed method surpasses all the prior-art methods in recognition accuracy on both datasets.

  30. An Annotated Dataset of 14 Meat Images

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2002-01-01

    This note describes a dataset consisting of 14 annotated images of meat. Points of correspondence are placed on each image. As such, the dataset can be readily used for building statistical models of shape. Further, format specifications and terms of use are given.

  31. Tagged Vector Contour (TVC)

    Data.gov (United States)

    Kansas Data Access and Support Center — The Kansas Tagged Vector Contour (TVC) dataset consists of digitized contours from the 7.5 minute topographic quadrangle maps. Coverage for the state is incomplete....

  32. Nonlinear pre-coding apparatus of multi-antenna system, has pre-coding unit that extends original constellation points of modulated symbols to several constellation points by using limited perturbation vector

    DEFF Research Database (Denmark)

    2008-01-01

    Coding/modulating units (200-1 to 200-N) output modulated symbols by modulating coded bit streams based on a certain modulation scheme. The limited perturbation vector is calculated by using the distribution of perturbation vectors. The original constellation points of the modulated symbols are extended to several constellation points by using the limited perturbation vector.

  33. Instantaneous local wave vector estimation from multi-spacecraft measurements using few spatial points

    Directory of Open Access Journals (Sweden)

    T. D. Carozzi

    2004-07-01

    We introduce a technique to determine instantaneous local properties of waves based on discrete-time sampled, real-valued measurements from 4 or more spatial points. The technique is a generalisation to the spatial domain of the notion of instantaneous frequency used in signal processing. The quantities derived by our technique are closely related to those used in geometrical optics, namely the local wave vector and instantaneous phase velocity. Thus, this experimental technique complements ray-tracing. We provide example applications of the technique to electric field and potential data from the EFW instrument on Cluster. Cluster is the first space mission for which direct determination of the full 3-dimensional local wave vector is possible, as described here.
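
    The spatial analogue of instantaneous frequency can be computed directly: take each sensor's instantaneous phase from the analytic signal, then least-squares solve the inter-sensor phase differences for the wave vector at every time sample. A sketch under a single-plane-wave assumption, with sensor separations well inside a wavelength (otherwise the phase differences alias); all names are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def local_wave_vector(signals, positions):
    """signals: (n_sensors, n_t) real samples; positions: (n_sensors, 3).
    Solves  phi_j - phi_0 = -k . (r_j - r_0)  per time sample."""
    phase = np.unwrap(np.angle(hilbert(signals, axis=1)), axis=1)
    dphi = phase[1:] - phase[0]                  # (n_sensors - 1, n_t)
    dr = positions[1:] - positions[0]            # (n_sensors - 1, 3) baselines
    # Minus sign: for a wave cos(k.r - w t) the analytic-signal phase
    # grows like w t - k.r, so phase differences carry -k . dr.
    k, *_ = np.linalg.lstsq(dr, -dphi, rcond=None)
    return k.T                                   # (n_t, 3) wave vector history

# Synthetic plane-wave check: k_true = (2, 0, 0), f = 1 Hz, 4 sensors.
t = np.linspace(0.0, 10.0, 2000)
r = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0], [0, 0, 0.1]], dtype=float)
k_true = np.array([2.0, 0.0, 0.0])
sig = np.cos(r @ k_true[:, None] - 2 * np.pi * 1.0 * t[None, :])
print(local_wave_vector(sig, r)[1000])           # ~ [2, 0, 0] mid-record
```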

  34. Extensions of vector-valued Baire one functions with preservation of points of continuity

    Czech Academy of Sciences Publication Activity Database

    Koc, M.; Kolář, Jan

    2016-01-01

    Vol. 442, No. 1 (2016), pp. 138-148 ISSN 0022-247X R&D Projects: GA ČR(CZ) GA14-07880S Institutional support: RVO:67985840 Keywords: vector-valued Baire one functions * extensions * non-tangential limit * continuity points Subject RIV: BA - General Mathematics Impact factor: 1.064, year: 2016 http://www.sciencedirect.com/science/article/pii/S0022247X1630097X

  35. A New Dataset Size Reduction Approach for PCA-Based Classification in OCR Application

    Directory of Open Access Journals (Sweden)

    Mohammad Amin Shayegan

    2014-01-01

    A major problem of pattern recognition systems stems from the large volume of training datasets, which include duplicate and similar training samples. In order to overcome this problem, some dataset size reduction and dimensionality reduction techniques have been introduced. The algorithms presently used for dataset size reduction usually remove samples near the centers of classes or support vector samples between different classes. However, the samples near a class center carry valuable information about the class characteristics, and the support vectors are important for evaluating system efficiency. This paper reports on the use of the Modified Frequency Diagram technique for dataset size reduction. In this new technique, a training dataset is rearranged and then sieved. The sieved training dataset, along with automatic feature extraction/selection using Principal Component Analysis, is used in an OCR application. The experimental results obtained when using the proposed system on one of the biggest handwritten Farsi/Arabic numeral standard OCR datasets, Hoda, show about 97% recognition accuracy. The recognition speed increased by 2.28 times, while the accuracy decreased by only 0.7%, when a sieved version of the dataset, only half the size of the initial training dataset, was used.
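
    A crude stand-in for the sieving step, dropping near-duplicate samples per class, followed by PCA feature extraction, can look like this. The eps threshold and all names are illustrative; the real Modified Frequency Diagram method first rearranges the dataset, which is omitted here.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

def sieve(X, y, eps=1.0):
    """Greedily drop samples lying within distance eps of an already-kept
    sample of the same class; near-duplicates carry little information.
    (Quadratic tree rebuilds -- fine for a sketch, not for production.)"""
    kept = []
    for c in np.unique(y):
        chosen = []
        for i in np.flatnonzero(y == c):
            if not chosen or cKDTree(X[chosen]).query(X[i])[0] > eps:
                chosen.append(i)
        kept.extend(chosen)
    return np.array(kept)

# Usage: PCA for automatic feature extraction, trained on the sieved set.
# keep_idx = sieve(X_train, y_train, eps=1.0)
# model = make_pipeline(PCA(n_components=50), SVC())
# model.fit(X_train[keep_idx], y_train[keep_idx])
```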

  36. Vectorization of a classical trajectory code on a Floating Point Systems, Inc. Model 164 attached processor.

    Science.gov (United States)

    Kraus, Wayne A; Wagner, Albert F

    1986-04-01

    A triatomic classical trajectory code has been modified by extensive vectorization of the algorithms to achieve much improved performance on an FPS 164 attached processor. Extensive timings on both the FPS 164 and a VAX 11/780 with floating point accelerator are presented as a function of the number of trajectories simultaneously run. The timing tests involve a potential energy surface of the LEPS variety and trajectories with 1000 time steps. The results indicate that vectorization results in timing improvements on both the VAX and the FPS. For larger numbers of trajectories run simultaneously, up to a factor of 25 improvement in speed occurs between VAX and FPS vectorized code. Copyright © 1986 John Wiley & Sons, Inc.

  37. Robustness-Based Simplification of 2D Steady and Unsteady Vector Fields

    KAUST Repository

    Skraba, Primoz

    2015-08-01

    © 2015 IEEE. Vector field simplification aims to reduce the complexity of the flow by removing features in order of their relevance and importance, to reveal prominent behavior and obtain a compact representation for interpretation. Most existing simplification techniques based on the topological skeleton successively remove pairs of critical points connected by separatrices, using distance or area-based relevance measures. These methods rely on the stable extraction of the topological skeleton, which can be difficult due to instability in numerical integration, especially when processing highly rotational flows. In this paper, we propose a novel simplification scheme derived from the recently introduced topological notion of robustness which enables the pruning of sets of critical points according to a quantitative measure of their stability, that is, the minimum amount of vector field perturbation required to remove them. This leads to a hierarchical simplification scheme that encodes flow magnitude in its perturbation metric. Our novel simplification algorithm is based on degree theory and has minimal boundary restrictions. Finally, we provide an implementation under the piecewise-linear setting and apply it to both synthetic and real-world datasets. We show local and complete hierarchical simplifications for steady as well as unsteady vector fields.

  38. Robustness-Based Simplification of 2D Steady and Unsteady Vector Fields.

    Science.gov (United States)

    Skraba, Primoz; Bei Wang; Guoning Chen; Rosen, Paul

    2015-08-01

    Vector field simplification aims to reduce the complexity of the flow by removing features in order of their relevance and importance, to reveal prominent behavior and obtain a compact representation for interpretation. Most existing simplification techniques based on the topological skeleton successively remove pairs of critical points connected by separatrices, using distance or area-based relevance measures. These methods rely on the stable extraction of the topological skeleton, which can be difficult due to instability in numerical integration, especially when processing highly rotational flows. In this paper, we propose a novel simplification scheme derived from the recently introduced topological notion of robustness which enables the pruning of sets of critical points according to a quantitative measure of their stability, that is, the minimum amount of vector field perturbation required to remove them. This leads to a hierarchical simplification scheme that encodes flow magnitude in its perturbation metric. Our novel simplification algorithm is based on degree theory and has minimal boundary restrictions. Finally, we provide an implementation under the piecewise-linear setting and apply it to both synthetic and real-world datasets. We show local and complete hierarchical simplifications for steady as well as unsteady vector fields.

  39. Robustness-Based Simplification of 2D Steady and Unsteady Vector Fields

    KAUST Repository

    Skraba, Primoz; Wang, Bei; Chen, Guoning; Rosen, Paul

    2015-01-01

    © 2015 IEEE. Vector field simplification aims to reduce the complexity of the flow by removing features in order of their relevance and importance, to reveal prominent behavior and obtain a compact representation for interpretation. Most existing simplification techniques based on the topological skeleton successively remove pairs of critical points connected by separatrices, using distance or area-based relevance measures. These methods rely on the stable extraction of the topological skeleton, which can be difficult due to instability in numerical integration, especially when processing highly rotational flows. In this paper, we propose a novel simplification scheme derived from the recently introduced topological notion of robustness which enables the pruning of sets of critical points according to a quantitative measure of their stability, that is, the minimum amount of vector field perturbation required to remove them. This leads to a hierarchical simplification scheme that encodes flow magnitude in its perturbation metric. Our novel simplification algorithm is based on degree theory and has minimal boundary restrictions. Finally, we provide an implementation under the piecewise-linear setting and apply it to both synthetic and real-world datasets. We show local and complete hierarchical simplifications for steady as well as unsteady vector fields.

  40. An Improved TA-SVM Method Without Matrix Inversion and Its Fast Implementation for Nonstationary Datasets.

    Science.gov (United States)

    Shi, Yingzhong; Chung, Fu-Lai; Wang, Shitong

    2015-09-01

    Recently, a time-adaptive support vector machine (TA-SVM) was proposed for handling nonstationary datasets. While attractive performance has been reported, and the new classifier is distinctive in simultaneously solving several SVM subclassifiers locally and globally by using an elegant SVM formulation in an alternative kernel space, the coupling of subclassifiers brings in the computation of a matrix inversion, resulting in a high computational burden in large nonstationary dataset applications. To overcome this shortcoming, an improved TA-SVM (ITA-SVM) is proposed using a common vector shared by all the SVM subclassifiers involved. ITA-SVM not only keeps an SVM formulation, but also avoids the computation of matrix inversion. Thus, we can realize its fast version, the improved time-adaptive core vector machine (ITA-CVM), for large nonstationary datasets by using the core vector machine (CVM) technique. ITA-CVM has the merit of asymptotic linear time complexity for large nonstationary datasets and also inherits the advantage of TA-SVM. The effectiveness of the proposed classifiers ITA-SVM and ITA-CVM is also experimentally confirmed.

  41. Geometrical Modification of Learning Vector Quantization Method for Solving Classification Problems

    Directory of Open Access Journals (Sweden)

    Korhan GÜNEL

    2016-09-01

    In this paper, a geometrical scheme is presented to show how to overcome a problem arising from the use of the generalized delta learning rule within a competitive learning model. A theoretical methodology is introduced for describing the quantization of data via rotating prototype vectors on hyper-spheres. The proposed learning algorithm is tested and verified on different multidimensional datasets, including a binary-class dataset and two multiclass datasets from the UCI repository, as well as a multiclass dataset constructed by us. The proposed method is compared with some baseline learning vector quantization variants from the literature on all domains. A large number of experiments verify the performance of our proposed algorithm, with acceptable accuracy and macro F1 scores.
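
    The geometric idea, updating prototypes by rotation on the unit hypersphere rather than by the plain additive delta rule, can be sketched as an LVQ1-style loop. This is a minimal interpretation of the abstract with illustrative parameters, not the authors' exact scheme.

```python
import numpy as np

def train_spherical_lvq(X, y, n_epochs=20, eta=0.05, seed=0):
    """Prototypes and samples are normalized onto the unit hypersphere;
    the winning prototype (max cosine similarity) is rotated toward a
    same-class sample or away from a different-class sample by taking a
    step and re-normalizing back onto the sphere."""
    rng = np.random.default_rng(seed)
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    classes = np.unique(y)
    proto = np.array([Xn[y == c][0] for c in classes])  # one prototype per class

    for _ in range(n_epochs):
        for i in rng.permutation(len(Xn)):
            win = np.argmax(proto @ Xn[i])              # winner by cosine similarity
            sign = 1.0 if classes[win] == y[i] else -1.0
            p = proto[win] + sign * eta * Xn[i]         # step toward / away ...
            proto[win] = p / np.linalg.norm(p)          # ... then back onto the sphere
    return proto, classes

def predict(proto, classes, X):
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return classes[np.argmax(Xn @ proto.T, axis=1)]
```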

  42. Extraction of Urban Trees from Integrated Airborne Based Digital Image and LIDAR Point Cloud Datasets - Initial Results

    Science.gov (United States)

    Dogon-yaro, M. A.; Kumar, P.; Rahman, A. Abdul; Buyuksalih, G.

    2016-10-01

    Timely and accurate acquisition of information on the condition and structural changes of urban trees serves as a tool for decision makers to better appreciate urban ecosystems and their numerous values, which are critical to building strategies for sustainable development. The conventional techniques used for extracting tree features, ground surveying and interpretation of aerial photography, are associated with constraints such as labour-intensive field work, high financial cost, and the influence of weather conditions and topographic cover, which can be overcome by means of integrated airborne LiDAR and very high resolution digital image datasets. This study presents a semi-automated approach for extracting urban trees from integrated airborne LiDAR and multispectral digital image datasets over the city of Istanbul, Turkey. The scheme includes detection and extraction of shadow-free vegetation features based on the spectral properties of the digital images, using shadow index and NDVI techniques, and automated extraction of 3D information about vegetation features from the integrated processing of the shadow-free vegetation image and the LiDAR point cloud datasets. The developed algorithms show promising results as an automated and cost-effective approach to estimating and delineating 3D information about urban trees. The research also proved that integrated datasets are a suitable technology and a viable source of information for city managers to use in urban tree management.

  43. Using the Gravity Model to Estimate the Spatial Spread of Vector-Borne Diseases

    Directory of Open Access Journals (Sweden)

    Jean-Marie Aerts

    2012-11-01

    Gravity models are commonly used spatial interaction models. They have been widely applied in a large set of domains dealing with interactions amongst spatial entities. The spread of vector-borne diseases is also related to the intensity of interaction between spatial entities, namely, the physical habitat of pathogens' vectors and/or hosts, and urban areas, thus humans. This study implements the concept behind gravity models for the spatial spread of two vector-borne diseases, nephropathia epidemica and Lyme borreliosis, based on current knowledge of the transmission mechanisms of these diseases. Two sources of information on vegetated systems were tested: the CORINE land cover map and MODIS NDVI. The size of vegetated areas near urban centers and a local indicator of occupation-related exposure were found to be significant predictors of disease risk. Both the land cover map and the space-borne dataset were suitable, yet not equivalent, input sources for locating and measuring the vegetated areas of importance for disease spread. The overall results point to the compatibility of the gravity model concept with the spatial spread of vector-borne diseases.
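
    The underlying gravity form is the classic mass product with distance decay. A tiny numeric sketch with made-up towns and habitat patches (the exponent and all values are illustrative, not the paper's fitted model):

```python
import numpy as np

def gravity_risk(urban_pop, veg_area, dist_km, beta=2.0):
    """Gravity-model interaction between urban centres and vegetated
    patches: risk_i ~ sum_j pop_i * area_j / d_ij^beta."""
    dist = np.maximum(dist_km, 0.1)            # guard against divide-by-zero
    return (urban_pop[:, None] * veg_area[None, :] / dist ** beta).sum(axis=1)

# Three towns, two forest patches.
pop = np.array([50_000.0, 8_000.0, 120_000.0])
area = np.array([12.0, 4.5])                   # km^2 of suitable habitat
d = np.array([[2.0, 15.0],
              [1.0, 3.0],
              [20.0, 25.0]])                   # km, town x patch distances
print(gravity_risk(pop, area, d))              # relative exposure index per town
```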

  44. Twisted Vector Bundles on Pointed Nodal Curves

    Indian Academy of Sciences (India)

    Motivated by the quest for a good compactification of the moduli space of vector bundles on a nodal curve, we establish a striking relationship between Abramovich's and Vistoli's twisted bundles and Gieseker vector bundles.

  45. Accelerating simulation for the multiple-point statistics algorithm using vector quantization

    Science.gov (United States)

    Zuo, Chen; Pan, Zhibin; Liang, Hao

    2018-03-01

    Multiple-point statistics (MPS) is a prominent algorithm to simulate categorical variables based on a sequential simulation procedure. Assuming training images (TIs) as prior conceptual models, MPS extracts patterns from TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up simulation and improve simulation quality over state-of-the-art MPS methods, we propose an accelerated simulation method for MPS using vector quantization (VQ), called VQ-MPS. First, a variable representation is presented to make categorical variables applicable for vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproductions, and spatial uncertainty. Further demonstrations consist of a 2D four facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that our proposed method is also capable of solving multifacies, nonstationarity, and 3D simulations based on 2D TIs.

  46. Local Patch Vectors Encoded by Fisher Vectors for Image Classification

    Directory of Open Access Journals (Sweden)

    Shuangshuang Chen

    2018-02-01

    The objective of this work is image classification, whose purpose is to group images into corresponding semantic categories. Four contributions are made as follows: (i) for computational simplicity and efficiency, we directly adopt raw image patch vectors as local descriptors, subsequently encoded by Fisher vectors (FV); (ii) for obtaining representative local features within the FV encoding framework, we compare and analyze three typical sampling strategies: random sampling, saliency-based sampling and dense sampling; (iii) in order to embed both global and local spatial information into local features, we construct an improved spatial geometry structure which shows good performance; (iv) for reducing the storage and CPU costs of high dimensional vectors, we adopt a new feature selection method based on supervised mutual information (MI), which chooses features by an importance sorting algorithm. We report experimental results on the STL-10 dataset. This simple and efficient framework shows very promising performance compared to conventional methods.
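
    Contribution (i) is easy to make concrete: fit a GMM vocabulary on raw patch vectors pooled across training images, then encode each image by the first-order Fisher vector gradients. Second-order terms and the usual power/L2 normalization are left out of this sketch, and the patch size is an illustrative assumption.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(patches, gmm):
    """First-order Fisher vector of a set of patch vectors w.r.t. the
    GMM means: soft-assignment-weighted, whitened mean deviations."""
    g = gmm.predict_proba(patches)                       # (N, K) soft assignments
    diff = (patches[:, None, :] - gmm.means_[None]) / np.sqrt(gmm.covariances_)[None]
    fv = (g[:, :, None] * diff).sum(axis=0)              # (K, D) gradients
    fv /= len(patches) * np.sqrt(gmm.weights_)[:, None]  # standard FV scaling
    return fv.ravel()

# Vocabulary on pooled training patches, then per-image encoding.
rng = np.random.default_rng(3)
all_patches = rng.normal(size=(5000, 27))                # e.g. flattened 3x3x3 raw patches
gmm = GaussianMixture(n_components=16, covariance_type="diag").fit(all_patches)
image_code = fisher_vector(rng.normal(size=(300, 27)), gmm)   # length 16 * 27
```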

  47. Automatic building extraction from LiDAR data fusion of point and grid-based features

    Science.gov (United States)

    Du, Shouji; Zhang, Yunsheng; Zou, Zhengrong; Xu, Shenghua; He, Xue; Chen, Siyang

    2017-08-01

    This paper proposes a method for extracting buildings from LiDAR point cloud data by combining point-based and grid-based features. To accurately discriminate buildings from vegetation, a point feature based on the variance of normal vectors is proposed. For robust building extraction, a graph cuts algorithm is employed to combine the used features and consider the neighbourhood contextual information. As grid feature computation and the graph cuts algorithm are performed on a grid structure, a feature-retained DSM interpolation method is also proposed in this paper. The proposed method is validated on the benchmark ISPRS Test Project on Urban Classification and 3D Building Reconstruction and compared to state-of-the-art methods. The evaluation shows that the proposed method obtains promising results at both the area level and the object level. The method is further applied to the entire ISPRS dataset and to a real dataset of Wuhan City. The results show a completeness of 94.9% and a correctness of 92.2% at the per-area level for the former dataset, and a completeness of 94.4% and a correctness of 95.8% for the latter. The proposed method shows good potential for large LiDAR datasets.
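
    The discriminative point feature, variance of local normal vectors, can be sketched as below; the resulting per-point score would then feed the graph cuts data term. The neighbourhood size k and all names are illustrative, not the paper's exact formulation.

```python
import numpy as np
from scipy.spatial import cKDTree

def normal_variance(pts, k=16):
    """Per-point roughness in the spirit of the paper: roofs and walls
    have locally consistent normals (score near 0), vegetation has
    scattered normals (score near 1)."""
    _, idx = cKDTree(pts).query(pts, k=k)
    ctr = pts[idx] - pts[idx].mean(axis=1, keepdims=True)
    cov = np.einsum('nki,nkj->nij', ctr, ctr)       # (N, 3, 3) local covariances
    normals = np.linalg.eigh(cov)[1][:, :, 0]       # smallest-eigenvalue axis
    normals[normals[:, 2] < 0] *= -1                # orient consistently upward
    mean_n = normals[idx].mean(axis=1)              # average over the neighbourhood
    return 1.0 - np.linalg.norm(mean_n, axis=1)     # 1 - |mean unit normal|
```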

  8. Automatic Registration of Vehicle-borne Mobile Mapping Laser Point Cloud and Sequent Panoramas

    Directory of Open Access Journals (Sweden)

    CHEN Chi

    2018-02-01

    Full Text Available An automatic registration method for mobile mapping system laser point clouds and sequences of panoramic images is proposed in this paper. Firstly, a hierarchical object extraction method is applied to the LiDAR data to extract building façades, and outline polygons are generated to construct skyline vectors. A virtual imaging method is proposed to remove the distortion of the panoramas, and corners on the skylines are then detected on the virtual images by combining segmentation and corner detection results. Secondly, the detected skyline vectors are taken as the registration primitives. Registration graphs are built from the extracted skyline vectors and matched under a graph edit distance minimization criterion. The matched conjugate primitives are used to solve a 2D-3D rough registration model and obtain the initial transformation between the sequence panoramic image coordinate system and the LiDAR point cloud coordinate system. Finally, to reduce the impact of primitive extraction and matching errors on the registration results, the optimal transformation between the LiDAR point cloud and the dense point cloud generated by multi-view stereo matching from the virtual images of the sequent panoramas is solved by a variant of the 3D-3D ICP registration algorithm, thus refining the exterior orientation parameters of the panoramas indirectly. Experiments are undertaken to validate the proposed method, and the results show that 1.5-pixel-level registration is achieved on the experimental dataset. The registration results can be applied to point cloud and panorama fusion applications such as true-color point cloud generation.

  9. 2D Vector Field Simplification Based on Robustness

    KAUST Repository

    Skraba, Primoz

    2014-03-01

    Vector field simplification aims to reduce the complexity of the flow by removing features in order of their relevance and importance, to reveal prominent behavior and obtain a compact representation for interpretation. Most existing simplification techniques based on the topological skeleton successively remove pairs of critical points connected by separatrices, using distance or area-based relevance measures. These methods rely on the stable extraction of the topological skeleton, which can be difficult due to instability in numerical integration, especially when processing highly rotational flows. These geometric metrics do not consider the flow magnitude, an important physical property of the flow. In this paper, we propose a novel simplification scheme derived from the recently introduced topological notion of robustness, which provides a complementary view on flow structure compared to the traditional topological-skeleton-based approaches. Robustness enables the pruning of sets of critical points according to a quantitative measure of their stability, that is, the minimum amount of vector field perturbation required to remove them. This leads to a hierarchical simplification scheme that encodes flow magnitude in its perturbation metric. Our novel simplification algorithm is based on degree theory, has fewer boundary restrictions, and so can handle more general cases. Finally, we provide an implementation under the piecewise-linear setting and apply it to both synthetic and real-world datasets. © 2014 IEEE.

  10. Visualization of conserved structures by fusing highly variable datasets.

    Science.gov (United States)

    Silverstein, Jonathan C; Chhadia, Ankur; Dech, Fred

    2002-01-01

    Skill, effort, and time are required to identify and visualize anatomic structures in three dimensions from radiological data. Fundamentally, automating these processes requires a technique that uses symbolic information not in the dynamic range of the voxel data. We were developing such a technique based on mutual information for automatic multi-modality image fusion (MIAMI Fuse, University of Michigan). This system previously demonstrated facility at fusing one voxel dataset with integrated symbolic structure information to a CT dataset (different scale and resolution) from the same person. The next step of development of our technique was aimed at accommodating the variability of anatomy from patient to patient by using warping to fuse our standard dataset to arbitrary patient CT datasets. A standard symbolic information dataset was created from the full-color Visible Human Female by segmenting the liver parenchyma, portal veins, and hepatic veins and overwriting each set of voxels with a fixed color. Two arbitrarily selected patient CT scans of the abdomen were used as reference datasets. We used the warping functions in MIAMI Fuse to align the standard structure data to each patient scan. The key to successful fusion was the focused use of multiple warping control points that place themselves around the structure of interest automatically. The user assigns only a few initial control points to align the scans. Fusions 1 and 2 transformed the atlas with 27 points around the liver to CT1 and CT2 respectively. Fusion 3 transformed the atlas with 45 control points around the liver to CT1, and Fusion 4 transformed the atlas with 5 control points around the portal vein. The CT dataset is augmented with the transformed standard structure dataset, such that the warped structure masks are visualized in combination with the original patient dataset. This combined volume visualization is then rendered interactively in stereo on the ImmersaDesk in an immersive Virtual

  11. Credit Scoring by Fuzzy Support Vector Machines with a Novel Membership Function

    Directory of Open Access Journals (Sweden)

    Jian Shi

    2016-11-01

    Full Text Available Due to the recent financial crisis and European debt crisis, credit risk evaluation has become an increasingly important issue for financial institutions. Reliable credit scoring models are crucial for commercial banks to evaluate the financial performance of clients and have been widely studied in the fields of statistics and machine learning. In this paper, a novel fuzzy support vector machine (SVM) credit scoring model is proposed for credit risk analysis, in which fuzzy membership is adopted to indicate the different contributions of each input point to the learning of the SVM classification hyperplane. For methodological consistency, support vector data description (SVDD) is introduced to construct the fuzzy membership function and to reduce the effect of outliers and noise. The SVDD-based fuzzy SVM model is tested against the traditional fuzzy SVM on two real-world datasets, and the research results confirm the effectiveness of the presented method.
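
    One plausible form of such an SVDD-derived membership function is sketched below. scikit-learn has no SVDD class, but OneClassSVM with an RBF kernel solves an equivalent problem, so it stands in here; the specific mapping from boundary distance to membership and the lower bound `sigma` are assumptions for illustration:

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(2)
    X = rng.standard_normal((200, 5))        # training points of one credit class

    # RBF OneClassSVM solves a problem equivalent to SVDD with an RBF kernel.
    svdd = OneClassSVM(kernel='rbf', nu=0.05, gamma='scale').fit(X)

    # decision_function > 0 inside the description; flip sign so d grows with
    # distance beyond the boundary.
    d = -svdd.decision_function(X)

    # Assumed mapping: full weight inside the sphere, decaying weight outside,
    # floored at a small sigma so no training point is discarded entirely.
    sigma = 0.1
    membership = np.where(d <= 0, 1.0,
                          np.clip(1.0 - d / (np.abs(d).max() + 1e-12), sigma, 1.0))
    print(membership.min(), membership.max())
    ```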

  12. Group vector space method for estimating enthalpy of vaporization of organic compounds at the normal boiling point.

    Science.gov (United States)

    Wenying, Wei; Jinyu, Han; Wen, Xu

    2004-01-01

    The specific position of a group in the molecule has been considered, and a group vector space method for estimating the enthalpy of vaporization at the normal boiling point of organic compounds has been developed. An expression for the enthalpy of vaporization Δvap H(Tb) has been established and numerical values of the relative group parameters obtained. The average percent deviation of the estimation of Δvap H(Tb) is 1.16, which shows that the present method demonstrates a significant improvement in applicability for predicting the enthalpy of vaporization at the normal boiling point, compared with conventional group methods.
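
    As a worked illustration of the group-additivity idea (the real method also encodes each group's position in the molecule, and the parameter values below are hypothetical, not the paper's fitted constants):

    ```python
    # Hypothetical group contributions in kJ/mol; the paper's fitted,
    # position-aware parameters are different and more detailed.
    contrib = {'CH3': 2.2, 'CH2': 4.9, 'OH': 24.5}

    def dvap_h(groups):
        """Group-additive estimate of the enthalpy of vaporization at T_b (kJ/mol)."""
        return sum(contrib[g] * n for g, n in groups.items())

    # 1-propanol: CH3-CH2-CH2-OH
    print(dvap_h({'CH3': 1, 'CH2': 2, 'OH': 1}))   # 36.5 with these toy values
    ```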

  13. Music Signal Processing Using Vector Product Neural Networks

    Science.gov (United States)

    Fan, Z. C.; Chan, T. S.; Yang, Y. H.; Jang, J. S. R.

    2017-05-01

    We propose a novel neural network model for music signal processing using vector product neurons and dimensionality transformations. Here, the inputs are first mapped from real values into three-dimensional vectors then fed into a three-dimensional vector product neural network where the inputs, outputs, and weights are all three-dimensional values. Next, the final outputs are mapped back to the reals. Two methods for dimensionality transformation are proposed, one via context windows and the other via spectral coloring. Experimental results on the iKala dataset for blind singing voice separation confirm the efficacy of our model.
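
    A hedged sketch of the forward pass of one vector product layer, in which inputs, weights, and outputs are all three-dimensional vectors and the usual multiply-accumulate is replaced by a sum of cross products; the tanh activation and layer shape are assumptions for illustration, not details taken from the paper:

    ```python
    import numpy as np

    def vpn_layer(X, W, b):
        """One vector product layer.

        X: (n_in, 3) input vectors, W: (n_out, n_in, 3) weights, b: (n_out, 3) biases.
        Each neuron sums the cross products of its weight vectors with the inputs,
        so inputs, weights, and outputs all stay three-dimensional.
        """
        z = np.cross(W, X[None, :, :]).sum(axis=1) + b   # (n_out, 3)
        return np.tanh(z)                                # assumed nonlinearity

    rng = np.random.default_rng(3)
    X = rng.standard_normal((8, 3))     # e.g. spectrogram bins lifted to 3-D vectors
    out = vpn_layer(X, rng.standard_normal((4, 8, 3)), rng.standard_normal((4, 3)))
    print(out.shape)                    # (4, 3)
    ```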

  14. An Annotated Dataset of 14 Cardiac MR Images

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2002-01-01

    This note describes a dataset consisting of 14 annotated cardiac MR images. Points of correspondence are placed on each image at the left ventricle (LV). As such, the dataset can be readily used for building statistical models of shape. Further, format specifications and terms of use are given.

  15. Transversals of Complex Polynomial Vector Fields

    DEFF Research Database (Denmark)

    Dias, Kealey

    Vector fields in the complex plane are defined by assigning the vector determined by the value P(z) to each point z in the complex plane, where P is a polynomial of one complex variable. We consider special families of so-called rotated vector fields that are determined by a polynomial multiplied … by rotational constants. Transversals are a certain class of curves for such a family of vector fields that represent the bifurcation states for this family of vector fields. More specifically, transversals are curves that coincide with a homoclinic separatrix for some rotation of the vector field. Given … a concrete polynomial, it seems to take quite a bit of work to prove that it is generic, i.e. structurally stable. This has been done for a special class of degree d polynomial vector fields having simple equilibrium points at the d roots of unity, d odd. In proving that such vector fields are generic …

  16. Vector grammars and PN machines

    Institute of Scientific and Technical Information of China (English)

    蒋昌俊

    1996-01-01

    The concept of vector grammars under the string semantics is introduced. The class of vector grammars is given, which is similar to the class of Chomsky grammars. The regular vector grammars are further subdivided. The strong and weak relations between vector grammars and scalar grammars are discussed, and the spectrum system graph of scalar and vector grammars is constructed. The equivalence between regular vector grammars and Petri nets (also called PN machines) is pointed out. The hybrid PN machine is introduced, and its language is proved equivalent to the language of context-free vector grammars. Thus a complete relational structure between vector grammars and PN machines is formed.

  17. Rotations with Rodrigues' vector

    International Nuclear Information System (INIS)

    Pina, E

    2011-01-01

    The rotational dynamics was studied from the point of view of Rodrigues' vector. This vector is defined here by its connection with other forms of parametrization of the rotation matrix. The rotation matrix was expressed in terms of this vector. The angular velocity was computed using the components of Rodrigues' vector as coordinates. It appears to be a fundamental matrix that is used to express the components of the angular velocity, the rotation matrix and the angular momentum vector. The Hamiltonian formalism of rotational dynamics in terms of this vector uses the same matrix. The quantization of the rotational dynamics is performed with simple rules if one uses Rodrigues' vector and similar formal expressions for the quantum operators that mimic the Hamiltonian classical dynamics.

  18. Assessing the Influence of Land Use and Land Cover Datasets with Different Points in Time and Levels of Detail on Watershed Modeling in the North River Watershed, China

    Directory of Open Access Journals (Sweden)

    Jinliang Huang

    2012-12-01

    Full Text Available Land use and land cover (LULC information is an important component influencing watershed modeling with regards to hydrology and water quality in the river basin. In this study, the sensitivity of the Soil and Water Assessment Tool (SWAT model to LULC datasets with three points in time and three levels of detail was assessed in a coastal subtropical watershed located in Southeast China. The results showed good agreement between observed and simulated values for both monthly and daily streamflow and monthly NH4+-N and TP loads. Three LULC datasets in 2002, 2007 and 2010 had relatively little influence on simulated monthly and daily streamflow, whereas they exhibited greater effects on simulated monthly NH4+-N and TP loads. When using the two LULC datasets in 2007 and 2010 compared with that in 2002, the relative differences in predicted monthly NH4+-N and TP loads were −11.0 to −7.8% and −4.8 to −9.0%, respectively. There were no significant differences in simulated monthly and daily streamflow when using the three LULC datasets with ten, five and three categories. When using LULC datasets from ten categories compared to five and three categories, the relative differences in predicted monthly NH4+-N and TP loads were −6.6 to −6.5% and −13.3 to −7.3%, respectively. Overall, the sensitivity of the SWAT model to LULC datasets with different points in time and levels of detail was lower in monthly and daily streamflow simulation than in monthly NH4+-N and TP loads prediction. This research provided helpful insights into the influence of LULC datasets on watershed modeling.

  19. Topographic and hydrographic GIS dataset for the Afghanistan Geological Survey and U.S. Geological Survey 2010 Minerals Project

    Science.gov (United States)

    Chirico, P.G.; Moran, T.W.

    2011-01-01

    This dataset contains a collection of 24 folders, each representing a specific U.S. Geological Survey area of interest (AOI; fig. 1), as well as datasets for AOI subsets. Each folder includes the extent, contours, Digital Elevation Model (DEM), and hydrography of the corresponding AOI, organized into vector feature and raster datasets. The dataset comprises a geographic information system (GIS), which is available upon request from the USGS Afghanistan programs Web site (http://afghanistan.cr.usgs.gov/minerals.php), along with maps of the 24 USGS AOIs.

  20. Semi-Automated Approach for Mapping Urban Trees from Integrated Aerial LiDAR Point Cloud and Digital Imagery Datasets

    Science.gov (United States)

    Dogon-Yaro, M. A.; Kumar, P.; Rahman, A. Abdul; Buyuksalih, G.

    2016-09-01

    Mapping of trees plays an important role in modern urban spatial data management, as many benefits and applications derive from this detailed, up-to-date data source. Timely and accurate acquisition of information on the condition of urban trees serves as a tool for decision makers to better appreciate urban ecosystems and their numerous values, which are critical to building up strategies for sustainable development. The conventional techniques used for extracting trees include ground surveying and interpretation of aerial photography. However, these techniques are associated with constraints, such as labour-intensive field work and high financial requirements, which can be overcome by means of integrated LiDAR and digital image datasets. In contrast to the predominant studies on tree extraction in purely forested areas, this study concentrates on urban areas, which have a high structural complexity with a multitude of different objects. This paper presents a workflow for a semi-automated approach to extracting urban trees from integrated processing of airborne LiDAR point cloud and multispectral digital image datasets over the city of Istanbul, Turkey. The paper shows that the integrated datasets are a suitable technology and a viable source of information for urban tree management. In conclusion, the extracted information provides a snapshot of the location, composition and extent of trees in the study area, useful to city planners and other decision makers in order to understand how much canopy cover exists, identify new planting, removal, or reforestation opportunities, and determine which locations have the greatest need or potential to maximize the benefits of return on investment. It can also help track trends or changes to the urban trees over time and inform future management decisions.

  1. Efficiency improvement of the maximum power point tracking for PV systems using support vector machine technique

    International Nuclear Information System (INIS)

    Kareim, Ameer A; Mansor, Muhamad Bin

    2013-01-01

    The aim of this paper is to improve the efficiency of maximum power point tracking (MPPT) for PV systems. A support vector machine (SVM) is proposed to implement the MPPT controller. The theoretical, perturb and observe (P&O), and incremental conductance (IC) algorithms were used for comparison with the proposed SVM algorithm. MATLAB models for the PV module and for the theoretical, SVM, P&O, and IC algorithms were implemented. The improved MPPT uses the SVM method to predict the optimum voltage of the PV system in order to extract the maximum power point (MPP). The SVM technique uses two inputs: the solar radiation and the ambient temperature of the modeled PV module. The results show that the proposed SVM technique has a lower root mean square error (RMSE) and higher efficiency than the P&O and IC methods.
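
    The prediction step can be pictured as a small regression problem: given irradiance and temperature, return the voltage at which the array should operate. A minimal sketch with synthetic data follows; the toy voltage surface and the SVR hyperparameters are illustrative assumptions, not the paper's model:

    ```python
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(4)
    # Training grid: irradiance (W/m^2) and cell temperature (deg C) -> measured V_mpp.
    G = rng.uniform(200, 1000, 300)
    T = rng.uniform(15, 45, 300)
    # Toy stand-in for the module's true MPP voltage surface (illustrative only).
    v_mpp = 30.0 - 0.08 * (T - 25) + 0.8 * np.log(G / 1000.0)

    model = SVR(kernel='rbf', C=10.0, epsilon=0.05)
    model.fit(np.column_stack([G, T]), v_mpp)

    # At run time the tracker sets the operating point to the predicted optimum voltage.
    print(model.predict([[800.0, 30.0]]))
    ```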

  2. Converting point-wise nuclear cross sections to pole representation using regularized vector fitting

    Science.gov (United States)

    Peng, Xingjie; Ducru, Pablo; Liu, Shichang; Forget, Benoit; Liang, Jingang; Smith, Kord

    2018-03-01

    Direct Doppler broadening of nuclear cross sections in Monte Carlo codes has been widely sought for coupled reactor simulations. One recent approach proposed analytical broadening using a pole representation of the commonly used resonance models and the introduction of a local windowing scheme to improve performance (Hwang, 1987; Forget et al., 2014; Josey et al., 2015, 2016). This pole representation has previously been obtained by converting the resonance parameters in the evaluated nuclear data library into poles and residues. However, the cross sections of some isotopes are provided only as point-wise data in the ENDF/B-VII.1 library. To convert these isotopes to the pole representation, a recent approach was proposed using the relaxed vector fitting (RVF) algorithm (Gustavsen and Semlyen, 1999; Gustavsen, 2006; Liu et al., 2018). This approach, however, requires the number of poles to be specified ahead of time. This article addresses the issue by adding a pole-and-residue filtering step to the RVF procedure. The resulting regularized VF (ReV-Fit) algorithm is shown to efficiently converge the poles close to the physical ones, eliminating most of the superfluous poles and thus enabling the conversion of point-wise nuclear cross sections.
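
    The filtering idea can be illustrated as follows: evaluate the pole representation, drop poles whose residues contribute negligibly, and refit the survivors. The sketch below uses random placeholder poles and residues and a simple magnitude threshold; it follows the spirit of ReV-Fit, not its actual regularization:

    ```python
    import numpy as np

    # A pole representation sigma(u) = Re( sum_j r_j / (p_j - u) ), with u = sqrt(E).
    # Poles and residues here are random placeholders, not evaluated nuclear data.
    rng = np.random.default_rng(5)
    poles = rng.standard_normal(40) + 1j * np.abs(rng.standard_normal(40))
    residues = rng.standard_normal(40) * 10.0 ** rng.integers(-6, 2, 40)

    def xs(u, p, r):
        """Evaluate the pole-representation cross section on a grid of u values."""
        return np.real((r[None, :] / (p[None, :] - u[:, None])).sum(axis=1))

    # Filtering step: discard poles with negligible residues; a refit of the
    # survivors would then recover the small accuracy loss.
    keep = np.abs(residues) > 1e-3 * np.abs(residues).max()
    u = np.sqrt(np.linspace(1.0, 100.0, 200))
    approx = xs(u, poles[keep], residues[keep])
    print(keep.sum(), "poles kept out of", keep.size)
    ```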

  3. Violation of vector dominance in the vector manifestation

    International Nuclear Information System (INIS)

    Sasaki, Chihiro

    2003-01-01

    The vector manifestation (VM) is a new pattern for realizing chiral symmetry in QCD. In the VM, the massless vector meson becomes the chiral partner of the pion at the critical point, in contrast with the restoration based on the linear sigma model. Including the intrinsic temperature dependences of the parameters of the hidden local symmetry (HLS) Lagrangian, determined from the underlying QCD through the Wilsonian matching, together with the hadronic thermal corrections, we present a new prediction of the VM on the direct photon-π-π coupling, which measures the validity of the vector dominance (VD) of the electromagnetic form factor of the pion. We find that the VD is largely violated at the critical temperature, which indicates that the assumption of VD made in several analyses of the dilepton spectra in hot matter may need to be weakened in order to consistently include the effect of the dropping mass of the vector meson. (author)

  4. Chord Recognition Based on Temporal Correlation Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Zhongyang Rao

    2016-05-01

    Full Text Available In this paper, we propose a method called temporal correlation support vector machine (TCSVM) for automatic major-minor chord recognition in audio music. We first use robust principal component analysis to separate the singing voice from the music, to reduce the influence of the singing voice, and consider the temporal correlations of the chord features. Using robust principal component analysis, we expect the low-rank component of the spectrogram matrix to contain the musical accompaniment and the sparse component to contain the vocal signals. Then, we extract a new logarithmic pitch class profile (LPCP) feature called enhanced LPCP from the low-rank part. To exploit the temporal correlation among the LPCP features of chords, we propose an improved support vector machine algorithm called TCSVM. We perform this study using the MIREX'09 (Music Information Retrieval Evaluation eXchange) Audio Chord Estimation dataset. Furthermore, we conduct comprehensive experiments using different pitch class profile feature vectors to examine the performance of TCSVM. The results of our method are comparable to the state-of-the-art methods that entered the MIREX Audio Chord Estimation task in 2013 and 2014.

  5. Aging Detection of Electrical Point Machines Based on Support Vector Data Description

    Directory of Open Access Journals (Sweden)

    Jaewon Sa

    2017-11-01

    Full Text Available Electrical point machines (EPMs) must be replaced at an appropriate time to prevent the occurrence of operational safety or stability problems in trains resulting from aging or budget constraints. However, it is difficult to replace EPMs effectively because their aging conditions depend on the operating environments, and thus a fixed guideline is typically not suitable for replacing EPMs at the most timely moment. In this study, we propose a classification method for detecting an aging effect, to facilitate the timely replacement of EPMs. We employ support vector data description to segregate data from “aged” and “not-yet-aged” equipment by analyzing the subtle differences in normalized electrical signals resulting from aging. Based on the before- and after-replacement data obtained from experimental studies conducted on EPMs, we confirmed that the proposed method is capable of classifying machines based on exhibited aging effects with adequate accuracy.
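
    A minimal sketch of the classification step, with synthetic current-curve features standing in for the normalized electrical signals; OneClassSVM with an RBF kernel is used as the SVDD-equivalent available in scikit-learn, and the thresholds are illustrative:

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(6)
    # Normalized current curves from after-replacement ("not-yet-aged") and
    # before-replacement ("aged") point-machine movements (synthetic here).
    fresh = rng.normal(1.0, 0.05, size=(150, 40))
    aged = rng.normal(1.08, 0.07, size=(50, 40))

    # Describe the fresh class only; deviations from the description count as aging.
    desc = OneClassSVM(kernel='rbf', nu=0.05, gamma='scale').fit(fresh)

    pred = desc.predict(aged)        # -1 = outside the description => aging effect
    print((pred == -1).mean())       # fraction of aged signals flagged
    ```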

  6. Analysis of Naïve Bayes Algorithm for Email Spam Filtering across Multiple Datasets

    Science.gov (United States)

    Fitriah Rusland, Nurul; Wahid, Norfaradilla; Kasim, Shahreen; Hafit, Hanayanti

    2017-08-01

    E-mail spam continues to be a problem on the Internet. Spammed e-mail may contain many copies of the same message, commercial advertisements or other irrelevant posts such as pornographic content. In previous research, different filtering techniques have been used to detect these e-mails, such as Random Forest, Naïve Bayes, Support Vector Machine (SVM) and Neural Network. In this research, we test the Naïve Bayes algorithm for e-mail spam filtering on two datasets, Spam Data and SPAMBASE [8], and evaluate its performance. Performance on the datasets is evaluated in terms of accuracy, recall, precision and F-measure. Our research uses the WEKA tool for the evaluation of the Naïve Bayes algorithm for e-mail spam filtering on both datasets. The results show that the type of e-mail and the number of instances in the dataset influence the performance of Naïve Bayes.
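
    A hedged sketch of this evaluation protocol on synthetic word-count features (the real study used the Spam Data and SPAMBASE datasets in WEKA; the labeling rule below is invented purely so the example runs):

    ```python
    import numpy as np
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, precision_recall_fscore_support

    rng = np.random.default_rng(7)
    # Synthetic word-count matrix standing in for SPAMBASE-style features.
    X = rng.poisson(1.0, size=(1000, 30))
    y = (X[:, :5].sum(axis=1) > 6).astype(int)   # "spam" if certain tokens dominate

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = MultinomialNB().fit(Xtr, ytr)
    yp = clf.predict(Xte)

    p, r, f, _ = precision_recall_fscore_support(yte, yp, average='binary')
    print(accuracy_score(yte, yp), p, r, f)      # the four measures used in the paper
    ```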

  7. SEMI-AUTOMATED APPROACH FOR MAPPING URBAN TREES FROM INTEGRATED AERIAL LiDAR POINT CLOUD AND DIGITAL IMAGERY DATASETS

    Directory of Open Access Journals (Sweden)

    M. A. Dogon-Yaro

    2016-09-01

    Full Text Available Mapping of trees plays an important role in modern urban spatial data management, as many benefits and applications derive from this detailed, up-to-date data source. Timely and accurate acquisition of information on the condition of urban trees serves as a tool for decision makers to better appreciate urban ecosystems and their numerous values, which are critical to building up strategies for sustainable development. The conventional techniques used for extracting trees include ground surveying and interpretation of aerial photography. However, these techniques are associated with constraints, such as labour-intensive field work and high financial requirements, which can be overcome by means of integrated LiDAR and digital image datasets. In contrast to the predominant studies on tree extraction in purely forested areas, this study concentrates on urban areas, which have a high structural complexity with a multitude of different objects. This paper presents a workflow for a semi-automated approach to extracting urban trees from integrated processing of airborne LiDAR point cloud and multispectral digital image datasets over the city of Istanbul, Turkey. The paper shows that the integrated datasets are a suitable technology and a viable source of information for urban tree management. In conclusion, the extracted information provides a snapshot of the location, composition and extent of trees in the study area, useful to city planners and other decision makers in order to understand how much canopy cover exists, identify new planting, removal, or reforestation opportunities, and determine which locations have the greatest need or potential to maximize the benefits of return on investment. It can also help track trends or changes to the urban trees over time and inform future management decisions.

  8. Hyperbolic-symmetry vector fields.

    Science.gov (United States)

    Gao, Xu-Zhen; Pan, Yue; Cai, Meng-Qiang; Li, Yongnan; Tu, Chenghou; Wang, Hui-Tian

    2015-12-14

    We present and construct a new kind of orthogonal coordinate system, the hyperbolic coordinate system. We present and design a new kind of locally linearly polarized vector field, defined as the hyperbolic-symmetry vector field because the points with the same polarization form a series of hyperbolae. We experimentally demonstrate the generation of such hyperbolic-symmetry vector optical fields. In particular, we also study the modified hyperbolic-symmetry vector optical fields with twofold and fourfold symmetric states of polarization when mirror symmetry is introduced. The tight focusing behaviors of these vector fields are also investigated. In addition, we fabricate micro-structures on K9 glass surfaces with several tightly focused (modified) hyperbolic-symmetry vector field patterns, which demonstrates that the simulated tightly focused fields are in good agreement with the fabricated micro-structures.

  9. USING LEARNING VECTOR QUANTIZATION METHOD FOR AUTOMATED IDENTIFICATION OF MYCOBACTERIUM TUBERCULOSIS

    Directory of Open Access Journals (Sweden)

    Endah Purwanti

    2012-01-01

    Full Text Available In this paper, we develop an automated method for the detection of tubercle bacilli in clinical specimens, principally sputum. This investigation is the first attempt to automatically identify TB bacilli in sputum using image processing and learning vector quantization (LVQ) techniques. The evaluation of LVQ, carried out on a tuberculosis dataset, shows an average accuracy of 91.33%.
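
    For reference, a plain LVQ1 implementation sketch; the prototype counts, learning rate, epochs, and toy data are illustrative defaults, not the parameters or features used in the study:

    ```python
    import numpy as np

    def lvq1_fit(X, y, n_proto=2, lr=0.05, epochs=30, seed=0):
        """Plain LVQ1: pull the winning prototype toward same-class samples and
        push it away from samples of other classes."""
        rng = np.random.default_rng(seed)
        classes = np.unique(y)
        protos = np.vstack([X[y == c][rng.choice((y == c).sum(), n_proto)]
                            for c in classes]).astype(float)
        labels = np.repeat(classes, n_proto)
        for _ in range(epochs):
            for i in rng.permutation(len(X)):
                w = np.argmin(((protos - X[i]) ** 2).sum(axis=1))   # winner
                sign = 1.0 if labels[w] == y[i] else -1.0
                protos[w] += sign * lr * (X[i] - protos[w])
        return protos, labels

    def lvq1_predict(X, protos, labels):
        d = ((X[:, None, :] - protos[None, :, :]) ** 2).sum(axis=2)
        return labels[d.argmin(axis=1)]

    rng = np.random.default_rng(1)
    X = rng.standard_normal((120, 4))            # stand-in for bacillus image features
    y = (X[:, 0] > 0).astype(int)                # toy labels: 1 = bacillus present
    P, L = lvq1_fit(X, y)
    print((lvq1_predict(X, P, L) == y).mean())   # training accuracy on toy data
    ```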

  10. Charmless Hadronic B Decays into Vector, Axial Vector and Tensor Final States at BaBar

    International Nuclear Information System (INIS)

    Gandini, Paolo

    2012-01-01

    We present experimental measurements of the branching fraction and longitudinal polarization fraction in charmless hadronic B decays into vector, axial-vector and tensor final states with the final dataset of BABAR. Measurements of such decays are a powerful tool both to test the Standard Model and to search for possible sources of new physics. In this document we present a short review of the latest experimental results at BABAR concerning charmless quasi-two-body decays into final states containing particles with spin 1 or spin 2 and different parities. This kind of decay has received considerable theoretical interest in the last few years, and this particular attention has led to interesting experimental results at the current B factories. In fact, the study of the longitudinal polarization fraction f_L in charmless B decays to vector-vector (VV), vector axial-vector (VA) and axial-vector axial-vector (AA) mesons provides information on the underlying helicity structure of the decay mechanism. Naive helicity conservation arguments predict a dominant longitudinal polarization fraction f_L ∼ 1 for both tree- and penguin-dominated decays, and this pattern seems to be confirmed by the tree-dominated B → ρρ and B+ → ωρ+ decays. Other penguin-dominated decays, instead, show a different behavior: the measured value of f_L ∼ 0.5 in B → φK* decays is in contrast with naive Standard Model (SM) calculations. Several solutions have been proposed, such as the introduction of non-factorizable terms and penguin-annihilation amplitudes, while other explanations invoke new physics. New modes have been investigated to shed more light on the problem.

  11. Kernel method for clustering based on optimal target vector

    International Nuclear Information System (INIS)

    Angelini, Leonardo; Marinazzo, Daniele; Pellicoro, Mario; Stramaglia, Sebastiano

    2006-01-01

    We introduce Ising models, suitable for dichotomic clustering, with couplings that are (i) both ferro- and anti-ferromagnetic and (ii) dependent on the whole dataset and not only on pairs of samples. The couplings are determined by exploiting the notion of the optimal target vector, introduced here, a link between kernel supervised and unsupervised learning. The effectiveness of the method is shown in the case of the well-known iris dataset and in benchmarks of gene expression levels, where it works better than existing methods for dichotomic clustering

  12. Predicting post-translational lysine acetylation using support vector machines

    DEFF Research Database (Denmark)

    Gnad, Florian; Ren, Shubin; Choudhary, Chunaram

    2010-01-01

    spectrometry to identify 3600 lysine acetylation sites on 1750 human proteins covering most of the previously annotated sites and providing the most comprehensive acetylome so far. This dataset should provide an excellent source to train support vector machines (SVMs) allowing the high accuracy in silico...

  13. Multiview vector-valued manifold regularization for multilabel image classification.

    Science.gov (United States)

    Luo, Yong; Tao, Dacheng; Xu, Chang; Xu, Chao; Liu, Hong; Wen, Yonggang

    2013-05-01

    In computer vision, image datasets used for classification are naturally associated with multiple labels and composed of multiple views, because each image may contain several objects (e.g., pedestrian, bicycle, and tree) and is properly characterized by multiple visual features (e.g., color, texture, and shape). Currently available tools ignore either the label relationship or the view complementarity. Motivated by the success of the vector-valued function that constructs matrix-valued kernels to explore the multilabel structure in the output space, we introduce multiview vector-valued manifold regularization (MV3MR) to integrate multiple features. MV3MR exploits the complementary properties of different features and discovers the intrinsic local geometry of the compact support shared by different features under the theme of manifold regularization. We conduct extensive experiments on two challenging, but popular, datasets, PASCAL VOC'07 and MIR Flickr, and validate the effectiveness of the proposed MV3MR for image classification.

  14. Principal-vector-directed fringe-tracking technique.

    Science.gov (United States)

    Zhang, Zhihui; Guo, Hongwei

    2014-11-01

    Fringe tracking is one of the most straightforward techniques for analyzing a single fringe pattern. This work presents a principal-vector-directed fringe-tracking technique. It uses Gaussian derivatives for estimating fringe gradients and uses hysteresis thresholding for segmenting singular points, thus improving the principal component analysis method. Using it allows us to estimate the principal vectors of fringes from a pattern with high noise. The fringe-tracking procedure is directed by these principal vectors, so that erroneous results induced by noise and other error-inducing factors are avoided. At the same time, the singular point regions of the fringe pattern are identified automatically. Using them allows us to determine paths through which the "seed" point for each fringe skeleton is easy to find, thus alleviating the computational burden in processing the fringe pattern. The results of a numerical simulation and experiment demonstrate this method to be valid.

  15. Towards human behavior recognition based on spatio temporal features and support vector machines

    Science.gov (United States)

    Ghabri, Sawsen; Ouarda, Wael; Alimi, Adel M.

    2017-03-01

    Security and surveillance are vital issues in today's world. Recent acts of terrorism have highlighted the urgent need for efficient surveillance. There is indeed a need for an automated video surveillance system that can detect the identity and activity of a person. In this article, we propose a new paradigm to recognize aggressive human behaviors such as boxing. Our proposed system for human activity detection fuses Spatio-Temporal Interest Point (STIP) and Histogram of Oriented Gradients (HoG) features into a novel feature called Spatio-Temporal Histogram of Oriented Gradients (STHOG). To evaluate the robustness of the proposed paradigm, with a local application of the HoG technique at STIP points, we conducted experiments on the KTH human action dataset using multi-class support vector machine classification. The proposed scheme outperforms basic descriptors such as HoG and STIP, achieving a classification accuracy of 82.26%.

  16. Line Width Recovery after Vectorization of Engineering Drawings

    Directory of Open Access Journals (Sweden)

    Gramblička Matúš

    2016-12-01

    Full Text Available Vectorization is the process of converting a raster image representation into a vector representation. Contemporary commercial vectorization applications do not provide sufficiently high-quality output for images such as mechanical engineering drawings. Preserving line width is one of the problems. Some applications need to know the line width after vectorization because this attribute carries important semantic information for subsequent 3D model generation. This article describes an algorithm that recovers the width of individual lines in vectorized engineering drawings. Two approaches are proposed: one examines the line width at three points, whereas the second uses a variable number of points depending on the line length. The algorithm is tested on real mechanical engineering drawings.
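
    The three-point variant can be sketched as follows: probe the raster along the normal of the vectorized segment at a few interior sample points and take the median count of foreground pixels. The probe spacing and limits below are assumptions for illustration:

    ```python
    import numpy as np

    def line_width(img, p0, p1, samples=3, max_w=15):
        """Estimate the stroke width of a vectorized segment by probing the raster
        along the segment normal at a few sample points (three-point variant)."""
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        d = p1 - p0
        n = np.array([-d[1], d[0]]) / (np.linalg.norm(d) + 1e-12)   # unit normal
        widths = []
        for t in np.linspace(0.25, 0.75, samples):    # probe interior points only
            c = p0 + t * d
            w = 0
            for s in np.arange(-max_w, max_w + 1):
                y, x = np.round(c + s * n).astype(int)
                if 0 <= y < img.shape[0] and 0 <= x < img.shape[1] and img[y, x]:
                    w += 1
            widths.append(w)
        return float(np.median(widths))

    img = np.zeros((40, 40), bool)
    img[18:21, 5:35] = True                    # a 3-pixel-thick horizontal stroke
    print(line_width(img, (19, 5), (19, 34)))  # -> 3.0
    ```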

  17. Support vector machine for diagnosis cancer disease: A comparative study

    Directory of Open Access Journals (Sweden)

    Nasser H. Sweilam

    2010-12-01

    Full Text Available The support vector machine has become an increasingly popular tool for machine learning tasks involving classification, regression or novelty detection. Training a support vector machine requires the solution of a very large quadratic programming problem. Traditional optimization methods cannot be directly applied due to memory restrictions. Up to now, several approaches exist for circumventing the above shortcomings, and they work well. Another learning algorithm, particle swarm optimization, in its quantum-behaved variant, is introduced for training SVMs. A further approach, the least squares support vector machine (LSSVM) with an active set strategy, is also introduced. The results obtained by these methods are tested on a breast cancer dataset and compared with the exact solution of the model problem.

  18. Generation of arbitrary vector beams

    Science.gov (United States)

    Perez-Garcia, Benjamin; López-Mariscal, Carlos; Hernandez-Aranda, Raul I.; Gutiérrez-Vega, Julio C.

    2017-08-01

    Optical vector beams arise from point-to-point spatial variations of the electric component of an electromagnetic field over the transverse plane. In this work, we present a novel experimental technique to generate arbitrary vector beams, and provide sufficient evidence to validate their state of polarization. This technique takes advantage of the capability of a Spatial Light Modulator to simultaneously generate two components of an electromagnetic field by halving the screen of the device and subsequently recombining them in a Sagnac interferometer. Our experimental results show the versatility and robustness of this technique for the generation of vector beams.

  19. Allegheny County Cell Tower Points

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — This dataset portrays cell tower locations as points in Allegheny County. The dataset is based on outbuilding codes in the Property Assessment Parcel Database used...

  20. UniMiB SHAR: A Dataset for Human Activity Recognition Using Acceleration Data from Smartphones

    Directory of Open Access Journals (Sweden)

    Daniela Micucci

    2017-10-01

    Full Text Available Smartphones, smartwatches, fitness trackers, and ad-hoc wearable devices are being increasingly used to monitor human activities. Data acquired by the hosted sensors are usually processed by machine-learning-based algorithms to classify human activities. The success of those algorithms mostly depends on the availability of training (labeled) data that, if made publicly available, would allow researchers to make objective comparisons between techniques. Nowadays, there are only a few publicly available datasets, which often contain samples from subjects with too similar characteristics, and very often lack the specific information needed to select subsets of samples according to specific criteria. In this article, we present a new dataset of acceleration samples acquired with an Android smartphone designed for human activity recognition and fall detection. The dataset includes 11,771 samples of both human activities and falls performed by 30 subjects of ages ranging from 18 to 60 years. Samples are divided into 17 fine-grained classes grouped into two coarse-grained classes: one containing samples of 9 types of activities of daily living (ADL) and the other containing samples of 8 types of falls. The dataset has been stored so as to include all the information useful to select samples according to different criteria, such as the type of ADL performed, the age, the gender, and so on. Finally, the dataset has been benchmarked with four different classifiers and with two different feature vectors. We evaluated four different classification tasks: fall vs. no fall, 9 activities, 8 falls, 17 activities and falls. For each classification task, we performed a 5-fold cross-validation (i.e., including samples from all the subjects in both the training and the test dataset) and a leave-one-subject-out cross-validation (i.e., the test data include the samples of one subject only, and the training data the samples of all the other subjects). Regarding the
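
    The two validation protocols differ only in how folds are formed. A minimal sketch of the leave-one-subject-out protocol on synthetic stand-in data (the classifier, feature dimension, and labels are placeholders, not the benchmark's exact setup):

    ```python
    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(8)
    # Synthetic stand-in: feature vectors, 17 classes, 30 subjects.
    X = rng.standard_normal((900, 151))
    y = rng.integers(0, 17, 900)
    subjects = rng.integers(0, 30, 900)

    # Leave-one-subject-out: every fold tests on one subject unseen during training.
    logo = LeaveOneGroupOut()
    scores = cross_val_score(KNeighborsClassifier(), X, y, groups=subjects, cv=logo)
    print(scores.mean())   # near chance here, since the toy labels are random
    ```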

  1. Coal demand prediction based on a support vector machine model

    Energy Technology Data Exchange (ETDEWEB)

    Jia, Cun-liang; Wu, Hai-shan; Gong, Dun-wei [China University of Mining & Technology, Xuzhou (China). School of Information and Electronic Engineering

    2007-01-15

    A forecasting model for the coal demand of China using support vector regression was constructed. With the selected embedding dimension, the output vectors and input vectors were constructed from the coal demand of China from 1980 to 2002. After comparison with linear and sigmoid kernels, a radial basis function (RBF) was adopted as the kernel function. By analyzing the relationship between the prediction error and the model parameters, proper parameters were chosen. A support vector machine (SVM) model with multiple inputs and a single output was proposed. Compared with a predictor based on RBF neural networks on test datasets, the results show that the SVM predictor has higher precision and greater generalization ability. In the end, the coal demand from 2003 to 2006 is accurately forecasted. 10 refs., 2 figs., 4 tabs.
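
    The input construction with an embedding dimension m means each training example is a window of m consecutive annual demands and the target is the next value. A minimal sketch with a toy series (the numbers and SVR hyperparameters are illustrative, not the paper's data or settings):

    ```python
    import numpy as np
    from sklearn.svm import SVR

    demand = np.array([610, 626, 640, 655, 680, 700, 728, 760, 790, 830,
                       860, 900, 940, 980, 1030], dtype=float)  # toy series only

    m = 3  # embedding dimension: predict x[t] from the m previous values
    X = np.array([demand[i:i + m] for i in range(len(demand) - m)])
    y = demand[m:]

    model = SVR(kernel='rbf', C=100.0, epsilon=1.0).fit(X, y)
    print(model.predict(demand[-m:][None]))   # one-step-ahead forecast
    ```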

  2. Properties of vector and axial-vector mesons from a generalized Nambu-Jona-Lasinio model

    International Nuclear Information System (INIS)

    Bernard, V.; Meissner, U.G.; Massachusetts Inst. of Tech., Cambridge; Massachusetts Inst. of Tech., Cambridge

    1988-01-01

    We construct a generalized Nambu-Jona-Lasinio lagrangian including scalar, pseudoscalar, vector and axial-vector mesons. We specialize to the two-flavor case. The properties of the structured vacuum as well as the meson masses and coupling constants are calculated, giving an overall agreement within 20% of the experimental data. We investigate the meson properties at finite density. In contrast to the mass of the scalar σ-meson, which decreases sharply with increasing density, the vector meson masses are almost independent of density. Furthermore, the vector-meson-quark coupling constants are also stable against density changes. We point out that these results imply a softening of the nuclear equation of state at high densities. Furthermore, we discuss the breakdown of the KSFR relation on the quark level as well as other deviations from phenomenological concepts such as universality and vector meson dominance. (orig.)

  3. CERC Dataset (Full Hadza Data)

    DEFF Research Database (Denmark)

    2016-01-01

    The dataset includes demographic, behavioral, and religiosity data from eight different populations from around the world. The samples were drawn from: (1) Coastal and (2) Inland Tanna, Vanuatu; (3) Hadzaland, Tanzania; (4) Lovu, Fiji; (5) Pointe aux Piment, Mauritius; (6) Pesqueiro, Brazil; (7) Kyzyl, Tyva Republic; and (8) Yasawa, Fiji. Related publication: Purzycki, et al. (2016). Moralistic Gods, Supernatural Punishment and the Expansion of Human Sociality. Nature, 530(7590): 327-330.

  4. Comparison of recent SnIa datasets

    International Nuclear Information System (INIS)

    Sanchez, J.C. Bueno; Perivolaropoulos, L.; Nesseris, S.

    2009-01-01

    We rank the six latest Type Ia supernova (SnIa) datasets (Constitution (C), Union (U), ESSENCE (Davis) (E), Gold06 (G), SNLS 1yr (S) and SDSS-II (D)) in the context of the Chevalier-Polarski-Linder (CPL) parametrization w(a) = w_0 + w_1(1 − a), according to their Figure of Merit (FoM), their consistency with the cosmological constant (ΛCDM), their consistency with standard rulers (Cosmic Microwave Background (CMB) and Baryon Acoustic Oscillations (BAO)) and their mutual consistency. We find a significant improvement of the FoM (defined as the inverse area of the 95.4% parameter contour) with the number of SnIa of these datasets ((C) highest FoM, (U), (G), (D), (E), (S) lowest FoM). Standard rulers (CMB+BAO) have a better FoM by about a factor of 3, compared to the highest FoM SnIa dataset (C). We also find that the ranking sequence based on consistency with ΛCDM is identical with the corresponding ranking based on consistency with standard rulers ((S) most consistent, (D), (C), (E), (U), (G) least consistent). The ranking sequence of the datasets however changes when we consider the consistency with an expansion history corresponding to evolving dark energy (w_0, w_1) = (−1.4, 2) crossing the phantom divide line w = −1 (it is practically reversed to (G), (U), (E), (S), (D), (C)). The SALT2 and MLCS2k2 fitters are also compared and some peculiar features of the SDSS-II dataset when standardized with the MLCS2k2 fitter are pointed out. Finally, we construct a statistic to estimate the internal consistency of a collection of SnIa datasets. We find that even though there is good consistency among most samples taken from the above datasets, this consistency decreases significantly when the Gold06 (G) dataset is included in the sample.

  5. Geochemical Fingerprinting of Coltan Ores by Machine Learning on Uneven Datasets

    International Nuclear Information System (INIS)

    Savu-Krohn, Christian; Rantitsch, Gerd; Auer, Peter; Melcher, Frank; Graupner, Torsten

    2011-01-01

    Two modern machine learning techniques, Linear Programming Boosting (LPBoost) and Support Vector Machines (SVMs), are introduced and applied to a geochemical dataset of niobium–tantalum (“coltan”) ores from Central Africa to demonstrate how such information may be used to distinguish ore provenance, i.e., place of origin. The compositional data used include uni- and multivariate outliers and elemental distributions are not described by parametric frequency distribution functions. The “soft margin” techniques of LPBoost and SVMs can be applied to such data. Optimization of their learning parameters results in an average accuracy of up to c. 92%, if spot measurements are assessed to estimate the provenance of ore samples originating from two geographically defined source areas. A parameterized performance measure, together with common methods for its optimization, was evaluated to account for the presence of uneven datasets. Optimization of the classification function threshold improves the performance, as class importance is shifted towards one of those classes. For this dataset, the average performance of the SVMs is significantly better compared to that of LPBoost.

  6. Integrating principal component analysis and vector quantization with support vector regression for sulfur content prediction in HDS process

    Directory of Open Access Journals (Sweden)

    Shokri Saeid

    2015-01-01

    Full Text Available An accurate prediction of sulfur content is very important for proper operation and product quality control in the hydrodesulfurization (HDS) process. For this purpose, a reliable data-driven soft sensor utilizing support vector regression (SVR) was developed, and the effects of integrating vector quantization (VQ) and principal component analysis (PCA) on the performance of this soft sensor were studied. First, in the pre-processing step, the PCA and VQ techniques were used to reduce the dimensions of the original input datasets. Then, the compressed datasets were used as input variables for the SVR model. Experimental data from the HDS setup were employed to validate the proposed integrated model. Integrating the VQ/PCA techniques with the SVR model increased the prediction accuracy of SVR. The obtained results show that the integrated VQ-SVR technique was better than PCA-SVR in prediction accuracy. Also, VQ decreased the total training and test time of the SVR model in comparison with PCA. For further evaluation, the performance of the VQ-SVR model was also compared to that of plain SVR. The obtained results indicated that the VQ-SVR model delivered the most satisfactory prediction performance (AARE = 0.0668 and R² = 0.995) among the investigated models.
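
    A hedged sketch of the PCA-SVR variant as a scikit-learn pipeline on synthetic process data; the VQ variant would replace the PCA step with a codebook-based compressor, and the dimensions and target below are invented for illustration:

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVR

    rng = np.random.default_rng(9)
    X = rng.standard_normal((400, 20))        # process variables (temp, pressure, ...)
    y = X[:, :3].sum(axis=1) + 0.1 * rng.standard_normal(400)  # toy sulfur content

    # PCA-SVR: compress the inputs, then regress on the retained scores.
    soft_sensor = make_pipeline(StandardScaler(), PCA(n_components=5),
                                SVR(kernel='rbf', C=10.0))
    soft_sensor.fit(X, y)
    print(soft_sensor.score(X, y))   # R^2 on training data, for illustration only
    ```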

  7. A Large-Scale 3D Object Recognition dataset

    DEFF Research Database (Denmark)

    Sølund, Thomas; Glent Buch, Anders; Krüger, Norbert

    2016-01-01

    geometric groups; concave, convex, cylindrical and flat 3D object models. The object models have varying amounts of local geometric features to challenge existing local shape feature descriptors in terms of descriptiveness and robustness. The dataset is validated in a benchmark which evaluates the matching … performance of 7 different state-of-the-art local shape descriptors. Further, we validate the dataset in a 3D object recognition pipeline. Our benchmark shows, as expected, that local shape feature descriptors without any global point relation across the surface have a poor matching performance with flat …

  8. Attenuated Vector Tomography -- An Approach to Image Flow Vector Fields with Doppler Ultrasonic Imaging

    International Nuclear Information System (INIS)

    Huang, Qiu; Peng, Qiyu; Huang, Bin; Cheryauka, Arvi; Gullberg, Grant T.

    2008-01-01

    The measurement of flow obtained using continuous wave Doppler ultrasound is formulated as a directional projection of a flow vector field. When a continuous ultrasound wave bounces against a flowing particle, a signal is backscattered. This signal obtains a Doppler frequency shift proportional to the speed of the particle along the ultrasound beam. This occurs for each particle along the beam, giving rise to a Doppler velocity spectrum. The first moment of the spectrum provides the directional projection of the flow along the ultrasound beam. Signals reflected from points further away from the detector will have lower amplitude than signals reflected from points closer to the detector. The effect is very much akin to that modeled by the attenuated Radon transform in emission computed tomography. A least-squares method was adopted to reconstruct a 2D vector field from directional projection measurements. Attenuated projections of only the longitudinal projections of the vector field were simulated. The components of the vector field were reconstructed using the gradient algorithm to minimize a least-squares criterion. This result was compared with the reconstruction of longitudinal projections of the vector field without attenuation. If attenuation is known, the algorithm was able to accurately reconstruct both components of the full vector field from only one set of directional projection measurements. A better reconstruction was obtained with attenuation than without attenuation, implying that attenuation provides important information for the reconstruction of flow vector fields. This confirms previous work where we showed that knowledge of the attenuation distribution helps in the reconstruction of MRI diffusion tensor fields from fewer than the required measurements. In the application of ultrasound the attenuation distribution is obtained with pulse wave transmission computed tomography and flow information is obtained with continuous wave Doppler

  9. Redesigning the DOE Data Explorer to Embed Dataset Relationships at the Point of Search and to Reflect Landing Page Organization

    Directory of Open Access Journals (Sweden)

    Sara Studwell

    2017-04-01

    Full Text Available Scientific research is producing ever-increasing amounts of data. Organizing and reflecting relationships across data collections, datasets, publications, and other research objects are essential functionalities of the modern science environment, yet challenging to implement. Landing pages are often used to provide ‘big picture’ contextual frameworks for datasets and data collections, and many large-volume data holders are utilizing them in thoughtful, creative ways. The benefits of their organizational efforts, however, are not realized unless the user eventually sees the landing page at the end point of their search. What if that organization and ‘big picture’ context could benefit the user at the beginning of the search? That is a challenging approach, but the Department of Energy’s (DOE) Office of Scientific and Technical Information (OSTI) is redesigning the database functionality of the DOE Data Explorer (DDE) with that goal in mind. Phase I is focused on redesigning the DDE database to leverage relationships between two existing distinct populations in DDE, data Projects and individual Datasets, and then adding a third, intermediate population, data Collections. Mapped, structured linkages, designed to show users the relationships, will allow users to make informed search choices. These linkages will be sustainable and scalable, created automatically with the use of new metadata fields and existing authorities. Phase II will study selected DOE Data ID Service clients, analyzing how their landing pages are organized and how that organization might be used to improve DDE search capabilities. At the heart of both phases is the realization that adding more metadata information for cross-referencing may require additional effort from data scientists. OSTI’s approach seeks to leverage existing metadata and landing page intelligence without imposing an additional burden on the data creators.

  10. Multi-view L2-SVM and its multi-view core vector machine.

    Science.gov (United States)

    Huang, Chengquan; Chung, Fu-lai; Wang, Shitong

    2016-03-01

    In this paper, a novel L2-SVM based classifier, Multi-view L2-SVM, is proposed to address multi-view classification tasks. The proposed Multi-view L2-SVM classifier does not have any bias in its objective function and hence has flexibility like ν-SVC, in the sense that the number of yielded support vectors can be controlled by a pre-specified parameter. The proposed Multi-view L2-SVM classifier can make full use of the coherence and the differences of different views by imposing consensus among multiple views to improve the overall classification performance. Besides, based on the generalized core vector machine (GCVM), the proposed Multi-view L2-SVM classifier is extended into its GCVM version, MvCVM, which enables fast training on large-scale multi-view datasets, with asymptotic time complexity linear in the sample size and space complexity independent of the sample size. Our experimental results demonstrate the effectiveness of the proposed Multi-view L2-SVM classifier for small-scale multi-view datasets and of the proposed MvCVM classifier for large-scale multi-view datasets. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Lagrangian analysis of vector and tensor fields: Algorithmic foundations and applications in medical imaging and computational fluid dynamics

    OpenAIRE

    Ding, Zi'ang

    2016-01-01

    Both vector and tensor fields are important mathematical tools used to describe the physics of many phenomena in science and engineering. Effective vector and tensor field visualization techniques are therefore needed to interpret and analyze the corresponding data and achieve new insight into the considered problem. This dissertation is concerned with the extraction of important structural properties from vector and tensor datasets. Specifically, we present a unified approach for the charact...

  12. Supplier Short Term Load Forecasting Using Support Vector Regression and Exogenous Input

    Science.gov (United States)

    Matijaš, Marin; Vukićević, Milan; Krajcar, Slavko

    2011-09-01

    In power systems, the task of load forecasting is important for keeping the equilibrium between production and consumption. With the liberalization of electricity markets, the task of load forecasting has changed because each market participant has to forecast their own load. The consumption of end-consumers is stochastic in nature. Due to competition, suppliers are not in a position to transfer their costs to end-consumers; it is therefore essential to keep the forecasting error as low as possible. Numerous papers investigate load forecasting from the perspective of the grid or production planning. We research forecasting models from the perspective of a supplier. In this paper, we investigate different combinations of exogenous input on simulated supplier loads and show that using points of delivery as a feature for Support Vector Regression leads to lower forecasting error, while adding the customer number in different datasets does the opposite.

  13. Accelerating Relevance Vector Machine for Large-Scale Data on Spark

    Directory of Open Access Journals (Sweden)

    Liu Fang

    2017-01-01

    Full Text Available The relevance vector machine (RVM) is a machine learning algorithm based on a sparse Bayesian framework which performs well when running classification and regression tasks on small-scale datasets. However, RVM also has certain drawbacks that restrict its practical application, namely (1) a slow training process, and (2) poor performance when training on large-scale datasets. In order to solve these problems, we first propose Discrete AdaBoost RVM (DAB-RVM), which incorporates ensemble learning into RVM. This method performs well with large-scale low-dimensional datasets. However, as the number of features increases, the training time of DAB-RVM increases as well. To avoid this phenomenon, we utilize the abundant training samples of large-scale datasets and propose all-features boosting RVM (AFB-RVM), which modifies the way weak classifiers are obtained. In our experiments we study the differences between various boosting techniques with RVM, demonstrating the performance of the proposed approaches on Spark. As a result of this paper, two approaches on Spark for different types of large-scale datasets are available.

  14. Vector manifestation and violation of vector dominance in hot matter

    International Nuclear Information System (INIS)

    Harada, Masayasu; Sasaki, Chihiro

    2004-01-01

    We show the details of the calculation of the hadronic thermal corrections to the two-point functions in the effective field theory of QCD for pions and vector mesons based on the hidden local symmetry (HLS) in hot matter using the background field gauge. We study the temperature dependence of the pion velocity in the low-temperature region determined from the hadronic thermal corrections, and show that, due to the presence of the dynamical vector meson, the pion velocity is smaller than the speed of light already at the one-loop level, in contrast to the result obtained in ordinary chiral perturbation theory including only the pion at one loop. Including the intrinsic temperature dependences of the parameters of the HLS Lagrangian determined from the underlying QCD through the Wilsonian matching, we show how the vector manifestation (VM), in which the massless vector meson becomes the chiral partner of the pion, is realized at the critical temperature. We present a new prediction of the VM on the direct photon-π-π coupling which measures the validity of the vector dominance (VD) of the electromagnetic form factor of the pion: we find that the VD is largely violated at the critical temperature, which indicates that the assumption of the VD made in several analyses of the dilepton spectra in hot matter may need to be weakened for consistently including the effect of the dropping mass of the vector meson

  15. Columbia River ESI: SOCECON (Socioeconomic Resource Points and Lines)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains vector points and lines representing human-use resource data for Columbia River. In the data set, vector points represent aquaculture sites,...

  16. Review of data mining applications for quality assessment in manufacturing industry: support vector machines

    Directory of Open Access Journals (Sweden)

    Rostami Hamidey

    2015-01-01

    In many modern manufacturing industries, data that characterize the manufacturing process are electronically collected and stored in databases. Due to advances in data collection systems and analysis tools, data mining (DM) has widely been applied for quality assessment (QA) in manufacturing industries. In DM, the choice of technique to be used in analyzing a dataset and assessing the quality depends on the understanding of the analyst. On the other hand, with the advent of improved and efficient prediction techniques, an analyst needs to know which tool performs better for a particular type of dataset. Although a few review papers have recently been published to discuss DM applications in manufacturing for QA, this paper provides an extensive review investigating the application of a particular DM technique, namely the support vector machine (SVM), to QA problems. This review provides a comprehensive analysis of the literature from various points of view, such as DM concepts, data preprocessing, DM applications for each quality task, SVM preliminaries, and application results. Summary tables and figures are also provided alongside the analyses. Finally, conclusions and future research directions are provided.

  17. Kernel-based discriminant feature extraction using a representative dataset

    Science.gov (United States)

    Li, Honglin; Sancho Gomez, Jose-Luis; Ahalt, Stanley C.

    2002-07-01

    Discriminant Feature Extraction (DFE) is widely recognized as an important pre-processing step in classification applications. Most DFE algorithms are linear and thus can only explore the linear discriminant information among the different classes. Recently, there have been several promising attempts to develop nonlinear DFE algorithms, among which is Kernel-based Feature Extraction (KFE). The efficacy of KFE has been experimentally verified on both synthetic data and real problems. However, KFE has some known limitations. First, KFE does not work well for strongly overlapped data. Second, KFE employs all of the training set samples during the feature extraction phase, which can result in significant computation when applied to very large datasets. Finally, KFE can result in overfitting. In this paper, we propose a substantial improvement to KFE that overcomes the above limitations by using a representative dataset, which consists of critical points that are generated from data-editing techniques and centroid points that are determined by using the Frequency Sensitive Competitive Learning (FSCL) algorithm. Experiments show that this new KFE algorithm performs well on significantly overlapped datasets, and it also reduces computational complexity. Further, by controlling the number of centroids, the overfitting problem can be effectively alleviated.
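
    The representative-dataset idea can be illustrated on synthetic data: the training set is condensed to a small set of centroids before fitting a kernel feature extractor, so the kernel matrix shrinks accordingly. In this sketch, KMeans stands in for the FSCL algorithm and KernelPCA for KFE; both substitutions, and all sizes, are assumptions made for the sake of a runnable example.

```python
# Illustrative sketch of a "representative dataset" for kernel feature
# extraction. KMeans approximates FSCL; KernelPCA approximates KFE.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(1)
X = rng.normal(size=(10000, 8))                  # large training set

# In practice, centroids would be computed per class; here we simply reduce
# the whole set to 100 representatives.
centroids = KMeans(n_clusters=100, n_init=5,
                   random_state=0).fit(X).cluster_centers_

# Kernel extractor fit on 100 points instead of 10,000: the kernel matrix
# shrinks from 10^8 to 10^4 entries.
kfe = KernelPCA(n_components=5, kernel="rbf", gamma=0.1).fit(centroids)
features = kfe.transform(X)                      # project the full set cheaply
print(features.shape)                            # (10000, 5)
```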

  18. A comprehensive comparison of random forests and support vector machines for microarray-based cancer classification

    Directory of Open Access Journals (Sweden)

    Wang Lily

    2008-07-01

    Background Cancer diagnosis and clinical outcome prediction are among the most important emerging applications of gene expression microarray technology, with several molecular signatures on their way toward clinical deployment. Use of the most accurate classification algorithms available for microarray gene expression data is a critical ingredient in order to develop the best possible molecular signatures for patient care. As suggested by a large body of literature to date, support vector machines can be considered "best of class" algorithms for classification of such data. Recent work, however, suggests that random forest classifiers may outperform support vector machines in this domain. Results In the present paper we identify methodological biases of prior work comparing random forests and support vector machines and conduct a new rigorous evaluation of the two algorithms that corrects these limitations. Our experiments use 22 diagnostic and prognostic datasets and show that support vector machines outperform random forests, often by a large margin. Our data also underline the importance of sound research design in benchmarking and comparison of bioinformatics algorithms. Conclusion We found that both on average and in the majority of microarray datasets, random forests are outperformed by support vector machines, both in the settings when no gene selection is performed and when several popular gene selection methods are used.

  19. The index of a vector field under blow ups

    International Nuclear Information System (INIS)

    Seade, J.

    1991-08-01

    A useful technique when studying the behaviour of holomorphic vector fields around their isolated singularities is that of blowing up the singular points. On the other hand, the most basic invariant of a vector field with isolated singularities is its local index, as defined by Poincaré and Hopf. It is thus natural to ask how the index of a vector field behaves under blow ups. The purpose of this work is to study and answer this question, by taking a rather general point of view and bearing in mind that complex manifolds have a powerful birational invariant, the Todd genus. 20 refs

  20. A Core Set Based Large Vector-Angular Region and Margin Approach for Novelty Detection

    Directory of Open Access Journals (Sweden)

    Jiusheng Chen

    2016-01-01

    A large vector-angular region and margin (LARM) approach is presented for novelty detection based on imbalanced data. The key idea is to construct the largest vector-angular region in the feature space to separate normal training patterns and, meanwhile, to maximize the vector-angular margin between the surface of this optimal vector-angular region and the abnormal training patterns. In order to improve the generalization performance of LARM, the vector-angular distribution is optimized by maximizing the vector-angular mean and minimizing the vector-angular variance, which separates the normal and abnormal examples well. However, the inherent computation of the quadratic programming (QP) solver takes O(n³) training time and at least O(n²) space, which might be computationally prohibitive for large scale problems. Using a (1 + ε)- and (1 − ε)-approximation algorithm, the core set based LARM algorithm is proposed for fast training of the LARM problem. Experimental results based on imbalanced datasets have validated the favorable efficiency of the proposed approach in novelty detection.

  1. Web-based GIS: the vector-borne disease airline importation risk (VBD-AIR) tool.

    Science.gov (United States)

    Huang, Zhuojie; Das, Anirrudha; Qiu, Youliang; Tatem, Andrew J

    2012-08-14

    Over the past century, the size and complexity of the air travel network has increased dramatically. Nowadays, there are 29.6 million scheduled flights per year and around 2.7 billion passengers are transported annually. The rapid expansion of the network increasingly connects regions of endemic vector-borne disease with the rest of the world, resulting in challenges to health systems worldwide in terms of vector-borne pathogen importation and disease vector invasion events. Here we describe the development of a user-friendly Web-based GIS tool, the Vector-Borne Disease Airline Importation Risk Tool (VBD-AIR), to help better define the roles of airports and airlines in the transmission and spread of vector-borne diseases. Spatial datasets on modeled global disease and vector distributions, as well as climatic and air network traffic data, were assembled. These were combined to derive relative risk metrics via air travel for imported infections, imported vectors and onward transmission, and incorporated into a three-tier server architecture in a Model-View-Controller framework with distributed GIS components. A user-friendly web portal was built that enables dynamic querying of the spatial databases to provide relevant information. The VBD-AIR tool constructed enables the user to explore the interrelationships among modeled global distributions of vector-borne infectious diseases (malaria, dengue, yellow fever and chikungunya) and international air service routes to quantify seasonally changing risks of vector and vector-borne disease importation and spread by air travel, forming an evidence base to help plan mitigation strategies. The VBD-AIR tool is available at http://www.vbd-air.com. VBD-AIR supports a data flow that generates analytical results from disparate but complementary datasets into an organized cartographical presentation on a web map for the assessment of vector-borne disease movements on the air travel network. The framework built provides a flexible

  2. 3D Model Retrieval Based on Vector Quantisation Index Histograms

    International Nuclear Information System (INIS)

    Lu, Z M; Luo, H; Pan, J S

    2006-01-01

    This paper proposes a novel technique to retrieve 3D mesh models using vector quantisation index histograms. Firstly, points are sampled uniformly on the mesh surface. Secondly, for each point, five features representing global and local properties are extracted, yielding a feature vector per point. Thirdly, we select several models from each class and employ their feature vectors as a training set. After training with the LBG algorithm, a public codebook is constructed. Next, the codeword index histograms of the query model and of the models in the database are computed. The last step is to compute the distance between the histogram of the query and those of the models in the database. Experimental results show the effectiveness of our method
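
    A minimal sketch of the retrieval scheme follows, assuming per-point feature vectors have already been extracted from each mesh; KMeans approximates the LBG codebook training, and the 5-D features here are random placeholders rather than the paper's surface properties.

```python
# Hedged sketch: retrieval by vector-quantisation index histograms.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
train_feats = rng.normal(size=(5000, 5))          # pooled 5-D point features
codebook = KMeans(n_clusters=64, n_init=5, random_state=0).fit(train_feats)

def index_histogram(point_features: np.ndarray) -> np.ndarray:
    """Quantise each point feature to its nearest codeword, histogram the indices."""
    idx = codebook.predict(point_features)
    hist = np.bincount(idx, minlength=64).astype(float)
    return hist / hist.sum()                      # normalise for point-count invariance

query = index_histogram(rng.normal(size=(800, 5)))
database = [index_histogram(rng.normal(size=(800, 5))) for _ in range(10)]
dists = [np.abs(query - h).sum() for h in database]   # L1 histogram distance
print("best match:", int(np.argmin(dists)))
```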

  3. Deep, multi-stage transcriptome of the schistosomiasis vector Biomphalaria glabrata provides platform for understanding molluscan disease-related pathways

    Directory of Open Access Journals (Sweden)

    Nathan J Kenny

    2016-10-01

    Background The gastropod mollusc Biomphalaria glabrata is well known as a vector for the tropical disease schistosomiasis, which affects nearly 200 million people worldwide. Despite intensive study, our understanding of the genetic basis of B. glabrata development, growth and disease resistance is constrained by limited genetic resources, constraints for which next-generation sequencing methods provide a ready solution. Methods Illumina sequencing and de novo assembly using the Trinity program were used to generate a high-quality transcriptomic dataset spanning the entirety of in ovo development in schistosomiasis-free B. glabrata. This was subjected to automated (KEGG, BLAST2GO) and manual annotation efforts, allowing insight into the gene complements of this species in a number of contexts. Results Excellent dataset recovery was observed, with 133,084 contigs produced with a mean size of 2,219.48 bp. 80,952 (60.8 %) returned a BLASTx hit with an E value of less than 10⁻³, and 74,492 (55.97 %) were either mapped or assigned a GO identity using the BLAST2GO program. The CEGMA set of core eukaryotic genes was found to be 99.6 % present, indicating exceptional transcriptome completeness. We were able to identify a wealth of disease-pathway related genes within our dataset, including the Wnt, apoptosis and Notch pathways. This provides an invaluable reference point for further work into molluscan development and evolution, for studying the impact of schistosomiasis in this species, and perhaps for providing targets for the treatment of this widespread disease. Conclusions Here we present a deep transcriptome of an embryonic sample of schistosomiasis-free B. glabrata, presenting a comprehensive dataset for comparison to disease-affected specimens and from which conclusions can be drawn about the genetics of this widespread medical model. Furthermore, the dataset provided by this sequencing provides a useful reference point for comparison to other mollusc

  4. An introduction to vectors, vector operators and vector analysis

    CERN Document Server

    Joag, Pramod S

    2016-01-01

    Ideal for undergraduate and graduate students of science and engineering, this book covers fundamental concepts of vectors and their applications in a single volume. The first unit deals with basic formulation, both conceptual and theoretical. It discusses applications of algebraic operations, Levi-Civita notation, and curvilinear coordinate systems like spherical polar and parabolic systems, and the analytical geometry of curves and surfaces. The second unit delves into the algebra of operators and their types and also explains the equivalence between the algebra of vector operators and the algebra of matrices. The formulation of eigenvectors and eigenvalues of a linear vector operator is elaborated using vector algebra. The third unit deals with vector analysis, discussing vector valued functions of a scalar variable and functions of vector argument (both scalar valued and vector valued), thus covering both scalar and vector fields as well as vector integration.

  5. Twisted vector bundles on pointed nodal curves

    Indian Academy of Sciences (India)

    by identifying the points p1 and p2. If m ≥ 2, let R1,...,Rm−1 be m − 1 copies of the projective line P1 and let xi, yi be two distinct points in Ri. Let R be the nodal curve which arises from the union R0 ⊔ R1 ⊔ ··· ⊔ Rm−1 ⊔ Rm by identifying p1 ∈ R0 and p2 ∈ Rm with x1 ∈ R1 and ym−1 ∈ Rm−1 respectively and by identifying ...

  6. Search for singly produced vector-like down-type quarks with ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Rehnisch, Laura; Dietrich, Janet; Lacker, Heiko [Humboldt-Universitaet zu Berlin (Germany)

    2016-07-01

    Vector-like quarks are predicted in several models, e.g. composite Higgs models. Due to the relatively high mass limits from previous searches and the limited phase space for pair-produced heavy quarks, it is natural to investigate single production of these particles. A search for down-type vector-like quarks decaying to a W boson and a top quark, conducted on the 8 TeV dataset recorded in 2012 with the ATLAS detector, is presented. Two models, a vector-like quark, B, and an excited quark with vector-like couplings, b*, have been investigated. The presented and recently published results were obtained using single-lepton and dilepton final states, while the presentation focuses on single-lepton events in which boosted decay topologies of the heavy quarks are used. This increases the sensitivity, as jets from hadronically decaying W bosons and top quarks are likely to be merged. In the absence of a significant excess of the data over the expected background, cross-section limits were set. Excited vector-like quarks with masses below 1.5 TeV are excluded.

  7. Floating point only SIMD instruction set architecture including compare, select, Boolean, and alignment operations

    Science.gov (United States)

    Gschwind, Michael K [Chappaqua, NY

    2011-03-01

    Mechanisms for implementing a floating point only single instruction multiple data instruction set architecture are provided. A processor is provided that comprises an issue unit, an execution unit coupled to the issue unit, and a vector register file coupled to the execution unit. The execution unit has logic that implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA). The floating point vector registers of the vector register file store both scalar and floating point values as vectors having a plurality of vector elements. The processor may be part of a data processing system.

  8. Automated Coarse Registration of Point Clouds in 3d Urban Scenes Using Voxel Based Plane Constraint

    Science.gov (United States)

    Xu, Y.; Boerner, R.; Yao, W.; Hoegner, L.; Stilla, U.

    2017-09-01

    To obtain full coverage of 3D scans in a large-scale urban area, registration between point clouds acquired via terrestrial laser scanning (TLS) is normally mandatory. However, due to the complex urban environment, the automatic registration of different scans is still a challenging problem. In this work, we propose an automatic marker-free method for fast and coarse registration between point clouds using the geometric constraints of planar patches under a voxel structure. Our proposed method consists of four major steps: the voxelization of the point cloud, the approximation of planar patches, the matching of corresponding patches, and the estimation of transformation parameters. In the voxelization step, the point cloud of each scan is organized with a 3D voxel structure, by which the entire point cloud is partitioned into small individual patches. In the following step, we represent the points of each voxel with an approximated plane function, and select those patches resembling planar surfaces. Afterwards, a RANSAC-based strategy is applied for matching the corresponding patches. Among all the planar patches of a scan, we randomly select a set of three planar patches, in order to build a coordinate frame via their normal vectors and their intersection points. The transformation parameters between scans are calculated from these two coordinate frames. The set of planar patches whose transformation parameters fit the largest number of coplanar patches is identified as the optimal candidate set for estimating the correct transformation parameters. The experimental results using TLS datasets of different scenes reveal that our proposed method can be both effective and efficient for the coarse registration task. In particular, for the fast orientation between scans, our proposed method achieves a registration error of less than around 2 degrees on the testing datasets, and is much more efficient than the classical baseline methods.
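
    The frame-from-three-planes step can be illustrated as follows, assuming three roughly mutually orthogonal planar patches (n·x = d) have been matched between two scans; the planes and the rigid transform below are synthetic stand-ins, not data from the paper.

```python
# Hedged sketch: rigid transform from a matched triplet of planes.
import numpy as np

def frame_from_planes(normals: np.ndarray, ds: np.ndarray):
    """Coordinate frame from three planes n_i . x = d_i: origin at their
    common intersection point, axes from Gram-Schmidt on the normals."""
    origin = np.linalg.solve(normals, ds)        # intersection of the 3 planes
    e1 = normals[0] / np.linalg.norm(normals[0])
    e2 = normals[1] - (normals[1] @ e1) * e1
    e2 /= np.linalg.norm(e2)
    e3 = np.cross(e1, e2)
    return origin, np.column_stack([e1, e2, e3])

# Hypothetical matched triplet: the planes of scan B are the planes of
# scan A moved by a rigid transform (R_true, t_true).
nA = np.eye(3)
dA = np.array([1.0, 2.0, 3.0])
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
nB = nA @ R_true.T              # transformed plane normals (rows)
dB = dA + nB @ t_true           # transformed plane offsets

oA, FA = frame_from_planes(nA, dA)
oB, FB = frame_from_planes(nB, dB)
R = FB @ FA.T                   # rotation between the two frames
t = oB - R @ oA                 # translation between the two origins
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```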

  9. A Bayesian spatio-temporal geostatistical model with an auxiliary lattice for large datasets

    KAUST Repository

    Xu, Ganggang

    2015-01-01

    When spatio-temporal datasets are large, the computational burden can lead to failures in the implementation of traditional geostatistical tools. In this paper, we propose a computationally efficient Bayesian hierarchical spatio-temporal model in which the spatial dependence is approximated by a Gaussian Markov random field (GMRF) while the temporal correlation is described using a vector autoregressive model. By introducing an auxiliary lattice on the spatial region of interest, the proposed method is not only able to handle irregularly spaced observations in the spatial domain, but it is also able to bypass the missing data problem in a spatio-temporal process. Because the computational complexity of the proposed Markov chain Monte Carlo algorithm is of the order O(n) with n the total number of observations in space and time, our method can be used to handle very large spatio-temporal datasets with reasonable CPU times. The performance of the proposed model is illustrated using simulation studies and a dataset of precipitation data from the coterminous United States.

  10. Existence and Stability of Solutions for Implicit Multivalued Vector Equilibrium Problems

    Directory of Open Access Journals (Sweden)

    Li Qiuying

    2011-01-01

    A class of implicit multivalued vector equilibrium problems is studied. By using the generalized Fan-Browder fixed point theorem, some existence results of solutions for the implicit multivalued vector equilibrium problems are obtained under some suitable assumptions. Moreover, a stability result of solutions for the implicit multivalued vector equilibrium problems is derived. These results extend and unify some recent results for implicit vector equilibrium problems, multivalued vector variational inequality problems, and vector variational inequality problems.

  11. Large-scale ligand-based predictive modelling using support vector machines.

    Science.gov (United States)

    Alvarsson, Jonathan; Lampa, Samuel; Schaal, Wesley; Andersson, Claes; Wikberg, Jarl E S; Spjuth, Ola

    2016-01-01

    The increasing size of datasets in drug discovery makes it challenging to build robust and accurate predictive models within a reasonable amount of time. In order to investigate the effect of dataset sizes on predictive performance and modelling time, ligand-based regression models were trained on open datasets of varying sizes of up to 1.2 million chemical structures. For modelling, two implementations of support vector machines (SVM) were used. Chemical structures were described by the signatures molecular descriptor. Results showed that for the larger datasets, the LIBLINEAR SVM implementation performed on par with the well-established libsvm with a radial basis function kernel, but with dramatically less time for model building even on modest computer resources. Using a non-linear kernel proved to be infeasible for large data sizes, even with substantial computational resources on a computer cluster. To deploy the resulting models, we extended the Bioclipse decision support framework to support models from LIBLINEAR and made our models of logD and solubility available from within Bioclipse.
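
    The scaling behaviour reported above can be reproduced in miniature with scikit-learn, whose LinearSVR wraps liblinear and whose SVR wraps libsvm; the dataset sizes, dimensionality, and parameters below are illustrative assumptions, not the study's setup.

```python
# Hedged sketch: linear SVM (liblinear-backed) vs RBF-kernel SVM on growing data.
import time
import numpy as np
from sklearn.svm import LinearSVR, SVR

rng = np.random.default_rng(3)
for n in (1000, 4000, 16000):
    X = rng.normal(size=(n, 50))
    y = X @ rng.normal(size=50) + rng.normal(0, 0.1, n)
    t0 = time.perf_counter()
    LinearSVR(C=1.0, max_iter=5000).fit(X, y)    # roughly linear in n
    t1 = time.perf_counter()
    SVR(kernel="rbf", C=1.0).fit(X, y)           # kernel matrix grows as n^2
    t2 = time.perf_counter()
    print(f"n={n}: linear {t1 - t0:.2f}s, rbf kernel {t2 - t1:.2f}s")
```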

  12. The charge form factor of the neutron from ²H⃗(e⃗,e′n)p

    CERN Document Server

    Passchier, I; Szczerba, D; Alarcon, R; Bauer, T S; Boersma, D J; Van der Brand, J F J; Bulten, H J; Ferro-Luzzi, M; Higinbotham, D W; Jager, C W D; Klous, S; Kolster, H; Lang, J; Nikolenko, D M; Nooren, G J; Norum, B E; Poolman, H R; Rachek, Igor A; Simani, M C; Six, E; Vries, H D; Wang, K; Zhou, Z L

    2000-01-01

    We report on the first measurement of spin-correlation parameters in quasifree electron scattering from vector-polarized deuterium. Polarized electrons were injected into an electron storage ring at a beam energy of 720 MeV. A Siberian snake was employed to preserve longitudinal polarization at the interaction point. Vector-polarized deuterium was produced by an atomic beam source and injected into an open-ended cylindrical cell, internal to the electron storage ring. The spin correlation parameter A^V_ed was measured for the reaction ²H⃗(e⃗,e′n)p at a four-momentum transfer squared of 0.21 (GeV/c)², from which a value for the charge form factor of the neutron was extracted.

  13. A cross-country Exchange Market Pressure (EMP dataset

    Directory of Open Access Journals (Sweden)

    Mohit Desai

    2017-06-01

    The data presented in this article are related to the research article titled “An exchange market pressure measure for cross country analysis” (Patnaik et al. [1]). In this article, we present the dataset of Exchange Market Pressure (EMP) values for 139 countries along with their conversion factors, ρ (rho). Exchange Market Pressure, expressed as the percentage change in the exchange rate, measures the change in the exchange rate that would have taken place had the central bank not intervened. The conversion factor ρ can be interpreted as the change in the exchange rate associated with $1 billion of intervention. Estimates of the conversion factor ρ allow us to calculate a monthly time series of EMP for 139 countries. Additionally, the dataset contains the 68% confidence interval (high and low values) for the point estimates of the ρ’s. Using the standard errors of the estimates of the ρ’s, we obtain one-sigma intervals around the mean estimates of the EMP values. These values are also reported in the dataset.

  14. A cross-country Exchange Market Pressure (EMP) dataset.

    Science.gov (United States)

    Desai, Mohit; Patnaik, Ila; Felman, Joshua; Shah, Ajay

    2017-06-01

    The data presented in this article are related to the research article titled "An exchange market pressure measure for cross country analysis" (Patnaik et al. [1]). In this article, we present the dataset of Exchange Market Pressure (EMP) values for 139 countries along with their conversion factors, ρ (rho). Exchange Market Pressure, expressed as the percentage change in the exchange rate, measures the change in the exchange rate that would have taken place had the central bank not intervened. The conversion factor ρ can be interpreted as the change in the exchange rate associated with $1 billion of intervention. Estimates of the conversion factor ρ allow us to calculate a monthly time series of EMP for 139 countries. Additionally, the dataset contains the 68% confidence interval (high and low values) for the point estimates of the ρ's. Using the standard errors of the estimates of the ρ's, we obtain one-sigma intervals around the mean estimates of the EMP values. These values are also reported in the dataset.

  15. Scalar-vector bootstrap

    Energy Technology Data Exchange (ETDEWEB)

    Rejon-Barrera, Fernando [Institute for Theoretical Physics, University of Amsterdam, Science Park 904, Postbus 94485, 1090 GL, Amsterdam (Netherlands); Robbins, Daniel [Department of Physics, Texas A&M University, TAMU 4242, College Station, TX 77843 (United States)

    2016-01-22

    We work out all of the details required for implementation of the conformal bootstrap program applied to the four-point function of two scalars and two vectors in an abstract conformal field theory in arbitrary dimension. This includes a review of which tensor structures make appearances, a construction of the projectors onto the required mixed symmetry representations, and a computation of the conformal blocks for all possible operators which can be exchanged. These blocks are presented as differential operators acting upon the previously known scalar conformal blocks. Finally, we set up the bootstrap equations which implement crossing symmetry. Special attention is given to the case of conserved vectors, where several simplifications occur.

  16. Representation and display of vector field topology in fluid flow data sets

    Science.gov (United States)

    Helman, James; Hesselink, Lambertus

    1989-01-01

    The visualization of physical processes in general and of vector fields in particular is discussed. An approach to visualizing flow topology that is based on the physics and mathematics underlying the physical phenomenon is presented. It involves determining critical points in the flow where the velocity vector vanishes. The critical points, connected by principal lines or planes, determine the topology of the flow. The complexity of the data is reduced without sacrificing the quantitative nature of the data set. By reducing the original vector field to a set of critical points and their connections, a representation of the topology of a two-dimensional vector field that is much smaller than the original data set but retains with full precision the information pertinent to the flow topology is obtained. This representation can be displayed as a set of points and tangent curves or as a graph. Analysis (including algorithms), display, interaction, and implementation aspects are discussed.
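
    A minimal sketch of the critical-point extraction underlying such topology representations follows: locate grid cells where both velocity components change sign, then classify each critical point by the eigenvalues of the local Jacobian. The analytic test field and grid resolution are assumptions for illustration.

```python
# Hedged sketch: detect and classify critical points of a sampled 2D field.
import numpy as np

n = 200
x, y = np.meshgrid(np.linspace(-2, 2, n), np.linspace(-2, 2, n), indexing="ij")
u, v = x * x - 1.0, y              # zeros at (±1, 0): a node and a saddle

def classify(J: np.ndarray) -> str:
    ev = np.linalg.eigvals(J)
    if np.all(np.abs(ev.imag) > 1e-9):
        return "center/focus"
    if ev.real.min() * ev.real.max() < 0:
        return "saddle"
    return "node (source)" if ev.real.min() > 0 else "node (sink)"

h = 4.0 / (n - 1)
for i in range(n - 1):
    for j in range(n - 1):
        cu, cv = u[i:i+2, j:j+2], v[i:i+2, j:j+2]
        if cu.min() < 0 < cu.max() and cv.min() < 0 < cv.max():
            # Jacobian from finite differences on the cell corners
            J = np.array([[(cu[1, 0] - cu[0, 0]) / h, (cu[0, 1] - cu[0, 0]) / h],
                          [(cv[1, 0] - cv[0, 0]) / h, (cv[0, 1] - cv[0, 0]) / h]])
            print(f"critical point near ({x[i, j]:.2f}, {y[i, j]:.2f}):", classify(J))
```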

  17. Artificial frame filling using adaptive neural fuzzy inference system for particle image velocimetry dataset

    Science.gov (United States)

    Akdemir, Bayram; Doǧan, Sercan; Aksoy, Muharrem H.; Canli, Eyüp; Özgören, Muammer

    2015-03-01

    Liquid behaviors are very important for many areas, especially for mechanical engineering. A high-speed camera is one way to observe and study liquid behaviors: the camera traces dust or colored markers travelling in the liquid and takes as many pictures per second as possible. Every image is a large data structure due to its resolution. For fast liquid velocities, it is not easy to evaluate the captured images or produce a fluent frame sequence from them. Artificial intelligence is widely used in science to solve nonlinear problems, and the adaptive neural fuzzy inference system (ANFIS) is a common artificial intelligence technique in the literature. Any particle in a liquid has a two-dimensional velocity and its derivatives. Here, an adaptive neural fuzzy inference system has been used offline to create an artificial frame between the previous and subsequent frames; it uses velocities and vorticities to create a crossing point vector between the previous and subsequent points. In this study, the adaptive neural fuzzy inference system has been used to fill virtual frames in among the real frames in order to improve image continuity, which makes the images much more understandable at chaotic or vortical points. After the adaptive neural fuzzy inference system is executed, the image dataset doubles in size, with frames alternating between virtual and real. The results were evaluated using R² testing and the mean squared error. R² testing indicates statistical similarity; values of 0.82, 0.81, 0.85 and 0.8 were obtained for the velocities and their derivatives, respectively.

  18. The principal part of plane vector fields with fixed Newton diagram

    International Nuclear Information System (INIS)

    Berezovskaya, F.

    1991-09-01

    Considering the main part of a plane vector field in a neighbourhood of a singular point O(0,0), it is well known that if the real parts of the eigenvalues at the singularity are non-zero, the linear part of the vector field provides the topological normal form and the tangents of all the o-curves. The problem is to find the main part of a plane vector field which would provide the topological orbital normal form in a neighbourhood of the singular point and the asymptotics of all characteristic trajectories. In this work the solution to the problem for the generic case of so-called nondegenerate vector fields is given, using the Newton diagram. 13 refs, 5 figs

  19. Characterization Of Ocean Wind Vector Retrievals Using ERS-2 High-Resolution Long-Term Dataset And Buoy Measurements

    Science.gov (United States)

    Polverari, F.; Talone, M.; Crapolicchio, R.; Levy, G.; Marzano, F.

    2013-12-01

    The European Remote-sensing Satellite (ERS)-2 scatterometer provides wind retrievals over the ocean. To satisfy the need for a high-quality and homogeneous set of scatterometer measurements, the European Space Agency (ESA) has developed the Advanced Scatterometer Processing System (ASPS) project, with which a long-term dataset of new ERS-2 wind products, with an enhanced resolution of 25 km, has been generated by reprocessing the entire ERS mission. This paper presents the main results of the validation of this new dataset using in situ measurements provided by the Prediction and Research Moored Array in the Tropical Atlantic (PIRATA). The comparison indicates that, on average, the scatterometer data agree well with buoy measurements; however, the scatterometer tends to overestimate low winds and underestimate high winds.

  20. Clifford Fourier transform on vector fields.

    Science.gov (United States)

    Ebling, Julia; Scheuermann, Gerik

    2005-01-01

    Image processing and computer vision have robust methods for feature extraction and the computation of derivatives of scalar fields. Furthermore, interpolation and the effects of applying a filter can be analyzed in detail, which is an advantage when applying these methods to vector fields in order to obtain a solid theoretical basis for feature extraction. We recently introduced the Clifford convolution, which is an extension of the classical convolution on scalar fields and provides a unified notation for the convolution of scalar and vector fields. It has attractive geometric properties that allow pattern matching on vector fields. In image processing, the convolution and Fourier transform operators are closely related by the convolution theorem and, in this paper, we extend the Fourier transform to include general elements of Clifford algebra, called multivectors, including scalars and vectors. The resulting convolution and derivative theorems are extensions of those for convolution and the Fourier transform on scalar fields. The Clifford Fourier transform allows a frequency analysis of vector fields and of the behavior of vector-valued filters. In frequency space, vectors are transformed into general multivectors of the Clifford algebra. Many basic vector-valued patterns, such as sources, sinks, saddle points, and potential vortices, can be described by a few multivectors in frequency space.

  1. Maxwell's Multipole Vectors and the CMB

    OpenAIRE

    Weeks, Jeffrey R.

    2004-01-01

    The recently re-discovered multipole vector approach to understanding the harmonic decomposition of the cosmic microwave background traces its roots to Maxwell's Treatise on Electricity and Magnetism. Taking Maxwell's directional derivative approach as a starting point, the present article develops a fast algorithm for computing multipole vectors, with an exposition that is both simpler and better motivated than in the author's previous work. Tests show the resulting algorithm, coded up as a ...

  2. Progressive Classification Using Support Vector Machines

    Science.gov (United States)

    Wagstaff, Kiri; Kocurek, Michael

    2009-01-01

    An algorithm for progressive classification of data, analogous to progressive rendering of images, makes it possible to compromise between speed and accuracy. This algorithm uses support vector machines (SVMs) to classify data. An SVM is a machine learning algorithm that builds a mathematical model of the desired classification concept by identifying the critical data points, called support vectors. Coarse approximations to the concept require only a few support vectors, while precise, highly accurate models require far more support vectors. Once the model has been constructed, the SVM can be applied to new observations. The cost of classifying a new observation is proportional to the number of support vectors in the model. When computational resources are limited, an SVM of the appropriate complexity can be produced. However, if the constraints are not known when the model is constructed, or if they can change over time, a method for adaptively responding to the current resource constraints is required. This capability is particularly relevant for spacecraft (or any other real-time systems) that perform onboard data analysis. The new algorithm enables the fast, interactive application of an SVM classifier to a new set of data. The classification process achieved by this algorithm is characterized as progressive because a coarse approximation to the true classification is generated rapidly and thereafter iteratively refined. The algorithm uses two SVMs: (1) a fast, approximate one and (2) a slow, highly accurate one. New data are initially classified by the fast SVM, producing a baseline approximate classification. For each classified data point, the algorithm calculates a confidence index that indicates the likelihood that it was classified correctly in the first pass. Next, the data points are sorted by their confidence indices and progressively reclassified by the slower, more accurate SVM, starting with the items most likely to be incorrectly classified. The user
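
    A condensed sketch of this progressive scheme follows, assuming a linear SVM as the fast approximate model, an RBF-kernel SVM as the accurate one, and distance to the decision boundary as the confidence index; none of these specific choices is taken from the original article.

```python
# Hedged sketch: progressive classification with a fast and a slow SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC, SVC

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_train, y_train, X_new = X[:2000], y[:2000], X[2000:]

fast = LinearSVC(C=1.0).fit(X_train, y_train)            # cheap to apply
slow = SVC(kernel="rbf", C=10.0).fit(X_train, y_train)   # accurate but costly

labels = fast.predict(X_new)                      # baseline approximation
confidence = np.abs(fast.decision_function(X_new))  # margin distance as confidence
order = np.argsort(confidence)                    # least confident first

budget = 200                                      # refine only what resources allow
labels[order[:budget]] = slow.predict(X_new[order[:budget]])
print("refined", budget, "of", len(labels), "points")
```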

  3. Effect of saddle-point anisotropy on point-defect drift-diffusion into straight dislocations

    International Nuclear Information System (INIS)

    Skinner, B.C.; Woo, C.H.

    1983-02-01

    Effects on point-defect drift-diffusion in the strain fields of edge or screw dislocations, due to the anisotropy of the point defect in its saddle-point configuration, are investigated. Expressions for sink strength and bias that include the saddle-point shape effect are derived, both in the absence and presence of an externally applied stress. These are found to depend on intrinsic parameters such as the relaxation volume and the saddle-point shape of the point defects, and extrinsic parameters such as temperature and the magnitude and direction of the externally applied stress with respect to the line direction and Burgers vector direction of the dislocation. The theory is applied to fcc copper and bcc iron. It is found that screw dislocations are biased sinks and that the stress-induced bias differential for the edge dislocations depends much more on the line direction than the Burgers vector direction. Comparison with the stress-induced bias differential due to the usual SIPA effect is made. It is found that the present effect causes a bias differential that is more than an order of magnitude larger

  4. Emory University: High-Throughput Protein-Protein Interaction Dataset for Lung Cancer-Associated Genes | Office of Cancer Genomics

    Science.gov (United States)

    To discover novel PPI signaling hubs for lung cancer, the CTD2 Center at Emory utilized large-scale genomics datasets and the literature to compile a set of lung cancer-associated genes. A library of expression vectors was generated for these genes and utilized for detecting pairwise PPIs with cell lysate-based TR-FRET assays in a high-throughput screening format.

  5. Ultrametric distribution of culture vectors in an extended Axelrod model of cultural dissemination

    Science.gov (United States)

    Stivala, Alex; Robins, Garry; Kashima, Yoshihisa; Kirley, Michael

    2014-05-01

    The Axelrod model of cultural diffusion is an apparently simple model that is capable of complex behaviour. A recent work used a real-world dataset of opinions as initial conditions, demonstrating the effects of the ultrametric distribution of empirical opinion vectors in promoting cultural diversity in the model. Here we quantify the degree of ultrametricity of the initial culture vectors and investigate the effect of varying degrees of ultrametricity on the absorbing state of both a simple and extended model. Unlike the simple model, ultrametricity alone is not sufficient to sustain long-term diversity in the extended Axelrod model; rather, the initial conditions must also have sufficiently large variance in intervector distances. Further, we find that a scheme for evolving synthetic opinion vectors from cultural "prototypes" shows the same behaviour as real opinion data in maintaining cultural diversity in the extended model; whereas neutral evolution of cultural vectors does not.

  6. A Hamilton-like vector for the special-relativistic Coulomb problem

    International Nuclear Information System (INIS)

    Munoz, Gerardo; Pavic, Ivana

    2006-01-01

    A relativistic point charge moving in a Coulomb potential does not admit a conserved Hamilton vector. Despite this fact, a Hamilton-like vector may be developed that proves useful in the derivation and analysis of the particle's orbit

  7. On some orthogonality properties of Maxwell's multipole vectors

    International Nuclear Information System (INIS)

    Gramada, Apostol

    2007-01-01

    We determine the location of the expansion points with respect to which the two Maxwell's multipole vectors of the quadrupole moment and the dipole vector of a distribution of charge form an orthogonal trihedron. We find that with respect to these 'orthogonality centres' both the dipole and the quadrupole moments are each characterized by a single real parameter. We further show that the orthogonality centres coincide with the stationary points of the magnitude of the quadrupole moment and, therefore, they can be seen as an extension of the concept of centre of the dipole moment of a neutral system introduced previously in the literature. The nature of the stationary points then provides the means for the classification of a distribution of charge in two different categories

  8. A method for generating large datasets of organ geometries for radiotherapy treatment planning studies

    International Nuclear Information System (INIS)

    Hu, Nan; Cerviño, Laura; Segars, Paul; Lewis, John; Shan, Jinlu; Jiang, Steve; Zheng, Xiaolin; Wang, Ge

    2014-01-01

    With the rapidly increasing application of adaptive radiotherapy, large datasets of organ geometries based on the patient’s anatomy are desired to support clinical application or research work, such as image segmentation, re-planning, and organ deformation analysis. Sometimes only limited datasets are available in clinical practice. In this study, we propose a new method to generate large datasets of organ geometries to be utilized in adaptive radiotherapy. Given a training dataset of organ shapes derived from daily cone-beam CT, we align them into a common coordinate frame and select one of the training surfaces as reference surface. A statistical shape model of organs was constructed, based on the establishment of point correspondence between surfaces and non-uniform rational B-spline (NURBS) representation. A principal component analysis is performed on the sampled surface points to capture the major variation modes of each organ. A set of principal components and their respective coefficients, which represent organ surface deformation, were obtained, and a statistical analysis of the coefficients was performed. New sets of statistically equivalent coefficients can be constructed and assigned to the principal components, resulting in a larger geometry dataset for the patient’s organs. These generated organ geometries are realistic and statistically representative
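
    The generation step can be sketched as follows: corresponding surface points are stacked into shape vectors, principal variation modes are extracted, and statistically equivalent coefficients are drawn to synthesise new geometries. The Gaussian coefficient model and the toy cohort below are assumptions made for a runnable example.

```python
# Hedged sketch: statistical shape model sampling for synthetic organ geometries.
import numpy as np

rng = np.random.default_rng(4)
n_shapes, n_points = 30, 500

# Toy training cohort: 30 aligned organ surfaces, each flattened into a
# 3*n_points shape vector (real data would come from cone-beam CT contours).
mean_shape = rng.normal(size=3 * n_points)
true_modes = rng.normal(size=(4, 3 * n_points))
training = mean_shape + rng.normal(size=(n_shapes, 4)) @ true_modes

mu = training.mean(axis=0)
U, s, Vt = np.linalg.svd(training - mu, full_matrices=False)
k = 4
modes = Vt[:k]                              # major variation modes (PCA)
std = s[:k] / np.sqrt(n_shapes - 1)         # standard deviation of each mode

# Draw statistically equivalent coefficients, synthesise new geometries.
coeffs = rng.normal(0.0, std, size=(100, k))
new_shapes = mu + coeffs @ modes
print(new_shapes.shape)                     # (100, 1500)
```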

  9. The NASA Subsonic Jet Particle Image Velocimetry (PIV) Dataset

    Science.gov (United States)

    Bridges, James; Wernet, Mark P.

    2011-01-01

    Many tasks in fluids engineering require prediction of the turbulence of jet flows. The present report documents the single-point statistics of velocity, mean and variance, of cold and hot jet flows. The jet velocities ranged from 0.5 to 1.4 times the ambient speed of sound, and temperatures ranged from unheated to a static temperature ratio of 2.7. Further, the report assesses the accuracy of the data, e.g., establishing uncertainties for the data. This paper covers the following five tasks: (1) Document the acquisition and processing procedures used to create the particle image velocimetry (PIV) datasets. (2) Compare the PIV data with hotwire and laser Doppler velocimetry (LDV) data published in the open literature. (3) Compare different datasets acquired at the same flow conditions in multiple tests to establish uncertainties. (4) Create a consensus dataset for a range of hot jet flows, including uncertainty bands. (5) Analyze this consensus dataset for self-consistency and compare jet characteristics to those of the open literature. The final objective was fulfilled by using the potential core length and the spread rate of the half-velocity radius to collapse the mean and turbulent velocity fields over the first 20 jet diameters.

  10. Vehicle Classification Using an Imbalanced Dataset Based on a Single Magnetic Sensor

    Directory of Open Access Journals (Sweden)

    Chang Xu

    2018-05-01

    This paper aims to improve the accuracy of automatic vehicle classifiers for imbalanced datasets. Classification is performed using a single anisotropic magnetoresistive sensor, with the vehicles involved being classified into hatchbacks, sedans, buses, and multi-purpose vehicles (MPVs). Using time domain and frequency domain features in combination with three common classification algorithms in pattern recognition, we develop a novel feature extraction method for vehicle classification. These three common classification algorithms are the k-nearest neighbor, the support vector machine, and the back-propagation neural network. Nevertheless, a problem remains: the original vehicle magnetic dataset collected is imbalanced, which may lead to inaccurate classification results. With this in mind, we apply an approach called SMOTE, which can further boost the performance of the classifiers. Experimental results show that the k-nearest neighbor (KNN) classifier with the SMOTE algorithm can reach a classification accuracy of 95.46%, thus minimizing the effect of the imbalance.

  11. Vehicle Classification Using an Imbalanced Dataset Based on a Single Magnetic Sensor.

    Science.gov (United States)

    Xu, Chang; Wang, Yingguan; Bao, Xinghe; Li, Fengrong

    2018-05-24

    This paper aims to improve the accuracy of automatic vehicle classifiers for imbalanced datasets. Classification is performed using a single anisotropic magnetoresistive sensor, with the vehicles involved being classified into hatchbacks, sedans, buses, and multi-purpose vehicles (MPVs). Using time domain and frequency domain features in combination with three common classification algorithms in pattern recognition, we develop a novel feature extraction method for vehicle classification. These three common classification algorithms are the k-nearest neighbor, the support vector machine, and the back-propagation neural network. Nevertheless, a problem remains: the original vehicle magnetic dataset collected is imbalanced, which may lead to inaccurate classification results. With this in mind, we apply an approach called SMOTE, which can further boost the performance of the classifiers. Experimental results show that the k-nearest neighbor (KNN) classifier with the SMOTE algorithm can reach a classification accuracy of 95.46%, thus minimizing the effect of the imbalance.
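
    The SMOTE-plus-KNN combination reported in both versions of this record can be illustrated in a few lines using the imbalanced-learn implementation of SMOTE; the dataset below is a synthetic imbalanced stand-in, not the magnetic-sensor data.

```python
# Hedged sketch: oversample minority classes with SMOTE, then classify with KNN.
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=2000, n_classes=4, n_informative=6,
                           weights=[0.7, 0.15, 0.1, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

print("before:", Counter(y_tr))
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # synthesise minority samples
print("after: ", Counter(y_bal))

knn = KNeighborsClassifier(n_neighbors=5).fit(X_bal, y_bal)
print("test accuracy:", knn.score(X_te, y_te))
```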

  12. Allegheny County Address Points

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — This dataset contains address points which represent physical address locations assigned by the Allegheny County addressing authority. Data is updated by County...

  13. Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets

    KAUST Repository

    Zhang, Bohai

    2014-01-01

    Various continuously-indexed spatio-temporal process models have been constructed to characterize spatio-temporal dependence structures, but the computational complexity for model fitting and predictions grows in a cubic order with the size of dataset and application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov chain Monte Carlo (RJMCMC) algorithm is proposed to select knots automatically from a discrete set of spatio-temporal points. Our approach is applicable to nonseparable and nonstationary spatio-temporal covariance models. We illustrate the effectiveness of our method through simulation experiments and application to an ozone measurement dataset.

  14. Support vector machine for the diagnosis of malignant mesothelioma

    Science.gov (United States)

    Ushasukhanya, S.; Nithyakalyani, A.; Sivakumar, V.

    2018-04-01

    Malignant mesothelioma is a disease in which malignant (cancer) cells form in the lining of the chest or abdomen. Exposure to asbestos can affect the risk of malignant mesothelioma. Signs and symptoms of malignant mesothelioma include shortness of breath and pain under the rib cage. Tests that examine the inside of the chest and abdomen are used to detect (find) and diagnose malignant mesothelioma. Certain factors affect prognosis (chance of recovery) and treatment options. In this study, support vector machine (SVM) classifiers were utilized for mesothelioma disease diagnosis. The SVM results are compared with mesothelioma disease findings using the same dataset. The support vector machine algorithm achieves 92.5% accuracy, obtained by means of 3-fold cross-validation. The mesothelioma disease dataset was taken from institutional reports from Turkey.

  15. Efficient Verifiable Range and Closest Point Queries in Zero-Knowledge

    Directory of Open Access Journals (Sweden)

    Ghosh Esha

    2016-10-01

    We present an efficient method for answering one-dimensional range and closest-point queries in a verifiable and privacy-preserving manner. We consider a model where a data owner outsources a dataset of key-value pairs to a server, which answers range and closest-point queries issued by a client and provides proofs of the answers. The client verifies the correctness of the answers while learning nothing about the dataset besides the answers to the current and previous queries. Our work yields, for the first time, a zero-knowledge privacy assurance for authenticated range and closest-point queries. Previous work leaked the size of the dataset and used an inefficient proof protocol. Our construction is based on hierarchical identity-based encryption. We prove its security and analyze its efficiency both theoretically and with experiments on synthetic and real data (the Enron email and Boston taxi datasets).

  16. Dataset of transcriptional landscape of B cell early activation

    Directory of Open Access Journals (Sweden)

    Alexander S. Garruss

    2015-09-01

    Signaling via B cell receptors (BCR) and Toll-like receptors (TLRs) results in activation of B cells with distinct physiological outcomes, but the transcriptional regulatory mechanisms that drive activation and distinguish these pathways remain unknown. At early time points after BCR and TLR ligand exposure, 0.5 and 2 h, RNA-seq was performed, allowing observations on rapid transcriptional changes. At 2 h, ChIP-seq was performed to allow observations on important regulatory mechanisms potentially driving transcriptional change. The dataset includes RNA-seq, ChIP-seq of control (input), RNA Pol II, H3K4me3, and H3K27me3, and a separate RNA-seq for miRNA expression, which can be found at Gene Expression Omnibus Dataset GSE61608. Here, we provide details on the experimental and analysis methods used to obtain and analyze this dataset and to examine the transcriptional landscape of B cell early activation.

  17. Cosmological Solutions of Tensor–Vector Theories of Gravity by ...

    Indian Academy of Sciences (India)

    We consider tensor–vector theories by varying the space-time–matter coupling ... solutions by considering the character of critical points of the theory and their stability ... light (Magueijo 2003) that has arisen from the possibility of a varying fine structure constant. ... Vector-like dark energy displays a series of properties that ...

  18. Prediction of endoplasmic reticulum resident proteins using fragmented amino acid composition and support vector machine

    Directory of Open Access Journals (Sweden)

    Ravindra Kumar

    2017-09-01

    Background The endoplasmic reticulum plays an important role in many cellular processes, which include protein synthesis, folding, and post-translational processing of newly synthesized proteins. It is also the site of quality control of misfolded proteins and the entry point of extracellular proteins to the secretory pathway. Hence, at any given point in time, the endoplasmic reticulum contains two different cohorts of proteins: (i) proteins involved in endoplasmic reticulum-specific functions, which reside in the lumen of the endoplasmic reticulum, called endoplasmic reticulum resident proteins, and (ii) proteins which are in the process of moving to the extracellular space. Thus, endoplasmic reticulum resident proteins must somehow be distinguished from newly synthesized secretory proteins, which pass through the endoplasmic reticulum on their way out of the cell. Approximately only 50% of the proteins used in this study as training data had an endoplasmic reticulum retention signal, which shows that these signals are not necessarily present in all endoplasmic reticulum resident proteins. This also strongly indicates the role of additional factors in the retention of endoplasmic reticulum-specific proteins inside the endoplasmic reticulum. Methods This is a support vector machine based method, in which we used different forms of protein features as inputs for the support vector machine to develop the prediction models. During training, a leave-one-out approach of cross-validation was used. Maximum performance was obtained with a combination of amino acid compositions of different parts of the proteins. Results In this study, we report a novel support vector machine based method for predicting endoplasmic reticulum resident proteins, named ERPred. During training we achieved a maximum accuracy of 81.42% with the leave-one-out approach of cross-validation. When evaluated on an independent dataset, ERPred did prediction with a sensitivity of 72.31% and specificity of 83
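
    The feature side of such a predictor can be sketched as follows: a fragmented amino acid composition (separate compositions for the N-terminal, middle, and C-terminal parts of a sequence) fed to an SVM. The three-way split, the random sequences and labels, and the SVM parameters are illustrative assumptions, not the published ERPred configuration.

```python
# Hedged sketch: fragmented amino acid composition features for an SVM.
import numpy as np
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq: str) -> np.ndarray:
    """20-dim amino acid composition (fractions) of a sequence fragment."""
    counts = np.array([seq.count(a) for a in AMINO_ACIDS], dtype=float)
    return counts / max(len(seq), 1)

def fragmented_features(seq: str) -> np.ndarray:
    """Concatenate compositions of the N-terminal, middle, C-terminal thirds."""
    third = max(len(seq) // 3, 1)
    parts = (seq[:third], seq[third:2 * third], seq[2 * third:])
    return np.concatenate([composition(p) for p in parts])   # 60-dim vector

rng = np.random.default_rng(5)
seqs = ["".join(rng.choice(list(AMINO_ACIDS), size=120)) for _ in range(200)]
labels = rng.integers(0, 2, size=200)          # placeholder ER / non-ER labels

X = np.array([fragmented_features(s) for s in seqs])
clf = SVC(kernel="rbf", C=1.0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```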

  19. Relative Error Evaluation to Typical Open Global dem Datasets in Shanxi Plateau of China

    Science.gov (United States)

    Zhao, S.; Zhang, S.; Cheng, W.

    2018-04-01

    Produced from radar data or stereo remote sensing image pairs, global DEM datasets are one of the most important types of DEM data. Relative error relates to the surface quality represented by DEM data, and hence to geomorphologic and hydrologic applications using DEM data. Taking the Shanxi Plateau of China as the study area, this research evaluated the relative error of typical open global DEM datasets, including Shuttle Radar Topography Mission (SRTM) data with 1 arc second resolution (SRTM1), SRTM data with 3 arc second resolution (SRTM3), ASTER global DEM data in the second version (GDEM-v2) and ALOS World 3D-30m (AW3D) data. Through processing and selection, more than 300,000 ICESat/GLA14 points were used as the GCP data, and the vertical error was computed and compared among the four typical global DEM datasets. Then, more than 2,600,000 ICESat/GLA14 point pairs were acquired using a distance threshold between 100 m and 500 m. Meanwhile, the horizontal distance between every point pair was computed, so the relative error was obtained as slope values based on the vertical error difference and the horizontal distance of the point pairs. Finally, a false slope ratio (FSR) index was computed by analyzing the difference between the DEM and ICESat/GLA14 values for every point pair. Both the relative error and the FSR index were categorically compared for the four DEM datasets under different slope classes. The research results show that, overall, AW3D has the lowest relative error values in mean error, mean absolute error, root mean square error and standard deviation error, followed by the SRTM1 data, whose values are a little higher than those of AW3D; the SRTM3 and GDEM-v2 data have the highest relative error values, and the values for the two datasets are similar. Considering different slope conditions, all four DEM datasets perform better in flat areas and worse in sloping regions; AW3D has the best performance in all the slope classes, a little better than SRTM1; with slope increasing
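
    The point-pair computation described above can be sketched in a few lines: for each ICESat point pair, the slope implied by the DEM error difference over the pair's horizontal distance gives the relative error. The vertical errors and horizontal distances below are synthetic stand-ins for the DEM-minus-ICESat differences.

```python
# Hedged sketch: relative error of a DEM from point pairs, as a slope in degrees.
import numpy as np

rng = np.random.default_rng(6)
n_pairs = 10000
horiz_dist = rng.uniform(100, 500, n_pairs)     # m, the pair selection threshold
vert_err_a = rng.normal(0, 3, n_pairs)          # DEM - ICESat at point A (m)
vert_err_b = rng.normal(0, 3, n_pairs)          # DEM - ICESat at point B (m)

# Relative error: vertical error difference over horizontal distance.
rel_err = np.degrees(np.arctan((vert_err_b - vert_err_a) / horiz_dist))

print("mean error:         ", rel_err.mean())
print("mean absolute error:", np.abs(rel_err).mean())
print("RMSE:               ", np.sqrt((rel_err ** 2).mean()))
print("standard deviation: ", rel_err.std())
```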

  20. Towards extending IFC with point cloud data

    NARCIS (Netherlands)

    Krijnen, T.F.; Beetz, J.; Ochmann, S.; Vock, R.; Wessel, R.

    2015-01-01

    In this paper we suggest an extension to the Industry Foundation Classes model to integrate point cloud datasets. The proposal includes a schema extension to the core model allowing the storage of points either as Cartesian coordinates, points in parametric space of a surface associated with a

  1. Segmentation of Planar Surfaces from Laser Scanning Data Using the Magnitude of Normal Position Vector for Adaptive Neighborhoods.

    Science.gov (United States)

    Kim, Changjae; Habib, Ayman; Pyeon, Muwook; Kwon, Goo-rak; Jung, Jaehoon; Heo, Joon

    2016-01-22

    Diverse approaches to laser point segmentation have been proposed since the emergence of the laser scanning system. Most of these segmentation techniques, however, suffer from limitations such as sensitivity to the choice of seed points, lack of consideration of the spatial relationships among points, and inefficient performance. In an effort to overcome these drawbacks, this paper proposes a segmentation methodology that: (1) reduces the dimensions of the attribute space; (2) considers the attribute similarity and the proximity of the laser point simultaneously; and (3) works well with both airborne and terrestrial laser scanning data. A neighborhood definition based on the shape of the surface increases the homogeneity of the laser point attributes. The magnitude of the normal position vector is used as an attribute for reducing the dimension of the accumulator array. The experimental results demonstrate, through both qualitative and quantitative evaluations, the outcomes' high level of reliability. The proposed segmentation algorithm provided 96.89% overall correctness, 95.84% completeness, a 0.25 m overall mean value of centroid difference, and less than 1° of angle difference. The performance of the proposed approach was also verified with a large dataset and compared with other approaches. Additionally, the evaluation of the sensitivity of the thresholds was carried out. In summary, this paper proposes a robust and efficient segmentation methodology for abstraction of an enormous number of laser points into plane information.

  2. EPA Nanorelease Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — EPA Nanorelease Dataset. This dataset is associated with the following publication: Wohlleben, W., C. Kingston, J. Carter, E. Sahle-Demessie, S. Vazquez-Campos, B....

  3. Supersymmetric localization for BPS black hole entropy: 1-loop partition function from vector multiplets

    International Nuclear Information System (INIS)

    Gupta, Rajesh Kumar; Ito, Yuto; Jeon, Imtak

    2015-01-01

    We use the techniques of supersymmetric localization to compute the BPS black hole entropy in N=2 supergravity. We focus on the n_v+1 vector multiplets on the black hole near-horizon background, which is AdS_2 × S^2 space. We find the localizing saddle point of the vector multiplets by solving the localization equations, and compute the exact one-loop partition function on the saddle point. Furthermore, we propose the appropriate functional integration measure. Through this measure, the one-loop determinant is written in terms of the radius of the physical metric, which depends on the localizing saddle point value of the vector multiplets. The result for the one-loop determinant is consistent with the logarithmic corrections to the BPS black hole entropy from vector multiplets.

  4. Verification of target motion effects on SAR imagery using the Gotcha GMTI challenge dataset

    Science.gov (United States)

    Hack, Dan E.; Saville, Michael A.

    2010-04-01

    This paper investigates the relationship between a ground moving target's kinematic state and its SAR image. While effects such as cross-range offset, defocus, and smearing appear well understood, their derivations in the literature typically employ simplifications of the radar/target geometry and assume point scattering targets. This study adopts a geometrical model for understanding target motion effects in SAR imagery, termed the target migration path, and focuses on experimental verification of predicted motion effects using both simulated and empirical datasets based on the Gotcha GMTI challenge dataset. Specifically, moving target imagery is generated from three data sources: first, simulated phase history for a moving point target; second, simulated phase history for a moving vehicle derived from a simulated Mazda MPV X-band signature; and third, empirical phase history from the Gotcha GMTI challenge dataset. Both simulated target trajectories match the truth GPS target position history from the Gotcha GMTI challenge dataset, allowing direct comparison between all three imagery sets and the predicted target migration path. This paper concludes with a discussion of the parallels between the target migration path and the measurement model within a Kalman filtering framework, followed by conclusions.

  5. Dynamical analysis for a vector-like dark energy

    Energy Technology Data Exchange (ETDEWEB)

    Landim, Ricardo C.G. [Instituto de Fisica, Universidade de Sao Paulo, Departamento de Fisica-Matematica, Sao Paulo, SP (Brazil)

    2016-09-15

    In this paper we perform a dynamical analysis for a vector field as a candidate for dark energy, in the presence of a barotropic fluid. The vector is one component of the so-called cosmic triad, which is a set of three identical copies of an abelian field pointing mutually in orthogonal directions. In order to generalize the analysis, we also assume an interaction between dark energy and the barotropic fluid, with a phenomenological coupling. Both the matter and dark energy eras can be successfully described by the critical points, indicating that dynamical system theory is a viable tool for analyzing asymptotic states of such cosmological models. (orig.)

  6. Diabetic Retinopathy Detection System Using Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Wahyudi Setiawan

    2014-02-01

    Full Text Available Diabetic retinopathy is a complication of diabetes mellitus. It can lead to blindness if not treated as early as possible. The system created in this thesis detects the level of diabetic retinopathy in images obtained from fundus photographs. There are three main steps to solve the problem: preprocessing, feature extraction and classification. The preprocessing methods used in this system are grayscale green channel extraction, Gaussian filtering, Contrast Limited Adaptive Histogram Equalization and masking. Two-Dimensional Linear Discriminant Analysis (2DLDA) is used for feature extraction. A Support Vector Machine (SVM) is used for classification. Testing was performed on the MESSIDOR dataset, with a varying number of images used for the training phase and the remainder used for the testing phase. Test results show an optimal accuracy of 84%. Keywords: Diabetic Retinopathy, Support Vector Machine, Two Dimensional Linear Discriminant Analysis, MESSIDOR
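
    The preprocessing chain named in this record (green channel, Gaussian filter, CLAHE, masking) maps directly onto standard OpenCV calls. The sketch below is an assumed illustration of that chain on a synthetic image; parameter values are placeholders rather than the thesis settings.

```python
# Sketch of the record's fundus-image preprocessing chain in OpenCV.
# A synthetic image stands in for a fundus photograph; replace with
# cv2.imread("fundus.png") for real data. Parameters are assumptions.
import cv2
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(512, 512, 3), dtype=np.uint8)  # BGR

green = img[:, :, 1]                       # green channel: best vessel contrast
blurred = cv2.GaussianBlur(green, (5, 5), 0)

# Contrast Limited Adaptive Histogram Equalization.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(blurred)

# Simple circular mask to suppress the black border around the retina.
h, w = enhanced.shape
mask = np.zeros((h, w), dtype=np.uint8)
cv2.circle(mask, (w // 2, h // 2), min(h, w) // 2, 255, -1)
preprocessed = cv2.bitwise_and(enhanced, enhanced, mask=mask)
print(preprocessed.shape, preprocessed.dtype)
```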

  7. Process for structural geologic analysis of topography and point data

    Science.gov (United States)

    Eliason, Jay R.; Eliason, Valerie L. C.

    1987-01-01

    A quantitative method of geologic structural analysis of digital terrain data is described for implementation on a computer. Assuming selected valley segments are controlled by the underlying geologic structure, topographic lows in the terrain data, defining valley bottoms, are detected, filtered and accumulated into a series of line segments defining contiguous valleys. The line segments are then vectorized to produce vector segments, defining valley segments, which may be indicative of the underlying geologic structure. Coplanar analysis is performed on vector segment pairs to determine which vectors produce planes that represent the underlying geologic structure. Point data such as fracture phenomena, which can be related to fracture planes in 3-dimensional space, can be analyzed to define common plane orientations and locations. The vectors, points, and planes are displayed in various formats for interpretation.

  8. From racks to pointed Hopf algebras

    OpenAIRE

    Andruskiewitsch, Nicolás; Graña, Matías

    2003-01-01

    A fundamental step in the classification of finite-dimensional complex pointed Hopf algebras is the determination of all finite-dimensional Nichols algebras of braided vector spaces arising from groups. The most important class of braided vector spaces arising from groups is the class of braided vector spaces (CX, c^q), where C is the field of complex numbers, X is a rack and q is a 2-cocycle on X with values in C^*. Racks and cohomology of racks appeared also in the work of topologists. This...

  9. Application of Bred Vectors To Data Assimilation

    Science.gov (United States)

    Corazza, M.; Kalnay, E.; Patil, Dj

    We introduced a statistic, the BV-dimension, to measure the effective local finite-time dimensionality of the atmosphere. We show that this dimension is often quite low, and suggest that this finding has important implications for data assimilation and the accuracy of weather forecasting (Patil et al, 2001). The original database for this study was the forecasts of the NCEP global ensemble forecasting system. The initial differences between the control forecast and the perturbed forecasts are called bred vectors. The control and perturbed initial conditions valid at time t = nΔt are evolved using the forecast model until time t = (n+1)Δt. The differences between the perturbed and the control forecasts are scaled down to their initial amplitude, and constitute the bred vectors valid at (n+1)Δt. Their growth rate is typically about 1.5/day. The bred vectors are similar by construction to leading Lyapunov vectors, except that they have small but finite amplitude and they are valid at finite times. The original NCEP ensemble data set has 5 independent bred vectors. We define a local bred vector at each grid point by choosing the 5 by 5 grid points centered at the grid point (a region of about 1100 km by 1100 km), and using the north-south and east-west velocity components at the 500 mb pressure level to form a 50-dimensional column vector. Since we have k=5 global bred vectors, we also have k local bred vectors at each grid point. We estimate the effective dimensionality of the subspace spanned by the local bred vectors by performing a singular value decomposition (EOF analysis). The k local bred vector columns form a 50 x k matrix M. The singular values s(i) of M measure the extent to which the k column unit vectors making up the matrix M point in the direction of the singular vector v(i). We define the bred vector dimension as BVDIM = {Sum[s(i)]}^2 / Sum[s(i)^2]. For example, if 4 out of the 5 vectors lie along v(1) and one lies along v(2), the singular values are (sqrt(4), 1, 0, 0, 0), so BVDIM = (2+1)^2/(4+1) = 1.8
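
    The BV-dimension is a one-line computation once the local bred vectors are stacked into a matrix. A minimal numpy sketch follows, with a synthetic random matrix standing in for the 50 x k local bred vector matrix; the sanity check reproduces the worked example above.

```python
# Sketch: bred vector dimension (BVDIM) from the singular values of the
# 50 x k matrix of local bred vectors. The matrix here is synthetic.
import numpy as np

rng = np.random.default_rng(1)
k = 5
M = rng.normal(size=(50, k))         # columns: local bred vectors

# Normalize columns to unit length, as the definition uses unit vectors.
M = M / np.linalg.norm(M, axis=0)

s = np.linalg.svd(M, compute_uv=False)
bvdim = s.sum() ** 2 / (s ** 2).sum()
print(f"BVDIM = {bvdim:.2f}")        # between 1 and k

# Sanity check against the worked example: singular values (2, 1, 0, 0, 0).
s_ex = np.array([2.0, 1.0, 0.0, 0.0, 0.0])
assert np.isclose(s_ex.sum() ** 2 / (s_ex ** 2).sum(), 1.8)
```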

  10. Online Support Vector Regression with Varying Parameters for Time-Dependent Data

    International Nuclear Information System (INIS)

    Omitaomu, Olufemi A.; Jeong, Myong K.; Badiru, Adedeji B.

    2011-01-01

    Support vector regression (SVR) is a machine learning technique that continues to receive interest in several domains, including manufacturing, engineering, and medicine. In order to extend its application to problems in which datasets arrive constantly and in which batch processing of the datasets is infeasible or expensive, an accurate online support vector regression (AOSVR) technique was proposed. The AOSVR technique efficiently updates a trained SVR function whenever a sample is added to or removed from the training set, without retraining on the entire training data. However, the AOSVR technique assumes that the new samples and the training samples are of the same characteristics; hence, the same values of the SVR parameters are used for training and prediction. This assumption is not applicable to data samples that are inherently noisy and non-stationary, such as sensor data. As a result, we propose Accurate On-line Support Vector Regression with Varying Parameters (AOSVR-VP), which uses varying SVR parameters rather than fixed SVR parameters, and hence accounts for the variability that may exist in the samples. To accomplish this objective, we also propose a generalized weight function to automatically update the weights of SVR parameters in on-line monitoring applications. The proposed function allows for lower and upper bounds for the SVR parameters. We tested our proposed approach and compared results with the conventional AOSVR approach using two benchmark time series datasets and sensor data from a nuclear power plant. The results show that using varying SVR parameters is more applicable to time-dependent data.

  11. Using the Relevance Vector Machine Model Combined with Local Phase Quantization to Predict Protein-Protein Interactions from Protein Sequences

    Directory of Open Access Journals (Sweden)

    Ji-Yong An

    2016-01-01

    Full Text Available We propose a novel computational method known as RVM-LPQ that combines the Relevance Vector Machine (RVM) model and Local Phase Quantization (LPQ) to predict PPIs from protein sequences. The main improvements are the results of representing protein sequences using the LPQ feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM) based classifier. We perform 5-fold cross-validation experiments on Yeast and Human datasets, and we achieve very high accuracies of 92.65% and 97.62%, respectively, which are significantly better than previous works. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the Yeast dataset. The experimental results demonstrate that our RVM-LPQ method is clearly better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can be an automatic decision support tool for future proteomics research.

  12. Applicability of vector processing to large-scale nuclear codes

    International Nuclear Information System (INIS)

    Ishiguro, Misako; Harada, Hiroo; Matsuura, Toshihiko; Okuda, Motoi; Ohta, Fumio; Umeya, Makoto.

    1982-03-01

    To meet the growing trend of computational requirements in JAERI, the introduction of a high-speed computer with vector processing capability (a vector processor) is desirable in the near future. To make effective use of a vector processor, appropriate optimization of nuclear codes to the pipelined-vector architecture is vital, which will pose new problems concerning code development and maintenance. In this report, vector processing efficiency is assessed with respect to large-scale nuclear codes by examining the following items: 1) The present profile of the computational load in JAERI is analyzed by compiling computer utilization statistics. 2) Vector processing efficiency is estimated for the ten most heavily used nuclear codes by analyzing their dynamic behavior when run on a scalar machine. 3) Vector processing efficiency is measured for five other nuclear codes by using the current vector processors, FACOM 230-75 APU and CRAY-1. 4) The effectiveness of applying a high-speed vector processor to nuclear codes is evaluated by taking account of the characteristics of JAERI jobs. Problems of vector processors are also discussed from the viewpoints of code performance and ease of use. (author)

  13. Next generation of adeno-associated virus 2 vectors: Point mutations in tyrosines lead to high-efficiency transduction at lower doses

    Science.gov (United States)

    Zhong, Li; Li, Baozheng; Mah, Cathryn S.; Govindasamy, Lakshmanan; Agbandje-McKenna, Mavis; Cooper, Mario; Herzog, Roland W.; Zolotukhin, Irene; Warrington, Kenneth H.; Weigel-Van Aken, Kirsten A.; Hobbs, Jacqueline A.; Zolotukhin, Sergei; Muzyczka, Nicholas; Srivastava, Arun

    2008-01-01

    Recombinant adeno-associated virus 2 (AAV2) vectors are in use in several Phase I/II clinical trials, but relatively large vector doses are needed to achieve therapeutic benefits. Large vector doses also trigger an immune response as a significant fraction of the vectors fails to traffic efficiently to the nucleus and is targeted for degradation by the host cell proteasome machinery. We have reported that epidermal growth factor receptor protein tyrosine kinase (EGFR-PTK) signaling negatively affects transduction by AAV2 vectors by impairing nuclear transport of the vectors. We have also observed that EGFR-PTK can phosphorylate AAV2 capsids at tyrosine residues. Tyrosine-phosphorylated AAV2 vectors enter cells efficiently but fail to transduce effectively, in part because of ubiquitination of AAV capsids followed by proteasome-mediated degradation. We reasoned that mutations of the surface-exposed tyrosine residues might allow the vectors to evade phosphorylation and subsequent ubiquitination and, thus, prevent proteasome-mediated degradation. Here, we document that site-directed mutagenesis of surface-exposed tyrosine residues leads to production of vectors that transduce HeLa cells ≈10-fold more efficiently in vitro and murine hepatocytes nearly 30-fold more efficiently in vivo at a log lower vector dose. Therapeutic levels of human Factor IX (F.IX) are also produced at an ≈10-fold reduced vector dose. The increased transduction efficiency of tyrosine-mutant vectors is due to lack of capsid ubiquitination and improved intracellular trafficking to the nucleus. These studies have led to the development of AAV vectors that are capable of high-efficiency transduction at lower doses, which has important implications in their use in human gene therapy. PMID:18511559

  14. Application of Vector Triggering Random Decrement

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Ibrahim, S. R.; Brincker, Rune

    This paper deals with applications of the vector triggering Random Decrement technique. This technique is new and developed with the aim of minimizing estimation time and identification errors. The theory behind the technique is discussed in an accompanying paper. The results presented in this paper should be regarded as a further documentation of the technique. The key point in Random Decrement estimation is the formulation of a triggering condition. If the triggering condition is fulfilled a time segment from each measurement is picked out and averaged with previous time segments. The final result is a Random Decrement function from each measurement. In traditional Random Decrement estimation the triggering condition is a scalar condition, which should only be fulfilled in a single measurement. In vector triggering Random Decrement the triggering condition is a vector condition...

  16. Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds

    Science.gov (United States)

    Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.

    2016-04-01

    principal vector similarity criteria. Poles to points are assigned to individual discontinuity objects using easy custom vector clustering and Jaccard distance approaches, and each object is segmented into planar clusters using an improved version of the DBSCAN algorithm. Modal set orientations are then recomputed by cluster-based orientation statistics to avoid the effects of biases related to cluster size and density heterogeneity of the point cloud. Finally, spacing values are measured between individual discontinuity clusters along scanlines parallel to modal pole vectors, whereas individual feature size (persistence) is measured using 3D convex hull bounding boxes. Spacing and size are provided both as raw population data and as summary statistics. The tool is optimized for parallel computing on 64-bit systems, and a Graphical User Interface (GUI) has been developed to manage data processing and provide several outputs, including reclassified point clouds, tables, plots, derived fracture intensity parameters, and export to modelling software tools. We present test applications performed both on synthetic 3D data (simple 3D solids) and real case studies, validating the results against existing geomechanical datasets.
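
    Several steps described here (clustering poles into discontinuity sets, then planar segmentation with DBSCAN) can be prototyped with scikit-learn. The sketch below is a simplified stand-in that clusters synthetic unit normals with DBSCAN under an angular metric; it is one plausible reading of the pipeline, not the authors' implementation.

```python
# Sketch: grouping poles (unit normals) into discontinuity sets with
# DBSCAN on an angular distance. Normals here are synthetic.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)

def noisy_set(mean_normal, n=200, noise=0.05):
    v = mean_normal + rng.normal(scale=noise, size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

normals = np.vstack([
    noisy_set(np.array([0.0, 0.0, 1.0])),     # sub-horizontal set
    noisy_set(np.array([1.0, 0.0, 0.2])),     # steep set striking N-S
    noisy_set(np.array([0.0, 1.0, 0.2])),     # steep set striking E-W
])

def angular_metric(a, b):
    # Angle between poles, ignoring sign (a pole and its antipode match).
    return np.arccos(np.clip(abs(np.dot(a, b)), 0.0, 1.0))

labels = DBSCAN(eps=np.radians(10), min_samples=10,
                metric=angular_metric).fit_predict(normals)
print("discontinuity sets found:", len(set(labels) - {-1}))
```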

  17. RetroTransformDB: A Dataset of Generic Transforms for Retrosynthetic Analysis

    Directory of Open Access Journals (Sweden)

    Svetlana Avramova

    2018-04-01

    Full Text Available Presently, software tools for retrosynthetic analysis are widely used by organic, medicinal, and computational chemists. Rule-based systems extensively use collections of retro-reactions (transforms). While there are many public datasets with reactions in the synthetic direction (usually non-generic reactions), there are no publicly-available databases with generic reactions in a computer-readable format which can be used for the purposes of retrosynthetic analysis. Here we present RetroTransformDB, a dataset of transforms, compiled and coded in SMIRKS line notation by us. The collection is comprised of more than 100 records, each including the reaction name, the SMIRKS linear notation, the functional group to be obtained, and the transform type classification. All SMIRKS transforms were tested syntactically, semantically, and from a chemical point of view in different software platforms. The overall dataset design and the retrosynthetic fitness were analyzed and curated by organic chemistry experts. The RetroTransformDB dataset may be used by open-source and commercial software packages, as well as chemoinformatics tools.
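
    SMIRKS-encoded transforms like those in RetroTransformDB can be parsed and applied with standard cheminformatics toolkits. A hedged RDKit sketch follows; the retro-esterification transform and the target molecule are illustrative placeholders, not entries from the dataset.

```python
# Sketch: applying a SMIRKS retro-transform with RDKit.
# The transform and target molecule are illustrative, not taken
# from RetroTransformDB.
from rdkit import Chem
from rdkit.Chem import AllChem

# Retro-esterification: disconnect an ester into acid + alcohol.
retro = AllChem.ReactionFromSmarts(
    "[C:1](=[O:2])[O:3][C:4]>>[C:1](=[O:2])[OH].[OH][C:4]"
)

target = Chem.MolFromSmiles("CC(=O)OCC")   # ethyl acetate
for products in retro.RunReactants((target,)):
    smis = []
    for p in products:
        Chem.SanitizeMol(p)
        smis.append(Chem.MolToSmiles(p))
    print(".".join(smis))                  # acetic acid + ethanol
```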

  18. Daily precipitation grids for Austria since 1961—development and evaluation of a spatial dataset for hydroclimatic monitoring and modelling

    Science.gov (United States)

    Hiebl, Johann; Frei, Christoph

    2018-04-01

    Spatial precipitation datasets that are long-term consistent, highly resolved and extend over several decades are an increasingly popular basis for modelling and monitoring environmental processes and planning tasks in hydrology, agriculture, energy resources management, etc. Here, we present a grid dataset of daily precipitation for Austria meant to promote such applications. It has a grid spacing of 1 km, extends back to 1961 and is continuously updated. It is constructed with the classical two-tier analysis, involving separate interpolations for mean monthly precipitation and daily relative anomalies. The former was accomplished by kriging with topographic predictors as external drift, utilising 1249 stations. The latter is based on angular distance weighting and uses 523 stations. The input station network was kept largely stationary over time to avoid artefacts affecting long-term consistency. Example cases suggest that the new analysis is at least as plausible as previously existing datasets. Cross-validation and comparison against experimental high-resolution observations (WegenerNet) suggest that the accuracy of the dataset depends on interpretation. Users interpreting grid point values as point estimates must expect systematic overestimates for light and underestimates for heavy precipitation, as well as substantial random errors. Grid point estimates are typically within a factor of 1.5 of in situ observations. Interpreting grid point values as area mean values, conditional biases are reduced and the magnitude of random errors is considerably smaller. Together with a similar dataset of temperature, the new dataset (SPARTACUS) is an interesting basis for modelling environmental processes, studying climate change impacts and monitoring the climate of Austria.

  19. A conceptual prototype for the next-generation national elevation dataset

    Science.gov (United States)

    Stoker, Jason M.; Heidemann, Hans Karl; Evans, Gayla A.; Greenlee, Susan K.

    2013-01-01

    In 2012 the U.S. Geological Survey's (USGS) National Geospatial Program (NGP) funded a study to develop a conceptual prototype for a new National Elevation Dataset (NED) design with expanded capabilities to generate and deliver a suite of bare earth and above ground feature information over the United States. This report details the research on identifying operational requirements based on prior research, evaluation of what is needed for the USGS to meet these requirements, and development of a possible conceptual framework that could potentially deliver the kinds of information that are needed to support NGP's partners and constituents. This report provides an initial proof-of-concept demonstration using an existing dataset, and recommendations for the future, to inform NGP's ongoing and future elevation program planning and management decisions. The demonstration shows that this type of functional process can robustly create derivatives from lidar point cloud data; however, more research needs to be done to see how well it extends to multiple datasets.

  20. Vector magnetometer design study: Analysis of a triaxial fluxgate sensor design demonstrates that all MAGSAT Vector Magnetometer specifications can be met

    Science.gov (United States)

    Adams, D. F.; Hartmann, U. G.; Lazarow, L. L.; Maloy, J. O.; Mohler, G. W.

    1976-01-01

    The design of the vector magnetometer selected for analysis is capable of exceeding the required accuracy of 5 gamma per vector field component. The principal elements that assure this performance level are very low power dissipation triaxial feedback coils surrounding ring core flux-gates and temperature control of the critical components of two-loop feedback electronics. An analysis of the calibration problem points to the need for improved test facilities.

  1. Proteomics dataset

    DEFF Research Database (Denmark)

    Bennike, Tue Bjerg; Carlsen, Thomas Gelsing; Ellingsen, Torkell

    2017-01-01

    The datasets presented in this article are related to the research articles entitled "Neutrophil Extracellular Traps in Ulcerative Colitis: A Proteome Analysis of Intestinal Biopsies" (Bennike et al., 2015 [1]) and "Proteome Analysis of Rheumatoid Arthritis Gut Mucosa" (Bennike et al., 2017 [2]). The data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifiers PXD001608 for ulcerative colitis and control samples, and PXD003082 for rheumatoid arthritis samples.

  2. Parallel Framework for Dimensionality Reduction of Large-Scale Datasets

    Directory of Open Access Journals (Sweden)

    Sai Kiranmayee Samudrala

    2015-01-01

    Full Text Available Dimensionality reduction refers to a set of mathematical techniques used to reduce complexity of the original high-dimensional data, while preserving its selected properties. Improvements in simulation strategies and experimental data collection methods are resulting in a deluge of heterogeneous and high-dimensional data, which often makes dimensionality reduction the only viable way to gain qualitative and quantitative understanding of the data. However, existing dimensionality reduction software often does not scale to datasets arising in real-life applications, which may consist of thousands of points with millions of dimensions. In this paper, we propose a parallel framework for dimensionality reduction of large-scale data. We identify key components underlying the spectral dimensionality reduction techniques, and propose their efficient parallel implementation. We show that the resulting framework can be used to process datasets consisting of millions of points when executed on a 16,000-core cluster, which is beyond the reach of currently available methods. To further demonstrate applicability of our framework we perform dimensionality reduction of 75,000 images representing morphology evolution during manufacturing of organic solar cells in order to identify how processing parameters affect morphology evolution.
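
    The spectral techniques this framework parallelizes share a common serial core: build a (dis)similarity matrix, then extract leading eigenvectors. For reference, here is a minimal numpy sketch of that core for one such method, classical multidimensional scaling; the parallel decomposition described in the paper is not reproduced.

```python
# Sketch: classical multidimensional scaling, the serial core of one
# spectral dimensionality reduction technique. Data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 50))                  # 300 points, 50 dims

# Squared Euclidean distances, then double centering: B = -0.5 * J D J.
D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D @ J

# Leading eigenpairs of B give the low-dimensional embedding.
w, V = np.linalg.eigh(B)
top = np.argsort(w)[::-1][:2]
Y = V[:, top] * np.sqrt(np.maximum(w[top], 0.0))
print("embedding shape:", Y.shape)              # (300, 2)
```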

  3. Cancer Classification Based on Support Vector Machine Optimized by Particle Swarm Optimization and Artificial Bee Colony.

    Science.gov (United States)

    Gao, Lingyun; Ye, Mingquan; Wu, Changrong

    2017-11-29

    Intelligent optimization algorithms have advantages in dealing with complex nonlinear problems accompanied by good flexibility and adaptability. In this paper, the FCBF (Fast Correlation-Based Feature selection) method is used to filter irrelevant and redundant features in order to improve the quality of cancer classification. Then, we perform classification based on SVM (Support Vector Machine) optimized by PSO (Particle Swarm Optimization) combined with ABC (Artificial Bee Colony) approaches, which is represented as PA-SVM. The proposed PA-SVM method is applied to nine cancer datasets, including five datasets of outcome prediction and a protein dataset of ovarian cancer. By comparison with other classification methods, the results demonstrate the effectiveness and the robustness of the proposed PA-SVM method in handling various types of data for cancer classification.
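
    A bare-bones version of the PSO-tuned SVM idea can be sketched in a few lines: particles move through (C, gamma) space and are scored by cross-validation accuracy. The sketch below uses scikit-learn with a plain PSO loop; the ABC hybridization and the FCBF filter from the record are omitted, and all hyperparameters are assumptions.

```python
# Sketch: tuning SVM (C, gamma) with a bare-bones particle swarm.
# The ABC hybrid step and FCBF feature filter of PA-SVM are omitted.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(4)

def fitness(p):
    C, gamma = 10.0 ** p                     # particles live in log10 space
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

n_particles, n_iter = 10, 15
pos = rng.uniform([-1, -6], [3, -1], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()]

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()]

print("best log10(C, gamma):", gbest, "accuracy:", pbest_fit.max())
```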

  4. Evaluation of Modified Categorical Data Fuzzy Clustering Algorithm on the Wisconsin Breast Cancer Dataset

    Directory of Open Access Journals (Sweden)

    Amir Ahmad

    2016-01-01

    Full Text Available The early diagnosis of breast cancer is an important step in the fight against the disease. Machine learning techniques have shown promise in improving our understanding of the disease. As medical datasets consist of data points which cannot be precisely assigned to a class, fuzzy methods have been useful for studying these datasets. Sometimes breast cancer datasets are described by categorical features. Many fuzzy clustering algorithms have been developed for categorical datasets. However, in most of these methods Hamming distance is used to define the distance between two categorical feature values. In this paper, we use a probabilistic distance measure for the distance computation between a pair of categorical feature values. Experiments demonstrate that this distance measure performs better than Hamming distance on the Wisconsin breast cancer data.

  5. Correlation between topological structure and its properties in dynamic singular vector fields.

    Science.gov (United States)

    Vasilev, Vasyl; Soskin, Marat

    2016-04-20

    A new technique for establishing topology measurements of static and dynamic singular vector fields is elaborated. It is based on precise measurement of the 3D landscape of the ellipticity distribution of the singular optical field under test, with C points on the tops of the ellipticity hills. Vector fields possess a three-component topology: areas with right-hand (RH) and left-hand (LH) ellipses, and the L lines delimiting them as the singularities of handedness. The azimuth map of polarization ellipses is common to both the RH and LH ellipses of vector fields and is insensitive to L lines. The strict rules which define the connection between the sign of the underlying optical vortices and the morphological parameters of the upper-lying C points were confirmed experimentally. Percolation phenomena explain their realization in-between singular vector fields and the long duration of their chains, of the order of 10^3 s.

  6. Partition dataset according to amino acid type improves the prediction of deleterious non-synonymous SNPs

    International Nuclear Information System (INIS)

    Yang, Jing; Li, Yuan-Yuan; Li, Yi-Xue; Ye, Zhi-Qiang

    2012-01-01

    Highlights: ► Proper dataset partition can improve the prediction of deleterious nsSNPs. ► Partition according to the original residue type at the nsSNP site is a good criterion. ► A similar strategy is expected to be promising in other machine learning problems. -- Abstract: Many non-synonymous SNPs (nsSNPs) are associated with diseases, and numerous machine learning methods have been applied to train classifiers for sorting disease-associated nsSNPs from neutral ones. The continuously accumulating nsSNP data allows us to further explore better prediction approaches. In this work, we partitioned the training data into 20 subsets according to either the original or the substituted amino acid type at the nsSNP site. Using a support vector machine (SVM), training classification models on each subset resulted in an overall accuracy of 76.3% or 74.9%, depending on the two different partition criteria, while training on the whole dataset obtained an accuracy of only 72.6%. Moreover, the dataset was also randomly divided into 20 subsets, but the corresponding accuracy was only 73.2%. Our results demonstrate that properly partitioning the whole training dataset into subsets, i.e., according to the residue type at the nsSNP site, improves the performance of the trained classifiers significantly, which should be valuable in developing better tools for predicting the disease-association of nsSNPs.
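
    The partition-then-train idea is simple to express in code: split the training set by the residue type at the nsSNP site, fit one SVM per subset, and route each test example to the model for its residue. A schematic scikit-learn sketch follows, with synthetic features standing in for real nsSNP descriptors.

```python
# Sketch: per-residue-type SVM models for nsSNP classification.
# Features and labels are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC

AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")
rng = np.random.default_rng(5)

# Synthetic training data: (residue at nsSNP site, feature vector, label).
n = 2000
residues = rng.choice(AMINO_ACIDS, size=n)
X = rng.normal(size=(n, 30))
y = rng.integers(0, 2, size=n)               # 1 = disease-associated

# One classifier per original residue type.
models = {}
for aa in AMINO_ACIDS:
    idx = residues == aa
    if idx.sum() >= 20:                      # skip residues with scant data
        models[aa] = SVC(kernel="rbf").fit(X[idx], y[idx])

def predict(residue, features):
    """Route a test nsSNP to the model trained on its residue type."""
    return int(models[residue].predict(features.reshape(1, -1))[0])

print(predict("A", rng.normal(size=30)))
```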

  7. Landslide Susceptibility Mapping Using GIS-based Vector Grid File (VGF) Validating with InSAR Techniques: Three Gorges, Yangtze River (China)

    Directory of Open Access Journals (Sweden)

    Cem Kıncal

    2017-04-01

    Full Text Available A landslide susceptibility assessment for the Three Gorges (TG) region (China) was performed in a Geographical Information System (GIS) environment, and Persistent Scatterer (PS) InSAR-derived displacements were used for validation purposes. Badong County in the TG region was chosen as the case study area. Landslide parameters were derived from two datasets. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global Digital Elevation Map (GDEM) was used to calculate slope geometry parameters (slope, aspect, drainage, and lineament), while geology and vegetation cover were obtained from Landsat and ASTER data. The majority of historical landslides occurred in the sandstone-shale-claystone intercalations. It appears that slope gradients are more critical than other parameters such as aspect and drainage. The susceptibility assessment was based on a summation of assigned susceptibility scores (points) for each 30×30 m unit in a database of a Vector Grid File (VGF) composed of 'vector pixels'. A landslide susceptibility map (LSM) was generated using the VGF and classified into low, moderate and high landslide susceptibility zones. The comparison between the LSM and PS InSAR-derived displacements suggests that landslides only account for part of the observed surface movements.

  8. Representative Vector Machines: A Unified Framework for Classical Classifiers.

    Science.gov (United States)

    Gui, Jie; Liu, Tongliang; Tao, Dacheng; Sun, Zhenan; Tan, Tieniu

    2016-08-01

    Classifier design is a fundamental problem in pattern recognition. A variety of pattern classification methods such as the nearest neighbor (NN) classifier, support vector machine (SVM), and sparse representation-based classification (SRC) have been proposed in the literature. These typical and widely used classifiers were originally developed from different theoretical or application motivations, and they are conventionally treated as independent and specific solutions for pattern classification. This paper proposes a novel pattern classification framework, namely, representative vector machines (or RVMs for short). The basic idea of RVMs is to assign the class label of a test example according to its nearest representative vector. The contributions of RVMs are twofold. On the one hand, the proposed RVMs establish a unified framework of classical classifiers because NN, SVM, and SRC can be interpreted as special cases of RVMs with different definitions of representative vectors. Thus, the underlying relationship among a number of classical classifiers is revealed for a better understanding of pattern classification. On the other hand, novel and advanced classifiers are inspired by the framework of RVMs. For example, a robust pattern classification method called the discriminant vector machine (DVM) is motivated by RVMs. Given a test example, DVM first finds its k-NNs and then performs classification based on the robust M-estimator and manifold regularization. Extensive experimental evaluations on a variety of visual recognition tasks such as face recognition (Yale and face recognition grand challenge databases), object categorization (Caltech-101 dataset), and action recognition (Action Similarity LAbeliNg) demonstrate the advantages of DVM over other classifiers.

  9. Segmentation of Planar Surfaces from Laser Scanning Data Using the Magnitude of Normal Position Vector for Adaptive Neighborhoods

    Directory of Open Access Journals (Sweden)

    Changjae Kim

    2016-01-01

    Full Text Available Diverse approaches to laser point segmentation have been proposed since the emergence of the laser scanning system. Most of these segmentation techniques, however, suffer from limitations such as sensitivity to the choice of seed points, lack of consideration of the spatial relationships among points, and inefficient performance. In an effort to overcome these drawbacks, this paper proposes a segmentation methodology that: (1) reduces the dimensions of the attribute space; (2) considers the attribute similarity and the proximity of the laser point simultaneously; and (3) works well with both airborne and terrestrial laser scanning data. A neighborhood definition based on the shape of the surface increases the homogeneity of the laser point attributes. The magnitude of the normal position vector is used as an attribute for reducing the dimension of the accumulator array. The experimental results demonstrate, through both qualitative and quantitative evaluations, the outcomes' high level of reliability. The proposed segmentation algorithm provided 96.89% overall correctness, 95.84% completeness, a 0.25 m overall mean value of centroid difference, and less than 1° of angle difference. The performance of the proposed approach was also verified with a large dataset and compared with other approaches. Additionally, the evaluation of the sensitivity of the thresholds was carried out. In summary, this paper proposes a robust and efficient segmentation methodology for abstraction of an enormous number of laser points into plane information.

  10. Fault Diagnosis in Condition of Sample Type Incompleteness Using Support Vector Data Description

    Directory of Open Access Journals (Sweden)

    Hui Yi

    2015-01-01

    Full Text Available Faulty samples are much harder to acquire than normal samples, especially in complicated systems. This leads to incompleteness of training sample types and, furthermore, a decrease in diagnostic accuracy. In this paper, the relationship between sample-type incompleteness and classifier-based diagnostic accuracy is discussed first. Then, a support vector data description-based approach, which takes the effects of sample-type incompleteness into consideration, is proposed to refine the construction of fault regions and increase the diagnostic accuracy under the condition of incomplete sample types. The effectiveness of the proposed method was validated on both a Gaussian-distributed dataset and a practical dataset. Satisfactory results have been obtained.

  11. Two datasets of defect reports labeled by a crowd of annotators of unknown reliability

    Directory of Open Access Journals (Sweden)

    Jerónimo Hernández-González

    2018-06-01

    Full Text Available Classifying software defects according to any defined taxonomy is not straightforward. In order to automate the classification of software defects, two sets of defect reports were collected from public issue tracking systems from two different real domains. Due to the lack of a domain expert, the collected defects were categorized by a set of annotators of unknown reliability according to their impact, following IBM's orthogonal defect classification taxonomy. Both datasets are prepared to solve the defect classification problem by means of techniques of the learning-from-crowds paradigm (Hernández-González et al. [1]). Two versions of both datasets are publicly shared. In the first version, the raw data is given: the text description of defects together with the category assigned by each annotator. In the second version, the text of each defect has been transformed to a descriptive vector using text-mining techniques.
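
    The second released version of these datasets (defect text transformed to a descriptive vector) corresponds to a routine text-mining step. The record does not specify the exact transformation, so the scikit-learn TF-IDF sketch below is only one plausible realization, on placeholder reports.

```python
# Sketch: turning defect report text into descriptive vectors with TF-IDF.
# The toy reports below are placeholders, not entries from the datasets.
from sklearn.feature_extraction.text import TfidfVectorizer

reports = [
    "Application crashes when saving a file with a long name",
    "Button label is misaligned on the settings page",
    "Memory leak after repeated open and close of connections",
]

vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
X = vectorizer.fit_transform(reports)          # sparse (n_docs, n_terms)
print(X.shape, "non-zeros:", X.nnz)
```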

  12. Hawaii ESI: NESTS (Nest Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for seabird nesting colonies in coastal Hawaii. Vector points in this data set represent locations of...

  13. Maryland ESI: NESTS (Nest Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for raptors in Maryland. Vector points in this data set represent bird nesting sites. Species-specific...

  14. Virginia ESI: REPTPT (Reptile Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for sea turtles in Virginia. Vector points in this data set represent nesting sites. Species-specific...

  15. Publication point indicators

    DEFF Research Database (Denmark)

    Elleby, Anita; Ingwersen, Peter

    2010-01-01

    The paper presents comparative analyses of two publication point systems, the Norwegian and the in-house system from the interdisciplinary Danish Institute of International Studies (DIIS), used as a case in the study for publications published 2006, and compares central citation-based indicators with novel publication point indicators (PPIs) that are formalized and exemplified: the Cumulated Publication Point Indicator (CPPI), which graphically illustrates the cumulated gain of obtained vs. ideal points, both seen as vectors; and the normalized Cumulated Publication Point Index (nCPPI), which represents the cumulated gain of publication success as index values, either graphically… Two diachronic citation windows are applied: 2006-07 and 2006-08. Web of Science (WoS) as well as Google Scholar (GS) are applied to observe the cite delay and citedness for the different document types published by DIIS…

  16. Planar simplification and texturing of dense point cloud maps

    NARCIS (Netherlands)

    Ma, L.; Whelan, T.; Bondarau, Y.; With, de P.H.N.; McDonald, J.

    2013-01-01

    Dense RGB-D based SLAM techniques and high-fidelity LIDAR scanners are examples from an abundant set of systems capable of providing multi-million point datasets. These large datasets quickly become difficult to process and work with due to the sheer volume of data, which typically contains

  17. Multi-SOM: an Algorithm for High-Dimensional, Small Size Datasets

    Directory of Open Access Journals (Sweden)

    Shen Lu

    2013-04-01

    Full Text Available Since it takes time to do experiments in bioinformatics, biological datasets are sometimes small but of high dimensionality. From probability theory, in order to discover knowledge from a set of data, we have to have a sufficient number of samples. Otherwise, the error bounds can become too large to be useful. For the SOM (Self-Organizing Map) algorithm, the initial map is based on the training data. In order to avoid the bias caused by insufficient training data, in this paper we present an algorithm called Multi-SOM. Multi-SOM builds a number of small self-organizing maps instead of just one big map. Bayesian decision theory is used to make the final decision among similar neurons on different maps. In this way, we can better ensure a truly random initial weight vector set, the map size is less of a consideration, and errors tend to average out. In our experiments, as applied to microarray datasets, which are highly dense data composed of genetic information, the precision of Multi-SOMs is 10.58% greater than that of SOMs, and its recall is 11.07% greater than that of SOMs. Thus, the Multi-SOM algorithm is practical.

  18. Multiresolution persistent homology for excessively large biomolecular datasets

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Kelin; Zhao, Zhixiong [Department of Mathematics, Michigan State University, East Lansing, Michigan 48824 (United States); Wei, Guo-Wei, E-mail: wei@math.msu.edu [Department of Mathematics, Michigan State University, East Lansing, Michigan 48824 (United States); Department of Electrical and Computer Engineering, Michigan State University, East Lansing, Michigan 48824 (United States); Department of Biochemistry and Molecular Biology, Michigan State University, East Lansing, Michigan 48824 (United States)

    2015-10-07

    Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large-scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273,780 atoms is successfully analyzed, which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to protein domain classification, which is, to our knowledge, the first time that persistent homology has been used for practical protein domain analysis. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs.

  19. Impedance analysis of acupuncture points and pathways

    International Nuclear Information System (INIS)

    Teplan, Michal; Kukucka, Marek; Ondrejkovicová, Alena

    2011-01-01

    Investigation of the impedance characteristics of acupuncture points from the acoustic to the radio frequency range is addressed. Discernment and localization of acupuncture points in an initial single-subject study was unsuccessfully attempted by the impedance map technique. Vector impedance analyses determined possible resonant zones in the MHz region.

  20. Louisiana ESI: NESTS (Nest Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for seabird and wading bird nesting colonies in coastal Louisiana. Vector points in this data set represent...

  1. Effective population sizes of a major vector of human diseases, Aedes aegypti.

    Science.gov (United States)

    Saarman, Norah P; Gloria-Soria, Andrea; Anderson, Eric C; Evans, Benjamin R; Pless, Evlyn; Cosme, Luciano V; Gonzalez-Acosta, Cassandra; Kamgang, Basile; Wesson, Dawn M; Powell, Jeffrey R

    2017-12-01

    The effective population size (N_e) is a fundamental parameter in population genetics that determines the relative strength of selection and random genetic drift, the effect of migration, levels of inbreeding, and linkage disequilibrium. In many cases where it has been estimated in animals, N_e is on the order of 10%-20% of the census size. In this study, we use 12 microsatellite markers and 14,888 single nucleotide polymorphisms (SNPs) to empirically estimate N_e in Aedes aegypti, the major vector of yellow fever, dengue, chikungunya, and Zika viruses. We used the method of temporal sampling to estimate N_e on a global dataset made up of 46 samples of Ae. aegypti that included multiple time points from 17 widely distributed geographic localities. Our N_e estimates for Ae. aegypti fell within a broad range (~25-3,000) and averaged between 400 and 600 across all localities and time points sampled. Adult census size (N_c) estimates for this species range between one and five thousand, so the N_e/N_c ratio is about the same as for most animals. These N_e values are lower than estimates available for other insects and have important implications for the design of genetic control strategies to reduce the impact of this species of mosquito on human health.

  2. Novel scanning procedure enabling the vectorization of entire rhizotron-grown root systems

    Directory of Open Access Journals (Sweden)

    Lobet Guillaume

    2013-01-01

    Full Text Available This paper presents an original split-and-combine imaging procedure that enables the complete vectorization of complex root systems grown in rhizotrons. The general principle of the method is to (1) separate the root system into a small number of large pieces to reduce root overlap, (2) scan these pieces one by one, (3) analyze the separate images with root tracing software and (4) combine all tracings into a single vectorized root system. This method generates a rich dataset containing morphological, topological and geometrical information of entire root systems grown in rhizotrons. The utility of the method is illustrated with a detailed architectural analysis of a 20-day old maize root system, coupled with a spatial analysis of water uptake patterns.

  3. Novel scanning procedure enabling the vectorization of entire rhizotron-grown root systems.

    Science.gov (United States)

    Lobet, Guillaume; Draye, Xavier

    2013-01-04

    This paper presents an original split-and-combine imaging procedure that enables the complete vectorization of complex root systems grown in rhizotrons. The general principle of the method is to (1) separate the root system into a small number of large pieces to reduce root overlap, (2) scan these pieces one by one, (3) analyze the separate images with root tracing software and (4) combine all tracings into a single vectorized root system. This method generates a rich dataset containing morphological, topological and geometrical information of entire root systems grown in rhizotrons. The utility of the method is illustrated with a detailed architectural analysis of a 20-day old maize root system, coupled with a spatial analysis of water uptake patterns.

  4. Coherent search of continuous gravitational wave signals: extension of the 5-vectors method to a network of detectors

    International Nuclear Information System (INIS)

    Astone, P; Colla, A; Frasca, S; Palomba, C; D'Antonio, S

    2012-01-01

    We describe the extension to multiple datasets of a coherent method for the search of continuous gravitational wave signals, based on the computation of 5-vectors. In particular, we show how to coherently combine different datasets belonging to the same detector or to different detectors. In the latter case, the coherent combination is the way to obtain the maximum increase in signal-to-noise ratio. If the datasets belong to the same detector, the advantage comes mainly from the properties of a quantity called coherence, which is helpful (in both cases, in fact) in rejecting false candidates. The method has been tested by searching for simulated signals injected in Gaussian noise, and the results of the simulations are discussed.

  5. Georeferenced Population Datasets of Mexico (GEO-MEX): Urban Place GIS Coverage of Mexico

    Data.gov (United States)

    National Aeronautics and Space Administration — The Urban Place GIS Coverage of Mexico is a vector based point Geographic Information System (GIS) coverage of 696 urban places in Mexico. Each Urban Place is...

  6. Dataset on usnic acid from Cladonia substellata Vainio (Lichen) schistosomiasis mansoni's vector control and environmental toxicity.

    Science.gov (United States)

    Andrade de Araújo, Hallysson Douglas; Dos Santos Silva, Luanna Ribeiro; de Siqueira, Williams Nascimento; Martins da Fonseca, Caíque Silveira; da Silva, Nicácio Henrique; de Albuquerque Melo, Ana Maria Mendonça; Barroso Martins, Mônica Cristina; de Menezes Lima, Vera Lúcia

    2018-04-01

    This text presents complementary data on the control of the schistosomiasis mansoni vector and on environmental toxicity using usnic acid. These data support our research article "Toxicity of Usnic Acid from Cladonia substellata (Lichen) to embryos and adults of Biomphalaria glabrata" by Araújo et al. [1], and focus on the detailed data regarding the different concentrations of usnic acid and their efficiency for B. glabrata mortality and non-viability, as well as on environmental toxicity, evaluated by A. salina mortality.

  7. Integration of geophysical datasets by a conjoint probability tomography approach: application to Italian active volcanic areas

    Directory of Open Access Journals (Sweden)

    D. Patella

    2008-06-01

    Full Text Available We expand the theory of probability tomography to the integration of different geophysical datasets. The aim of the new method is to improve the information quality using a conjoint occurrence probability function addressed to highlight the existence of common sources of anomalies. The new method is tested on gravity, magnetic and self-potential datasets collected in the volcanic area of Mt. Vesuvius (Naples), and on gravity and dipole geoelectrical datasets collected in the volcanic area of Mt. Etna (Sicily). The application demonstrates that, from a probabilistic point of view, the integrated analysis can delineate the signature of some important volcanic targets better than the analysis of the tomographic image of each dataset considered separately.

  8. Zero-point energy in spheroidal geometries

    OpenAIRE

    Kitson, A. R.; Signal, A. I.

    2005-01-01

    We study the zero-point energy of a massless scalar field subject to spheroidal boundary conditions. Using the zeta-function method, the zero-point energy is evaluated for small ellipticity. Axially symmetric vector fields are also considered. The results are interpreted within the context of QCD flux tubes and the MIT bag model.

  9. Morphological Operations to Extract Urban Curbs in 3D MLS Point Clouds

    Directory of Open Access Journals (Sweden)

    Borja Rodríguez-Cuenca

    2016-06-01

    Full Text Available Automatic curb detection is an important issue in the road maintenance, three-dimensional (3D) urban modeling, and autonomous navigation fields. This paper is focused on the segmentation of curbs and street boundaries using a 3D point cloud captured by a mobile laser scanner (MLS) system. Our method provides a solution based on the projection of the measured point cloud on the XY plane. Over that plane, a segmentation algorithm based on morphological operations is carried out to determine the location of street boundaries. In addition, a solution to extract curb edges based on the roughness of the point cloud is proposed. The proposed method is valid in both straight and curved road sections and applicable to both laser scanner and stereo vision 3D data due to its independence of scanning geometry. The proposed method has been successfully tested on two datasets measured by different sensors. The first dataset corresponds to a point cloud measured by a TOPCON sensor in the Spanish town of Cudillero. The second dataset corresponds to a point cloud measured by a RIEGL sensor in the Austrian town of Horn. The extraction method provides completeness and correctness rates above 90% and quality values higher than 85% in both studied datasets.
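
    The projection-plus-morphology idea can be prototyped quickly: rasterize the point cloud onto an XY occupancy grid, then apply binary morphological operations to clean it and expose boundary cells. The numpy/scipy sketch below uses synthetic points; the grid resolution and structuring element are assumed parameters, and the roughness-based curb edge step is omitted.

```python
# Sketch: project a point cloud to an XY occupancy grid, then use
# morphological opening to clean it. Points here are synthetic.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(6)
pts = rng.uniform(0, 50, size=(20000, 3))      # x, y, z in metres

cell = 0.25                                    # grid resolution (m)
ix = (pts[:, 0] / cell).astype(int)
iy = (pts[:, 1] / cell).astype(int)
grid = np.zeros((int(50 / cell), int(50 / cell)), dtype=bool)
grid[ix, iy] = True                            # occupancy on the XY plane

# Opening removes isolated cells; closing would fill small gaps.
clean = ndimage.binary_opening(grid, structure=np.ones((3, 3)))

# Boundary cells: occupied cells adjacent to empty space.
boundary = clean & ~ndimage.binary_erosion(clean)
print("occupied:", clean.sum(), "boundary cells:", boundary.sum())
```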

  10. Methods for registration laser scanner point clouds in forest stands

    International Nuclear Information System (INIS)

    Bienert, A.; Pech, K.; Maas, H.-G.

    2011-01-01

    Laser scanning is a fast and efficient 3-D measurement technique for capturing surface points that describe the geometry of a complex object in an accurate and reliable way. Besides airborne laser scanning, terrestrial laser scanning is finding growing interest for forestry applications. These two recording platforms show large differences in resolution, recording area and scan viewing direction. Using both datasets for a combined point cloud analysis may yield advantages because of their largely complementary information. In this paper, methods are presented to automatically register airborne and terrestrial laser scanner point clouds of a forest stand. In a first step, tree detection is performed in both datasets in an automatic manner. In a second step, corresponding tree positions are determined using RANSAC. Finally, the geometric transformation is performed, divided into a coarse and a fine registration. After the coarse registration, the fine registration is done in an iterative manner (ICP) using the point clouds themselves. The methods are tested and validated with a dataset of a forest stand. The presented registration results provide accuracies which fulfill forestry requirements.
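    The coarse alignment step, estimating a rigid transform from matched tree positions, can be sketched with the standard SVD-based (Kabsch) solution; this is a generic illustration, not the authors' implementation.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst, e.g.
    terrestrial tree positions onto airborne ones (Kabsch algorithm)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)             # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(H.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t
```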

  11. Polarization speckles and generalized Stokes vector wave: a review [invited]

    DEFF Research Database (Denmark)

    Takeda, Mitsuo; Wang, Wei; Hanson, Steen Grüner

    2010-01-01

    We review some of the statistical properties of polarization-related speckle phenomena, with an introduction of the less known concept of polarization speckles and their spatial degree of polarization. As a useful means to characterize two-point vector field correlations, we review the generalized Stokes parameters proposed by Korotkova and Wolf, and introduce their time-domain representation to describe the space-time evolution of the correlation between random electric vector fields at two different space-time points. This time-domain generalized Stokes vector, with components similar to those of the beam coherence polarization matrix proposed by Gori, is shown to obey the wave equation in exact analogy to a coherence function of scalar fields. Because of this wave nature, the time-domain generalized Stokes vector is referred to as a generalized Stokes vector wave in this paper.

  12. Perturbation vectors to evaluate air quality using lichens and bromeliads: a Brazilian case study.

    Science.gov (United States)

    Monna, F; Marques, A N; Guillon, R; Losno, R; Couette, S; Navarro, N; Dongarra, G; Tamburo, E; Varrica, D; Chateau, C; Nepomuceno, F O

    2017-10-17

    Samples of one lichen species, Parmotrema crinitum, and one bromeliad species, Tillandsia usneoides, were collected in the state of Rio de Janeiro, Brazil, at four sites differently affected by anthropogenic pollution. The concentrations of aluminum, cadmium, copper, iron, lanthanum, lead, sulfur, titanium, zinc, and zirconium were determined by inductively coupled plasma-atomic emission spectroscopy. The environmental diagnosis was established by examining compositional changes via perturbation vectors, an underused family of methods designed to circumvent the problem of closure in any compositional dataset. The perturbation vectors between the reference site and the other three sites were similar for both species, although body concentration levels were different. At each site, perturbation vectors between lichens and bromeliads were approximately the same, whatever the local pollution level. It should thus be possible to combine these organisms, though physiologically different, for air quality surveys, after making all results comparable with appropriate correction. The use of perturbation vectors seems particularly suitable for assessing pollution level by biomonitoring, and for many frequently met situations in environmental geochemistry, where elemental ratios are more relevant than absolute concentrations.
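    In Aitchison's compositional framework, the perturbation taking one composition to another is their closed component-wise ratio; a minimal sketch under that standard definition (generic, not the authors' code):

```python
import numpy as np

def closure(x):
    """Rescale a positive vector so its parts sum to 1 (a composition)."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def perturbation_vector(x_ref, x_site):
    """Perturbation taking composition x_ref to x_site: the closed
    component-wise ratio (Aitchison geometry)."""
    return closure(np.asarray(x_site, dtype=float) /
                   np.asarray(x_ref, dtype=float))

# Two sites with the same relative enrichment pattern give similar
# perturbation vectors even if absolute concentrations differ.
p = perturbation_vector([40, 30, 30], [20, 45, 35])
```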

  13. Vector manifestation and matter formed in relativistic heavy-ion processes

    International Nuclear Information System (INIS)

    Brown, Gerald E.; Holt, Jeremy W.; Lee, Chang-Hwan; Rho, Mannque

    2007-01-01

    Recent developments in our description of RHIC and related heavy-ion phenomena in terms of hidden local symmetry theories are reviewed, with a focus on the novel nearly massless states in the vicinity of (both below and above) the chiral restoration temperature $T_c$. We present complementary and intuitive ways to understand both Harada-Yamawaki's vector manifestation structure and Brown-Rho scaling (which are closely related) in terms of the 'melting' of soft glues observed in lattice calculations, and join the massless modes that arise in the vector manifestation (in the chiral limit) just below $T_c$ to tightly bound massless states above $T_c$. This phenomenon may be interpreted in terms of the Beg-Shei theorem. It is suggested that hidden local symmetry theories arise naturally in holographic dual QCD from string theory, and a clear understanding of what really happens near the critical point could come from a deeper understanding of the dual bulk theory. Other matters discussed are the relation between Brown-Rho scaling and Landau Fermi-liquid fixed point parameters at the equilibrium density, its implications for 'low-mass dileptons' produced in heavy-ion collisions, the reconstruction of vector mesons in peripheral collisions, the pion velocity in the vicinity of the chiral transition point, kaon condensation viewed from the VM fixed point, nuclear physics with Brown-Rho scaling, and the generic feature of dropping masses at the RGE fixed points in generalized hidden local symmetry theories

  14. Next generation of adeno-associated virus 2 vectors: Point mutations in tyrosines lead to high-efficiency transduction at lower doses

    OpenAIRE

    Zhong, Li; Li, Baozheng; Mah, Cathryn S.; Govindasamy, Lakshmanan; Agbandje-McKenna, Mavis; Cooper, Mario; Herzog, Roland W.; Zolotukhin, Irene; Warrington, Kenneth H.; Weigel-Van Aken, Kirsten A.; Hobbs, Jacqueline A.; Zolotukhin, Sergei; Muzyczka, Nicholas; Srivastava, Arun

    2008-01-01

    Recombinant adeno-associated virus 2 (AAV2) vectors are in use in several Phase I/II clinical trials, but relatively large vector doses are needed to achieve therapeutic benefits. Large vector doses also trigger an immune response as a significant fraction of the vectors fails to traffic efficiently to the nucleus and is targeted for degradation by the host cell proteasome machinery. We have reported that epidermal growth factor receptor protein tyrosine kinase (EGFR-PTK) signaling negatively...

  15. Incremental and batch planar simplification of dense point cloud maps

    NARCIS (Netherlands)

    Whelan, T.; Ma, L.; Bondarev, E.; With, de P.H.N.; McDonald, J.

    2015-01-01

    Dense RGB-D SLAM techniques and high-fidelity LIDAR scanners are examples from an abundant set of systems capable of providing multi-million point datasets. These datasets quickly become difficult to process due to the sheer volume of data, typically containing significant redundant information,

  16. A Self-Organizing Map-Based Approach to Generating Reduced-Size, Statistically Similar Climate Datasets

    Science.gov (United States)

    Cabell, R.; Delle Monache, L.; Alessandrini, S.; Rodriguez, L.

    2015-12-01

    Climate-based studies require large amounts of data in order to produce accurate and reliable results. Many of these studies have used 30-plus-year data sets in order to produce stable and high-quality results, and as a result, many such data sets are available, generally in the form of global reanalyses. While the analysis of these data leads to high-fidelity results, their processing can be very computationally expensive. This computational burden prevents the utilization of these data sets for certain applications, e.g., when rapid response is needed in crisis management and disaster planning scenarios resulting from the release of toxic material into the atmosphere. We have developed a methodology to reduce large climate datasets to more manageable sizes while retaining statistically similar results when used to produce ensembles of possible outcomes. We do this by employing a Self-Organizing Map (SOM) algorithm to analyze general patterns of meteorological fields over a regional domain of interest to produce a small set of "typical days" with which to generate the model ensemble. The SOM algorithm takes as input a set of vectors and generates a 2D map of representative vectors deemed most similar to the input set and to each other. Input predictors are selected that are correlated with the model output, which in our case is an Atmospheric Transport and Dispersion (T&D) model that is highly dependent on surface winds and boundary layer depth. To choose a subset of "typical days," each input day is assigned to its closest SOM map node vector and then ranked by distance. Each node vector is treated as a distribution and days are sampled from them by percentile. Using a 30-node SOM, with sampling every 20th percentile, we have been able to reduce 30 years of Climate Forecast System Reanalysis (CFSR) data for the month of October to 150 "typical days." To estimate the skill of this approach, the "Measure of Effectiveness" (MOE) metric is used to compare area and overlap
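    A sketch of the day-selection step, assuming a SOM codebook of node vectors has already been trained; the node count and percentile step mirror the 30-node, 20th-percentile configuration described above.

```python
import numpy as np

def typical_days(days, codebook, pct_step=20):
    """Assign each day (feature vector) to its nearest SOM node, rank
    days within a node by distance, and sample every pct_step
    percentile (30 nodes x 5 samples ~ 150 'typical days')."""
    # distances: (n_days, n_nodes)
    d = np.linalg.norm(days[:, None, :] - codebook[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    picks = []
    for node in range(codebook.shape[0]):
        idx = np.where(nearest == node)[0]
        if idx.size == 0:
            continue
        order = idx[np.argsort(d[idx, node])]
        for q in range(0, 100, pct_step):
            picks.append(order[int(q / 100 * (order.size - 1))])
    return sorted(set(picks))
```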

  17. Developing a Data-Set for Stereopsis

    Directory of Open Access Journals (Sweden)

    D.W Hunter

    2014-08-01

    Current research on binocular stereopsis in humans and non-human primates has been limited by a lack of available data-sets. Current data-sets fall into two categories: stereo-image sets with vergence but no ranging information (Hibbard, 2008, Vision Research, 48(12), 1427-1439), or combinations of depth information with binocular images and video taken from cameras in fixed fronto-parallel configurations exhibiting neither vergence nor focus effects (Hirschmuller & Scharstein, 2007, IEEE Conf. Computer Vision and Pattern Recognition). The techniques for generating depth information are also imperfect. Depth information is normally inaccurate or simply missing near edges and on partially occluded surfaces. For many areas of vision research these are the most interesting parts of the image (Goutcher, Hunter, & Hibbard, 2013, i-Perception, 4(7), 484; Scarfe & Hibbard, 2013, Vision Research). Using state-of-the-art open-source ray-tracing software (PBRT) as a back-end, our intention is to release a set of tools that will allow researchers in this field to generate artificial binocular stereoscopic data-sets. Although not as realistic as photographs, computer generated images have significant advantages in terms of control over the final output, and ground-truth information about scene depth is easily calculated at all points in the scene, even in partially occluded areas. While individual researchers have been developing similar stimuli by hand for many decades, we hope that our software will greatly reduce the time and difficulty of creating naturalistic binocular stimuli. Our intention in making this presentation is to elicit feedback from the vision community about what sort of features would be desirable in such software.

  18. RARD: The Related-Article Recommendation Dataset

    OpenAIRE

    Beel, Joeran; Carevic, Zeljko; Schaible, Johann; Neusch, Gabor

    2017-01-01

    Recommender-system datasets are used for recommender-system evaluations, training machine-learning algorithms, and exploring user behavior. While there are many datasets for recommender systems in the domains of movies, books, and music, there are rather few datasets from research-paper recommender systems. In this paper, we introduce RARD, the Related-Article Recommendation Dataset, from the digital library Sowiport and the recommendation-as-a-service provider Mr. DLib. The dataset contains ...

  19. On rationality of moduli spaces of vector bundles on real Hirzebruch ...

    Indian Academy of Sciences (India)

    Introduction. Moduli spaces of semistable vector bundles on a smooth projective variety are studied from various points of view. One of the questions that is often addressed is the birational type of the moduli space, more precisely, the question of rationality. It is known that the moduli space of semistable vector bundles of ...

  20. Annotating spatio-temporal datasets for meaningful analysis in the Web

    Science.gov (United States)

    Stasch, Christoph; Pebesma, Edzer; Scheider, Simon

    2014-05-01

    More and more environmental datasets that vary in space and time are available in the Web. This comes along with the advantage of using the data for purposes other than those originally foreseen, but also with the danger that users may apply inappropriate analysis procedures due to a lack of knowledge about important assumptions made during the data collection process. In order to guide towards a meaningful (statistical) analysis of spatio-temporal datasets available in the Web, we have developed a Higher-Order-Logic formalism that captures some relevant assumptions in our previous work [1]. It allows proofs about meaningful spatial prediction and aggregation in a semi-automated fashion. In this poster presentation, we will present a concept for annotating spatio-temporal datasets available in the Web with concepts defined in our formalism. To this end, we have defined a subset of the formalism as a Web Ontology Language (OWL) pattern. It allows capturing the distinction between the different spatio-temporal variable types, i.e. point patterns, fields, lattices and trajectories, which in turn determine whether a particular dataset can be interpolated or aggregated in a meaningful way using a certain procedure. The actual annotations that link spatio-temporal datasets with the concepts in the ontology pattern are provided as Linked Data. In order to allow data producers to add the annotations to their datasets, we have implemented a Web portal that uses a triple store at the backend to store the annotations and to make them available in the Linked Data cloud. Furthermore, we have implemented functions in the statistical environment R to retrieve the RDF annotations and, based on these annotations, to support a stronger typing of spatio-temporal datatypes guiding towards a meaningful analysis in R. [1] Stasch, C., Scheider, S., Pebesma, E., Kuhn, W. (2014): "Meaningful spatial prediction and aggregation", Environmental Modelling & Software, 51, 149-165.
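    A minimal sketch of such a Linked Data annotation using rdflib; the namespace and concept names are hypothetical stand-ins for the authors' OWL pattern.

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef

# Hypothetical namespace standing in for the authors' OWL pattern.
MEAN = Namespace("http://example.org/meaningful#")

g = Graph()
dataset = URIRef("http://example.org/datasets/pm10-stations")
g.add((dataset, RDF.type, MEAN.PointPattern))        # spatio-temporal type
g.add((dataset, MEAN.supportsInterpolation, Literal(False)))
print(g.serialize(format="turtle"))
```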

  1. IMAGE TO POINT CLOUD METHOD OF 3D-MODELING

    Directory of Open Access Journals (Sweden)

    A. G. Chibunichev

    2012-07-01

    This article describes a method of constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of the digital image. To provide this operation, we have to find corresponding points between the image and the point cloud. Before searching for corresponding points, a quasi-image of the point cloud is generated. After that, the SIFT algorithm is applied to the quasi-image and the real image. The SIFT algorithm allows finding corresponding points, from which the exterior orientation parameters of the image are calculated. The second step is the construction of the vector object model. Vectorization is performed by a PC operator in an interactive mode using a single image. Spatial coordinates of the model are calculated automatically from the cloud points. In addition, automatic edge detection with interactive editing is available. Edge detection is performed on the point cloud and on the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency in the case of building facade modeling.
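    The SIFT matching step between the rendered quasi-image and the photograph can be sketched with OpenCV; the file names are hypothetical.

```python
import cv2

# Match SIFT features between the rendered quasi-image of the point
# cloud and the real photograph (file names are hypothetical).
quasi = cv2.imread("quasi_image.png", cv2.IMREAD_GRAYSCALE)
photo = cv2.imread("facade_photo.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(quasi, None)
kp2, des2 = sift.detectAndCompute(photo, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
# Lowe's ratio test keeps only distinctive correspondences.
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
```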

  2. A 3D-Space Vector Modulation Algorithm for Three Phase Four Wire Neutral Point Clamped Inverter Systems as Power Quality Compensator

    Directory of Open Access Journals (Sweden)

    Palanisamy Ramasamy

    2017-11-01

    A Unified Power Quality Conditioner (UPQC) is designed using a Neutral Point Clamped (NPC) multilevel inverter to improve power quality. When designed for high/medium voltage and power applications, the voltage stress across the switches and the harmonic content in the output voltage increase. A 3-phase 4-wire NPC inverter system is developed as a power quality conditioner using an effectual three-dimensional Space Vector Modulation (3D-SVM) technique. The proposed system behaves like a UPQC with shunt and series active filters under balanced and unbalanced loading conditions. In addition to mitigating the power quality issues, it also balances the neutral point voltage and the voltage across the capacitors under unbalanced conditions. The hardware and simulation results of the proposed system are compared with 2D-SVM and 3D-SVM. The proposed system is simulated using MATLAB and the hardware is designed using an FPGA. From the results it is evident that the effectual 3D-SVM technique gives better performance compared to the other control methods.

  3. Density Based Support Vector Machines for Classification

    OpenAIRE

    Zahra Nazari; Dongshik Kang

    2015-01-01

    Support Vector Machines (SVM) is among the most successful algorithms for classification problems. In binary classification, SVM learns the decision boundary from the training points of two classes. However, there are sometimes less meaningful samples among the training points, corrupted by noise or misplaced on the wrong side, called outliers. These outliers affect the margin and the classification performance, and the machine would do better to discard them. SVM as a popular and widely used cl...

  4. Vectors of subsurface stormflow in a layered hillslope during runoff initiation

    Directory of Open Access Journals (Sweden)

    M. Retter

    2006-01-01

    The focus is the experimental assessment of in-situ flow vectors in a hillslope soil. We selected a 100 m2 trenched hillslope study site. During prescribed sprinkling, an obliquely installed TDR wave-guide provides the velocity of the wetting front along its direction. A triplet of wave-guides mounted along the sides of a hypothetical tetrahedron, with its peak pointing down, produces a three-dimensional vector of the wetting front. The method is based on the passing of wetting fronts. We analysed 34 vectors along the hillslope at distributed locations and at soil depths from 11 cm (representing the top soil) to 40 cm (close to the bedrock interface). The mean values were as follows: vx = 16.1 mm min^-1, vy = -0.2 mm min^-1, and vz = 11.9 mm min^-1. The velocity vectors of the wetting fronts were generally gravity dominated and downslope orientated. The downslope direction (x-axis) dominated close to bedrock, whereas no preference between the vertical and downslope directions was found in vectors close to the surface. The velocities along the contours (y-axis) varied widely. The Kruskal-Wallis tests indicated that the different upslope sprinkling areas had no influence on the orientation of the vectors. Vectors of volume flux density were also calculated for each triplet. The lateral velocities of the vector approach are compared with subsurface stormflow collected at the downhill end of the slope. Velocities were 25-140 times slower than lateral saturated tracer movements on top of the bedrock. Besides other points, we conclude that this method is restricted to non-complex substrate (skeleton or a portion of big stones).

  5. Chromosome preference of disease genes and vectorization for the prediction of non-coding disease genes.

    Science.gov (United States)

    Peng, Hui; Lan, Chaowang; Liu, Yuansheng; Liu, Tao; Blumenstein, Michael; Li, Jinyan

    2017-10-03

    Disease-related protein-coding genes have been widely studied, but disease-related non-coding genes remain largely unknown. This work introduces a new vector to represent diseases and applies the newly vectorized data in a positive-unlabeled learning algorithm to predict and rank disease-related long non-coding RNA (lncRNA) genes. This novel vector representation for diseases consists of two sub-vectors. The first is composed of 45 elements, characterizing the information entropies of the disease gene distribution over 45 chromosome substructures. This idea is supported by our observation that some substructures (e.g., the chromosome 6 p-arm) are highly preferred by disease-related protein-coding genes, while some (e.g., the 21 p-arm) are not favored at all. The second sub-vector is 30-dimensional, characterizing the distribution of disease-gene-enriched KEGG pathways in comparison with our manually created pathway groups. The second sub-vector complements the first one to differentiate between various diseases. Our prediction method outperforms the state-of-the-art methods on benchmark datasets for prioritizing disease-related lncRNA genes. The method also works well when only the sequence information of an lncRNA gene is known, or even when a given disease has no currently recognized long non-coding genes.
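    One plausible reading of the first sub-vector, sketched below: normalize a disease's gene counts over the 45 substructures and record each substructure's contribution to the Shannon entropy. This is an illustrative interpretation, not the authors' exact construction.

```python
import numpy as np

def entropy_subvector(gene_counts):
    """Hypothetical reading of the 45-element sub-vector: normalize a
    disease's gene counts over the 45 chromosome substructures and take
    each substructure's contribution -p*log2(p) to the Shannon entropy."""
    p = np.asarray(gene_counts, dtype=float)
    p = p / p.sum() if p.sum() > 0 else p
    return np.where(p > 0, -p * np.log2(p), 0.0)   # length-45 vector
```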

  6. Point source reconstruction principle of linear inverse problems

    International Nuclear Information System (INIS)

    Terazono, Yasushi; Matani, Ayumu; Fujimaki, Norio; Murata, Tsutomu

    2010-01-01

    Exact point source reconstruction for underdetermined linear inverse problems with a block-wise structure was studied. In a block-wise problem, the elements of a source vector are partitioned into blocks. Accordingly, the leadfield matrix, which represents the forward observation process, is also partitioned into blocks. A point source is a source having only one nonzero block. An example of such a problem is current distribution estimation in electroencephalography and magnetoencephalography, where the source vector represents a vector field and a point source represents a single current dipole. In this study, the block-wise norm, a block-wise extension of the $\ell_p$-norm, was defined as the family of cost functions of the inverse method. The main result is that a set of three conditions was found to be necessary and sufficient for block-wise norm minimization to ensure exact point source reconstruction for any leadfield matrix that admits such reconstruction. The block-wise norm that satisfies the conditions is the sum of the costs of all the observations of the source blocks, or in other words, the block-wisely extended leadfield-weighted $\ell_1$-norm. Additional results are that minimization of such a norm always provides block-wisely sparse solutions and that its solutions form cones in source space
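    A plausible formalization of the block-wise norm consistent with the abstract; the paper's exact weighting may differ.

```latex
% Source vector x partitioned into blocks x_b; leadfield A with column
% blocks A_b. A plausible form of the leadfield-weighted block-wise
% l1-norm described above (the paper's exact weighting may differ):
\[
  N(x) \;=\; \sum_{b} \left\lVert A_b x_b \right\rVert_2 ,
  \qquad
  \min_{x} \; N(x) \quad \text{s.t.} \quad \sum_b A_b x_b = y .
\]
```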

  7. Dimension Reduction Aided Hyperspectral Image Classification with a Small-sized Training Dataset: Experimental Comparisons

    Directory of Open Access Journals (Sweden)

    Jinya Su

    2017-11-01

    Hyperspectral images (HSI) provide rich information which may not be captured by other sensing technologies and have therefore gradually found a wide range of applications. However, they also generate a large amount of irrelevant or redundant data for a specific task. This causes a number of issues, including significantly increased computation time, complexity and scale of the prediction models mapping the data to semantics (e.g., classification), and the need for a large amount of labelled data for training. In particular, it is generally difficult and expensive for experts to acquire sufficient training samples in many applications. This paper addresses these issues by exploring a number of classical dimension reduction algorithms from the machine learning community for HSI classification. To reduce the size of the training dataset, feature selection (e.g., mutual information, minimal-redundancy maximal-relevance) and feature extraction (e.g., Principal Component Analysis (PCA), Kernel PCA) are adopted to augment a baseline classification method, the Support Vector Machine (SVM). The proposed algorithms are evaluated using a real HSI dataset. It is shown that PCA yields the most promising performance in reducing the number of features or spectral bands. It is observed that while significantly reducing the computational complexity, the proposed method can achieve better classification results than the classic SVM on a small training dataset, which makes it suitable for real-time applications or when only limited training data are available. Furthermore, it can also achieve performance similar to the classic SVM on large datasets, but with much less computing time.
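    A minimal sketch of the PCA-plus-SVM pipeline using scikit-learn; the synthetic data and the choice of 30 components are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for an HSI dataset: 200 labelled pixels x 100 bands.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 100))
y = (X[:, :5].sum(axis=1) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
# PCA to 30 components (illustrative choice) feeding a baseline SVM.
clf = make_pipeline(StandardScaler(), PCA(n_components=30), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```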

  8. Isfahan MISP Dataset.

    Science.gov (United States)

    Kashefpur, Masoud; Kafieh, Rahele; Jorjandi, Sahar; Golmohammadi, Hadis; Khodabande, Zahra; Abbasi, Mohammadreza; Teifuri, Nilufar; Fakharzadeh, Ali Akbar; Kashefpoor, Maryam; Rabbani, Hossein

    2017-01-01

    An online repository was introduced to share clinical ground truth with the public and provide open access for researchers to evaluate their computer-aided algorithms. PHP was used for web programming and MySQL for database management. The website was entitled "biosigdata.com." It was a fast, secure, and easy-to-use online database for medical signals and images. Freely registered users could download the datasets and could also share their own supplementary materials while maintaining their privacy (citation and fee). Commenting was also available for all datasets, and automatic sitemap and semi-automatic SEO indexing have been set up for the site. A comprehensive list of available websites for medical datasets is also presented as Supplementary material (http://journalonweb.com/tempaccess/4800.584.JMSS_55_16I3253.pdf).

  9. An implementation of support vector machine on sentiment classification of movie reviews

    Science.gov (United States)

    Yulietha, I. M.; Faraby, S. A.; Adiwijaya; Widyaningtyas, W. C.

    2018-03-01

    With technological advances, all information about movies is available on the internet. If this information is processed properly, high-quality information can be obtained. This research proposes to classify the sentiment of movie review documents. It uses the Support Vector Machine (SVM) method because SVM can classify the high-dimensional data used in this research, which is in the form of text. The Support Vector Machine is a popular machine learning technique for text classification because it can classify by learning from a collection of previously classified documents and can provide good results. Among the dataset splits, the 90-10 composition gives the best result, 85.6%. Among the SVM kernels, the linear kernel with constant 1 gives the best result, 84.9%.
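    A sketch of the TF-IDF-plus-linear-SVM setup with a 90-10 split in scikit-learn; a public newsgroup subset stands in for the movie review corpus.

```python
from sklearn.datasets import fetch_20newsgroups   # stand-in corpus
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Any labelled text corpus works here; movie reviews are assumed in the
# paper, a small newsgroup subset is used as a stand-in.
data = fetch_20newsgroups(subset="train", categories=["rec.autos", "sci.med"])
X_tr, X_te, y_tr, y_te = train_test_split(
    data.data, data.target, test_size=0.10, random_state=0)  # 90-10 split

clf = make_pipeline(TfidfVectorizer(), SVC(kernel="linear", C=1))
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```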

  10. Spatio-temporal patterns of distribution of West Nile virus vectors in eastern Piedmont Region, Italy

    Directory of Open Access Journals (Sweden)

    Bisanzio Donal

    2011-12-01

    Background: West Nile Virus (WNV) transmission in Italy was first reported in 1998 as an equine outbreak near the swamps of Padule di Fucecchio, Tuscany. No other cases were identified during the following decade until 2008, when horse and human outbreaks were reported in Emilia Romagna, North Italy. Since then, WNV outbreaks have occurred annually, spreading from their initial northern foci throughout the country. Following the outbreak in 1998 the Italian public health authority defined a surveillance plan to detect WNV circulation in birds, horses and mosquitoes. By applying spatial statistical analysis (spatial point pattern analysis) and models (Bayesian GLMM models) to a longitudinal dataset on the abundance of the three putative WNV vectors [Ochlerotatus caspius (Pallas, 1771), Culex pipiens (Linnaeus, 1758) and Culex modestus (Ficalbi, 1890)] in eastern Piedmont, we quantified their abundance and distribution in space and time and generated prediction maps outlining the areas with the highest vector productivity and potential for WNV introduction and amplification. Results: The highest abundance and significant spatial clusters of Oc. caspius and Cx. modestus were in proximity to rice fields, and for Cx. pipiens, in proximity to highly populated urban areas. The GLMM model showed the importance of weather conditions and environmental factors in predicting mosquito abundance. Distance from the preferential breeding sites and elevation were negatively associated with the number of collected mosquitoes. The Normalized Difference Vegetation Index (NDVI) was positively correlated with mosquito abundance in rice fields (Oc. caspius and Cx. modestus). Based on the best models, we developed prediction maps for the year 2010 outlining the areas where a high abundance of vectors could favour the introduction and amplification of WNV. Conclusions: Our findings provide useful information for surveillance activities aiming to identify locations where the

  11. Desingularization strategies for three-dimensional vector fields

    CERN Document Server

    Torres, Felipe Cano

    1987-01-01

    For a vector field $D = \sum_{i=1}^{3} A_i\,\partial/\partial X_i$, where the $A_i$ are series in $X$, the algebraic multiplicity measures the singularity at the origin. In this research monograph several strategies are given to make the algebraic multiplicity of a three-dimensional vector field decrease by means of permissible blowing-ups of the ambient space, i.e. transformations of the type $x_i = x_i' x_1$, $2 \le i \le 3$. A logarithmic point of view is taken, marking the exceptional divisor of each blowing-up and considering only the vector fields which are tangent to this divisor, instead of the whole tangent sheaf. The first part of the book is devoted to the logarithmic background and to the permissible blowing-ups. The main part corresponds to the control of the algorithms for the desingularization strategies by means of numerical invariants inspired by Hironaka's characteristic polygon. Only basic knowledge of local algebra and algebraic geometry is assumed of the reader. The pathologies we find in the reduction of vector fields are analogous to pathologies in the pro...

  12. Scalar and Vector Spherical Harmonics for Assimilation of Global Datasets in the Ionosphere and Thermosphere

    Science.gov (United States)

    Miladinovich, D.; Datta-Barua, S.; Bust, G. S.; Ramirez, U.

    2017-12-01

    Understanding physical processes during storm time in the ionosphere-thermosphere (IT) system is limited, in part, due to the inability to obtain accurate estimates of IT states on a global scale. One reason for this inability is the sparsity of spatially distributed high-quality data sets. Data assimilation is showing promise toward enabling global estimates by blending high-quality observational data sets with established climate models. We are continuing development of an algorithm called Estimating Model Parameters for Ionospheric Reverse Engineering (EMPIRE) to enable assimilation of global datasets for storm time estimates of IT drivers. EMPIRE is a data assimilation algorithm that uses a Kalman filtering routine to ingest model and observational data. The EMPIRE algorithm is based on spherical harmonics, which provide a spherically symmetric, smooth, continuous, and orthonormal set of basis functions suitable for a spherical domain such as Earth's IT region (200-600 km altitude). Once the basis function coefficients are determined, the newly fitted function represents the disagreement between observational measurements and models. We apply spherical harmonics to study the March 17, 2015 storm. Data sources include Fabry-Perot interferometer neutral wind measurements and global Ionospheric Data Assimilation 4-Dimensional (IDA4D) assimilated total electron content (TEC). Models include the Weimer 2000 electric potential, the International Geomagnetic Reference Field (IGRF) magnetic field, and Horizontal Wind Model 2014 (HWM14) neutral winds. We present the EMPIRE assimilation results for Earth's electric potential and thermospheric winds. We also compare EMPIRE storm time $E \times B$ ion drift estimates to measured drifts produced from the Super Dual Auroral Radar Network (SuperDARN) and Active Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE) measurement datasets. The analysis from these results will enable the generation of globally assimilated
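    A generic sketch of fitting real spherical-harmonic coefficients to scattered observations by least squares (not the EMPIRE code); note SciPy's convention that theta is the azimuth and phi the colatitude.

```python
import numpy as np
from scipy.special import sph_harm

def fit_real_sph_harm(theta, phi, values, lmax):
    """Least-squares fit of real spherical-harmonic coefficients to
    scattered data; theta = azimuth [0, 2pi), phi = colatitude [0, pi]
    (SciPy's convention). Generic sketch, not the EMPIRE algorithm."""
    cols = []
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            y = sph_harm(abs(m), l, theta, phi)
            cols.append(np.sqrt(2) * y.imag if m < 0
                        else y.real if m == 0
                        else np.sqrt(2) * y.real)
    A = np.column_stack(cols)            # design matrix of real harmonics
    coef, *_ = np.linalg.lstsq(A, values, rcond=None)
    return coef
```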

  13. Abstract generalized vector quasi-equilibrium problems in noncompact Hadamard manifolds

    Directory of Open Access Journals (Sweden)

    Haishu Lu

    2017-05-01

    This paper deals with the abstract generalized vector quasi-equilibrium problem in noncompact Hadamard manifolds. We prove the existence of solutions to the abstract generalized vector quasi-equilibrium problem under suitable conditions and provide applications to an abstract vector quasi-equilibrium problem, a generalized scalar equilibrium problem, a scalar equilibrium problem, and a perturbed saddle point problem. Finally, as an application of the existence of solutions to the generalized scalar equilibrium problem, we obtain a weakly mixed variational inequality and two mixed variational inequalities. The results presented in this paper unify and generalize many known results in the literature.

  14. Open University Learning Analytics dataset.

    Science.gov (United States)

    Kuzilek, Jakub; Hlosta, Martin; Zdrahal, Zdenek

    2017-11-28

    Learning Analytics focuses on the collection and analysis of learners' data to improve their learning experience by providing informed guidance and to optimise learning materials. To support the research in this area we have developed a dataset, containing data from courses presented at the Open University (OU). What makes the dataset unique is the fact that it contains demographic data together with aggregated clickstream data of students' interactions in the Virtual Learning Environment (VLE). This enables the analysis of student behaviour, represented by their actions. The dataset contains the information about 22 courses, 32,593 students, their assessment results, and logs of their interactions with the VLE represented by daily summaries of student clicks (10,655,280 entries). The dataset is freely available at https://analyse.kmi.open.ac.uk/open_dataset under a CC-BY 4.0 license.

  15. Covariance estimation in Terms of Stokes Parameters with Application to Vector Sensor Imaging

    Science.gov (United States)

    2016-12-15

    A vector sensor (example shown in Figure 1) measures the electromagnetic field at a single point using three orthogonal loop and three orthogonal dipole elements with a common phase center, measuring the complete electromagnetic field as a six-element vector. [Figure 1: Atom antenna [1], an electromagnetic vector sensor composed of three orthogonal loop and dipole elements with a common phase center.]

  16. FASTQSim: platform-independent data characterization and in silico read generation for NGS datasets.

    Science.gov (United States)

    Shcherbina, Anna

    2014-08-15

    High-throughput next generation sequencing technologies have enabled rapid characterization of clinical and environmental samples. Consequently, the largest bottleneck to actionable data has become sample processing and bioinformatics analysis, creating a need for accurate and rapid algorithms to process genetic data. Perfectly characterized in silico datasets are a useful tool for evaluating the performance of such algorithms. Background contaminating organisms are observed in sequenced mixtures of organisms; in silico samples provide exact truth. To create the best value for evaluating algorithms, in silico data should mimic actual sequencer data as closely as possible. FASTQSim is a tool that provides the dual functionality of NGS dataset characterization and metagenomic data generation. FASTQSim is sequencing-platform independent, and computes distributions of read length, quality scores, indel rates, single point mutation rates, indel size, and similar statistics for any sequencing platform. To create training or testing datasets, FASTQSim has the ability to convert target sequences into in silico reads with the specific error profiles obtained in the characterization step. FASTQSim enables users to assess the quality of NGS datasets. The tool provides information about read length, read quality, repetitive and non-repetitive indel profiles, and single base pair substitutions. FASTQSim allows the user to simulate individual read datasets that can be used as standardized test scenarios for planning sequencing projects or for benchmarking metagenomic software. In this regard, in silico datasets generated with the FASTQSim tool hold several advantages over natural datasets: they are sequencing platform independent, extremely well characterized, and less expensive to generate. Such datasets are valuable in a number of applications, including the training of assemblers for multiple platforms, benchmarking bioinformatics algorithm performance, and creating challenge

  17. Improving the lattice axial vector current

    International Nuclear Information System (INIS)

    Horsley, R.; Perlt, H.; Schiller, A.; Zanotti, J.M.

    2015-11-01

    For Wilson and clover fermions, traditional formulations of the axial vector current do not respect the continuum Ward identity which relates the divergence of that current to the pseudoscalar density. Here we propose to use a point-split or one-link axial vector current whose divergence exactly satisfies a lattice Ward identity involving the pseudoscalar density and a number of irrelevant operators. We check in one-loop lattice perturbation theory with the SLiNC fermion and gauge plaquette action that this is indeed the case, including $O(a)$ effects. Including these operators, the axial Ward identity remains renormalisation invariant. First preliminary results of a nonperturbative check of the Ward identity are also presented.
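    For reference, the continuum (flavour non-singlet, degenerate-mass) form of the Ward identity in question, which the point-split current is built to satisfy on the lattice up to irrelevant operators:

```latex
% Continuum PCAC Ward identity relating the divergence of the axial
% vector current A_mu^a to the pseudoscalar density P^a for quark mass m:
\[
  \partial_\mu A_\mu^a(x) \;=\; 2m\, P^a(x) .
\]
```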

  18. Southeast Alaska ESI: FISHPT (Fish Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains biological resource data for anadromous fish streams in Southeast Alaska. Vector points in this data set represent locations of fish streams....

  19. Investigation on the Weighted RANSAC Approaches for Building Roof Plane Segmentation from LiDAR Point Clouds

    Directory of Open Access Journals (Sweden)

    Bo Xu

    2015-12-01

    RANdom SAmple Consensus (RANSAC) is a widely adopted method for LiDAR point cloud segmentation because of its robustness to noise and outliers. However, RANSAC has a tendency to generate false segments consisting of points from several nearly coplanar surfaces. To address this problem, we formulate a weighted RANSAC approach for the purpose of point cloud segmentation. In our proposed solution, the hard-threshold voting function, which considers both the point-plane distance and the normal vector consistency, is transformed into a soft-threshold voting function based on two weight functions. To improve weighted RANSAC's ability to distinguish planes, we designed the weight functions according to the difference in the error distribution between the proper and improper plane hypotheses, based on which an outlier suppression ratio was also defined. Using the ratio, a thorough comparison was conducted between these different weight functions to determine the best performing function. The selected weight function was then compared to the existing weighted RANSAC methods, the original RANSAC, and a representative region growing (RG) method. Experiments with two airborne LiDAR datasets of varying densities show that the various weighted methods can improve the segmentation quality differently, but the specially designed weight functions can significantly improve the segmentation accuracy and the topology correctness. Moreover, its robustness is much better when compared to the RG method.
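    One plausible form of such a soft-threshold voting weight, combining the two cues with Gaussian falloffs; the sigmas are illustrative, not the paper's calibrated weight functions.

```python
import numpy as np

def soft_vote(dist, normal_angle, sigma_d=0.05, sigma_a=np.radians(10)):
    """One plausible soft-threshold voting weight for a plane hypothesis:
    Gaussian falloff in point-plane distance and in the angle between the
    point normal and the plane normal (sigmas are illustrative choices)."""
    w_d = np.exp(-0.5 * (dist / sigma_d) ** 2)
    w_a = np.exp(-0.5 * (normal_angle / sigma_a) ** 2)
    return w_d * w_a  # a point's vote for the hypothesis, in [0, 1]

# Hypothesis score = sum of soft votes instead of a hard inlier count.
```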

  20. Public Access Points, Location of public beach access along the Oregon Coast. Boat ramp locations were added to the dataset to allow users to view the location of boat ramps along the Columbia River and the Willamette River north of the Oregon City Dam., Published in 2005, 1:100000 (1in=8333ft) scale, Oregon Geospatial Enterprise Office (GEO).

    Data.gov (United States)

    NSGIC State | GIS Inventory — Public Access Points dataset current as of 2005. Location of public beach access along the Oregon Coast. Boat ramp locations were added to the dataset to allow users...

  1. Projected economic losses due to vector and vector-borne parasitic diseases in livestock of India and its significance in implementing the concept of integrated practices for vector management

    Directory of Open Access Journals (Sweden)

    B. W. Narladkar

    2018-02-01

    Broadly, species of arthropods infesting livestock are grouped into flies (biting and non-biting), fleas, lice (biting and sucking), ticks (soft and hard), and mites (burrowing, non-burrowing, and follicular). Among these, biting and non-biting flies and ticks are the potent vectors for many bacterial, viral, rickettsial, and protozoan diseases. Vectors of livestock have economic significance on three points: (1) direct losses from their bite and the annoyance, worry, and psychological disturbance produced during the act of biting and feeding, (2) the diseases they transmit, and (3) the expenditure incurred for their control. Flies such as Culicoides spp. and Musca spp. and various species of hard ticks play an important role in disease transmission in addition to their direct effects. For the control of vectors, the recent concept of integrated pest management (IPM) provides the best solution and also addresses the problems related to acaricide resistance and environmental protection from hazardous chemicals. However, to successfully implement the concept of IPM, the estimation of two monetary benchmarks for each vector species, i.e., the economic injury level (EIL) and the economic threshold level (ETL), is an essential prerequisite. For many vector species and under several circumstances, estimation of EIL and ETL appears to be difficult. Under such a scenario, although it may not be exact, an approximate estimate can be obtained by taking into account several criteria such as the percent prevalence of vectors in a geographical area, the percent losses produced, the total livestock population, and the current prices of livestock products such as milk, meat, and wool. A method for this approximate estimation is described and elaborated for the first time in the present review article.

  2. Projected economic losses due to vector and vector-borne parasitic diseases in livestock of India and its significance in implementing the concept of integrated practices for vector management

    Science.gov (United States)

    Narladkar, B. W.

    2018-01-01

    Broadly, species of arthropods infesting livestock are grouped into flies (biting and non-biting), fleas, lice (biting and sucking), ticks (soft and hard), and mites (burrowing, non-burrowing, and follicular). Among these, biting and non-biting flies and ticks are the potent vectors for many bacterial, viral, rickettsial, and protozoan diseases. Vectors of livestock have economic significance on three points: (1) direct losses from their bite and the annoyance, worry, and psychological disturbance produced during the act of biting and feeding, (2) the diseases they transmit, and (3) the expenditure incurred for their control. Flies such as Culicoides spp. and Musca spp. and various species of hard ticks play an important role in disease transmission in addition to their direct effects. For the control of vectors, the recent concept of integrated pest management (IPM) provides the best solution and also addresses the problems related to acaricide resistance and environmental protection from hazardous chemicals. However, to successfully implement the concept of IPM, the estimation of two monetary benchmarks for each vector species, i.e., the economic injury level (EIL) and the economic threshold level (ETL), is an essential prerequisite. For many vector species and under several circumstances, estimation of EIL and ETL appears to be difficult. Under such a scenario, although it may not be exact, an approximate estimate can be obtained by taking into account several criteria such as the percent prevalence of vectors in a geographical area, the percent losses produced, the total livestock population, and the current prices of livestock products such as milk, meat, and wool. A method for this approximate estimation is described and elaborated for the first time in the present review article. PMID:29657396
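    The approximate-estimation idea (prevalence, loss per head, population, and product price multiplied together) can be illustrated with hypothetical numbers:

```python
# Illustrative only: every number below is hypothetical, standing in for
# the prevalence/loss/price inputs the review says such estimates need.
population = 1_000_000       # animals in the region
prevalence = 0.30            # fraction infested by the vector
milk_loss_per_head = 50.0    # litres lost per affected animal per year
milk_price = 0.5             # currency units per litre

annual_loss = population * prevalence * milk_loss_per_head * milk_price
print(f"approximate annual loss: {annual_loss:,.0f} currency units")
```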

  3. Hairy Slices: Evaluating the Perceptual Effectiveness of Cutting Plane Glyphs for 3D Vector Fields.

    Science.gov (United States)

    Stevens, Andrew H; Butkiewicz, Thomas; Ware, Colin

    2017-01-01

    Three-dimensional vector fields are common datasets throughout the sciences. Visualizing these fields is inherently difficult due to issues such as visual clutter and self-occlusion. Cutting planes are often used to overcome these issues by presenting more manageable slices of data. The existing literature provides many techniques for visualizing the flow through these cutting planes; however, there is a lack of empirical studies focused on the underlying perceptual cues that make popular techniques successful. This paper presents a quantitative human factors study that evaluates static monoscopic depth and orientation cues in the context of cutting plane glyph designs for exploring and analyzing 3D flow fields. The goal of the study was to ascertain the relative effectiveness of various techniques for portraying the direction of flow through a cutting plane at a given point, and to identify the visual cues and combinations of cues involved, and how they contribute to accurate performance. It was found that increasing the dimensionality of line-based glyphs into tubular structures enhances their ability to convey orientation through shading, and that increasing their diameter intensifies this effect. These tube-based glyphs were also less sensitive to visual clutter issues at higher densities. Adding shadows to lines was also found to increase perception of flow direction. Implications of the experimental results are discussed and extrapolated into a number of guidelines for designing more perceptually effective glyphs for 3D vector field visualizations.

  4. Emerging Vector-Borne Diseases - Incidence through Vectors.

    Science.gov (United States)

    Savić, Sara; Vidić, Branka; Grgić, Zivoslav; Potkonjak, Aleksandar; Spasojevic, Ljubica

    2014-01-01

    Vector-borne diseases used to be a major public health concern only in tropical and subtropical areas, but today they are an emerging threat for continental and developed countries as well. Nowadays these countries struggle with emerging diseases, which have found their way in through vectors. Vector-borne zoonotic diseases occur when vectors, animal hosts, climate conditions, pathogens, and a susceptible human population exist at the same time, at the same place. Global climate change is predicted to lead to an increase in vector-borne infectious diseases and disease outbreaks. It could affect the range and population of pathogens, hosts and vectors, the transmission season, etc. Reliable surveillance for the diseases that are most likely to emerge is required. Canine vector-borne diseases represent a complex group of diseases including anaplasmosis, babesiosis, bartonellosis, borreliosis, dirofilariosis, ehrlichiosis, and leishmaniosis. Some of these diseases cause serious clinical symptoms in dogs and some of them have zoonotic potential with an effect on public health. Veterinarians, in coordination with medical doctors, are expected to play a fundamental role, first in the prevention and then in the treatment of vector-borne diseases in dogs. The One Health concept has to be integrated into the struggle against emerging diseases. During a 4-year period, from 2009 to 2013, a total of 551 dog samples were analyzed for vector-borne diseases (borreliosis, babesiosis, ehrlichiosis, anaplasmosis, dirofilariosis, and leishmaniasis) in routine laboratory work. The analysis was done by serological tests - ELISA for borreliosis, dirofilariosis, and leishmaniasis, the modified Knott test for dirofilariosis, and blood smears for babesiosis, ehrlichiosis, and anaplasmosis. This number of samples represented 75% of the total number of samples sent for analysis for different diseases in dogs. Annually, on average more than half of the samples

  5. Mahalanobis Distance Based Iterative Closest Point

    DEFF Research Database (Denmark)

    Hansen, Mads Fogtmann; Blas, Morten Rufus; Larsen, Rasmus

    2007-01-01

    the notion of a Mahalanobis distance map upon a point set with associated covariance matrices, which in addition to providing a correlation-weighted distance implicitly provides a method for assigning correspondence during alignment. This distance map provides an easy formulation of the ICP problem that permits a fast optimization. Initially, the covariance matrices are set to the identity matrix, and all shapes are aligned to a randomly selected shape (equivalent to standard ICP). From this point the algorithm iterates between the steps: (a) obtain the mean shape and new estimates of the covariance matrices from the aligned shapes, (b) align shapes to the mean shape. Three different methods for estimating the mean shape with associated covariance matrices are explored in the paper. The proposed methods are validated experimentally on two separate datasets (IMM face dataset and femur bones). The superiority of ICP...
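    A generic sketch of the correspondence step under per-point covariances (not the authors' implementation):

```python
import numpy as np

def mahalanobis_correspondence(src, dst, dst_cov_inv):
    """For each source point, find the destination point minimizing the
    Mahalanobis distance under that destination point's inverse
    covariance. Generic sketch of the correspondence step only."""
    idx = np.empty(len(src), dtype=int)
    for i, p in enumerate(src):
        diff = dst - p                      # (m, d) residuals
        # d_j = diff_j^T C_j^{-1} diff_j for every candidate j
        d = np.einsum("md,mde,me->m", diff, dst_cov_inv, diff)
        idx[i] = d.argmin()
    return idx
```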

  6. Columbia River ESI: NESTS (Nest Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for bird nesting sites in the Columbia River area. Vector points in this data set represent locations of...

  7. Louisiana ESI: SOCECON (Socioeconomic Resource Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains human-use resource data for airport, heliport, marina, and boat ramp locations in Louisiana. Vector points in this data set represent the...

  8. Multiscale asymmetric orthogonal wavelet kernel for linear programming support vector learning and nonlinear dynamic systems identification.

    Science.gov (United States)

    Lu, Zhao; Sun, Jing; Butts, Kenneth

    2014-05-01

    Support vector regression for approximating nonlinear dynamic systems is more delicate than the approximation of indicator functions in support vector classification, particularly for systems that involve multitudes of time scales in their sampled data. The kernel used for support vector learning determines the class of functions from which a support vector machine can draw its solution, and the choice of kernel significantly influences the performance of a support vector machine. In this paper, to bridge the gap between wavelet multiresolution analysis and kernel learning, the closed-form orthogonal wavelet is exploited to construct new multiscale asymmetric orthogonal wavelet kernels for linear programming support vector learning. The closed-form multiscale orthogonal wavelet kernel provides a systematic framework to implement multiscale kernel learning via dyadic dilations and also enables us to represent complex nonlinear dynamics effectively. To demonstrate the superiority of the proposed multiscale wavelet kernel in identifying complex nonlinear dynamic systems, two case studies are presented that aim at building parallel models on benchmark datasets. The development of parallel models that address the long-term/mid-term prediction issue is more intricate and challenging than the identification of series-parallel models where only one-step ahead prediction is required. Simulation results illustrate the effectiveness of the proposed multiscale kernel learning.
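    A simplified illustration of plugging a wavelet-style kernel into support vector regression; the Mexican-hat product kernel below is a stand-in for the paper's multiscale orthogonal wavelet kernel, and scikit-learn's SVR replaces the linear-programming formulation.

```python
import numpy as np
from sklearn.svm import SVR

def mexican_hat_kernel(X, Y, a=1.0):
    """Illustrative translation-invariant wavelet-style kernel built from
    the Mexican-hat mother wavelet, as a stand-in for the paper's more
    elaborate multiscale orthogonal wavelet kernel."""
    K = np.ones((X.shape[0], Y.shape[0]))
    for j in range(X.shape[1]):
        u = (X[:, j][:, None] - Y[:, j][None, :]) / a
        K *= (1 - u ** 2) * np.exp(-u ** 2 / 2)   # product over dimensions
    return K

# sklearn's SVR accepts a callable kernel computing the Gram matrix.
X = np.linspace(0, 4 * np.pi, 80)[:, None]
y = np.sin(X).ravel()
model = SVR(kernel=mexican_hat_kernel, C=10.0).fit(X, y)
```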

  9. Modeling and prediction of flotation performance using support vector regression

    Directory of Open Access Journals (Sweden)

    Despotović Vladimir

    2017-01-01

    Continuous efforts have been made in recent years to improve the process of paper recycling, as it is of critical importance for saving wood, water and energy resources. Flotation deinking is considered to be one of the key methods for the separation of ink particles from the cellulose fibres. Attempts to model the flotation deinking process have often resulted in complex models that are difficult to implement and use. In this paper, a model for the prediction of flotation performance based on Support Vector Regression (SVR) is presented. Representative data samples were created in the laboratory under a variety of practical control variables for the flotation deinking process, including different reagents, pH values and flotation residence times. A predictive model trained on these data samples was created, and the flotation performance was assessed, showing that Support Vector Regression is a promising method even when the dataset used for training the model is limited.

  10. Fuzzy support vector machine for microarray imbalanced data classification

    Science.gov (United States)

    Ladayya, Faroh; Purnami, Santi Wulan; Irhamah

    2017-11-01

    DNA microarrays are data containing gene expression with small sample sizes and a high number of features. Furthermore, class imbalance is a common problem in microarray data. This occurs when a dataset is dominated by a class which has significantly more instances than the other, minority classes. Therefore, a classification method is needed that solves the problems of high-dimensional and imbalanced data. The Support Vector Machine (SVM) is one of the classification methods capable of handling large or small samples, nonlinearity, high dimensionality, overfitting and local minimum issues. SVM has been widely applied to DNA microarray data classification, and it has been shown that SVM provides the best performance among other machine learning methods. However, imbalanced data are a problem because SVM treats all samples with the same importance, so the results are biased toward the majority class. To overcome the imbalanced data, the Fuzzy SVM (FSVM) is proposed. This method applies a fuzzy membership to each input point and reformulates the SVM such that different input points provide different contributions to the classifier. The minority classes have large fuzzy memberships, so FSVM can pay more attention to the samples with larger fuzzy membership. Given that DNA microarray data are high dimensional with a very large number of features, it is necessary to perform feature selection first using the Fast Correlation-Based Filter (FCBF). In this study, SVM and FSVM are analyzed, both with and without FCBF, to obtain their classification performance. Based on the overall results, FSVM on the selected features has the best classification performance compared to SVM.
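    The fuzzy-membership idea can be approximated with per-sample weights in scikit-learn's SVC; the inverse-class-frequency membership below is an illustrative choice, not the paper's membership function.

```python
import numpy as np
from sklearn.svm import SVC

# A simple fuzzy-membership scheme: weight each sample inversely to its
# class frequency, so minority samples contribute more. This approximates
# the FSVM idea through per-sample weights (not the paper's exact
# membership function).
rng = np.random.default_rng(1)
X = rng.normal(size=(110, 20))
y = np.array([0] * 100 + [1] * 10)              # imbalanced labels

counts = np.bincount(y)
membership = 1.0 / counts[y]                    # fuzzy membership per sample
clf = SVC(kernel="rbf").fit(X, y, sample_weight=membership)
```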

  11. Prediction of Carbohydrate-Binding Proteins from Sequences Using Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Seizi Someya

    2010-01-01

    Carbohydrate-binding proteins are proteins that can interact with sugar chains but do not modify them. They are involved in many physiological functions, and we have developed a method for predicting them from their amino acid sequences. Our method is based on support vector machines (SVMs). We first clarified the definition of carbohydrate-binding proteins and then constructed positive and negative datasets with which the SVMs were trained. By applying the leave-one-out test to these datasets, our method delivered an area under the receiver operating characteristic (ROC) curve of 0.92. We also examined two amino acid grouping methods that enable effective learning of sequence patterns and evaluated the performance of these methods. When we applied our method in combination with the homology-based prediction method to the annotated human genome database, H-invDB, we found that the true positive rate of prediction was improved.

  12. Vector analysis

    CERN Document Server

    Newell, Homer E

    2006-01-01

    When employed with skill and understanding, vector analysis can be a practical and powerful tool. This text develops the algebra and calculus of vectors in a manner useful to physicists and engineers. Numerous exercises (with answers) not only provide practice in manipulation but also help establish students' physical and geometric intuition in regard to vectors and vector concepts.Part I, the basic portion of the text, consists of a thorough treatment of vector algebra and the vector calculus. Part II presents the illustrative matter, demonstrating applications to kinematics, mechanics, and e

  13. Generating and executing programs for a floating point single instruction multiple data instruction set architecture

    Science.gov (United States)

    Gschwind, Michael K

    2013-04-16

    Mechanisms for generating and executing programs for a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA) are provided. A computer program product comprising a computer recordable medium having a computer readable program recorded thereon is provided. The computer readable program, when executed on a computing device, causes the computing device to receive one or more instructions and execute the one or more instructions using logic in an execution unit of the computing device. The logic implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA), based on data stored in a vector register file of the computing device. The vector register file is configured to store both scalar and floating point values as vectors having a plurality of vector elements.

  14. The vectorization of a ray tracing program for image generation

    Science.gov (United States)

    Plunkett, D. J.; Cychosz, J. M.; Bailey, M. J.

    1984-01-01

    Ray tracing is a widely used method for producing realistic computer-generated images. Ray tracing involves firing an imaginary ray from a viewpoint, through a point on an image plane, into a three-dimensional scene. The intersections of the ray with the objects in the scene determine what is visible at that point on the image plane. This process must be repeated many times, once for each point (commonly called a pixel) in the image plane. A typical image contains more than a million pixels, making this process computationally expensive. A traditional ray tracing program processes one ray at a time. In such a serial approach, as much as ninety percent of the execution time is spent computing the intersections of a ray with the surfaces in the scene. With the CYBER 205, many rays can be intersected with all the bodies in the scene with a single series of vector operations. Vectorization of this intersection process results in large decreases in computation time. The CADLAB's interest in ray tracing stems from the need to produce realistic images of mechanical parts. A high-quality image of a part during the design process can increase the productivity of the designer by helping him visualize the results of his work. To be useful in the design process, these images must be produced in a reasonable amount of time. This discussion explains how the ray tracing process was vectorized and gives examples of the images obtained.
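
    The gain described above comes from intersecting one ray with many surfaces in a single sweep of vector operations. A modern sketch of the same idea, using NumPy arrays in place of the CYBER 205's vector pipeline:

    ```python
    # Sketch: intersect one ray with many spheres at once, NumPy-style.
    # Replaces the serial per-object loop with array operations, the same
    # idea the CYBER 205 vectorisation exploited.
    import numpy as np

    def nearest_hit(origin, direction, centers, radii):
        """Closest sphere hit for a unit-length ray direction: (index, t) or (None, inf)."""
        oc = origin - centers                        # (n, 3): origin relative to each center
        b = 2.0 * oc @ direction                     # (n,)
        c = np.einsum("ij,ij->i", oc, oc) - radii**2
        disc = b**2 - 4.0 * c                        # discriminants for all spheres at once
        t = np.where(disc >= 0.0, (-b - np.sqrt(np.maximum(disc, 0.0))) / 2.0, np.inf)
        t = np.where(t > 1e-9, t, np.inf)            # ignore hits behind the origin
        i = int(np.argmin(t))
        return (i, t[i]) if np.isfinite(t[i]) else (None, np.inf)
    ```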

  15. Vector-Tensor and Vector-Vector Decay Amplitude Analysis of B0→φK*0

    International Nuclear Information System (INIS)

    Aubert, B.; Bona, M.; Boutigny, D.; Couderc, F.; Karyotakis, Y.; Lees, J. P.; Poireau, V.; Tisserand, V.; Zghiche, A.; Grauges, E.; Palano, A.; Chen, J. C.; Qi, N. D.; Rong, G.; Wang, P.; Zhu, Y. S.; Eigen, G.; Ofte, I.; Stugu, B.; Abrams, G. S.

    2007-01-01

    We perform an amplitude analysis of the decays B^0 → φK_2^*(1430)^0, φK^*(892)^0, and φ(Kπ)^0_S-wave with a sample of about 384×10^6 BB̄ pairs recorded with the BABAR detector. The fractions of longitudinal polarization f_L of the vector-tensor and vector-vector decay modes are measured to be 0.853^{+0.061}_{-0.069} ± 0.036 and 0.506 ± 0.040 ± 0.015, respectively. Overall, twelve parameters are measured for the vector-vector decay and seven parameters for the vector-tensor decay, including the branching fractions and parameters sensitive to CP violation.

  16. Inastemp: A Novel Intrinsics-as-Template Library for Portable SIMD-Vectorization

    Directory of Open Access Journals (Sweden)

    Berenger Bramas

    2017-01-01

    The development of scientific applications requires highly optimized computational kernels to benefit from modern hardware. In recent years, vectorization has gained key importance in exploiting the processing capabilities of modern CPUs, whose evolution is characterized by increasing register-widths and core numbers, but stagnating clock frequencies. In particular, vectorization allows floating point operations to be performed at a higher rate than the processor's frequency. However, compilers often fail to vectorize complex codes and pure assembly/intrinsic implementations often suffer from software engineering issues, such as readability and maintainability. Moreover, it is difficult for domain scientists to write optimized code without technical support. To address these issues, we propose Inastemp, a lightweight open-source C++ library. Inastemp offers a solution to develop hardware-independent computational kernels for the CPU. These kernels are portable across compilers and floating point precisions, and are vectorized targeting SSE(3/4.1/4.2), AVX(2), AVX-512, or ALTIVEC/VMX instructions. Inastemp provides advanced features, such as an if-else statement that vectorizes branches that cannot be removed. Our performance study shows that Inastemp has the same efficiency as pure intrinsic approaches on modern architectures. As side results, this study provides micro-benchmarks on the latest HPC architectures for three different computational kernels, emphasizing comparisons between scalar and intrinsic-based codes.
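
    Inastemp's vectorised if-else can be illustrated, in spirit, by masked selection: both branches are evaluated over the whole vector and a per-lane mask blends the results. The NumPy sketch below shows only the concept; the library itself is C++ and emits real SIMD intrinsics.

    ```python
    # Sketch: branch vectorisation by masked selection ("blend"), the idea
    # behind a vectorised if-else. Both branches are computed for every
    # lane; the mask chooses per element. Illustrative only -- Inastemp
    # does this with SSE/AVX/AVX-512/ALTIVEC intrinsics from C++.
    import numpy as np

    x = np.linspace(-2.0, 2.0, 8)
    mask = x > 0.0                      # per-lane condition
    result = np.where(mask, np.sqrt(np.maximum(x, 0.0)), x * x)
    # Scalar equivalent of the branch:
    #   if x > 0: result = sqrt(x)
    #   else:     result = x * x
    ```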

  17. About vectors

    CERN Document Server

    Hoffmann, Banesh

    1975-01-01

    From his unusual beginning in "Defining a vector" to his final comments on "What then is a vector?" author Banesh Hoffmann has written a book that is provocative and unconventional. In his emphasis on the unresolved issue of defining a vector, Hoffmann mixes pure and applied mathematics without using calculus. The result is a treatment that can serve as a supplement and corrective to textbooks, as well as collateral reading in all courses that deal with vectors. Major topics include vectors and the parallelogram law; algebraic notation and basic ideas; vector algebra; scalars and scalar p…

  18. Combining deep residual neural network features with supervised machine learning algorithms to classify diverse food image datasets.

    Science.gov (United States)

    McAllister, Patrick; Zheng, Huiru; Bond, Raymond; Moorhead, Anne

    2018-04-01

    Obesity is increasing worldwide and can cause many chronic conditions such as type-2 diabetes, heart disease, sleep apnea, and some cancers. Monitoring dietary intake through food logging is a key method for maintaining a healthy lifestyle to prevent and manage obesity. Computer vision methods have been applied to food logging to automate image classification for monitoring dietary intake. In this work we applied pretrained ResNet-152 and GoogleNet convolutional neural networks (CNNs), initially trained on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset with the MatConvNet package, to extract features from the food image datasets Food-5K, Food-11, RawFooT-DB, and Food-101. Deep features were extracted from the CNNs and used to train machine learning classifiers including an artificial neural network (ANN), support vector machine (SVM), Random Forest, and Naive Bayes. Results show that ResNet-152 deep features with an RBF-kernel SVM can detect food items with 99.4% accuracy on the Food-5K validation dataset, and with 98.8% on the Food-5K evaluation dataset using ANN, SVM-RBF, and Random Forest classifiers. Trained with ResNet-152 features, an ANN achieves 91.34% and 99.28% accuracy on the Food-11 and RawFooT-DB datasets respectively, and an RBF-kernel SVM achieves 64.98% on the Food-101 dataset. This research makes clear that deep CNN features can be used efficiently for diverse food item image classification. The work presented here shows that pretrained ResNet-152 features provide sufficient generalisation power when applied to a range of food image classification tasks. Copyright © 2018 Elsevier Ltd. All rights reserved.
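
    A sketch of the deep-features-plus-classifier pipeline, with PyTorch/torchvision assumed as a stand-in for the MatConvNet setup used in the paper (torchvision ≥ 0.13 is assumed for the weights API):

    ```python
    # Sketch: pretrained-CNN features feeding a classical classifier.
    # The paper used MatConvNet; torchvision is assumed here as a
    # stand-in, and dataset loading/preprocessing is omitted.
    import torch
    import torchvision.models as models
    from sklearn.svm import SVC

    # Load ImageNet-pretrained ResNet-152 and drop the final FC layer so
    # the network exposes 2048-d pooled features for the classifier.
    resnet = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
    resnet.eval()
    backbone = torch.nn.Sequential(*list(resnet.children())[:-1])

    def deep_features(images):            # images: (n, 3, 224, 224), preprocessed
        with torch.no_grad():
            f = backbone(images)          # (n, 2048, 1, 1)
        return f.flatten(1).numpy()       # (n, 2048)

    # With image tensors X_img and labels y (e.g., food / non-food):
    # clf = SVC(kernel="rbf").fit(deep_features(X_img), y)
    ```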

  19. SU-C-18A-04: 3D Markerless Registration of Lung Based On Coherent Point Drift: Application in Image Guided Radiotherapy

    International Nuclear Information System (INIS)

    Nasehi Tehrani, J; Wang, J; Guo, X; Yang, Y

    2014-01-01

    Purpose: This study evaluated a new probabilistic non-rigid registration method called coherent point drift for real-time 3D markerless registration of lung motion during radiotherapy. Methods: The 4DCT image datasets from Dir-lab (www.dir-lab.com) were used to create 3D boundary element models of the lungs. In the first step, the 3D surfaces of the lungs in respiration phases T0 and T50 were segmented and divided into a finite number of linear triangular elements. Each triangle is a two-dimensional object with three vertices (each vertex has three degrees of freedom). One of the main features of lung motion is velocity coherence, so the vertices creating the mesh of the lungs should share the features and degrees of freedom of the lung structure; that is, vertices close to each other tend to move coherently. In the next step, we implemented coherent point drift, a probabilistic non-rigid registration method, to calculate the nonlinear displacement of vertices between different expiratory phases. Results: The method was applied to images of the 10 patients in the Dir-lab dataset. The normal distributions of vertices to the origin for each expiratory stage were calculated. The results show that the maximum registration error between different expiratory phases is less than 0.4 mm (0.38 mm SI, 0.33 mm AP, 0.29 mm RL). This method is a reliable way of calculating the displacement vectors and the degrees of freedom (DOFs) of the lung structure in radiotherapy. Conclusions: We evaluated a new 3D registration method for the distributed set of vertices of a lung mesh. In this technique, the velocity coherence of lung motion is inserted as a penalty in the regularization function. The results indicate that high registration accuracy is achievable with CPD. This method is helpful for calculating displacement vectors and analyzing possible physiological and anatomical changes during treatment.
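
    For prototyping, the coherent point drift step can be reproduced with the open-source pycpd package (an assumption; the abstract does not name an implementation). The deformable variant regularises the displacement field so that nearby vertices move coherently, matching the velocity-coherence assumption above.

    ```python
    # Sketch: non-rigid coherent point drift between two lung surface
    # meshes, using the open-source pycpd package (an assumption -- not
    # necessarily the authors' implementation). source/target are (n, 3)
    # vertex arrays for, e.g., the T0 and T50 respiratory phases.
    from pycpd import DeformableRegistration

    def register_phases(source, target):
        # alpha penalises incoherent motion (smoothness of the displacement
        # field); beta sets the width of the Gaussian coherence kernel.
        reg = DeformableRegistration(X=target, Y=source, alpha=2.0, beta=2.0)
        warped, _ = reg.register()         # source vertices moved onto the target
        return warped, warped - source     # warped mesh and per-vertex displacements
    ```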

  20. Vectors

    DEFF Research Database (Denmark)

    Boeriis, Morten; van Leeuwen, Theo

    2017-01-01

    This article revisits the concept of vectors, which, in Kress and van Leeuwen's Reading Images (2006), plays a crucial role in distinguishing between 'narrative', action-oriented processes and 'conceptual', state-oriented processes. The use of this concept in image analysis has usually focused … should be taken into account in discussing 'reactions', which Kress and van Leeuwen link only to eyeline vectors. Finally, the question can be raised as to whether actions are always realized by vectors. Drawing on a re-reading of Rudolf Arnheim's account of vectors, these issues are outlined …

  1. Dual Vector Spaces and Physical Singularities

    Science.gov (United States)

    Rowlands, Peter

    Though we often refer to 3-D vector space as constructed from points, there is no mechanism from within its definition for doing this. In particular, space, on its own, cannot accommodate the singularities that we call fundamental particles. This requires a commutative combination of space as we know it with another 3-D vector space, which is dual to the first (in a physical sense). The combination of the two spaces generates a nilpotent quantum mechanics/quantum field theory, which incorporates exact supersymmetry and ultimately removes the anomalies due to self-interaction. Among the many natural consequences of the dual space formalism are half-integral spin for fermions, zitterbewegung, Berry phase and a zero norm Berwald-Moor metric for fermionic states.

  2. RVMAB: Using the Relevance Vector Machine Model Combined with Average Blocks to Predict the Interactions of Proteins from Protein Sequences

    Directory of Open Access Journals (Sweden)

    Ji-Yong An

    2016-05-01

    Protein-Protein Interactions (PPIs) play essential roles in most cellular processes. Knowledge of PPIs is becoming increasingly important, which has prompted the development of technologies capable of discovering large-scale PPIs. Although many high-throughput biological technologies have been proposed to detect PPIs, they have unavoidable shortcomings, including cost, time intensity, and inherently high false positive and false negative rates. For these reasons, in silico methods are attracting much attention due to their good performance in predicting PPIs. In this paper, we propose a novel computational method known as RVM-AB that combines the Relevance Vector Machine (RVM) model and Average Blocks (AB) to predict PPIs from protein sequences. The main improvements are the result of representing protein sequences using the AB feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM) based classifier. We performed five-fold cross-validation experiments on yeast and Helicobacter pylori datasets, and achieved very high accuracies of 92.98% and 95.58% respectively, which is significantly better than previous works. In addition, we obtained good prediction accuracies of 88.31%, 89.46%, 91.08%, 91.55%, and 94.81% on five other independent datasets (C. elegans, M. musculus, H. sapiens, H. pylori, and E. coli) for cross-species prediction. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-AB method is clearly better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can serve as an automatic decision support tool. To facilitate extensive studies for future proteomics research, we developed…
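
    The Average Blocks descriptor itself is compact enough to sketch: the L × 20 PSSM is cut into a fixed number of equal row bands and each band is column-averaged, giving a fixed-length vector regardless of sequence length. The block count below is an illustrative assumption.

    ```python
    # Sketch: Average Blocks (AB) feature from a PSSM. The L x 20 matrix
    # is split into n_blocks equal row bands and each band is column-
    # averaged, yielding a fixed 20 * n_blocks vector independent of
    # sequence length. n_blocks = 20 is an illustrative assumption.
    import numpy as np

    def average_blocks(pssm, n_blocks=20):
        """pssm: (L, 20) array of position-specific scores."""
        bands = np.array_split(pssm, n_blocks, axis=0)                 # ~equal row bands
        return np.concatenate([band.mean(axis=0) for band in bands])   # (20 * n_blocks,)

    pssm = np.random.rand(137, 20)          # stand-in for a real PSSM
    print(average_blocks(pssm).shape)       # (400,)
    ```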

  3. On the non-Gaussian correlation of the primordial curvature perturbation with vector fields

    DEFF Research Database (Denmark)

    Kumar Jain, Rajeev; Sloth, Martin Snoager

    2013-01-01

    We compute the three-point cross-correlation function of the primordial curvature perturbation generated during inflation with two powers of a vector field in a model where conformal invariance is broken by a direct coupling of the vector field with the inflaton. If the vector field is identified with the electromagnetic field, this correlation would be a non-Gaussian signature of primordial magnetic fields generated during inflation. We find that the signal is maximized for the flattened configuration where the wave number of the curvature perturbation is twice that of the vector field, and in this limit …

  4. Simultaneous feature selection and parameter optimisation using an artificial ant colony: case study of melting point prediction

    Directory of Open Access Journals (Sweden)

    Nigsch Florian

    2008-10-01

    Background: We present a novel feature selection algorithm, Winnowing Artificial Ant Colony (WAAC), that performs simultaneous feature selection and model parameter optimisation for the development of predictive quantitative structure-property relationship (QSPR) models. The WAAC algorithm is an extension of the modified ant colony algorithm of Shen et al. (J Chem Inf Model 2005, 45: 1024–1029). We test the ability of the algorithm to develop a predictive partial least squares model for the Karthikeyan dataset (J Chem Inf Model 2005, 45: 581–590) of melting point values. We also test its ability to perform feature selection on a support vector machine model for the same dataset. Results: Starting from an initial set of 203 descriptors, the WAAC algorithm selected a PLS model with 68 descriptors which has an RMSE on an external test set of 46.6°C and R2 of 0.51. The number of components chosen for the model was 49, which was close to optimal for this feature selection. The selected SVM model has 28 descriptors (cost of 5, ε of 0.21) and an RMSE of 45.1°C and R2 of 0.54. This model outperforms a kNN model (RMSE of 48.3°C, R2 of 0.47) for the same data and has similar performance to a Random Forest model (RMSE of 44.5°C, R2 of 0.55). However, it is much less prone to bias at the extremes of the range of melting points, as shown by the slope of the line through the residuals: -0.43 for WAAC/SVM, -0.53 for Random Forest. Conclusion: With a careful choice of objective function, the WAAC algorithm can be used to optimise machine learning and regression models that suffer from overfitting. Where model parameters also need to be tuned, as is the case with support vector machine and partial least squares models, it can optimise these simultaneously. The moving probabilities used by the algorithm are easily interpreted in terms of the best and current models of the ants, and the winnowing procedure promotes the removal of irrelevant descriptors.

  5. Simultaneous feature selection and parameter optimisation using an artificial ant colony: case study of melting point prediction.

    Science.gov (United States)

    O'Boyle, Noel M; Palmer, David S; Nigsch, Florian; Mitchell, John Bo

    2008-10-29

    We present a novel feature selection algorithm, Winnowing Artificial Ant Colony (WAAC), that performs simultaneous feature selection and model parameter optimisation for the development of predictive quantitative structure-property relationship (QSPR) models. The WAAC algorithm is an extension of the modified ant colony algorithm of Shen et al. (J Chem Inf Model 2005, 45: 1024-1029). We test the ability of the algorithm to develop a predictive partial least squares model for the Karthikeyan dataset (J Chem Inf Model 2005, 45: 581-590) of melting point values. We also test its ability to perform feature selection on a support vector machine model for the same dataset. Starting from an initial set of 203 descriptors, the WAAC algorithm selected a PLS model with 68 descriptors which has an RMSE on an external test set of 46.6 degrees C and R2 of 0.51. The number of components chosen for the model was 49, which was close to optimal for this feature selection. The selected SVM model has 28 descriptors (cost of 5, epsilon of 0.21) and an RMSE of 45.1 degrees C and R2 of 0.54. This model outperforms a kNN model (RMSE of 48.3 degrees C, R2 of 0.47) for the same data and has similar performance to a Random Forest model (RMSE of 44.5 degrees C, R2 of 0.55). However it is much less prone to bias at the extremes of the range of melting points as shown by the slope of the line through the residuals: -0.43 for WAAC/SVM, -0.53 for Random Forest. With a careful choice of objective function, the WAAC algorithm can be used to optimise machine learning and regression models that suffer from overfitting. Where model parameters also need to be tuned, as is the case with support vector machine and partial least squares models, it can optimise these simultaneously. The moving probabilities used by the algorithm are easily interpreted in terms of the best and current models of the ants, and the winnowing procedure promotes the removal of irrelevant descriptors.
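
    A heavily simplified sketch of the winnowing idea, assuming scikit-learn: per-feature inclusion probabilities are sampled by each ant and nudged toward the best-scoring subset. The update rule and scoring are illustrative assumptions, not the exact WAAC procedure.

    ```python
    # Heavily simplified sketch of ant-colony feature selection in the
    # spirit of WAAC: each "ant" samples a feature subset from per-feature
    # inclusion probabilities, subsets are scored by cross-validation, and
    # probabilities move toward the best subset ("winnowing"). The update
    # rule and scoring here are illustrative assumptions.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVR

    def waac_like(X, y, n_ants=10, n_iter=20, lr=0.2, seed=1):
        rng = np.random.default_rng(seed)
        score_of = lambda m: cross_val_score(SVR(kernel="rbf"), X[:, m], y, cv=3).mean()
        p = np.full(X.shape[1], 0.5)                   # per-feature inclusion probabilities
        best_mask = np.ones(X.shape[1], dtype=bool)    # start from the full feature set
        best_score = score_of(best_mask)
        for _ in range(n_iter):
            for _ant in range(n_ants):
                mask = rng.random(X.shape[1]) < p      # each ant samples a subset
                if not mask.any():
                    continue
                s = score_of(mask)
                if s > best_score:
                    best_mask, best_score = mask, s
            p = (1 - lr) * p + lr * best_mask          # "winnow" toward the best subset
        return best_mask, best_score
    ```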

  6. Leaking Underground Storage Tank Points, Region 9 Indian Country, 2017, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — This GIS dataset contains point features that represent Leaking Underground Storage Tanks in US EPA Region 9 Indian Country. This dataset contains facility name and...

  7. Vector regression introduced

    Directory of Open Access Journals (Sweden)

    Mok Tik

    2014-06-01

    This study formulates regression of vector data that will enable statistical analysis of various geodetic phenomena such as polar motion, ocean currents, typhoon/hurricane tracking, crustal deformations, and precursory earthquake signals. The observed vector variable of an event (the dependent vector variable) is expressed as a function of a number of hypothesized phenomena realized also as vector variables (independent vector variables) and/or scalar variables that are likely to impact the dependent vector variable. The proposed representation has the unique property of solving the coefficients of the independent vector variables (explanatory variables) also as vectors, hence it supersedes multivariate multiple regression models, in which the unknown coefficients are scalar quantities. For the solution, complex numbers are used to represent vector information, and the method of least squares is deployed to estimate the vector model parameters after transforming the complex vector regression model into a real vector regression model through isomorphism. Various operational statistics for testing the predictive significance of the estimated vector parameter coefficients are also derived. A simple numerical example demonstrates the use of the proposed vector regression analysis in modeling typhoon paths.
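
    The core trick, representing planar vectors as complex numbers so that the regression coefficients are themselves vectors (rotations plus scalings), can be sketched with NumPy's complex least squares on synthetic data:

    ```python
    # Sketch of the complex-number trick: 2-D vector observations become
    # complex numbers, so least squares returns complex coefficients that
    # rotate and scale the explanatory vectors. Data here are synthetic.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 200
    u = rng.normal(size=n) + 1j * rng.normal(size=n)   # independent vector variable
    v = rng.normal(size=n) + 1j * rng.normal(size=n)   # second explanatory vector

    beta_true = np.array([0.5 + 0.8j, -0.3 + 0.1j])    # vector-valued coefficients
    w = beta_true[0] * u + beta_true[1] * v            # dependent vector variable
    w += 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))

    A = np.column_stack([u, v])
    beta_hat, *_ = np.linalg.lstsq(A, w, rcond=None)   # complex least squares
    print(beta_hat)   # close to beta_true; abs() gives scaling, angle() the rotation
    ```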

  8. The Lunar Source Disk: Old Lunar Datasets on a New CD-ROM

    Science.gov (United States)

    Hiesinger, H.

    1998-01-01

    …datasets to a selected standard geometry in order to create an "image-cube"-like data pool for further interpretation. The starting point was a number of datasets on a CD-ROM published by the Lunar Consortium. The task of creating a uniform data pool was further complicated by some missing or wrong references and keys on the Lunar Consortium CD, as well as erroneous reproduction of some datasets in the literature.

  9. The waiting room: vector for health education? The general practitioner's point of view.

    Science.gov (United States)

    Gignon, Maxime; Idris, Hadjila; Manaouil, Cecile; Ganry, Olivier

    2012-09-18

    General practitioners (GPs) play a central role in disseminating information and most health policies are tending to develop this pivotal role of GPs in dissemination of health-related information to the public. The objective of this study was to evaluate use of the waiting room by GPs as a vector for health promotion. A cross-sectional study was conducted on a representative sample of GPs using semi-structured, face-to-face interviews. A structured grid was used to describe the documents. Quantitative and qualitative analysis was performed. Sixty GPs participated in the study. They stated that a waiting room had to be pleasant, but agreed that it was a useful vector for providing health information. The GPs stated that they distributed documents designed to improve patient care by encouraging screening, providing health education information and addressing delicate subjects more easily. However, some physicians believed that this information can sometimes make patients more anxious. A large number of documents were often available, covering a variety of topics. General practitioners intentionally use their waiting rooms to disseminate a broad range of health-related information, but without developing a clearly defined strategy. It would be interesting to correlate the topics addressed by waiting room documents with prevention practices introduced during the visit.

  10. Evaluation of terrestrial photogrammetric point clouds derived from thermal imagery

    Science.gov (United States)

    Metcalf, Jeremy P.; Olsen, Richard C.

    2016-05-01

    Computer vision and photogrammetric techniques have been widely applied to digital imagery producing high density 3D point clouds. Using thermal imagery as input, the same techniques can be applied to infrared data to produce point clouds in 3D space, providing surface temperature information. The work presented here is an evaluation of the accuracy of 3D reconstruction of point clouds produced using thermal imagery. An urban scene was imaged over an area at the Naval Postgraduate School, Monterey, CA, viewing from above as with an airborne system. Terrestrial thermal and RGB imagery were collected from a rooftop overlooking the site using a FLIR SC8200 MWIR camera and a Canon T1i DSLR. In order to spatially align each dataset, ground control points were placed throughout the study area using Trimble R10 GNSS receivers operating in RTK mode. Each image dataset is processed to produce a dense point cloud for 3D evaluation.

  11. The divergence theorem for unbounded vector fields

    OpenAIRE

    De Pauw, Thierry; Pfeffer, Washek F.

    2007-01-01

    In the context of Lebesgue integration, we derive the divergence theorem for unbounded vector fields that can have singularities at every point of a compact set whose Minkowski content of codimension greater than two is finite. The resulting integration by parts theorem is applied to removable sets of holomorphic and harmonic functions.

  12. A scalable and multi-purpose point cloud server (PCS) for easier and faster point cloud data management and processing

    Science.gov (United States)

    Cura, Rémi; Perret, Julien; Paparoditis, Nicolas

    2017-05-01

    In addition to more traditional geographical data such as images (rasters) and vectors, point cloud data are becoming increasingly available. Such data are appreciated for their precision and truly three-dimensional (3D) nature. However, managing point clouds can be difficult due to scaling problems and the specificities of this data type. Several methods exist but are usually fairly specialised and solve only one aspect of the management problem. In this work, we propose a comprehensive and efficient point cloud management system based on a database server that works on groups of points (patches) rather than individual points. This system is specifically designed to cover the basic needs of point cloud users: fast loading, compressed storage, powerful patch and point filtering, easy data access and exporting, and integrated processing. Moreover, the proposed system fully integrates metadata (like sensor position) and can conjointly use point clouds with other geospatial data, such as images, vectors, topology and other point clouds. Point cloud (parallel) processing can be done in-base with fast prototyping capabilities. Lastly, the system is built on open source technologies; therefore it can be easily extended and customised. We test the proposed system with several billion points obtained from Lidar (aerial and terrestrial) and stereo-vision. We demonstrate loading speeds in the ~50 million pts/h per process range, transparent-for-user compression at ratios greater than 2:1 to 4:1, patch filtering in the 0.1 to 1 s range, and output in the 0.1 million pts/s per process range, along with classical processing methods, such as object detection.

  13. Mridangam stroke dataset

    OpenAIRE

    CompMusic

    2014-01-01

    The audio examples were recorded from a professional Carnatic percussionist in semi-anechoic studio conditions by Akshay Anantapadmanabhan using SM-58 microphones and an H4n ZOOM recorder. The audio was sampled at 44.1 kHz and stored as 16-bit wav files. The dataset can be used for training models for each Mridangam stroke. A detailed description of the Mridangam and its strokes can be found in the paper below. A part of the dataset was used in the following paper: Akshay Anantapadman…

  14. Kochen-Specker vectors

    International Nuclear Information System (INIS)

    Pavicic, Mladen; Merlet, Jean-Pierre; McKay, Brendan; Megill, Norman D

    2005-01-01

    We give a constructive and exhaustive definition of Kochen-Specker (KS) vectors in a Hilbert space of any dimension, as well as of all the remaining vectors of the space. KS vectors are elements of any set of orthonormal states, i.e., vectors in an n-dimensional Hilbert space H_n, n ≥ 3, to which it is impossible to assign 1s and 0s in such a way that no two mutually orthogonal vectors from the set are both assigned 1 and that not all mutually orthogonal vectors are assigned 0. Our constructive definition of such KS vectors is based on algorithms that generate MMP diagrams corresponding to blocks of orthogonal vectors in R^n, on algorithms that single out those diagrams on which algebraic (0)-(1) states cannot be defined, and on algorithms that solve nonlinear equations describing the orthogonalities of the vectors by means of statistically polynomially complex interval analysis and self-teaching programs. The algorithms are limited neither by the number of dimensions nor by the number of vectors. To demonstrate the power of the algorithms, all four-dimensional KS vector systems containing up to 24 vectors were generated and described, all three-dimensional vector systems containing up to 30 vectors were scanned, and several general properties of KS vectors were found.

  15. 2008 TIGER/Line Nationwide Dataset

    Data.gov (United States)

    California Natural Resource Agency — This dataset contains a nationwide build of the 2008 TIGER/Line datasets from the US Census Bureau downloaded in April 2009. The TIGER/Line Shapefiles are an extract...

  16. Automated estimation of leaf distribution for individual trees based on TLS point clouds

    Science.gov (United States)

    Koma, Zsófia; Rutzinger, Martin; Bremer, Magnus

    2017-04-01

    Light Detection and Ranging (LiDAR), especially ground-based LiDAR (Terrestrial Laser Scanning - TLS), is an operationally used and widely available measurement tool supporting forest inventory updating and research in forest ecology. High-resolution point clouds from TLS already represent single leaves, which can be used for more precise estimation of the Leaf Area Index (LAI) and for more accurate biomass estimation. However, a methodology for extracting single leaves from the unclassified point clouds of individual trees is still missing. The aim of this study is to present a novel segmentation approach to extract single leaves and derive features related to leaf morphology (such as area, slope, length and width) for each single leaf from TLS point cloud data. For the study, two exemplary single trees were scanned in leaf-on condition on the university campus of Innsbruck during calm wind conditions. A northern red oak (Quercus rubra) was scanned by a discrete-return Optech ILRIS-3D TLS scanner and a tulip tree (Liriodendron tulipifera) with a Riegl VZ-6000 scanner. During the scanning campaign a reference dataset was measured in parallel to scanning: 230 leaves were randomly collected around the lower branches of the tree and photos were taken. The developed workflow steps were the following: in the first step, normal vectors and eigenvalues were calculated based on a user-specified neighborhood. Then, using the direction of the largest eigenvalue, outliers (i.e., ghost points) were removed. After that, region-growing segmentation based on the curvature and the angles between normal vectors was applied to the filtered point cloud. A RANSAC plane-fitting algorithm was applied to each segment in order to extract the segment-based normal vectors. Using the related features of the calculated segments, the stem and branches were labeled as non-leaf and the other segments were classified as leaf. The validation of the different segmentation…
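
    The first workflow step, normals and eigenvalues from a local neighbourhood, is standard covariance analysis; a compact sketch, with scipy's cKDTree assumed for the neighbour search and an illustrative neighbourhood size:

    ```python
    # Sketch of the first processing step: per-point normal vectors and
    # eigenvalues from the covariance of a local neighbourhood. scipy's
    # cKDTree is assumed for the neighbour search; k = 20 is illustrative.
    import numpy as np
    from scipy.spatial import cKDTree

    def normals_and_eigenvalues(points, k=20):
        """points: (n, 3). Returns (n, 3) normals and (n, 3) ascending eigenvalues."""
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)
        normals = np.empty_like(points)
        eigvals = np.empty((len(points), 3))
        for i, nbrs in enumerate(idx):
            cov = np.cov(points[nbrs].T)             # 3x3 neighbourhood covariance
            w, v = np.linalg.eigh(cov)               # eigenvalues in ascending order
            eigvals[i] = w
            normals[i] = v[:, 0]                     # smallest-eigenvalue direction
        return normals, eigvals
    ```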

  17. Automatic markerless registration of point clouds with semantic-keypoint-based 4-points congruent sets

    Science.gov (United States)

    Ge, Xuming

    2017-08-01

    The coarse registration of point clouds from urban building scenes has become a key topic in applications of terrestrial laser scanning technology. Sampling-based algorithms in the random sample consensus (RANSAC) model have emerged as mainstream solutions to address coarse registration problems. In this paper, we propose a novel combined solution to automatically align two markerless point clouds from building scenes. Firstly, the method segments non-ground points from ground points. Secondly, the proposed method detects feature points from each cross section and then obtains semantic keypoints by connecting feature points with specific rules. Finally, the detected semantic keypoints from two point clouds act as inputs to a modified 4PCS algorithm. Examples are presented and the results compared with those of K-4PCS to demonstrate the main contributions of the proposed method, which are the extension of the original 4PCS to handle heavy datasets and the use of semantic keypoints to improve K-4PCS in relation to registration accuracy and computational efficiency.

  18. Design of an audio advertisement dataset

    Science.gov (United States)

    Fu, Yutao; Liu, Jihong; Zhang, Qi; Geng, Yuting

    2015-12-01

    As more and more advertisements swarm into radio broadcasts, it is necessary to establish an audio advertising dataset which can be used to analyze and classify advertisements. A method for establishing a complete audio advertising dataset is presented in this paper. The dataset is divided into four different kinds of advertisements. Each advertisement sample is given in *.wav file format and annotated with a txt file which contains its file name, sampling frequency, channel number, broadcasting time and its class. The soundness of the classification of the advertisements in this dataset is demonstrated by clustering the different advertisements based on Principal Component Analysis (PCA). The experimental results show that this audio advertisement dataset offers a reliable set of samples for related audio advertisement experimental studies.

  19. Integration of Point Clouds Dataset from Different Sensors

    Science.gov (United States)

    Abdullah, C. K. A. F. Che Ku; Baharuddin, N. Z. S.; Ariff, M. F. M.; Majid, Z.; Lau, C. L.; Yusoff, A. R.; Idris, K. M.; Aspuri, A.

    2017-02-01

    Laser scanner technology has become an option in the process of collecting data nowadays. It comprises the Airborne Laser Scanner (ALS) and the Terrestrial Laser Scanner (TLS). An ALS such as the Phoenix AL3-32 can provide accurate information from the rooftop viewpoint, while a TLS such as the Leica C10 can provide complete data for the building facade. If both are integrated, it is possible to produce more accurate data. The focus of this study is to integrate both types of data acquisition, ALS and TLS, and to determine the accuracy of the data obtained. The final results acquired are used to generate three-dimensional (3D) building models. The scope of this study focuses on data acquisition of the UTM Eco-home through laser scanning methods: ALS scanning the roof and TLS scanning the building facade. Both devices are used to ensure that no part of the building is left unscanned. In the data integration process, both are registered using selected points among the man-made features which are clearly visible, in Cyclone 7.3 software. The accuracy of the integrated data is determined based on an accuracy assessment carried out using man-made registration methods. The integration process achieves accuracies below 0.04 m. The integrated data are then used to generate a 3D model of the UTM Eco-home building using SketchUp software. In conclusion, the combination and integration of data acquisition between ALS and TLS produces accurate integrated data which can be used to generate a 3D model of the UTM Eco-home. For visualization purposes, the generated 3D building model is prepared in Level of Detail 3 (LOD3), as recommended by City Geography Markup Language (CityGML).

  20. Failure prognostics by support vector regression of time series data under stationary/nonstationary environmental and operational conditions

    International Nuclear Information System (INIS)

    Liu, Jie

    2015-01-01

    This Ph.D. work is motivated by the possibility of monitoring the condition of components of energy systems for their extended and safe use, under proper operating practice and adequate maintenance policies. The aim is to develop a Support Vector Regression (SVR)-based framework for predicting time series data under stationary/nonstationary environmental and operational conditions. Single SVR and SVR-based ensemble approaches are developed to tackle the prediction problem based on both small and large datasets. Strategies are proposed for adaptively updating the single SVR and SVR-based ensemble models in the presence of pattern drifts. Comparisons with other online learning approaches for kernel-based modelling are provided with reference to time series data from a critical component in Nuclear Power Plants (NPPs) provided by Electricité de France (EDF). The results show that the proposed approaches achieve comparable prediction results, in terms of the Mean Squared Error (MSE) and Mean Relative Error (MRE), in much less computation time. Furthermore, by analyzing the geometrical meaning of the Feature Vector Selection (FVS) method proposed in the literature, a novel geometrically interpretable kernel method, named Reduced Rank Kernel Ridge Regression-II (RRKRR-II), is proposed to describe the linear relations between a predicted value and the predicted values of the Feature Vectors (FVs) selected by FVS. Comparisons with several kernel methods on a number of public datasets demonstrate the good prediction accuracy and the ease of tuning of the hyper-parameters of RRKRR-II. (author)
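
    A minimal sketch of single-SVR one-step-ahead prediction on a sliding window, with scikit-learn standing in for the thesis framework (the ensemble and adaptive-update strategies are not reproduced):

    ```python
    # Sketch: one-step-ahead time-series prediction with SVR on a sliding
    # window of past values. scikit-learn stands in for the thesis
    # framework; the ensemble/online-update strategies are not shown.
    import numpy as np
    from sklearn.svm import SVR

    def make_windows(series, width=10):
        X = np.array([series[i:i + width] for i in range(len(series) - width)])
        y = series[width:]
        return X, y

    t = np.linspace(0, 20, 500)
    series = np.sin(t) + 0.1 * np.random.default_rng(3).normal(size=t.size)

    X, y = make_windows(series)
    split = int(0.8 * len(y))
    model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:split], y[:split])
    pred = model.predict(X[split:])
    mse = np.mean((pred - y[split:]) ** 2)   # Mean Squared Error, as in the thesis
    ```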

  1. Elementary vectors

    CERN Document Server

    Wolstenholme, E Œ

    1978-01-01

    Elementary Vectors, Third Edition serves as an introductory course in vector analysis and is intended to present the theoretical and application aspects of vectors. The book covers topics that rigorously explain and provide definitions, principles, equations, and methods in vector analysis. Applications of vector methods to simple kinematical and dynamical problems; central forces and orbits; and solutions to geometrical problems are discussed as well. This edition of the text also provides an appendix, intended for students, which the author hopes to bridge the gap between theory and appl

  2. Background qualitative analysis of the European reference life cycle database (ELCD) energy datasets - part II: electricity datasets.

    Science.gov (United States)

    Garraín, Daniel; Fazio, Simone; de la Rúa, Cristina; Recchioni, Marco; Lechón, Yolanda; Mathieux, Fabrice

    2015-01-01

    The aim of this paper is to identify areas of potential improvement of the European Reference Life Cycle Database (ELCD) electricity datasets. The revision is based on the data quality indicators described by the International Life Cycle Data system (ILCD) Handbook, applied on a sectorial basis. These indicators evaluate the technological, geographical and time-related representativeness of the dataset and its appropriateness in terms of completeness, precision and methodology. Results show that the ELCD electricity datasets are of very good quality in general terms; nevertheless, some findings and recommendations for improving the quality of the Life Cycle Inventories have been derived. Moreover, these results confirm the quality of the electricity-related datasets for any LCA practitioner, and provide insights into the limitations and assumptions underlying the modelling of the datasets. Given this information, the LCA practitioner will be able to decide whether the use of the ELCD electricity datasets is appropriate to the goal and scope of the analysis to be conducted. The methodological approach would also be useful for dataset developers and reviewers, in order to improve the overall Data Quality Requirements of databases.

  3. Nephele: genotyping via complete composition vectors and MapReduce

    Directory of Open Access Journals (Sweden)

    Mardis Scott

    2011-08-01

    Background: Current sequencing technology makes it practical to sequence many samples of a given organism, raising new challenges for the processing and interpretation of large genomics data sets with associated metadata. Traditional computational phylogenetic methods are ideal for studying the evolution of gene/protein families and using those to infer the evolution of an organism, but are less than ideal for the study of the whole organism, mainly due to the presence of insertions/deletions/rearrangements. These methods provide the researcher with the ability to group a set of samples into distinct genotypic groups based on sequence similarity, which can then be associated with metadata, such as host information, pathogenicity, and time or location of occurrence. Genotyping is critical to understanding, at a genomic level, the origin and spread of infectious diseases. Increasingly, genotyping is coming into use for disease surveillance activities, as well as for microbial forensics. The classic genotyping approach has been based on phylogenetic analysis, starting with a multiple sequence alignment. Genotypes are then established by expert examination of phylogenetic trees. However, these traditional single-processor methods are suboptimal for the rapidly growing sequence datasets being generated by next-generation DNA sequencing machines, because their computational complexity increases quickly with the number of sequences. Results: Nephele is a suite of tools that uses the complete composition vector algorithm to represent each sequence in the dataset as a vector derived from its constituent k-mers, bypassing the need for multiple sequence alignment, and affinity propagation clustering to group the sequences into genotypes based on a distance measure over the vectors. Our methods produce results that correlate well with expert-defined clades or genotypes, at a fraction of the computational cost of traditional phylogenetic methods run on…
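
    The alignment-free pipeline can be outlined briefly; the plain k-mer frequency vector below is a simplification of Nephele's complete composition vector, which additionally subtracts a Markov background expectation.

    ```python
    # Sketch of the alignment-free pipeline: sequences -> k-mer frequency
    # vectors -> affinity propagation clustering. Plain k-mer frequencies
    # are a simplification of Nephele's complete composition vectors (the
    # CCV also subtracts a Markov background expectation).
    from itertools import product
    import numpy as np
    from sklearn.cluster import AffinityPropagation

    def kmer_vector(seq, k=3, alphabet="ACGT"):
        kmers = ["".join(p) for p in product(alphabet, repeat=k)]
        index = {km: i for i, km in enumerate(kmers)}
        v = np.zeros(len(kmers))
        for i in range(len(seq) - k + 1):
            if seq[i:i + k] in index:
                v[index[seq[i:i + k]]] += 1
        return v / max(v.sum(), 1)                  # normalised frequencies

    def genotype(sequences, k=3):
        X = np.array([kmer_vector(s, k) for s in sequences])
        labels = AffinityPropagation(random_state=0).fit_predict(X)
        return labels                               # one genotype id per sequence
    ```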

  4. Wall shear stress fixed points in cardiovascular fluid mechanics.

    Science.gov (United States)

    Arzani, Amirhossein; Shadden, Shawn C

    2018-05-17

    Complex blood flow in large arteries creates rich wall shear stress (WSS) vectorial features. WSS acts as a link between blood flow dynamics and the biology of various cardiovascular diseases. WSS has been of great interest in a wide range of studies and has been the most popular measure to correlate blood flow to cardiovascular disease. Recent studies have emphasized different vectorial features of WSS. However, fixed points in the WSS vector field have not received much attention. A WSS fixed point is a point on the vessel wall where the WSS vector vanishes. In this article, WSS fixed points are classified and the aspects by which they could influence cardiovascular disease are reviewed. First, the connection between WSS fixed points and the flow topology away from the vessel wall is discussed. Second, the potential role of time-averaged WSS fixed points in biochemical mass transport is demonstrated using the recent concept of Lagrangian WSS structures. Finally, simple measures are proposed to quantify the exposure of the endothelial cells to WSS fixed points. Examples from various arterial flow applications are demonstrated. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. The National Assessment of Shoreline Change: A GIS Compilation of Vector Shorelines and Associated Shoreline Change Data for the U.S. Gulf of Mexico

    Science.gov (United States)

    Miller, Tara L.; Morton, Robert A.; Sallenger, Asbury H.; Moore, Laura J.

    2004-01-01

    …Report 2004-1043) for additional information regarding methods and results. Data in this report are organized into data layers by state and are provided as single-point vector datasets with metadata. Vector shorelines may represent a compilation of data from one or more sources, and these sources are attributed in the dataset. All data are intended to be GIS-ready inasmuch as the data should not require any additional cleanup, formatting, or renaming of fields in order to use the data in a Geographic Information System (GIS). This project employs the Environmental Systems Research Institute's (ESRI) ArcView as its GIS mapping tool and contains several data layers (or themes) that are used to create a geographic view of the margin off the U.S. Gulf of Mexico. These vector data form a basemap comprised of polygon and line themes that include a U.S. coastline (1:80,000), U.S. cities, and state boundaries.

  6. Hawaii ESI: M_MAMPT (Marine Mammal Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for endangered Hawaiian monk seal pupping and haul-out sites. Vector points in this data set represent...

  7. Null point of discrimination in crustacean polarisation vision.

    Science.gov (United States)

    How, Martin J; Christy, John; Roberts, Nicholas W; Marshall, N Justin

    2014-07-15

    The polarisation of light is used by many species of cephalopods and crustaceans to discriminate objects or to communicate. Most visual systems with this ability, such as that of the fiddler crab, include receptors with photopigments that are oriented horizontally and vertically relative to the outside world. Photoreceptors in such an orthogonal array are maximally sensitive to polarised light with the same fixed e-vector orientation. Using opponent neural connections, this two-channel system may produce a single value of polarisation contrast and, consequently, it may suffer from null points of discrimination. Stomatopod crustaceans use a different system for polarisation vision, comprising at least four types of polarisation-sensitive photoreceptor arranged at 0, 45, 90 and 135 deg relative to each other, in conjunction with extensive rotational eye movements. This anatomical arrangement should not suffer from equivalent null points of discrimination. To test whether these two systems were vulnerable to null points, we presented the fiddler crab Uca heteropleura and the stomatopod Haptosquilla trispinosa with polarised looming stimuli on a modified LCD monitor. The fiddler crab was less sensitive to differences in the degree of polarised light when the e-vector was at -45 deg than when the e-vector was horizontal. In comparison, stomatopods showed no difference in sensitivity between the two stimulus types. The results suggest that fiddler crabs suffer from a null point of sensitivity, while stomatopods do not. © 2014. Published by The Company of Biologists Ltd.

  8. Vector mass in curved space-times

    International Nuclear Information System (INIS)

    Maia, M.D.

    The use of Poincaré symmetry appears to be incompatible with the presence of the gravitational field. The consequent problem of the definition of the mass operator is analysed, and an alternative definition based on constant-curvature tangent spaces is proposed. In the case where the space-time has no Killing vector fields, four independent mass operators can be defined at each point. (Author)

  9. Evaluation of Uncertainty in Precipitation Datasets for New Mexico, USA

    Science.gov (United States)

    Besha, A. A.; Steele, C. M.; Fernald, A.

    2014-12-01

    Climate change, population growth and other factors are endangering water availability and sustainability in semiarid/arid areas, particularly in the southwestern United States. Wide spatial and temporal coverage of precipitation measurements is key for regional water budget analysis and hydrological operations, which are themselves valuable tools for water resource planning and management. Rain gauge measurements are usually reliable and accurate at a point. They measure rainfall continuously, but spatial sampling is limited. Ground-based radar and satellite remotely sensed precipitation have wide spatial and temporal coverage. However, these measurements are indirect and subject to errors because of equipment, meteorological variability, the heterogeneity of the land surface itself, and the lack of regular recording. This study seeks to understand precipitation uncertainty and, in doing so, lessen uncertainty propagation into hydrological applications and operations. We reviewed, compared and evaluated the TRMM (Tropical Rainfall Measuring Mission) precipitation products, NOAA's (National Oceanic and Atmospheric Administration) Global Precipitation Climatology Centre (GPCC) monthly precipitation dataset, PRISM (Parameter elevation Regression on Independent Slopes Model) data and data from individual climate stations including Cooperative Observer Program (COOP), Remote Automated Weather Stations (RAWS), Soil Climate Analysis Network (SCAN) and Snowpack Telemetry (SNOTEL) stations. Though not yet finalized, this study finds that the uncertainty within precipitation datasets is influenced by regional topography, season, climate and precipitation rate. Ongoing work aims to further evaluate precipitation datasets based on the relative influence of these phenomena, so that we can identify the optimum datasets for input to statewide water budget analysis.

  10. Method to Minimize the Low-Frequency Neutral-Point Voltage Oscillations With Time-Offset Injection for Neutral-Point-Clamped Inverters

    DEFF Research Database (Denmark)

    Choi, Ui-Min; Blaabjerg, Frede; Lee, Kyo-Beum

    2015-01-01

    This paper proposes a method to reduce the low-frequency neutral-point voltage oscillations. The neutral-point voltage oscillations are considerably reduced by adding a time offset to the three-phase turn-on times. The proper time offset is simply calculated considering the phase currents and dwell time of small- and medium-voltage vectors. However, if the power factor is lower, there is a limitation to eliminating neutral-point oscillations. In this case, the proposed method can be improved by changing the switching sequence properly. Additionally, a method for neutral-point voltage balancing…

  11. The GTZAN dataset

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2013-01-01

    The GTZAN dataset appears in at least 100 published works, and is the most-used public dataset for evaluation in machine listening research for music genre recognition (MGR). Our recent work, however, shows GTZAN has several faults (repetitions, mislabelings, and distortions), which challenge … of GTZAN, and provide a catalog of its faults. We review how GTZAN has been used in MGR research, and find few indications that its faults have been known and considered. Finally, we rigorously study the effects of its faults on evaluating five different MGR systems. The lesson is not to banish GTZAN…

  12. Generalized Toeplitz operators and cyclic vectors

    International Nuclear Information System (INIS)

    Gassier, G.; Mahzouli, H.; Zerouali, E.H.

    2003-04-01

    We give in this paper some asymptotic von Neumann inequalities for power-bounded operators in the class C_ρ ∩ C_1, and some special von Neumann inequalities associated with non-zero elements of the point spectrum, when it is non-void, of generalized Toeplitz operators. Introducing a perturbed kernel, we consider classes C_R which extend the classical classes C_ρ. We give results about absolute continuity with respect to the Haar measure for operators in the class C_R ∩ C_1. This allows us to give new results on cyclic vectors for such operators and provides invariant subspaces for their powers. Relationships between cyclic vectors for T and T* involving generalized Toeplitz operators are given, and the commutativity of {T}', the commutant of T, is discussed. (author)

  13. The standardised freight container: vector of vectors and vector-borne diseases.

    Science.gov (United States)

    Reiter, P

    2010-04-01

    The standardised freight container was one of the most important innovations of the 20th Century. Containerised cargoes travel from their point of origin to their destination by ship, road and rail as part of a single journey, without unpacking. This simple concept is the key element in cheap, rapid transport by land and sea, and has led to a phenomenal growth in global trade. Likewise, containerised air cargo has led to a remarkable increase in the inter-continental transportation of goods, particularly perishable items such as flowers, fresh vegetables and live animals. In both cases, containerisation offers great advantages in speed and security, but reduces the opportunity to inspect cargoes in transit. An inevitable consequence is the globalisation of undesirable species of animals, plants and pathogens. Moreover, cheap passenger flights offer worldwide travel for viral and parasitic pathogens in infected humans. The continued emergence of exotic pests, vectors and pathogens throughout the world is an unavoidable consequence of these advances in transportation technology.

  14. On sample size and different interpretations of snow stability datasets

    Science.gov (United States)

    Schirmer, M.; Mitterer, C.; Schweizer, J.

    2009-04-01

    …aspect distributions to the large dataset. We used 100 different subsets for each sample size. Statistical variations obtained in the complete dataset were also tested on the smaller subsets using the Mann-Whitney or the Kruskal-Wallis test. For each subset size, the number of subsets in which the significance level was reached was counted. For these tests no nominal data scale was assumed. (iii) For the same subsets described above, the distribution of the aspect median was determined, and a count was made of how often this distribution was substantially different from the distribution obtained with the complete dataset. Since two valid stability interpretations were available (an objective and a subjective interpretation, as described above), the effect of the arbitrary choice of interpretation on spatial variability results was tested. In over one third of the cases the two interpretations came to different results. The effect of these differences was studied by a method similar to that described in (iii): the distribution of the aspect median was determined for subsets of the complete dataset using both interpretations, and compared against each other as well as to the results of the complete dataset. For the complete dataset the two interpretations showed mainly identical results. Therefore the subset size was determined from the point at which the results of the two interpretations converged. A universal result for the optimal subset size cannot be presented, since results differed between the different situations contained in the dataset. The optimal subset size is thus dependent on the stability variation in a given situation, which is unknown initially. There are indications that for some situations even the complete dataset might not be large enough. At a subset size of approximately 25, the significant differences between aspect groups (as determined using the whole dataset) were only obtained in one out of five situations. In some situations, up to 20% of the subsets showed a…

  15. Indian Country Leaking Underground Storage Tank (LUST) Points, Region 9, 2016, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — This GIS dataset contains point features that represent Leaking Underground Storage Tanks in US EPA Region 9 Indian Country. This dataset contains facility name and...

  16. Adeno-associated virus gene therapy vector scAAVIGF-I for transduction of equine articular chondrocytes and RNA-seq analysis.

    Science.gov (United States)

    Hemphill, D D; McIlwraith, C W; Slayden, R A; Samulski, R J; Goodrich, L R

    2016-05-01

    IGF-I is one of several anabolic factors being investigated for the treatment of osteoarthritis (OA). Due to its short biological half-life, extended administration is required for more robust cartilage healing. Here we create a self-complementary adeno-associated virus (AAV) gene therapy vector utilizing the transgene for IGF-I. Various biochemical assays were performed to investigate the cellular response to scAAVIGF-I treatment versus an scAAVGFP positive transduction control and a negative-for-transduction control culture. RNA-sequencing analysis was also performed to establish a differential regulation profile of scAAVIGF-I transduced chondrocytes. Biochemical analyses indicated an average media IGF-I concentration of 608 ng/ml in the scAAVIGF-I transduced chondrocytes. This increase in IGF-I led to increased expression of collagen type II and aggrecan and increased protein concentrations of cellular collagen type II and media glycosaminoglycan versus both controls. RNA-seq revealed a global regulatory pattern consisting of 113 differentially regulated GO categories, including those for chondrocyte and cartilage development and regulation of apoptosis. This research substantiates that the scAAVIGF-I gene therapy vector increased production of IGF-I to clinically relevant levels, with a biological response by chondrocytes conducive to increased cartilage healing. The RNA-seq further established a set of differentially expressed genes and gene ontologies induced by the scAAVIGF-I vector while controlling for AAV infection. This dataset provides a static representation of the cellular transcriptome that, while consisting of only one time point, will allow further gene expression analyses comparing additional cartilage healing therapeutics or a transient cellular response. Copyright © 2015. Published by Elsevier Ltd.

  17. Point Information Gain and Multidimensional Data Analysis

    Directory of Open Access Journals (Sweden)

    Renata Rychtáriková

    2016-10-01

    Full Text Available We generalize the point information gain (PIG) and derived quantities, i.e., point information gain entropy (PIE) and point information gain entropy density (PIED), for the case of the Rényi entropy and simulate the behavior of PIG for typical distributions. We also use these methods for the analysis of multidimensional datasets. We demonstrate the main properties of PIE/PIED spectra for real data with the examples of several images and discuss further possible utilizations in other fields of data processing.
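
    One way to make the quantity concrete: compute the Rényi entropy of a histogram with and without one occurrence of a bin and take the difference. A rough sketch under that assumption (the paper's exact sign convention and normalization may differ):

    ```python
    import numpy as np

    def renyi_entropy(counts, alpha=2.0):
        """Rényi entropy of a histogram; alpha -> 1 recovers Shannon."""
        p = counts / counts.sum()
        p = p[p > 0]
        if alpha == 1.0:
            return -(p * np.log(p)).sum()
        return np.log((p ** alpha).sum()) / (1.0 - alpha)

    def point_information_gain(counts, k, alpha=2.0):
        """Entropy change caused by removing one occurrence of bin k."""
        reduced = counts.astype(float).copy()
        reduced[k] -= 1
        return renyi_entropy(reduced, alpha) - renyi_entropy(counts, alpha)
    ```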

  18. Insect vectors of Leishmania: distribution, physiology and their control.

    Science.gov (United States)

    Sharma, Umakant; Singh, Sarman

    2008-12-01

    Leishmaniasis is a deadly vector-borne disease that causes significant morbidity and mortality in Africa, Asia, Latin America and the Mediterranean region. The causative agent of leishmaniasis is transmitted from man to man by a tiny insect called the sandfly. Approximately 600 species of sandflies are known, but only 10% of these act as disease vectors, and only 30 of those species are important from the public health point of view. The fauna of the Indian sub-zone is represented by 46 species; of these, 11 belong to Phlebotomine species and 35 to Sergentomyia species. Phlebotomus argentipes is the proven vector of kala-azar or visceral leishmaniasis in India. This review gives an insight into the insect vectors of human leishmaniasis, their geographical distribution, recent taxonomic classification, habitat, and different control measures, including indoor residual spraying (IRS), insecticide-treated bednets (ITNs), environmental management, biological control, and emerging resistance to DDT. The role of satellite remote sensing in the early prediction of the disease, by identifying sandfly-genic conditions, cannot be overlooked. The article also underlines the importance of synthetic pheromones, which could be used in the near future for the control of these vectors.

  19. Automatic registration method for multisensor datasets adopted for dimensional measurements on cutting tools

    International Nuclear Information System (INIS)

    Shaw, L; Mehari, F; Weckenmann, A; Ettl, S; Häusler, G

    2013-01-01

    Multisensor systems with optical 3D sensors are frequently employed to capture complete surface information by measuring workpieces from different views. During coarse and fine registration the resulting datasets are then transformed into one common coordinate system. Automatic fine registration methods are well established in dimensional metrology, whereas there is a deficit in automatic coarse registration methods. The advantage of a fully automatic registration procedure is twofold: it enables fast, contact-free alignment and flexible application to datasets from any kind of optical 3D sensor. In this paper, an algorithm adapted for robust automatic coarse registration is presented. The method was originally developed for the field of object reconstruction and localization. It is based on a segmentation of planes in the datasets to calculate the transformation parameters. The rotation is defined by the normals of three corresponding segmented planes of two overlapping datasets, while the translation is calculated via the intersection point of the segmented planes. First results have shown that the translation is strongly shape dependent: 3D data of objects with non-orthogonal planar flanks cannot be registered with the current method. In the novel supplement to the algorithm, the translation is additionally calculated via the distance between centroids of corresponding segmented planes, which results in more than one option for the transformation. A newly introduced measure considering the distance between the datasets after coarse registration selects the best possible transformation. Results of the robust automatic registration method are presented on the example of datasets taken from a cutting tool with a fringe-projection system and a focus-variation system. The successful application in dimensional metrology is proven with evaluations of shape parameters based on the registered datasets of a calibrated workpiece. (paper)
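
    The rotation-from-normals step lends itself to a compact linear-algebra sketch. The following illustrates the principle (a Kabsch/SVD fit on corresponding unit normals, plus a centroid-based translation as in the supplement); it is not the authors' implementation:

    ```python
    import numpy as np

    def rotation_from_normals(src_normals, dst_normals):
        """Least-squares rotation R with R @ src ~ dst; inputs are (3, 3)
        arrays holding one unit normal per row."""
        H = np.asarray(src_normals).T @ np.asarray(dst_normals)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        return Vt.T @ D @ U.T  # sign correction keeps det(R) = +1

    def translation_from_centroids(src_centroids, dst_centroids, R):
        """Translation mapping the rotated source plane centroids onto the
        corresponding target plane centroids (on average)."""
        src = np.asarray(src_centroids).mean(axis=0)
        dst = np.asarray(dst_centroids).mean(axis=0)
        return dst - R @ src
    ```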

  20. Current status of Plasmodium knowlesi vectors: a public health concern?

    Science.gov (United States)

    Vythilingam, I; Wong, M L; Wan-Yussof, W S

    2018-01-01

    Plasmodium knowlesi, a simian malaria parasite, is currently affecting humans in Southeast Asia. Malaysia has reported the largest number of cases, and P. knowlesi is the predominant species occurring in humans. The vectors of P. knowlesi belong to the Leucosphyrus group of Anopheles mosquitoes, which are generally described as forest-dwelling mosquitoes. With deforestation and changes in land use, some species have become predominant in farms and villages. However, knowledge of the distribution of these vectors in the country is sparse. From a public health point of view it is important to know the vectors, so that risk factors for knowlesi malaria can be identified and control measures instituted where possible. Here, we review what is known about the knowlesi malaria vectors and identify the gaps in knowledge, so that future studies can concentrate on this paucity of data in order to address this zoonotic problem.

  1. Abandoned Uranium Mine (AUM) Points, Navajo Nation, 2016, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — This GIS dataset contains point features of all Abandoned Uranium Mines (AUMs) on or within one mile of the Navajo Nation. Points are centroids developed from the...

  2. North Slope, Alaska ESI: FACILITY (Facility Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains data for oil field facilities for the North Slope of Alaska. Vector points in this data set represent oil field facility locations. This data...

  3. Perturbative determination of mass-dependent renormalization and improvement coefficients for the heavy-light vector and axial-vector currents with relativistic heavy and domain-wall light quarks

    International Nuclear Information System (INIS)

    Yamada, Norikazu; Aoki, Sinya; Kuramashi, Yoshinobu

    2005-01-01

    We determine the mass-dependent renormalization as well as improvement coefficients for the heavy-light vector and axial-vector currents consisting of the relativistic heavy and the domain-wall light quarks through the standard matching procedure. The calculation is carried out perturbatively at the one-loop level to remove the systematic error of O(α_s (a m_Q)^n a p) as well as O(α_s (a m_Q)^n) (n ≥ 0), where p is a typical momentum scale in the heavy-light system. We point out that the renormalization and improvement coefficients of the heavy-light vector current agree with those of the axial-vector current, thanks to the exact chiral symmetry for the light quark. The results obtained with three different gauge actions, plaquette, Iwasaki and DBW2, are presented as a function of the heavy quark mass and the domain-wall height.

  4. Dynamic analysis of suspension cable based on vector form intrinsic finite element method

    Science.gov (United States)

    Qin, Jian; Qiao, Liang; Wan, Jiancheng; Jiang, Ming; Xia, Yongjun

    2017-10-01

    A vector finite element method is presented for the dynamic analysis of cable structures, based on the vector form intrinsic finite element (VFIFE) and the mechanical properties of suspension cables. Firstly, the suspension cable is discretized into elements by space points, and the mass and external forces of the suspension cable are lumped at these space points. The structural form of the cable is described by the space points at different times. The equations of motion for the space points are established according to Newton's second law. Then, the element internal forces between the space points are derived from the flexible truss structure. Finally, the motion equations of the space points are solved by the central difference method with a reasonable time integration step. The tangential tension of the bearing rope in a test ropeway with moving concentrated loads is calculated and compared with the experimental data. The results show that the tangential tension of the suspension cable with moving loads is consistent with the experimental data. The method has high computational precision and meets the requirements of engineering application.
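
    The explicit central-difference update at the core of such a scheme is compact. A minimal sketch, assuming lumped point masses and a user-supplied internal-force callback (both hypothetical):

    ```python
    import numpy as np

    def central_difference_step(x_prev, x_curr, mass, force, f_ext, dt):
        """One explicit central-difference step for lumped point masses:
        x_{n+1} = 2 x_n - x_{n-1} + dt^2 * a_n.
        x_prev, x_curr: (n_points, 3) positions at t - dt and t
        mass: (n_points,) lumped masses
        force: callable mapping positions to (n_points, 3) internal forces
        f_ext: (n_points, 3) external loads"""
        a = (force(x_curr) + f_ext) / mass[:, None]  # Newton's second law
        return 2.0 * x_curr - x_prev + dt ** 2 * a
    ```

    The conditional stability of this explicit update is what makes the "reasonable time integration step" mentioned above necessary: the step must stay below the critical value set by the highest element frequency.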

  5. Genetic manipulation of endosymbionts to control vector and vector borne diseases

    Directory of Open Access Journals (Sweden)

    Jay Prakash Gupta

    Full Text Available Vector borne diseases (VBD) are on the rise because of the failure of existing methods of controlling vectors and vector borne diseases, and because of climate change. The steep rise in VBDs is due to several factors, such as the selection of insecticide-resistant vector populations, drug-resistant parasite populations, and the lack of effective vaccines against the VBDs. Environmental pollution, public health hazards and insecticide-resistant vector populations indicate that insecticides are no longer a sustainable method of controlling vectors and vector-borne diseases. Amongst the various alternative control strategies, a symbiont-based approach utilizing endosymbionts of arthropod vectors could be explored to control vectors and vector borne diseases. The endosymbiont population of arthropod vectors could be exploited in different ways, viz., as a chemotherapeutic target or a vaccine target for the control of vectors. Expression of molecules with antiparasitic activity by genetically transformed symbiotic bacteria of disease-transmitting arthropods may serve as a powerful approach to control certain arthropod-borne diseases. Genetic transformation of the symbiotic bacteria of an arthropod vector to alter the vector's ability to transmit pathogens is an alternative means of blocking the transmission of VBDs. In the Indian scenario, where dengue, chikungunya, malaria and filariosis are prevalent, a paratransgenesis-based approach could be used effectively. [Vet World 2012; 5(9): 571-576]

  6. Almost purity for overconvergent Witt vectors

    DEFF Research Database (Denmark)

    Davis, Christopher James; Kedlaya, Kiran

    2015-01-01

    Here, we use almost purity to lift the finite étale extension of R[p^{-1}] to a finite étale extension of rings of overconvergent Witt vectors. The point is that no hypothesis of p-adic completeness is needed; this result thus points towards potential global analogues of p-adic Hodge theory. As an illustration, we construct (φ,Γ)-modules associated with Artin motives over Q. The (φ,Γ)-modules we construct are defined over a base ring which seems well-suited to generalization to a more global setting; we plan to pursue such generalizations in later work.

  7. The local structure of a Liouville vector field

    International Nuclear Information System (INIS)

    Ciriza, E.

    1990-05-01

    In this work we investigate the local structure of a Liouville vector field ξ of a Kaehler manifold (P,Ω) which vanishes on an isotropic submanifold Q of P. Some of the eigenvalues of its linear part at the singular points are zero and the remaining ones are in resonance. We show that there is a C^1-smooth linearizing conjugation between the Liouville vector field ξ and its linear part. To do this we construct Darboux coordinates adapted to the unstable foliation which is provided by the Centre Manifold Theorem. We then apply recent linearization results due to G. Sell. (author). 11 refs

  8. Emerging vector borne diseases – incidence through vectors

    Directory of Open Access Journals (Sweden)

    Sara eSavic

    2014-12-01

    Full Text Available Vector borne diseases used to be a major public health concern only in tropical and subtropical areas, but today they are an emerging threat for continental and developed countries as well. Nowadays, even intercontinental countries struggle with emerging diseases which have found their way to appear through vectors. Vector borne zoonotic diseases occur when vectors, animal hosts, climate conditions, pathogens and a susceptible human population exist at the same time, in the same place. Global climate change is predicted to lead to an increase in vector borne infectious diseases and disease outbreaks. It could affect the range and population of pathogens, hosts and vectors, the transmission season, etc. Reliable surveillance for diseases that are most likely to emerge is required. Canine vector borne diseases represent a complex group of diseases including anaplasmosis, babesiosis, bartonellosis, borreliosis, dirofilariosis, erlichiosis and leishmaniosis. Some of these diseases cause serious clinical symptoms in dogs and some of them have a zoonotic potential with an effect on public health. Veterinarians, in coordination with medical doctors, are expected to play a fundamental role, first in the prevention and then in the treatment of vector borne diseases in dogs. The One Health concept has to be integrated into the struggle against emerging diseases. During a four-year period, from 2009-2013, a total number of 551 dog samples were analysed for vector borne diseases (borreliosis, babesiosis, erlichiosis, anaplasmosis, dirofilariosis and leishmaniasis) in routine laboratory work. The analyses were done by serological tests – ELISA for borreliosis, dirofilariosis and leishmaniasis, the modified Knott test for dirofilariosis, and blood smears for babesiosis, erlichiosis and anaplasmosis. This number of samples represented 75% of the total number of samples that were sent for analysis for different diseases in dogs. Annually, on average more than half of the samples

  9. Address Points, The Address Point layer contains an address point for almost every structure over 200 square feet and for some vacant properties. Attributes include addresses, sub-units, address use, LAT/LONG, 10-digit SDAT taxpins, political areas and more., Published in 2013, 1:2400 (1in=200ft) scale, Baltimore County Government.

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — Address Points dataset current as of 2013. The Address Point layer contains an address point for almost every structure over 200 square feet and for some vacant...

  10. Estimation of Missed Statin Prescription Use in an Administrative Claims Dataset.

    Science.gov (United States)

    Wade, Rolin L; Patel, Jeetvan G; Hill, Jerrold W; De, Ajita P; Harrison, David J

    2017-09-01

    Nonadherence to statin medications is associated with increased risk of cardiovascular disease and poses a challenge to lipid management in patients who are at risk for atherosclerotic cardiovascular disease. Numerous studies have examined statin adherence based on administrative claims data; however, these data may underestimate statin use in patients who participate in generic drug discount programs or who have alternative coverage. To estimate the proportion of patients with missing statin claims in a claims database and determine how missing claims affect commonly used utilization metrics, this retrospective cohort study used pharmacy data from the PharMetrics Plus (P+) claims dataset linked to the IMS longitudinal pharmacy point-of-sale prescription database (LRx) from January 1, 2012, through December 31, 2014. Eligible patients were represented in the P+ and LRx datasets, had ≥ 1 claim for a statin (index claim) in either database, and had ≥ 24 months of continuous enrollment in P+. Patients were linked between P+ and LRx using a deterministic method. Duplicate claims between LRx and P+ were removed to produce a new dataset composed of P+ claims augmented with LRx claims. Statin use was then compared between P+ and the augmented P+ dataset. The utilization metrics evaluated included the percentage of patients with ≥ 1 missing statin claim over 12 months in P+; the number of patients misclassified as new users in P+; the number of patients misclassified as nonstatin users in P+; the change in 12-month medication possession ratio (MPR) and proportion of days covered (PDC) in P+; the comparison between P+ and LRx of classifications of statin treatment patterns (statin intensity and patients with treatment modifications); and the payment status for missing statin claims. Data from 965,785 patients with statin claims in P+ were analyzed (mean age 56.6 years; 57% male). In P+, 20.1% had ≥ 1 missing statin claim post-index; 13.7% were misclassified as
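
    The adherence metrics being adjusted here are simple to state. An illustrative PDC computation over a fixed window (claim tuples and field names are hypothetical):

    ```python
    from datetime import date, timedelta

    def pdc(claims, start, end):
        """Proportion of days covered over [start, end]: the fraction of
        days on which any dispensed supply is active."""
        covered = set()
        for fill_date, days_supply in claims:
            for k in range(days_supply):
                d = fill_date + timedelta(days=k)
                if start <= d <= end:
                    covered.add(d)
        return len(covered) / ((end - start).days + 1)

    # Two 90-day fills in a 365-day window -> PDC of about 0.49; missing
    # claims (e.g. discount-program fills) bias such metrics downward.
    print(pdc([(date(2014, 1, 1), 90), (date(2014, 6, 1), 90)],
              date(2014, 1, 1), date(2014, 12, 31)))
    ```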

  11. Prediction and analysis of beta-turns in proteins by support vector machine.

    Science.gov (United States)

    Pham, Tho Hoan; Satou, Kenji; Ho, Tu Bao

    2003-01-01

    Tight turns have long been recognized as one of the three important features of proteins, after the alpha-helix and beta-sheet. Tight turns play an important role in globular proteins from both the structural and functional points of view. More than 90% of tight turns are beta-turns. Analysis and prediction of beta-turns in particular, and tight turns in general, are very useful for the design of new molecules such as drugs, pesticides, and antigens. In this paper, we introduce a support vector machine (SVM) approach to the prediction and analysis of beta-turns. We have investigated two aspects of applying SVMs to the prediction and analysis of beta-turns. First, we developed a new SVM method, called BTSVM, which predicts the beta-turns of a protein from its sequence. The prediction results on a dataset of 426 non-homologous protein chains, using a sevenfold cross-validation technique, showed that our method is superior to previous methods. Second, we analyzed how amino acid positions support (or prevent) the formation of beta-turns based on the "multivariable" classification model of a linear SVM. This model is more general than those of previous statistical methods. Our analysis results are more comprehensive and easier to use than previously published analysis results.
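
    A hedged sketch of this general method class (a linear SVM over one-hot encoded sequence windows, predicting whether the central residues form a beta-turn); it illustrates the approach, not the BTSVM code itself:

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC

    AMINO = "ACDEFGHIKLMNPQRSTVWY"

    def encode_window(window):
        """One-hot encode a fixed-length amino acid window."""
        x = np.zeros((len(window), len(AMINO)))
        for i, aa in enumerate(window):
            x[i, AMINO.index(aa)] = 1.0
        return x.ravel()

    # X: windows of, say, 4 residues; y: 1 if the window forms a beta-turn
    # clf = LinearSVC().fit(np.array([encode_window(w) for w in X]), y)
    # The signs of clf.coef_ then indicate which residues at which
    # positions support or prevent turn formation, as analyzed above.
    ```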

  12. Vector-vector production in photon-photon interactions

    International Nuclear Information System (INIS)

    Ronan, M.T.

    1988-01-01

    Measurements of exclusive untagged ρ0ρ0, ρφ, K*K̄*, and ρω production and tagged ρ0ρ0 production in photon-photon interactions by the TPC/Two-Gamma experiment are reviewed. Comparisons to the results of other experiments and to models of vector-vector production are made. Fits to the data following a four-quark-model prescription for vector meson pair production are also presented. 10 refs., 9 figs

  13. Quantifying uncertainty in observational rainfall datasets

    Science.gov (United States)

    Lennard, Chris; Dosio, Alessandro; Nikulin, Grigory; Pinto, Izidine; Seid, Hussen

    2015-04-01

    The CO-ordinated Regional Downscaling Experiment (CORDEX) has to date seen the publication of at least ten journal papers that examine the African domain during 2012 and 2013. Five of these papers consider Africa generally (Nikulin et al. 2012, Kim et al. 2013, Hernandes-Dias et al. 2013, Laprise et al. 2013, Panitz et al. 2013) and five have regional foci: Tramblay et al. (2013) on Northern Africa, Mariotti et al. (2014) and Gbobaniyi et al. (2013) on West Africa, Endris et al. (2013) on East Africa and Kalagnoumou et al. (2013) on southern Africa. There are also a further three papers that the authors know about under review. These papers all use observed rainfall and/or temperature data to evaluate/validate the regional model output and often proceed to assess projected changes in these variables due to climate change in the context of these observations. The most popular reference rainfall data used are the CRU, GPCP, GPCC, TRMM and UDEL datasets. However, as Kalagnoumou et al. (2013) point out, there are many other rainfall datasets available for consideration, for example, CMORPH, FEWS, TAMSAT & RIANNAA, TAMORA and the WATCH & WATCH-DEI data. They, with others (Nikulin et al. 2012, Sylla et al. 2012), show that the observed datasets can have a very wide spread at a particular space-time coordinate. As more ground-, space- and reanalysis-based rainfall products become available, all of which use different methods to produce precipitation data, the selection of reference data is becoming an important factor in model evaluation. A number of factors can contribute to uncertainty in terms of the reliability and validity of the datasets, such as radiance conversion algorithms, the quantity and quality of available station data, interpolation techniques and the blending methods used to combine satellite and gauge based products. However, to date no comprehensive study has been performed to evaluate the uncertainty in these observational datasets. We assess 18 gridded

  14. The significance of vector magnetic field measurements

    Science.gov (United States)

    Hagyard, M. J.

    1990-01-01

    Observations of four flaring solar active regions, obtained during 1980-1986 with the NASA Marshall vector magnetograph (Hagyard et al., 1982 and 1985), are presented graphically and characterized in detail, with reference to nearly simultaneous Big Bear Solar Observatory and USAF ASW H-alpha images. It is shown that the flares occurred where local photospheric magnetic fields differed most from the potential field, with initial brightening on either side of a magnetic-neutral line near the point of maximum angular shear (rather than that of maximum magnetic-field strength, typically 1 kG or greater). Particular emphasis is placed on the fact that these significant nonpotential features were detected only by measuring all three components of the vector magnetic field.

  15. Editorial: Datasets for Learning Analytics

    NARCIS (Netherlands)

    Dietze, Stefan; George, Siemens; Davide, Taibi; Drachsler, Hendrik

    2018-01-01

    The European LinkedUp and LACE (Learning Analytics Community Exchange) projects have been responsible for setting up a series of data challenges at the LAK conferences 2013 and 2014 around the LAK dataset. The LAK dataset consists of a rich collection of full text publications in the domain of

  16. Rare Hadronic B Decays to Vector, Axial-Vector and Tensors

    International Nuclear Information System (INIS)

    Gao, Y.Y.

    2011-01-01

    The authors review BABAR measurements of several rare B decays, including vector-axial-vector decays B± → φK1(1270)±, B± → φK1(1400)± and B± → b1∓ρ±, vector-vector decays B± → φK*(1410)±, B0 → K*0K̄*0, B0 → K*0K*0 and B0 → K*+K*−, vector-tensor decays B± → φK*2(1430)± and φK2(1770)±/φK2(1820)±, and vector-scalar decays B± → φK*0(1430)±. Understanding the observed polarization pattern requires amplitude contributions from an uncertain source.

  17. Vector disparity sensor with vergence control for active vision systems.

    Science.gov (United States)

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P; Ros, Eduardo

    2012-01-01

    This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. The control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point, and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA resolution images with very good accuracy, as shown using benchmark sequences with known ground-truth. The performances in terms of frame rate, resource utilization, and accuracy of the presented approaches are discussed. On the basis of these results, our study indicates that the gradient-based approach leads to the best trade-off choice for integration with the active vision system.

  18. The Geometry of Finite Equilibrium Datasets

    DEFF Research Database (Denmark)

    Balasko, Yves; Tvede, Mich

    We investigate the geometry of finite datasets defined by equilibrium prices, income distributions, and total resources. We show that the equilibrium condition imposes no restrictions if total resources are collinear, a property that is robust to small perturbations. We also show that the set of equilibrium datasets is path-connected when the equilibrium condition does impose restrictions on datasets, as for example when total resources are widely non-collinear.

  19. Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm

    Science.gov (United States)

    Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian

    2018-03-01

    In view of the fact that current point cloud registration software has high hardware requirements, a heavy workload and multiple interactive definitions, and that the source code of software with better processing results is not open, a two-step registration method based on a normal vector distribution feature and a coarse-feature-based iterative closest point (ICP) algorithm is proposed in this paper. The method combines the fast point feature histogram (FPFH) algorithm, defines the adjacency region of the point cloud and the calculation model of the distribution of normal vectors, sets up a local coordinate system for each key point, and obtains the transformation matrix to finish rough registration; the rough registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious time and precision advantages for large amounts of point cloud data.
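
    The second (fine) step can be sketched compactly. One ICP iteration in plain NumPy/SciPy, assuming the FPFH-based rough registration has already been applied to the source cloud (a didactic sketch, not the paper's code):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def icp_step(src, dst):
        """One ICP iteration: match nearest neighbours, then solve the
        rigid transform via SVD. src: (n, 3), dst: (m, 3)."""
        idx = cKDTree(dst).query(src)[1]          # nearest target points
        p = src - src.mean(axis=0)
        q = dst[idx] - dst[idx].mean(axis=0)
        U, _, Vt = np.linalg.svd(p.T @ q)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                        # proper rotation
        t = dst[idx].mean(axis=0) - R @ src.mean(axis=0)
        return (R @ src.T).T + t, R, t            # updated source cloud
    ```

    Iterating this step until the mean residual stops decreasing gives the accurate registration; the coarse stage matters because ICP only converges to the nearest local minimum.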

  20. Vectorization of DOT3.5 code

    International Nuclear Information System (INIS)

    Nonomiya, Iwao; Ishiguro, Misako; Tsutsui, Tsuneo

    1990-07-01

    In this report, we describe the vectorization of the two-dimensional Sn-method radiation transport code DOT3.5. The vectorized codes include not only the NEA original version developed at ORNL but also the versions improved by JAERI: the DOT3.5 FNS version for fusion neutronics analyses, the DOT3.5 FER version for fusion reactor design, and the ESPRIT module of the RADHEAT-V4 code system for radiation shielding and radiation transport analyses. In DOT3.5, input/output processing time amounts to a great part of the elapsed time when a large number of energy groups and/or a large number of spatial mesh points are used in the calculated problem. Therefore, an improvement has been made to speed up input/output processing in the DOT3.5 FNS version and the DOT-DD (Double Differential cross section) code. The total speedup ratio of the vectorized version over the original scalar one is 1.7∼1.9 for the DOT3.5 NEA version, 2.2∼2.3 for the DOT3.5 FNS version, 1.7 for the DOT3.5 FER version, and 3.1∼4.4 for RADHEAT-V4, respectively. The elapsed times for the improved DOT3.5 FNS version and DOT-DD are reduced to 50∼65% of that of the original version by the input/output speedup. In this report, we describe a summary of the codes, the techniques used for the vectorization and input/output speedup, verification of the computed results, and the speedup effect. (author)

  1. Vector analysis

    CERN Document Server

    Brand, Louis

    2006-01-01

    The use of vectors not only simplifies treatments of differential geometry, mechanics, hydrodynamics, and electrodynamics, but also makes mathematical and physical concepts more tangible and easy to grasp. This text for undergraduates was designed as a short introductory course to give students the tools of vector algebra and calculus, as well as a brief glimpse into these subjects' manifold applications. The applications are developed to the extent that the uses of the potential function, both scalar and vector, are fully illustrated. Moreover, the basic postulates of vector analysis are brou

  2. Packaging of HCV-RNA into lentiviral vector

    Energy Technology Data Exchange (ETDEWEB)

    Caval, Vincent [INSERM U966, Universite Francois Rabelais de Tours, Faculte de Medecine, 10 Bd. Tonnelle, 37000 Tours (France); Piver, Eric [INSERM U966, Universite Francois Rabelais de Tours, Faculte de Medecine, 10 Bd. Tonnelle, 37000 Tours (France); Service de Biochimie et Biologie Moleculaire, CHRU de Tours (France); Ivanyi-Nagy, Roland; Darlix, Jean-Luc [LaboRetro, ENS-Lyon INSERM, U758, 46 Allee d' Italie, 69364 Lyon (France); Pages, Jean-Christophe, E-mail: jean-christophe.pages@univ-tours.fr [INSERM U966, Universite Francois Rabelais de Tours, Faculte de Medecine, 10 Bd. Tonnelle, 37000 Tours (France); Service de Biochimie et Biologie Moleculaire, CHRU de Tours (France)

    2011-11-04

    Highlights: ► Description of HCV-RNA Core-D1 interactions. ► In vivo evaluation of the packaging of the HCV genome. ► Determination of the role of the three basic sub-domains of D1. ► Heterologous system involving HIV-1 vector particles to mobilise the HCV genome. ► Full-length mobilisation of the HCV genome and HCV-receptor-independent entry. -- Abstract: The advent of infectious molecular clones of Hepatitis C virus (HCV) has unlocked the understanding of the HCV life cycle. However, packaging of the genomic RNA, which is crucial to generate infectious viral particles, remains poorly understood. Molecular interactions of domain 1 (D1) of the HCV Core protein and HCV RNA have been described in vitro. Since the compaction of genetic information within the HCV genome has hampered conventional mutational approaches to studying packaging in vivo, we developed a novel heterologous system to evaluate the interactions between HCV RNA and Core D1. For this, we took advantage of the recruitment of Vpr fusion proteins into HIV-1 particles. By fusing HCV Core D1 to Vpr we were able to package and transfer an HCV subgenomic replicon into an HIV-1-based lentiviral vector. We next examined how deletion mutants of the basic sub-domains of Core D1 influenced HCV RNA recruitment. The results emphasized the crucial role of the first and third basic regions of D1 in packaging. Interestingly, the system described here allowed us to mobilise the full-length JFH1 genome in CD81-defective cells, which are normally refractory to HCV infection. This finding paves the way to an evaluation of the replication capability of HCV in various cell types.

  3. Vectors and Rotations in 3-Dimensions: Vector Algebra for the C++ Programmer

    Science.gov (United States)

    2016-12-01

    Approved for public release; distribution is unlimited. This report describes 2 C++ classes: a Vector class for performing vector algebra in 3-dimensional space... (ARL-TR-7894, US Army Research Laboratory, December 2016, by Richard Saucier)

  4. American Samoa ESI: T_MAMPT (Terrestrial Mammal Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for bats in American Samoa. Vector points in this data set represent bat roosts and caves. Species-specific...

  5. Registration of Urban Aerial Image and LiDAR Based on Line Vectors

    Directory of Open Access Journals (Sweden)

    Qinghong Sheng

    2017-09-01

    Full Text Available In the traditional registration of a single aerial image with airborne light detection and ranging (LiDAR) data using linear features, which regards line direction as a control or linear features as constraints in the solution, the lack of a constraint on linear position leads to error propagation in the adjustment model. To solve this problem, this paper presents a line-vector-based registration mode (LVR) in which image rays and LiDAR lines are expressed by a line vector that integrates the line direction and the line position. A registration equation of the line vector is set up using coplanar imaging rays and corresponding control lines. Three types of datasets, consisting of synthetic data, the International Society for Photogrammetry and Remote Sensing (ISPRS) test project, and real aerial data, are used. A group of progressive experiments is undertaken to evaluate the robustness of the LVR. Experimental results demonstrate that integrating the line direction and the line position contributes a great deal to the theoretical and real accuracies of the unknowns, as well as to the stability of the adjustment model. This paper provides a new suggestion that, for a single image and LiDAR data, registration in urban areas can be accomplished by accommodating rich line features.

  6. Retrieval of Brain Tumors by Adaptive Spatial Pooling and Fisher Vector Representation.

    Science.gov (United States)

    Cheng, Jun; Yang, Wei; Huang, Meiyan; Huang, Wei; Jiang, Jun; Zhou, Yujia; Yang, Ru; Zhao, Jie; Feng, Yanqiu; Feng, Qianjin; Chen, Wufan

    2016-01-01

    Content-based image retrieval (CBIR) techniques have currently gained increasing popularity in the medical field because they can use numerous and valuable archived images to support clinical decisions. In this paper, we concentrate on developing a CBIR system for retrieving brain tumors in T1-weighted contrast-enhanced MRI images. Specifically, when the user roughly outlines the tumor region of a query image, brain tumor images in the database of the same pathological type are expected to be returned. We propose a novel feature extraction framework to improve the retrieval performance. The proposed framework consists of three steps. First, we augment the tumor region and use the augmented tumor region as the region of interest to incorporate informative contextual information. Second, the augmented tumor region is split into subregions by an adaptive spatial division method based on intensity orders; within each subregion, we extract raw image patches as local features. Third, we apply the Fisher kernel framework to aggregate the local features of each subregion into a respective single vector representation and concatenate these per-subregion vector representations to obtain an image-level signature. After feature extraction, a closed-form metric learning algorithm is applied to measure the similarity between the query image and database images. Extensive experiments are conducted on a large dataset of 3604 images with three types of brain tumors, namely, meningiomas, gliomas, and pituitary tumors. The mean average precision can reach 94.68%. Experimental results demonstrate the power of the proposed algorithm against some related state-of-the-art methods on the same dataset.

  7. Support vector machine classification and validation of cancer tissue samples using microarray expression data.

    Science.gov (United States)

    Furey, T S; Cristianini, N; Duffy, N; Bednarski, D W; Schummer, M; Haussler, D

    2000-10-01

    DNA microarray experiments, generating thousands of gene expression measurements, are being used to gather information from tissue and cell samples regarding gene expression differences that will be useful in diagnosing disease. We have developed a new method to analyse this kind of data using support vector machines (SVMs). This analysis consists of both classification of the tissue samples and an exploration of the data for mis-labeled or questionable tissue results. We demonstrate the method in detail on samples consisting of ovarian cancer tissues, normal ovarian tissues, and other normal tissues. The dataset consists of expression experiment results for 97,802 cDNAs for each tissue. As a result of computational analysis, a tissue sample is discovered and confirmed to be wrongly labeled. Upon correction of this mistake and the removal of an outlier, perfect classification of tissues is achieved, but not with high confidence. We identify and analyse a subset of genes from the ovarian dataset whose expression is highly differentiated between the types of tissues. To show the robustness of the SVM method, two previously published datasets from other types of tissues or cells are analysed. The results are comparable to those previously obtained. We show that other machine learning methods also perform comparably to the SVM on many of those datasets. The SVM software is available at http://www.cs.columbia.edu/~bgrundy/svm.
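
    A sketch of this analysis pattern with scikit-learn, using dummy data of the stated shape; samples that are repeatedly misclassified across folds are the candidates for mis-labeled tissues:

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    X = np.random.randn(30, 97802)      # 30 tissues x 97,802 cDNAs (dummy)
    y = np.random.randint(0, 2, 30)     # cancer vs. normal labels (dummy)

    scores = cross_val_score(SVC(kernel="linear"), X, y, cv=LeaveOneOut())
    print("LOOCV accuracy:", scores.mean())
    ```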

  8. Genetic shifting: a novel approach for controlling vector-borne diseases.

    Science.gov (United States)

    Powell, Jeffrey R; Tabachnick, Walter J

    2014-06-01

    Rendering populations of vectors of diseases incapable of transmitting pathogens through genetic methods has long been a goal of vector geneticists. We outline a method to achieve this goal that does not involve the introduction of any new genetic variants to the target population. Rather we propose that shifting the frequencies of naturally occurring alleles that confer refractoriness to transmission can reduce transmission below a sustainable level. The program employs methods successfully used in plant and animal breeding. Because no artificially constructed genetically modified organisms (GMOs) are introduced into the environment, the method is minimally controversial. We use Aedes aegypti and dengue virus (DENV) for illustrative purposes but point out that the proposed program is generally applicable to vector-borne disease control. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Combinatorial vector fields and the valley structure of fitness landscapes.

    Science.gov (United States)

    Stadler, Bärbel M R; Stadler, Peter F

    2010-12-01

    Adaptive (downhill) walks are a computationally convenient way of analyzing the geometric structure of fitness landscapes. Their inherently stochastic nature has limited their mathematical analysis, however. Here we develop a framework that interprets adaptive walks as deterministic trajectories in combinatorial vector fields and in turn associates these combinatorial vector fields with weights that measure their steepness across the landscape. We show that the combinatorial vector fields and their weights have a product structure that is governed by the neutrality of the landscape. This product structure makes practical computations feasible. The framework presented here also provides an alternative, and mathematically more convenient, way of defining notions of valleys, saddle points, and barriers in landscapes. As an application, we propose a refined approximation for transition rates between macrostates that are associated with the valleys of the landscape.

  10. National Cooperative Soil Survey (NCSS) Laboratory Data, NCSS Lab Data Mart Point Dataset

    Data.gov (United States)

    Department of Agriculture — This layer represents the National Cooperative Soil Survey laboratory data of soil properties for soil samples taken at sites or points on the Earth’s globe – mainly...

  11. CoSpa: A Co-training Approach for Spam Review Identification with Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Wen Zhang

    2016-03-01

    Full Text Available Spam reviews are increasingly appearing on the Internet to promote sales or defame competitors by misleading consumers with deceptive opinions. This paper proposes a co-training approach called CoSpa (Co-training for Spam review identification) to identify spam reviews using two views: one is the lexical terms derived from the textual content of the reviews, and the other is the PCFG (Probabilistic Context-Free Grammar) rules derived from a deep syntax analysis of the reviews. Using an SVM (Support Vector Machine) as the base classifier, we develop two strategies, CoSpa-C and CoSpa-U, embedded within the CoSpa approach. The CoSpa-C strategy selects the unlabeled reviews classified with the largest confidence to augment the training dataset and retrain the classifier. The CoSpa-U strategy randomly selects unlabeled reviews with a uniform distribution of confidence. Experiments on the spam dataset and the deception dataset demonstrate that both of the proposed CoSpa algorithms outperform the traditional SVM with lexical terms and PCFG rules in spam review identification. Moreover, the CoSpa-U strategy outperforms the CoSpa-C strategy when we use the absolute value of the decision function of the SVM as the confidence.
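
    A schematic of a CoSpa-C-style co-training loop, in which two SVM views (lexical terms and PCFG rules) pseudo-label each other's most confident unlabeled reviews; feature matrices and the batch size k are illustrative:

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC

    def cotrain(X1, X2, y, lab_idx, unlab_idx, rounds=10, k=5):
        """X1, X2: the two feature views (n_samples x n_features each);
        y: label array, valid on lab_idx, overwritten with pseudo-labels."""
        lab, unlab = list(lab_idx), list(unlab_idx)
        for _ in range(rounds):
            if not unlab:
                break
            for Xv in (X1, X2):                      # alternate the views
                clf = LinearSVC().fit(Xv[lab], y[lab])
                conf = np.abs(clf.decision_function(Xv[unlab]))
                picks = np.argsort(conf)[-k:]        # most confident ones
                chosen = [unlab[i] for i in picks]
                y[chosen] = clf.predict(Xv[chosen])  # pseudo-label them
                lab += chosen
                unlab = [i for i in unlab if i not in chosen]
        return LinearSVC().fit(X1[lab], y[lab])
    ```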

  12. Point Cluster Analysis Using a 3D Voronoi Diagram with Applications in Point Cloud Segmentation

    Directory of Open Access Journals (Sweden)

    Shen Ying

    2015-08-01

    Full Text Available Three-dimensional (3D) point analysis and visualization is one of the most effective methods of point cluster detection and segmentation in geospatial datasets. However, serious scattering and clotting characteristics interfere with the visual detection of 3D point clusters. To overcome this problem, this study proposes the use of 3D Voronoi diagrams to analyze and visualize 3D points instead of the original data items. The proposed algorithm computes clusters of 3D points by applying a set of 3D Voronoi cells to describe and quantify the 3D points. The decomposition of the point cloud of 3D models is guided by the 3D Voronoi cell parameters. The parameter values are mapped from the Voronoi cells to the 3D points to show the spatial pattern and relationships; thus, a 3D point cluster pattern can be highlighted and easily recognized. To capture different cluster patterns, continuous progressive clusters and segmentations are tested. The 3D spatial relationship is shown to facilitate cluster detection. Furthermore, the generated segmentations of real 3D data cases are exploited to demonstrate the feasibility of our approach in detecting different spatial clusters for continuous point cloud segmentation.
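
    The Voronoi-cell parameters that drive the mapping can be illustrated with per-cell volumes (small cells indicate locally dense clusters). A sketch that simply skips unbounded boundary cells rather than clipping them to the domain:

    ```python
    import numpy as np
    from scipy.spatial import Voronoi, ConvexHull

    def voronoi_cell_volumes(points):
        """3D Voronoi cell volume for each input point; NaN for cells
        that extend to infinity (boundary points)."""
        vor = Voronoi(points)
        volumes = np.full(len(points), np.nan)
        for i, region_idx in enumerate(vor.point_region):
            region = vor.regions[region_idx]
            if not region or -1 in region:      # unbounded cell: skip
                continue
            volumes[i] = ConvexHull(vor.vertices[region]).volume
        return volumes
    ```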

  13. Numerical simulation using vorticity-vector potential formulation

    Science.gov (United States)

    Tokunaga, Hiroshi

    1993-01-01

    An accurate and efficient computational method is needed for three-dimensional incompressible viscous flows in engineering applications. In solving turbulent shear flows directly or using a subgrid scale model, it is indispensable to resolve the small-scale fluid motions as well as the large-scale motions. From this point of view, the pseudo-spectral method has been used so far as the computational method. However, the finite difference and finite element methods are widely applied for computing flows of practical importance, since these methods are easily applied to flows with complex geometric configurations. However, several problems exist in applying the finite difference method to direct and large eddy simulations. Accuracy is one of the most important: this point was already addressed by the present author in direct simulations of the instability of plane Poiseuille flow and of the transition to turbulence. In order to obtain high efficiency, the multi-grid Poisson solver is combined with the higher-order accurate finite difference method. The formulation is also one of the most important problems in applying the finite difference method to incompressible turbulent flows. The three-dimensional Navier-Stokes equations have so far been solved in the primitive variables formulation. One of the major difficulties of this method is the rigorous satisfaction of the equation of continuity. In general, a staggered grid is used to satisfy the solenoidal condition for the velocity field at the wall boundary. However, the velocity field satisfies the equation of continuity automatically in the vorticity-vector potential formulation. From this point of view, the vorticity-vector potential method was extended to the generalized coordinate system. In the present article, we adopt the vorticity-vector potential formulation, the generalized coordinate system, and the 4th-order accurate difference method as the
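
    For reference, the vorticity-vector potential formulation referred to above can be written compactly (this is the standard form of the equations, not a quotation from the article):

    ```latex
    \begin{align}
      \mathbf{u} &= \nabla \times \boldsymbol{\psi}, \qquad
      \boldsymbol{\omega} = \nabla \times \mathbf{u}, \\
      \nabla^{2} \boldsymbol{\psi} &= -\boldsymbol{\omega}
        \quad \text{(gauge } \nabla \cdot \boldsymbol{\psi} = 0\text{)}, \\
      \frac{\partial \boldsymbol{\omega}}{\partial t}
        &= \nabla \times (\mathbf{u} \times \boldsymbol{\omega})
         + \nu \nabla^{2} \boldsymbol{\omega},
    \end{align}
    ```

    so that the continuity equation \(\nabla \cdot \mathbf{u} = 0\) holds identically, which is precisely the advantage over the primitive variables formulation noted above.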

  14. SIMADL: Simulated Activities of Daily Living Dataset

    Directory of Open Access Journals (Sweden)

    Talal Alshammari

    2018-04-01

    Full Text Available With the realisation of the Internet of Things (IoT) paradigm, the analysis of the Activities of Daily Living (ADLs) in a smart home environment is becoming an active research domain. The existence of representative datasets is a key requirement to advance research in smart home design. Such datasets are an integral part of the visualisation of new smart home concepts as well as the validation and evaluation of emerging machine learning models. Machine learning techniques that can learn ADLs from sensor readings are used to classify, predict and detect anomalous patterns. Such techniques require data that represent relevant smart home scenarios for training, testing and validation. However, the development of such machine learning techniques is limited by the lack of real smart home datasets, due to the excessive cost of building real smart homes. This paper provides two datasets for classification and anomaly detection. The datasets were generated using OpenSHS (Open Smart Home Simulator), a simulation software package for dataset generation. OpenSHS records the daily activities of a participant within a virtual environment. Seven participants simulated their ADLs for different contexts, e.g., weekdays, weekends, mornings and evenings. Eighty-four files in total were generated, representing approximately 63 days' worth of activities. Forty-two files of ADL classifications were simulated for the classification dataset, and the other forty-two files are for anomaly detection problems, in which anomalous patterns were simulated and injected into the anomaly detection dataset.

  15. The NOAA Dataset Identifier Project

    Science.gov (United States)

    de la Beaujardiere, J.; Mccullough, H.; Casey, K. S.

    2013-12-01

    The US National Oceanic and Atmospheric Administration (NOAA) initiated a project in 2013 to assign persistent identifiers to datasets archived at NOAA and to create informational landing pages about those datasets. The goals of this project are to enable the citation of datasets used in products and results in order to help provide credit to data producers, to support traceability and reproducibility, and to enable tracking of data usage and impact. A secondary goal is to encourage the submission of datasets for long-term preservation, because only archived datasets will be eligible for a NOAA-issued identifier. A team was formed with representatives from the National Geophysical, Oceanographic, and Climatic Data Centers (NGDC, NODC, NCDC) to resolve questions including which identifier scheme to use (answer: Digital Object Identifier - DOI), whether or not to embed semantics in identifiers (no), the level of granularity at which to assign identifiers (as coarsely as reasonable), how to handle ongoing time-series data (do not break into chunks), creation mechanism for the landing page (stylesheet from formal metadata record preferred), and others. Decisions made and implementation experience gained will inform the writing of a Data Citation Procedural Directive to be issued by the Environmental Data Management Committee in 2014. Several identifiers have been issued as of July 2013, with more on the way. NOAA is now reporting the number as a metric to federal Open Government initiatives. This paper will provide further details and status of the project.

  16. Predicting weather regime transitions in Northern Hemisphere datasets

    Energy Technology Data Exchange (ETDEWEB)

    Kondrashov, D. [University of California, Department of Atmospheric and Oceanic Sciences and Institute of Geophysics and Planetary Physics, Los Angeles, CA (United States); Shen, J. [UCLA, Department of Statistics, Los Angeles, CA (United States); Berk, R. [UCLA, Department of Statistics, Los Angeles, CA (United States); University of Pennsylvania, Department of Criminology, Philadelphia, PA (United States); D' Andrea, F.; Ghil, M. [Ecole Normale Superieure, Departement Terre-Atmosphere-Ocean and Laboratoire de Meteorologie Dynamique (CNRS and IPSL), Paris Cedex 05 (France)

    2007-10-15

    A statistical learning method called random forests is applied to the prediction of transitions between weather regimes of wintertime Northern Hemisphere (NH) atmospheric low-frequency variability. A dataset composed of 55 winters of NH 700-mb geopotential height anomalies is used in the present study. A mixture model finds that the three Gaussian components that were statistically significant in earlier work are robust; they are the Pacific-North American (PNA) regime, its approximate reverse (the reverse PNA, or RNA), and the blocked phase of the North Atlantic Oscillation (BNAO). The most significant and robust transitions in the Markov chain generated by these regimes are PNA {yields} BNAO, PNA {yields} RNA and BNAO {yields} PNA. The break of a regime and subsequent onset of another one is forecast for these three transitions. Taking the relative costs of false positives and false negatives into account, the random-forests method shows useful forecasting skill. The calculations are carried out in the phase space spanned by a few leading empirical orthogonal functions of dataset variability. Plots of estimated response functions to a given predictor confirm the crucial influence of the exit angle on a preferred transition path. This result points to the dynamic origin of the transitions. (orig.)
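
    A sketch of the learning setup described, with dummy shapes: a random forest predicting whether a regime break follows, from coordinates in the space of a few leading EOFs, with an illustrative cost asymmetry between false negatives and false positives:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    X = np.random.randn(5000, 10)       # leading EOF/PC coordinates (dummy)
    y = np.random.randint(0, 2, 5000)   # 1 if a transition follows, else 0

    # class_weight encodes the relative costs of missed transitions
    # (false negatives) versus false alarms.
    clf = RandomForestClassifier(n_estimators=500,
                                 class_weight={0: 1.0, 1: 5.0})
    clf.fit(X, y)
    print(clf.predict_proba(X[:3]))     # estimated transition probabilities
    ```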

  17. a Variant of Lsd-Slam Capable of Processing High-Speed Low-Framerate Monocular Datasets

    Science.gov (United States)

    Schmid, S.; Fritsch, D.

    2017-11-01

    We develop a new variant of LSD-SLAM, called C-LSD-SLAM, which is capable of performing monocular tracking and mapping in high-speed, low-framerate situations such as those of the KITTI datasets. The methods used here are robust against the influence of erroneously triangulated points near the epipolar direction, which otherwise cause tracking divergence.

  18. Control Measure Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — The EPA Control Measure Dataset is a collection of documents describing air pollution control available to regulated facilities for the control and abatement of air...

  19. Superfund Removal Site Points, Region 9, 2012, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — Point geospatial dataset representing locations of CERCLA (Superfund) Removal sites. CERCLA (Comprehensive Environmental Response, Compensation, and Liability Act)...

  20. Signed zeros of Gaussian vector fields - density, correlation functions and curvature

    CERN Document Server

    Foltin, G

    2003-01-01

    We calculate correlation functions of the (signed) density of zeros of Gaussian distributed vector fields. We are able to express correlation functions of arbitrary order through the curvature tensor of a certain abstract Riemann-Cartan or Riemannian manifold. As an application, we discuss one- and two-point functions. The zeros of a two-dimensional Gaussian vector field model the distribution of topological defects in the high-temperature phase of two-dimensional systems with orientational degrees of freedom, such as superfluid films, thin superconductors and liquid crystals.
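
    For context, the signed zero density whose correlations are computed here has a standard Kac-Rice-type form for a two-dimensional field u(x) (notation ours, not necessarily the paper's):

    ```latex
    \rho_s(\mathbf{x})
      = \sum_i \operatorname{sgn}\big(\det \partial u(\mathbf{x}_i)\big)\,
        \delta^{2}(\mathbf{x}-\mathbf{x}_i)
      = \det\big(\partial_a u_b(\mathbf{x})\big)\,
        \delta^{2}\big(\mathbf{u}(\mathbf{x})\big),
    ```

    and its moments over the Gaussian ensemble yield the one- and two-point functions discussed in the abstract.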

  1. MosquitoMap and the Mal-area calculator: new web tools to relate mosquito species distribution with vector borne disease.

    Science.gov (United States)

    Foley, Desmond H; Wilkerson, Richard C; Birney, Ian; Harrison, Stanley; Christensen, Jamie; Rueda, Leopoldo M

    2010-02-18

    Mosquitoes are important vectors of diseases but, in spite of various mosquito faunistic surveys globally, there is a need for a spatial online database of mosquito collection data and distribution summaries. Such a resource could provide entomologists with the results of previous mosquito surveys, and provide vector disease control workers, preventative medicine practitioners, and health planners with information relating mosquito distribution to vector-borne disease risk. A web application called MosquitoMap was constructed comprising mosquito collection point data stored in an ArcGIS 9.3 Server/SQL geodatabase that includes administrative area and vector species × country lookup tables. In addition to the layer containing mosquito collection points, other map layers were made available, including environmental layers and vector and pathogen/disease distribution layers. An application within MosquitoMap called the Mal-area calculator (MAC) was constructed to quantify the area of overlap, for any area of interest, of vector, human, and disease distribution models. Data standards for mosquito records were developed for MosquitoMap. MosquitoMap is a public domain web resource that maps and compares georeferenced mosquito collection points to other spatial information in a geographical information system setting. The MAC quantifies the Mal-area, i.e. the area where it is theoretically possible for vector-borne disease transmission to occur, thus providing a useful decision tool where other disease information is limited. The Mal-area approach emphasizes the independent but cumulative contribution to disease risk of each vector species predicted present. MosquitoMap adds value to, and makes accessible, the results of past collecting efforts, as well as providing a template for other arthropod spatial databases.
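
    The Mal-area computation reduces to intersecting presence layers over an area of interest. An illustrative raster version with hypothetical inputs on a common grid:

    ```python
    import numpy as np

    def mal_area(vector_presence, human_presence, pathogen_presence,
                 cell_area_km2):
        """Area (km^2) where vector, human and pathogen presence rasters
        (boolean arrays on a common grid) all overlap."""
        overlap = vector_presence & human_presence & pathogen_presence
        return overlap.sum() * cell_area_km2

    # Summing one such overlap per predicted vector species mirrors the
    # cumulative, per-species contribution to risk described above.
    ```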

  2. Determination of key parameters of vector multifractal vector fields

    Science.gov (United States)

    Schertzer, D. J. M.; Tchiguirinskaia, I.

    2017-12-01

    For too long, multifractal analyses and simulations have been restricted to scalar-valued fields (Schertzer and Tchiguirinskaia, 2017a,b). For instance, wind velocity multifractality has mostly been analysed in terms of scalar structure functions and the scalar energy flux. This restriction has had the unfortunate consequence that multifractals have not been applicable to their full extent in geophysics, even though geophysics inspired them. Indeed, a key question in geophysics is the complexity of the interactions between various fields or their components. Nevertheless, sophisticated methods have been developed to determine the key parameters of scalar-valued fields. In this communication, we first present the vector extensions of the universal multifractal analysis techniques to multifractals whose generator belongs to a Levy-Clifford algebra (Schertzer and Tchiguirinskaia, 2015). We point out further extensions, noting the increased complexity; for instance, the (scalar) index of multifractality becomes a matrix. Schertzer, D. and Tchiguirinskaia, I. (2015) 'Multifractal vector fields and stochastic Clifford algebra', Chaos: An Interdisciplinary Journal of Nonlinear Science, 25(12), p. 123127. doi: 10.1063/1.4937364. Schertzer, D. and Tchiguirinskaia, I. (2017a) 'An Introduction to Multifractals and Scale Symmetry Groups', in Ghanbarian, B. and Hunt, A. (eds) Fractals: Concepts and Applications in Geosciences. CRC Press, p. (in press). Schertzer, D. and Tchiguirinskaia, I. (2017b) 'Pandora Box of Multifractals: Barely Open?', in Tsonis, A. A. (ed.) 30 Years of Nonlinear Dynamics in Geophysics. Berlin: Springer, p. (in press).

  3. Fixed Point in Topological Vector Space-Valued Cone Metric Spaces

    Directory of Open Access Journals (Sweden)

    Muhammad Arshad

    2010-01-01

    Full Text Available We obtain common fixed points of a pair of mappings satisfying a generalized contractive type condition in TVS-valued cone metric spaces. Our results generalize some well-known recent results in the literature.

  4. The Kinetics Human Action Video Dataset

    OpenAIRE

    Kay, Will; Carreira, Joao; Simonyan, Karen; Zhang, Brian; Hillier, Chloe; Vijayanarasimhan, Sudheendra; Viola, Fabio; Green, Tim; Back, Trevor; Natsev, Paul; Suleyman, Mustafa; Zisserman, Andrew

    2017-01-01

    We describe the DeepMind Kinetics human action video dataset. The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human focussed and cover a broad range of classes including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands. We describe the statistics of the dataset, how it was collected, and give some ...

  5. Vectorization of KENO IV code and an estimate of vector-parallel processing

    International Nuclear Information System (INIS)

    Asai, Kiyoshi; Higuchi, Kenji; Katakura, Jun-ichi; Kurita, Yutaka.

    1986-10-01

    The multi-group criticality safety code KENO IV has been vectorized and tested on the FACOM VP-100 vector processor. At first, the vectorized KENO IV ran slower on a scalar processor than the original code by a factor of 1.4 because of the overhead introduced by vectorization. After modifications of the algorithms and vectorization techniques, the vectorized version became faster than the original by factors of 1.4 and 3.0 on the vector processor for sample problems of complex and simple geometries, respectively. For further speedup of the code, some improvements on compiler and hardware, especially the addition of Monte Carlo pipelines to the vector processor, are discussed. Finally a pipelined parallel processor system is proposed and its performance is estimated. (author)

  6. Review: Vector-host-parasite inter-relationships in Leishmaniasis: a ...

    African Journals Online (AJOL)

    This manuscript proposes a new hypothesis and a new concept in Leishmaniasis transmission, that points to a “vector-host-parasite” specificity. This concept will open a new approach for analysis of Leishmania/sandfly problems in transmission. Indeed, it will help to answer the following questions: 1-Why transmission ...

  7. Comparison of CORA and EN4 in-situ datasets validation methods, toward a better quality merged dataset.

    Science.gov (United States)

    Szekely, Tanguy; Killick, Rachel; Gourrion, Jerome; Reverdin, Gilles

    2017-04-01

    CORA and EN4 are both global delayed-time-mode validated in-situ ocean temperature and salinity datasets distributed by the Met Office (http://www.metoffice.gov.uk/) and Copernicus (www.marine.copernicus.eu). A large part of the profiles distributed by CORA and EN4 in recent years are Argo profiles from the Argo DAC, but profiles are also extracted from the World Ocean Database, along with TESAC profiles from GTSPP. In the case of CORA, data coming from the EuroGOOS Regional Operational Observing Systems (ROOS) operated by European institutes not managed by National Data Centres, as well as other profile datasets provided by scientific sources (sea-mammal profiles from MEOP, XBT datasets from cruises, ...), can also be found. (EN4 also takes data from the ASBO dataset to supplement observations in the Arctic.) The first advantage of this new merged product is to enhance the space and time coverage at global and European scales for the period from 1950 up to a year before the current year. This product is updated once a year, and T&S gridded fields are also generated for the period from 1990 to year n-1. The enhancement compared to the previous CORA product will be presented. Although the profiles distributed by both datasets are mostly the same, the quality control procedures developed by the Met Office and Copernicus teams differ, sometimes leading to different quality control flags for the same profile. A new study started in 2016 that aims to compare both validation procedures and move towards a Copernicus Marine Service dataset with the best features of CORA and EN4 validation. A reference dataset composed of the full set of in-situ temperature and salinity measurements collected by Coriolis during 2015 is used. These measurements have been made with a wide range of instruments (XBTs, CTDs, Argo floats, instrumented sea mammals, ...), covering the global ocean. The reference dataset has been validated simultaneously by both teams. An exhaustive comparison of the

  8. Dissipative N-point-vortex Models in the Plane

    Science.gov (United States)

    Shashikanth, Banavara N.

    2010-02-01

    A method is presented for constructing point vortex models in the plane that dissipate the Hamiltonian function at any prescribed rate and yet conserve the level sets of the invariants of the Hamiltonian model arising from the SE(2) symmetries. The method is purely geometric in that it uses the level sets of the Hamiltonian and the invariants to construct the dissipative field and is based on elementary classical geometry in ℝ³. Extension to higher-dimensional spaces, such as the point vortex phase space, is done using exterior algebra. The method is in fact general enough to apply to any smooth finite-dimensional system with conserved quantities, and, for certain special cases, the dissipative vector field constructed can be associated with an appropriately defined double Nambu-Poisson bracket. The most interesting feature of this method is that it allows for an infinite sequence of such dissipative vector fields to be constructed by repeated application of a symmetric linear operator (matrix) at each point of the intersection of the level sets.

  9. VECTORIZATION OF ROAD DATA EXTRACTED FROM AERIAL AND UAV IMAGERY

    Directory of Open Access Journals (Sweden)

    D. Bulatov

    2016-06-01

    Full Text Available Road databases are essential instances of urban infrastructure. Therefore, automatic road detection from sensor data has been an important research activity for many decades. Given aerial images of sufficient resolution, dense 3D reconstruction can be performed. Starting from a classification result of road pixels from combined elevation and optical data, we present in this paper a five-step procedure for creating vectorized road networks. The main steps of the algorithm are: preprocessing, thinning, polygonization, filtering, and generalization. In particular, for the generalization step, which represents the principal area of innovation, two strategies are presented. The first strategy corresponds to a modification of the Douglas-Peucker algorithm in order to reduce the number of vertices, while the second strategy allows a smoother representation of street windings by Bézier curves, which results in a reduction, by roughly an order of magnitude, of the total curvature defined for the dataset. We tested our approach on three datasets of different complexity. The quantitative assessment of the results was performed by means of shapefiles from OpenStreetMap data. For a threshold of 6 m, completeness and correctness values of up to 85% were achieved.
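
    The generalization step named above builds on the Douglas-Peucker algorithm. The following is a minimal pure-Python sketch of that classic vertex-reduction scheme, not the authors' modified version; the tolerance and the sample polyline are arbitrary.

```python
# A minimal sketch of Douglas-Peucker line generalization (pure Python).
import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg_len = math.hypot(dx, dy)
    if seg_len == 0.0:
        return math.hypot(px - ax, py - ay)
    return abs(dx * (ay - py) - dy * (ax - px)) / seg_len

def douglas_peucker(points, epsilon):
    """Recursively drop vertices closer than epsilon to the chord."""
    if len(points) < 3:
        return points
    # Find the vertex farthest from the chord between the endpoints.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]
    left = douglas_peucker(points[: index + 1], epsilon)
    right = douglas_peucker(points[index:], epsilon)
    return left[:-1] + right  # avoid duplicating the split vertex

polyline = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(polyline, epsilon=1.0))
```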

  10. Study of the Integration of LIDAR and Photogrammetric Datasets by in Situ Camera Calibration and Integrated Sensor Orientation

    Science.gov (United States)

    Mitishita, E.; Costa, F.; Martins, M.

    2017-05-01

    Photogrammetric and Lidar datasets should be in the same mapping or geodetic frame to be used simultaneously in an engineering project. Nowadays, direct sensor orientation is a common procedure in simultaneous photogrammetric and Lidar surveys. Although direct sensor orientation technologies provide a high degree of automation due to GNSS/INS technologies, the accuracies of the results obtained from photogrammetric and Lidar surveys depend on the quality of a group of parameters that accurately models the conditions of the system at the moment the job is performed. This paper presents a study performed to verify the importance of in situ camera calibration and Integrated Sensor Orientation without control points for increasing the accuracy of the integration of photogrammetric and Lidar datasets. The horizontal and vertical accuracies of the photogrammetric and Lidar dataset integration by photogrammetric procedure improved significantly when the Integrated Sensor Orientation (ISO) approach was performed using Interior Orientation Parameter (IOP) values estimated from the in situ camera calibration. The horizontal and vertical accuracies, estimated by the Root Mean Square Error (RMSE) of the 3D discrepancies from the Lidar check points, improved by around 37% and 198%, respectively.

  11. Data-driven probability concentration and sampling on manifold

    Energy Technology Data Exchange (ETDEWEB)

    Soize, C., E-mail: christian.soize@univ-paris-est.fr [Université Paris-Est, Laboratoire Modélisation et Simulation Multi-Echelle, MSME UMR 8208 CNRS, 5 bd Descartes, 77454 Marne-La-Vallée Cedex 2 (France); Ghanem, R., E-mail: ghanem@usc.edu [University of Southern California, 210 KAP Hall, Los Angeles, CA 90089 (United States)

    2016-09-15

    A new methodology is proposed for generating realizations of a random vector with values in a finite-dimensional Euclidean space that are statistically consistent with a dataset of observations of this vector. The probability distribution of this random vector, while a priori not known, is presumed to be concentrated on an unknown subset of the Euclidean space. A random matrix is introduced whose columns are independent copies of the random vector and for which the number of columns is the number of data points in the dataset. The approach is based on the use of (i) the multidimensional kernel-density estimation method for estimating the probability distribution of the random matrix, (ii) a MCMC method for generating realizations for the random matrix, (iii) the diffusion-maps approach for discovering and characterizing the geometry and the structure of the dataset, and (iv) a reduced-order representation of the random matrix, which is constructed using the diffusion-maps vectors associated with the first eigenvalues of the transition matrix relative to the given dataset. The convergence aspects of the proposed methodology are analyzed and a numerical validation is explored through three applications of increasing complexity. The proposed method is found to be robust to noise levels and data complexity as well as to the intrinsic dimension of data and the size of experimental datasets. Both the methodology and the underlying mathematical framework presented in this paper contribute new capabilities and perspectives at the interface of uncertainty quantification, statistical data analysis, stochastic modeling and associated statistical inverse problems.
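
    A minimal sketch of the diffusion-maps ingredient (item (iii) above) is given below: build a Gaussian kernel on the data, normalize it into a Markov transition matrix, and keep the leading eigenvectors as reduced-order coordinates. The bandwidth heuristic and the synthetic dataset are assumptions for illustration, not the paper's construction.

```python
# A minimal diffusion-maps sketch with numpy only; toy data, toy bandwidth.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # 200 observations of a 5-D random vector

# Pairwise squared distances and Gaussian kernel.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
eps = np.median(sq)                     # simple bandwidth heuristic (assumption)
K = np.exp(-sq / eps)

# Row-normalize into a Markov transition matrix and eigendecompose.
P = K / K.sum(axis=1, keepdims=True)
eigvals, eigvecs = np.linalg.eig(P)
order = np.argsort(-eigvals.real)

# Diffusion-map coordinates from the first few non-trivial eigenvectors.
m = 3
basis = eigvecs.real[:, order[1 : m + 1]] * eigvals.real[order[1 : m + 1]]
print("reduced-order coordinates:", basis.shape)
```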

  12. Introducing a Web API for Dataset Submission into a NASA Earth Science Data Center

    Science.gov (United States)

    Moroni, D. F.; Quach, N.; Francis-Curley, W.

    2016-12-01

    As the landscape of data becomes increasingly diverse in the domain of Earth Science, the challenges of managing and preserving data become more onerous and complex, particularly for data centers with fixed budgets and limited staff. Many solutions already exist to ease the cost burden for the downstream component of the data lifecycle, yet most archive centers are still racing to keep up with the influx of new data that still needs to find a quasi-permanent resting place. For instance, having well-defined metadata that is consistent across the entire data landscape provides for well-managed and preserved datasets throughout the latter end of the data lifecycle. Translators between different metadata dialects are already in operational use, and facilitate keeping older datasets relevant in today's world of rapidly evolving metadata standards. However, very little is done to address the first phase of the lifecycle, which deals with the entry of both data and the corresponding metadata into a system that is traditionally opaque and closed off to external data producers, thus resulting in a significant bottleneck to the dataset submission process. The ATRAC system was the NOAA NCEI's answer to this previously obfuscated barrier for scientists wishing to find a home for their climate data records, providing a web-based entry point to submit timely and accurate metadata and information about a very specific dataset. A couple of NASA's Distributed Active Archive Centers (DAACs) have implemented their own versions of a web-based dataset and metadata submission form, including the ASDC and the ORNL DAAC. The Physical Oceanography DAAC is the most recent in the list of NASA-operated DAACs that have begun to offer their own web-based dataset and metadata submission services to data producers. What makes the PO.DAAC dataset and metadata submission service stand out from these pre-existing services is the option of utilizing both a web browser GUI and a RESTful API to
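
    The sketch below illustrates what a RESTful metadata submission of the kind described above might look like from a data producer's side. The endpoint URL and the JSON fields are entirely hypothetical; they are not the PO.DAAC or ATRAC API.

```python
# A purely hypothetical dataset-metadata submission over HTTP with requests.
import requests

metadata = {
    "title": "Example Sea Surface Temperature Dataset",   # invented record
    "producer": "Example Lab",
    "temporal_coverage": {"start": "2015-01-01", "end": "2015-12-31"},
}

# POST the candidate metadata record to a placeholder submission endpoint.
response = requests.post(
    "https://example.org/api/dataset-submission",  # hypothetical URL
    json=metadata,
    timeout=30,
)
print("status:", response.status_code)
```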

  13. Versatile generation of optical vector fields and vector beams using a non-interferometric approach.

    Science.gov (United States)

    Tripathi, Santosh; Toussaint, Kimani C

    2012-05-07

    We present a versatile, non-interferometric method for generating vector fields and vector beams which can produce all the states of polarization represented on a higher-order Poincaré sphere. The versatility and non-interferometric nature of this method is expected to enable exploration of various exotic properties of vector fields and vector beams. To illustrate this, we study the propagation properties of some vector fields and find that, in general, propagation alters both their intensity and polarization distribution, and more interestingly, converts some vector fields into vector beams. In the article, we also suggest a modified Jones vector formalism to represent vector fields and vector beams.

  14. Lefschetz thimbles in fermionic effective models with repulsive vector-field

    Science.gov (United States)

    Mori, Yuto; Kashiwa, Kouji; Ohnishi, Akira

    2018-06-01

    We discuss two problems in complexified auxiliary fields in fermionic effective models, the auxiliary sign problem associated with the repulsive vector-field and the choice of the cut for the scalar field appearing from the logarithmic function. In the fermionic effective models with attractive scalar and repulsive vector-type interaction, the auxiliary scalar and vector fields appear in the path integral after the bosonization of fermion bilinears. When we make the path integral well-defined by the Wick rotation of the vector field, the oscillating Boltzmann weight appears in the partition function. This "auxiliary" sign problem can be solved by using the Lefschetz-thimble path-integral method, where the integration path is constructed in the complex plane. Another serious obstacle in the numerical construction of Lefschetz thimbles is caused by singular points and cuts induced by multivalued functions of the complexified scalar field in the momentum integration. We propose a new prescription which fixes gradient flow trajectories on the same Riemann sheet in the flow evolution by performing the momentum integration in the complex domain.

  15. Some BMO estimates for vector-valued multilinear singular integral ...

    Indian Academy of Sciences (India)

    BMO estimates for the multilinear operator related to some singular integral operators are obtained. The main purpose of this paper is to establish BMO end-point estimates for some vector-valued multilinear operators related to certain singular integral operators. First, let us introduce some notation [10,16]. Throughout this paper, Q = Q(x,r).

  16. PREDICTING THE BOILING POINT OF PCDD/Fs BY THE QSPR METHOD BASED ON THE MOLECULAR DISTANCE-EDGE VECTOR INDEX

    Directory of Open Access Journals (Sweden)

    Long Jiao

    2015-05-01

    Full Text Available The quantitative structure property relationship (QSPR) for the boiling point (Tb) of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans (PCDD/Fs) was investigated. The molecular distance-edge vector (MDEV) index was used as the structural descriptor. The quantitative relationship between the MDEV index and Tb was modeled by using multivariate linear regression (MLR) and an artificial neural network (ANN), respectively. Leave-one-out cross validation and external validation were carried out to assess the prediction performance of the models developed. For the MLR method, the prediction root mean square relative error (RMSRE) of leave-one-out cross validation and external validation was 1.77 and 1.23, respectively. For the ANN method, the prediction RMSRE of leave-one-out cross validation and external validation was 1.65 and 1.16, respectively. A quantitative relationship between the MDEV index and Tb of PCDD/Fs was demonstrated. Both MLR and ANN are practicable for modeling this relationship. The MLR model and ANN model developed can be used to predict the Tb of PCDD/Fs. Thus, the Tb of each PCDD/F was predicted by the developed models.
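
    As a toy illustration of the MLR step, the sketch below regresses boiling points on descriptor values with scikit-learn and scores the fit by leave-one-out RMSRE, the measure quoted above. The descriptor and boiling-point numbers are fabricated placeholders, not the paper's MDEV data.

```python
# MLR with leave-one-out validation on fabricated descriptor/boiling-point data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

X = np.array([[1.2, 0.8], [1.5, 1.1], [2.0, 1.4], [2.4, 1.9], [2.9, 2.2]])
y = np.array([560.0, 585.0, 610.0, 640.0, 668.0])  # boiling points in K (toy)

model = LinearRegression()
pred = cross_val_predict(model, X, y, cv=LeaveOneOut())

# Root mean square relative error, the performance measure quoted above.
rmsre = np.sqrt(np.mean(((pred - y) / y) ** 2)) * 100
print(f"LOO RMSRE: {rmsre:.2f}%")
```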

  17. Spacelike conformal Killing vectors and spacelike congruences

    International Nuclear Information System (INIS)

    Mason, D.P.; Tsamparlis, M.

    1985-01-01

    Necessary and sufficient conditions are derived for space-time to admit a spacelike conformal motion with symmetry vector parallel to a unit spacelike vector field n^a. These conditions are expressed in terms of the shear and expansion of the spacelike congruence generated by n^a and in terms of the four-velocity of the observer employed at any given point of the congruence. It is shown that either the expansion or the rotation of this spacelike congruence must vanish if Dn^a/dp = 0, where p denotes arc length measured along the integral curves of n^a, and also that there exist no proper spacelike homothetic motions with constant expansion. Propagation equations for the projection tensor and the rotation tensor are derived, and it is proved that every isometric spacelike congruence is rigid. Fluid space-times are studied in detail. A relation is established between spacelike conformal motions and material curves in the fluid: if a fluid space-time admits a spacelike conformal Killing vector parallel to n^a with n_a u^a = 0, where u^a is the fluid four-velocity, then the integral curves of n^a are material curves in an irrotational fluid, while if the fluid vorticity is nonzero, then the integral curves of n^a are material curves if and only if they are vortex lines. An alternative derivation, based on the theory of spacelike congruences, of some of the results of Collins [J. Math. Phys. 25, 995 (1984)] on conformal Killing vectors parallel to the local vorticity vector in shear-free perfect fluids with zero magnetic Weyl tensor is given.

  18. Rapid, semi-automatic fracture and contact mapping for point clouds, images and geophysical data

    Science.gov (United States)

    Thiele, Samuel T.; Grose, Lachlan; Samsu, Anindita; Micklethwaite, Steven; Vollgger, Stefan A.; Cruden, Alexander R.

    2017-12-01

    The advent of large digital datasets from unmanned aerial vehicle (UAV) and satellite platforms now challenges our ability to extract information across multiple scales in a timely manner, often meaning that the full value of the data is not realised. Here we adapt a least-cost-path solver and specially tailored cost functions to rapidly interpolate structural features between manually defined control points in point cloud and raster datasets. We implement the method in the geographic information system QGIS and the point cloud and mesh processing software CloudCompare. Using these implementations, the method can be applied to a variety of three-dimensional (3-D) and two-dimensional (2-D) datasets, including high-resolution aerial imagery, digital outcrop models, digital elevation models (DEMs) and geophysical grids. We demonstrate the algorithm with four diverse applications in which we extract (1) joint and contact patterns in high-resolution orthophotographs, (2) fracture patterns in a dense 3-D point cloud, (3) earthquake surface ruptures of the Greendale Fault associated with the Mw7.1 Darfield earthquake (New Zealand) from high-resolution light detection and ranging (lidar) data, and (4) oceanic fracture zones from bathymetric data of the North Atlantic. The approach improves the consistency of the interpretation process while retaining expert guidance and achieves significant improvements (35-65 %) in digitisation time compared to traditional methods. Furthermore, it opens up new possibilities for data synthesis and can quantify the agreement between datasets and an interpretation.
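
    A minimal sketch of a least-cost-path solver of the kind adapted above is given below: Dijkstra's algorithm over a cost raster, connecting two user-defined control points. The simple cost grid stands in for the specially tailored cost functions of the paper.

```python
# Dijkstra least-cost path over a 2-D cost grid with 8-connectivity.
import heapq
import numpy as np

def least_cost_path(cost, start, goal):
    rows, cols = cost.shape
    dist = np.full((rows, cols), np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + cost[nr, nc]
                    if nd < dist[nr, nc]:
                        dist[nr, nc] = nd
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(heap, (nd, (nr, nc)))
    # Walk back from goal to start to recover the interpolated feature trace.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

cost = np.ones((50, 50))
cost[20:30, 10:40] = 0.1          # a cheap "fracture-like" corridor (toy data)
print(least_cost_path(cost, (25, 5), (25, 45))[:5])
```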

  19. Evaluating the use of different precipitation datasets in simulating a flood event

    Science.gov (United States)

    Akyurek, Z.; Ozkaya, A.

    2016-12-01

    Floods caused by convective storms in mountainous regions are sensitive to the temporal and spatial variability of rainfall. Space-time estimates of rainfall from weather radar, satellites and numerical weather prediction models can be a remedy to represent the pattern of the rainfall, with some inaccuracy. However, there is a strong need to evaluate the performance and limitations of these estimates in hydrology. This study provides a comparison of gauge, radar, satellite (Hydro-Estimator (HE)) and numerical weather prediction model (Weather Research and Forecasting (WRF)) precipitation datasets during an extreme flood event (22.11.2014) lasting 40 hours in Samsun, Turkey. Hourly rainfall data from 13 ground observation stations were used in the analyses. This event, with a peak discharge of 541 m³/s, created flooding downstream of the Terme Basin. Comparisons were performed in two parts. First, the analyses were performed in an areal and point-based manner. Second, a semi-distributed hydrological model was used to assess the accuracy of the rainfall datasets in simulating river flows for the flood event. Kalman filtering was used for the bias correction of radar rainfall data against gauge measurements. Radar, gauge, corrected radar, HE and WRF rainfall data were used as model inputs. Generally, the HE product underestimates the cumulative rainfall amounts at all stations; the radar data also underestimate in a cumulative sense but keep the consistency of the results. On the other hand, almost all stations show better mean statistics for WRF than for the HE product, but worse than for the radar dataset. Point comparisons indicated that the trend of the rainfall is captured well by the radar rainfall estimation, but the radar underestimates the maximum values. Relative to the cumulative gauge value, the radar underestimated the cumulative rainfall amount by 32%. Contrary to the other datasets, the bias of WRF is positive

  20. Realization of vector fields for quantum groups as pseudodifferential operators on quantum spaces

    International Nuclear Information System (INIS)

    Chu, Chong-Sun; Zumino, B.

    1995-01-01

    The vector fields of the quantum Lie algebra are described for the quantum groups GL_q(N), SL_q(N) and SO_q(N) as pseudodifferential operators on the linear quantum spaces covariant under the corresponding quantum group. Their expressions are simple and compact. It is pointed out that these vector fields satisfy certain characteristic polynomial identities. The real forms SU_q(N) and SO_q(N,R) are discussed in detail.

  1. Vectorized Monte Carlo

    International Nuclear Information System (INIS)

    Brown, F.B.

    1981-01-01

    Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface-crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups by a factor of about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes.
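
    The event-based idea described above can be illustrated with a toy numpy sketch: instead of following one particle at a time, a whole batch of particles advances per step with array operations. The one-group slab problem and its cross sections are invented for illustration and are far simpler than a production code.

```python
# Toy event-based (vectorized) Monte Carlo: one-group transport in a slab.
import numpy as np

rng = np.random.default_rng(42)
n, sigma_t, slab_width = 100_000, 1.0, 5.0
absorb_prob = 0.3                     # invented absorption probability

x = np.zeros(n)                       # particle positions
alive = np.ones(n, dtype=bool)        # batch mask instead of per-particle loops
leaked = 0

while alive.any():
    m = alive.sum()
    # Sample flight distances for all live particles at once (vectorized).
    x[alive] += -np.log(rng.random(m)) / sigma_t
    escaped = alive & (x > slab_width)
    leaked += escaped.sum()
    alive &= ~escaped
    # Vectorized collision analysis: absorb a fraction of the colliders.
    collided = alive.copy()
    absorbed = collided & (rng.random(n) < absorb_prob)
    alive &= ~absorbed

print(f"leakage fraction: {leaked / n:.4f}")
```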

  2. Beaconless Pointing for Deep-Space Optical Communication

    Science.gov (United States)

    Swank, Aaron J.; Aretskin-Hariton, Eliot; Le, Dzu K.; Sands, Obed S.; Wroblewski, Adam

    2016-01-01

    Free space optical communication is of interest to NASA as a complement to existing radio frequency communication methods. The potential for an increase in science data return capability over current radio-frequency communications is the primary objective. Deep space optical communication requires laser beam pointing accuracy on the order of a few microradians. The laser beam pointing approach discussed here operates without the aid of a terrestrial uplink beacon. Precision pointing is obtained from an on-board star tracker in combination with inertial rate sensors and an outgoing beam reference vector. The beaconless optical pointing system presented in this work is the current approach for the Integrated Radio and Optical Communication (iROC) project.

  3. Vector Network Coding

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L × L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector co...

  4. Vector-Quantization using Information Theoretic Concepts

    DEFF Research Database (Denmark)

    Lehn-Schiøler, Tue; Hegde, Anant; Erdogmus, Deniz

    2005-01-01

    The process of representing a large data set with a smaller number of vectors in the best possible way, also known as vector quantization, has been intensively studied in recent years. Very efficient algorithms like the Kohonen Self-Organizing Map (SOM) and the Linde-Buzo-Gray (LBG) algorithm have been devised. In this paper a physical approach to the problem is taken, and it is shown that by considering the processing elements as points moving in a potential field an algorithm equally efficient as the ones mentioned before can be derived. Unlike SOM and LBG, this algorithm has a clear physical interpretation and relies on minimization of a well-defined cost function. It is also shown how the potential field approach can be linked to information theory by use of the Parzen density estimator. In the light of information theory it becomes clear that minimizing the free energy of the system is in fact...
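
    For reference, the sketch below shows classical vector quantization in the LBG/k-means spirit mentioned above (alternating nearest-codebook assignment and re-centering); it is not the potential-field algorithm proposed in the paper.

```python
# Compact LBG / k-means style vector quantization on toy 2-D data.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(1000, 2))
k = 8
codebook = data[rng.choice(len(data), k, replace=False)].copy()

for _ in range(50):
    # Assign each sample to its nearest codebook vector.
    d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    labels = d.argmin(axis=1)
    # Move each codebook vector to the centroid of its assigned samples.
    for j in range(k):
        members = data[labels == j]
        if len(members):
            codebook[j] = members.mean(axis=0)

print("mean distortion:", d.min(axis=1).mean())
```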

  5. Stabilization at almost arbitrary points for chaotic systems

    International Nuclear Information System (INIS)

    Huang, C.-S.; Lian, K.-Y.; Su, C.-H.; Wu, J.-W.

    2008-01-01

    We consider how to design a feasible control input for chaotic systems via a suitable input channel to achieve stabilization at arbitrary points. For nonlinear systems without naturally defined input vectors, we propose a local stabilization controller which works for almost arbitrary points. Subsequently, according to the topologically transitive property of chaotic systems, the feedback control force is activated only when the trajectory passes through the neighboring region of the regulated point. Hence global stabilization is achieved while the control effort of the hybrid controller remains extremely low.

  6. A Hybrid Method for Interpolating Missing Data in Heterogeneous Spatio-Temporal Datasets

    Directory of Open Access Journals (Sweden)

    Min Deng

    2016-02-01

    Full Text Available Space-time interpolation is widely used to estimate missing or unobserved values in a dataset integrating both spatial and temporal records. Although space-time interpolation plays a key role in space-time modeling, existing methods were mainly developed for space-time processes that exhibit stationarity in space and time. It is still challenging to model heterogeneity of space-time data in the interpolation model. To overcome this limitation, in this study, a novel space-time interpolation method considering both spatial and temporal heterogeneity is developed for estimating missing data in space-time datasets. The interpolation operation is first implemented in the spatial and temporal dimensions. Heterogeneous covariance functions are constructed to obtain the best linear unbiased estimates in the spatial and temporal dimensions. Spatial and temporal correlations are then considered to combine the interpolation results in the spatial and temporal dimensions to estimate the missing data. The proposed method is tested on annual average temperature and precipitation data in China (1984–2009). Experimental results show that, for these datasets, the proposed method outperforms three state-of-the-art methods: spatio-temporal kriging, spatio-temporal inverse distance weighting, and the point estimation model of biased hospitals-based area disease estimation.
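
    As a concrete illustration of one baseline named above, the sketch below implements a simple space-time inverse distance weighting estimate. The anisotropy factor linking the temporal and spatial distances is an assumption for illustration.

```python
# Minimal space-time inverse distance weighting on toy observations.
import numpy as np

def st_idw(obs_xyt, obs_vals, query_xyt, power=2.0, time_scale=50.0):
    """obs_xyt: (n, 3) array of (x, y, t); time_scale maps time to space units."""
    d_space = np.hypot(obs_xyt[:, 0] - query_xyt[0], obs_xyt[:, 1] - query_xyt[1])
    d_time = np.abs(obs_xyt[:, 2] - query_xyt[2]) * time_scale
    d = np.sqrt(d_space**2 + d_time**2)
    if np.any(d == 0):
        return float(obs_vals[d == 0][0])   # exact hit: return the observation
    w = 1.0 / d**power
    return float((w * obs_vals).sum() / w.sum())

obs = np.array([[0, 0, 0], [10, 0, 1], [0, 10, 2], [10, 10, 3]], dtype=float)
vals = np.array([12.0, 14.0, 11.0, 15.0])
print(st_idw(obs, vals, np.array([5.0, 5.0, 1.5])))
```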

  7. Vector independent transmission of the vector-borne bluetongue virus.

    Science.gov (United States)

    van der Sluijs, Mirjam Tineke Willemijn; de Smit, Abraham J; Moormann, Rob J M

    2016-01-01

    Bluetongue is an economically important disease of ruminants. The causative agent, Bluetongue virus (BTV), is mainly transmitted by insect vectors. This review focuses on vector-free BTV transmission, and its epizootic and economic consequences. Vector-free transmission can either be vertical, from dam to fetus, or horizontal via direct contact. For several BTV serotypes, vertical (transplacental) transmission has been described, resulting in severe congenital malformations. Transplacental transmission had been mainly associated with live vaccine strains. Yet, the European BTV-8 strain demonstrated a high incidence of transplacental transmission in natural circumstances. The relevance of transplacental transmission for the epizootiology is considered limited, especially in enzootic areas. However, transplacental transmission can have a substantial economic impact due to the loss of progeny. Inactivated vaccines have been demonstrated to prevent transplacental transmission. Vector-free horizontal transmission has also been demonstrated. Since direct horizontal transmission requires close contact between animals, it is considered relevant only for within-farm spreading of BTV. The genetic determinants which enable vector-free transmission are present in virus strains circulating in the field. More research into the genetic changes which enable vector-free transmission is essential to better evaluate the risks associated with outbreaks of new BTV serotypes and to design more appropriate control measures.

  8. CompareSVM: supervised, Support Vector Machine (SVM) inference of gene regulatory networks.

    Science.gov (United States)

    Gillani, Zeeshan; Akash, Muhammad Sajid Hamid; Rahaman, M D Matiur; Chen, Ming

    2014-11-30

    Prediction of gene regulatory networks (GRN) from expression data is a challenging task. Many methods have been developed to address this challenge, ranging from supervised to unsupervised methods. The most promising methods are based on support vector machines (SVM). There is a need for comprehensive analysis of the prediction accuracy of the supervised SVM method using different kernels under different biological experimental conditions and network sizes. We developed a tool (CompareSVM) based on SVM to compare different kernel methods for the inference of GRNs. Using CompareSVM, we investigated and evaluated different SVM kernel methods in detail on simulated microarray datasets of different sizes. The results obtained from CompareSVM showed that the accuracy of the inference method depends upon the nature of the experimental condition and the size of the network. For networks with a small number of nodes, the SVM Gaussian kernel outperformed all the other inference methods on knockout, knockdown, and multifactorial datasets. For networks with a large number of nodes (~500), the choice of inference method depends upon the nature of the experimental condition. CompareSVM is available at http://bis.zju.edu.cn/CompareSVM/ .
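
    The sketch below shows the kind of kernel comparison CompareSVM automates: cross-validated accuracies of SVM classifiers with different kernels, here with scikit-learn on a bundled toy dataset rather than gene regulatory network data.

```python
# Cross-validated comparison of SVM kernels with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

for kernel in ("linear", "poly", "rbf", "sigmoid"):
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{kernel:>8}: {scores.mean():.3f} +/- {scores.std():.3f}")
```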

  9. Renormalization group fixed points of foliated gravity-matter systems

    Energy Technology Data Exchange (ETDEWEB)

    Biemans, Jorn [Institute for Mathematics, Astrophysics and Particle Physics (IMAPP),Radboud University Nijmegen,Heyendaalseweg 135, 6525 AJ Nijmegen (Netherlands); Platania, Alessia [Institute for Mathematics, Astrophysics and Particle Physics (IMAPP),Radboud University Nijmegen,Heyendaalseweg 135, 6525 AJ Nijmegen (Netherlands); Department of Physics and Astronomy, University of Catania,Via S. Sofia 63, 95123 Catania (Italy); INFN, Catania section,Via S. Sofia 64, 95123, Catania (Italy); INAF, Catania Astrophysical Observatory,Via S. Sofia 78, 95123, Catania (Italy); Saueressig, Frank [Institute for Mathematics, Astrophysics and Particle Physics (IMAPP),Radboud University Nijmegen,Heyendaalseweg 135, 6525 AJ Nijmegen (Netherlands)

    2017-05-17

    We employ the Arnowitt-Deser-Misner formalism to study the renormalization group flow of gravity minimally coupled to an arbitrary number of scalar, vector, and Dirac fields. The decomposition of the gravitational degrees of freedom into a lapse function, shift vector, and spatial metric equips spacetime with a preferred (Euclidean) “time”-direction. In this work, we provide a detailed derivation of the renormalization group flow of Newton’s constant and the cosmological constant on a flat Friedmann-Robertson-Walker background. Adding matter fields, it is shown that their contribution to the flow is the same as in the covariant formulation and can be captured by two parameters d_g, d_λ. We classify the resulting fixed point structure as a function of these parameters, finding that the existence of non-Gaussian renormalization group fixed points is rather generic. In particular the matter content of the standard model and its most common extensions gives rise to one non-Gaussian fixed point with real critical exponents suitable for Asymptotic Safety. Moreover, we find non-Gaussian fixed points for any number of scalar matter fields, making the scenario attractive for cosmological model building.

  10. Fluxnet Synthesis Dataset Collaboration Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Agarwal, Deborah A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Humphrey, Marty [Univ. of Virginia, Charlottesville, VA (United States); van Ingen, Catharine [Microsoft. San Francisco, CA (United States); Beekwilder, Norm [Univ. of Virginia, Charlottesville, VA (United States); Goode, Monte [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jackson, Keith [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Rodriguez, Matt [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Weber, Robin [Univ. of California, Berkeley, CA (United States)

    2008-02-06

    The Fluxnet synthesis dataset originally compiled for the La Thuile workshop contained approximately 600 site-years. Since the workshop, several additional site-years have been added, and the dataset now contains over 920 site-years from over 240 sites. A data refresh update is expected to increase those numbers in the next few months. The ancillary data describing the sites continue to evolve as well. There are on the order of 120 site contacts, and 60 proposals involving around 120 researchers have been approved to use the data. The size and complexity of the dataset and collaboration have led to a new approach to providing access to the data and supporting the collaboration. The support team attended the workshop and worked closely with the attendees and the Fluxnet project office to define the requirements for the support infrastructure. As a result of this effort, a new website (http://www.fluxdata.org) has been created to provide access to the Fluxnet synthesis dataset. This new web site is based on a scientific data server which enables browsing of the data on-line, data download, and version tracking. We leverage database and data analysis tools such as OLAP data cubes and web reports to enable browser and Excel pivot table access to the data.

  11. LVQ-SMOTE - Learning Vector Quantization based Synthetic Minority Over-sampling Technique for biomedical data.

    Science.gov (United States)

    Nakamura, Munehiro; Kajiwara, Yusuke; Otsuka, Atsushi; Kimura, Haruhiko

    2013-10-02

    Over-sampling methods based on the Synthetic Minority Over-sampling Technique (SMOTE) have been proposed for classification problems with imbalanced biomedical data. However, the existing over-sampling methods achieve slightly better or sometimes worse results than the simplest SMOTE. In order to improve the effectiveness of SMOTE, this paper presents a novel over-sampling method using codebooks obtained by learning vector quantization. In general, even when an existing SMOTE is applied to a biomedical dataset, its empty feature space is still so huge that most classification algorithms would not perform well on estimating borderlines between classes. To tackle this problem, our over-sampling method generates synthetic samples which occupy more feature space than the other SMOTE algorithms. Briefly, our over-sampling method enables the generation of useful synthetic samples by referring to actual samples taken from real-world datasets. Experiments on eight real-world imbalanced datasets demonstrate that our proposed over-sampling method performs better than the simplest SMOTE on four of five standard classification algorithms. Moreover, the performance of our method increases if the latest SMOTE, called MWMOTE, is used in our algorithm. Experiments on datasets for β-turn type prediction show some important patterns that have not been seen in previous analyses. The proposed over-sampling method generates useful synthetic samples for the classification of imbalanced biomedical data. Besides, the proposed over-sampling method is basically compatible with basic classification algorithms and the existing over-sampling methods.
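
    The sketch below illustrates the core SMOTE idea referenced above: synthesize minority samples by interpolating between a minority sample and one of its minority-class nearest neighbours. It is plain SMOTE, not the LVQ codebook variant proposed in the paper.

```python
# Minimal SMOTE-style synthetic minority over-sampling (brute-force kNN).
import numpy as np

def smote(minority, n_synthetic, k=5, seed=0):
    rng = np.random.default_rng(seed)
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(len(minority))
        # k nearest minority-class neighbours of sample i (excluding itself).
        d = np.linalg.norm(minority - minority[i], axis=1)
        neighbours = np.argsort(d)[1 : k + 1]
        j = rng.choice(neighbours)
        # Interpolate a new sample somewhere along the segment i -> j.
        gap = rng.random()
        synthetic.append(minority[i] + gap * (minority[j] - minority[i]))
    return np.array(synthetic)

minority = np.random.default_rng(1).normal(size=(20, 3))
print(smote(minority, n_synthetic=5).shape)   # (5, 3)
```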

  12. Western Alaska ESI: SOCECON (Socioeconomic Resource Points and Lines)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains human-use resource data for airports, mining sites, area boundaries, and scenic rivers in Western Alaska. Vector points and lines in this data...

  13. Simulation of Smart Home Activity Datasets

    Directory of Open Access Journals (Sweden)

    Jonathan Synnott

    2015-06-01

    Full Text Available A globally ageing population is resulting in an increased prevalence of chronic conditions which affect older adults. Such conditions require long-term care and management to maximize quality of life, placing an increasing strain on healthcare resources. Intelligent environments such as smart homes facilitate long-term monitoring of activities in the home through the use of sensor technology. Access to sensor datasets is necessary for the development of novel activity monitoring and recognition approaches. Access to such datasets is limited due to issues such as sensor cost, availability and deployment time. The use of simulated environments and sensors may address these issues and facilitate the generation of comprehensive datasets. This paper provides a review of existing approaches for the generation of simulated smart home activity datasets, including model-based approaches and interactive approaches which implement virtual sensors, environments and avatars. The paper also provides recommendations for future work in intelligent environment simulation.

  15. RECONSTRUCTION OF 3D VECTOR MODELS OF BUILDINGS BY COMBINATION OF ALS, TLS AND VLS DATA

    Directory of Open Access Journals (Sweden)

    H. Boulaassal

    2012-09-01

    Full Text Available Airborne Laser Scanning (ALS), Terrestrial Laser Scanning (TLS) and Vehicle-based Laser Scanning (VLS) are widely used as data acquisition methods for 3D building modelling. ALS data is often used to generate, among others, roof models. TLS data has proven its effectiveness in the geometric reconstruction of building façades. Although the operating algorithms used in the processing chain of these two kinds of data are quite similar, their combination should be investigated further. This study explores the possibility of combining ALS and TLS data for simultaneously producing 3D building models from a bird's-eye point of view and a pedestrian point of view. The geometric accuracy of roof and façade models differs due to the acquisition techniques. In order to take these differences into account, the surfaces composing roofs and façades are extracted with the same segmentation algorithm. Nevertheless, the segmentation algorithm must be adapted to the properties of the different point clouds. It is based on the RANSAC algorithm, but has been applied in a sequential way in order to extract all potential planar clusters from airborne and terrestrial datasets. Surfaces are fitted to planar clusters, allowing edge detection and reconstruction of vector polygons. Models resulting from TLS data are obviously more accurate than those generated from ALS data. Therefore, the geometry of the roofs is corrected and adapted according to the geometry of the corresponding façades. Finally, the effects of the differences between raw ALS and TLS data on the results of the modelling process are analyzed. It is shown that such a combination could be used to produce reliable 3D building models.
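
    A minimal sketch of sequential RANSAC plane extraction, as described above, is given below: repeatedly fit a plane, peel off its inliers as one planar cluster, and continue on the remaining points. The thresholds and the synthetic point cloud are illustrative only.

```python
# Sequential RANSAC plane extraction from a point cloud (pure numpy).
import numpy as np

def fit_plane(p3):
    """Plane (unit normal n, offset d with n.x + d = 0) through three points."""
    n = np.cross(p3[1] - p3[0], p3[2] - p3[0])
    norm = np.linalg.norm(n)
    if norm == 0:
        return None
    n = n / norm
    return n, -n.dot(p3[0])

def sequential_ransac(points, dist_tol=0.05, min_inliers=100, iters=300, seed=0):
    rng = np.random.default_rng(seed)
    remaining = points.copy()
    planes = []
    while len(remaining) >= min_inliers:
        best_mask, best_count = None, 0
        for _ in range(iters):
            sample = remaining[rng.choice(len(remaining), 3, replace=False)]
            plane = fit_plane(sample)
            if plane is None:
                continue
            n, d = plane
            mask = np.abs(remaining @ n + d) < dist_tol
            if mask.sum() > best_count:
                best_mask, best_count = mask, mask.sum()
        if best_count < min_inliers:
            break
        planes.append(remaining[best_mask])   # one planar cluster (roof/facade)
        remaining = remaining[~best_mask]     # peel off inliers, continue
    return planes

pts = np.random.default_rng(2).uniform(0, 10, size=(2000, 3))
pts[:800, 2] = 0.0                            # inject a synthetic ground plane
print([len(c) for c in sequential_ransac(pts)])
```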

  16. Medium Term Analysis of Technical and Allocative Efficiency in Romanian Farms Using FADN Dataset

    Directory of Open Access Journals (Sweden)

    Nicola GALLUZZO

    2017-05-01

    Full Text Available The Farm Accountancy Data Network (FADN) is an annual survey proposed by the European Union in order to estimate the impact of the Common Agricultural Policy on farmers. Many scholars have investigated technical, economic and allocative efficiency in Romanian farms through the Farm Accountancy Data Network dataset using a non-parametric approach such as Data Envelopment Analysis (DEA), pointing out poor levels of technical efficiency, lower than the average European value. The purpose of this study was to assess, using the DEA approach, technical, economic and allocative efficiency in Romanian farms belonging to the FADN dataset over a six-year period from 2007 to 2012. Findings pointed out an increase in technical efficiency compared to previous studies, as a consequence of a significant turnover toward a younger, more highly skilled and qualified generation of farmers. Poor land capital, in terms of utilized agricultural area, connected to an increase in new technologies, was the downside of Romanian farms, and this implied that the National Rural Development Plan should have taken into account financial subsidies in order to develop agricultural areas scattered across the Romanian rural space.

  17. VectorBase

    Data.gov (United States)

    U.S. Department of Health & Human Services — VectorBase is a Bioinformatics Resource Center for invertebrate vectors. It is one of four Bioinformatics Resource Centers funded by NIAID to provide web-based...

  18. Custodial vector model

    DEFF Research Database (Denmark)

    Becciolini, Diego; Franzosi, Diogo Buarque; Foadi, Roshan

    2015-01-01

    We analyze the Large Hadron Collider (LHC) phenomenology of heavy vector resonances with a $SU(2)_L\\times SU(2)_R$ spectral global symmetry. This symmetry partially protects the electroweak S-parameter from large contributions of the vector resonances. The resulting custodial vector model spectrum...

  19. Solar Integration National Dataset Toolkit | Grid Modernization | NREL

    Science.gov (United States)

    NREL is working on a Solar Integration National Dataset (SIND) Toolkit to enable researchers to perform U.S. regional solar generation integration studies. It will provide modeled, coherent subhourly solar power data.

  20. Experimental demonstration of E × B plasma divertor

    International Nuclear Information System (INIS)

    Strait, E.J.; Kerst, D.W.; Sprott, J.C.

    1977-01-01

    The E × B drift due to an applied radial electric field in a tokamak with poloidal divertor can speed the flow of plasma out of the scrape-off region, and provide a means of externally controlling the flow rate and thus the width of the density fall-off. An experiment in the Wisconsin levitated toroidal octupole, using E × B drifts alone, demonstrates divertor-like behavior, including 70% reduction of plasma density near the wall and 40% reduction of plasma flux to the wall, with no adverse effects on confinement of the main plasma.

  1. [Research on developing the spectral dataset for Dunhuang typical colors based on color constancy].

    Science.gov (United States)

    Liu, Qiang; Wan, Xiao-Xia; Liu, Zhen; Li, Chan; Liang, Jin-Xing

    2013-11-01

    The present paper aims at developing a method to reasonably set up a typical spectral color dataset for different kinds of Chinese cultural heritage in the color rendering process. The world-famous wall paintings in the Dunhuang Mogao Grottoes, dating from more than 1700 years ago, were taken as the typical case in this research. In order to maintain color constancy during the color rendering workflow for Dunhuang cultural relics, a chromatic adaptation based method for developing the spectral dataset of typical colors for those wall paintings was proposed from the viewpoint of human visual perception. With the help and guidance of researchers at the art-research and protection-research institutions of the Dunhuang Academy, and according to the existing research achievements of Dunhuang Research over the past years, 48 typical known Dunhuang pigments were chosen, 240 representative color samples were made, and their reflective spectra ranging from 360 to 750 nm were acquired by a spectrometer. In order to find the typical colors among the above-mentioned color samples, the original dataset was divided into several subgroups by clustering analysis. The grouping number, together with the most typical samples for each subgroup which made up the first version of the typical color dataset, was determined by the Wilcoxon signed-rank test according to the color inconstancy index comprehensively calculated under 6 typical illuminating conditions. Considering the completeness of the gamut of the Dunhuang wall paintings, 8 complementary colors were determined, and finally the typical spectral color dataset was built up, containing 100 representative spectral colors. The analytical results show that the median color inconstancy index of the built dataset at the 99% confidence level by the Wilcoxon signed-rank test was 3.28 and that the 100 colors are distributed uniformly in the whole gamut, which ensures that this dataset can provide a reasonable reference for choosing the color with highest

  2. PROVIDING GEOGRAPHIC DATASETS AS LINKED DATA IN SDI

    Directory of Open Access Journals (Sweden)

    E. Hietanen

    2016-06-01

    Full Text Available In this study, a prototype service to provide data from a Web Feature Service (WFS) as linked data is implemented. First, persistent and unique Uniform Resource Identifiers (URIs) are created for all spatial objects in the dataset. The objects are available from those URIs in the Resource Description Framework (RDF) data format. Next, a Web Ontology Language (OWL) ontology is created to describe the dataset information content using the Open Geospatial Consortium’s (OGC) GeoSPARQL vocabulary. The existing data model is modified in order to take into account the linked data principles. The implemented service produces an HTTP response dynamically. The data for the response is first fetched from the existing WFS. Then the Geographic Markup Language (GML) output of the WFS is transformed on-the-fly to the RDF format. Content negotiation is used to serve the data in different RDF serialization formats. This solution facilitates the use of a dataset in different applications without replicating the whole dataset. In addition, individual spatial objects in the dataset can be referred to with URIs. Furthermore, the needed information content of the objects can be easily extracted from the RDF serializations available from those URIs. A solution for linking data objects to the dataset URI is also introduced by using the Vocabulary of Interlinked Datasets (VoID). The dataset is divided into subsets and each subset is given its persistent and unique URI. This enables the whole dataset to be explored with a web browser and all individual objects to be indexed by search engines.
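
    The content-negotiation pattern described above can be exercised with a few lines of Python: request the same object URI with different Accept headers and observe the returned serialization. The URI below is a placeholder, not the prototype's actual service.

```python
# HTTP content negotiation for RDF serializations via the Accept header.
import requests

uri = "https://example.org/spatial-object/12345"   # hypothetical object URI

for mime in ("text/turtle", "application/rdf+xml", "application/ld+json"):
    response = requests.get(uri, headers={"Accept": mime}, timeout=30)
    print(mime, "->", response.status_code, response.headers.get("Content-Type"))
```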

  3. Relevance Vector Machine for Prediction of Soil Properties | Samui ...

    African Journals Online (AJOL)

    One of the first and most important steps in geotechnical engineering is site characterization. The ultimate goal of site characterization is to predict the in-situ soil properties at any half-space point of a site based on a limited number of tests and data. In the present study, the relevance vector machine (RVM) has been used to develop ...

  4. Wind Integration National Dataset Toolkit | Grid Modernization | NREL

    Science.gov (United States)

    The Wind Integration National Dataset (WIND) Toolkit is an update and expansion of the Eastern Wind Integration Data Set and Western Wind Integration Data Set. It supports the next generation of wind integration studies.

  5. Efficient morse decompositions of vector fields.

    Science.gov (United States)

    Chen, Guoning; Mischaikow, Konstantin; Laramee, Robert S; Zhang, Eugene

    2008-01-01

    Existing topology-based vector field analysis techniques rely on the ability to extract individual trajectories such as fixed points, periodic orbits, and separatrices, which are sensitive to noise and errors introduced by simulation and interpolation. This can make such vector field analysis unsuitable for rigorous interpretations. We advocate the use of Morse decompositions, which are robust with respect to perturbations, to encode the topological structures of a vector field in the form of a directed graph, called a Morse connection graph (MCG). While an MCG exists for every vector field, it need not be unique. Previous techniques for computing MCGs, while fast, are overly conservative and usually result in MCGs that are too coarse to be useful for the applications. To address this issue, we present a new technique for performing Morse decomposition based on the concept of tau-maps, which typically provides finer MCGs than existing techniques. Furthermore, the choice of tau provides a natural tradeoff between the fineness of the MCGs and the computational costs. We provide efficient implementations of Morse decomposition based on tau-maps, which include the use of forward and backward mapping techniques and an adaptive approach to constructing better approximations of the images of the triangles in the meshes used for simulation. Furthermore, we propose the use of spatial tau-maps in addition to the original temporal tau-maps. These techniques provide additional trade-offs between the quality of the MCGs and the speed of computation. We demonstrate the utility of our technique with various examples in the plane and on surfaces including engine simulation data sets.

  6. Gradient Evolution-based Support Vector Machine Algorithm for Classification

    Science.gov (United States)

    Zulvia, Ferani E.; Kuo, R. J.

    2018-03-01

    This paper proposes a classification algorithm based on support vector machine (SVM) and gradient evolution (GE) algorithms. The SVM algorithm has been widely used in classification. However, its result is significantly influenced by its parameters. Therefore, this paper proposes an improvement of the SVM algorithm which can find the best SVM parameters automatically. The proposed algorithm employs a GE algorithm to automatically determine the SVM parameters. The GE algorithm takes the role of a global optimizer in finding the best parameters, which are then used by the SVM algorithm. The proposed GE-SVM algorithm is verified using benchmark datasets and compared with other metaheuristic-based SVM algorithms. The experimental results show that the proposed GE-SVM algorithm obtains better results than the other algorithms tested in this paper.
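
    As a hedged illustration of metaheuristic SVM parameter tuning in the GE-SVM spirit, the sketch below runs a simple evolutionary loop over (C, gamma) scored by cross-validated accuracy. It uses generic mutation and selection, not the specific gradient evolution operators of the paper.

```python
# Evolutionary search over SVM hyperparameters (generic, not gradient evolution).
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)
rng = np.random.default_rng(0)

def fitness(log_c, log_gamma):
    clf = make_pipeline(StandardScaler(), SVC(C=10**log_c, gamma=10**log_gamma))
    return cross_val_score(clf, X, y, cv=5).mean()

# Population of candidate (log10 C, log10 gamma) pairs.
pop = rng.uniform([-2, -5], [3, 1], size=(12, 2))
for _ in range(10):
    scores = np.array([fitness(*ind) for ind in pop])
    elite = pop[np.argsort(-scores)[:4]]                  # keep the best 4
    children = elite[rng.integers(4, size=8)] + rng.normal(0, 0.3, (8, 2))
    pop = np.vstack([elite, children])                    # next generation

best = pop[np.argmax([fitness(*ind) for ind in pop])]
print(f"best C=10^{best[0]:.2f}, gamma=10^{best[1]:.2f}")
```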

  7. Wall shear stress fixed points in blood flow

    Science.gov (United States)

    Arzani, Amirhossein; Shadden, Shawn

    2017-11-01

    Patient-specific computational fluid dynamics produces large datasets, and wall shear stress (WSS) is one of the most important parameters due to its close connection with the biological processes at the wall. While some studies have investigated WSS vectorial features, the WSS fixed points have not received much attention. In this talk, we will discuss the importance of WSS fixed points from three viewpoints. First, we will review how WSS fixed points relate to the flow physics away from the wall. Second, we will discuss how certain types of WSS fixed points lead to high biochemical surface concentration in cardiovascular mass transport problems. Finally, we will introduce a new measure to track the exposure of endothelial cells to WSS fixed points.

  8. Address Points, Address points were attributed according to NENA standards and field verified between the dates of June 2008 thru August 2008. The address points were then matched to the Verizon Telco database with a 99% hit rate in October of 2008., Published in 2006, 1:1200 (1in=100ft) scale, Eastern Shore Regional GIS Cooperative.

    Data.gov (United States)

    NSGIC Regional | GIS Inventory — Address Points dataset current as of 2006. Address points were attributed according to NENA standards and field verified between the dates of June 2008 thru August...

  9. A SUPPORT VECTOR MACHINE APPROACH FOR DEVELOPING TELEMEDICINE SOLUTIONS: MEDICAL DIAGNOSIS

    Directory of Open Access Journals (Sweden)

    Mihaela GHEORGHE

    2015-06-01

    Full Text Available Support vector machines are an important machine learning tool for classification and prediction tasks. They offer a solution for a wide range of problems in which traditional optimization algorithms and methods cannot be applied directly due to various constraints, including memory restrictions, hidden relationships between variables, and the very high volume of computation that must be handled. One such problem is medical diagnosis, a subset of the medical field. In this paper, the SVM learning algorithm is tested on a diabetes dataset, and the results obtained for training with different kernel functions are presented and analyzed in order to determine a good approach from a telemedicine perspective.
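
    The kernel comparison the paper describes can be reproduced in outline with scikit-learn; the loading step is left abstract, since the paper's exact diabetes dataset is not specified, and X and y stand for any feature matrix and binary labels:

        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def compare_kernels(X, y):
            """Cross-validated accuracy of SVM classifiers with different kernels."""
            scores = {}
            for kernel in ("linear", "poly", "rbf", "sigmoid"):
                clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
                scores[kernel] = cross_val_score(clf, X, y, cv=5).mean()
            return scores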

  10. Survival Prediction and Feature Selection in Patients with Breast Cancer Using Support Vector Regression

    Directory of Open Access Journals (Sweden)

    Shahrbanoo Goli

    2016-01-01

    Full Text Available The Support Vector Regression (SVR) model has been broadly used for response prediction. However, few researchers have used SVR for survival analysis. In this study, a new SVR model is proposed, and SVRs with different kernels and the traditional Cox model are trained. The models are compared based on different performance measures. We also select the best subset of features using three feature selection methods: a combination of SVR and statistical tests, univariate feature selection based on the concordance index, and recursive feature elimination. The evaluations are performed on available medical datasets and on a Breast Cancer (BC) dataset consisting of 573 patients who visited the Oncology Clinic of Hamadan province in Iran. Results show that, for the BC dataset, survival time can be predicted more accurately by linear SVR than by nonlinear SVR. Based on the three feature selection methods, metastasis status, progesterone receptor status, and human epidermal growth factor receptor 2 status are the features most associated with survival. Also, according to the obtained results, the performance of linear and nonlinear kernels is comparable. The proposed SVR model performs similarly to or slightly better than the other models. Also, SVR performs similarly to or better than Cox when all features are included in the model.
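
    Of the three feature selection methods, recursive feature elimination is the most mechanical and can be sketched with scikit-learn's RFE wrapped around a linear-kernel SVR; this is a generic sketch, with X as the feature matrix and y as the survival times:

        from sklearn.feature_selection import RFE
        from sklearn.svm import SVR

        def select_features(X, y, n_features=3):
            """Rank features by recursively eliminating the least important
            coefficients of a linear SVR fitted to survival times y."""
            selector = RFE(SVR(kernel="linear"), n_features_to_select=n_features)
            selector.fit(X, y)
            return selector.support_, selector.ranking_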

  11. Fučík spectra for vector equations

    Directory of Open Access Journals (Sweden)

    Christian Fabry

    2000-01-01

    Full Text Available Let $L:\hbox{dom}\,L\subset L^2(\Omega;R^N)\rightarrow L^2(\Omega;R^N)$ be a linear operator, $\Omega$ being open and bounded in $R^M$. The aim of this paper is to study the Fu\v c\'\i k spectrum for vector problems of the form $Lu=\alpha Au^+ -\beta Au^-$, where $A$ is an $N\times N$ matrix, $\alpha, \beta$ are real numbers, and $u^+$ is a vector defined componentwise by $(u^+)_i=\max\{u_i,0\}$, $u^-$ being defined similarly. With $\lambda^*$ an eigenvalue of the problem $Lu=\lambda Au$, we describe (locally) curves in the Fučík spectrum passing through the point $(\lambda^*,\lambda^*)$, distinguishing different cases illustrated by examples, for which Fučík curves have been computed numerically.

  12. Dataset of aqueous humor cytokine profile in HIV patients with Cytomegalovirus (CMV) retinitis

    Directory of Open Access Journals (Sweden)

    Jayant Venkatramani Iyer

    2016-09-01

    Full Text Available The data show the aqueous humor cytokine profiling results acquired in a small cohort of 17 HIV patients clinically diagnosed with Cytomegalovirus (CMV) retinitis, obtained on the FlexMAP 3D (Luminex®) platform using the Milliplex Human Cytokine® kit. Aqueous humor samples were collected from these patients at different time points (pre-treatment and at 4-weekly intervals) through the 12-week course of intravitreal ganciclovir treatment, and 41 cytokine levels were analyzed at each time point. CMV DNA viral load was assessed in 8 patients at different time points throughout the course of ganciclovir treatment. The data described herein are related to the research article entitled “Aqueous humor immune factors and cytomegalovirus (CMV) levels in CMV retinitis through treatment - The CRIGSS study” (Iyer et al., 2016) [1]. Cytokine levels were analyzed against the different time points, which indicate the response to the given treatment, and against the CMV viral load. Keywords: Cytokines, CMV retinitis, Dataset, HIV, Luminex bead assay

  13. Application of vector CSAMT for the imaging of an active fault; CSAMT ho ni yoru danso no imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kobayashi, T; Fukuoka, K [Oyo Corp., Tokyo (Japan)

    1997-05-27

    With the objective of identifying the three-dimensional resistivity structure of the deep Mizunawa fault in Fukuoka Prefecture, a measurement was carried out using the CSAMT method. The measurement was conducted along seven traverse lines, each with observation points installed at intervals of about 500 m. Of the 68 observation points in total, 33 points used vector measurement and the remaining points scalar measurement. For the observation points with vector measurement, polarization ellipses of the electric field were plotted to determine the prevailing current direction. For the analyses, a one-dimensional analysis was performed using an inversion with a smoothing constraint, and a two-dimensional analysis was conducted using the finite element method based on the result of the former analysis. The vector measurement revealed that the structure in the vicinity of the fault is estimated to be complex, and the two-dimensional analysis showed that the Mizunawa fault is located on a relatively clear resistivity boundary. In addition, it was made clear that the high resistivity band may be divided into two regions of about 200 ohm-m and about 1000 ohm-m. 2 refs., 7 figs.

  14. MOBBED: a computational data infrastructure for handling large collections of event-rich time series datasets in MATLAB.

    Science.gov (United States)

    Cockfield, Jeremy; Su, Kyungmin; Robbins, Kay A

    2013-01-01

    Experiments to monitor human brain activity during active behavior record a variety of modalities (e.g., EEG, eye tracking, motion capture, respiration monitoring) and capture a complex environmental context leading to large, event-rich time series datasets. The considerable variability of responses within and among subjects in more realistic behavioral scenarios requires experiments to assess many more subjects over longer periods of time. This explosion of data requires better computational infrastructure to more systematically explore and process these collections. MOBBED is a lightweight, easy-to-use, extensible toolkit that allows users to incorporate a computational database into their normal MATLAB workflow. Although capable of storing quite general types of annotated data, MOBBED is particularly oriented to multichannel time series such as EEG that have event streams overlaid with sensor data. MOBBED directly supports access to individual events, data frames, and time-stamped feature vectors, allowing users to ask questions such as what types of events or features co-occur under various experimental conditions. A database provides several advantages not available to users who process one dataset at a time from the local file system. In addition to archiving primary data in a central place to save space and avoid inconsistencies, such a database allows users to manage, search, and retrieve events across multiple datasets without reading the entire dataset. The database also provides infrastructure for handling more complex event patterns that include environmental and contextual conditions. The database can also be used as a cache for expensive intermediate results that are reused in such activities as cross-validation of machine learning algorithms. MOBBED is implemented over PostgreSQL, a widely used open source database, and is freely available under the GNU general public license at http://visual.cs.utsa.edu/mobbed. Source and issue reports for MOBBED

  15. Integration of vectors by homologous recombination in the plant pathogen Glomerella cingulata.

    Science.gov (United States)

    Rikkerink, E H; Solon, S L; Crowhurst, R N; Templeton, M D

    1994-03-01

    A homologous transformation system has been developed for the plant-pathogenic fungus Glomerella cingulata (Colletotrichum gloeosporioides). A transformation vector containing the G. cingulata gpdA promoter fused to the hygromycin phosphotransferase gene was constructed. Southern analyses indicated that this vector integrated at single sites in most transformants. A novel method of PCR amplification across the recombination junction point indicated that the integration event occurred by homologous recombination in more than 95% of the transformants. Deletion studies demonstrated that 505 bp (the minimum length of homologous promoter DNA analysed that was still capable of promoter function) was sufficient to target integration events. Homologous integration of the vector resulted in duplication of the gpdA promoter region. When transformants were grown without selective pressure, a high incidence of vector excision by recombination between the duplicated regions was evident. The significance of these recombination characteristics is discussed with reference to the feasibility of performing gene disruption experiments.

  16. Preclinical Diagnosis of Magnetic Resonance (MR) Brain Images via Discrete Wavelet Packet Transform with Tsallis Entropy and Generalized Eigenvalue Proximal Support Vector Machine (GEPSVM)

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2015-03-01

    Full Text Available Background: Developing an accurate computer-aided diagnosis (CAD) system for MR brain images is essential for medical interpretation and analysis. In this study, we propose a novel automatic CAD system to distinguish abnormal brains from normal brains in MRI scans. Methods: The proposed method simplifies the task to a binary classification problem. We used the discrete wavelet packet transform (DWPT) to extract wavelet packet coefficients from MR brain images. Next, Shannon entropy (SE) and Tsallis entropy (TE) were harnessed to obtain entropy features from the DWPT coefficients. Finally, the generalized eigenvalue proximal support vector machine (GEPSVM), and GEPSVM with a radial basis function (RBF) kernel, were employed as classifiers. We tested the four proposed diagnosis methods (DWPT + SE + GEPSVM, DWPT + TE + GEPSVM, DWPT + SE + GEPSVM + RBF, and DWPT + TE + GEPSVM + RBF) on three benchmark datasets: Dataset-66, Dataset-160, and Dataset-255. Results: Ten repetitions of k-fold stratified cross-validation showed that the proposed DWPT + TE + GEPSVM + RBF method outperformed not only the other three proposed classifiers but also existing state-of-the-art methods in terms of classification accuracy. The DWPT + TE + GEPSVM + RBF method achieved accuracies of 100%, 100%, and 99.53% on Dataset-66, Dataset-160, and Dataset-255, respectively. For Dataset-255, offline learning took 8.4430 s and online prediction merely 0.1059 s. Conclusions: We have demonstrated the effectiveness of the proposed method, which achieved nearly 100% accuracy over the three benchmark datasets.
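
    The feature extraction pipeline, wavelet packet decomposition followed by an entropy summary of each sub-band, can be sketched with PyWavelets; a standard RBF SVM (e.g. sklearn.svm.SVC) would stand in for GEPSVM, which has no common library implementation, and the Tsallis parameter q below is an assumed value:

        import numpy as np
        import pywt

        def tsallis_entropy(x, q=1.5):
            """Tsallis entropy of the normalized energy distribution of coefficients x."""
            p = x.ravel() ** 2
            p = p / p.sum()
            return (1.0 - np.sum(p ** q)) / (q - 1.0)

        def dwpt_te_features(image, level=2, wavelet="db4"):
            """One Tsallis-entropy feature per wavelet packet sub-band of a 2D image."""
            wp = pywt.WaveletPacket2D(data=image, wavelet=wavelet, maxlevel=level)
            return np.array([tsallis_entropy(node.data) for node in wp.get_level(level)])

    The resulting feature vectors would then be fed to the classifier of choice for training and cross-validation.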

  17. STUDY OF THE INTEGRATION OF LIDAR AND PHOTOGRAMMETRIC DATASETS BY IN SITU CAMERA CALIBRATION AND INTEGRATED SENSOR ORIENTATION

    Directory of Open Access Journals (Sweden)

    E. Mitishita

    2017-05-01

    Full Text Available Photogrammetric and Lidar datasets should be in the same mapping or geodetic frame to be used simultaneously in an engineering project. Direct sensor orientation is nowadays a common procedure in simultaneous photogrammetric and Lidar surveys. Although direct sensor orientation technologies provide a high degree of process automation thanks to GNSS/INS technologies, the accuracies of the results obtained from photogrammetric and Lidar surveys depend on the quality of a group of parameters that accurately models the conditions of the system at the moment the job is performed. This paper presents a study performed to verify the importance of in situ camera calibration and Integrated Sensor Orientation without control points for increasing the accuracy of photogrammetric and Lidar dataset integration. The horizontal and vertical accuracies of the integration of photogrammetric and Lidar datasets by the photogrammetric procedure improved significantly when the Integrated Sensor Orientation (ISO) approach was performed using Interior Orientation Parameter (IOP) values estimated from the in situ camera calibration. The horizontal and vertical accuracies, estimated by the Root Mean Square Error (RMSE) of the 3D discrepancies from the Lidar check points, improved by around 37% and 198%, respectively.

  18. Point- and curve-based geometric conflation

    KAUST Repository

    Ló pez-Vá zquez, C.; Manso Callejo, M.A.

    2013-01-01

    Geometric conflation is the process undertaken to modify the coordinates of features in dataset A in order to match corresponding ones in dataset B. The overwhelming majority of the literature considers the use of points as the features that define the transformation. In this article we present a procedure that also considers one-dimensional curves, which are commonly available as Global Navigation Satellite System (GNSS) tracks, routes, coastlines, and so on, in order to estimate the displacements to be applied to each object in A. The procedure involves three steps: the partial matching of corresponding curves, the computation of an analytical expression, and the addition of a correction term in order to satisfy basic cartographic rules. A numerical example is presented. © 2013 Copyright Taylor and Francis Group, LLC.
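
    The classical point-based variant the article builds on amounts to estimating a transformation from matched control-point pairs. A minimal least-squares affine fit (a common choice, not necessarily the authors' model) looks like this:

        import numpy as np

        def fit_affine(src, dst):
            """Least-squares 2D affine transform mapping points src -> dst.

            src, dst -- (n, 2) arrays of matched control points, n >= 3
            Returns the (2, 3) matrix M so that dst ~ [x, y, 1] @ M.T
            """
            ones = np.ones((len(src), 1))
            A = np.hstack([src, ones])                 # homogeneous source coords
            M, *_ = np.linalg.lstsq(A, dst, rcond=None)
            return M.T

        def apply_affine(M, pts):
            return np.hstack([pts, np.ones((len(pts), 1))]) @ M.T

    After fitting on the matched features, apply_affine moves every object in dataset A toward dataset B's frame; the article's curve-based step refines this with displacements derived from partially matched curves.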

  19. Measurement of $K_{NN}$, $K_{LL}$ in $\vec{p}d \rightarrow \vec{n}X$ and $\vec{p}\,^{9}\mathrm{Be} \rightarrow \vec{n}X$ at 800 MeV

    International Nuclear Information System (INIS)

    Riley, P.J.; Hollas, C.L.; Newsom, C.R.

    1980-01-01

    The spin transfer parameters, $K_{NN}$ and $K_{LL}$, have been measured in $\vec{p}d \rightarrow \vec{n}X$ and $\vec{p}\,^{9}\mathrm{Be} \rightarrow \vec{n}X$ at $0^{\circ}$ and 800 MeV. The rather large values of $K_{LL}$ demonstrate that this transfer mechanism will provide a useful source of polarized neutrons at LAMPF energies.

  20. Raster images vectorization system

    OpenAIRE

    Genytė, Jurgita

    2006-01-01

    The problem of raster image vectorization was analyzed and researched in this work. Existing vectorization systems are quite expensive, their results are inaccurate, and the manual vectorization of a large number of drafts is impossible. Our goal was therefore to design and develop a new raster image vectorization system using our proposed automatic vectorization algorithm and a way to record the results in a new universal vector file format. The work consists of these main parts: analysis...

  1. Vectorization of phase space Monte Carlo code in FACOM vector processor VP-200

    International Nuclear Information System (INIS)

    Miura, Kenichi

    1986-01-01

    This paper describes vectorization techniques for Monte Carlo codes on Fujitsu's Vector Processor System. The phase space Monte Carlo code FOWL is selected as a benchmark, and scalar and vector performances are compared. The vectorized kernel Monte Carlo routine, which contains heavily nested IF tests, runs up to 7.9 times faster in vector mode than in scalar mode. The overall performance improvement of the vectorized FOWL code over the original scalar code reaches 3.3. The results of this study strongly indicate that supercomputers can be a powerful tool for Monte Carlo simulations in high energy physics. (Auth.)

  2. A New Outlier Detection Method for Multidimensional Datasets

    KAUST Repository

    Abdel Messih, Mario A.

    2012-07-01

    This study develops a novel hybrid method for outlier detection (HMOD) that combines the ideas of distance-based and density-based methods. The proposed method has two main advantages over most other outlier detection methods. The first is that it works well on both dense and sparse datasets. The second is that, unlike most other outlier detection methods, which require careful parameter setting and prior knowledge of the data, HMOD is not very sensitive to small changes in parameter values within certain ranges; the only parameter that must be set is the number of nearest neighbors. In addition, we made a fully parallelized implementation of HMOD, which makes it very efficient in applications. Moreover, we propose a new way of using outlier detection for redundancy reduction in datasets, in which users can specify a confidence level that evaluates how accurately the less redundant dataset represents the original dataset. HMOD is evaluated on synthetic datasets (dense and mixed dense-and-sparse) and on a bioinformatics problem: redundancy reduction of a dataset of position weight matrices (PWMs) of transcription factor binding sites. In addition, in the process of assessing the performance of our redundancy reduction method, we developed a simple tool that can be used to evaluate the confidence level with which a reduced dataset represents the original dataset. The evaluation of the results shows that our method can be used in a wide range of problems.
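
    A hybrid distance/density outlier score in the spirit described, driven by a single k-nearest-neighbors parameter, can be sketched as follows; this is a generic illustration, not the authors' HMOD:

        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        def hybrid_outlier_scores(X, k=10):
            """Combine a distance criterion (mean distance to the k nearest
            neighbors) with a density criterion (that distance relative to the
            neighbors' own mean kNN distances, as in LOF)."""
            nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
            dist, idx = nn.kneighbors(X)
            knn_dist = dist[:, 1:].mean(axis=1)           # skip self-distance
            neighbor_dist = knn_dist[idx[:, 1:]].mean(axis=1)
            return knn_dist * (knn_dist / neighbor_dist)  # large => more outlying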

  3. Comment on ‘An educational path for the magnetic vector potential and its physical implications’

    International Nuclear Information System (INIS)

    Heras, José A

    2014-01-01

    In their recent paper, Barbieri et al (2013 Eur. J. Phys. 34 1209) proposed an educational path for the magnetic vector potential. In this comment I point out that this educational path involves several inconsistencies and is therefore unattractive from a pedagogical point of view. (letters and comments)

  4. Axial-vector gluons and the fine structure of heavy quark--antiquark systems

    International Nuclear Information System (INIS)

    Feinberg, G.; Lynn, B.; Sucher, J.

    1979-01-01

    We point out that two models of the origin of spin-dependent forces in heavy quark systems make very different predictions about the relative size of these forces in $c\bar{c}$ and $b\bar{b}$. The model in which these forces are relativistic corrections to vector or scalar gluon exchange predicts smaller spin-dependent effects in $b\bar{b}$ than in $c\bar{c}$, while a model in which these forces are due to exchange of axial-vector gluons predicts a similar size for spin-dependent splittings in the two systems.

  5. A New Curve Tracing Algorithm Based on Local Feature in the Vectorization of Paper Seismograms

    Directory of Open Access Journals (Sweden)

    Maofa Wang

    2014-02-01

    Full Text Available Historical paper seismograms are a very important source of information for earthquake monitoring and prediction, and their vectorization is an important problem to be solved. Automatic tracing of waveform curves is a key technology for the vectorization of paper seismograms: it transforms an original scanned image into digital waveform data. Accurately tracing out all the key points of each curve in a seismogram is the foundation of the vectorization of paper seismograms. In this paper, we present a new curve tracing algorithm based on local features and apply it to the automatic extraction of earthquake waveforms from paper seismograms.
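
    A minimal local-feature tracer for a single waveform can be sketched as a greedy column-by-column walk over a binarized scan: from the current point, inspect a small window in the next column and step to the nearest ink pixel. This is an illustrative toy, far simpler than the paper's algorithm:

        import numpy as np

        def trace_curve(binary, row0, window=5):
            """Greedily trace one curve across a binarized scan.

            binary -- 2D bool array, True where the scan has ink
            row0   -- row index of the curve in the first column
            Returns one traced row index per column (np.nan where the trace is lost).
            """
            rows = np.full(binary.shape[1], np.nan)
            row = row0
            for col in range(binary.shape[1]):
                lo = max(row - window, 0)
                hi = min(row + window + 1, binary.shape[0])
                candidates = np.flatnonzero(binary[lo:hi, col])
                if candidates.size == 0:
                    continue                   # gap: keep the previous row estimate
                row = lo + candidates[np.argmin(np.abs(lo + candidates - row))]
                rows[col] = row
            return rows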

  6. "Massless" vector field in de Sitter universe

    International Nuclear Information System (INIS)

    Garidi, T.; Gazeau, J.-P.; Rouhani, S.; Takook, M. V.

    2008-01-01

    We proceed to the quantization of the massless vector field in the de Sitter (dS) space. This work is the natural continuation of a previous article devoted to the quantization of the dS massive vector field [J. P. Gazeau and M. V. Takook, J. Math. Phys. 41, 5920 (2000); T. Garidi et al., ibid. 43, 6379 (2002).] The term "massless" is used by reference to conformal invariance and propagation on the dS lightcone whereas "massive" refers to those dS fields which unambiguously contract to Minkowskian massive fields at zero curvature. Due to the combined occurrences of gauge invariance and indefinite metric, the covariant quantization of the massless vector field requires an indecomposable representation of the de Sitter group. We work with the gauge fixing corresponding to the simplest Gupta-Bleuler structure. The field operator is defined with the help of coordinate-independent de Sitter waves (the modes). The latter are simple to manipulate and most adapted to group theoretical approaches. The physical states characterized by the divergencelessness condition are, for instance, easy to identify. The whole construction is based on analyticity requirements in the complexified pseudo-Riemannian manifold for the modes and the two-point function

  7. "Massless" vector field in de Sitter universe

    Science.gov (United States)

    Garidi, T.; Gazeau, J.-P.; Rouhani, S.; Takook, M. V.

    2008-03-01

    We proceed to the quantization of the massless vector field in the de Sitter (dS) space. This work is the natural continuation of a previous article devoted to the quantization of the dS massive vector field [J. P. Gazeau and M. V. Takook, J. Math. Phys. 41, 5920 (2000); T. Garidi et al., ibid. 43, 6379 (2002).] The term "massless" is used by reference to conformal invariance and propagation on the dS lightcone whereas "massive" refers to those dS fields which unambiguously contract to Minkowskian massive fields at zero curvature. Due to the combined occurrences of gauge invariance and indefinite metric, the covariant quantization of the massless vector field requires an indecomposable representation of the de Sitter group. We work with the gauge fixing corresponding to the simplest Gupta-Bleuler structure. The field operator is defined with the help of coordinate-independent de Sitter waves (the modes). The latter are simple to manipulate and most adapted to group theoretical approaches. The physical states characterized by the divergencelessness condition are, for instance, easy to identify. The whole construction is based on analyticity requirements in the complexified pseudo-Riemannian manifold for the modes and the two-point function.

  8. Topographic and Hydrographic GIS Datasets for the Afghanistan Geological Survey and U.S. Geological Survey 2014 Mineral Areas of Interest

    Science.gov (United States)

    DeWitt, Jessica D.; Chirico, Peter G.; Malpeli, Katherine C.

    2015-11-18

    Mineral extraction and associated industries play an important role in the Afghan economy, particularly in the “transitional era” of declining foreign aid and withdrawal of foreign troops post 2014. In addition to providing a substantial source of government revenue, other potential benefits of natural resource development include boosted exports, employment opportunities, and strengthened industrialization (Joya, 2012). Continued exploration and investment in these industries has resulted in large economic improvements since 2007, when this series of studies was initiated. At that time, the “Preliminary Non-Fuel Mineral Resource Assessment of Afghanistan” was completed by members of the U.S. Geological Survey and Afghanistan Geological Survey (Peters and others, 2007). The assessment published a series of country-wide datasets, including a digital elevation model (DEM), elevation contours, hydrography, transportation routes, geophysics, and cultural datasets (Peters and others, 2007). It also delineated 20 mineralized areas for further study using a geologic-based methodology. A second data product, “Summaries of Important Areas for Mineral Investment and Production Opportunities of Nonfuel Minerals in Afghanistan,” was released by Peters and others in 2011. This work highlighted geologic, geohydrologic, and hyperspectral studies that were carried out in specific Areas of Interest (AOIs) to assess the location and characteristics of mineral resources. Also included in the 2011 publication is a collection of appendixes and inventories of Geographic Information System (GIS) datasets for each of the 24 identified AOIs. A third data product was released in 2013 (Casey and Chirico, 2013), publishing datasets for five different AOIs, two subareas, and one AOI extension. Each dataset contains vector shapefiles of the AOI boundary, streams, roads, and contours at 25-, 50-, and 100-meter (m) intervals, as well as raster files of the AOI’s DEM and hillshade.

  9. Topological events on the lines of circular polarization in nonparaxial vector optical fields.

    Science.gov (United States)

    Freund, Isaac

    2017-02-01

    In nonparaxial vector optical fields, the following topological events are shown to occur in apparent violation of charge conservation: as one translates the observation plane along a line of circular polarization (a C line), the points on the line (C points) are seen to change not only the signs of their topological charges, but also their handedness, and, at turning points on the line, paired C points with the same topological charge and opposite handedness are seen to nucleate. These counter-intuitive events cannot occur in paraxial fields.

  10. NP-PAH Interaction Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Dataset presents concentrations of organic pollutants, such as polyaromatic hydrocarbon compounds, in water samples. Water samples of known volume and concentration...

  11. Simplest bifurcation diagrams for monotone families of vector fields on a torus

    Science.gov (United States)

    Baesens, C.; MacKay, R. S.

    2018-06-01

    In part 1, we prove that the bifurcation diagram for a monotone two-parameter family of vector fields on a torus has to be at least as complicated as the conjectured simplest one proposed in Baesens et al (1991 Physica D 49 387–475). To achieve this, we define ‘simplest’ by sequentially minimising the numbers of equilibria, Bogdanov–Takens points, closed curves of centre and of neutral saddle, intersections of curves of centre and neutral saddle, Reeb components, other invariant annuli, arcs of rotational homoclinic bifurcation of horizontal homotopy type, necklace points, contractible periodic orbits, points of neutral horizontal homoclinic bifurcation and half-plane fan points. We obtain two types of simplest case, including that initially proposed. In part 2, we analyse the bifurcation diagram for an explicit monotone family of vector fields on a torus and prove that it has at most two equilibria, precisely four Bogdanov–Takens points, no closed curves of centre nor closed curves of neutral saddle, at most two Reeb components, precisely four arcs of rotational homoclinic connection of ‘horizontal’ homotopy type, eight horizontal saddle-node loop points, two necklace points, four points of neutral horizontal homoclinic connection, and two half-plane fan points, and there is no simultaneous existence of centre and neutral saddle, nor contractible homoclinic connection to a neutral saddle. Furthermore, we prove that all saddle-nodes, Bogdanov–Takens points, non-neutral and neutral horizontal homoclinic bifurcations are non-degenerate and the Hopf condition is satisfied for all centres. We also find it has four points of degenerate Hopf bifurcation. It thus provides an example of a family satisfying all the assumptions of part 1 except the one of at most one contractible periodic orbit.

  12. Geodetic Control Points, Benchmarks; Vertical elevation bench marks for monumented geodetic survey control points for which mean sea level elevations have been determined., Published in 1995, 1:24000 (1in=2000ft) scale, Rhode Island and Providence Plantations.

    Data.gov (United States)

    NSGIC State | GIS Inventory — Geodetic Control Points dataset current as of 1995. Benchmarks; Vertical elevation bench marks for monumented geodetic survey control points for which mean sea level...

  13. A dataset on tail risk of commodities markets.

    Science.gov (United States)

    Powell, Robert J; Vo, Duc H; Pham, Thach N; Singh, Abhay K

    2017-12-01

    This article contains the datasets related to the research article "The long and short of commodity tails and their relationship to Asian equity markets" (Powell et al., 2017) [1]. The datasets contain the daily prices (and price movements) of 24 different commodities decomposed from the S&P GSCI index and the daily prices (and price movements) of three share market indices (World, Asia, and South East Asia) for the period 2004-2015. The dataset is then divided into annual periods, showing the worst 5% of price movements for each year. The datasets are convenient for examining the tail risk of different commodities, as measured by Conditional Value at Risk (CVaR), as well as its changes over time. The datasets can also be used to investigate the association between commodity markets and share markets.

  14. Modelling aggregation on the large scale and regularity on the small scale in spatial point pattern datasets

    DEFF Research Database (Denmark)

    Lavancier, Frédéric; Møller, Jesper

    We consider a dependent thinning of a regular point process with the aim of obtaining aggregation on the large scale and regularity on the small scale in the resulting target point process of retained points. Various parametric models for the underlying processes are suggested and the properties...

  15. Detection of correct and incorrect measurements in real-time continuous glucose monitoring systems by applying a postprocessing support vector machine.

    Science.gov (United States)

    Leal, Yenny; Gonzalez-Abril, Luis; Lorencio, Carol; Bondia, Jorge; Vehi, Josep

    2013-07-01

    Support vector machines (SVMs) are an attractive option for detecting correct and incorrect measurements in real-time continuous glucose monitoring systems (RTCGMSs), because their learning mechanism can introduce a postprocessing strategy for imbalanced datasets. The proposed SVM considers the geometric mean to obtain a more balanced performance between sensitivity and specificity. To test this approach, 23 critically ill patients receiving insulin therapy were monitored over 72 h using an RTCGMS, and a dataset of 537 samples, classified according to International Standards Organization (ISO) criteria (372 correct and 165 incorrect measurements), was obtained. The results obtained were promising for patients with septic shock or with sepsis, for whom the proposed system can be considered reliable. However, this approach cannot be considered suitable for patients without sepsis.
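
    The paper's balancing idea, judging the classifier by the geometric mean of sensitivity and specificity rather than raw accuracy, can be sketched with a class-weighted SVM; the weighting scheme below is an assumption, not the authors' exact postprocessing:

        import numpy as np
        from sklearn.metrics import confusion_matrix
        from sklearn.svm import SVC

        def geometric_mean_score(y_true, y_pred):
            """sqrt(sensitivity * specificity) for a binary classification."""
            tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
            sensitivity = tp / (tp + fn)
            specificity = tn / (tn + fp)
            return np.sqrt(sensitivity * specificity)

        # Weight classes inversely to their frequency so the minority
        # (incorrect-measurement) class is not swamped by the majority class.
        clf = SVC(kernel="rbf", class_weight="balanced")

    After fitting on a split like the 372/165 dataset described above, one would report geometric_mean_score on held-out data rather than plain accuracy.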

  16. Online Least Squares One-Class Support Vector Machines-Based Abnormal Visual Event Detection

    Directory of Open Access Journals (Sweden)

    Tian Wang

    2013-12-01

    Full Text Available The abnormal event detection problem is an important subject in real-time video surveillance. In this paper, we propose a novel online one-class classification algorithm, the online least squares one-class support vector machine (online LS-OC-SVM), together with its sparsified version (sparse online LS-OC-SVM). LS-OC-SVM extracts a hyperplane as an optimal description of the training objects in a regularized least squares sense. The online LS-OC-SVM learns a training set with a limited number of samples to provide a basic normal model, then updates the model using the remaining data. In the sparse online scheme, the model complexity is controlled by a coherence criterion. The online LS-OC-SVM is adopted to handle the abnormal event detection problem. Each frame of the video is characterized by a covariance matrix descriptor encoding the motion information, and is then classified as a normal or an abnormal frame. Experiments are conducted on a two-dimensional synthetic distribution dataset and a benchmark video surveillance dataset to demonstrate the promising results of the proposed online LS-OC-SVM method.
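
    The underlying one-class idea, learning a boundary around normal frames and flagging frames outside it, can be illustrated offline with scikit-learn's OneClassSVM; the online least-squares variant of the paper has no stock implementation, so this is an analogue:

        import numpy as np
        from sklearn.svm import OneClassSVM

        def train_normal_model(normal_features, nu=0.05):
            """Fit a one-class SVM on feature vectors of normal frames only."""
            return OneClassSVM(kernel="rbf", nu=nu, gamma="scale").fit(normal_features)

        def is_abnormal(model, frame_features):
            """True where a frame falls outside the learned normal region."""
            return model.predict(np.atleast_2d(frame_features)) == -1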

  17. Image Classification of Ribbed Smoked Sheet using Learning Vector Quantization

    Science.gov (United States)

    Rahmat, R. F.; Pulungan, A. F.; Faza, S.; Budiarto, R.

    2017-01-01

    Natural rubber is an important export commodity in Indonesia and a major contributor to national economic development. One type of rubber used as an export material is Ribbed Smoked Sheet (RSS). The quantity of RSS exports depends on the quality of the RSS; RSS rubber quality is specified in SNI 06-001-1987 and in the International Standards of Quality and Packing for Natural Rubber Grades (the Green Book). The determination of RSS quality is also known as the sorting process. In rubber factories, the sorting process is still done manually, by inspecting the level of air bubbles on the surface of the rubber sheet with the naked eye, so the results are subjective and unreliable. Therefore, a method is required to classify RSS rubber automatically and precisely. We propose image processing techniques for pre-processing, a zoning method for feature extraction, and the Learning Vector Quantization (LVQ) method for classifying RSS rubber into two grades, namely RSS1 and RSS3. We used 120 RSS images as the training dataset and 60 RSS images as the testing dataset. The results show that our proposed method achieves 89% accuracy, with the best performance reached at the fifteenth training epoch.
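
    LVQ itself is compact enough to sketch in full: each class keeps one or more prototype vectors, and each training sample pulls the nearest prototype toward itself when the labels agree and pushes it away when they disagree. A minimal LVQ1 sketch (learning rate and initialization are assumed choices, not the paper's):

        import numpy as np

        def train_lvq1(X, y, prototypes, labels, lr=0.1, epochs=15):
            """LVQ1 update rule; prototypes is a float array, labels its classes."""
            P = prototypes.copy()
            for _ in range(epochs):
                for x, target in zip(X, y):
                    i = np.argmin(np.linalg.norm(P - x, axis=1))  # nearest prototype
                    if labels[i] == target:
                        P[i] += lr * (x - P[i])   # attract on correct class
                    else:
                        P[i] -= lr * (x - P[i])   # repel on wrong class
            return P

        def predict_lvq(P, labels, X):
            return [labels[np.argmin(np.linalg.norm(P - x, axis=1))] for x in X]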

  18. Proteomics dataset

    DEFF Research Database (Denmark)

    Bennike, Tue Bjerg; Carlsen, Thomas Gelsing; Ellingsen, Torkell

    2017-01-01

    patients (Morgan et al., 2012; Abraham and Medzhitov, 2011; Bennike, 2014) [8–10]. Therefore, we characterized the proteome of colon mucosa biopsies from 10 inflammatory bowel disease ulcerative colitis (UC) patients, 11 gastrointestinal healthy rheumatoid arthritis (RA) patients, and 10 controls. We...... been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifiers PXD001608 for ulcerative colitis and control samples, and PXD003082 for rheumatoid arthritis samples....

  19. Vector velocimeter

    DEFF Research Database (Denmark)

    2012-01-01

    The present invention relates to a compact, reliable and low-cost vector velocimeter for example for determining velocities of particles suspended in a gas or fluid flow, or for determining velocity, displacement, rotation, or vibration of a solid surface, the vector velocimeter comprising a laser...

  20. Cloning vector

    Science.gov (United States)

    Guilfoyle, Richard A.; Smith, Lloyd M.

    1994-01-01

    A vector comprising a filamentous phage sequence containing a first copy of filamentous phage gene X and other sequences necessary for the phage to propagate is disclosed. The vector also contains a second copy of filamentous phage gene X downstream from a promoter capable of promoting transcription in a bacterial host. In a preferred form of the present invention, the filamentous phage is M13 and the vector additionally includes a restriction endonuclease site located in such a manner as to substantially inactivate the second gene X when a DNA sequence is inserted into the restriction site.

  1. Cloning vector

    Science.gov (United States)

    Guilfoyle, R.A.; Smith, L.M.

    1994-12-27

    A vector comprising a filamentous phage sequence containing a first copy of filamentous phage gene X and other sequences necessary for the phage to propagate is disclosed. The vector also contains a second copy of filamentous phage gene X downstream from a promoter capable of promoting transcription in a bacterial host. In a preferred form of the present invention, the filamentous phage is M13 and the vector additionally includes a restriction endonuclease site located in such a manner as to substantially inactivate the second gene X when a DNA sequence is inserted into the restriction site. 2 figures.

  2. SAR image dataset of military ground targets with multiple poses for ATR

    Science.gov (United States)

    Belloni, Carole; Balleri, Alessio; Aouf, Nabil; Merlet, Thomas; Le Caillec, Jean-Marc

    2017-10-01

    Automatic Target Recognition (ATR) is the task of automatically detecting and classifying targets. Recognition using Synthetic Aperture Radar (SAR) images is interesting because SAR images can be acquired at night and under any weather conditions, whereas optical sensors operating in the visible band do not have this capability. Existing SAR ATR algorithms have mostly been evaluated using the MSTAR dataset.1 The problem with MSTAR is that some of the proposed ATR methods have shown good classification performance even when targets were hidden,2 suggesting the presence of a bias in the dataset. Evaluations of SAR ATR techniques are currently challenging due to the lack of publicly available data in the SAR domain. In this paper, we present a high resolution SAR dataset consisting of images of a set of ground military target models taken at various aspect angles. The dataset can be used for a fair evaluation and comparison of SAR ATR algorithms. We applied the Inverse Synthetic Aperture Radar (ISAR) technique to echoes from targets rotating on a turntable and illuminated with a stepped frequency waveform. The targets in the database consist of four variants of two 1.7m-long models of T-64 and T-72 tanks. The gun, the turret position and the depression angle are varied to form 26 different sequences of images. The emitted signal spanned the frequency range from 13 GHz to 18 GHz to achieve a bandwidth of 5 GHz sampled with 4001 frequency points. The resolution obtained with respect to the size of the model targets is comparable to typical values obtained using SAR airborne systems. Single polarized images (Horizontal-Horizontal) are generated using the backprojection algorithm.3 A total of 1480 images are produced using a 20° integration angle. The images in the dataset are organized in a suggested training and testing set to facilitate a standard evaluation of SAR ATR algorithms.

  3. Comparison of Shallow Survey 2012 Multibeam Datasets

    Science.gov (United States)

    Ramirez, T. M.

    2012-12-01

    The purpose of the Shallow Survey common dataset is a comparison of the different technologies utilized for data acquisition in the shallow survey marine environment. The common dataset consists of a series of surveys conducted over a common area of seabed using a variety of systems. It provides equipment manufacturers the opportunity to showcase their latest systems while giving hydrographic researchers and scientists a chance to test their latest algorithms on the dataset so that rigorous comparisons can be made. Five companies collected data for the Common Dataset in the Wellington Harbor area in New Zealand between May 2010 and May 2011, including Kongsberg, Reson, R2Sonic, GeoAcoustics, and Applied Acoustics. The Wellington harbor and surrounding coastal area was selected since it has a number of well-defined features, including the HMNZS South Seas and HMNZS Wellington wrecks, an armored seawall constructed of Tetrapods and Akmons, aquifers, wharves and marinas. The seabed inside the harbor basin is largely fine-grained sediment, with gravel and reefs around the coast. The area outside the harbor on the southern coast is an active environment, with moving sand and exposed reefs. A marine reserve is also in this area. For consistency between datasets, the coastal research vessel R/V Ikatere and crew were used for all surveys conducted for the common dataset. Using Triton's Perspective processing software, multibeam datasets collected for the Shallow Survey were processed for detailed analysis. Datasets from each sonar manufacturer were processed using the CUBE algorithm developed by the Center for Coastal and Ocean Mapping/Joint Hydrographic Center (CCOM/JHC). Each dataset was gridded at 0.5 and 1.0 meter resolutions for cross comparison and compliance with International Hydrographic Organization (IHO) requirements. Detailed comparisons were made of equipment specifications (transmit frequency, number of beams, beam width), data density, total uncertainty, and

  4. Lsiviewer 2.0 - a Client-Oriented Online Visualization Tool for Geospatial Vector Data

    Science.gov (United States)

    Manikanta, K.; Rajan, K. S.

    2017-09-01

    Geospatial data visualization has predominantly been delivered through applications that are installed and run in a desktop environment. Over the last decade, with the advent of web technologies and their adoption by the geospatial community, the server-client model, with data handling on the server and rendering and visualization on the client, has been the most prevalent approach in Web-GIS. While client devices have become functionally more powerful over recent years, this model has largely ignored them and is still in a server-dominant computing paradigm. In this paper, an attempt has been made to develop and demonstrate LSIViewer - a simple, easy-to-use and robust online geospatial data visualisation system for the user's own data that harnesses the client's capabilities for data rendering and user-interactive styling, with a reduced load on the server. The developed system can support multiple geospatial vector formats and can be integrated with other web-based systems like WMS, WFS, etc. The technology stack used to build this system is Node.js on the server side and HTML5 Canvas and JavaScript on the client side. Various tests run on a range of vector datasets, up to 35 MB, showed that the time taken to render the vector data using LSIViewer is comparable to a desktop GIS application, QGIS, over an identical system.

  5. Cook Inlet and Kenai Peninsula, Alaska ESI: VOLCANOS (Volcano Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains the locations of volcanos in Cook Inlet and Kenai Peninsula, Alaska. Vector points in the data set represent the location of the volcanos....

  6. The national assessment of shoreline change: a GIS compilation of vector cliff edges and associated cliff erosion data for the California coast

    Science.gov (United States)

    Hapke, Cheryl; Reid, David; Borrelli, Mark

    2007-01-01

    The U.S. Geological Survey has generated a comprehensive data clearinghouse of digital vector cliff edges and associated rates of cliff retreat along the open-ocean California coast. These data, which are presented herein, were compiled as part of the U.S. Geological Survey's National Assessment of Shoreline Change Project. Cliff erosion is a chronic problem along many coastlines of the United States. As coastal populations continue to grow and community infrastructures are threatened by erosion, there is increased demand for accurate information including rates and trends of coastal cliff retreat. There is also a critical need for these data to be consistent from one region to another. One objective of this work is to develop a standard, repeatable methodology for mapping and analyzing cliff edge retreat so that periodic, systematic, and internally consistent updates of cliff edge position and associated rates of erosion can be made at a national scale. This data compilation for open-ocean cliff edges of the California coast is a separate yet related study to that of Hapke and others (2006), which documents shoreline change along sandy shorelines of the California coast and is itself one in a series that includes the Gulf of Mexico and the Southeast Atlantic coast (Morton and others, 2004; Morton and Miller, 2005). Future reports and data compilations will include coverage of the Northeast U.S., the Great Lakes, Hawaii and Alaska. Cliff edge change is determined by comparing the positions of one historical cliff edge digitized from maps with a modern cliff edge derived from topographic LIDAR (light detection and ranging) surveys. Historical cliff edges for the California coast represent the 1920s-1930s time period; the most recent cliff edge was delineated using data collected between 1998 and 2002. End-point rate calculations were used to evaluate rates of erosion between the two cliff edges. Please refer to our full report on cliff edge erosion along the California

  7. Dataset of statements on policy integration of selected intergovernmental organizations

    Directory of Open Access Journals (Sweden)

    Jale Tosun

    2018-04-01

    Full Text Available This article describes data for 78 intergovernmental organizations (IGOs) working on topics related to energy governance, environmental protection, and the economy. The number of IGOs covered also includes organizations active in other sectors. The point of departure for data construction was the Correlates of War dataset, from which we selected this sample of IGOs. We updated and expanded the empirical information on the IGOs selected by manual coding. Most importantly, we collected the primary law texts of the individual IGOs in order to code whether they commit themselves to environmental policy integration (EPI), climate policy integration (CPI), and/or energy policy integration (EnPI).

  8. A Genealogy of Convex Solids Via Local and Global Bifurcations of Gradient Vector Fields

    Science.gov (United States)

    Domokos, Gábor; Holmes, Philip; Lángi, Zsolt

    2016-12-01

    Three-dimensional convex bodies can be classified in terms of the number and stability types of critical points on which they can balance at rest on a horizontal plane. For typical bodies, these are non-degenerate maxima, minima, and saddle points, the numbers of which provide a primary classification. Secondary and tertiary classifications use graphs to describe orbits connecting these critical points in the gradient vector field associated with each body. In previous work, it was shown that these classifications are complete in that no class is empty. Here, we construct 1- and 2-parameter families of convex bodies connecting members of adjacent primary and secondary classes and show that transitions between them can be realized by codimension 1 saddle-node and saddle-saddle (heteroclinic) bifurcations in the gradient vector fields. Our results indicate that all combinatorially possible transitions can be realized in physical shape evolution processes, e.g., by abrasion of sedimentary particles.

  9. National Hydrography Dataset (NHD)

    Data.gov (United States)

    Kansas Data Access and Support Center — The National Hydrography Dataset (NHD) is a feature-based database that interconnects and uniquely identifies the stream segments or reaches that comprise the...

  10. Vector 33: A REDUCE program for vector algebra and calculus in orthogonal curvilinear coordinates

    Science.gov (United States)

    Harper, David

    1989-06-01

    This paper describes a package which enables REDUCE 3.3 to perform algebra and calculus operations upon vectors. Basic algebraic operations between vectors and between scalars and vectors are provided, including the scalar (dot) product and vector (cross) product. The vector differential operators curl, divergence, gradient and Laplacian are also defined, and are valid in any orthogonal curvilinear coordinate system. The package is written in RLISP to allow algebra and calculus to be performed using notation identical to that used for operations on scalars. Scalars and vectors can be mixed quite freely in the same expression. The package will be of interest to mathematicians, engineers and scientists who need to perform vector calculations in orthogonal curvilinear coordinates.
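
    REDUCE 3.3 itself is rarely at hand today, but the same style of symbolic vector calculus can be tried in Python with SymPy, which likewise supports grad, div, and curl in curvilinear coordinate systems (shown here for spherical coordinates; this is an analogue of the workflow, not the VECTOR33 package):

        from sympy import sin
        from sympy.vector import CoordSys3D, curl, divergence, gradient

        # A spherical coordinate system with base scalars r, theta, phi
        S = CoordSys3D("S", transformation="spherical",
                       variable_names=("r", "theta", "phi"))
        r, theta = S.r, S.theta

        f = r**2 * sin(theta)           # a scalar field
        v = r * S.i + sin(theta) * S.j  # a vector field

        print(gradient(f))              # gradient in spherical coordinates
        print(divergence(v))
        print(curl(v))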

  11. SU(3) breaking in hyperon transition vector form factors

    International Nuclear Information System (INIS)

    Shanahan, P.E.; Thomas, A.W.; Young, R.D.; Zanotti, J.M.; Rakow, P.E.L.

    2015-08-01

    We present a calculation of the SU(3)-breaking corrections to the hyperon transition vector form factors to $O(p^4)$ in heavy baryon chiral perturbation theory with finite-range regularisation. Both octet and decuplet degrees of freedom are included. We formulate a chiral expansion at the kinematic point $Q^2=-(M_{B_1}-M_{B_2})^2$, which can be conveniently accessed in lattice QCD. The two unknown low-energy constants at this point are constrained by lattice QCD simulation results for the $\Sigma^-\to n$ and $\Xi^0\to\Sigma^+$ transition form factors. Hence we determine lattice-informed values of $f_1$ at the physical point. This work constitutes progress towards the precise determination of $|V_{us}|$ from hyperon semileptonic decays.

  12. The Harvard organic photovoltaic dataset.

    Science.gov (United States)

    Lopez, Steven A; Pyzer-Knapp, Edward O; Simm, Gregor N; Lutzow, Trevor; Li, Kewei; Seress, Laszlo R; Hachmann, Johannes; Aspuru-Guzik, Alán

    2016-09-27

    The Harvard Organic Photovoltaic Dataset (HOPV15) presented in this work is a collation of experimental photovoltaic data from the literature, and corresponding quantum-chemical calculations performed over a range of conformers, each with quantum chemical results using a variety of density functionals and basis sets. It is anticipated that this dataset will be of use in both relating electronic structure calculations to experimental observations through the generation of calibration schemes, as well as for the creation of new semi-empirical methods and the benchmarking of current and future model chemistries for organic electronic applications.

  13. Probing deformed orbitals with $\vec{A}(\vec{e},e'N)B$ reactions

    International Nuclear Information System (INIS)

    Garrido, E.; Caballero, J.A.; Moya de Guerra, E.; Sarriguren, P.; Udias, J.M.

    1995-01-01

    We present results for response functions and asymmetries in the nuclear reactions $^{37}\vec{\mathrm{Ar}}(\vec{e},e'n)^{36}\mathrm{Ar}$ and $^{37}\vec{\mathrm{K}}(\vec{e},e'p)^{36}\mathrm{Ar}$ at quasifree kinematics. We compare PWIA results obtained using deformed HF wave functions with PWIA and DWIA results obtained assuming a spherical mean field. We show that the complex structure of the deformed orbitals can be probed by coincidence measurements with polarized beam and targets. ((orig.))

  14. Tables and figure datasets

    Data.gov (United States)

    U.S. Environmental Protection Agency — Soil and air concentrations of asbestos in Sumas study. This dataset is associated with the following publication: Wroble, J., T. Frederick, A. Frame, and D....

  15. De novo transcriptome sequencing and sequence analysis of the malaria vector Anopheles sinensis (Diptera: Culicidae)

    Science.gov (United States)

    2014-01-01

    Background: Anopheles sinensis is the major malaria vector in China and Southeast Asia, and vector control is one of the most effective measures to prevent malaria transmission. However, little transcriptome information is available for this malaria vector. To better understand the biological basis of malaria transmission and to develop novel and effective means of vector control, a transcriptome dataset is needed for functional genomics analysis by large-scale RNA sequencing (RNA-seq). Methods: To provide a more comprehensive and complete transcriptome of An. sinensis, RNA from eggs, larvae, pupae, and male and female adults was pooled for cDNA preparation, sequenced using the Illumina paired-end sequencing technology, and assembled into unigenes. These unigenes were then analyzed for genome mapping, functional annotation, homology, codon usage bias, and simple sequence repeats (SSRs). Results: Approximately 51.6 million clean reads were obtained, trimmed, and assembled into 38,504 unigenes with an average length of 571 bp, an N50 of 711 bp, and an average GC content of 51.26%. Among them, 98.4% of unigenes could be mapped onto the reference genome, and 69% could be annotated with known biological functions. Homology analysis identified An. sinensis unigenes that show homology with, or are putative 1:1 orthologues of, genes in the genomes of other dipteran species. Codon usage bias was analyzed and 1,904 SSRs were detected, which will provide effective molecular markers for the population genetics of this species. Conclusions: Our data and analysis provide the most comprehensive transcriptomic resource and characterization currently available for An. sinensis, and will facilitate genetic and genomic studies and further vector control of An. sinensis. PMID:25000941
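
    SSR (microsatellite) detection of the kind reported here is commonly done by scanning assembled sequences for short tandem repeats. A minimal regex-based scanner for 2-6 bp motifs might look like this (the repeat thresholds are assumed values, not those of the study):

        import re

        def find_ssrs(seq, min_unit=2, max_unit=6, min_repeats=5):
            """Yield (start, motif, repeat_count) for simple sequence repeats."""
            seq = seq.upper()
            for unit in range(min_unit, max_unit + 1):
                # A motif of `unit` bases immediately repeated min_repeats times or more.
                pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (unit, min_repeats - 1))
                for m in pattern.finditer(seq):
                    yield m.start(), m.group(1), len(m.group(0)) // unit

        print(list(find_ssrs("ATATATATATATGGCCGTAGTAGTAGTAGTAG")))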

  16. Predicting membrane protein types using various decision tree classifiers based on various modes of general PseAAC for imbalanced datasets.

    Science.gov (United States)

    Sankari, E Siva; Manimegalai, D

    2017-12-21

    Predicting membrane protein types is an important and challenging research area in bioinformatics and proteomics. Traditionally, biophysical methods are used to classify membrane protein types, but with the large number of uncharacterized protein sequences in databases, such methods are very time consuming, expensive, and susceptible to errors. Hence, it is highly desirable to develop a robust, reliable, and efficient method to predict membrane protein types. Imbalanced and large datasets are often handled well by decision tree classifiers. Since the datasets used here are imbalanced, the performance of various decision tree classifiers, such as Decision Tree (DT), Classification And Regression Tree (CART), C4.5, Random tree, and REP (Reduced Error Pruning) tree, and of ensemble methods, such as Adaboost, RUS (Random Under Sampling) boost, Rotation forest, and Random forest, is analysed. Among the various decision tree classifiers, Random forest performs well, reaching a good accuracy of 96.35% in less time. Another finding is that the RUSboost classifier is able to classify one or two samples in classes with very few samples, while the other classifiers, such as DT, Adaboost, Rotation forest, and Random forest, are not sensitive to the classes with fewer samples. The performance of the decision tree classifiers is also compared with SVM (Support Vector Machine) and Naive Bayes classifiers. Copyright © 2017 Elsevier Ltd. All rights reserved.
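
    The kind of comparison described, tree ensembles versus SVM and Naive Bayes on an imbalanced multi-class problem, is straightforward to set up with scikit-learn; balanced accuracy is used below because raw accuracy hides minority-class failures (an evaluation choice of this sketch, not necessarily the paper's):

        from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import GaussianNB
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier

        CLASSIFIERS = {
            "DT": DecisionTreeClassifier(),
            "Adaboost": AdaBoostClassifier(),
            "Random forest": RandomForestClassifier(class_weight="balanced"),
            "SVM": SVC(),
            "Naive Bayes": GaussianNB(),
        }

        def compare(X, y):
            """Mean balanced accuracy per classifier, via 5-fold cross-validation."""
            return {name: cross_val_score(clf, X, y, cv=5,
                                          scoring="balanced_accuracy").mean()
                    for name, clf in CLASSIFIERS.items()}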

  17. Peridomestic Aedes malayensis and Aedes albopictus are capable vectors of arboviruses in cities.

    Science.gov (United States)

    Mendenhall, Ian H; Manuel, Menchie; Moorthy, Mahesh; Lee, Theodore T M; Low, Dolyce H W; Missé, Dorothée; Gubler, Duane J; Ellis, Brett R; Ooi, Eng Eong; Pompon, Julien

    2017-06-01

    Dengue and chikungunya are global re-emerging mosquito-borne diseases. In Singapore, sustained vector control coupled with household improvements reduced domestic mosquito populations for the past 45 years, particularly the primary vector Aedes aegypti. However, while disease incidence was low for the first 30 years following vector control implementation, outbreaks have re-emerged in the past 15 years. Epidemiological observations point to the importance of peridomestic infection in areas not targeted by control programs. We investigated the role of vectors in peridomestic areas. We carried out entomological surveys to identify the Aedes species present in vegetated sites in highly populated areas and determine whether mosquitoes were present in open-air areas frequented by people. We compared vector competence of Aedes albopictus and Aedes malayensis with Ae. aegypti after oral infection with sympatric dengue serotype 2 and chikungunya viruses. Mosquito saliva was tested for the presence of infectious virus particles as a surrogate for transmission following oral infection. We identified Aedes albopictus and Aedes malayensis throughout Singapore and quantified their presence in forested and open grassy areas. Both Ae. albopictus and Ae. malayensis can occupy sylvatic niches and were highly susceptible to both arboviruses. A majority of the saliva of infected Ae. malayensis contained infectious particles for both viruses. Our study reveals the prevalence of competent vectors in peridomestic areas, including Ae. malayensis, for which we established the vector status. Epidemics can be driven by infection foci, which are epidemiologically enhanced in the context of low herd immunity, selective pressure on arbovirus transmission, and the presence of infectious asymptomatic persons, all conditions present in Singapore. Learning from Singapore's vector control success that reduced domestic vector populations, but has not sustainably reduced arboviral incidence

  18. Wronskian type solutions for the vector k-constrained KP hierarchy

    International Nuclear Information System (INIS)

    Zhang Youjin.

    1995-07-01

    Motivated by a relation of the 1-constrained Kadomtsev-Petviashvili (KP) hierarchy with the 2-component KP hierarchy, the tau functions of the vector k-constrained KP hierarchy are constructed by using an analogue of the Baker-Akhiezer (m + 1)-point function. These tau functions are expressed in terms of Wronskian type determinants. (author). 20 refs

  19. Dynamics and Biocontrol: The Indirect Effects of a Predator Population on a Host-Vector Disease Model

    Directory of Open Access Journals (Sweden)

    Fengyan Zhou

    2014-01-01

    Full Text Available A model of the interactions among a host population, an insect-vector population, which transmits virus from hosts to hosts, and a vector predator population is proposed based on virus-host, host-vector, and prey (vector)-enemy theories. The model is investigated to explore the indirect effect of natural enemies on host-virus dynamics by reducing the vector densities, and yields the basic reproduction numbers R01 (without predators) and R02 (with predators), which provide threshold conditions for determining the uniform persistence and extinction of the disease in a host population. When predators are absent from the model, the disease is persistent if R01>1; in such a case, introducing predators of the vector will control the insect-transmitted disease if R02<1. From the standpoint of biological control, these results show that an additional predator population of the vector may suppress the spread of vector-borne diseases. In addition, in the presence of predators there exist limit cycles, either with the disease persisting or disease-free. Finally, numerical simulations are conducted to support the analytical results.

  20. Developing Verification Systems for Building Information Models of Heritage Buildings with Heterogeneous Datasets

    Science.gov (United States)

    Chow, L.; Fai, S.

    2017-08-01

    The digitization and abstraction of existing buildings into building information models requires the translation of heterogeneous datasets that may include CAD, technical reports, historic texts, archival drawings, terrestrial laser scanning, and photogrammetry into model elements. In this paper, we discuss a project undertaken by the Carleton Immersive Media Studio (CIMS) that explored the synthesis of heterogeneous datasets for the development of a building information model (BIM) for one of Canada's most significant heritage assets - the Centre Block of the Parliament Hill National Historic Site. The scope of the project included the development of an as-found model of the century-old, six-story building in anticipation of specific model uses for an extensive rehabilitation program. The as-found Centre Block model was developed in Revit using primarily point cloud data from terrestrial laser scanning. The data was captured by CIMS in partnership with Heritage Conservation Services (HCS), Public Services and Procurement Canada (PSPC), using a Leica C10 and P40 (exterior and large interior spaces) and a Faro Focus (small to mid-sized interior spaces). Secondary sources such as archival drawings, photographs, and technical reports were referenced in cases where point cloud data was not available. As a result of working with heterogeneous data sets, a verification system was introduced in order to communicate to model users/viewers the source of information for each building element within the model.

  1. CURB-BASED STREET FLOOR EXTRACTION FROM MOBILE TERRESTRIAL LIDAR POINT CLOUD

    Directory of Open Access Journals (Sweden)

    S. Ibrahim

    2012-07-01

    Full Text Available Mobile terrestrial laser scanners (MTLS) produce huge 3D point clouds describing the terrestrial surface, from which objects like different street furniture can be extracted. Extraction and modelling of the street curb and the street floor from MTLS point clouds is important for many applications such as right-of-way asset inventory, road maintenance and city planning. The proposed pipeline for the curb and street floor extraction consists of a sequence of five steps: organizing the 3D point cloud and nearest neighbour search; 3D density-based segmentation to segment the ground; morphological analysis to refine the ground segment; derivative of Gaussian filtering to detect the curb; and solving the travelling salesman problem to form a closed polygon of the curb together with a point-in-polygon test to extract the street floor. Two mobile laser scanning datasets of different scenes are tested with the proposed pipeline. The results of the extracted curb and street floor are evaluated against ground truth data. The obtained detection rates for the extracted street floor for the datasets are 95% and 96.53%. This study presents a novel approach to the detection and extraction of the road curb and the street floor from unorganized 3D point clouds captured by MTLS. It utilizes only the 3D coordinates of the point cloud.
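
    The pipeline's final step is a standard point-in-polygon test. The even-odd (ray-casting) version below is a generic implementation, not the authors' code, and the rectangular curb ring is hypothetical:

```python
# Even-odd (ray-casting) point-in-polygon test, the kind of check used in the
# last pipeline step to keep only the points inside the closed curb polygon.
def point_in_polygon(x, y, polygon):
    """polygon: list of (x, y) vertices of a closed 2D ring."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of the horizontal ray from (x, y) to the right.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

curb = [(0.0, 0.0), (10.0, 0.0), (10.0, 6.0), (0.0, 6.0)]  # hypothetical ring
print(point_in_polygon(5.0, 3.0, curb))   # True: on the street floor
print(point_in_polygon(12.0, 3.0, curb))  # False: outside the curb
```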

  2. Real parabolic vector bundles over a real curve

    Indian Academy of Sciences (India)

    Abstract. We define real parabolic structures on real vector bundles over a real curve. Let (X,σX ) be a real curve, and let S ⊂ X be a non-empty finite subset of X such that σX (S) = S. Let N ≥ 2 be an integer. We construct an N-fold cyclic cover p : Y → X in the category of real curves, ramified precisely over each point of S, ...

  3. Multi-perspective views of students’ difficulties with one-dimensional vector and two-dimensional vector

    Science.gov (United States)

    Fauzi, Ahmad; Ratna Kawuri, Kunthi; Pratiwi, Retno

    2017-01-01

    Researchers of students’ conceptual change usually collect data from written tests and interviews. Moreover, reports of conceptual change often simply refer to changes in concepts, such as on a test, without any identification of the learning processes that have taken place. Research has shown that students have difficulties with vectors in university introductory physics courses and high school physics courses. In this study, we intended to explore students’ understanding of one-dimensional and two-dimensional vectors from multiple perspectives. We explored students’ understanding through a test perspective and an interview perspective. Our research study adopted a mixed-methodology design. The participants were sixty third-semester students of a physics education department. The data were collected by tests and interviews. We divided students’ understanding of one-dimensional and two-dimensional vectors into two categories, namely vector skills for the addition of one-dimensional and two-dimensional vectors, and the relation between vector skills and conceptual understanding. From the investigation, only 44% of students provided correct answers for vector skills for the addition of one-dimensional and two-dimensional vectors, and only 27% of students provided correct answers for the relation between vector skills and conceptual understanding.

  4. PHYSICS PERFORMANCE AND DATASET (PPD)

    CERN Multimedia

    L. Silvestris

    2013-01-01

    The first part of the Long Shutdown period has been dedicated to the preparation of the samples for the analysis targeting the summer conferences. In particular, the 8 TeV data acquired in 2012, including most of the “parked datasets”, have been reconstructed profiting from improved alignment and calibration conditions for all the sub-detectors. Careful planning of the resources was essential in order to deliver the datasets to the analysts in good time, and to schedule the update of all the conditions and calibrations needed at the analysis level. The newly reprocessed data have undergone detailed scrutiny by the Dataset Certification team, allowing the recovery of some of the data for analysis use and further improving the certification efficiency, which now stands at 91% of the recorded luminosity. With the aim of delivering a consistent dataset for 2011 and 2012, both in terms of conditions and release (53X), the PPD team is now working to set up a data re-reconstruction and a new MC pro...

  5. Prediction of human breast and colon cancers from imbalanced data using nearest neighbor and support vector machines.

    Science.gov (United States)

    Majid, Abdul; Ali, Safdar; Iqbal, Mubashar; Kausar, Nabeela

    2014-03-01

    This study proposes a novel prediction approach for human breast and colon cancers using different feature spaces. The proposed scheme consists of two stages: the preprocessor and the predictor. In the preprocessor stage, the mega-trend diffusion (MTD) technique is employed to increase the samples of the minority class, thereby balancing the dataset. In the predictor stage, machine-learning approaches of K-nearest neighbor (KNN) and support vector machines (SVM) are used to develop hybrid MTD-SVM and MTD-KNN prediction models. The MTD-SVM model provided the best values of accuracy, G-mean and Matthews correlation coefficient of 96.71%, 96.70% and 71.98% for the cancer/non-cancer dataset, breast/non-breast cancer dataset and colon/non-colon cancer dataset, respectively. We found that hybrid MTD-SVM is the best with respect to prediction performance and computational cost. The MTD-KNN model achieved moderately better prediction than hybrid MTD-NB (Naïve Bayes), but at the expense of higher computing cost. The MTD-KNN model is faster than MTD-RF (random forest), but its prediction is not better than MTD-RF's. To the best of our knowledge, the reported results are the best results, so far, for these datasets. The proposed scheme indicates that the developed models can be used as a tool for the prediction of cancer. This scheme may be useful for the study of any sequential information such as protein sequences or any nucleic acid sequences. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
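
    A rough sketch of the two-stage idea, with plain random oversampling standing in for the paper's mega-trend diffusion step and synthetic data standing in for the cancer feature sets:

```python
# Balance the minority class, train an SVM, and report accuracy, G-mean,
# and Matthews correlation coefficient, as in the study. Random oversampling
# is only a stand-in for the mega-trend diffusion (MTD) technique.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, matthews_corrcoef, recall_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.utils import resample

X, y = make_classification(n_samples=1500, weights=[0.85, 0.15], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

# Oversample the minority class in the training split only.
minority = X_tr[y_tr == 1]
extra = resample(minority, n_samples=(y_tr == 0).sum() - len(minority),
                 random_state=1)
X_bal = np.vstack([X_tr, extra])
y_bal = np.concatenate([y_tr, np.ones(len(extra), dtype=int)])

pred = SVC(kernel="rbf").fit(X_bal, y_bal).predict(X_te)
sens = recall_score(y_te, pred, pos_label=1)
spec = recall_score(y_te, pred, pos_label=0)
print("accuracy:", accuracy_score(y_te, pred))
print("G-mean:  ", np.sqrt(sens * spec))
print("MCC:     ", matthews_corrcoef(y_te, pred))
```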

  6. Integrated Surface Dataset (Global)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Integrated Surface Dataset (ISD) is composed of worldwide surface weather observations from over 35,000 stations, though the best spatial coverage is...

  7. Aaron Journal article datasets

    Data.gov (United States)

    U.S. Environmental Protection Agency — All figures used in the journal article are in netCDF format. This dataset is associated with the following publication: Sims, A., K. Alapaty, and S. Raman....

  8. Market Squid Ecology Dataset

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains ecological information collected on the major adult spawning and juvenile habitats of market squid off California and the US Pacific Northwest....

  9. Custodial vector model

    Science.gov (United States)

    Becciolini, Diego; Franzosi, Diogo Buarque; Foadi, Roshan; Frandsen, Mads T.; Hapola, Tuomas; Sannino, Francesco

    2015-07-01

    We analyze the Large Hadron Collider (LHC) phenomenology of heavy vector resonances with an SU(2)_L × SU(2)_R spectral global symmetry. This symmetry partially protects the electroweak S parameter from large contributions of the vector resonances. The resulting custodial vector model spectrum and interactions with the standard model fields lead to distinct signatures at the LHC in the diboson, dilepton, and associated Higgs channels.

  10. Vector Differential Calculus

    OpenAIRE

    HITZER, Eckhard MS

    2002-01-01

    This paper treats the fundamentals of the vector differential calculus part of universal geometric calculus. Geometric calculus simplifies and unifies the structure and notation of mathematics for all of science and engineering, and for technological applications. In order to make the treatment self-contained, I first compile all important geometric algebra relationships, which are necessary for vector differential calculus. Then differentiation by vectors is introduced and a host of major ve...

  11. Structural analysis of online handwritten mathematical symbols based on support vector machines

    Science.gov (United States)

    Simistira, Foteini; Papavassiliou, Vassilis; Katsouros, Vassilis; Carayannis, George

    2013-01-01

    Mathematical expression recognition is still a very challenging task for the research community, mainly because of the two-dimensional (2D) structure of mathematical expressions (MEs). In this paper, we present a novel approach for the structural analysis between two on-line handwritten mathematical symbols of an ME, based on spatial features of the symbols. We introduce six features to represent the spatial affinity of the symbols and compare two multi-class classification methods that employ support vector machines (SVMs), one based on the "one-against-one" technique and one based on the "one-against-all" technique, in identifying the relation between a pair of symbols (i.e., subscript, numerator, etc.). A dataset containing 1906 spatial relations derived from the Competition on Recognition of Online Handwritten Mathematical Expressions (CROHME) 2012 training dataset is constructed to evaluate the classifiers and compare them with the rule-based classifier of the ILSP-1 system that participated in the contest. The experimental results give an overall mean error rate of 2.61% for the "one-against-one" SVM approach, 6.57% for the "one-against-all" SVM technique and 12.31% for the ILSP-1 classifier.
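
    Both multi-class strategies are available off the shelf; a minimal sketch, with synthetic stand-ins for the six spatial-affinity features and the relation labels:

```python
# Compare "one-against-one" and "one-against-all" SVM strategies on a toy
# 5-class problem standing in for relations such as subscript or numerator.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=6, n_informative=6,
                           n_redundant=0, n_classes=5, random_state=0)

for name, clf in [("one-against-one", OneVsOneClassifier(SVC(kernel="rbf"))),
                  ("one-against-all", OneVsRestClassifier(SVC(kernel="rbf")))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```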

  12. Ghost instabilities of cosmological models with vector fields nonminimally coupled to the curvature

    International Nuclear Information System (INIS)

    Himmetoglu, Burak; Peloso, Marco; Contaldi, Carlo R.

    2009-01-01

    We prove that many cosmological models characterized by vectors nonminimally coupled to the curvature (such as the Turner-Widrow mechanism for the production of magnetic fields during inflation, and models of vector inflation or vector curvaton) contain ghosts. The ghosts are associated with the longitudinal vector polarization present in these models and are found from studying the sign of the eigenvalues of the kinetic matrix for the physical perturbations. Ghosts introduce two main problems: (1) they make the theories ill defined at the quantum level in the high energy/subhorizon regime (and create serious problems for finding a well-behaved UV completion), and (2) they create an instability already at the linearized level. This happens because the eigenvalue corresponding to the ghost crosses zero during the cosmological evolution. At this point the linearized equations for the perturbations become singular (we show that this happens for all the models mentioned above). We explicitly solve the equations in the simplest cases of a vector without a vacuum expectation value in a Friedmann-Robertson-Walker geometry, and of a vector with a vacuum expectation value plus a cosmological constant, and we show that indeed the solutions of the linearized equations diverge when these equations become singular.

  13. Vectorization, parallelization and porting of nuclear codes. Vectorization and parallelization. Progress report fiscal 1999

    Energy Technology Data Exchange (ETDEWEB)

    Adachi, Masaaki; Ogasawara, Shinobu; Kume, Etsuo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ishizuki, Shigeru; Nemoto, Toshiyuki; Kawasaki, Nobuo; Kawai, Wataru [Fujitsu Ltd., Tokyo (Japan); Yatake, Yo-ichi [Hitachi Ltd., Tokyo (Japan)

    2001-02-01

    Several computer codes in the nuclear field have been vectorized, parallelized and transported on the FUJITSU VPP500 system, the AP3000 system, the SX-4 system and the Paragon system at the Center for Promotion of Computational Science and Engineering in the Japan Atomic Energy Research Institute. We dealt with 18 codes in fiscal 1999. These results are reported in 3 parts, i.e., the vectorization and parallelization part on vector processors, the parallelization part on scalar processors and the porting part. In this report, we describe the vectorization and parallelization on vector processors. In this vectorization and parallelization on vector processors part, the vectorization of the Relativistic Molecular Orbital Calculation code RSCAT, a microscopic transport code for high energy nuclear collisions, JAM, the three-dimensional non-steady thermal-fluid analysis code STREAM, the Relativistic Density Functional Theory code RDFT and the High Speed Three-Dimensional Nodal Diffusion code MOSRA-Light on the VPP500 system and the SX-4 system are described. (author)

  14. Elliptic-symmetry vector optical fields.

    Science.gov (United States)

    Pan, Yue; Li, Yongnan; Li, Si-Min; Ren, Zhi-Cheng; Kong, Ling-Jun; Tu, Chenghou; Wang, Hui-Tian

    2014-08-11

    We present in principle and demonstrate experimentally a new kind of vector fields: elliptic-symmetry vector optical fields. This is a significant development in vector fields, as it breaks the cylindrical symmetry and enriches the family of vector fields. Due to the presence of an additional degree of freedom, which is the interval between the foci in the elliptic coordinate system, the elliptic-symmetry vector fields are more flexible than the cylindrical vector fields for controlling the spatial structure of polarization and for engineering the focusing fields. The elliptic-symmetry vector fields can find many specific applications from optical trapping to optical machining and so on.

  15. A vectorization of the Jameson-Caughey NYU transonic swept-wing computer program FLO-22-V1 for the STAR-100 computer

    Science.gov (United States)

    Smith, R. E.; Pitts, J. I.; Lambiotte, J. J., Jr.

    1978-01-01

    The computer program FLO-22 for analyzing inviscid transonic flow past 3-D swept-wing configurations was modified to use vector operations and run on the STAR-100 computer. The vectorized version described herein was called FLO-22-V1. Vector operations were incorporated into Successive Line Over-Relaxation in the transformed horizontal direction. Vector relational operations and control vectors were used to implement upwind differencing at supersonic points. A high speed of computation and extended grid domain were characteristics of FLO-22-V1. The new program was not the optimal vectorization of Successive Line Over-Relaxation applied to transonic flow; however, it proved that vector operations can readily be implemented to increase the computation rate of the algorithm.

  16. Chikungunya Virus–Vector Interactions

    Directory of Open Access Journals (Sweden)

    Lark L. Coffey

    2014-11-01

    Full Text Available Chikungunya virus (CHIKV is a mosquito-borne alphavirus that causes chikungunya fever, a severe, debilitating disease that often produces chronic arthralgia. Since 2004, CHIKV has emerged in Africa, Indian Ocean islands, Asia, Europe, and the Americas, causing millions of human infections. Central to understanding CHIKV emergence is knowledge of the natural ecology of transmission and vector infection dynamics. This review presents current understanding of CHIKV infection dynamics in mosquito vectors and its relationship to human disease emergence. The following topics are reviewed: CHIKV infection and vector life history traits including transmission cycles, genetic origins, distribution, emergence and spread, dispersal, vector competence, vector immunity and microbial interactions, and co-infection by CHIKV and other arboviruses. The genetics of vector susceptibility and host range changes, population heterogeneity and selection for the fittest viral genomes, dual host cycling and its impact on CHIKV adaptation, viral bottlenecks and intrahost diversity, and adaptive constraints on CHIKV evolution are also discussed. The potential for CHIKV re-emergence and expansion into new areas and prospects for prevention via vector control are also briefly reviewed.

  17. Extended vector-tensor theories

    Energy Technology Data Exchange (ETDEWEB)

    Kimura, Rampei; Naruko, Atsushi; Yoshida, Daisuke, E-mail: rampei@th.phys.titech.ac.jp, E-mail: naruko@th.phys.titech.ac.jp, E-mail: yoshida@th.phys.titech.ac.jp [Department of Physics, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo 152-8551 (Japan)

    2017-01-01

    Recently, several extensions of massive vector theory in curved space-time have been proposed in the literature. In this paper, we consider the most general vector-tensor theories that contain up to two derivatives with respect to the metric and the vector field. By imposing a degeneracy condition on the Lagrangian in the context of the ADM decomposition of space-time to eliminate an unwanted mode, we construct a new class of massive vector theories in which five degrees of freedom can propagate, corresponding to three massive vector modes and two massless tensor modes. We find that the generalized Proca and the beyond generalized Proca theories up to the quartic Lagrangian, which should be included in this formulation, are degenerate theories even in curved space-time. Finally, introducing new metric and vector field transformations, we investigate the properties of the theories thus obtained under such transformations.

  18. Supergravity inspired vector curvaton

    International Nuclear Information System (INIS)

    Dimopoulos, Konstantinos

    2007-01-01

    It is investigated whether a massive Abelian vector field, whose gauge kinetic function is growing during inflation, can be responsible for the generation of the curvature perturbation in the Universe. Particle production is studied and it is shown that the vector field can obtain a scale-invariant superhorizon spectrum of perturbations with a reasonable choice of kinetic function. After inflation the vector field begins coherent oscillations, during which it corresponds to pressureless isotropic matter. When the vector field dominates the Universe, its perturbations give rise to the observed curvature perturbation following the curvaton scenario. It is found that this is possible if, after the end of inflation, the mass of the vector field increases at a phase transition at a temperature of order 1 TeV or lower. Inhomogeneous reheating, whereby the vector field modulates the decay rate of the inflaton, is also studied.

  19. ATLAS File and Dataset Metadata Collection and Use

    CERN Document Server

    Albrand, S; The ATLAS collaboration; Lambert, F; Gallas, E J

    2012-01-01

    The ATLAS Metadata Interface (“AMI”) was designed as a generic cataloguing system, and as such it has found many uses in the experiment including software release management, tracking of reconstructed event sizes and control of dataset nomenclature. The primary use of AMI is to provide a catalogue of datasets (file collections) which is searchable using physics criteria. In this paper we discuss the various mechanisms used for filling the AMI dataset and file catalogues. By correlating information from different sources we can derive aggregate information which is important for physics analysis; for example the total number of events contained in dataset, and possible reasons for missing events such as a lost file. Finally we will describe some specialized interfaces which were developed for the Data Preparation and reprocessing coordinators. These interfaces manipulate information from both the dataset domain held in AMI, and the run-indexed information held in the ATLAS COMA application (Conditions and ...

  20. Constraining vectors and axial-vectors in walking technicolour by a holographic principle

    DEFF Research Database (Denmark)

    D. Dietrich, Dennis; Kouvaris, Christoforos

    2008-01-01

    We use a holographic principle to study the low-energy spectrum of walking technicolour models. In particular, we predict the masses of the axial vectors as well as the decay constants of vectors and axial vectors as functions of the mass of the techni-rho. Given that there are very few...

  1. Address Points, MFRDC has address points for all municipalities in Dooly, Crisp, Macon, Taylor, Schley, Marion and Webster counties and for the counties as well., Published in 2008, 1:1200 (1in=100ft) scale, Middle Flint Regional Development Commission.

    Data.gov (United States)

    NSGIC Regional | GIS Inventory — Address Points dataset current as of 2008. MFRDC has address points for all municipalities in Dooly, Crisp, Macon, Taylor, Schley, Marion and Webster counties and...

  2. Norwegian Hydrological Reference Dataset for Climate Change Studies

    Energy Technology Data Exchange (ETDEWEB)

    Magnussen, Inger Helene; Killingland, Magnus; Spilde, Dag

    2012-07-01

    Based on the Norwegian hydrological measurement network, NVE has selected a Hydrological Reference Dataset for studies of hydrological change. The dataset meets international standards with high data quality. It is suitable for monitoring and studying the effects of climate change on the hydrosphere and cryosphere in Norway. The dataset includes streamflow, groundwater, snow, glacier mass balance and length change, lake ice and water temperature in rivers and lakes.(Author)

  3. Support vector machine regression (SVR/LS-SVM)--an alternative to neural networks (ANN) for analytical chemistry? Comparison of nonlinear methods on near infrared (NIR) spectroscopy data.

    Science.gov (United States)

    Balabin, Roman M; Lomakina, Ekaterina I

    2011-04-21

    In this study, we make a general comparison of the accuracy and robustness of five multivariate calibration models: partial least squares (PLS) regression or projection to latent structures, polynomial partial least squares (Poly-PLS) regression, artificial neural networks (ANNs), and two novel techniques based on support vector machines (SVMs) for multivariate data analysis: support vector regression (SVR) and least-squares support vector machines (LS-SVMs). The comparison is based on fourteen (14) different datasets: seven sets of gasoline data (density, benzene content, and fractional composition/boiling points), two sets of ethanol gasoline fuel data (density and ethanol content), one set of diesel fuel data (total sulfur content), three sets of petroleum (crude oil) macromolecules data (weight percentages of asphaltenes, resins, and paraffins), and one set of petroleum resins data (resins content). Vibrational (near-infrared, NIR) spectroscopic data are used to predict the properties and quality coefficients of gasoline, biofuel/biodiesel, diesel fuel, and other samples of interest. The four systems presented here range greatly in composition, properties, strength of intermolecular interactions (e.g., van der Waals forces, H-bonds), colloid structure, and phase behavior. Due to the high diversity of chemical systems studied, general conclusions about SVM regression methods can be made. We try to answer the following question: to what extent can SVM-based techniques replace ANN-based approaches in real-world (industrial/scientific) applications? The results show that both SVR and LS-SVM methods are comparable to ANNs in accuracy. Due to the much higher robustness of the former, the SVM-based approaches are recommended for practical (industrial) application. This has been shown to be especially true for complicated, highly nonlinear objects.
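
    A bare-bones version of such a comparison, with synthetic data standing in for the NIR spectra and property values and with illustrative (untuned) hyperparameters:

```python
# Compare epsilon-SVR with a small neural network on a nonlinear regression
# task. The synthetic inputs are a stand-in for NIR spectral features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=400)

models = {
    "SVR": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)),
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32,),
                                      max_iter=2000, random_state=0)),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 {r2.mean():.3f}")
```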

  4. An IFC schema extension and binary serialization format to efficiently integrate point cloud data into building models

    NARCIS (Netherlands)

    Krijnen, T.F.; Beetz, J.

    2017-01-01

    In this paper we suggest an extension to the Industry Foundation Classes (IFC) model to integrate point cloud datasets. The proposal includes a schema extension to the core model allowing the storage of points, either as Cartesian coordinates, points in parametric space of associated building

  5. Recommendation on vectors and vector-transmitted diseases

    OpenAIRE

    Netherlands Food and Consumer Product Safety Authority

    2009-01-01

    In view of their increasing risk of introduction and their possible implications in causing major disease outbreaks, vectors, as well as vector-transmitted diseases like dengue, West Nile disease, Lyme disease and bluetongue need to be recognised as a threat to public and animal health and to the economy, also in the Netherlands. There has been an increase in the incidence of these diseases in the past two to three decades. Climate changes and changes in the use of land, water managemen...

  6. The Harvard organic photovoltaic dataset

    Science.gov (United States)

    Lopez, Steven A.; Pyzer-Knapp, Edward O.; Simm, Gregor N.; Lutzow, Trevor; Li, Kewei; Seress, Laszlo R.; Hachmann, Johannes; Aspuru-Guzik, Alán

    2016-01-01

    The Harvard Organic Photovoltaic Dataset (HOPV15) presented in this work is a collation of experimental photovoltaic data from the literature, and corresponding quantum-chemical calculations performed over a range of conformers, each with quantum chemical results using a variety of density functionals and basis sets. It is anticipated that this dataset will be of use in both relating electronic structure calculations to experimental observations through the generation of calibration schemes, as well as for the creation of new semi-empirical methods and the benchmarking of current and future model chemistries for organic electronic applications. PMID:27676312

  7. Gauge anomaly with vector and axial-vector fields in 6D curved space

    Science.gov (United States)

    Yajima, Satoshi; Eguchi, Kohei; Fukuda, Makoto; Oka, Tomonori

    2018-03-01

    Imposing the conservation equation of the vector current for a fermion of spin 1/2 at the quantum level, a gauge anomaly for the fermion coupling with non-Abelian vector and axial-vector fields in 6D curved space is expressed in tensorial form. The anomaly consists of terms that resemble the chiral U(1) anomaly and the commutator terms that disappear if the axial-vector field is Abelian.

  8. Synthetic and Empirical Capsicum Annuum Image Dataset

    NARCIS (Netherlands)

    Barth, R.

    2016-01-01

    This dataset consists of per-pixel annotated synthetic (10500) and empirical images (50) of Capsicum annuum, also known as sweet or bell pepper, situated in a commercial greenhouse. Furthermore, the source models used to generate the synthetic images are included. The aim of the datasets is to

  9. Doppler Lidar Vector Retrievals and Atmospheric Data Visualization in Mixed/Augmented Reality

    Science.gov (United States)

    Cherukuru, Nihanth Wagmi

    Environmental remote sensing has seen rapid growth in recent years, and Doppler wind lidars have gained popularity primarily due to their non-intrusive, high spatial and temporal measurement capabilities. While early lidar applications relied on radial velocity measurements alone, most practical applications in wind farm control and short-term wind prediction require knowledge of the vector wind field. Over the past couple of years, multiple works on lidars have explored three primary methods of retrieving wind vectors, viz., using the homogeneous wind-field assumption, computationally intensive variational methods, and the use of multiple Doppler lidars. Building on prior research, the current three-part study first demonstrates the capabilities of single and dual Doppler lidar retrievals in capturing downslope windstorm-type flows occurring at Arizona's Barringer Meteor Crater as part of the METCRAX II field experiment. Next, to address the need for a reliable and computationally efficient vector retrieval for adaptive wind farm control applications, a novel 2D vector retrieval based on a variational formulation was developed, applied to lidar scans from an offshore wind farm, and validated with data from a cup and vane anemometer installed on a nearby research platform. Finally, a novel data visualization technique using Mixed Reality (MR)/Augmented Reality (AR) technology is presented to visualize data from atmospheric sensors. MR is an environment in which the user's visual perception of the real world is enhanced with live, interactive, computer-generated sensory input (in this case, data from atmospheric sensors like Doppler lidars). A methodology using modern game development platforms is presented and demonstrated with lidar-retrieved wind fields. In the current study, the possibility of using this technology to visualize data from atmospheric sensors in mixed reality is explored and demonstrated with lidar retrieved wind fields as well as

  10. Identification of species based on DNA barcode using k-mer feature vector and Random forest classifier.

    Science.gov (United States)

    Meher, Prabina Kumar; Sahu, Tanmaya Kumar; Rao, A R

    2016-11-05

    DNA barcoding is a molecular diagnostic method that allows automated and accurate identification of species based on a short and standardized fragment of DNA. To this end, an attempt has been made in this study to develop a computational approach for identifying a species by comparing its barcode with the barcode sequences of known species present in a reference library. Each barcode sequence was first mapped onto a numeric feature vector based on k-mer frequencies, and Random forest methodology was then employed on the transformed dataset for species identification. The proposed approach outperformed similarity-based, tree-based and diagnostic-based approaches, and was found comparable with existing supervised-learning-based approaches in terms of species identification success rate when compared using real and simulated datasets. Based on the proposed approach, an online web interface, SPIDBAR, has also been developed and made freely available at http://cabgrid.res.in:8080/spidbar/ for species identification by taxonomists. Copyright © 2016 Elsevier B.V. All rights reserved.
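
    The core of the approach, a k-mer frequency vector fed to a Random forest, is easy to sketch; the toy reference library below is hypothetical and far smaller than any real barcode library:

```python
# Map each barcode sequence to a k-mer frequency vector, then classify
# species with a Random forest.
from itertools import product
from sklearn.ensemble import RandomForestClassifier

def kmer_vector(seq, k=3):
    """Relative frequency of every DNA k-mer in a sequence."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    counts = [0.0] * len(kmers)
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in index:              # skip ambiguous bases such as 'N'
            counts[index[kmer]] += 1
    total = sum(counts) or 1.0
    return [c / total for c in counts]

# Toy reference library of (sequence, species label) pairs.
library = [("ACGTACGTAC", "sp_A"), ("ACGTACGTAA", "sp_A"),
           ("TTGGCCTTGG", "sp_B"), ("TTGGCCTTGC", "sp_B")]
X = [kmer_vector(seq) for seq, _ in library]
y = [label for _, label in library]
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([kmer_vector("ACGTACGTCC")]))  # likely ['sp_A'] here
```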

  11. Heavy Scalar, Vector, and Axial-Vector Mesons in Hot and Dense Nuclear Medium

    Directory of Open Access Journals (Sweden)

    Arvind Kumar

    2014-01-01

    Full Text Available In this work we shall investigate the mass modifications of scalar mesons (D0, B0), vector mesons (D*, B*), and axial-vector mesons (D1, B1) at finite density and temperature of the nuclear medium. The above mesons are modified in the nuclear medium through the modification of quark and gluon condensates. We will find the medium modification of quark and gluon condensates within the chiral SU(3) model through the medium modification of the scalar-isoscalar fields σ and ζ at finite density and temperature. These medium-modified quark and gluon condensates will further be used through QCD sum rules for the evaluation of the in-medium properties of the above mentioned scalar, vector, and axial-vector mesons. We will also discuss the effects of density and temperature of the nuclear medium on the scattering lengths of the above scalar, vector, and axial-vector mesons. The study of the medium modifications of the above mesons may be helpful for understanding their production rates in heavy-ion collision experiments. The results of the present investigations of medium modifications of scalar, vector, and axial-vector mesons at finite density and temperature can be verified in the compressed baryonic matter (CBM) experiment at the FAIR facility at GSI, Germany.

  12. Vector Fields on Product Manifolds

    OpenAIRE

    Kurz, Stefan

    2011-01-01

    This short report establishes some basic properties of smooth vector fields on product manifolds. The main results are: (i) On a product manifold there always exists a direct sum decomposition into horizontal and vertical vector fields. (ii) Horizontal and vertical vector fields are naturally isomorphic to smooth families of vector fields defined on the factors. Vector fields are regarded as derivations of the algebra of smooth functions.

  13. Light scattering of rectangular slot antennas: parallel magnetic vector vs perpendicular electric vector

    Science.gov (United States)

    Lee, Dukhyung; Kim, Dai-Sik

    2016-01-01

    We study light scattering off rectangular slot nano antennas on a metal film, varying incident polarization and incident angle, to examine which field vector of light is more important: the electric vector perpendicular to, or the magnetic vector parallel to, the long axis of the rectangle. While vector Babinet’s principle would prefer the magnetic field along the long axis for optimizing slot antenna function, convention and intuition most often refer to the electric field perpendicular to it. Here, we demonstrate experimentally that, in accordance with vector Babinet’s principle, the incident magnetic vector parallel to the long axis is the dominant component, with the perpendicular incident electric field making a small contribution of a factor of 1/|ε|, the reciprocal of the absolute value of the dielectric constant of the metal, owing to the non-perfectness of metals at optical frequencies.

  14. Full-Angle Quaternions for Robustly Matching Vectors of 3D Rotations

    NARCIS (Netherlands)

    Liwicki, Stephan; Pham, Minh-Tri; Zafeiriou, Stefanos; Pantic, Maja; Stenger, Björn

    In this paper we introduce a new distance for robustly matching vectors of 3D rotations. A special representation of 3D rotations, which we coin full-angle quaternion (FAQ), allows us to express this distance as Euclidean. We apply the distance to the problems of 3D shape recognition from point

  15. Generalization of concurrence vectors

    International Nuclear Information System (INIS)

    Yu Changshui; Song Heshan

    2004-01-01

    In this Letter, based on a generalization of concurrence vectors for bipartite pure states that employs tensor products of generators of the corresponding rotation groups, we generalize concurrence vectors to the case of mixed states. A new separability criterion for multipartite pure states is given, for which we define a concurrence vector; we then generalize this vector to the case of multipartite mixed states and obtain a good measure of free entanglement.

  16. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role to coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...

  17. Improved Vector Velocity Estimation using Directional Transverse Oscillation

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2015-01-01

    A method for estimating vector velocities using transverse oscillation (TO) combined with directional beamforming is presented. Directional Transverse Oscillation (DTO) is self-calibrating, which increases the estimation accuracy and finds the lateral oscillation period automatically. A normal...... focused field is emitted and the received signals are beamformed in the lateral direction transverse to the ultrasound beam. A lateral oscillation is obtained by having a receive apodization waveform with two separate peaks. The IQ data are obtained by making a Hilbert transform of the directional signal...... transducer with a focal point at 105.6 mm (F#=5) for Vector Flow Imaging (VFI). A 6 mm radius tube in a circulating flow rig was scanned and the parabolic volume flow of 112.7 l/h (peak velocity 0.55 m/s) measured by a Danfoss Magnetic flow meter for reference. Velocity estimates for DTO are found for 32...

  18. EEG datasets for motor imagery brain-computer interface.

    Science.gov (United States)

    Cho, Hohyun; Ahn, Minkyu; Ahn, Sangtae; Kwon, Moonyoung; Jun, Sung Chan

    2017-07-01

    Most investigators of brain-computer interface (BCI) research believe that BCI can be achieved through induced neuronal activity from the cortex, but not by evoked neuronal activity. Motor imagery (MI)-based BCI is one of the standard concepts of BCI, in that the user can generate induced activity by imagining motor movements. However, variations in performance over sessions and subjects are too severe to overcome easily; therefore, a basic understanding and investigation of BCI performance variation is necessary to find critical evidence of performance variation. Here we present not only EEG datasets for MI BCI from 52 subjects, but also the results of a psychological and physiological questionnaire, EMG datasets, the locations of 3D EEG electrodes, and EEGs for non-task-related states. We validated our EEG datasets by using the percentage of bad trials, event-related desynchronization/synchronization (ERD/ERS) analysis, and classification analysis. After conventional rejection of bad trials, we showed contralateral ERD and ipsilateral ERS in the somatosensory area, which are well-known patterns of MI. Finally, we showed that 73.08% of datasets (38 subjects) included reasonably discriminative information. Our EEG datasets included the information necessary to determine statistical significance; they consisted of well-discriminated datasets (38 subjects) and less-discriminative datasets. These may provide researchers with opportunities to investigate human factors related to MI BCI performance variation, and may also achieve subject-to-subject transfer by using metadata, including a questionnaire, EEG coordinates, and EEGs for non-task-related states. © The Authors 2017. Published by Oxford University Press.
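
    One of the validation steps above, ERD/ERS analysis, reduces to comparing band-limited power during imagery with a baseline window. A minimal single-channel sketch with an assumed sampling rate, band, and windows, and random data in place of the published recordings:

```python
# ERD/ERS sketch: percentage change of mu-band (8-13 Hz) power during motor
# imagery relative to a pre-cue baseline; negative values = desynchronization.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 512                                   # assumed sampling rate in Hz
b, a = butter(4, [8, 13], btype="bandpass", fs=fs)

def erd_percent(trials, base_win, task_win):
    """trials: (n_trials, n_samples) array for one channel."""
    power = filtfilt(b, a, trials, axis=1) ** 2
    base = power[:, base_win[0]:base_win[1]].mean()
    task = power[:, task_win[0]:task_win[1]].mean()
    return 100.0 * (task - base) / base

rng = np.random.default_rng(0)
trials = rng.normal(size=(50, 4 * fs))     # fake data: 50 trials of 4 s
print(erd_percent(trials, base_win=(0, fs), task_win=(2 * fs, 4 * fs)))
```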

  19. Complex Polynomial Vector Fields

    DEFF Research Database (Denmark)

    Dias, Kealey

    The two branches of dynamical systems, continuous and discrete, correspond to the study of differential equations (vector fields) and iteration of mappings respectively. In holomorphic dynamics, the systems studied are restricted to those described by holomorphic (complex analytic) functions ... vector fields. Since the class of complex polynomial vector fields in the plane is natural to consider, it is remarkable that its study has only begun very recently. There are numerous fundamental questions that are still open, both in the general classification of these vector fields, the decomposition of parameter spaces into structurally stable domains, and a description of the bifurcations. For this reason, the talk will focus on these questions for complex polynomial vector fields.

  20. A high-resolution European dataset for hydrologic modeling

    Science.gov (United States)

    Ntegeka, Victor; Salamon, Peter; Gomes, Goncalo; Sint, Hadewij; Lorini, Valerio; Thielen, Jutta

    2013-04-01

    There is an increasing demand for large scale hydrological models, not only in the field of modeling the impact of climate change on water resources but also for disaster risk assessments and flood or drought early warning systems. These large scale models need to be calibrated and verified against large amounts of observations in order to judge their capabilities to predict the future. However, the creation of large scale datasets is challenging, for it requires collection, harmonization, and quality checking of large amounts of observations. For this reason, only a limited number of such datasets exist. In this work, we present a pan-European, high-resolution gridded dataset of meteorological observations (EFAS-Meteo) which was designed with the aim to drive a large scale hydrological model. Similar European and global gridded datasets already exist, such as the HadGHCND (Caesar et al., 2006), the JRC MARS-STAT database (van der Goot and Orlandi, 2003) and the E-OBS gridded dataset (Haylock et al., 2008). However, none of those provide similarly high spatial resolution and/or a complete set of variables to force a hydrologic model. EFAS-Meteo contains daily maps of precipitation, surface temperature (mean, minimum and maximum), wind speed and vapour pressure at a spatial grid resolution of 5 x 5 km for the time period 1 January 1990 - 31 December 2011. It furthermore contains radiation, calculated using a staggered approach depending on the availability of sunshine duration, cloud cover and minimum and maximum temperature, as well as evapotranspiration (potential evapotranspiration, bare soil and open water evapotranspiration). The potential evapotranspiration was calculated using the Penman-Monteith equation with the above-mentioned meteorological variables. The dataset was created as part of the development of the European Flood Awareness System (EFAS) and has been continuously updated throughout the last years. The dataset variables are used as
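
    The dataset documentation is not quoted here, but given exactly these inputs (net radiation, temperature, wind speed and vapour pressure) the standard FAO-56 form of the Penman-Monteith equation is the usual choice:

```latex
% FAO-56 Penman-Monteith reference evapotranspiration (assumed form; the
% EFAS-Meteo report may parameterize it differently).
\[
ET_0 = \frac{0.408\,\Delta\,(R_n - G)
       + \gamma\,\frac{900}{T + 273}\,u_2\,(e_s - e_a)}
      {\Delta + \gamma\,(1 + 0.34\,u_2)}
\]
% ET_0: reference evapotranspiration [mm/day]; R_n: net radiation; G: soil
% heat flux; T: mean daily air temperature [deg C]; u_2: wind speed at 2 m;
% e_s - e_a: vapour pressure deficit; \Delta: slope of the saturation vapour
% pressure curve; \gamma: psychrometric constant.
```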

  1. Validation of SplitVectors Encoding for Quantitative Visualization of Large-Magnitude-Range Vector Fields.

    Science.gov (United States)

    Henan Zhao; Bryant, Garnett W; Griffin, Wesley; Terrill, Judith E; Jian Chen

    2017-06-01

    We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches commonly used in scientific visualizations: direct linear representation, logarithmic, and text display. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by four times compared to logarithmic mapping in discrimination tasks; (2) SplitVectors show no significant differences from the textual display approach, but reduce cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using a logarithmic mapping can be problematic, as participants' confidence was as high as when reading directly from the textual display, but their accuracy was poor; and (5) stereoscopy improved performance, especially in the more challenging discrimination tasks.
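
    The encoding itself comes down to splitting each magnitude into a mantissa (digit) part and an exponent part, which can then be mapped to separate visual variables. A minimal sketch of that split, not the authors' rendering code:

```python
# Split |v| into mantissa and exponent so that v == mantissa * 10**exponent,
# the scientific-notation decomposition underlying the SplitVectors display.
import math

def split_magnitude(value):
    if value == 0.0:
        return 0.0, 0
    exponent = math.floor(math.log10(abs(value)))
    mantissa = value / 10.0 ** exponent
    return mantissa, exponent

for v in (0.00042, 3.7, 51800.0):
    m, e = split_magnitude(v)
    print(f"{v} -> {m:.3f} x 10^{e}")
```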

  2. ASSISTments Dataset from Multiple Randomized Controlled Experiments

    Science.gov (United States)

    Selent, Douglas; Patikorn, Thanaporn; Heffernan, Neil

    2016-01-01

    In this paper, we present a dataset consisting of data generated from 22 previously and currently running randomized controlled experiments inside the ASSISTments online learning platform. This dataset provides data mining opportunities for researchers to analyze ASSISTments data in a convenient format across multiple experiments at the same time.…

  3. A terrestrial lidar-based workflow for determining three-dimensional slip vectors and associated uncertainties

    Science.gov (United States)

    Gold, Peter O.; Cowgill, Eric; Kreylos, Oliver; Gold, Ryan D.

    2012-01-01

    Three-dimensional (3D) slip vectors recorded by displaced landforms are difficult to constrain across complex fault zones, and the uncertainties associated with such measurements become increasingly challenging to assess as landforms degrade over time. We approach this problem from a remote sensing perspective by using terrestrial laser scanning (TLS) and 3D structural analysis. We have developed an integrated TLS data collection and point-based analysis workflow that incorporates accurate assessments of aleatoric and epistemic uncertainties using experimental surveys, Monte Carlo simulations, and iterative site reconstructions. Our scanning workflow and equipment requirements are optimized for single-operator surveying, and our data analysis process is largely completed using new point-based computing tools in an immersive 3D virtual reality environment. In a case study, we measured slip vector orientations at two sites along the rupture trace of the 1954 Dixie Valley earthquake (central Nevada, United States), yielding measurements that are the first direct constraints on the 3D slip vector for this event. These observations are consistent with a previous approximation of net extension direction for this event. We find that errors introduced by variables in our survey method result in <2.5 cm of variability in components of displacement, and are eclipsed by the 10–60 cm epistemic errors introduced by reconstructing the field sites to their pre-erosion geometries. Although the higher resolution TLS data sets enabled visualization and data interactivity critical for reconstructing the 3D slip vector and for assessing uncertainties, dense topographic constraints alone were not sufficient to significantly narrow the wide (<26°) range of allowable slip vector orientations that resulted from accounting for epistemic uncertainties.

  4. Gradients estimation from random points with volumetric tensor in turbulence

    Science.gov (United States)

    Watanabe, Tomoaki; Nagata, Koji

    2017-12-01

    We present a method for estimating fully-resolved/coarse-grained gradients from randomly distributed points in turbulence. The method is based on a linear approximation of spatial gradients expressed with the volumetric tensor, which is a 3 × 3 matrix determined by the geometric distribution of the points. The coarse-grained gradient can be considered a low-pass filtered gradient, whose cutoff is estimated with the eigenvalues of the volumetric tensor. The present method, the volumetric tensor approximation, is tested for velocity and passive scalar gradients in an incompressible planar jet and a mixing layer. Comparison with a finite difference approximation on a Cartesian grid shows that the volumetric tensor approximation computes the coarse-grained gradients fairly well at a moderate computational cost under various spatial distributions of points. We also show that imposing the solenoidal condition improves the accuracy of the present method for solenoidal vectors, such as the velocity vector in incompressible flows, especially when the number of points is not large. The volumetric tensor approximation with 4 points poorly estimates the gradient because of the anisotropic distribution of the points. Increasing the number of points beyond 4 significantly improves the accuracy. Although the coarse-grained gradient changes with the cutoff length, the volumetric tensor approximation yields a coarse-grained gradient whose magnitude is close to the one obtained by the finite difference. We also show that the velocity gradient estimated with the present method captures turbulence characteristics such as local flow topology, amplification of enstrophy and strain, and energy transfer across scales.
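
    In spirit, the estimate is a least-squares fit of a first-order Taylor expansion, and the 3 x 3 normal-equation matrix plays the role of the volumetric tensor. The sketch below follows that reading and is not the paper's exact formulation:

```python
# Least-squares gradient from scattered points: fitting
# f(x_i) ~ f(x_0) + g . (x_i - x_0) gives normal equations whose 3x3
# coefficient matrix acts as the "volumetric tensor".
import numpy as np

def gradient_from_points(x0, f0, pts, fvals):
    dx = pts - x0                      # (n, 3) separation vectors
    df = fvals - f0                    # (n,) value differences
    T = dx.T @ dx                      # 3x3 volumetric tensor
    return np.linalg.solve(T, dx.T @ df)

rng = np.random.default_rng(0)
g_true = np.array([1.0, -2.0, 0.5])
pts = rng.normal(scale=0.01, size=(8, 3))        # 8 nearby random points
fvals = pts @ g_true                             # linear test field
print(gradient_from_points(np.zeros(3), 0.0, pts, fvals))  # ~ g_true
```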

  5. Vectorization, parallelization and porting of nuclear codes (vectorization and parallelization). Progress report fiscal 1998

    International Nuclear Information System (INIS)

    Ishizuki, Shigeru; Kawai, Wataru; Nemoto, Toshiyuki; Ogasawara, Shinobu; Kume, Etsuo; Adachi, Masaaki; Kawasaki, Nobuo; Yatake, Yo-ichi

    2000-03-01

    Several computer codes in the nuclear field have been vectorized, parallelized and transported on the FUJITSU VPP500 system, the AP3000 system and the Paragon system at Center for Promotion of Computational Science and Engineering in Japan Atomic Energy Research Institute. We dealt with 12 codes in fiscal 1998. These results are reported in 3 parts, i.e., the vectorization and parallelization on vector processors part, the parallelization on scalar processors part and the porting part. In this report, we describe the vectorization and parallelization on vector processors. In this vectorization and parallelization on vector processors part, the vectorization of General Tokamak Circuit Simulation Program code GTCSP, the vectorization and parallelization of Molecular Dynamics NTV (n-particle, Temperature and Velocity) Simulation code MSP2, Eddy Current Analysis code EDDYCAL, Thermal Analysis Code for Test of Passive Cooling System by HENDEL T2 code THANPACST2 and MHD Equilibrium code SELENEJ on the VPP500 are described. In the parallelization on scalar processors part, the parallelization of Monte Carlo N-Particle Transport code MCNP4B2, Plasma Hydrodynamics code using Cubic Interpolated Propagation Method PHCIP and Vectorized Monte Carlo code (continuous energy model / multi-group model) MVP/GMVP on the Paragon are described. In the porting part, the porting of Monte Carlo N-Particle Transport code MCNP4B2 and Reactor Safety Analysis code RELAP5 on the AP3000 are described. (author)

  6. An open, interoperable, transdisciplinary approach to a point cloud data service using OGC standards and open source software.

    Science.gov (United States)

    Steer, Adam; Trenham, Claire; Druken, Kelsey; Evans, Benjamin; Wyborn, Lesley

    2017-04-01

    High resolution point clouds and other topology-free point data sources are widely utilised for research, management and planning activities. A key goal for research and management users is making these data and common derivatives available in a way which is seamlessly interoperable with other observed and modelled data. The Australian National Computational Infrastructure (NCI) stores point data from a range of disciplines, including terrestrial and airborne LiDAR surveys, 3D photogrammetry, airborne and ground-based geophysical observations, bathymetric observations and 4D marine tracers. These data are stored alongside a significant store of Earth systems data including climate and weather, ecology, hydrology, geoscience and satellite observations, and available from NCI's National Environmental Research Data Interoperability Platform (NERDIP) [1]. Because of the NERDIP requirement for interoperability with gridded datasets, the data models required to store these data may not conform to the LAS/LAZ format - the widely accepted community standard for point data storage and transfer. The goal for NCI is making point data discoverable, accessible and useable in ways which allow seamless integration with earth observation datasets and model outputs - in turn assisting researchers and decision-makers in the often-convoluted process of handling and analyzing massive point datasets. With a use-case of providing a web data service and supporting a derived product workflow, NCI has implemented and tested a web-based point cloud service using the Open Geospatial Consortium (OGC) Web Processing Service [2] as a transaction handler between a web-based client and server-side computing tools based on a native Linux operating system. Using this model, the underlying toolset for driving a data service is flexible and can take advantage of NCI's highly scalable research cloud. Present work focusses on the Point Data Abstraction Library (PDAL) [3] as a logical choice for

  7. Would the ‘real’ observed dataset stand up? A critical examination of eight observed gridded climate datasets for China

    International Nuclear Information System (INIS)

    Sun, Qiaohong; Miao, Chiyuan; Duan, Qingyun; Kong, Dongxian; Ye, Aizhong; Di, Zhenhua; Gong, Wei

    2014-01-01

    This research compared and evaluated the spatio-temporal similarities and differences of eight widely used gridded datasets. The datasets include daily precipitation over East Asia (EA), the Climatic Research Unit (CRU) product, the Global Precipitation Climatology Centre (GPCC) product, the University of Delaware (UDEL) product, Precipitation Reconstruction over Land (PREC/L), the Asian Precipitation Highly Resolved Observational (APHRO) product, the Institute of Atmospheric Physics (IAP) dataset from the Chinese Academy of Sciences, and the National Meteorological Information Center dataset from the China Meteorological Administration (CN05). The meteorological variables focus on surface air temperature (SAT) or precipitation (PR) in China. All datasets presented general agreement on the whole spatio-temporal scale, but some differences appeared for specific periods and regions. On a temporal scale, EA shows the highest amount of PR, while APHRO shows the lowest. CRU and UDEL show higher SAT than IAP or CN05. On a spatial scale, the most significant differences occur in western China for PR and SAT. For PR, the difference between EA and CRU is the largest. When compared with CN05, CRU shows higher SAT in the central and southern Northwest river drainage basin, UDEL exhibits higher SAT over the Southwest river drainage system, and IAP has lower SAT in the Tibetan Plateau. The differences in annual mean PR and SAT primarily come from summer and winter, respectively. Finally, potential factors impacting agreement among gridded climate datasets are discussed, including raw data sources, quality control (QC) schemes, orographic correction, and interpolation techniques. The implications and challenges of these results for climate research are also briefly addressed. (paper)

  8. Estimating parameters for probabilistic linkage of privacy-preserved datasets.

    Science.gov (United States)

    Brown, Adrian P; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Boyd, James H

    2017-07-10

    Probabilistic record linkage is a process used to bring together person-based records from within the same dataset (de-duplication) or from disparate datasets using pairwise comparisons and matching probabilities. The linkage strategy and associated match probabilities are often estimated through investigations into data quality and manual inspection. However, as privacy-preserved datasets comprise encrypted data, such methods are not possible. In this paper, we present a method for estimating the probabilities and threshold values for probabilistic privacy-preserved record linkage using Bloom filters. Our method was tested through a simulation study using synthetic data, followed by an application using real-world administrative data. Synthetic datasets were generated with error rates from zero to 20%. Our method was used to estimate parameters (probabilities and thresholds) for de-duplication linkages. Linkage quality was determined by F-measure. Each dataset was privacy-preserved using separate Bloom filters for each field. Match probabilities were estimated using the expectation-maximisation (EM) algorithm on the privacy-preserved data. Threshold cut-off values were determined by an extension to the EM algorithm allowing linkage quality to be estimated for each possible threshold. De-duplication linkages of each privacy-preserved dataset were performed using both estimated and calculated probabilities. Linkage quality using the F-measure at the estimated threshold values was also compared to the highest F-measure. Three large administrative datasets were used to demonstrate the applicability of the probability and threshold estimation technique on real-world data. Linkage of the synthetic datasets using the estimated probabilities produced an F-measure that was comparable to the F-measure using calculated probabilities, even with up to 20% error. Linkage of the administrative datasets using estimated probabilities produced an F-measure that was higher
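
    The core mechanics are easy to sketch. In Bloom-filter linkage, each field value is decomposed into character bigrams, each bigram sets several bits of a fixed-length filter, and encoded records are compared with the Dice coefficient, which tolerates typographical error. The sketch below, with illustrative parameters rather than those of the paper, shows the idea in plain Python.

        import hashlib

        L, K = 1024, 20   # filter length in bits, hash functions per bigram

        def bloom_encode(value: str) -> int:
            """Encode a field as a Bloom filter held in a Python int."""
            bigrams = [value[i:i + 2] for i in range(len(value) - 1)]
            bits = 0
            for gram in bigrams:
                for seed in range(K):
                    h = hashlib.sha256(f"{seed}:{gram}".encode()).digest()
                    bits |= 1 << (int.from_bytes(h[:4], "big") % L)
            return bits

        def dice(a: int, b: int) -> float:
            """Dice similarity of two bit sets."""
            common = bin(a & b).count("1")
            return 2 * common / (bin(a).count("1") + bin(b).count("1"))

        print(dice(bloom_encode("catherine"), bloom_encode("katherine")))  # high
        print(dice(bloom_encode("catherine"), bloom_encode("jonathan")))   # low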

  9. Viking Seismometer PDS Archive Dataset

    Science.gov (United States)

    Lorenz, R. D.

    2016-12-01

    The Viking Lander 2 seismometer operated successfully for over 500 Sols on the Martian surface, recording at least one likely candidate Marsquake. The Viking mission, in an era when data handling hardware (both on board and on the ground) was limited in capability, predated modern planetary data archiving; the ad-hoc repositories of the data, and the very low-level record at NSSDC, were neither convenient to process nor well-known. In an effort supported by the NASA Mars Data Analysis Program, we have converted the bulk of the Viking dataset (namely the 49,000 and 270,000 records made in High and Event modes at 20 and 1 Hz, respectively) into a simple ASCII table format. Additionally, since wind-generated lander motion is a major component of the signal, contemporaneous meteorological data are included in summary records to facilitate correlation. These datasets are being archived at the PDS Geosciences Node. In addition to brief instrument and dataset descriptions, the archive includes code snippets in the freely-available language 'R' to demonstrate plotting and analysis. Further, we present examples of lander-generated noise, associated with the sampler arm, instrument dumps and other mechanical operations.

  10. Equivalent Vectors

    Science.gov (United States)

    Levine, Robert

    2004-01-01

    The cross-product is a mathematical operation that is performed between two 3-dimensional vectors. The result is a vector that is orthogonal or perpendicular to both of them. Students encountering this for the first time in Calculus III are taught that if AxB = AxC, it does not necessarily follow that B = C. This seemed baffling. The…
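
    The resolution of the puzzle is that the cross-product only determines a vector up to components parallel to A: AxB = AxC forces Ax(B - C) = 0, which says B - C is parallel to A, not that it vanishes. A minimal counterexample, written in LaTeX:

        % A x B = A x C does not imply B = C:
        \mathbf{A} = (1,0,0), \qquad \mathbf{B} = (0,1,0), \qquad \mathbf{C} = (1,1,0)
        \\
        \mathbf{A}\times\mathbf{B} = (0,0,1) = \mathbf{A}\times\mathbf{C},
        \qquad \text{yet } \mathbf{B} \neq \mathbf{C},
        \quad \text{since } \mathbf{C} - \mathbf{B} = (1,0,0) \parallel \mathbf{A}.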

  11. Chemoselective ligation and antigen vectorization.

    Science.gov (United States)

    Gras-Masse, H

    2001-01-01

    The interest in cocktail-lipopeptide vaccines has now been confirmed by phase I clinical trials: highly diversified B-, T-helper or cytotoxic T-cell epitopes can be combined with a lipophilic vector for the induction of B- and T-cell responses of predetermined specificity. With the goal of producing an improved vaccine that should ideally induce a multispecific response in non-selected populations, increasing the diversity of the immunizing mixture represents one of the most obvious strategies. The selective delivery of antigens to professional antigen-presenting cells represents another promising approach for the improvement of vaccine efficacy. In this context, the mannose-receptor represents an attractive entry point for the targeting to dendritic cells of antigens linked to clustered glycosides or glycomimetics. In all cases, highly complex but fully characterized molecules must be produced. To develop a modular and flexible strategy which could be generally applicable to a large set of peptide antigens, we elected to explore the potentialities of chemoselective ligation methods. The hydrazone bond was found particularly reliable and fully compatible with sulphide ligation. Hydrazone/thioether orthogonal ligation systems could be developed to account for the nature of the antigens and the solubility of the vector systems. Copyright 2001 The International Association for Biologicals.

  12. Speculative dynamic vectorization to assist static vectorization in a HW/SW co-designed environment

    OpenAIRE

    Kumar, R.; Martinez, A.; Gonzalez, A.

    2013-01-01

    Compiler based static vectorization is used widely to extract data level parallelism from computation intensive applications. Static vectorization is very effective in vectorizing traditional array based applications. However, the compiler's inability to reorder ambiguous memory references severely limits vectorization opportunities, especially in pointer rich applications. HW/SW co-designed processors provide an excellent opportunity to optimize the applications at runtime. The availability of dy...

  13. Identification and optimization of classifier genes from multi-class earthworm microarray dataset.

    Directory of Open Access Journals (Sweden)

    Ying Li

    Full Text Available Monitoring, assessment and prediction of environmental risks that chemicals pose demand rapid and accurate diagnostic assays. A variety of toxicological effects have been associated with the explosive compounds TNT and RDX. One important goal of microarray experiments is to discover novel biomarkers for toxicity evaluation. We have developed an earthworm microarray containing 15,208 unique oligo probes and have used it to profile gene expression in 248 earthworms exposed to TNT, RDX or neither. We assembled a new machine learning pipeline consisting of several well-established feature filtering/selection and classification techniques to analyze the 248-array dataset in order to construct classifier models that can separate earthworm samples into three groups: control, TNT-treated, and RDX-treated. First, a total of 869 genes differentially expressed in response to TNT or RDX exposure were identified using a univariate statistical algorithm of class comparison. Then, decision tree-based algorithms were applied to select a subset of 354 classifier genes, which were ranked by their overall weight of significance. A multiclass support vector machine (MC-SVM) method and an unsupervised K-mean clustering method were applied to independently refine the classifier, producing smaller subsets of 39 and 30 classifier genes, respectively, with 11 common genes being potential biomarkers. The combined 58 genes were considered the refined subset and used to build MC-SVM and clustering models with classification accuracy of 83.5% and 56.9%, respectively. This study demonstrates that the machine learning approach can be used to identify and optimize a small subset of classifier/biomarker genes from high dimensional datasets and generate classification models of acceptable precision for multiple classes.
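
    The pipeline structure described above maps naturally onto standard tooling. The sketch below is a scikit-learn stand-in, assuming a univariate ANOVA filter, a multiclass SVM and synthetic data in place of the authors' exact algorithms and the real 248-array matrix; it illustrates the filter-then-classify shape, not the published result.

        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Synthetic stand-in for the high-dimensional expression matrix.
        X, y = make_classification(n_samples=248, n_features=2000,
                                   n_informative=60, n_classes=3,
                                   random_state=0)

        clf = make_pipeline(
            SelectKBest(f_classif, k=58),   # univariate filter to 58 "genes"
            StandardScaler(),
            SVC(kernel="linear"),           # one-vs-one multiclass SVM
        )
        print(cross_val_score(clf, X, y, cv=5).mean())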

  14. Search for outlying data points in multivariate solar activity data sets

    International Nuclear Information System (INIS)

    Bartkowiak, A.; Jakimiec, M.

    1989-01-01

    The aim of this paper is the investigation of outlying data points in solar activity data sets. Two statistical methods for identifying multivariate outliers are presented: the chi2-plot method, based on the analysis of Mahalanobis distances, and a method based on principal component analysis, i.e. on scatter diagrams constructed from the first two or last two eigenvectors. We demonstrate the usefulness of these methods by applying them to some data of solar activity. The methods allow quite precise identification of data vectors containing errors, as well as untypical vectors, i.e. vectors with unusually large values or with values revealing relations that are untypical compared with the common relations between the appropriate variables. 12 refs., 7 figs., 8 tabs. (author)
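
    The chi2-plot idea rests on a standard fact: for approximately multivariate-normal data, squared Mahalanobis distances follow a chi-squared distribution with p degrees of freedom, so points beyond a high quantile are candidate outliers. A minimal sketch with synthetic data standing in for the solar-activity vectors:

        import numpy as np
        from scipy.stats import chi2

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 4))   # stand-in solar-activity vectors
        X[:3] += 6.0                    # inject a few gross errors

        mu = X.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
        d2 = np.einsum("ij,jk,ik->i", X - mu, cov_inv, X - mu)

        cutoff = chi2.ppf(0.999, df=X.shape[1])   # chi-squared quantile
        print(np.where(d2 > cutoff)[0])           # indices of flagged vectors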

  15. Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds

    Science.gov (United States)

    Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.

    2018-05-01

    Deep Learning has been massively used for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been recently studied. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of a CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22 % of total error, 4.10 % of type I error, and 15.07 % of type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while type II error is slightly higher). The method was also tested on a very high-density LIDAR point cloud, resulting in 4.02 % of total error, 2.15 % of type I error and 6.14 % of type II error.
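
    The key efficiency step, converting the whole cloud to one image rather than one image per point, amounts to binning points into a 2D grid and storing a per-cell feature. The sketch below keeps the lowest return per cell; the cell size and feature choice are illustrative assumptions, not the paper's exact configuration.

        import numpy as np

        def cloud_to_image(xyz: np.ndarray, cell: float = 1.0) -> np.ndarray:
            """Rasterize a point cloud into a single min-height image."""
            ij = ((xyz[:, :2] - xyz[:, :2].min(axis=0)) / cell).astype(int)
            h, w = ij.max(axis=0) + 1
            img = np.full((h, w), np.nan)
            for (i, j), z in zip(ij, xyz[:, 2]):
                if np.isnan(img[i, j]) or z < img[i, j]:
                    img[i, j] = z       # keep the lowest return per cell
            return img

        pts = np.random.rand(10000, 3) * [100, 100, 10]  # stand-in cloud
        print(cloud_to_image(pts).shape)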

  16. Spin waves in terbium. III. Magnetic anisotropy at zero wave vector

    DEFF Research Database (Denmark)

    Houmann, Jens Christian Gylden; Jensen, J.; Touborg, P.

    1975-01-01

    The energy gap at zero wave vector in the spin-wave dispersion relation of ferromagnetic Tb has been studied by inelastic neutron scattering. The energy was measured as a function of temperature and applied magnetic field, and the dynamic anisotropy parameters were deduced from the results. The axial anisotropy is found to depend sensitively on the orientation of the magnetic moments in the basal plane. This behavior is shown to be a convincing indication of considerable two-ion contributions to the magnetic anisotropy at zero wave vector. With the exception of the sixfold basal-plane anisotropy, the deduced parameters reflect the effects of zero-point deviations from the fully aligned ground state, and we tentatively propose polarization-dependent two-ion couplings as their origin.

  17. RAPID INSPECTION OF PAVEMENT MARKINGS USING MOBILE LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    H. Zhang

    2016-06-01

    Full Text Available This study aims at building a robust semi-automated pavement marking extraction workflow based on the use of mobile LiDAR point clouds. The proposed workflow consists of three components: preprocessing, extraction, and classification. In preprocessing, the mobile LiDAR point clouds are converted into the radiometrically corrected intensity imagery of the road surface. Then the pavement markings are automatically extracted with the intensity using a set of algorithms, including Otsu’s thresholding, neighbor-counting filtering, and region growing. Finally, the extracted pavement markings are classified with the geometric parameters using a manually defined decision tree. Case studies are conducted using the mobile LiDAR dataset acquired in Xiamen (Fujian, China) with different road environments by the RIEGL VMX-450 system. The results demonstrated that the proposed workflow and our software tool can achieve 93% in completeness, 95% in correctness, and 94% in F-score when using the Xiamen dataset.
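
    The extraction stage is straightforward to prototype: Otsu's method picks a global intensity threshold separating bright markings from asphalt, and small connected components are then discarded as noise. The sketch below uses scikit-image and a synthetic intensity image as stand-ins for the authors' software tool and data.

        import numpy as np
        from skimage.filters import threshold_otsu
        from skimage.measure import label, regionprops

        intensity = np.random.rand(200, 200)           # stand-in intensity image
        mask = intensity > threshold_otsu(intensity)   # bright candidate pixels

        labels = label(mask)                           # connected components
        keep = [r.label for r in regionprops(labels) if r.area >= 20]
        markings = np.isin(labels, keep)               # drop small/noisy blobs
        print(markings.sum(), "marking pixels kept")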

  18. Homogenised Australian climate datasets used for climate change monitoring

    International Nuclear Information System (INIS)

    Trewin, Blair; Jones, David; Collins, Dean; Jovanovic, Branislava; Braganza, Karl

    2007-01-01

    Full text: The Australian Bureau of Meteorology has developed a number of datasets for use in climate change monitoring. These datasets typically cover 50-200 stations distributed as evenly as possible over the Australian continent, and have been subject to detailed quality control and homogenisation. The time period over which data are available for each element is largely determined by the availability of data in digital form. Whilst nearly all Australian monthly and daily precipitation data have been digitised, a significant quantity of pre-1957 data (for temperature and evaporation) or pre-1987 data (for some other elements) remains to be digitised, and is not currently available for use in the climate change monitoring datasets. In the case of temperature and evaporation, the start date of the datasets is also determined by major changes in instruments or observing practices for which no adjustment is feasible at the present time. The datasets currently available cover: monthly and daily precipitation (most stations commence 1915 or earlier, with many extending back to the late 19th century, and a few to the mid-19th century); annual temperature (commences 1910); daily temperature (commences 1910, with limited station coverage pre-1957); twice-daily dewpoint/relative humidity (commences 1957); monthly pan evaporation (commences 1970); cloud amount (commences 1957) (Jovanovic et al. 2007). As well as the station-based datasets listed above, an additional dataset being developed for use in climate change monitoring (and other applications) covers tropical cyclones in the Australian region. This is described in more detail in Trewin (2007). The datasets already developed are used in analyses of observed climate change, which are available through the Australian Bureau of Meteorology website (http://www.bom.gov.au/silo/products/cli_chg/). They are also used as a basis for routine climate monitoring, and in the datasets used for the development of seasonal

  19. Vectorization at the KENO-IV code

    International Nuclear Information System (INIS)

    Asai, K.; Higuchi, K.; Katakura, J.

    1986-01-01

    The multigroup criticality safety code KENO-IV has been vectorized and tested on the FACOM VP-100 vector processor. At first, the vectorized KENO-IV was slower on a scalar processor than the original code by a factor of 1.4 because of the overhead introduced by vectorization. After modifications to the algorithms and vectorization techniques, the vectorized version became faster than the original by a factor of 1.4 on the vector processor. For further speedup of the code, improvements to the compiler and hardware, especially the addition of Monte Carlo pipelines to the vector processor, are discussed

  20. Reciprocity relationships in vector acoustics and their application to vector field calculations.

    Science.gov (United States)

    Deal, Thomas J; Smith, Kevin B

    2017-08-01

    The reciprocity equation commonly stated in underwater acoustics relates pressure fields and monopole sources. It is often used to predict the pressure measured by a hydrophone for multiple source locations by placing a source at the hydrophone location and calculating the field everywhere for that source. A similar equation that governs the orthogonal components of the particle velocity field is needed to enable this computational method to be used for acoustic vector sensors. This paper derives a general reciprocity equation that accounts for both monopole and dipole sources. This vector-scalar reciprocity equation can be used to calculate individual components of the received vector field by altering the source type used in the propagation calculation. This enables a propagation model to calculate the received vector field components for an arbitrary number of source locations with a single model run for each vector field component instead of requiring one model run for each source location. Application of the vector-scalar reciprocity principle is demonstrated with analytic solutions for a range-independent environment and with numerical solutions for a range-dependent environment using a parabolic equation model.

  1. Vectors and their applications

    CERN Document Server

    Pettofrezzo, Anthony J

    2005-01-01

    Geared toward undergraduate students, this text illustrates the use of vectors as a mathematical tool in plane synthetic geometry, plane and spherical trigonometry, and analytic geometry of two- and three-dimensional space. Its rigorous development includes a complete treatment of the algebra of vectors in the first two chapters.Among the text's outstanding features are numbered definitions and theorems in the development of vector algebra, which appear in italics for easy reference. Most of the theorems include proofs, and coordinate position vectors receive an in-depth treatment. Key concept

  2. Influence of the velocity vector base relocation to the center of mass of the interrogation area on PIV accuracy

    Directory of Open Access Journals (Sweden)

    Kouba Jan

    2014-03-01

    Full Text Available This paper is aimed at a modification of the calculation algorithm used in data processing for the PIV (Particle Image Velocimetry) method. The modification of the standard multi-step correlation algorithm is based on using the centre of mass of the interrogation area, instead of its geometrical centre, to define the initial point of the respective vector. This paper describes the principle of the initial point-vector assignment and the corresponding data processing methodology, including the test track analysis. Both approaches are compared in terms of accuracy in the conclusion. The accuracy test is performed using synthetic and real data.
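
    The modification itself is a one-line change to where each vector is anchored: instead of the window's geometric centre, use the intensity-weighted centre of mass of the interrogation area. A minimal numpy sketch with a synthetic window:

        import numpy as np

        def mass_centre(window: np.ndarray) -> tuple[float, float]:
            """Intensity-weighted centre of mass of an interrogation window."""
            total = window.sum()
            ys, xs = np.indices(window.shape)
            return (ys * window).sum() / total, (xs * window).sum() / total

        win = np.zeros((32, 32))
        win[20:24, 5:9] = 1.0       # particle images clustered off-centre
        print("geometric centre:", (15.5, 15.5))
        print("centre of mass:  ", mass_centre(win))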

  3. Vectorized and multitasked solution of the few-group neutron diffusion equations

    International Nuclear Information System (INIS)

    Zee, S.K.; Turinsky, P.J.; Shayer, Z.

    1989-01-01

    A numerical algorithm with parallelism was used to solve the two-group, multidimensional neutron diffusion equations on computers characterized by shared memory, vector pipeline, and multi-CPU architecture features. Specifically, solutions were obtained on the Cray X/MP-48, the IBM-3090 with vector facilities, and the FPS-164. The material-centered mesh finite difference method approximation and outer-inner iteration method were employed. Parallelism was introduced in the inner iterations using the cyclic line successive overrelaxation iterative method and solving in parallel across lines. The outer iterations were completed using the Chebyshev semi-iterative method that allows parallelism to be introduced in both space and energy groups. For the three-dimensional model, power, soluble boron, and transient fission product feedbacks were included. Concentrating on the pressurized water reactor (PWR), the thermal-hydraulic calculation of moderator density assumed single-phase flow and a closed flow channel, allowing parallelism to be introduced in the solution across the radial plane. Using a pinwise-detail, quarter-core model of a typical PWR in cycle 1, for the two-dimensional model without feedback the measured million floating point operations per second (MFLOPS)/vector speedups were 83/11.7, 18/2.2, and 2.4/5.6 on the Cray, IBM, and FPS without multitasking, respectively. Lower performance was observed with a coarser mesh, i.e., shorter vector length, due to vector pipeline start-up. For an 18 x 18 x 30 (x-y-z) three-dimensional model with feedback of the same core, MFLOPS/vector speedups of approximately 61/6.7 and an execution time of 0.8 CPU seconds on the Cray without multitasking were measured. Finally, using two CPUs and the vector pipelines of the Cray, a multitasking efficiency of 81% was noted for the three-dimensional model.

  4. Introduction of a simple-model-based land surface dataset for Europe

    Science.gov (United States)

    Orth, Rene; Seneviratne, Sonia I.

    2015-04-01

    Land surface hydrology can play a crucial role during extreme events such as droughts, floods and even heat waves. We introduce in this study a new hydrological dataset for Europe that consists of soil moisture, runoff and evapotranspiration (ET). It is derived with a simple water balance model (SWBM) forced with precipitation, temperature and net radiation. The SWBM dataset extends over the period 1984-2013 with a daily time step and 0.5° × 0.5° resolution. We employ a novel calibration approach, in which we consider 300 random parameter sets chosen from an observation-based range. Using several independent validation datasets representing soil moisture (or terrestrial water content), ET and streamflow, we identify the best performing parameter set and hence the new dataset. To illustrate its usefulness, the SWBM dataset is compared against several state-of-the-art datasets (ERA-Interim/Land, MERRA-Land, GLDAS-2-Noah, simulations of the Community Land Model Version 4), using all validation datasets as reference. For soil moisture dynamics it outperforms the benchmarks. Therefore the SWBM soil moisture dataset constitutes a reasonable alternative to sparse measurements, little validated model results, or proxy data such as precipitation indices. Also in terms of runoff the SWBM dataset performs well, whereas the evaluation of the SWBM ET dataset is overall satisfactory, but the dynamics are less well captured for this variable. This highlights the limitations of the dataset, as it is based on a simple model that uses uniform parameter values. Hence some processes impacting ET dynamics may not be captured, and quality issues may occur in regions with complex terrain. Even though the SWBM is well calibrated, it cannot replace more sophisticated models; but as their calibration is a complex task the present dataset may serve as a benchmark in future. In addition we investigate the sources of skill of the SWBM dataset and find that the parameter set has a similar
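
    To make the "simple water balance model" concrete, the toy bucket model below updates soil moisture daily from precipitation, with runoff and evapotranspiration growing as the bucket fills. The functional forms and parameter values are illustrative assumptions, not the calibrated SWBM behind the dataset.

        import numpy as np

        CAP, GAMMA, BETA = 400.0, 2.0, 0.7   # capacity [mm], shape, ET factor

        def step(s, precip, net_rad_mm):
            """One daily update of the bucket state s [mm]."""
            frac = s / CAP
            runoff = precip * frac ** GAMMA      # more runoff when wetter
            et = net_rad_mm * BETA * frac        # ET limited by moisture/energy
            s = min(max(s + precip - runoff - et, 0.0), CAP)
            return s, runoff, et

        s = 200.0
        for p in np.random.default_rng(0).gamma(1.0, 3.0, 365):  # mock rainfall
            s, q, et = step(s, p, net_rad_mm=4.0)
        print(f"end-of-year soil moisture: {s:.1f} mm")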

  5. Data Mining for Imbalanced Datasets: An Overview

    Science.gov (United States)

    Chawla, Nitesh V.

    A dataset is imbalanced if the classification categories are not approximately equally represented. Recent years brought increased interest in applying machine learning techniques to difficult "real-world" problems, many of which are characterized by imbalanced data. Additionally the distribution of the testing data may differ from that of the training data, and the true misclassification costs may be unknown at learning time. Predictive accuracy, a popular choice for evaluating performance of a classifier, might not be appropriate when the data is imbalanced and/or the costs of different errors vary markedly. In this Chapter, we discuss some of the sampling techniques used for balancing the datasets, and the performance measures more appropriate for mining imbalanced datasets.
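
    The simplest of the sampling techniques surveyed is random oversampling: replicate minority-class examples until the classes are even. The sketch below uses scikit-learn's resample utility on synthetic data; SMOTE and cost-sensitive learning are the usual refinements.

        import numpy as np
        from sklearn.utils import resample

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 5))
        y = (rng.random(1000) < 0.05).astype(int)    # ~5% minority class

        X_min, X_maj = X[y == 1], X[y == 0]
        X_up = resample(X_min, replace=True,
                        n_samples=len(X_maj), random_state=0)

        X_bal = np.vstack([X_maj, X_up])
        y_bal = np.array([0] * len(X_maj) + [1] * len(X_up))
        print(np.bincount(y_bal))                    # classes now balanced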

  6. Convexity and Marginal Vectors

    NARCIS (Netherlands)

    van Velzen, S.; Hamers, H.J.M.; Norde, H.W.

    2002-01-01

    In this paper we construct sets of marginal vectors of a TU game with the property that if the marginal vectors from these sets are core elements, then the game is convex. This approach leads to new upper bounds on the number of marginal vectors needed to characterize convexity. Another result is that

  7. System for Automated Calibration of Vector Modulators

    Science.gov (United States)

    Lux, James; Boas, Amy; Li, Samuel

    2009-01-01

    Vector modulators are used to impose baseband modulation on RF signals, but non-ideal behavior limits the overall performance. The non-ideal behavior of the vector modulator is compensated using data collected with the use of an automated test system driven by a LabVIEW program that systematically applies thousands of control-signal values to the device under test and collects RF measurement data. The technology innovation automates several steps in the process. First, an automated test system, using computer-controlled digital-to-analog converters (DACs) and a computer-controlled vector network analyzer (VNA), can systematically apply different I and Q signals (which represent the complex number by which the RF signal is multiplied) to the vector modulator under test (VMUT), while measuring the RF performance, specifically gain and phase. The automated test system uses the LabVIEW software to control the test equipment, collect the data, and write it to a file. The input to the LabVIEW program is either user input for systematic variation, or a file containing specific test values that should be fed to the VMUT. The output file contains both the control signals and the measured data. The second step is to post-process the file to determine the correction functions as needed. The result of the entire process is a tabular representation, which allows translation of a desired I/Q value to the required analog control signals to produce a particular RF behavior. In some applications, corrected performance is needed only for a limited range. If the vector modulator is being used as a phase shifter, there is only a need to correct I and Q values that represent points on a circle, not the entire plane. This innovation has been used to calibrate 2-GHz MMIC (monolithic microwave integrated circuit) vector modulators in the High EIRP Cluster Array project (EIRP: effective isotropic radiated power). These calibrations were then used to create
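
    The final tabular correction can be sketched as a nearest-neighbour inversion: measure the complex response for a grid of control settings, then look up which setting best produces a desired multiplier. The device response below is invented for illustration and stands in for real VNA measurements.

        import numpy as np
        from scipy.spatial import cKDTree

        iq = np.mgrid[-1:1:64j, -1:1:64j].reshape(2, -1).T   # control grid
        measured = (iq[:, 0] + 1j * iq[:, 1]) * 0.9 + 0.03   # mock VMUT response

        table = cKDTree(np.column_stack([measured.real, measured.imag]))

        desired = 0.5 * np.exp(1j * np.deg2rad(30))   # wanted gain/phase
        _, idx = table.query([desired.real, desired.imag])
        print("apply control I/Q:", iq[idx])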

  8. Integrating Transgenic Vector Manipulation with Clinical Interventions to Manage Vector-Borne Diseases.

    Directory of Open Access Journals (Sweden)

    Kenichi W Okamoto

    2016-03-01

    Full Text Available Many vector-borne diseases lack effective vaccines and medications, and the limitations of traditional vector control have inspired novel approaches based on using genetic engineering to manipulate vector populations and thereby reduce transmission. Yet both the short- and long-term epidemiological effects of these transgenic strategies are highly uncertain. If neither vaccines, medications, nor transgenic strategies can by themselves suffice for managing vector-borne diseases, integrating these approaches becomes key. Here we develop a framework to evaluate how clinical interventions (i.e., vaccination and medication) can be integrated with transgenic vector manipulation strategies to prevent disease invasion and reduce disease incidence. We show that the ability of clinical interventions to accelerate disease suppression can depend on the nature of the transgenic manipulation deployed (e.g., whether vector population reduction or replacement is attempted). We find that making a specific, individual strategy highly effective may not be necessary for attaining public-health objectives, provided suitable combinations can be adopted. However, we show how combining only partially effective antimicrobial drugs or vaccination with transgenic vector manipulations that merely temporarily lower vector competence can amplify disease resurgence following transient suppression. Thus, transgenic vector manipulation that cannot be sustained can have adverse consequences: consequences which ineffective clinical interventions can at best only mitigate, and at worst temporarily exacerbate. This result, which arises from differences between the time scale on which the interventions affect disease dynamics and the time scale of host population dynamics, highlights the importance of accounting for the potential delay in the effects of deploying public health strategies on long-term disease incidence. We find that for systems at the disease-endemic equilibrium, even

  9. A Point-Wise Quantification of Asymmetry Using Deformation Fields

    DEFF Research Database (Denmark)

    Ólafsdóttir, Hildur; Lanche, Stephanie; Darvann, Tron Andre

    2007-01-01

    of the resulting displacement vectors on the left and right side of the symmetry plane, gives a point-wise measure of asymmetry. The asymmetry measure was applied to the study of Crouzon syndrome using Micro CT scans of genetically modified mice. Crouzon syndrome is characterised by the premature fusion of cranial...

  10. Web mapping system for complex processing and visualization of environmental geospatial datasets

    Science.gov (United States)

    Titov, Alexander; Gordov, Evgeny; Okladnikov, Igor

    2016-04-01

    Environmental geospatial datasets (meteorological observations, modeling and reanalysis results, etc.) are used in numerous research applications. Due to a number of objective reasons such as the inherent heterogeneity of environmental datasets, big dataset volume, complexity of the data models used, and syntactic and semantic differences that complicate the creation and use of unified terminology, the development of environmental geodata access, processing and visualization services as well as client applications turns out to be quite a sophisticated task. According to general INSPIRE requirements for data visualization, geoportal web applications have to provide such standard functionality as data overview, image navigation, scrolling, scaling and graphical overlay, displaying map legends and corresponding metadata information. It should be noted that modern web mapping systems as integrated geoportal applications are developed based on the SOA and might be considered as complexes of interconnected software tools for working with geospatial data. In the report a complex web mapping system including a GIS web client and corresponding OGC services for working with a geospatial (NetCDF, PostGIS) dataset archive is presented. There are three basic tiers of the GIS web client in it:
    1. A tier of geospatial metadata retrieved from a central MySQL repository and represented in JSON format.
    2. A tier of JavaScript objects implementing methods handling: NetCDF metadata; a Task XML object for configuring user calculations, input and output formats; and OGC WMS/WFS cartographical services.
    3. A graphical user interface (GUI) tier representing JavaScript objects realizing the web application business logic.
    The metadata tier consists of a number of JSON objects containing technical information describing geospatial datasets (such as spatio-temporal resolution, meteorological parameters, valid processing methods, etc). The middleware tier of JavaScript objects implementing methods for handling geospatial

  11. Complex Polynomial Vector Fields

    DEFF Research Database (Denmark)

    The two branches of dynamical systems, continuous and discrete, correspond to the study of differential equations (vector fields) and iteration of mappings respectively. In holomorphic dynamics, the systems studied are restricted to those described by holomorphic (complex analytic) or meromorphic (allowing poles as singularities) functions. There already exists a well-developed theory for iterative holomorphic dynamical systems, and successful relations found between iteration theory and flows of vector fields have been one of the main motivations for the recent interest in holomorphic vector fields. Since the class of complex polynomial vector fields in the plane is natural to consider, it is remarkable that its study has only begun very recently. There are numerous fundamental questions that are still open, both in the general classification of these vector fields, the decomposition

  12. Constructing Support Vector Machine Ensembles for Cancer Classification Based on Proteomic Profiling

    Institute of Scientific and Technical Information of China (English)

    Yong Mao; Xiao-Bo Zhou; Dao-Ying Pi; You-Xian Sun

    2005-01-01

    In this study, we present a constructive algorithm for training cooperative support vector machine ensembles (CSVMEs). CSVME combines ensemble architecture design with cooperative training for individual SVMs in ensembles. Unlike most previous studies on training ensembles, CSVME puts emphasis on both accuracy and collaboration among individual SVMs in an ensemble. A group of SVMs selected on the basis of recursive classifier elimination is used in CSVME, and the number of individual SVMs selected to construct the CSVME is determined by 10-fold cross-validation. This kind of SVME has been tested on two ovarian cancer datasets previously obtained by proteomic mass spectrometry. By combining several individual SVMs, the proposed method achieves better performance than an SVME built from all base SVMs.
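
    A hedged scikit-learn sketch of the idea follows: several linear SVMs, each built on a feature subset ranked by recursive elimination, combined by soft voting. It mirrors the flavour of CSVME on synthetic data rather than reproducing its exact cooperative training scheme.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import VotingClassifier
        from sklearn.feature_selection import RFE
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=200, n_features=500,
                                   n_informative=30, random_state=0)

        members = [
            (f"svm{k}",
             make_pipeline(RFE(SVC(kernel="linear"), n_features_to_select=k),
                           SVC(kernel="linear", probability=True)))
            for k in (10, 25, 50)
        ]
        ensemble = VotingClassifier(members, voting="soft")
        print(cross_val_score(ensemble, X, y, cv=5).mean())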

  13. DEVELOPING VERIFICATION SYSTEMS FOR BUILDING INFORMATION MODELS OF HERITAGE BUILDINGS WITH HETEROGENEOUS DATASETS

    Directory of Open Access Journals (Sweden)

    L. Chow

    2017-08-01

    Full Text Available The digitization and abstraction of existing buildings into building information models requires the translation of heterogeneous datasets that may include CAD, technical reports, historic texts, archival drawings, terrestrial laser scanning, and photogrammetry into model elements. In this paper, we discuss a project undertaken by the Carleton Immersive Media Studio (CIMS) that explored the synthesis of heterogeneous datasets for the development of a building information model (BIM) for one of Canada’s most significant heritage assets – the Centre Block of the Parliament Hill National Historic Site. The scope of the project included the development of an as-found model of the century-old, six-story building in anticipation of specific model uses for an extensive rehabilitation program. The as-found Centre Block model was developed in Revit using primarily point cloud data from terrestrial laser scanning. The data was captured by CIMS in partnership with Heritage Conservation Services (HCS), Public Services and Procurement Canada (PSPC), using a Leica C10 and P40 (exterior and large interior spaces) and a Faro Focus (small to mid-sized interior spaces). Secondary sources such as archival drawings, photographs, and technical reports were referenced in cases where point cloud data was not available. As a result of working with heterogeneous datasets, a verification system was introduced in order to communicate to model users/viewers the source of information for each building element within the model.

  14. Development of ε-insensitive smooth support vector regression for predicting minimum miscibility pressure in CO2 flooding

    Directory of Open Access Journals (Sweden)

    Shahram Mollaiy-Berneti

    2018-02-01

    Full Text Available Successful design of a carbon dioxide (CO2) flooding in enhanced oil recovery projects mostly depends on accurate determination of the CO2-crude oil minimum miscibility pressure (MMP). Because experimental determination of MMP is expensive and time-consuming, developing a fast and robust method to predict it is necessary. In this study, a new method based on ε-insensitive smooth support vector regression (ε-SSVR) is introduced to predict MMP for both pure and impure CO2 gas injection cases. The proposed ε-SSVR is developed using a dataset of reservoir temperature, crude oil composition and composition of injected CO2. For comparison, a feed-forward neural network and a radial basis function network were applied to the same dataset. The results show that the suggested ε-SSVR has acceptable reliability and robustness in comparison with the two other models. Thus, the proposed method can be considered as an alternative way to monitor MMP in the miscible flooding process.
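
    The ε-insensitive idea is what scikit-learn's SVR exposes through its epsilon parameter (errors inside the tube are not penalized); the smooth ε-SSVR reformulation in the paper differs mainly in how the optimization is solved. The sketch below uses mock features standing in for temperature and fluid compositions, not real MMP data.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        X = rng.normal(size=(150, 3))                  # mock predictors
        y = 20 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.3, 150)  # mock MMP

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = make_pipeline(StandardScaler(),
                              SVR(kernel="rbf", epsilon=0.1, C=10))
        model.fit(X_tr, y_tr)
        print("held-out R^2:", model.score(X_te, y_te))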

  15. Point-splitting regularization of composite operators and anomalies

    International Nuclear Information System (INIS)

    Novotny, J.; Schnabl, M.

    2000-01-01

    The point-splitting regularization technique for composite operators is discussed in connection with anomaly calculation. We present a pedagogical and self-contained review of the topic with an emphasis on the technical details. We also develop simple algebraic tools to handle the path ordered exponential insertions used within the covariant and non-covariant version of the point-splitting method. The method is then applied to the calculation of the chiral, vector, trace, translation and Lorentz anomalies within diverse versions of the point-splitting regularization and a connection between the results is described. As an alternative to the standard approach we use the idea of deformed point-split transformation and corresponding Ward-Takahashi identities rather than an application of the equation of motion, which seems to reduce the complexity of the calculations. (orig.)

  16. Vector financial rogue waves

    International Nuclear Information System (INIS)

    Yan, Zhenya

    2011-01-01

    The coupled nonlinear volatility and option pricing model presented recently by Ivancevic is investigated, which generates a leverage effect, i.e., stock volatility is (negatively) correlated to stock returns, and can be regarded as a coupled nonlinear wave alternative of the Black–Scholes option pricing model. In this Letter, we analytically propose vector financial rogue waves of the coupled nonlinear volatility and option pricing model without an embedded w-learning. Moreover, we exhibit their dynamical behaviors for chosen different parameters. The vector financial rogue wave (rogon) solutions may be used to describe the possible physical mechanisms for the rogue wave phenomena and to further stimulate research on, and potential applications of, vector rogue waves in the financial markets and other related fields. Highlights: We investigate the coupled nonlinear volatility and option pricing model. We analytically present vector financial rogue waves. The vector financial rogue waves may be used to describe extreme events in financial markets. These results may stimulate further research on, and potential applications of, vector rogue waves.

  17. Video Vectorization via Tetrahedral Remeshing.

    Science.gov (United States)

    Wang, Chuan; Zhu, Jie; Guo, Yanwen; Wang, Wenping

    2017-02-09

    We present a video vectorization method that generates a video in vector representation from an input video in raster representation. A vector-based video representation offers the benefits of vector graphics, such as compactness and scalability. The vector video we generate is represented by a simplified tetrahedral control mesh over the spatial-temporal video volume, with color attributes defined at the mesh vertices. We present novel techniques for simplification and subdivision of a tetrahedral mesh to achieve a high simplification ratio while preserving features and ensuring color fidelity. From an input raster video, our method is capable of generating a compact video in vector representation that allows a faithful reconstruction with low reconstruction errors.

  18. Mass effects in three-point chronological current correlators in n-dimensional multifermion models

    International Nuclear Information System (INIS)

    Kucheryavyj, V.I.

    1991-01-01

    Three types of quantities associated with three-point chronological fermion-current correlators having arbitrary Lorentz and internal structure are calculated in n-dimensional multifermion models with different masses. The analysis of vector and axial-vector Ward identities for regular (finite) and dimensionally regularized values of these quantities is carried out. Quantum corrections to the canonical Ward identities are obtained. These corrections are generally homogeneous functions of zeroth order in the masses, and under certain conditions they reduce to the known axial-vector anomalies. The structure and properties of quantum corrections to AVV and AAA correlators in four-dimensional space-time are investigated in detail

  19. A hybrid organic-inorganic perovskite dataset

    Science.gov (United States)

    Kim, Chiho; Huan, Tran Doan; Krishnan, Sridevi; Ramprasad, Rampi

    2017-05-01

    Hybrid organic-inorganic perovskites (HOIPs) have been attracting a great deal of attention due to their versatility of electronic properties and fabrication methods. We prepare a dataset of 1,346 HOIPs, which features 16 organic cations, 3 group-IV cations and 4 halide anions. Using a combination of an atomic structure search method and density functional theory calculations, the optimized structures, the bandgap, the dielectric constant, and the relative energies of the HOIPs are uniformly prepared and validated by comparing with relevant experimental and/or theoretical data. We make the dataset available at Dryad Digital Repository, NoMaD Repository, and Khazana Repository (http://khazana.uconn.edu/), hoping that it could be useful for future data-mining efforts that can explore possible structure-property relationships and phenomenological models. Progressive extension of the dataset is expected as new organic cations become appropriate within the HOIP framework, and as additional properties are calculated for the new compounds found.

  20. Genomics dataset of unidentified disclosed isolates

    Directory of Open Access Journals (Sweden)

    Bhagwan N. Rekadwad

    2016-09-01

    Full Text Available Analysis of DNA sequences is necessary for higher hierarchical classification of the organisms. It gives clues about the characteristics of organisms and their taxonomic position. This dataset is chosen to find complexities in the unidentified DNA in the disclosed patents. A total of 17 unidentified DNA sequences were thoroughly analyzed. The quick response codes were generated. AT/GC content of the DNA sequences analysis was carried out. The QR is helpful for quick identification of isolates. AT/GC content is helpful for studying their stability at different temperatures. Additionally, a dataset on cleavage code and enzyme code studied under the restriction digestion study, which helpful for performing studies using short DNA sequences was reported. The dataset disclosed here is the new revelatory data for exploration of unique DNA sequences for evaluation, identification, comparison and analysis. Keywords: BioLABs, Blunt ends, Genomics, NEB cutter, Restriction digestion, Short DNA sequences, Sticky ends