WorldWideScience

Sample records for tuple matricola cognome

  1. Theoretical analysis and modeling of a photonic integrated circuit for frequency 8-tupled and 24-tupled millimeter wave signal generation.

    Science.gov (United States)

    Hasan, Mehedi; Guemri, Rabiaa; Maldonado-Basilio, Ramón; Lucarz, Frédéric; de Bougrenet de la Tocnaye, Jean-Louis; Hall, Trevor

    2014-12-15

    A photonic circuit design for implementing frequency 8-tupling and 24-tupling is proposed. The front- and back-ends of the circuit comprise 4×4 MMI couplers enclosing an array of four pairs of phase modulators and 2×2 MMI couplers. The proposed design for frequency multiplication requires no optical or electrical filters, the operation is not limited to carefully adjusted modulation indices, and the drift originating from static DC bias is mitigated by making use of the intrinsic phase relations of multi-mode interference couplers. A transfer matrix approach is used to represent the main building blocks of the design and hence to describe the operation of the frequency 8-tupling and 24-tupling. The concept is theoretically developed and demonstrated by simulations. Ideal and imperfect power imbalances in the multi-mode interference couplers, as well as ideal and imperfect phases of the electric drives to the phase modulators, are analyzed.

  2. Theoretical analysis and modeling of a photonic integrated circuit for frequency 8-tupled and 24-tupled millimeter wave signal generation: erratum.

    Science.gov (United States)

    Hasan, Mehedi; Guemri, Rabiaa; Maldonado-Basilio, Ramón; Lucarz, Frédéric; de Bougrenet de la Tocnaye, Jean-Louis; Hall, Trevor

    2015-12-15

    A novel photonic circuit design for implementing frequency 8-tupling and 24-tupling was presented [Opt. Lett. 39, 6950 (2014), doi:10.1364/OL.39.006950], and although its key message remains unaltered, there were typographical errors in the equations that are corrected in this erratum.

  3. LifePrint: a novel k-tuple distance method for construction of phylogenetic trees

    Directory of Open Access Journals (Sweden)

    Fabián Reyes-Prieto

    2011-01-01

    Fabián Reyes-Prieto¹, Adda J García-Chéquer¹, Hueman Jaimes-Díaz¹, Janet Casique-Almazán¹, Juana M Espinosa-Lara¹, Rosaura Palma-Orozco², Alfonso Méndez-Tenorio¹, Rogelio Maldonado-Rodríguez¹, Kenneth L Beattie³. ¹Laboratory of Biotechnology and Genomic Bioinformatics, Department of Biochemistry, National School of Biological Sciences, and ²Superior School of Computer Sciences, National Polytechnic Institute, Mexico City, Mexico; ³Amerigenics Inc, Crossville, Tennessee, USA. Purpose: Here we describe LifePrint, a sequence alignment-independent k-tuple distance method to estimate relatedness between complete genomes. Methods: We designed a representative sample of all possible DNA tuples of length 9 (9-tuples). The final sample comprises 1878 tuples (called the LifePrint set of 9-tuples; LPS9) that are distinct from each other by at least two internal and noncontiguous nucleotide differences. For validation of our k-tuple distance method, we analyzed several real and simulated viroid genomes. Using different distance metrics, we scrutinized diverse viroid genomes to estimate the k-tuple distances between these genomic sequences. Then we used the estimated genomic k-tuple distances to construct phylogenetic trees using the neighbor-joining algorithm. A comparison of the accuracy of LPS9 and the previously reported 5-tuple method was made using symmetric differences between the trees estimated from each method and a simulated "true" phylogenetic tree. Results: The identified optimal search scheme for LPS9 allows only up to two nucleotide differences between each 9-tuple and the scrutinized genome. Similarity search results of simulated viroid genomes indicate that, in most cases, LPS9 is able to detect single-base substitutions between genomes efficiently. Analysis of simulated genomic variants with a high proportion of base substitutions indicates that LPS9 is able to discern relationships between genomic variants with up to 40% of nucleotide
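
    The alignment-free idea above can be made concrete with a small example. The following is a minimal Python sketch, not the authors' implementation: it uses the full set of 4^k k-tuples and a Euclidean metric over frequency profiles, rather than the LPS9 sample, its mismatch-tolerant search, or the paper's distance metrics.

```python
from collections import Counter
from itertools import product
import math

def ktuple_profile(seq, k=3):
    """Frequency vector over all overlapping k-tuples of a DNA sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {t: counts.get(t, 0) / total
            for t in map(''.join, product('ACGT', repeat=k))}

def ktuple_distance(seq_a, seq_b, k=3):
    """Euclidean distance between k-tuple profiles (alignment-free)."""
    pa, pb = ktuple_profile(seq_a, k), ktuple_profile(seq_b, k)
    return math.sqrt(sum((pa[t] - pb[t]) ** 2 for t in pa))

# Two toy sequences differing by one substitution:
print(ktuple_distance("ACGTACGTAC", "ACGTTCGTAC"))
```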

  4. Hybrid normed ideal perturbations of n-tuples of operators I

    Science.gov (United States)

    Voiculescu, Dan-Virgil

    2018-06-01

    In hybrid normed ideal perturbations of n-tuples of operators, the normed ideal is allowed to vary with the component operators. We begin extending to this setting the machinery we developed for normed ideal perturbations based on the modulus of quasicentral approximation and an adaptation of our non-commutative generalization of the Weyl-von Neumann theorem. For commuting n-tuples of hermitian operators, the modulus of quasicentral approximation remains essentially the same when C_n^- is replaced by a hybrid n-tuple C_{p_1}^-, …, C_{p_n}^-, with 1/p_1 + ⋯ + 1/p_n = 1. The proof involves singular integrals of mixed homogeneity.

  5. Interval 2-Tuple Linguistic Distance Operators and Their Applications to Supplier Evaluation and Selection

    Directory of Open Access Journals (Sweden)

    Meng-Meng Shan

    2016-01-01

    With respect to multicriteria supplier selection problems with interval 2-tuple linguistic information, a new decision making approach that uses distance measures is proposed. Motivated by the ordered weighted distance (OWD) measures, in this paper, we develop some interval 2-tuple linguistic distance operators such as the interval 2-tuple weighted distance (ITWD), the interval 2-tuple ordered weighted distance (ITOWD), and the interval 2-tuple hybrid weighted distance (ITHWD) operators. These aggregation operators are very useful for the treatment of input data in the form of interval 2-tuple linguistic variables. We study some desirable properties of the ITOWD operator and further generalize it by using the generalized and the quasi-arithmetic means. Finally, the new approach is utilized to complete a supplier selection study for an actual hospital from the healthcare industry.
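
    To make the flavor of these operators concrete, here is a minimal sketch of a plain ordered weighted distance aggregation over crisp numbers. It is an assumption-laden simplification: the paper's ITWD/ITOWD/ITHWD operators act on interval 2-tuple linguistic variables, not on the plain reals used here.

```python
def owd(a, b, weights, lam=1.0):
    """Ordered weighted distance: the individual distances are sorted in
    descending order before being weighted (weights sum to 1)."""
    d = sorted((abs(x - y) ** lam for x, y in zip(a, b)), reverse=True)
    return sum(w * di for w, di in zip(weights, d)) ** (1.0 / lam)

# Three criteria scores for two suppliers; the weight vector stresses
# the largest disagreement (toy numbers).
print(owd([0.2, 0.5, 0.9], [0.3, 0.4, 0.4], weights=[0.5, 0.3, 0.2]))  # 0.30
```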

  6. On CNC Commuting Contractive Tuples

    Indian Academy of Sciences (India)

    The characteristic function has been an important tool for studying completely non-unitary contractions on Hilbert spaces. In this note, we consider completely non-coisometric contractive tuples of commuting operators on a Hilbert space H . We show that the characteristic function, which is now an operator-valued analytic ...

  7. Unordered Tuples in Quantum Computation

    Directory of Open Access Journals (Sweden)

    Robert Furber

    2015-11-01

    Full Text Available It is well known that the C*-algebra of an ordered pair of qubits is M_2 (x M_2. What about unordered pairs? We show in detail that M_3 (+ C is the C*-algebra of an unordered pair of qubits. Then we use Schur-Weyl duality to characterize the C*-algebra of an unordered n-tuple of d-level quantum systems. Using some further elementary representation theory and number theory, we characterize the quantum cycles. We finish with a characterization of the von Neumann algebra for unordered words.

  8. Toolkit for data reduction to tuples for the ATLAS experiment

    International Nuclear Information System (INIS)

    Snyder, Scott; Krasznahorkay, Attila

    2012-01-01

    The final step in a HEP data-processing chain is usually to reduce the data to a ‘tuple’ form which can be efficiently read by interactive analysis tools such as ROOT. Often, this is implemented independently by each group analyzing the data, leading to duplicated effort and needless divergence in the format of the reduced data. ATLAS has implemented a common toolkit for performing this processing step. By using tools from this package, physics analysis groups can produce tuples customized for a particular analysis but which are still consistent in format and vocabulary with those produced by other physics groups. The package is designed so that almost all the code is independent of the specific form used to store the tuple. The code that does depend on this is grouped into a set of small backend packages. While the ROOT backend is the most used, backends also exist for HDF5 and for specialized databases. By now, the majority of ATLAS analyses rely on this package, and it is an important contributor to the ability of ATLAS to rapidly analyze physics data.
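
    The design point worth noting is the separation between the backend-independent filling code and small storage backends. The sketch below illustrates that separation in Python under stated assumptions; the class and function names are hypothetical and are not the ATLAS toolkit's API.

```python
from abc import ABC, abstractmethod

class TupleBackend(ABC):
    """Storage-specific half of the chain; one small class per format."""
    @abstractmethod
    def write_row(self, row: dict) -> None: ...
    def close(self) -> None: ...

class CsvBackend(TupleBackend):
    """Toy stand-in for a ROOT or HDF5 backend."""
    def __init__(self, path):
        self.f, self.header_done = open(path, "w"), False
    def write_row(self, row):
        if not self.header_done:
            self.f.write(",".join(row) + "\n")
            self.header_done = True
        self.f.write(",".join(str(v) for v in row.values()) + "\n")
    def close(self):
        self.f.close()

def dump_tuple(events, variables, backend):
    """Backend-independent filling loop: analysis groups only choose
    the variables; the storage format is swapped via the backend."""
    for ev in events:
        backend.write_row({name: fn(ev) for name, fn in variables.items()})
    backend.close()

backend = CsvBackend("out.csv")
dump_tuple([{"pt": 41.2, "eta": 0.3}], {"pt": lambda e: e["pt"]}, backend)
```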

  9. Dual-function photonic integrated circuit for frequency octo-tupling or single-side-band modulation.

    Science.gov (United States)

    Hasan, Mehedi; Maldonado-Basilio, Ramón; Hall, Trevor J

    2015-06-01

    A dual-function photonic integrated circuit for microwave photonic applications is proposed. The circuit consists of four linear electro-optic phase modulators connected optically in parallel within a generalized Mach-Zehnder interferometer architecture. The photonic circuit is arranged to have two separate output ports. A first port provides frequency up-conversion of a microwave signal from the electrical to the optical domain; equivalently single-side-band modulation. A second port provides tunable millimeter wave carriers by frequency octo-tupling of an appropriate amplitude RF carrier. The circuit exploits the intrinsic relative phases between the ports of multi-mode interference couplers to provide substantially all the static optical phases needed. The operation of the proposed dual-function photonic integrated circuit is verified by computer simulations. The performance of the frequency octo-tupling and up-conversion functions is analyzed in terms of the electrical signal to harmonic distortion ratio and the optical single side band to unwanted harmonics ratio, respectively.

  10. An interval-valued 2-tuple linguistic group decision-making model based on the Choquet integral operator

    Science.gov (United States)

    Liu, Bingsheng; Fu, Meiqing; Zhang, Shuibo; Xue, Bin; Zhou, Qi; Zhang, Shiruo

    2018-01-01

    The Choquet integral (CI) operator is an effective approach for handling interdependence among decision attributes in complex decision-making problems. However, the fuzzy measures of attributes and attribute sets required by the CI are difficult to obtain directly, which limits its application. This paper proposes a new method for determining fuzzy measures of attributes by extending Marichal's concept of entropy for fuzzy measures. To represent the assessment information well, an interval-valued 2-tuple linguistic context is utilised. Then, we propose a Choquet integral operator in an interval-valued 2-tuple linguistic environment, which can effectively handle the correlation between attributes. In addition, we apply these methods to solve multi-attribute group decision-making problems. The feasibility and validity of the proposed operator are demonstrated by comparisons with other models in an illustrative example.

  11. Nine-phase hex-tuple inverter for five-level output based on double carrier PWM technique

    DEFF Research Database (Denmark)

    Padmanaban, S.; Bhaskar, M.S.; Blaabjerg, F.

    2016-01-01

    This work articulates a double-carrier-based five-level pulsewidth modulation for a nine-phase hex-tuple inverter AC drive. A set of standard three-phase voltage source inverters (VSIs) with slight modification is used for framing the nine-phase AC drive. In particular, the VSI is packed with one bidirectiona…

  12. n-Tupled Coincidence Point Theorems in Partially Ordered Metric Spaces for Compatible Mappings

    Directory of Open Access Journals (Sweden)

    Sumitra Dalal

    2014-01-01

    The intent of this paper is to introduce the notion of compatible mappings for n-tupled coincidence points due to Imdad et al. (2013). Related examples are also given to support our main results. Our results are generalizations of the results of Gnana Bhaskar and Lakshmikantham (2006), Lakshmikantham and Ćirić (2009), Choudhury and Kundu (2010), and Choudhary et al. (2013).

  13. A novel implementation of kNN classifier based on multi-tupled meteorological input data for wind power prediction

    International Nuclear Information System (INIS)

    Yesilbudak, Mehmet; Sagiroglu, Seref; Colak, Ilhami

    2017-01-01

    Highlights: • An accurate wind power prediction model is proposed for very short-term horizons. • The k-nearest neighbor classifier is implemented based on the multi-tupled inputs. • The variation of wind power prediction errors is evaluated in various aspects. • Our approach shows superior prediction performance over the persistence method. - Abstract: With the growing share of wind power production in the electric power grids, many critical challenges for the grid operators have emerged in terms of the power balance, power quality, voltage support, frequency stability, load scheduling, unit commitment and spinning reserve calculations. To overcome such problems, numerous studies have been conducted to predict the wind power production, but a small number of them have attempted to improve the prediction accuracy by employing the multidimensional meteorological input data. The novelties of this study lie in the proposal of an efficient and easy-to-implement very short-term wind power prediction model based on the k-nearest neighbor classifier (kNN), in the usage of wind speed, wind direction, barometric pressure and air temperature parameters as the multi-tupled meteorological inputs, and in the comparison of wind power prediction results with respect to the persistence reference model. Based on the obtained patterns, we characterize the variation of wind power prediction errors according to the input tuples, distance measures and neighbor numbers, and uncover the most influential and the most ineffective meteorological parameters on the optimization of wind power prediction results.
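
    A minimal sketch of the two models being compared, under stated assumptions: toy numbers, a Euclidean metric, and a plain mean over neighbors stand in for the paper's tuned distance measures and classifier.

```python
import math

def knn_predict(train, query, k=3):
    """Predict wind power as the mean target of the k nearest multi-tupled
    inputs; each feature tuple is hypothetically
    (wind_speed, wind_direction, pressure, temperature)."""
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(train, key=lambda fp: dist(fp[0], query))[:k]
    return sum(p for _, p in nearest) / k

def persistence_predict(last_power):
    """Persistence reference model: the next value equals the last one."""
    return last_power

train = [((7.1, 180, 1012.0, 11.2), 1.35), ((3.2, 90, 1018.5, 8.7), 0.22),
         ((7.4, 175, 1011.2, 11.0), 1.41), ((5.0, 200, 1015.1, 9.9), 0.70)]
print(knn_predict(train, (7.0, 182, 1012.3, 11.1)))
print(persistence_predict(1.35))
```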

  14. Health-Care Waste Treatment Technology Selection Using the Interval 2-Tuple Induced TOPSIS Method

    Directory of Open Access Journals (Sweden)

    Chao Lu

    2016-06-01

    Health-care waste (HCW) management is a major challenge for municipalities, particularly in the cities of developing nations. Selecting the best treatment technology for HCW can be regarded as a complex multi-criteria decision making (MCDM) issue involving a number of alternatives and multiple evaluation criteria. In addition, decision makers tend to express their personal assessments via multi-granularity linguistic term sets because of different backgrounds and knowledge, some of which may be imprecise, uncertain and incomplete. Therefore, the main objective of this study is to propose a new hybrid decision making approach combining interval 2-tuple induced distance operators with the technique for order preference by similarity to an ideal solution (TOPSIS) for tackling HCW treatment technology selection problems with linguistic information. The proposed interval 2-tuple induced TOPSIS (ITI-TOPSIS) can not only model the uncertainty and diversity of the assessment information given by decision makers, but also reflect the complex attitudinal characters of decision makers and provide much more complete information for the selection of the optimum disposal alternative. Finally, an empirical example in Shanghai, China is provided to illustrate the proposed decision making method, and results show that the ITI-TOPSIS proposed in this paper can solve the problem of HCW treatment technology selection effectively.

  15. Relevance-aware filtering of tuples sorted by an attribute value via direct optimization of search quality metrics

    NARCIS (Netherlands)

    Spirin, N.V.; Kuznetsov, M.; Kiseleva, Y.; Spirin, Y.V.; Izhutov, P.A.

    2015-01-01

    Sorting tuples by an attribute value is a common search scenario and many search engines support such capabilities, e.g. price-based sorting in e-commerce, time-based sorting on a job or social media website. However, sorting purely by the attribute value might lead to poor user experience because

  16. A Fast Implementation for the Typical Testor Property Identification Based on an Accumulative Binary Tuple

    Directory of Open Access Journals (Sweden)

    Guillermo Sanchez-Diaz

    2012-11-01

    In this paper, we introduce a fast implementation of the CT-EXT algorithm for typical testor property identification that is based on an accumulative binary tuple. The fast implementation of the CT-EXT algorithm (one of the fastest algorithms reported) is designed to generate all the typical testors from a training matrix, requiring a reduced number of operations. Experimental results using this fast implementation and a comparison with other state-of-the-art algorithms that generate typical testors are presented.

  17. Combined principal component preprocessing and n-tuple neural networks for improved classification

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar; Linneberg, Christian

    2000-01-01

    We present a combined principal component analysis/neural network scheme for classification. The data used to illustrate the method consist of spectral fluorescence recordings from seven different production facilities, and the task is to relate an unknown sample to one of these seven factories. The data are first preprocessed by performing an individual principal component analysis on each of the seven groups of data. The components found are then used for classifying the data, but instead of making a single multiclass classifier, we follow the idea of turning a multiclass problem into a number of two-class problems. For each possible pair of classes we further apply a transformation to the calculated principal components in order to increase the separation between the classes. Finally we apply the so-called n-tuple neural network to the transformed data in order to give the classification…

  18. Combining Static Analysis and Runtime Checking in Security Aspects for Distributed Tuple Spaces

    DEFF Research Database (Denmark)

    Yang, Fan; Aotani, Tomoyuki; Masuhara, Hidehiko

    2011-01-01

    Enforcing security policies to distributed systems is difficult, in particular, to a system containing untrusted components. We designed AspectKE*, an aspect-oriented programming language based on distributed tuple spaces to tackle this issue. One of the key features in AspectKE* is the program analysis predicates and functions that provide information on future behavior of a program. With a dual value evaluation mechanism that handles results of static analysis and runtime values at the same time, those functions and predicates enable the users to specify security policies in a uniform manner. Our two-staged implementation strategy gathers fundamental static analysis information at load-time, so as to avoid performing all analysis at runtime. We built a compiler for AspectKE*, and successfully implemented security aspects for a distributed chat system and an electronic healthcare record…

  19. Comprehensive Evaluation on Employee Satisfaction of Mine Occupational Health and Safety Management System Based on Improved AHP and 2-Tuple Linguistic Information

    Directory of Open Access Journals (Sweden)

    Jiangdong Bao

    2017-01-01

    In order to comprehensively evaluate the employee satisfaction of a mine occupational health and safety management system, an analytic method based on the fuzzy analytic hierarchy process and the 2-tuple linguistic model was established. Based on the establishment of 5 first-grade indicators and 20 second-grade ones, a method combining improved AHP and the time-ordered weighted averaging operator (T-OWA) model is constructed. The results demonstrate that the employee satisfaction of the mine occupational health and safety management system is of the 'general' rank. The method, including the evaluation of employee satisfaction and the quantitative analysis of linguistic evaluation information, ensures the authenticity of the linguistic evaluation information.

  20. PseKNC: a flexible web server for generating pseudo K-tuple nucleotide composition.

    Science.gov (United States)

    Chen, Wei; Lei, Tian-Yu; Jin, Dian-Chuan; Lin, Hao; Chou, Kuo-Chen

    2014-07-01

    The pseudo oligonucleotide composition, or pseudo K-tuple nucleotide composition (PseKNC), can be used to represent a DNA or RNA sequence with a discrete model or vector yet still keep considerable sequence order information, particularly the global or long-range sequence order information, via the physicochemical properties of its constituent oligonucleotides. Therefore, the PseKNC approach may hold very high potential for enhancing the power in dealing with many problems in computational genomics and genome sequence analysis. However, dealing with different DNA or RNA problems may need different kinds of PseKNC. Here, we present a flexible and user-friendly web server for PseKNC (at http://lin.uestc.edu.cn/pseknc/default.aspx) by which users can easily generate many different modes of PseKNC according to their need by selecting various parameters and physicochemical properties. Furthermore, for the convenience of the vast majority of experimental scientists, a step-by-step guide is provided on how to use the current web server to generate their desired PseKNC without the need to follow the complicated mathematical equations, which are presented in this article just for the integrity of PseKNC formulation and its development. It is anticipated that the PseKNC web server will become a very useful tool in computational genomics and genome sequence analysis. Copyright © 2014 Elsevier Inc. All rights reserved.
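
    The following Python sketch illustrates the general shape of a pseudo K-tuple nucleotide composition vector: k-tuple frequencies followed by a few sequence-order correlation factors. The property table and the correlation definition here are simplified placeholders, not the server's built-in physicochemical tables or its exact formulation.

```python
from collections import Counter
from itertools import product

# Toy physicochemical property per dinucleotide (hypothetical values,
# standing in for the server's built-in property tables).
PROP = {d: i / 16.0 for i, d in enumerate(map(''.join, product('ACGT', repeat=2)))}

def pseknc(seq, k=2, lam=2, w=0.5):
    """Simplified pseudo k-tuple nucleotide composition: 4**k k-tuple
    frequencies followed by `lam` sequence-order correlation factors."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    freq = Counter(kmers)
    base = [freq[t] / len(kmers) for t in map(''.join, product('ACGT', repeat=k))]
    # j-th tier correlation: mean squared property difference at distance j
    theta = [sum((PROP[seq[i:i + 2]] - PROP[seq[i + j:i + j + 2]]) ** 2
                 for i in range(len(seq) - j - 1)) / (len(seq) - j - 1)
             for j in range(1, lam + 1)]
    denom = 1 + w * sum(theta)
    return [f / denom for f in base] + [w * t / denom for t in theta]

print(len(pseknc("ACGTACGGTTCA")))  # 4**2 + lam = 18 features
```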

  1. PseKRAAC: a flexible web server for generating pseudo K-tuple reduced amino acids composition.

    Science.gov (United States)

    Zuo, Yongchun; Li, Yuan; Chen, Yingli; Li, Guangpeng; Yan, Zhenhe; Yang, Lei

    2017-01-01

    Reduced amino acid alphabets provide a powerful means of both simplifying protein complexity and identifying functionally conserved regions. However, dealing with different protein problems may need different kinds of cluster methods. Encouraged by the success of the pseudo-amino acid composition algorithm, we developed a freely available web server, called PseKRAAC (the pseudo K-tuple reduced amino acids composition). By implementing reduced amino acid alphabets, the protein complexity can be significantly simplified, which decreases the chance of overfitting, lowers the computational burden, and reduces information redundancy. PseKRAAC delivers more capability for protein research by incorporating three crucial parameters that describe protein composition. Users can easily generate many different modes of PseKRAAC tailored to their needs by selecting various reduced amino acid alphabets and other characteristic parameters. It is anticipated that the PseKRAAC web server will become a very useful tool in computational proteomics and protein sequence analysis. Freely available on the web at http://bigdata.imu.edu.cn/psekraac. Contacts: yczuo@imu.edu.cn, imu.hema@foxmail.com or yanglei_hmu@163.com. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  2. Application of interval 2-tuple linguistic MULTIMOORA method for health-care waste treatment technology evaluation and selection.

    Science.gov (United States)

    Liu, Hu-Chen; You, Jian-Xin; Lu, Chao; Shan, Meng-Meng

    2014-11-01

    The management of health-care waste (HCW) is a major challenge for municipalities, particularly in the cities of developing countries. Selection of the best treatment technology for HCW can be viewed as a complicated multi-criteria decision making (MCDM) problem which requires consideration of a number of alternatives and conflicting evaluation criteria. Additionally, decision makers often use different linguistic term sets to express their assessments because of their different backgrounds and preferences, some of which may be imprecise, uncertain and incomplete. In response, this paper proposes a modified MULTIMOORA method based on interval 2-tuple linguistic variables (named ITL-MULTIMOORA) for evaluating and selecting HCW treatment technologies. In particular, both subjective and objective importance coefficients of criteria are taken into consideration in the developed approach in order to conduct a more effective analysis. Finally, an empirical case study in Shanghai, the most crowded metropolis of China, is presented to demonstrate the proposed method, and results show that the proposed ITL-MULTIMOORA can solve the HCW treatment technology selection problem effectively in an uncertain and incomplete information environment. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Integrated model of assisted parking system and performance evaluation with entropy weight extended analytic hierarchy process and two-tuple linguistic information

    Directory of Open Access Journals (Sweden)

    Yiding Hua

    2016-06-01

    Evaluating the comprehensive performance of an assisted parking system has been a very important issue for car companies for years, because the overall performance of the assisted parking system directly influences car intellectualization and customer satisfaction. Therefore, this article proposes a two-tuple linguistic analytic hierarchy process to evaluate assisted parking systems so as to avoid information loss during the process of evaluation integration. The performance evaluation attributes for the assisted parking system are established initially. Subsequently, information entropy theory is proposed to improve the evaluation attribute weights determined by the analytic hierarchy process for the influencing factors of randomness in the parking test process. Furthermore, the evaluation attribute measure values of comprehensive performance are calculated and the assisted parking system evaluation results are obtained with the ordered weighted averaging operator. Finally, numerical examples of vehicle types equipped with eight different assisted parking systems and computational results are presented.

  4. The Extension of Quality Function Deployment Based on 2-Tuple Linguistic Representation Model for Product Design under Multigranularity Linguistic Environment

    Directory of Open Access Journals (Sweden)

    Ming Li

    2012-01-01

    Quality function deployment (QFD) is a customer-driven approach for product design and development. A QFD analysis process includes a series of subprocesses, such as determination of the importance of customer requirements (CRs), the correlation among engineering characteristics (ECs), and the relationship between CRs and ECs. Usually more than one group of decision makers is involved in the subprocesses to make the decision. In most decision making problems, they often provide their evaluation information in linguistic form. Moreover, because of different knowledge, background, and discrimination ability, decision makers may express their linguistic preferences in multi-granularity linguistic information. Therefore, an effective approach to deal with the multi-granularity linguistic information in the QFD analysis process is highly needed. In this study, the QFD methodology is extended with the 2-tuple linguistic representation model under a multi-granularity linguistic environment. The extended QFD methodology can cope with multi-granularity linguistic evaluation information and avoid the loss of information. The applicability of the proposed approach is demonstrated with a numerical example.
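
    The reason the 2-tuple model avoids information loss is that an aggregation result is kept as a term index plus a symbolic translation, and the translation is invertible. Below is a minimal sketch of the standard translation pair (in the style of Herrera and Martínez), with a plain mean standing in for the paper's aggregation.

```python
def to_two_tuple(beta):
    """Translate beta in [0, g] into (term_index, alpha), alpha in [-0.5, 0.5)."""
    i = int(round(beta))
    return i, round(beta - i, 6)

def to_beta(i, alpha):
    """Inverse translation; the round trip loses no information."""
    return i + alpha

# Five terms s_0..s_4 (e.g. "very poor" .. "very good"):
ratings = [2, 3, 3, 4]                 # assessments already on the 0..4 scale
beta = sum(ratings) / len(ratings)     # symbolic aggregation -> 3.0
print(to_two_tuple(beta))              # (3, 0.0): exactly the term s_3
print(to_two_tuple(2.4))               # (2, 0.4): s_2 shifted by +0.4
```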

  5. Lossless Conditional Schema Evolution

    DEFF Research Database (Denmark)

    Jensen, Ole Guttorm; Bøhlen, Michael Hanspeter

    2003-01-01

    The paper considers conditional schema evolution, where schema changes change the schema of the tuples that satisfy the change condition. When the schema of a relation changes, some tuples may no longer fit the current schema. Handling the mismatch between the intended schema of tuples and the recorded schema of tuples is at the core of a DBMS that supports schema evolution. We propose to keep track of schema mismatches at the level of individual tuples, and prove that conditionally evolving schemas, in contrast to current commercial database systems, are lossless when the schema evolves…

  6. Generating inferences from knowledge structures based on general automata

    Energy Technology Data Exchange (ETDEWEB)

    Koenig, E C

    1983-01-01

    The author shows that the model for knowledge structures for computers based on general automata accommodates procedures for establishing inferences. Algorithms are presented which generate inferences as output of a computer when its sentence input names appropriate knowledge elements contained in an associated knowledge structure already stored in the memory of the computer. The inferences are found to have either a single graph tuple or more than one graph tuple of associated knowledge. Six algorithms pertain to a single graph tuple and a seventh pertains to more than one graph tuple of associated knowledge. A named term is either the automaton, environment, auxiliary receptor, principal receptor, auxiliary effector, or principal effector. The algorithm pertaining to more than one graph tuple requires that the input sentence names the automaton, transformation response, and environment of one of the tuples of associated knowledge in a sequence of tuples. Interaction with the computer may be either in a conversation or examination mode. The algorithms are illustrated by an example. 13 references.

  7. Uniqueness of the joint measurement and the structure of the set of compatible quantum measurements

    Science.gov (United States)

    Guerini, Leonardo; Terra Cunha, Marcelo

    2018-04-01

    We address the problem of characterising the compatible tuples of measurements that admit a unique joint measurement. We derive a uniqueness criterion based on the method of perturbations and apply it to show that extremal points of the set of compatible tuples admit a unique joint measurement, while all tuples that admit a unique joint measurement lie in the boundary of such a set. We also provide counter-examples showing that none of these properties are both necessary and sufficient, thus completely describing the relation between the joint measurement uniqueness and the structure of the compatible set. As a by-product of our investigations, we completely characterise the extremal and boundary points of the set of general tuples of measurements and of the subset of compatible tuples.

  8. Lossless conditional schema evolution

    DEFF Research Database (Denmark)

    Jensen, Ole Guttorm; Böhlen, Michael

    2004-01-01

    Conditional schema changes change the schema of the tuples that satisfy the change condition. When the schema of a relation changes, some tuples may no longer fit the current schema. Handling the mismatch between the intended schema of tuples and the recorded schema of tuples is at the core of a DBMS that supports schema evolution. We propose to keep track of schema mismatches at the level of individual tuples, and prove that evolving schemas with conditional schema changes, in contrast to database systems relying on data migration, are lossless when the schema evolves. The lossless property is a precondition for a flexible semantics that allows general queries over evolving schemas to be answered correctly. The key challenge is to handle attribute mismatches between the intended and recorded schema in a consistent way. We provide a parametric approach to resolve mismatches according to the needs…

  9. A knowledge base architecture for distributed knowledge agents

    Science.gov (United States)

    Riedesel, Joel; Walls, Bryan

    1990-01-01

    A tuple space based object oriented model for knowledge base representation and interpretation is presented. An architecture for managing distributed knowledge agents is then implemented within the model. The general model is based upon a database implementation of a tuple space. Objects are then defined as an additional layer upon the database. The tuple space may or may not be distributed depending upon the database implementation. A language for representing knowledge and inference strategy is defined whose implementation takes advantage of the tuple space. The general model may then be instantiated in many different forms, each of which may be a distinct knowledge agent. Knowledge agents may communicate using tuple space mechanisms as in the LINDA model as well as using more well known message passing mechanisms. An implementation of the model is presented describing strategies used to keep inference tractable without giving up expressivity. An example applied to a power management and distribution network for Space Station Freedom is given.
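
    A minimal sketch of the Linda-style tuple space communication mentioned above, assuming a shared-memory setting; a distributed, database-backed tuple space as described in the abstract would replace the in-process list with persistent storage.

```python
import threading

class TupleSpace:
    """Minimal Linda-style tuple space: out() publishes a tuple, in_()
    blocks until a tuple matches the template (None is a wildcard)."""
    def __init__(self):
        self.tuples, self.cond = [], threading.Condition()

    def out(self, tup):
        with self.cond:
            self.tuples.append(tup)
            self.cond.notify_all()

    def in_(self, template):
        def match(t):
            return len(t) == len(template) and all(
                p is None or p == v for p, v in zip(template, t))
        with self.cond:
            while True:
                for t in self.tuples:
                    if match(t):
                        self.tuples.remove(t)  # in_() consumes the tuple
                        return t
                self.cond.wait()

ts = TupleSpace()
ts.out(("fact", "valve7", "open"))
print(ts.in_(("fact", "valve7", None)))  # -> ('fact', 'valve7', 'open')
```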

  10. Higher-Order and Symbolic Computation / LISP and Symbolic Computation: Editorial

    DEFF Research Database (Denmark)

    Danvy, Olivier; Dybvig, R. Kent; Lawall, Julia

    2008-01-01

    …system for these static checks and a corresponding type-inference algorithm. In "An Investigation of Jones Optimality and BTI-Universal Specializers," Robert Glueck establishes a connection between Jones-optimal program specializers and binding-time improvers. This article completes a study started at ASIA-PEPM 2002 [1]. In "On the Implementation of Automatic Differentiation Tools," Christian H. Bischof, Paul D. Hovland, and Boyana Norris present a survey of some recent tools for Automatic Differentiation technology (concentrating mainly on ADIC and ADIFOR, and sketching XAIF). They also offer … for removing tuple constructions and tuple selections. This technique solves the problem of efficiently passing tuples to polymorphic functions by avoiding extra memory operations in selecting components of the tuple.

  11. Labels, Cognomes and Cyclic Computation: An Ethological Perspective

    Directory of Open Access Journals (Sweden)

    Elliot Murphy

    2015-06-01

    For the past two decades, it has widely been assumed by linguists that there is a single computational operation, Merge, which is unique to language, distinguishing it from other cognitive domains. The intention of this paper is to progress the discussion of language evolution in two ways: (i) survey what the ethological record reveals about the uniqueness of the human computational system, and (ii) explore how syntactic theories account for what ethology may determine to be human-specific. It is shown that the operation Label, not Merge, constitutes the evolutionary novelty which distinguishes human language from non-human computational systems; a proposal lending weight to a Weak Continuity Hypothesis and leading to the formation of what is termed Computational Ethology. Some directions for future ethological research are suggested.

  12. Labels, cognomes, and cyclic computation: an ethological perspective.

    Science.gov (United States)

    Murphy, Elliot

    2015-01-01

    For the past two decades, it has widely been assumed by linguists that there is a single computational operation, Merge, which is unique to language, distinguishing it from other cognitive domains. The intention of this paper is to progress the discussion of language evolution in two ways: (i) survey what the ethological record reveals about the uniqueness of the human computational system, and (ii) explore how syntactic theories account for what ethology may determine to be human-specific. It is shown that the operation Label, not Merge, constitutes the evolutionary novelty which distinguishes human language from non-human computational systems; a proposal lending weight to a Weak Continuity Hypothesis and leading to the formation of what is termed Computational Ethology. Some directions for future ethological research are suggested.

  13. Toward Determination of Venous Thrombosis Ages by Using Fuzzy Logic and Supervised Bayes Classification

    National Research Council Canada - National Science Library

    Lim, P

    2001-01-01

    …Thus, the proposed learning base is constructed as a 3-tuple (observation, label, membership value in terms of fuzzy logic for each class) and not a 2-tuple as in the usual supervised Bayes classification application…

  14. Statistical properties of nucleotides in human chromosomes 21 and 22

    International Nuclear Information System (INIS)

    Zhang Linxi; Sun Tingting

    2005-01-01

    In this paper the statistical properties of nucleotides in human chromosomes 21 and 22 are investigated. The n-tuple Zipf analysis with n = 3, 4, 5, 6, and 7 is used in our investigation. It is found that the most common n-tuples are those which consist only of adenine (A) and thymine (T), and the rarest n-tuples are those in which the GC or CG pattern appears twice. As the n-tuples become more and more frequent, the double GC or CG pattern becomes a single GC or CG pattern. The percentages of the four nucleotides in the rarest ten and the most common ten n-tuples are also considered in human chromosomes 21 and 22, and different behaviors are found in the percentages of the four nucleotides. The frequency of appearance of an n-tuple f(r) as a function of rank r is also examined. We find the n-tuple Zipf plot shows a power-law behavior for r < 4^(n-1) and a rapid decrease for r > 4^(n-1). In order to explore the interior statistical properties of human chromosomes 21 and 22 in detail, we divide the chromosome sequence into moving windows and we discuss the percentage of ξη (ξ, η = A, C, G, T) pairs in those moving windows. In some particular regions, there are obvious changes in the percentage of ξη pairs, and there may exist functional differences. The normalized number of repeats N_0(l) can be described by a power law: N_0(l) ∼ l^(−μ). The distance distributions P_0(S) between two nucleotides in human chromosomes 21 and 22 are also discussed. A second-order polynomial fit exists in those distance distributions: log P_0(S) = a + bS + cS², and it is quite different from that of a random sequence.
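
    The rank-frequency analysis is easy to reproduce in miniature. Here is a sketch on a random sequence (a stand-in for real chromosome data); for genuine DNA one would expect the power-law regime described above rather than the nearly flat profile of a uniform random sequence.

```python
from collections import Counter
import random

def zipf_ranks(seq, n):
    """Rank n-tuples by frequency, as in an n-tuple Zipf analysis."""
    counts = Counter(seq[i:i + n] for i in range(len(seq) - n + 1))
    return sorted(counts.values(), reverse=True)

random.seed(1)
seq = ''.join(random.choice('ACGT') for _ in range(100000))
freqs = zipf_ranks(seq, 5)
# f(r) vs rank r; a power law appears as a straight line in log-log scale
for r in (1, 10, 100, 1000):
    print(r, freqs[r - 1])
```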

  15. Center for Advanced Sensors, Year One Funding (FY2005)

    Science.gov (United States)

    2006-10-30

    …retrieve data which is stored in a PostgreSQL database. Each base station executes custom software developed by C. Goodwin that generates OntoSensor… update using PostgreSQL ODBC drivers and retrieves updated tuples corresponding to raw sensor percepts. The retrieved tuples are then post-processed…

  16. Proceedings – Mathematical Sciences | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    On CNC Commuting Contractive Tuples · T Bhattacharyya, J Eschmeier, J Sarkar. The characteristic function has been an important tool for studying completely non-unitary contractions on Hilbert spaces. In this note, we consider completely non-coisometric contractive tuples of commuting …

  17. Facilitating Fine Grained Data Provenance using Temporal Data Model

    NARCIS (Netherlands)

    Huq, M.R.; Wombacher, Andreas; Apers, Peter M.G.

    2010-01-01

    E-science applications use fine-grained data provenance to maintain the reproducibility of scientific results, i.e., for each processed data tuple, the source data used to process the tuple as well as the approach used is documented. Since most of the e-science applications perform on-line…

  18. A Trust-Based Adaptive Probability Marking and Storage Traceback Scheme for WSNs

    Science.gov (United States)

    Liu, Anfeng; Liu, Xiao; Long, Jun

    2016-01-01

    Security is a pivotal issue for wireless sensor networks (WSNs), which are emerging as a promising platform that enables a wide range of military, scientific, industrial and commercial applications. Traceback, a key cyber-forensics technology, can play an important role in tracing and locating a malicious source to guarantee cybersecurity. In this work a trust-based adaptive probability marking and storage (TAPMS) traceback scheme is proposed to enhance security for WSNs. In a TAPMS scheme, the marking probability is adaptively adjusted according to the security requirements of the network and can substantially reduce the number of marking tuples and improve network lifetime. More importantly, a high trust node is selected to store marking tuples, which can avoid the problem of marking information being lost. Experimental results show that the total number of marking tuples can be reduced in a TAPMS scheme, thus improving network lifetime. At the same time, since the marking tuples are stored in high trust nodes, storage reliability can be guaranteed, and the traceback time can be reduced by more than 80%. PMID:27043566

  19. Efficient Joins with Compressed Bitmap Indexes

    Energy Technology Data Exchange (ETDEWEB)

    Computational Research Division; Madduri, Kamesh; Wu, Kesheng

    2009-08-19

    We present a new class of adaptive algorithms that use compressed bitmap indexes to speed up evaluation of the range join query in relational databases. We determine the best strategy to process a join query based on a fast sub-linear time computation of the join selectivity (the ratio of the number of tuples in the result to the total number of possible tuples). In addition, we use compressed bitmaps to represent the join output compactly: the space requirement for storing the tuples representing the join of two relations is asymptotically bounded by min(h, n·c_b), where h is the number of tuple pairs in the result relation, n is the number of tuples in the smaller of the two relations, and c_b is the cardinality of the larger column being joined. We present a theoretical analysis of our algorithms, as well as experimental results on large-scale synthetic and real data sets. Our implementations are efficient, and consistently outperform well-known approaches for a range of join selectivity factors. For instance, our count-only algorithm is up to three orders of magnitude faster than the sort-merge approach, and our best bitmap index-based algorithm is 1.2x-80x faster than the sort-merge algorithm, for various query instances. We achieve these speedups by exploiting several inherent performance advantages of compressed bitmap indexes for join processing: an implicit partitioning of the attributes, space-efficiency, and tolerance of high-cardinality relations.
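
    A count-only analogue of the idea can be sketched with uncompressed bitsets; a real system would use compressed bitmaps (e.g. WAH) and range predicates rather than the plain equijoin and Python integers used here.

```python
from collections import defaultdict

def build_bitmaps(column):
    """One bitmap per distinct value; bit i is set if row i holds that value.
    Python ints serve as uncompressed bitsets here."""
    bm = defaultdict(int)
    for row, val in enumerate(column):
        bm[val] |= 1 << row
    return bm

def equijoin_count(col_a, col_b):
    """Count the result tuples of an equality join using only the bitmaps."""
    bma, bmb = build_bitmaps(col_a), build_bitmaps(col_b)
    return sum(bin(bma[v]).count('1') * bin(bmb[v]).count('1')
               for v in bma.keys() & bmb.keys())

print(equijoin_count([1, 2, 2, 3], [2, 2, 3, 4]))  # 2*2 + 1*1 = 5
```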

  20. Security Modeling and Correctness Proof Using Specware and Isabelle

    Science.gov (United States)

    2008-12-01

    …working system by the controlled stepwise introduction of implementation design decisions, in such a way that the refined specifications and ultimately… auxiliary functions for Current Access to check if it contains a tuple, and for adding and removing tuples from Current Access… but rather design and implementation decisions by Kestrel that were undocumented. When problems are encountered with proving in Isabelle, first the…

  1. Expiration Times for Data Management

    DEFF Research Database (Denmark)

    Schmidt, Albrecht; Jensen, Christian Søndergaard; Saltenis, Simonas

    2006-01-01

    This paper describes an approach to incorporating the notion of expiration time into data management based on the relational model. Expiration times indicate when tuples cease to be current in a database. The paper presents a formal data model and a query algebra that handle expiration times transparently and declaratively. In particular, expiration times are exposed to users only on insertion and update, and when triggers fire due to the expiration of a tuple; for queries, they are handled behind the scenes and do not concern the user. Notably, tuples are removed automatically from (materialised) query results as they expire in the (base) relations. For application developers, the benefits of using expiration times are (1) leaner application code, (2) lower transaction volume, (3) smaller databases, and (4) higher consistency for replicated data with lower overhead. Expiration times turn out…
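
    A minimal sketch of the user-visible behavior, under the assumption of a lazy purge at scan time; the paper's formal model and algebra handle this inside the query processor rather than in application code.

```python
import time

class ExpiringRelation:
    """Tuples carry an expiration timestamp; queries see only current ones."""
    def __init__(self):
        self.rows = []  # list of (payload, expires_at)

    def insert(self, payload, ttl_seconds):
        self.rows.append((payload, time.time() + ttl_seconds))

    def scan(self):
        """Expired tuples vanish from query results without explicit deletes."""
        now = time.time()
        self.rows = [(p, t) for p, t in self.rows if t > now]  # lazy purge
        return [p for p, _ in self.rows]

r = ExpiringRelation()
r.insert({"sensor": 7, "temp": 21.5}, ttl_seconds=60)
r.insert({"sensor": 8, "temp": 19.0}, ttl_seconds=-1)  # already expired
print(r.scan())  # only sensor 7 remains visible
```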

  2. Combinatorial aspects of covering arrays

    Directory of Open Access Journals (Sweden)

    Charles J. Colbourn

    2004-11-01

    Covering arrays generalize orthogonal arrays by requiring that t-tuples be covered, but not requiring that the appearance of t-tuples be balanced. Their use in screening experiments has found application in software testing, hardware testing, and a variety of fields in which interactions among factors are to be identified. Here a combinatorial view of covering arrays is adopted, encompassing basic bounds, direct constructions, recursive constructions, algorithmic methods, and applications.
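
    The defining property is easy to check mechanically. Below is a small sketch that verifies strength-t coverage for a binary covering array (a brute-force checker, not one of the constructions surveyed).

```python
from itertools import combinations

def is_covering_array(array, t, levels):
    """Check that every t-subset of columns covers all |levels|**t tuples."""
    cols = range(len(array[0]))
    for subset in combinations(cols, t):
        seen = {tuple(row[c] for c in subset) for row in array}
        if len(seen) < len(levels) ** t:
            return False
    return True

# A classic CA(4; 2, 3, 2): 4 rows, strength 2, 3 binary factors.
ca = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(is_covering_array(ca, t=2, levels=(0, 1)))  # True
```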

  3. A new approach to very short term wind speed prediction using k-nearest neighbor classification

    International Nuclear Information System (INIS)

    Yesilbudak, Mehmet; Sagiroglu, Seref; Colak, Ilhami

    2013-01-01

    Highlights: ► The wind speed parameter was predicted from n-tupled inputs using k-NN classification. ► The effects of input parameters, nearest neighbors and distance metrics were analyzed. ► Many useful and reasonable inferences were uncovered using the developed model. - Abstract: Wind energy is an inexhaustible energy source and wind power production has been growing rapidly in recent years. However, wind power has a non-schedulable nature due to wind speed variations. Hence, wind speed prediction is an indispensable requirement for power system operators. This paper predicts the wind speed parameter from n-tupled inputs using k-nearest neighbor (k-NN) classification and analyzes the effects of input parameters, nearest neighbors and distance metrics on wind speed prediction. The k-NN classification model was developed using object-oriented programming techniques and includes the Manhattan and Minkowski distance metrics in addition to the Euclidean distance metric, in contrast to the literature. The k-NN classification model which uses wind direction, air temperature, atmospheric pressure and relative humidity parameters as a 4-tupled input achieved the best wind speed prediction for k = 5 with the Manhattan distance metric. By contrast, the k-NN classification model which uses wind direction, air temperature and atmospheric pressure parameters as a 3-tupled input gave the worst wind speed prediction for k = 1 with the Minkowski distance metric.
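
    A sketch of the metric comparison at the heart of the study, with toy numbers: the Minkowski distance reduces to Manhattan at p = 1 and Euclidean at p = 2, so one function covers all three metrics mentioned above.

```python
def minkowski(a, b, p):
    """Minkowski distance; p=1 gives Manhattan, p=2 gives Euclidean."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

def knn_wind_speed(train, query, k, p):
    """k-NN estimate of wind speed from an n-tupled input, hypothetically
    (direction, temperature, pressure, humidity)."""
    nearest = sorted(train, key=lambda fs: minkowski(fs[0], query, p))[:k]
    return sum(s for _, s in nearest) / k

train = [((180, 11.2, 1012.0, 0.71), 7.1), ((90, 8.7, 1018.5, 0.55), 3.2),
         ((175, 11.0, 1011.2, 0.69), 7.4), ((200, 9.9, 1015.1, 0.62), 5.0)]
for p in (1, 2, 3):  # Manhattan, Euclidean, a higher-order Minkowski metric
    print(p, knn_wind_speed(train, (182, 11.1, 1012.3, 0.70), k=3, p=p))
```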

  4. Certain integrable system on a space associated with a quantum search algorithm

    International Nuclear Information System (INIS)

    Uwano, Y.; Hino, H.; Ishiwatari, Y.

    2007-01-01

    On thinking up a Grover-type quantum search algorithm for an ordered tuple of multiqubit states, a gradient system associated with the negative von Neumann entropy is studied on the space of regular relative configurations of multiqubit states (SR2CMQ). The SR2CMQ emerges, through a geometric procedure, from the space of ordered tuples of multiqubit states for the quantum search. The aim of this paper is to give a brief report on the integrability of the gradient dynamical system together with the quantum information geometry of the underlying space, SR2CMQ, of that system.

  5. Subspace gaps and Weyl's theorem for an elementary operator

    Directory of Open Access Journals (Sweden)

    B. P. Duggal

    2005-01-01

    A range-kernel orthogonality property is established for the elementary operators ℰ(X) = ∑_{i=1}^{n} A_i X B_i and ℰ*(X) = ∑_{i=1}^{n} A_i* X B_i*, where A = (A_1, A_2, …, A_n) and B = (B_1, B_2, …, B_n) are n-tuples of mutually commuting scalar operators (in the sense of Dunford) in the algebra B(H) of operators on a Hilbert space H. It is proved that the operator ℰ satisfies Weyl's theorem in the case in which A and B are n-tuples of mutually commuting generalized scalar operators.

  6. In-Network Processing of an Iceberg Join Query in Wireless Sensor Networks Based on 2-Way Fragment Semijoins

    Directory of Open Access Journals (Sweden)

    Hyunchul Kang

    2015-03-01

    We investigate the in-network processing of an iceberg join query in wireless sensor networks (WSNs). An iceberg join is a special type of join where only those joined tuples whose cardinality exceeds a certain threshold (called the iceberg threshold) are qualified for the result. Processing such a join involves the value matching for the join predicate as well as the checking of the cardinality constraint for the iceberg threshold. In the previous scheme, the value matching is carried out as the main task for filtering non-joinable tuples while the iceberg threshold is treated as an additional constraint. We take an alternative approach, meeting the cardinality constraint first and matching values next. In this approach, with a logical fragmentation of the join operand relations on the aggregate counts of the joining attribute values, the optimal sequence of 2-way fragment semijoins is generated, where each fragment semijoin employs a Bloom filter as a synopsis of the joining attribute values. This sequence filters non-joinable tuples in an energy-efficient way in WSNs. Through implementation and a set of detailed experiments, we show that our alternative approach considerably outperforms the previous one.

  7. In-Network Processing of an Iceberg Join Query in Wireless Sensor Networks Based on 2-Way Fragment Semijoins

    Science.gov (United States)

    Kang, Hyunchul

    2015-01-01

    We investigate the in-network processing of an iceberg join query in wireless sensor networks (WSNs). An iceberg join is a special type of join where only those joined tuples whose cardinality exceeds a certain threshold (called iceberg threshold) are qualified for the result. Processing such a join involves the value matching for the join predicate as well as the checking of the cardinality constraint for the iceberg threshold. In the previous scheme, the value matching is carried out as the main task for filtering non-joinable tuples while the iceberg threshold is treated as an additional constraint. We take an alternative approach, meeting the cardinality constraint first and matching values next. In this approach, with a logical fragmentation of the join operand relations on the aggregate counts of the joining attribute values, the optimal sequence of 2-way fragment semijoins is generated, where each fragment semijoin employs a Bloom filter as a synopsis of the joining attribute values. This sequence filters non-joinable tuples in an energy-efficient way in WSNs. Through implementation and a set of detailed experiments, we show that our alternative approach considerably outperforms the previous one. PMID:25774710
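
    The cardinality-first ordering can be sketched compactly. The following is a simplified single-step illustration, not the paper's optimal sequence of 2-way fragment semijoins: aggregate counts prune keys that cannot reach the iceberg threshold before any value matching, and a Bloom filter stands in for the transmitted synopsis.

```python
import hashlib
from collections import Counter

def bloom_add(bits, m, key):
    for i in range(3):  # three hash functions
        bits[int(hashlib.sha256(f"{i}:{key}".encode()).hexdigest(), 16) % m] = 1

def bloom_maybe(bits, m, key):
    return all(bits[int(hashlib.sha256(f"{i}:{key}".encode()).hexdigest(), 16) % m]
               for i in range(3))

def iceberg_join_keys(r, s, threshold, m=256):
    """Cardinality first: keep only join keys whose count product can reach
    the iceberg threshold, then match values via a Bloom-filter semijoin."""
    cr, cs = Counter(r), Counter(s)
    bits = [0] * m
    for key, c in cs.items():
        if c * max(cr.values(), default=0) >= threshold:
            bloom_add(bits, m, key)
    return [k for k, c in cr.items()
            if bloom_maybe(bits, m, k) and c * cs.get(k, 0) >= threshold]

print(iceberg_join_keys([1, 1, 2, 3, 3, 3], [1, 3, 3, 4], threshold=4))  # [3]
```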

  8. The cobordism category and Waldhausen's K-theory

    DEFF Research Database (Denmark)

    Bökstedt, M.; Madsen, Ib

    This paper examines the category C^k_{d,n} whose morphisms are d-dimensional smooth manifolds that are properly embedded in the product of a k-dimensional cube with a (d+n−k)-dimensional Euclidean space. There are k directions to compose k-dimensional cubes, so C^k_{d,n} is a (strict) k-tuple category. The geometric realization of the k-dimensional multi-nerve is the classifying space BC^k_{d,n}. At the end of the paper we construct an infinite loop map to Waldhausen's K-theory, Ω BC^1_{d,n} → A(BO(d)). We believe that the map factors through Ω…

  9. Frequencies of digits, divergence points, and Schmidt games

    International Nuclear Information System (INIS)

    Olsen, L.

    2009-01-01

    Sets of divergence points, i.e. numbers x (or tuples of numbers) for which the limiting frequency of a given string of N-adic digits of x fails to exist, have recently attracted huge interest in the literature. In this paper we consider sets of simultaneous divergence points, i.e. numbers x (or tuples of numbers) for which the limiting frequencies of all strings of N-adic digits of x fail to exist. We show that many natural sets of simultaneous divergence points are (α, β)-winning sets in the sense of the Schmidt game. As an application we obtain lower bounds for the Hausdorff dimension of these sets.

  10. Replica-Based High-Performance Tuple Space Computing

    DEFF Research Database (Denmark)

    Andric, Marina; De Nicola, Rocco; Lluch Lafuente, Alberto

    2015-01-01

    …of concurrency and data access. We investigate issues related to replica consistency, provide an operational semantics that guides the implementation of the language, and discuss the main synchronization mechanisms of our prototypical run-time framework. Finally, we provide a performance analysis, which includes…

  11. On splice site prediction using weight array models: a comparison of smoothing techniques

    International Nuclear Information System (INIS)

    Taher, Leila; Meinicke, Peter; Morgenstern, Burkhard

    2007-01-01

    In most eukaryotic genes, protein-coding exons are separated by non-coding introns which are removed from the primary transcript by a process called 'splicing'. The positions where introns are cut and exons are spliced together are called 'splice sites'. Thus, computational prediction of splice sites is crucial for gene finding in eukaryotes. Weight array models are a powerful probabilistic approach to splice site detection. Parameters for these models are usually derived from m-tuple frequencies in trusted training data and subsequently smoothed to avoid zero probabilities. In this study we compare three different ways of parameter estimation for m-tuple frequencies, namely (a) non-smoothed probability estimation, (b) standard pseudo counts and (c) a Gaussian smoothing procedure that we recently developed
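
    The pseudo-count variant (option (b) above) is the easiest to illustrate. The sketch below uses a position-independent (order-0) model for brevity; a weight array model would condition each position on its predecessors, but the smoothing step is the same.

```python
from collections import Counter

def position_probs(training_sites, alphabet="ACGT", pseudo=1.0):
    """Per-position nucleotide probabilities for a splice-site model,
    with additive (pseudo-count) smoothing to avoid zero probabilities."""
    length = len(training_sites[0])
    model = []
    for pos in range(length):
        counts = Counter(site[pos] for site in training_sites)
        total = len(training_sites) + pseudo * len(alphabet)
        model.append({a: (counts[a] + pseudo) / total for a in alphabet})
    return model

sites = ["GTAAGT", "GTGAGT", "GTAAGA"]  # toy donor-site examples
model = position_probs(sites)
print(model[2])  # 'A' dominates, but 'C' keeps a nonzero probability
```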

  12. Empowering Provenance in Data Integration

    Science.gov (United States)

    Kondylakis, Haridimos; Doerr, Martin; Plexousakis, Dimitris

    The provenance of data has recently been recognized as central to the trust one places in data. This paper presents a novel framework to empower provenance in a mediator-based data integration system. We use a simple mapping language for mapping schema constructs between an ontology and relational sources, capable of carrying provenance information. This language extends the traditional data exchange setting by translating our mapping specifications into source-to-target tuple generating dependencies (s-t tgds). Then we define formally the provenance information we want to retrieve, i.e. annotation, source and tuple provenance. We provide three algorithms to retrieve provenance information using information stored in the mappings and the sources. We show the feasibility of our solution and the advantages of our framework.
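
    Below is a toy illustration of carrying tuple-level provenance through a mapping, under stated assumptions: the mapping is a plain column renaming and the provenance record is a simple dict, far simpler than the s-t tgds framework of the paper.

```python
def map_with_provenance(source_rows, source_name, mapping):
    """Apply a schema mapping and attach (source, tuple-id) provenance to
    every produced target tuple (illustrative, not the paper's language)."""
    out = []
    for i, row in enumerate(source_rows):
        target = {tgt: row[src] for tgt, src in mapping.items()}
        target["_provenance"] = {"source": source_name, "tuple_id": i}
        out.append(target)
    return out

rows = [{"fname": "Ada", "born": 1815}]
print(map_with_provenance(rows, "persons.csv", {"name": "fname", "year": "born"}))
```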

  13. Inferring Human Activity in Mobile Devices by Computing Multiple Contexts.

    Science.gov (United States)

    Chen, Ruizhi; Chu, Tianxing; Liu, Keqiang; Liu, Jingbin; Chen, Yuwei

    2015-08-28

    This paper introduces a framework for inferring human activities in mobile devices by computing spatial contexts, temporal contexts, spatiotemporal contexts, and user contexts. A spatial context is a significant location that is defined as a geofence, which can be a node associated with a circle, or a polygon; a temporal context contains time-related information that can be, e.g., a local time tag, a time difference between geographical locations, or a timespan; a spatiotemporal context is defined as a dwelling length at a particular spatial context; and a user context includes user-related information that can be the user's mobility contexts, environmental contexts, psychological contexts or social contexts. Using the measurements of the built-in sensors and radio signals in mobile devices, we can snapshot a contextual tuple for every second including the aforementioned contexts. Given a contextual tuple, the framework evaluates the posteriori probability of each candidate activity in real-time using a Naïve Bayes classifier. A large dataset containing 710,436 contextual tuples has been recorded for one week from an experiment carried out at Texas A&M University Corpus Christi with three participants. The test results demonstrate that the multi-context solution significantly outperforms the spatial-context-only solution. A classification accuracy of 61.7% is achieved for the spatial-context-only solution, while 88.8% is achieved for the multi-context solution.
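
    The classification step can be sketched directly. Below is a minimal Naïve Bayes over discretised contextual tuples with Laplace smoothing; the feature names and toy data are hypothetical, and a real system would update posteriors every second from sensor-derived contexts.

```python
from collections import defaultdict, Counter
import math

def train_nb(samples):
    """samples: list of (contextual_tuple, activity); each tuple is a dict
    of discretised contexts, e.g. {'geofence': 'library', 'hour': 'morning'}."""
    prior, cond = Counter(), defaultdict(Counter)
    for ctx, act in samples:
        prior[act] += 1
        for feat, val in ctx.items():
            cond[(act, feat)][val] += 1
    return prior, cond

def classify(prior, cond, ctx, alpha=1.0):
    """Posterior-maximising activity, with Laplace smoothing."""
    best, best_lp = None, -math.inf
    total = sum(prior.values())
    for act, n in prior.items():
        lp = math.log(n / total)
        for feat, val in ctx.items():
            c = cond[(act, feat)]
            lp += math.log((c[val] + alpha) / (sum(c.values()) + alpha * (len(c) + 1)))
        if lp > best_lp:
            best, best_lp = act, lp
    return best

data = [({'geofence': 'gym', 'hour': 'evening'}, 'exercising'),
        ({'geofence': 'library', 'hour': 'morning'}, 'studying'),
        ({'geofence': 'library', 'hour': 'evening'}, 'studying')]
prior, cond = train_nb(data)
print(classify(prior, cond, {'geofence': 'library', 'hour': 'evening'}))
```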

  14. A hybrid filtering approach for storage optimization in main-memory cloud database

    Directory of Open Access Journals (Sweden)

    Ghada M. Afify

    2015-11-01

    Full Text Available Enterprises and cloud service providers face a dramatic increase in the amount of data stored in private and public clouds, and data storage costs grow rapidly when a single high-performance storage tier is used for all cloud data. There is considerable potential to reduce cloud costs by classifying data into active (hot) and inactive (cold). In main-memory database research, recent work focuses on approaches to identify hot/cold data, and most of these approaches track tuple accesses to identify hot/cold tuples. In contrast, we introduce a novel Hybrid Filtering Approach (HFA) that tracks both tuple and column accesses in main-memory databases. Our objective is to enhance performance along three dimensions: storage space, query elapsed time and CPU time. To validate the effectiveness of our approach, we implemented it on Hekaton, SQL Server's memory-optimized engine, using the well-known TPC-H benchmark. Experimental results show that the proposed HFA outperforms the Hekaton approach in all performance dimensions. Specifically, HFA reduces storage space by 44–96% on average, query elapsed time by 25–93% on average and CPU time by 31–97% on average compared to the traditional database approach.
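
    An illustrative tracker in the spirit of HFA's hybrid idea, counting accesses at both tuple and column granularity; the class, counters and threshold are hypothetical, since the record does not describe HFA's actual bookkeeping.

        from collections import defaultdict

        class HybridAccessTracker:
            """Count hits per tuple *and* per column, then classify each
            as hot or cold against a threshold (a stand-in for HFA)."""
            def __init__(self):
                self.tuple_hits = defaultdict(int)
                self.column_hits = defaultdict(int)

            def record(self, tuple_id, columns):
                self.tuple_hits[tuple_id] += 1
                for col in columns:
                    self.column_hits[col] += 1

            @staticmethod
            def hot(hits, threshold):
                return {k for k, v in hits.items() if v >= threshold}

        tracker = HybridAccessTracker()
        tracker.record(42, ["price", "qty"])
        tracker.record(42, ["price"])
        tracker.record(7, ["comment"])
        print(tracker.hot(tracker.tuple_hits, 2))   # {42} -> hot tuple
        print(tracker.hot(tracker.column_hits, 2))  # {'price'} -> hot column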

  15. A MapReduce Framework for DNA Sequencing Data Processing

    Directory of Open Access Journals (Sweden)

    Samy Ghoneimy

    2016-12-01

    Full Text Available Genomics and Next Generation Sequencers (NGS) like Illumina HiSeq produce data on the order of 200 billion base pairs in a single one-week run for 60x human genome coverage, a scale of modern high-throughput experimentation that can only be tackled with high performance computing (HPC) and specialized software algorithms called "short read aligners". This paper focuses on the implementation of DNA sequencing as a set of MapReduce programs that accept a DNA data set as a FASTQ file and finally generate a VCF (variant call format) file, which holds the variants for the given DNA data set. MapReduce/Hadoop, together with the Burrows-Wheeler Aligner (BWA) and Sequence Alignment/Map (SAM) tools, is fully utilized to provide utilities for manipulating alignments, including sorting, merging, indexing, and generating alignments. The Map-Sort-Reduce process is designed to suit a Hadoop framework in which each cluster is a traditional N-node Hadoop cluster, so as to exploit all of the Hadoop features such as HDFS, program management and fault tolerance. The Map step runs multiple parallel instances of the short read alignment algorithm (Bowtie): the ordered list of sequence reads is used as input tuples and the output tuples are the alignments of the short reads. In the Reduce step, many parallel instances of the Short Oligonucleotide Analysis Package for SNP (SOAPsnp) algorithm run in the cluster: input tuples are sorted alignments for a partition and the output tuples are SNP calls. Results are stored via HDFS and then archived in SOAPsnp format. The proposed framework enables extremely fast discovery of somatic mutations, inference of population genetic parameters, and association tests directly on sequencing data without explicit genotyping or linkage-based imputation. It also demonstrates that this method achieves comparable
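
    A schematic of the Map-Sort-Reduce shape described above, with trivial stand-ins for the Bowtie alignment and SOAPsnp calling steps; the real pipeline runs those tools under Hadoop, so everything below is an assumption-laden toy.

        def map_phase(reads, align):
            """Map: align each short read (Bowtie in the paper) and emit
            (partition_key, alignment) tuples."""
            for read in reads:
                alignment = align(read)
                yield alignment["chrom"], alignment

        def reduce_phase(partitions, call_snps):
            """Reduce: sort each partition's alignments by position and
            call variants (SOAPsnp in the paper)."""
            for chrom, alignments in partitions.items():
                ordered = sorted(alignments, key=lambda a: a["pos"])
                yield chrom, call_snps(ordered)

        # Toy demo with stand-in aligner and variant caller
        reads = ["ACGT", "TTAG"]
        align = lambda r: {"chrom": "chr1", "pos": hash(r) % 100, "read": r}
        partitions = {}
        for chrom, aln in map_phase(reads, align):
            partitions.setdefault(chrom, []).append(aln)
        print(dict(reduce_phase(partitions, len)))  # e.g. {'chr1': 2}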

  16. On the classification of complex vector bundles of stable rank

    Indian Academy of Sciences (India)

    ... the tuples of cohomology classes on a compact, complex manifold, corresponding to the Chern classes of a complex vector bundle of stable rank. This classification becomes more effective on generalized flag manifolds, where the Lie ...

  17. Relaxations of semiring constraint satisfaction problems

    CSIR Research Space (South Africa)

    Leenen, L

    2007-03-01

    Full Text Available The Semiring Constraint Satisfaction Problem (SCSP) framework is a popular approach for the representation of partial constraint satisfaction problems. In this framework preferences can be associated with tuples of values of the variable domains...

  18. Effective Use of Java Data Objects in Developing Database Applications; Advantages and Disadvantages

    National Research Council Canada - National Science Library

    Zilidis, Paschalis

    2004-01-01

    .... The major disadvantage of this approach is the well-known impedance mismatch in which some form of mapping is required to connect the objects in the frontend and the relational tuples in the backend. Java Data Objects (JDO...

  19. Evaluation on Collaborative Satisfaction for Project Management Team in Integrated Project Delivery Mode

    Science.gov (United States)

    Zhang, L.; Li, Y.; Wu, Q.

    2013-05-01

    Integrated Project Delivery (IPD) is a newly developed project delivery approach for construction projects, and the level of collaboration of the project management team is crucial to the success of its implementation. Existing research has shown that collaborative satisfaction is one of the key indicators of team collaboration. By reviewing the literature on team collaborative satisfaction and taking into consideration the characteristics of IPD projects, this paper summarizes the factors that influence the collaborative satisfaction of an IPD project management team. Based on these factors, this research develops a fuzzy linguistic method to effectively evaluate the level of team collaborative satisfaction, adopting 2-tuple linguistic variables and 2-tuple linguistic hybrid average operators to enhance the objectivity and accuracy of the evaluation. The paper demonstrates the practicality and effectiveness of the method through a case study.
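
    A minimal sketch of the 2-tuple linguistic representation this record (and several later ones) relies on: Δ maps an aggregated score to a (label, symbolic translation) pair and Δ⁻¹ inverts it, so aggregation loses no information. The five-label scale, scores and weights are illustrative, not from the paper.

        labels = ["very low", "low", "medium", "high", "very high"]

        def to_two_tuple(beta):
            """Delta: map a score beta in [0, len(labels)-1] to a 2-tuple
            (linguistic label, symbolic translation alpha in [-0.5, 0.5))."""
            i = int(round(beta))
            return labels[i], beta - i

        def from_two_tuple(label, alpha):
            """Inverse Delta: recover the numeric value."""
            return labels.index(label) + alpha

        # 2-tuple weighted average of assessments given on the label scale
        scores = [3.0, 4.0, 2.0]          # 'high', 'very high', 'medium'
        weights = [0.5, 0.3, 0.2]
        beta = sum(s * w for s, w in zip(scores, weights))
        print(to_two_tuple(beta))          # ('high', ~0.1) -> nothing lost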

  20. Modification Semantics in Now-Relative Databases

    DEFF Research Database (Denmark)

    Torp, Kristian; Jensen, Christian Søndergaard; Snodgrass, R. T.

    2004-01-01

    Most real-world databases record time-varying information. In such databases, the notion of "the current time," or NOW, occurs naturally and prominently. For example, when capturing the past states of a relation using begin and end time columns, tuples that are part of the current state have some past time as their begin time and NOW as their end time. While the semantics of such variable databases has been described in detail and is well understood, the modification of variable databases remains unexplored. This paper defines the semantics of modifications involving the variable NOW. More specifically, the problems with modifications in the presence of NOW are explored, illustrating that the main problems are with modifications of tuples that reach into the future. The paper defines the semantics of modifications (including insertions, deletions, and updates) of databases without NOW, with NOW...

  1. «Cual doctora en cielo graduada... ». La poesia femenina per als certàmens literaris amb motiu de la beatificació i canonització de Teresa de Jesús (València, 1614 i 1621; Barcelona, 1614)

    Directory of Open Access Journals (Sweden)

    Verònica Zaragoza Gómez

    2015-12-01

    Full Text Available The aim of this study is to examine the poems contributed by women to the three literary contests held in Valencia and Barcelona on the occasion of the beatification (1614) and subsequent canonization (1622) of Teresa of Jesus, two key moments in the consolidation of the fame of the Carmelite from Ávila, which made possible the public display and printing of women's poetry. We also analyze Catalan female participation in the contest held in Zaragoza in 1614 for the beatification. The study takes a female perspective, reviewing eighteen poems signed by women that have come down to us through the extensive printed accounts of the contests. In commenting on the poems, we analyze the general perception these women had of Teresa of Jesus and focus on some of the commonplace images of the life and work of the founder. We also examine the biographical documentation of the authors, sixteen occasional poets known only by a first name and surname(s) printed in the rubric, and offer a brief sketch of the female reception of the figure of the Carmelite from Ávila in the Catalan-speaking world.

  2. Exploiting differentiated tuple distribution in shared data spaces

    NARCIS (Netherlands)

    Russello, G.; Chaudron, M.R.V.; Steen, van M.; Danelutto, M.; Vanneschi, M.

    2004-01-01

    The shared data space model has proven to be an effective paradigm for building distributed applications. However, building an efficient distributed implementation remains a challenge. A plethora of different implementations exists. Each of them has a specific policy for distributing data across

  3. Pathfinder: XQuery - The Relational Way

    NARCIS (Netherlands)

    Boncz, Peter; Grust, Torsten; van Keulen, Maurice; Manegold, Stefan; Rittinger, Jan; Teubner, Jens; Boehm, K.; Jensen, C.S.; Haas, L.M.; Kersten, M.L.; Larson, P-K.; Ooi, B.C.

    Relational query processors are probably the best understood (as well as the best engineered) query engines available today. Although carefully tuned to process instances of the relational model (tables of tuples), these processors can also provide a foundation for the evaluation of "alien"

  4. Extended Set Constraints and Tree Grammar Abstraction of Programs

    DEFF Research Database (Denmark)

    Rosendahl, Mads; Gallagher, John Patrick

    2011-01-01

    Set constraints are relations between sets of ground terms or trees. This paper presents two main contributions: firstly we consider an extension of the systems of set constraints to include a tuple constructor, and secondly we construct a simplified solution procedure for set constraints. We...

  5. DSM vs. NSM: CPU Performance Tradeoffs in Block-Oriented Query Processing

    NARCIS (Netherlands)

    M. Zukowski (Marcin); N.J. Nes (Niels); P.A. Boncz (Peter)

    2008-01-01

    Comparisons between the merits of row-wise storage (NSM) and columnar storage (DSM) are typically made with respect to the persistent storage layer of database systems. In this paper, however, we focus on the CPU efficiency tradeoffs of tuple representations inside the query

  6. Distribution of squares modulo a composite number

    OpenAIRE

    Aryan, Farzad

    2015-01-01

    In this paper we study the distribution of squares modulo a square-free number $q$. We also look at inverse questions for the large sieve in the distribution aspect and we make improvements on existing results on the distribution of $s$-tuples of reduced residues.

  7. Staircase Join: Teach a Relational DBMS to Watch its (Axis) Steps

    NARCIS (Netherlands)

    Freytag, J.C.; Grust, Torsten; Lockemann, P.C.; van Keulen, Maurice; Teubner, J.; Abiteboul, S.; Carey, M.; Selinger, P.; Heuer, A.

    Relational query processors derive much of their effectiveness from the awareness of specific table properties like sort order, size, or absence of duplicate tuples. This text applies (and adapts) this successful principle to database-supported XML and XPath processing: the relational system is made

  8. Converting nested algebra expressions into flat algebra expressions

    NARCIS (Netherlands)

    Paredaens, J.; Van Gucht, D.

    1992-01-01

    Nested relations generalize ordinary flat relations by allowing tuple values to be either atomic or set valued. The nested algebra is a generalization of the flat relational algebra to manipulate nested relations. In this paper we study the expressive power of the nested algebra relative to its

  9. Multitemporal conditional schema evolution

    DEFF Research Database (Denmark)

    Jensen, Ole Guttorm; Bøhlen, Michael Hanspeter

    2004-01-01

    Schema evolution is the ability of the database to respond to changes in the real world by allowing the schema to evolve.  The multidimensional conditionally evolving schema (MD-CES) is a conceptual model for conditional schema changes, which modify the schema of those tuples that satisfy...

  10. Tree Awareness for Relational DBMS Kernels: Staircase Join

    NARCIS (Netherlands)

    Grust, Torsten; van Keulen, Maurice; Blanken, Henk; Grabs, T.; Schek, H-J.; Schenkel, R.; Weikum, G.

    2003-01-01

    Relational database management systems (RDBMSs) derive much of their efficiency from the versatility of their core data structure: tables of tuples. Such tables are simple enough to allow for an efficient representation on all levels of the memory hierarchy, yet sufficiently generic to host a wide

  11. Case series

    African Journals Online (AJOL)

    raoul

    31 May 2011 ... Microdeletion syndromes (Williams syndrome and the deletion syndrome .... For observations 2 and 3 (propositus and parents): Vysis DiGeorge Region Probe - LSI TUPLE 1 SpectrumOrange .... 2010 Apr;17(4):431-4.

  12. AspectKE*:Security Aspects with Program Analysis for Distributed Systems

    DEFF Research Database (Denmark)

    2010-01-01

    AspectKE* is the first distributed AOP language based on a tuple space system. It is designed to enforce security policies to applications containing untrusted processes. One of the key features is the high-level predicates that extract results of static program analysis. These predicates provide...

  13. Twin ''Fano-Snowflakes'' over the Smallest Ring of Ternions

    Directory of Open Access Journals (Sweden)

    Metod Saniga

    2008-06-01

    Full Text Available Given a finite associative ring with unity, R, any free (left) cyclic submodule (FCS) generated by a unimodular (n + 1)-tuple of elements of R represents a point of the n-dimensional projective space over R. Suppose that R also features FCSs generated by (n + 1)-tuples that are not unimodular: what kind of geometry can be ascribed to such FCSs? Here, we (partially) answer this question for n = 2 when R is the (unique) non-commutative ring of order eight. The corresponding geometry is dubbed a "Fano-Snowflake" due to its diagrammatic appearance and the fact that it contains the Fano plane in its center. There exist, in fact, two such configurations – each being tied to either of the two maximal ideals of the ring – which have the Fano plane in common and can, therefore, be viewed as twins. Potential relevance of these noteworthy configurations to quantum information theory and stringy black holes is also outlined.

  14. Estimates of Storage Capacity of Multilayer Perceptron with Threshold Logic Hidden Units.

    Science.gov (United States)

    Kowalczyk, Adam

    1997-11-01

    We estimate the storage capacity of a multilayer perceptron with n inputs, h(1) threshold logic units in the first hidden layer and 1 output. We show that if the network can memorize 50% of all dichotomies of a randomly selected N-tuple of points of R(n) with probability 1, then N is at most 2(nh(1)+1), giving a memory capacity (in the sense of Cover) between nh(1)+1 and 2(nh(1)+1) input patterns, and for the most efficient networks in this class between 1 and 2 input patterns per connection. Comparing these results with recent estimates of the VC-dimension, we find that, in contrast to the single neuron case, the VC-dimension exceeds the capacity for sufficiently large n and h(1). The results are based on the derivation of an explicit expression for the number of dichotomies which can be implemented by such a network for a special class of N-tuples of input patterns which has a positive probability of being randomly chosen.

  15. Tensors and second quantization

    NARCIS (Netherlands)

    Graaf, de J.

    2010-01-01

    Starting from a pair of vector spaces (formula) an inner product space and (formula), the space of linear mappings (formula), we construct a six-tuple (formula). Here (formula) is again an inner product space and (formula) the space of its linear mappings. It is required that (formula), as linear

  16. Entity resolution for uncertain data

    NARCIS (Netherlands)

    Ayat, N.; Akbarinia, R.; Afsarmanesh, H.; Valduriez, P.

    2012-01-01

    Entity resolution (ER), also known as duplicate detection or record matching, is the problem of identifying the tuples that represent the same real world entity. In this paper, we address the problem of ER for uncertain data, which we call ERUD. We propose two different approaches for the ERUD

  17. Involutive distributions of operator-valued evolutionary vector fields and their affine geometry

    NARCIS (Netherlands)

    Kiselev, A.V.; van de Leur, J.W.

    2010-01-01

    We generalize the notion of a Lie algebroid over infinite jet bundle by replacing the variational anchor with an N-tuple of differential operators whose images in the Lie algebra of evolutionary vector fields of the jet space are subject to collective commutation closure. The linear space of such

  18. Optimal shortening of uniform covering arrays.

    Directory of Open Access Journals (Sweden)

    Jose Torres-Jimenez

    Full Text Available Software test suites based on the concept of interaction testing are very useful for testing software components in an economical way. Test suites of this kind may be created using mathematical objects called covering arrays. A covering array, denoted by CA(N; t, k, v), is an N × k array over a set of v symbols with the property that every N × t sub-array covers all t-tuples of the v symbols at least once. Covering arrays can be used to test systems in which failures occur as a result of interactions among components or subsystems. They are often used in areas such as hardware Trojan detection, software testing, and network design. Because system testing is expensive, it is critical to reduce the amount of testing required. This paper addresses the Optimal Shortening of Covering ARrays (OSCAR) problem, an optimization problem whose objective is to construct, from an existing covering array matrix of uniform level, an array with dimensions (N − δ) × (k − Δ) such that the number of missing t-tuples is minimized. Two applications of the OSCAR problem are (a) to produce smaller covering arrays from larger ones and (b) to obtain quasi-covering arrays (covering arrays in which the number of missing t-tuples is small) to be used as input to a meta-heuristic algorithm that produces covering arrays. In addition, it is proven that the OSCAR problem is NP-complete, and twelve different algorithms are proposed to solve it. An experiment was performed on 62 problem instances, and the results demonstrate the effectiveness of solving the OSCAR problem to facilitate the construction of new covering arrays.
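
    A small self-contained check of the covering property that also counts the missing t-tuples OSCAR minimizes; the example array is a standard strength-2 binary covering array, not one from the paper.

        from itertools import combinations

        def missing_tuples(array, t, v):
            """Count uncovered t-tuples: for every choice of t columns,
            each of the v**t value combinations must appear in some row."""
            k = len(array[0])
            missing = 0
            for cols in combinations(range(k), t):
                seen = {tuple(row[c] for c in cols) for row in array}
                missing += v ** t - len(seen)
            return missing

        # CA(4; 2, 3, 2): 4 rows, 3 binary columns, strength 2
        ca = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
        print(missing_tuples(ca, t=2, v=2))  # 0 -> every pair is covered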

  19. Moduli for decorated tuples of sheaves and representation spaces ...

    Indian Academy of Sciences (India)

    We extend the scope of a former paper to vector bundle problems involving ... the machinery of algebraic geometry to the gauge theoretic moduli space for the pairs ... A nice example of a classification problem which can be formulated in our ... Numerous famous special cases of this construction have been studied in the ...

  20. Support-Less Association Rule Mining Using Tuple Count Cube

    OpenAIRE

    Qin Ding; William Perrizo

    2007-01-01

    Association rule mining is one of the important tasks in data mining and knowledge discovery (KDD). The traditional task of association rule mining is to find all the rules with high support and high confidence. In some applications, we are interested in finding high confidence rules even though the support may be low. This type of problem differs from the traditional association rule mining problem; hence, it is called support-less association rule mining. Existing algorithms for association...

  1. Event analysis using a massively parallel processor

    International Nuclear Information System (INIS)

    Bale, A.; Gerelle, E.; Messersmith, J.; Warren, R.; Hoek, J.

    1990-01-01

    This paper describes a system for performing histogramming of n-tuple data at interactive rates using a commercial SIMD processor array connected to a workstation running the well-known Physics Analysis Workstation software (PAW). Results indicate that an order-of-magnitude performance improvement over current RISC technology is easily achievable.

  2. HippoDraw

    International Nuclear Information System (INIS)

    Gravina, M.F.; Kunz, P.F.; Rensing, P.E.

    1992-09-01

    HippoDraw is a NeXTSTEP application for viewing statistical data. It has several unique features which make viewing data distributions highly interactive. It also incorporates a set of simple drawing tools. HippoDraw is written in Objective-C and uses the Hippoplotamus library package to handle the n-tuples and displays

  3. Amoeba-inspired Tug-of-War algorithms for exploration-exploitation dilemma in extended Bandit Problem.

    Science.gov (United States)

    Aono, Masashi; Kim, Song-Ju; Hara, Masahiko; Munakata, Toshinori

    2014-03-01

    The true slime mold Physarum polycephalum, a single-celled amoeboid organism, is capable of efficiently allocating a constant amount of intracellular resource to its pseudopod-like branches that best fit the environment where dynamic light stimuli are applied. Inspired by this resource allocation process, the authors formulated a concurrent search algorithm, called the Tug-of-War (TOW) model, for maximizing the profit in the multi-armed Bandit Problem (BP). A player (gambler) of the BP should decide as quickly and accurately as possible which slot machine to invest in out of the N machines, and faces an "exploration-exploitation dilemma": a trade-off between the speed and accuracy of the decision making, which are conflicting objectives. The TOW model maintains a constant intracellular resource volume while collecting environmental information by concurrently expanding and shrinking its branches. The conservation law entails a nonlocal correlation among the branches, i.e., a volume increment in one branch is immediately compensated by volume decrement(s) in the other branch(es). Owing to this nonlocal correlation, the TOW model can efficiently manage the dilemma. In this study, we extend the TOW model to apply it to an extended variant of BP, the Extended Bandit Problem (EBP), which is the problem of selecting the best M-tuple of the N machines. We demonstrate that the extended TOW model exhibits better performance for 2-tuple-3-machine and 2-tuple-4-machine instances of EBP compared with the extended versions of well-known algorithms for BP, the ϵ-Greedy and SoftMax algorithms, particularly in terms of the short-term decision-making capability that is essential for the survival of the amoeba in a hostile environment. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
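
    The record describes TOW only at a high level, so the following is a loose sketch of its conservation idea, not the authors' actual dynamics: a fixed total "volume" is shared among arms, and rewarding the currently chosen M-tuple grows those arms exactly at the expense of the rest. The learning rate, tie-breaking and clipping are all assumptions.

        import random

        def tow_select(n_arms, m, probs, steps=1000, seed=0):
            """Loose TOW-style sketch (requires m < n_arms): branch volumes
            share a fixed total; gains of the chosen M-tuple are exactly
            compensated by losses of the remaining arms."""
            rng = random.Random(seed)
            volume = [1.0 / n_arms] * n_arms       # conserved resource
            for _ in range(steps):
                chosen = sorted(range(n_arms), key=lambda a: -volume[a])[:m]
                reward = sum(rng.random() < probs[a] for a in chosen) / m
                delta = 0.01 * (reward - 0.5)
                for a in range(n_arms):
                    volume[a] += delta if a in chosen \
                        else -delta * m / (n_arms - m)
                # clip to keep volumes positive (slightly breaks conservation)
                volume = [max(v, 1e-6) for v in volume]
            return chosen

        print(tow_select(4, 2, probs=[0.2, 0.8, 0.6, 0.3]))  # likely [1, 2]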

  4. Distortion-Free Watermarking Approach for Relational Database Integrity Checking

    Directory of Open Access Journals (Sweden)

    Lancine Camara

    2014-01-01

    Full Text Available Nowadays, the Internet is becoming a common way of accessing databases. Such data are exposed to various types of attack that aim to confuse ownership proofing or content protection. In this paper, we propose a new approach based on fragile zero watermarking for the authentication of numeric relational data. Contrary to some previous database watermarking techniques, which introduce distortions into the original database and may not preserve its usability constraints, our approach simply generates the watermark from the original database. First, the adopted method partitions the database relation into independent square matrix groups. Then, group-based watermarks are securely generated and registered with a trusted third party. Integrity verification is performed by computing the determinant and the diagonal's minor for each group. As a result, tampering can be localized up to the attribute group level. Theoretical and experimental results demonstrate that the proposed technique is resilient against tuple insertion, tuple deletion, and attribute value modification attacks. Furthermore, comparison with a recent related effort shows that our scheme performs better in detecting multifaceted attacks.
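
    A toy sketch of the group-based zero-watermark idea as the abstract describes it (partition into square groups, register a determinant plus a diagonal minor per group); the group size, the rounding and the single-block example are assumptions.

        import numpy as np

        def group_watermark(relation, g=3):
            """Split the numeric relation into g x g groups and compute a
            (determinant, minor) pair per group as the zero watermark."""
            marks = []
            for i in range(0, len(relation) - g + 1, g):
                block = np.array(relation[i:i + g], dtype=float)[:, :g]
                det = np.linalg.det(block)
                minor = np.linalg.det(block[1:, 1:])   # a diagonal minor
                marks.append((round(det, 6), round(minor, 6)))
            return marks  # registered with a trusted third party, not stored in the data

        original = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
        registered = group_watermark(original)
        tampered = [[1, 2, 3], [4, 5, 6], [7, 9, 10]]
        print(registered == group_watermark(tampered))  # False -> tampering detected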

  5. Totally optimal decision rules

    KAUST Repository

    Amin, Talha

    2017-11-22

    Optimality of decision rules (patterns) can be measured in many ways. One of these is referred to as length. Length signifies the number of terms in a decision rule and is optimally minimized. Another, coverage represents the width of a rule’s applicability and generality. As such, it is desirable to maximize coverage. A totally optimal decision rule is a decision rule that has the minimum possible length and the maximum possible coverage. This paper presents a method for determining the presence of totally optimal decision rules for “complete” decision tables (representations of total functions in which different variables can have domains of differing values). Depending on the cardinalities of the domains, we can either guarantee for each tuple of values of the function that totally optimal rules exist for each row of the table (as in the case of total Boolean functions where the cardinalities are equal to 2) or, for each row, we can find a tuple of values of the function for which totally optimal rules do not exist for this row.

  7. Convex trace functions of several variables

    DEFF Research Database (Denmark)

    Hansen, Frank

    2002-01-01

    We prove that the function (x1,...,xk) ↦ Tr(f(x1,...,xk)), defined on k-tuples of symmetric matrices of order (n1,...,nk) in the domain of f, is convex for any convex function f of k variables. The matrix f(x1,...,xk) is defined by the functional calculus for functions of several variables, and it ...

  8. AspectKE*

    DEFF Research Database (Denmark)

    Yang, Fan; Masuhara, Hidehiko; Aotani, Tomoyuki

    2010-01-01

    Enforcing security policies to distributed systems is difficult, in particular, when a system contains untrusted components. We designed AspectKE*, a distributed AOP language based on a tuple space, to tackle this issue. In AspectKE*, aspects can enforce access control policies that depend ... AspectKE*, and demonstrate the usefulness of AspectKE* through a security aspect for a distributed chat system.

  9. DATA MINING METHODOLOGY FOR DETERMINING THE OPTIMAL MODEL OF COST PREDICTION IN SHIP INTERIM PRODUCT ASSEMBLY

    Directory of Open Access Journals (Sweden)

    Damir Kolich

    2016-03-01

    Full Text Available In order to accurately predict the costs of the thousands of interim products that are assembled in shipyards, skilled engineers must develop detailed Gantt charts for each interim product separately, which takes many hours. It is therefore helpful to develop a prediction tool that estimates the cost of interim products accurately and quickly without the need for skilled engineers; this drives down shipyard costs and improves competitiveness. Data mining is used extensively for developing prediction models in other industries, and since ships consist of thousands of interim products, it is logical to develop a data mining methodology for a shipyard or any other manufacturing industry where interim products are produced. The methodology involves analysis of existing interim products and data collection. Pre-processing and principal component analysis are done to make the data "user-friendly" for later prediction processing and the development of models that are both accurate and robust. The support vector machine is demonstrated to be the better model when there is a smaller number of tuples; however, as the number of tuples increases to over 10,000, the artificial neural network model is recommended.
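
    A sketch of the SVM-versus-ANN comparison, assuming scikit-learn is available; the synthetic regression data stands in for the shipyard features, which the record does not provide, and the hyperparameters are arbitrary.

        from sklearn.svm import SVR
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import cross_val_score
        from sklearn.datasets import make_regression

        # Hypothetical stand-in for interim-product features and costs
        X, y = make_regression(n_samples=500, n_features=8, noise=10,
                               random_state=0)

        for name, model in [("SVM", SVR(C=100.0)),
                            ("ANN", MLPRegressor(hidden_layer_sizes=(32,),
                                                 max_iter=2000,
                                                 random_state=0))]:
            score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
            print(f"{name}: mean R^2 = {score:.3f}")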

  10. An Environmental Monitoring System for Managing Spatiotemporal Sensor Data over Sensor Networks

    Directory of Open Access Journals (Sweden)

    Keun Ho Ryu

    2012-03-01

    Full Text Available In a wireless sensor network, sensors collect data about natural phenomena and transmit them to a server in real time. Many studies have been conducted on processing continuous queries in an approximate form, but this approach is difficult to apply to environmental applications which require the correct data to be stored. In this paper, we propose a weather monitoring system for handling and storing the sensor data stream in real time in order to support continuous spatial and/or temporal queries. In our system, we exploit two time-based insertion methods to store the sensor data stream and reduce the number of managed tuples, without losing any of the raw data which are useful for queries, by using the sensors' temporal attributes. In addition, we offer a method for reducing the cost of the join operations used in processing spatiotemporal queries by filtering irrelevant sensors out of the query range before performing the join. In the performance evaluation, the number of tuples obtained from the data stream is reduced by about 30% in comparison to a naïve approach, thereby decreasing the query execution time.
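
    A minimal sketch of the pre-join filtering step: sensors outside the query range are dropped before any join is attempted. The rectangular range and the sensor records are invented for illustration.

        def filter_sensors(sensors, query_box):
            """Keep only sensors whose position falls inside the query
            range, so the subsequent join touches fewer tuples."""
            (xmin, ymin), (xmax, ymax) = query_box
            return [s for s in sensors
                    if xmin <= s["x"] <= xmax and ymin <= s["y"] <= ymax]

        sensors = [{"id": 1, "x": 2.0, "y": 3.0},
                   {"id": 2, "x": 9.0, "y": 9.0}]
        relevant = filter_sensors(sensors, ((0, 0), (5, 5)))
        print([s["id"] for s in relevant])  # [1] -> only sensor 1 joins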

  11. Multi-language Struct Support in Babel

    Energy Technology Data Exchange (ETDEWEB)

    Ebner, D; Prantl, A; Epperly, T W

    2011-03-22

    Babel is an open-source language interoperability framework tailored to the needs of high-performance scientific computing. As an integral element of the Common Component Architecture (CCA) it is used in a wide range of research projects. In this paper we describe how we extended Babel to support interoperable tuple data types (structs). Structs are a common idiom in scientific APIs; they are an efficient way to pass tuples of nonuniform data between functions, and are supported natively by most programming languages. Using our extended version of Babel, developers of scientific code can now pass structs as arguments between functions implemented in any of the supported languages. In C, C++ and Fortran 2003, structs can be passed without the overhead of data marshaling or copying, providing language interoperability at minimal cost. Other supported languages are Fortran 77, Fortran 90, Java and Python. We will show how we designed a struct implementation that is interoperable with all of the supported languages and present benchmark data comparing the performance of all language bindings, highlighting the differences between languages that offer native struct support and an object-oriented interface with getter/setter methods.

  12. Uncertain Quality Function Deployment Using a Hybrid Group Decision Making Model

    Directory of Open Access Journals (Sweden)

    Ze-Ling Wang

    2016-11-01

    Full Text Available Quality function deployment (QFD) is a widely used quality system tool for translating customer requirements (CRs) into the engineering design requirements (DRs) of products or services. The conventional QFD analysis, however, has been criticized as having limitations in the assessment of relationships between CRs and DRs, the determination of CR weights and the prioritization of DRs. This paper aims to develop a new hybrid group decision-making model based on hesitant 2-tuple linguistic term sets and an extended QUALIFLEX (qualitative flexible multiple criteria method) approach for handling QFD problems with incomplete weight information. First, hesitant linguistic term sets are combined with interval 2-tuple linguistic variables to express various uncertainties in the assessment information of QFD team members. Borrowing the idea of grey relational analysis (GRA), a multiple objective optimization model is constructed to determine the relative weights of CRs. Then, an extended QUALIFLEX approach with an inclusion comparison method is suggested to determine the ranking of the DRs identified in QFD. Finally, an analysis of a market segment selection problem is conducted to demonstrate and validate the proposed QFD approach.

  13. Fast Failure Recovery for Main-Memory DBMSs on Multicores

    OpenAIRE

    Wu, Yingjun; Guo, Wentian; Chan, Chee-Yong; Tan, Kian-Lee

    2016-01-01

    Main-memory database management systems (DBMS) can achieve excellent performance when processing massive volumes of on-line transactions on modern multi-core machines. But existing durability schemes, namely tuple-level and transaction-level logging-and-recovery mechanisms, either degrade the performance of transaction processing or slow down the process of failure recovery. In this paper, we show that, by exploiting application semantics, it is possible to achieve speedy failure recovery wit...

  14. Emergent intelligent properties of progressively structured pattern recognition nets

    Energy Technology Data Exchange (ETDEWEB)

    Aleksander, I

    1983-07-01

    The n-tuple recognition net is seen as a building brick of a progression of network structures. The emergent intelligent properties of such systems are discussed. They include the amplification of confidence for the recognition of images that differ in small detail, a short-term memory of the last seen image, sequence sensitivity, sequence acceptance and saccadic inspection as an aid in scene analysis. 12 references.

  15. Multidimensional cellular automata and generalization of Fekete's lemma

    Directory of Open Access Journals (Sweden)

    Silvio Capobianco

    2008-08-01

    Full Text Available Fekete's lemma is a well-known combinatorial result on number sequences: we extend it to functions defined on $d$-tuples of integers. As an application of the new variant, we show that nonsurjective $d$-dimensional cellular automata are characterized by loss of arbitrarily much information on finite supports, at a growth rate greater than that of the support's boundary determined by the automaton's neighbourhood index.
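
    For reference, the classical one-dimensional statement that the paper generalizes to functions on $d$-tuples, written out in LaTeX (this is textbook material, not quoted from the paper):

        \text{If } a_{m+n} \le a_m + a_n \text{ for all } m, n \ge 1,
        \quad\text{then}\quad
        \lim_{n \to \infty} \frac{a_n}{n} = \inf_{n \ge 1} \frac{a_n}{n}
        \quad (\text{possibly } -\infty).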

  16. What is the Burnside Problem?

    Indian Academy of Sciences (India)

    all eigenvalues of h are 0. Hence x^(-1)y is 1 - h, which is clearly unipotent. This means x = y. Hence the association g ↦ (tr(g_1 g), ..., tr(g_d g)) is one-to-one. As the traces of elements of G take at most r values, the set of d-tuples above has cardinality at most r^d ≤ r^(n^2). This completes the proof. One can refine the corollary by dropping the ...

  17. On power series expansions of the S-resolvent operator and the Taylor formula

    Science.gov (United States)

    Colombo, Fabrizio; Gantner, Jonathan

    2016-12-01

    The S-functional calculus is based on the theory of slice hyperholomorphic functions and it defines functions of n-tuples of not necessarily commuting operators or of quaternionic operators. This calculus relies on the notion of S-spectrum and of S-resolvent operator. Since most of the properties that hold for the Riesz-Dunford functional calculus extend to the S-functional calculus, it can be considered its noncommutative version. In this paper we show that the Taylor formula of the Riesz-Dunford functional calculus can be generalized to the S-functional calculus. The proof is not a trivial extension of the classical case because there are several obstructions due to the non commutativity of the setting in which we work that have to be overcome. To prove the Taylor formula we need to introduce a new series expansion of the S-resolvent operators associated to the sum of two n-tuples of operators. This result is a crucial step in the proof of our main results, but it is also of independent interest because it gives a new series expansion for the S-resolvent operators. This paper is addressed to researchers working in operator theory and in hypercomplex analysis.

  18. Computing with words to feasibility study of software projects

    Directory of Open Access Journals (Sweden)

    Marieta Peña Abreu

    2017-02-01

    Full Text Available Objective: This paper proposes a method to analyze the technical, commercial and social feasibility of software projects in environments of uncertainty. It allows working with multiple experts and multiple criteria and facilitates decision-making. Method: The proposal contains two phases: first, the necessary information is collected; second, projects are evaluated using the 2-tuple linguistic representation model. The experts are selected by analyzing their curricular summaries. The evaluation criteria are defined using the Focus Group technique and weighted in the interval (0,1) according to their importance. Three domains are offered for expressing preferences: numeric, interval-valued and linguistic. For aggregation, the extended arithmetic mean and the extended weighted average are used, preventing the loss of information. A 2-tuple (feasibility, precision) is obtained as a result for each project. Results: The evaluation of project P1 was a very high feasibility with -0.33 precision; project P2 obtained a high feasibility with 0.38 precision; and project P3 achieved a medium feasibility with -0.21 precision. Conclusions: This method is suitable for software project feasibility analysis with multiple experts and criteria in environments of uncertainty. It handles heterogeneous assessments without loss of information, and its results are consistent and useful for decision makers.

  19. Optimization of Extended Relational Database Systems

    Science.gov (United States)

    1986-07-23

    control functions are integrated into a single system in a homogeneous way. As a first example, consider previous work in supporting various semantic ... sizes are reduced and, consequently, the number of materializations that will be needed is also lower. For example, in the above query tuple ... retrieve (EMP.name) where EMP.hobbies.instrument = "violin" When the various entries in the hobbies field are materialized, only those queries that ...

  20. Characterising landscape variation through spatial folksonomies

    OpenAIRE

    Derungs, Curdin; Purves, Ross S

    2016-01-01

    Describing current, past and future landscapes for inventory and policy making purposes requires classifications capturing variation in, for example, land use and land cover. Typical land cover classifications for such purposes result from a top-down process and rely on expert conceptualisations, and thus provide limited space for incorporating more widely held views of key landscape elements. In this paper we introduce the notion of spatial folksonomies, which we define as a tuple linking ...

  1. Pitfalls of the most commonly used models of context dependent substitution

    Directory of Open Access Journals (Sweden)

    Huttley Gavin A

    2008-12-01

    Full Text Available Background: Neighboring nucleotides exert a striking influence on mutation, with the hypermutability of CpG dinucleotides in many genomes being an exemplar. Among the approaches employed to measure the relative importance of sequence neighbors on molecular evolution have been continuous-time Markov process models for substitutions that treat sequences as a series of independent tuples; the most widely used examples are the codon substitution models. We evaluated the suitability of derivatives of the nucleotide frequency weighted (hereafter NF) and tuple frequency weighted (hereafter TF) models for measuring sequence context dependent substitution. Critical properties we address are their relationships to an independent nucleotide process and the robustness of parameter estimation to changes in sequence composition. We then consider the impact on inference concerning dinucleotide substitution processes from application of these two forms to intron sequence alignments from primates. Results: We prove that the NF form always nests the independent nucleotide process and that this is not true for the TF form. As a consequence, using TF to study context effects can be misleading, which is shown by both theoretical calculations and simulations. We describe a simple example where a context parameter estimated under TF is confounded with composition terms unless all sequence states are equi-frequent. We illustrate this for the dinucleotide case by simulation under a nucleotide model, showing that the TF form identifies a CpG effect when none exists. Our analysis of primate introns revealed that the effect of nucleotide neighbors is over-estimated under TF compared with NF, and parameter estimates for a number of contexts are strikingly discordant between the two model forms. Conclusion: Our results establish that the NF form should be used for analysis of independent-tuple context dependent processes. Although neighboring effects in general are

  2. Estimation method of the cohesion degree for the users’ profiles of social network based on open data

    Directory of Open Access Journals (Sweden)

    Valentina A. Kataeva

    2017-01-01

    Full Text Available The purpose of this research was to study the existing methods for determining the degree of cohesion of two users of a social network, to identify their shortcomings, and to develop a new method. The research identified shortcomings of existing methods and proposed a new method for assessing the degree of cohesion of social network profiles based on open data from a social network. The degree of cohesion of users' profiles is understood as the probability of communication (interaction) between the profile owners in real life; it is calculated for two users of the social network and expressed as a percentage. The method is demonstrated on the example of the social network "In contact" (VKontakte) and comprises the following stages. First, data about users of the social network are collected via the API and tuples of profile characteristics are formed; a tuple of profile characteristics is the data collected for each user, stored in structured form. The next stage is the analysis of the collected information: for each characteristic in the tuple, i.e., each possible element of user interaction in the social network, a cohesion coefficient is calculated, together with its informativeness, i.e., how important that characteristic is in this social network. At the final stage, the results are produced using the formula for the probability of communication between two users derived during the investigation. The resulting probability of communication between two users can be used to optimize the activities of investigative services and special bodies. In addition, the obtained degree of cohesion of two users can be interpreted as the probability of a channel of information leakage between them. The user of the method can be any private or state organization that cares
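
    A simplified sketch of the final aggregation step as the abstract describes it: per-characteristic cohesion coefficients combined with informativeness weights into a percentage. The binary match coefficients and the example weights are assumptions, not the paper's actual formula.

        def cohesion(profile_a, profile_b, informativeness):
            """Weighted average of per-characteristic cohesion coefficients
            (here simply 1.0 if the two profiles share the value, else 0.0),
            with informativeness as the weights; returned as a percentage."""
            total_w = sum(informativeness.values())
            score = sum(informativeness[f] * (profile_a[f] == profile_b[f])
                        for f in informativeness)
            return 100.0 * score / total_w

        a = {"city": "Moscow", "school": "MSU", "employer": "Acme"}
        b = {"city": "Moscow", "school": "MSU", "employer": "Globex"}
        w = {"city": 0.2, "school": 0.5, "employer": 0.3}
        print(cohesion(a, b, w))  # 70.0 -> 70% probability of real-life contact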

  3. An Algorithm to Solve the Equal-Sum-Product Problem

    OpenAIRE

    Nyblom, M. A.; Evans, C. D.

    2013-01-01

    A recursive algorithm is constructed which finds all solutions to a class of Diophantine equations connected to the problem of determining ordered n-tuples of positive integers satisfying the property that their sum is equal to their product. An examination of the use of binary search trees in implementing the algorithm into a working program is given. In addition, an application of the algorithm for searching for possible extra exceptional values of the equal-sum-product problem is explored after...
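
    The record does not give the authors' recursion, so here is a brute-force sketch of the underlying problem: enumerate nondecreasing n-tuples of positive integers whose sum equals their product. The search bound is an arbitrary cutoff, not part of the paper.

        def equal_sum_product(n, bound=30):
            """Yield ordered n-tuples (a1 <= ... <= an) of positive integers
            with a1 + ... + an == a1 * ... * an, searched up to `bound`."""
            def rec(prefix, start, s, p):
                if len(prefix) == n:
                    if s == p:
                        yield tuple(prefix)
                    return
                for a in range(start, bound + 1):
                    yield from rec(prefix + [a], a, s + a, p * a)
            yield from rec([], 1, 0, 1)

        print(list(equal_sum_product(2)))  # [(2, 2)]
        print(list(equal_sum_product(3)))  # [(1, 2, 3)]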

  4. Modeling and clustering users with evolving profiles in usage streams

    KAUST Repository

    Zhang, Chongsheng; Masseglia, Florent; Zhang, Xiangliang

    2012-01-01

    Today, there is an increasing need for data stream mining technology to discover important patterns on the fly. Existing data stream models and algorithms commonly assume that users' records or profiles in data streams will not be updated or revised once they arrive. Nevertheless, in various applications such as Web usage, the records/profiles of the users can evolve along time. This kind of streaming data evolves in two forms, the streaming of tuples or transactions as in the case of traditional data streams, and more importantly, the evolving of user records/profiles inside the streams. Such data streams bring difficulties to modeling and clustering for exploring users' behaviors. In this paper, we propose three models to summarize this kind of data streams, which are the batch model, the Evolving Objects (EO) model and the Dynamic Data Stream (DDS) model. Through creating, updating and deleting user profiles, these models summarize the behaviors of each user as a profile object. Based upon these models, clustering algorithms are employed to discover interesting user groups from the profile objects. We have evaluated all the proposed models on a large real-world data set, showing that the DDS model summarizes the data streams with evolving tuples more efficiently and effectively, and provides a better basis for clustering users than the other two models. © 2012 IEEE.

  6. Validez y confiabilidad del instrumento de vínculo parental (Parental Bonding Instrument, PBI en una muestra de varones mexicanos

    Directory of Open Access Journals (Sweden)

    Erika Robles Estrada

    2016-01-01

    Full Text Available The objective of this research was to establish the validity and reliability of the Parental Bonding Instrument (PBI) in a sample of Mexican males. In total there were 125 participants. The original instrument, by Parker, Tupling and Brown (1979), measures the perception of parents' behavior and attitudes toward their children up to the age of 16. It was administered collectively with informed consent. Data were analyzed with factor analysis and Cronbach's alpha. The results confirmed the permanence of the instrument's factors: overprotection and care. Reliability was .900. It can be concluded that the instrument is valid and reliable for Mexican male samples.

  7. Cross-language Babel structs—making scientific interfaces more efficient

    International Nuclear Information System (INIS)

    Prantl, Adrian; Epperly, Thomas G W; Ebner, Dietmar

    2013-01-01

    Babel is an open-source language interoperability framework tailored to the needs of high-performance scientific computing. As an integral element of the Common Component Architecture, it is employed in a wide range of scientific applications where it is used to connect components written in different programming languages. In this paper we describe how we extended Babel to support interoperable tuple data types (structs). Structs are a common idiom in (mono-lingual) scientific application programming interfaces (APIs); they are an efficient way to pass tuples of nonuniform data between functions, and are supported natively by most programming languages. Using our extended version of Babel, developers of scientific codes can now pass structs as arguments between functions implemented in any of the supported languages. In C, C++, Fortran 2003/2008 and Chapel, structs can be passed without the overhead of data marshaling or copying, providing language interoperability at minimal cost. Other supported languages are Fortran 77, Fortran 90/95, Java and Python. We will show how we designed a struct implementation that is interoperable with all of the supported languages and present benchmark data to compare the performance of all language bindings, highlighting the differences between languages that offer native struct support and an object-oriented interface with getter/setter methods. A case study shows how structs can help simplify the interfaces of scientific codes significantly. (paper)

  8. Estellés i Maragall: del «silenci estricte» a la «paraula viva»

    Directory of Open Access Journals (Sweden)

    Ferran Carbó

    2018-01-01

    Full Text Available This paper analyzes the intertextual relationship that developed, from the second half of the 1950s, between the Valencian poet Vicent Andrés Estellés and his predecessor Joan Maragall. The publication in 1954 of an anthology of the Modernist poet prepared by Carles Riba made possible Estellés's acquaintance with and first reading of Maragall. The study reviews the presence of references such as the poet's name and surname and the mention of lines from poems like «Cant espiritual» or of the 'living word', among others, through various formulations and in texts from different works such as Llibre de meravelles. It also presents unpublished drafts dealing with a blind cow, which take up Maragall's protagonist.

  9. Combining norms to prove termination

    DEFF Research Database (Denmark)

    Genaim, S.; Codish, M.; Gallagher, John Patrick

    2002-01-01

    Automatic termination analysers typically measure the size of terms by applying norms, which are mappings from terms to the natural numbers. This paper illustrates how to enable the use of size functions defined as tuples of these simpler norm functions. This approach enables us to simplify the probl... of the recursive data-types in the program, is often a suitable choice. We first demonstrate the power of combining norm functions and then the adequacy of combining norms based on regular types.

  10. Discovery of Approximate Differential Dependencies

    OpenAIRE

    Liu, Jixue; Kwashie, Selasi; Li, Jiuyong; Ye, Feiyue; Vincent, Millist

    2013-01-01

    Differential dependencies (DDs) capture the relationships between data columns of relations. They are more general than functional dependencies (FDs); the difference is that DDs are defined on the distances between values of two tuples, not directly on the values. Because of this difference, the algorithms for discovering FDs from data find only special DDs, not all DDs, and are therefore not applicable to DD discovery. In this paper, we propose an algorithm to discover DDs from data fo...

  11. Employing finite-state machines in data integrity problems

    Directory of Open Access Journals (Sweden)

    Malikov Andrey

    2016-01-01

    Full Text Available This paper explores the issue of group integrity of tuple subsets with respect to corporate integrity constraints in relational databases. A solution may be found by applying finite-state machine theory to guarantee the group integrity of data. We present a practical guide to coding such an automaton. After creating SQL queries to manipulate data and control its integrity for real data domains, we study the issue of query performance, determine the level of transaction isolation, and generate query plans.
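
    The abstract promises "a practical guide to coding such an automaton"; in that spirit, here is a minimal table-driven FSM that accepts or rejects a group of tuples according to the order of their states. The transition table encodes a hypothetical business rule, not one from the paper.

        class IntegrityAutomaton:
            """A tuple group satisfies the constraint only if its rows
            arrive in an order the transition table allows."""
            def __init__(self, transitions, start, accepting):
                self.transitions = transitions  # (state, event) -> state
                self.start, self.accepting = start, accepting

            def accepts(self, events):
                state = self.start
                for e in events:
                    state = self.transitions.get((state, e))
                    if state is None:
                        return False      # integrity constraint violated
                return state in self.accepting

        # Hypothetical rule: an order's rows must go created -> paid -> shipped
        fsm = IntegrityAutomaton(
            {("init", "created"): "open", ("open", "paid"): "paid",
             ("paid", "shipped"): "done"},
            start="init", accepting={"done"})
        print(fsm.accepts(["created", "paid", "shipped"]))  # True
        print(fsm.accepts(["created", "shipped"]))          # False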

  12. Flexible Decision Support in Device-Saturated Environments

    Science.gov (United States)

    2003-10-01

    also output tuples to a remote MySQL or Postgres database. 3.3 GUI: The GUI allows the user to pose queries using SQL and to display query... DatabaseConnection.java – handles connections to an external database (such as MySQL or Postgres). • Debug.java – contains the code for printing out Debug messages... also provided. It is possible to output the results of queries to a MySQL or Postgres database for archival, and the GUI can query those results

  13. Empirical Analysis on Comparing the Performance of Alpha Miner Algorithm in SQL Query Language and NoSQL Column-Oriented Databases Using Apache Phoenix

    OpenAIRE

    Gupta, Kunal; Sachdev, Astha; Sureka, Ashish

    2017-01-01

    A Process-Aware Information System (PAIS) is an IT system that supports business processes and generates large amounts of event logs from the execution of business processes. An event log is represented as a tuple of CaseID, Timestamp, Activity and Actor. Process Mining is a new and emerging field that aims at analyzing event logs to discover, enhance and improve business processes and to check conformance between run-time and design-time business processes. The large volume of event logs gener...

  14. The Use of CK-LOG Formalism for Knowledge Representation and Problem Solving in OPPLAN-CONSULTANT: An Expert System for Naval Operational Planning.

    Science.gov (United States)

    1985-09-30

    further discussed in Sections 4 and 5. (NRL Report 8902) Notice that I have used the plural form, OBJECTS, in Fig. 2.1 to indicate that there... Artificial Intelligence Center, SRI International, Menlo Park, CA, 1978. G. Gentzen, "Investigations into Logical Deduction"... one of the form (relation-sym term-1 term-2) or (tuple-name term-1 ... term-n), with or without the negation operator NOT, and atm-exp denotes a timed

  15. Approaches to relativistic positioning around Earth and error estimations

    Science.gov (United States)

    Puchades, Neus; Sáez, Diego

    2016-01-01

    In the context of relativistic positioning, the coordinates of a given user may be calculated by using suitable information broadcast by a 4-tuple of satellites. Our 4-tuples belong to the Galileo constellation. Recently, we estimated the positioning errors due to uncertainties in the satellite world lines (U-errors). A distribution of U-errors was obtained, at various times, in a set of points covering a large region surrounding Earth. Here, the positioning errors associated with the simplifying assumption that photons move in Minkowski space-time (S-errors) are estimated and compared with the U-errors. Both errors have been calculated for the same points and times to make comparisons possible. For a certain realistic modeling of the world line uncertainties, the estimated S-errors have proved to be smaller than the U-errors, which shows that the approach based on the assumption that the Earth's gravitational field produces negligible effects on photons may be used in a large region surrounding Earth. The applicability of this approach - which simplifies numerical calculations - to positioning problems, and the usefulness of our S-error maps, are pointed out. A better approach, based on the assumption that photons move in the Schwarzschild space-time governed by an idealized Earth, is also analyzed. More accurate descriptions of photon propagation involving non-symmetric space-time structures are not necessary for ordinary positioning and spacecraft navigation around Earth.

  16. Sudan-decoding generalized geometric Goppa codes

    DEFF Research Database (Denmark)

    Heydtmann, Agnes Eileen

    2003-01-01

    Generalized geometric Goppa codes are vector spaces of n-tuples with entries from different extension fields of a ground field. They are derived from evaluating functions similar to conventional geometric Goppa codes, but allowing evaluation in places of arbitrary degree. A decoding scheme for these codes based on Sudan's improved algorithm is presented and its error-correcting capacity is analyzed. For the implementation of the algorithm it is necessary that the so-called increasing zero bases of certain spaces of functions are available. A method to obtain such bases is developed.

  17. Compressing Data Cube in Parallel OLAP Systems

    Directory of Open Access Journals (Sweden)

    Frank Dehne

    2007-03-01

    Full Text Available This paper proposes an efficient algorithm to compress the cubes in the process of parallel data cube generation. This low-overhead compression mechanism provides block-by-block and record-by-record compression by using tuple difference coding techniques, thereby maximizing the compression ratio and minimizing the decompression penalty at run-time. The experimental results demonstrate that the typical compression ratio is about 30:1 without sacrificing running time. This paper also demonstrates that the compression method is suitable for the Hilbert Space Filling Curve, a mechanism widely used in multi-dimensional indexing.
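
    The flavor of tuple difference coding can be conveyed in a few lines (an illustrative sketch, not the paper's exact scheme): each tuple is stored as a component-wise delta from its predecessor, so most stored components are small integers that compress well.

        # Delta-encoding consecutive cube tuples (illustrative sketch).
        def delta_encode(tuples):
            prev = (0,) * len(tuples[0])
            out = []
            for t in tuples:
                out.append(tuple(a - b for a, b in zip(t, prev)))
                prev = t
            return out

        def delta_decode(deltas):
            prev = (0,) * len(deltas[0])
            out = []
            for d in deltas:
                prev = tuple(a + b for a, b in zip(prev, d))
                out.append(prev)
            return out

        cells = [(1, 2, 10), (1, 2, 12), (1, 3, 1)]
        encoded = delta_encode(cells)          # [(1, 2, 10), (0, 0, 2), (0, 1, -11)]
        assert delta_decode(encoded) == cells  # lossless round-trip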

  18. Continually Plastic Modeling of Non-Stationary Systems

    Science.gov (United States)

    2016-09-01

    Elements of a dataset D take the form of a 3-tuple ⟨T, h, p⟩, where T, time, specifies a calendar date, h is an estimate of the true value of the... Recall that an element d of dataset D takes the form ⟨T, h, p⟩ and that D has been divided into two subsets. An estimation method M is applied to D to...

  19. Microsoft® SQL Server® 2008 MDX Step by Step

    CERN Document Server

    Smith, Bryan; Consulting, Hitachi

    2009-01-01

    Teach yourself the Multidimensional Expressions (MDX) query language - one step at a time. With this practical, learn-by-doing tutorial, you'll build the core techniques for using MDX with Analysis Services to deliver high-performance business intelligence solutions. Discover how to: construct and execute MDX queries; work with tuples, sets, and expressions; build complex sets to retrieve the exact data users need; perform aggregation functions and navigate data hierarchies; assemble time-based business metrics; customize an Analysis Services cube through the MDX script; implement dynamic security to cont...

  20. Paths and Tableaux Descriptions of Jacobi-Trudi Determinant Associated with Quantum Affine Algebra of Type $C_n$

    Directory of Open Access Journals (Sweden)

    Wakako Nakai

    2007-07-01

    Full Text Available We study the Jacobi-Trudi-type determinant which is conjectured to be the $q$-character of a certain, in many cases irreducible, finite-dimensional representation of the quantum affine algebra of type $C_n$. Like the $D_n$ case studied by the authors recently, applying the Gessel-Viennot path method with an additional involution and a deformation of paths, we obtain an expression by a positive sum over a set of tuples of paths, which is naturally translated into the one over a set of tableaux on a skew diagram.

  1. A simple coherent attack and practical security of differential phase shift quantum cryptography

    International Nuclear Information System (INIS)

    Kronberg, D A

    2014-01-01

    The differential phase shift quantum key distribution protocol reveals good security against such powerful attacks as unambiguous state discrimination and beam splitting attacks. Its complete security analysis is complex due to the high dimensions of the supposed spaces and density operators. In this paper, we consider a particular and conceptually simple coherent attack, available in practical implementations. The main condition for this attack is that the coherent state tuples used have length of order 8–12. We show that under this condition, no large practical distance between legitimate users can be achieved. (paper)

  2. Construction of Fuzzy Sets and Applying Aggregation Operators for Fuzzy Queries

    DEFF Research Database (Denmark)

    Hudec, Miroslav; Sudzina, Frantisek

    Flexible query conditions could use linguistic terms described by fuzzy sets. The question is how to properly construct fuzzy sets for each linguistic term and apply an adequate aggregation function. For the construction of fuzzy sets, the lowest value and the highest value of the attribute and the distribution of data inside its domain are used. The logarithmic transformation of domains appears to be suitable. This way leads to a balanced distribution of tuples over fuzzy sets. In addition, users' opinions about linguistic terms as well as the current content of the database are merged. The second investigated...

  3. Information in relational data bases

    Energy Technology Data Exchange (ETDEWEB)

    Abhyankar, R B

    1982-01-01

    A new knowledge representation scheme is proposed for representing incomplete information in relational data bases. The knowledge representation scheme introduces a novel convention for negative information based on modal logic and a novel data structure obtained by introducing tuple flags in the relational model of data. Standard and minimal forms are defined for relations conforming to the new data structure. The conventional relational operators, select, project and join, are redefined so that they can be used to manipulate relations containing incomplete information. Conditions are presented for the lossless decomposition of relations containing incomplete information. 20 references.

  4. Systems of conservation laws with third-order Hamiltonian structures

    Science.gov (United States)

    Ferapontov, Evgeny V.; Pavlov, Maxim V.; Vitolo, Raffaele F.

    2018-02-01

    We investigate n-component systems of conservation laws that possess third-order Hamiltonian structures of differential-geometric type. The classification of such systems is reduced to the projective classification of linear congruences of lines in P^{n+2} satisfying additional geometric constraints. Algebraically, the problem can be reformulated as follows: for a vector space W of dimension n+2, classify n-tuples of skew-symmetric 2-forms A^α ∈ Λ²(W) such that φ_{βγ} A^β ∧ A^γ = 0, for some non-degenerate symmetric φ.

  5. From Flow Logic to Static Type Systems in Coordination Languages

    DEFF Research Database (Denmark)

    De Nicola, Rocco; Gorla, Daniele; Hansen, René Rydhof

    2008-01-01

    Coordination languages are often used to describe open-ended systems. This makes it challenging to develop tools for guaranteeing security of the coordinated systems and correctness of their interaction. Successful approaches to this problem have been based on type systems with dynamic checks; therefore, the correctness properties cannot be statically enforced. By contrast, static analysis approaches based on Flow Logic usually guarantee properties statically. In this paper we show how to combine these two approaches to obtain a static type system for describing secure access to tuple spaces...

  6. Organization model and formalized description of nuclear enterprise information system

    International Nuclear Information System (INIS)

    Yuan Feng; Song Yafeng; Li Xudong

    2012-01-01

    The organization model is one of the most important models of a Nuclear Enterprise Information System (NEIS). A scientific and reasonable organization model is a prerequisite for the robustness and extensibility of an NEIS, and is also the foundation for the integration of heterogeneous systems. Firstly, the paper describes the conceptual model of the NEIS in an ontology chart, which provides a consistent semantic framework for the organization. Then it discusses the relations between the concepts in detail. Finally, it gives a formalized description of the organization model of the NEIS based on a six-tuple array. (authors)

  7. Physico-Chemical and Structural Interpretation of Discrete Derivative Indices on N-Tuples Atoms

    Science.gov (United States)

    Martínez-Santiago, Oscar; Marrero-Ponce, Yovani; Barigye, Stephen J.; Le Thi Thu, Huong; Torres, F. Javier; Zambrano, Cesar H.; Muñiz Olite, Jorge L.; Cruz-Monteagudo, Maykel; Vivas-Reyes, Ricardo; Vázquez Infante, Liliana; Artiles Martínez, Luis M.

    2016-01-01

    This report examines the interpretation of the Graph Derivative Indices (GDIs) from three different perspectives (i.e., in structural, steric and electronic terms). It is found that the individual vertex frequencies may be expressed in terms of the geometrical and electronic reactivity of the atoms and bonds, respectively. On the other hand, it is demonstrated that the GDIs are sensitive to progressive structural modifications in terms of: size, ramifications, electronic richness, conjugation effects and molecular symmetry. Moreover, it is observed that the GDIs quantify the interaction capacity among molecules and codify information on the activation entropy. A structure-property relationship study reveals that there exists a direct correspondence between the individual frequencies of atoms and Hückel’s Free Valence, as well as between the atomic GDIs and the chemical shift in NMR, which collectively validates the theory that these indices codify steric and electronic information of the atoms in a molecule. Taking into consideration the regularity and coherence found in experiments performed with the GDIs, it is possible to say that GDIs possess a plausible interpretation in structural and physicochemical terms. PMID:27240357

  8. Physico-Chemical and Structural Interpretation of Discrete Derivative Indices on N-Tuples Atoms

    Directory of Open Access Journals (Sweden)

    Oscar Martínez-Santiago

    2016-05-01

    Full Text Available This report examines the interpretation of the Graph Derivative Indices (GDIs) from three different perspectives (i.e., in structural, steric and electronic terms). It is found that the individual vertex frequencies may be expressed in terms of the geometrical and electronic reactivity of the atoms and bonds, respectively. On the other hand, it is demonstrated that the GDIs are sensitive to progressive structural modifications in terms of: size, ramifications, electronic richness, conjugation effects and molecular symmetry. Moreover, it is observed that the GDIs quantify the interaction capacity among molecules and codify information on the activation entropy. A structure-property relationship study reveals that there exists a direct correspondence between the individual frequencies of atoms and Hückel’s Free Valence, as well as between the atomic GDIs and the chemical shift in NMR, which collectively validates the theory that these indices codify steric and electronic information of the atoms in a molecule. Taking into consideration the regularity and coherence found in experiments performed with the GDIs, it is possible to say that GDIs possess a plausible interpretation in structural and physicochemical terms.

  9. Assessment of Data Reliability of Wireless Sensor Network for Bioinformatics

    Directory of Open Access Journals (Sweden)

    Ting Dong

    2017-09-01

    Full Text Available As a focal point of biotechnology, bioinformatics integrates knowledge from biology, mathematics, physics, chemistry, computer science and information science. It generally deals with genome informatics, protein structure and drug design. However, the data or information thus acquired from the main areas of bioinformatics may not be effective. Some researchers have combined bioinformatics with wireless sensor networks (WSN) into biosensors and other tools, and applied them to such areas as fermentation, environmental monitoring, food engineering, clinical medicine and the military. In the combination, the WSN is used to collect data and information. The reliability of the WSN in bioinformatics is a prerequisite for effective utilization of information. It is greatly influenced by factors like quality, benefits, service, timeliness and stability, some of which are qualitative and some quantitative. Hence, it is necessary to develop a method that can handle both qualitative and quantitative assessment of information. A viable option is the fuzzy linguistic method, especially the 2-tuple linguistic model, which has been extensively used to cope with such issues. As a result, this paper introduces 2-tuple linguistic representation to assist experts in giving their opinions on different WSNs in bioinformatics that involve multiple factors. Moreover, the author proposes a novel way to determine attribute weights and uses the method to weigh the relative importance of different influencing factors, which can be considered as attributes in the assessment of the WSN in bioinformatics. Finally, an illustrative example is given to provide a reasonable solution for the assessment.
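
    The 2-tuple model referred to here represents an assessment as a pair (s_i, α): a linguistic label plus a symbolic translation α in [-0.5, 0.5), so that aggregation loses no information. A minimal sketch of the standard conversions (an illustration with hypothetical labels, not the paper's code):

        # The 2-tuple linguistic representation: Delta and its inverse.
        LABELS = ["very poor", "poor", "medium", "good", "very good"]

        def to_two_tuple(beta):
            # Delta: map beta in [0, len(LABELS)-1] to (label, alpha),
            # with alpha in [-0.5, 0.5), keeping all information.
            i = min(round(beta), len(LABELS) - 1)
            return LABELS[i], beta - i

        def from_two_tuple(label, alpha):
            # Inverse Delta: recover the underlying numeric value.
            return LABELS.index(label) + alpha

        # Aggregate three expert ratings (label indices) by their mean.
        ratings = [3, 4, 3]
        beta = sum(ratings) / len(ratings)
        print(to_two_tuple(beta))  # ('good', 0.333...)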

  10. Publish/Subscribe on Top of DHT Using RETE Algorithm

    Science.gov (United States)

    Shvartzshnaider, Yan; Ott, Maximilian; Levy, David

    This paper discusses the construction of a Global Semantic Graph (GSG) [1] to support future information- and collaboration-centric applications and services. The GSG is a publish/subscribe (pub/sub) based architecture that supports publication of tuples and subscriptions with standing graph queries. We believe that an implementation of an efficient pattern matching algorithm such as Rete [2] on top of a distributed environment might serve as a possible substrate for GSG's pub/sub facility. Rete operates on loosely coupled alpha, beta and join nodes and therefore has been chosen by us for implementation in a distributed setting.

  11. Group-by Skyline Query Processing in Relational Engines

    DEFF Research Database (Denmark)

    Yiu, Man Lung; Luk, Ming-Hay; Lo, Eric

    2009-01-01

    The skyline operator was first proposed in 2001 for retrieving interesting tuples from a dataset. Since then, 100+ skyline-related papers have been published; however, we discovered that one of the most intuitive and practical types of skyline queries, namely group-by skyline queries, remains... the missing cost model for the BBS algorithm. Experimental results show that our techniques are able to devise the best query plans for a variety of group-by skyline queries. Our focus is on algorithms that can be directly implemented in today's commercial database systems without the addition of new access...
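
    For intuition, a group-by skyline query returns, within each group, the tuples not dominated by any other tuple of the same group. A minimal sketch (unrelated to the BBS cost model developed in the paper), assuming "smaller is better" in every dimension:

        # Group-by skyline with dominance defined as "no worse everywhere,
        # strictly better somewhere" (illustrative sketch).
        from collections import defaultdict

        def dominates(a, b):
            return all(x <= y for x, y in zip(a, b)) and a != b

        def group_by_skyline(rows):
            groups = defaultdict(list)
            for group, point in rows:
                groups[group].append(point)
            return {
                g: [p for p in pts if not any(dominates(q, p) for q in pts)]
                for g, pts in groups.items()
            }

        rows = [("h1", (100, 3)), ("h1", (120, 2)), ("h1", (150, 3)),
                ("h2", (90, 5))]
        print(group_by_skyline(rows))
        # {'h1': [(100, 3), (120, 2)], 'h2': [(90, 5)]}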

  12. Segmentation of Nonstationary Time Series with Geometric Clustering

    DEFF Research Database (Denmark)

    Bocharov, Alexei; Thiesson, Bo

    2013-01-01

    We introduce a non-parametric method for segmentation in regime-switching time-series models. The approach is based on spectral clustering of target-regressor tuples and derives a switching regression tree, where regime switches are modeled by oblique splits. Such models can be learned efficiently from data, where clustering is used to propose one single split candidate at each split level. We use the class of ART time series models to serve as illustration, but because of the non-parametric nature of our segmentation approach, it readily generalizes to a wide range of time-series models that go...

  13. Relationship of order and number of siblings to perceived parental attitudes in childhood.

    Science.gov (United States)

    Kitamura, T; Sugawara, M; Shima, S; Toda, M A

    1998-06-01

    Despite the increasingly recognized link between perceived parenting behavior and the onset of psychopathology in adults, studies of the possible determinants of perceptions of parenting behavior are rare. In a sample of 1,145 pregnant Japanese women, correlations were examined between the numbers and sexes of siblings and perceived rearing practices, as rated by the Parental Bonding Instrument (PBI; Parker, Tupling, & Brown, 1979). The participants with more elder sisters viewed their parents' attitudes as less caring, whereas those with more brothers, particularly younger brothers, viewed their parents' attitudes as less overprotective. However, the proportion of the variance of all the PBI scores explained by different types of siblings was very small.

  14. Reinforcement Learning Based Web Service Compositions for Mobile Business

    Science.gov (United States)

    Zhou, Juan; Chen, Shouming

    In this paper, we propose a new solution to Reactive Web Service Composition by modeling it with Reinforcement Learning and introducing modified (alterable) QoS variables into the model as elements of the Markov Decision Process tuple. Moreover, we give an example of Reactive-WSC-based mobile banking to demonstrate the intrinsic capability of the solution to obtain an optimized service composition, characterized by (alterable) target QoS variable sets with optimized values. Consequently, we conclude that the solution has decent potential for boosting customer experience and quality of service in Web Services, and in applications across the whole electronic commerce and business sector.
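
    As a sketch of the modeling step (hypothetical names and values; the paper's exact formulation is not reproduced here), the MDP tuple can simply carry the alterable QoS variables alongside states, actions, transitions and rewards:

        # An MDP tuple extended with alterable QoS variables (illustrative).
        from collections import namedtuple

        MDP = namedtuple("MDP", ["states", "actions", "transitions", "reward", "qos"])

        def reward(state, action):
            # Reward favours compositions that meet the target QoS.
            return 1.0 if action == "fast-service" else 0.2

        mdp = MDP(
            states=["start", "composed"],
            actions=["fast-service", "cheap-service"],
            transitions={("start", "fast-service"): "composed",
                         ("start", "cheap-service"): "composed"},
            reward=reward,
            qos={"latency_ms": 200, "cost_per_call": 0.01},  # alterable targets
        )
        print(mdp.reward("start", "fast-service"))  # 1.0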

  15. Quantum BCH Codes Based on Spectral Techniques

    International Nuclear Information System (INIS)

    Guo Ying; Zeng Guihua

    2006-01-01

    When the time variable in quantum signal processing is discrete, the Fourier transform exists on the vector space of n-tuples over the Galois field F_2, which plays an important role in the investigation of quantum signals. By using Fourier transforms, the idea of quantum coding theory can be described in a setting that is much different from that seen thus far. Quantum BCH codes can be defined as codes whose quantum states have certain specified consecutive spectral components equal to zero, and the error-correcting ability is also described by the number of consecutive zeros. Moreover, the decoding of quantum codes can be described spectrally with more efficiency.

  16. Equilibria in social belief removal [Journal article

    CSIR Research Space (South Africa)

    Booth, R

    2010-08-01

    Full Text Available removal function ⊖i, which tells it how to remove any given sentence from its belief set. In this paper we view ⊖i as a unary function on the set L of non-tautologous sentences, i.e., agents are never required to remove ⊤. The result of removing φ ∈ L from i's belief set is denoted by ⊖i(φ). We assume i's initial belief set can always be recaptured from ⊖i alone by just removing the contradiction, i.e., i's initial belief set is ⊖i(⊥). We call any n-tuple (⊖i)i∈A of removal functions a...

  17. Capacity region of the 3 receiver state dependent multilevel BC with noncausal state information

    Directory of Open Access Journals (Sweden)

    Viswanathan Ramachandran

    2017-09-01

    Full Text Available We consider a three-receiver state-dependent multilevel broadcast channel (BC), where the state information is known non-causally at the encoder as well as at all the decoders. This is an extension of Nair and El Gamal’s three-receiver multilevel BC (Nair and El Gamal, 2009), wherein two of the receivers decode only a common message while a third receiver decodes a private message as well. The objective is to characterize the rate tuples that are simultaneously achievable while ensuring negligible probability of error at each of the receivers. We characterize the capacity region of this setup in the paper.

  18. VPipe: Virtual Pipelining for Scheduling of DAG Stream Query Plans

    Science.gov (United States)

    Wang, Song; Gupta, Chetan; Mehta, Abhay

    There are data streams all around us that can be harnessed for tremendous business and personal advantage. For an enterprise-level stream processing system such as CHAOS [1] (Continuous, Heterogeneous Analytic Over Streams), handling of complex query plans with resource constraints is challenging. While several scheduling strategies exist for stream processing, efficient scheduling of complex DAG query plans is still largely unsolved. In this paper, we propose a novel execution scheme for scheduling complex directed acyclic graph (DAG) query plans with meta-data enriched stream tuples. Our solution, called Virtual Pipelined Chain (or VPipe Chain for short), effectively extends the "Chain" pipelining scheduling approach to complex DAG query plans.

  19. A simple system for 160GHz optical terahertz wave generation and data modulation

    Science.gov (United States)

    Li, Yihan; He, Jingsuo; Sun, Xueming; Shi, Zexia; Wang, Ruike; Cui, Hailin; Su, Bo; Zhang, Cunlin

    2018-01-01

    A simple system based on two cascaded Mach-Zehnder modulators, which can generate 160 GHz optical terahertz waves from a 40 GHz microwave source, is simulated and tested in this paper. A fiber grating filter is used in the system to filter out the optical carrier. By properly adjusting the modulator DC bias voltages and the signal voltages and phases, a frequency-4-tupled optical terahertz wave can be generated with the fiber grating. This notch fiber grating filter is highly suitable for a terahertz-over-fiber (TOF) communication system. The scheme greatly reduces the cost of long-distance terahertz communication. Furthermore, a 10 Gbps digital signal is modulated onto the 160 GHz optical terahertz wave.

  20. SPAX - PAX with Super-Pages

    Science.gov (United States)

    Bößwetter, Daniel

    Much has been written about the pros and cons of column orientation as a means to speed up read-mostly analytic workloads in relational databases. In this paper we dissect the primitive mechanisms of a database that help express the coherence of tuples and present a novel way of organizing relational data in order to exploit the advantages of both the row-oriented and the column-oriented worlds. As we go, we break with yet another bad habit of databases, namely the equal granularity of reads and writes, which leads us to the introduction of consecutive clusters of disk pages called super-pages.

  1. SABRE: A Sensitive Attribute Bucketization and REdistribution framework for t-closeness

    KAUST Repository

    Cao, Jianneng

    2010-05-19

    Today, the publication of microdata poses a privacy threat: anonymous personal records can be re-identified using third-party data sources. Past research has tried to develop a concept of privacy guarantee that an anonymized data set should satisfy before publication, culminating in the notion of t-closeness. To satisfy t-closeness, the records in a data set need to be grouped into Equivalence Classes (ECs), such that each EC contains records of indistinguishable quasi-identifier values, and its local distribution of sensitive attribute (SA) values conforms to the global table distribution of SA values. However, despite this progress, previous research has not offered an anonymization algorithm tailored for t-closeness. In this paper, we cover this gap with SABRE, a SA Bucketization and REdistribution framework for t-closeness. SABRE first greedily partitions a table into buckets of similar SA values and then redistributes the tuples of each bucket into dynamically determined ECs. This approach is facilitated by a property of the Earth Mover's Distance (EMD) that we employ as a measure of distribution closeness: If the tuples in an EC are picked proportionally to the sizes of the buckets they hail from, then the EMD of that EC is tightly upper-bounded using localized upper bounds derived for each bucket. We prove that if the t-closeness constraint is properly obeyed during partitioning, then it is obeyed by the derived ECs too. We develop two instantiations of SABRE and extend it to a streaming environment. Our extensive experimental evaluation demonstrates that SABRE achieves information quality superior to schemes that merely applied algorithms tailored for other models to t-closeness, and can be much faster as well. © 2010 Springer-Verlag.

  2. La mortalità infantile ebraica a Ferrara nell'800

    Directory of Open Access Journals (Sweden)

    C. Adamo

    2003-05-01

    Full Text Available

    Objectives: to gain insight into the quality of life and development of a community, in the past as well as in the present, studies of infant mortality (first year of life) are used increasingly often. The objective of this work is to describe nineteenth-century infant mortality in the Jewish community of Ferrara.

    Methods: the Historical Archive of the Municipality of Ferrara holds the death registers of the Jewish population from 1813 to 1902, from which the following variables were extracted: year of death, progressive number in the register, surname and first name of the deceased, sex, day and month of death, illness or cause of death, and age at death (months, days, hours). Sex was inferred from the name of the deceased. As in previous works, to classify the causes of death in the first year of life our research group used, up to 1854, a work by Silini G. based on the methodologies for studying causes of death in the nineteenth century in Lovere (Bergamo), adapted to the different historical and geographical context. For the later period, the ISTAT classification criterion (9th revision) was adopted.

    Results: initial analyses for the period 1813-1902 show the following picture: there were 552 deaths in the first year of life, of which 127 (23%) in the first week, 185 (33.5%) between the first week and the first month, and 240 (43.5%) after the first month. There were 194 miscarriages overall and 96 stillbirths. 51% were male and the remaining 49% female. The prevailing causes of death were "materia", "inflammation", "inanition" and "epilepsy".

  3. Integrating PAW, a graphical analysis interface to Sybase

    International Nuclear Information System (INIS)

    Fry, A.; Chow, I.

    1993-04-01

    The program PAW (Physics Analysis Workstation) enjoys tremendous popularity within the high energy physics community. It is implemented on a large number of platforms and is available to the high energy physics community free of charge from the CERN computing division. PAW combines extensive graphical display capability (HPLOT/HIGZ), with histogramming (HBOOK4), file and data handling (ZEBRA), vector arithmetic manipulation (SIGMA), user defined functions (COMIS), powerful function minimization (MINUIT), and a command interpreter (KUIP). To facilitate the possibility of using relational databases in physics analysis, we have added an SQL interface to PAW. This interface allows users to create PAW N-tuples from Sybase tables and vice versa. We discuss the implementations below

  4. Exploiting Schemas in Data Synchronization

    DEFF Research Database (Denmark)

    Foster, J. Nathan; Greenwald, Michael B.; Kirkegaard, Christian

    2005-01-01

    Increased reliance on optimistic data replication has led to burgeoning interest in tools and frameworks for disconnected updates to replicated data. We have implemented a generic synchronization framework, called HARMONY, that can be used to build state-based synchronizers for a wide variety of tree-structured data formats. A novel feature of this framework is that the synchronization process - in particular, the recognition of conflicts - is driven by the schema of the structures being synchronized. We formalize HARMONY's synchronization algorithm, state a simple and intuitive specification, and illustrate, using simple address books as a case study, how it can be used to synchronize trees representing a variety of specific forms of application data, including sets, records, tuples, and relations.

  5. Formal description of the jumpstart just-in-time signaling protocol using EFSM

    Science.gov (United States)

    Zaim, A. H.; Baldine, Ilia; Cassada, Mark; Rouskas, George N.; Perros, Harry G.; Stevenson, Daniel S.

    2002-07-01

    We present a formal protocol description for a Just-In-Time (JIT) signaling scheme running over a core dWDM network which utilizes Optical Burst Switches (OBS). We apply an eight-tuple extended finite state machine (EFSM) model to formally specify the protocol. Using the EFSM model, we define the communication between a source client node and a destination client node through an ingress and one or multiple intermediate switches. We consider single-burst connections, meaning the connection is set up just before a single burst is sent and closed as soon as the burst has been sent. The communication between the EFSMs is handled through message transfer between protocol entities.

  6. The Unbalanced Linguistic Aggregation Operator in Group Decision Making

    Directory of Open Access Journals (Sweden)

    Li Zou

    2012-01-01

    Full Text Available Many linguistic aggregation methods have been proposed and applied to linguistic decision-making problems. In practice, experts need to assess more values in one side of the reference domain than in the other; that is, experts use unbalanced linguistic values to express their evaluations of problems. In this paper, we propose a new linguistic aggregation operator to deal with unbalanced linguistic values in group decision making. We adopt the 2-tuple representation model of linguistic values and linguistic hierarchies to express unbalanced linguistic values, and moreover, we present the unbalanced linguistic ordered weighted geometric operator to aggregate unbalanced linguistic evaluation values. A comparison example is given to show the advantage of our method.

  7. Klaim-DB: A Modeling Language for Distributed Database Applications

    DEFF Research Database (Denmark)

    Wu, Xi; Li, Ximeng; Lluch Lafuente, Alberto

    2015-01-01

    We present the modelling language Klaim-DB for distributed database applications. Klaim-DB borrows the distributed nets of the coordination language Klaim but essentially re-incarnates the tuple spaces of Klaim as databases, and provides high-level language abstractions for the access and manipulation of structured data, with integrity and atomicity considerations. We present the formal semantics of Klaim-DB and illustrate the use of the language in a scenario where the sales from different branches of a chain of department stores are aggregated from their local databases. It can be seen that raising the abstraction level and encapsulating integrity checks (concerning the schema of tables, etc.) in the language primitives for database operations benefit the modelling task considerably.

  8. HippoDraw and Hippoplotamus

    International Nuclear Information System (INIS)

    Gravina, M.F.; Kunz, P.F.; Pavel, T.J.; Rensing, P.E.

    1992-02-01

    HippoDraw is a result of research into finding better ways to visualize the kind of statistical data that is so common in high energy physics analyses. In these analyses, frequency distributions are visualized as histograms, contour plots, scatter plots, etc. Traditionally, one used a library of subroutines, called a histogram package, within one's analysis programs to create and display such distributions. HippoDraw is a NeXTstep application for viewing statistical data. It has several unique features which make viewing data distributions highly interactive. It also incorporates simple drawing tools. HippoDraw is written in Objective-C and uses the Hippoplotamus library package which handles the n-tuples and displays. Hippoplotamus is written in ANSI C. 4 refs

  9. Modular Transformations, Order-Chaos Transitions and Pseudo-Random Number Generation

    Science.gov (United States)

    Bonelli, Antonio; Ruffo, Stefano

    Successive pairs of pseudo-random numbers generated by standard linear congruential transformations display ordered patterns of parallel lines. We study the "ordered" and "chaotic" distribution of such pairs by solving the eigenvalue problem for two-dimensional modular transformations over the integers. We conjecture that the optimal uniformity for the pair distribution is obtained when the slope of the linear modular eigenspaces takes the value n_opt = maxint(p/√(p-1)), where p is a prime number. We then propose a new generator of pairs of independent pseudo-random numbers, which realizes an optimal uniform distribution (in the "statistical" sense) of points on the unit square (0, 1] × (0, 1]. The method can be easily generalized to the generation of k-tuples of random numbers (with k > 2).
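
    The lattice structure of successive pairs is easy to reproduce (illustrative parameters, not those studied in the paper): every pair (x_n, x_{n+1}) from a linear congruential generator satisfies x_{n+1} ≡ a·x_n + c (mod m), so the points fall on a small family of parallel lines y = a·x + c - k·m.

        # Successive LCG pairs lie on parallel lines y = a*x + c - k*m.
        m, a, c = 251, 33, 1
        x, xs = 1, []
        for _ in range(m):
            x = (a * x + c) % m
            xs.append(x)

        pairs = list(zip(xs, xs[1:]))
        # Verify the linear relation behind the observed parallel lines.
        assert all((y - a * x) % m == c for x, y in pairs)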

  10. Study on Mandatory Access Control in a Secure Database Management System

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper proposes a security policy model for mandatory access control in a class B1 database management system whose labeling granularity is the tuple. The relation-hierarchical data model is extended to a multilevel relation-hierarchical data model. Based on the multilevel relation-hierarchical data model, the concept of upper-lower layer relational integrity is presented after we analyze and eliminate the covert channels caused by database integrity. Two SQL statements are extended to process polyinstantiation in the multilevel secure environment. The system is based on the multilevel relation-hierarchical data model and is capable of integratively storing and manipulating multilevel complicated objects (e.g., multilevel spatial data) and multilevel conventional data (e.g., integer, real number and character string).

  11. FMEA using uncertainty theories and MCDM methods

    CERN Document Server

    Liu, Hu-Chen

    2016-01-01

    This book offers a thorough and systematic introduction to the modified failure mode and effect analysis (FMEA) models based on uncertainty theories (e.g. fuzzy logic, intuitionistic fuzzy sets, D numbers and 2-tuple linguistic variables) and various multi-criteria decision making (MCDM) approaches such as distance-based MCDM, compromise ranking MCDM and hybrid MCDM, etc. As such, it provides essential FMEA methods and practical examples that can be considered in applying FMEA to enhance the reliability and safety of products and services. The book offers a valuable guide for practitioners and researchers working in the fields of quality management, decision making, information science, management science, engineering, etc. It can also be used as a textbook for postgraduate and senior undergraduate students.

  12. String Chopping and Time-ordered Products of Linear String-localized Quantum Fields

    Science.gov (United States)

    Cardoso, Lucas T.; Mund, Jens; Várilly, Joseph C.

    2018-03-01

    For a renormalizability proof of perturbative models in the Epstein-Glaser scheme with string-localized quantum fields, one needs to know what freedom one has in the definition of time-ordered products of the interaction Lagrangian. This paper provides a first step in that direction. The basic issue is the presence of an open set of n-tuples of strings which cannot be chronologically ordered. We resolve it by showing that almost all such string configurations can be dissected into finitely many pieces which can indeed be chronologically ordered. This fixes the time-ordered products of linear field factors outside a nullset of string configurations. (The extension across the nullset, as well as the definition of time-ordered products of Wick monomials, will be discussed elsewhere).

  13. Scalable Packet Classification with Hash Tables

    Science.gov (United States)

    Wang, Pi-Chung

    In the last decade, the technique of packet classification has been widely deployed in various network devices, including routers, firewalls and network intrusion detection systems. In this work, we improve the performance of packet classification by using multiple hash tables. The existing hash-based algorithms have superior scalability with respect to the required space; however, their search performance may not be comparable to other algorithms. To improve the search performance, we propose a tuple reordering algorithm to minimize the number of accessed hash tables with the aid of bitmaps. We also use pre-computation to ensure the accuracy of our search procedure. Performance evaluation based on both real and synthetic filter databases shows that our scheme is effective and scalable and the pre-computation cost is moderate.
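
    A rough sketch of hash-based classification over prefix-length tuples (illustrative only; the paper's reordering and bitmap machinery are omitted): filters are grouped into one hash table per (source-length, destination-length) tuple, and a lookup probes the tables.

        # Tuple-space packet classification with one hash table per
        # (source prefix length, destination prefix length) tuple.
        filters = [
            ("10.0.0.0/8", "0.0.0.0/0", "rule-A"),
            ("10.1.0.0/16", "192.168.0.0/16", "rule-B"),
        ]

        def prefix_key(addr, length):
            # Keep only the first `length` bits of the dotted-quad address.
            bits = "".join(f"{int(o):08b}" for o in addr.split("."))
            return bits[:length]

        tables = {}
        for src, dst, rule in filters:
            sp, sl = src.split("/")
            dp, dl = dst.split("/")
            key = (int(sl), int(dl))
            tables.setdefault(key, {})[(prefix_key(sp, int(sl)),
                                        prefix_key(dp, int(dl)))] = rule

        def classify(src, dst):
            # Probe tables from most to least specific tuple; a real
            # classifier reorders and prunes probes instead.
            for sl, dl in sorted(tables, key=sum, reverse=True):
                rule = tables[(sl, dl)].get((prefix_key(src, sl),
                                             prefix_key(dst, dl)))
                if rule:
                    return rule

        print(classify("10.1.2.3", "192.168.9.9"))  # rule-B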

  14. The Strong Disjoint Blow-Up/Collapse Property

    Directory of Open Access Journals (Sweden)

    Héctor N. Salas

    2013-01-01

    Full Text Available Let X be a topological vector space, and let L(X) be the algebra of continuous linear operators on X. The operators T_1, ..., T_N in L(X) are disjoint hypercyclic if there is x in X such that the orbit {(T_1^n x, ..., T_N^n x) : n ≥ 0} is dense in X^N. Bès and Peris have shown that if T_1, ..., T_N satisfy the Disjoint Blow-up/Collapse property, then they are disjoint hypercyclic. In a recent paper Bès, Martin, and Sanders, among other things, have characterized disjoint hypercyclic N-tuples of weighted shifts in terms of this property. We introduce the Strong Disjoint Blow-up/Collapse property and prove that if T_1, ..., T_N satisfy this new property, then they have a dense linear manifold of disjoint hypercyclic vectors. This allows us to give a partial affirmative answer to one of their questions.

  15. Instantons on Calabi-Yau and hyper-Kähler cones

    Science.gov (United States)

    Geipel, Jakob C.; Sperling, Marcus

    2017-10-01

    The instanton equations on vector bundles over Calabi-Yau and hyper-Kähler cones can be reduced to matrix equations resembling Nahm's equations. We complement the discussion of Hermitian Yang-Mills (HYM) equations on Calabi-Yau cones, based on regular semi-simple elements, by a new set of (singular) boundary conditions which have a known instanton solution in one direction. This approach extends the classic results of Kronheimer by probing a relation between generalised Nahm's equations and nilpotent pairs/tuples. Moreover, we consider quaternionic instantons on hyper-Kähler cones over generic 3-Sasakian manifolds and study the HYM moduli spaces arising in this set-up, using the fact that their analysis can be traced back to the intersection of three Hermitian Yang-Mills conditions.

  16. Exploring the QSAR's predictive truthfulness of the novel N-tuple discrete derivative indices on benchmark datasets.

    Science.gov (United States)

    Martínez-Santiago, O; Marrero-Ponce, Y; Vivas-Reyes, R; Rivera-Borroto, O M; Hurtado, E; Treto-Suarez, M A; Ramos, Y; Vergara-Murillo, F; Orozco-Ugarriza, M E; Martínez-López, Y

    2017-05-01

    Graph derivative indices (GDIs) have recently been defined over N-atoms (N = 2, 3 and 4) simultaneously, which are based on the concept of derivatives in discrete mathematics (finite difference), metaphorical to the derivative concept in classical mathematical analysis. These molecular descriptors (MDs) codify topo-chemical and topo-structural information based on the concept of the derivative of a molecular graph with respect to a given event (S) over duplex, triplex and quadruplex relations of atoms (vertices). These GDIs have been successfully applied in the description of physicochemical properties like reactivity, solubility and chemical shift, among others, and in several comparative quantitative structure activity/property relationship (QSAR/QSPR) studies. Although satisfactory results have been obtained in previous modelling studies with the aforementioned indices, it is necessary to develop new, more rigorous analyses to assess the true predictive performance of the novel structure codification. So, in the present paper, an assessment and statistical validation of the performance of these novel approaches in QSAR studies are executed, as well as a comparison with those of other QSAR procedures reported in the literature. To achieve the main aim of this research, QSARs were developed on eight chemical datasets widely used as benchmarks in the evaluation/validation of several QSAR methods and/or many different MDs (fundamentally 3D MDs). Three- to seven-variable QSAR models were built for each chemical dataset, according to the original dissection into training/test sets. The models were developed by using multiple linear regression (MLR) coupled with a genetic algorithm as the feature wrapper selection technique in the MobyDigs software. Each family of GDIs (for duplex, triplex and quadruplex) behaves similarly in all modelling, although there were some exceptions. However, when all families were used in combination, the results achieved were quantitatively higher than those reported by other authors in similar experiments. Comparisons with respect to the external correlation coefficients (q²ext) revealed that the models based on GDIs possess superior predictive ability in seven of the eight datasets analysed, outperforming methodologies based on similar or more complex techniques and confirming the good predictive power of the obtained models. For the q²ext values, the non-parametric comparison revealed significantly different results to those reported so far, which demonstrated that the models based on DIVATI's indices presented the best global performance and yielded significantly better predictions than the 12 0-3D QSAR procedures used in the comparison. Therefore, GDIs are suitable for structure codification of the molecules and constitute a good alternative to build QSARs for the prediction of physicochemical, biological and environmental endpoints.

  17. A method for determining customer requirement weights based on TFMF and TLR

    Science.gov (United States)

    Ai, Qingsong; Shu, Ting; Liu, Quan; Zhou, Zude; Xiao, Zheng

    2013-11-01

    'Customer requirements' (CRs) management plays an important role in enterprise systems (ESs) by processing customer-focused information. Quality function deployment (QFD) is one of the main CRs analysis methods. Because CR weights are crucial for the input of QFD, we developed a method for determining CR weights based on trapezoidal fuzzy membership function (TFMF) and 2-tuple linguistic representation (TLR). To improve the accuracy of CR weights, we propose to apply TFMF to describe CR weights so that they can be appropriately represented. Because the fuzzy logic is not capable of aggregating information without loss, TLR model is adopted as well. We first describe the basic concepts of TFMF and TLR and then introduce an approach to compute CR weights. Finally, an example is provided to explain and verify the proposed method.

  18. What Is Spatio-Temporal Data Warehousing?

    Science.gov (United States)

    Vaisman, Alejandro; Zimányi, Esteban

    In the last years, extending OLAP (On-Line Analytical Processing) systems with spatial and temporal features has attracted the attention of the GIS (Geographic Information Systems) and database communities. However, there is no commonly agreed definition of what a spatio-temporal data warehouse is and what functionality such a data warehouse should support. Further, the solutions proposed in the literature vary considerably in the kind of data that can be represented as well as the kind of queries that can be expressed. In this paper we present a conceptual framework for defining spatio-temporal data warehouses using an extensible data type system. We also define a taxonomy of different classes of queries of increasing expressive power, and show how to express such queries using an extension of the tuple relational calculus with aggregate functions.

  19. A Note on Sequence Prediction over Large Alphabets

    Directory of Open Access Journals (Sweden)

    Travis Gagie

    2012-02-01

    Full Text Available Building on results from data compression, we prove nearly tight bounds on how well sequences of length n can be predicted in terms of the size σ of the alphabet and the length k of the context considered when making predictions. We compare the performance achievable by an adaptive predictor with no advance knowledge of the sequence to the performance achievable by the optimal static predictor using a table listing the frequency of each (k + 1)-tuple in the sequence. We show that, if the elements of the sequence are chosen uniformly at random, then an adaptive predictor can compete in the expected case if k ≤ log_σ n − 3 − ε, for a constant ε > 0, but not if k ≥ log_σ n.
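
    A small sketch of the static predictor described here (illustrative): it tabulates (k + 1)-tuple frequencies and, for each k-symbol context, predicts the most frequent continuation.

        # The optimal static predictor built from (k+1)-tuple frequencies.
        from collections import Counter, defaultdict

        def static_predictor(seq, k):
            counts = defaultdict(Counter)
            for i in range(len(seq) - k):
                counts[seq[i:i + k]][seq[i + k]] += 1
            # Map each k-context to its most frequent next symbol.
            return {ctx: c.most_common(1)[0][0] for ctx, c in counts.items()}

        table = static_predictor("abababcabab", k=2)
        print(table["ab"])  # 'a' -- the most frequent symbol after "ab"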

  20. FastSim: A Fast Simulation for the SuperB Detector

    International Nuclear Information System (INIS)

    Andreassen, R; Sokoloff, M; Arnaud, N; Burmistrov, L; Brown, D N; Carlson, J; Gaponenko, I; Suzuki, A; Cheng, C-h; Simone, A Di; Manoni, E; Perez, A; Walsh, J; Rama, M; Roberts, D; Rotondo, M; Simi, G

    2011-01-01

    We have developed a parameterized (fast) simulation for detector optimization and physics reach studies of the proposed SuperB Flavor Factory in Italy. Detector components are modeled as thin sections of planes, cylinders, disks or cones. Particle-material interactions are modeled using simplified cross-sections and formulas. Active detectors are modeled using parameterized response functions. Geometry and response parameters are configured using xml files with a custom-designed schema. Reconstruction algorithms adapted from BaBar are used to build tracks and clusters. Multiple sources of background signals can be merged with primary signals. Pattern recognition errors are modeled statistically by randomly misassigning nearby tracking hits. Standard BaBar analysis tuples are used as an event output. Hadronic B meson pair events can be simulated at roughly 10Hz.

  1. FastSim: A Fast Simulation for the SuperB Detector

    Science.gov (United States)

    Andreassen, R.; Arnaud, N.; Brown, D. N.; Burmistrov, L.; Carlson, J.; Cheng, C.-h.; Di Simone, A.; Gaponenko, I.; Manoni, E.; Perez, A.; Rama, M.; Roberts, D.; Rotondo, M.; Simi, G.; Sokoloff, M.; Suzuki, A.; Walsh, J.

    2011-12-01

    We have developed a parameterized (fast) simulation for detector optimization and physics reach studies of the proposed SuperB Flavor Factory in Italy. Detector components are modeled as thin sections of planes, cylinders, disks or cones. Particle-material interactions are modeled using simplified cross-sections and formulas. Active detectors are modeled using parameterized response functions. Geometry and response parameters are configured using xml files with a custom-designed schema. Reconstruction algorithms adapted from BaBar are used to build tracks and clusters. Multiple sources of background signals can be merged with primary signals. Pattern recognition errors are modeled statistically by randomly misassigning nearby tracking hits. Standard BaBar analysis tuples are used as an event output. Hadronic B meson pair events can be simulated at roughly 10Hz.

  2. Lessons learned enhancing EPICS CA for LANSCE timed and flavored data

    International Nuclear Information System (INIS)

    Hill, Jeffrey O.

    2009-01-01

    A previous paper described an upgrade to EPICS enabling client side tools at LANSCE to receive subscription updates filtered selectively to match a logical configuration of LANSCE beam gates, as configured by the control room. The upgrade required fundamental changes in the EPICS core components. First, the event queue in the EPICS server was upgraded to buffer record (function block) and device-specific parameters accessed generically via software interfaces for introspection of 3rd party data. In contrast, event queues in previous versions of EPICS were strictly limited to buffering only value, timestamp, and alarm status tuples. Second, the Channel Access server is being upgraded to filter subscription updates. This follow-on paper describes some necessary mid-project design changes and the lessons learned during software development.

  3. Report on the UNIX ''tupleviewer'' challenge

    International Nuclear Information System (INIS)

    Harris, F.A.; Johnson, A.S.; Hollinger, J.; Glanzman, T.; Kunz, P.F.; Pavel, T.; Rensing, P.E.; Damian, D.

    1992-11-01

    One result of the 1991 B Factory Workshop at SLAC was the definition of a software benchmark project: a user-friendly data browsing tool called a "tupleviewer". The tupleviewer is a program which interactively displays 1-D and 2-D graphical plots from data stored as n-tuples. A set of rules defines the minimum requirements for the tupleviewer, but the choices of platform, programming language, window system, graphics package and GUI development system are left open. The purpose of this project is to provide an arena in which to compare these open choices, as well as to provide a training exercise. The results of these efforts, including the experiences of the developers and comparisons between the projects, are reported. In particular, comparisons are reported between the various GUI tool kits used

  4. ON THE DERIVATIVE OF SMOOTH MEANINGFUL FUNCTIONS

    Directory of Open Access Journals (Sweden)

    Sanjo Zlobec

    2011-02-01

    Full Text Available The derivative of a function f in n variables at a point x* is one of the most important tools in mathematical modelling. If this object exists, it is represented by the row n-tuple ∇f(x*) = [∂f/∂x_i(x*)], called the gradient of f at x*, abbreviated: "the gradient". The evaluation of ∇f(x*) is usually done in two stages, first by calculating the n partials and then their values at x = x*. In this talk we give an alternative approach. We show that one can characterize the gradient without differentiation! The idea is to fix an arbitrary row n-tuple G and answer the following question: What is a necessary and sufficient condition such that G is the gradient of a given f at a given x*? The answer is given after adjusting the quadratic envelope property introduced in [3]. We work with smooth, i.e., continuously differentiable, functions with a Lipschitz derivative on a compact convex set with a non-empty interior. Working with this class of functions is not a serious restriction. In fact, loosely speaking, "almost all" smooth meaningful functions used in modelling of real life situations are expected to have a bounded "acceleration", hence they belong to this class. In particular, the class contains all twice differentiable functions [1]. An important property of the functions from this class is that every f can be represented as the difference of some convex function and a convex quadratic function. This decomposition was used in [3] to characterize the zero derivative points. There we obtained reformulations and augmentations of some well known classic results on optimality such as Fermat's extreme value theorem (known from high school) and the Lagrange multiplier theorem from calculus [2, 3]. In this talk we extend the results on zero derivative points to characterize the relation G = ∇f(x*), where G is an arbitrary n-tuple. Some special cases: If G = O, we recover the results on zero derivative points. For functions of a single
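
    The relation G = ∇f(x*) can also be probed numerically: a candidate row n-tuple G should agree with central finite-difference quotients of f at x* (a simple sanity check, not the characterization from the talk):

        # Numerically checking a candidate gradient tuple G at x*.
        def finite_diff_gradient(f, x, h=1e-6):
            grad = []
            for i in range(len(x)):
                xp, xm = list(x), list(x)
                xp[i] += h
                xm[i] -= h
                grad.append((f(xp) - f(xm)) / (2 * h))
            return grad

        f = lambda x: x[0] ** 2 + 3 * x[0] * x[1]           # smooth test function
        x_star = [1.0, 2.0]
        G = [2 * x_star[0] + 3 * x_star[1], 3 * x_star[0]]  # hand-computed: [8, 3]
        approx = finite_diff_gradient(f, x_star)
        assert all(abs(g - a) < 1e-4 for g, a in zip(G, approx))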

  5. Tuple image multi-scale optical flow for detailed cardiac motion extraction: Application to left ventricle rotation analysis

    NARCIS (Netherlands)

    Assen, van H.C.; Florack, L.M.J.; Westenberg, J.J.M.; Haar Romenij, ter B.M.; Hamarneh, G.; Abugharbieh, R.

    2008-01-01

    We present a new method for detailed tracking of cardiac motion based on MR-tagging imaging, multi-scale optical flow, and HARP-like image filtering. In earlier work, we showed that the results obtained with our method correlate very well with Phase Contrast MRI. In this paper we combine the

  6. Day-to-day origin-destination tuple estimation and prediction with hierarchical bayesian networks using multiple data sources

    NARCIS (Netherlands)

    Ma, Y.; Kuik, R.; Van Zuylen, H.J.

    2013-01-01

    Prediction of traffic demand is essential, either for an understanding of the future traffic state or so necessary measures can be taken to alleviate congestion. Usually, an origin-destination (O-D) matrix is used to represent traffic demand between two zones in transportation planning. Vehicles are

  7. A relação da literatura de cordel na antroponímia dos fantoches populares portugueses

    Directory of Open Access Journals (Sweden)

    José Luís de Oliveira

    2018-04-01

    Full Text Available ABSTRACT: The world of puppet theatre in Portugal is shrouded in mist, owing partly to a certain disinterest among theatre practitioners, but mainly to the scarce documentation on this ancestral art. The popular puppet theatre used little text, in contrast to the theatre of flesh-and-blood actors, in which the word was fundamental. The small repertoire of orally transmitted text led to the substantial loss of an entertainment that delighted children and adults alike over the centuries. One of the gaps concerns the genesis of the very nicknames by which the puppets were known. Nowadays the name Dom Roberto (known until 1962 as robertos) is the common designation for the Portuguese popular glove puppets. It evolved from the chapbook drama adapted from the medieval legend Robert the Devil, which formed part of the repertoire of the Portuguese popular puppet theatre. KEYWORDS: Puppet theatre, chapbook literature, broadsheets.

  8. A coordination language for databases

    DEFF Research Database (Denmark)

    Li, Ximeng; Wu, Xi; Lluch Lafuente, Alberto

    2017-01-01

    We present a coordination language for the modeling of distributed database applications. The language, baptized Klaim-DB, borrows the concepts of localities and nets of the coordination language Klaim but re-incarnates the tuple spaces of Klaim as databases. It provides high-level abstractions and primitives for the access and manipulation of structured data, with integrity and atomicity considerations. We present the formal semantics of Klaim-DB and develop a type system that avoids potential runtime errors such as certain evaluation errors and mismatches of data format in tables, which are monitored in the semantics. The use of the language is illustrated in a scenario where the sales from different branches of a chain of department stores are aggregated from their local databases. Raising the abstraction level and encapsulating integrity checks in the language primitives have benefited the modeling task.
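
    The record describes tuple spaces re-incarnated as databases, with integrity checks encapsulated in the primitives. The toy Python sketch below imitates that flavour with a locality holding schema-checked tables and Klaim-style pattern matching; all names and semantics are simplified assumptions, not the paper's formal calculus.

```python
# Toy model of Klaim-DB-style primitives: each locality hosts a database of
# tables, tuples are inserted with a schema (integrity) check and selected
# with Klaim-style pattern matching. Illustrative only.
class Locality:
    def __init__(self):
        self.tables = {}  # table name -> list of tuples

    def insert(self, table, row, schema):
        # integrity check: row must match the table's schema (arity + types)
        if len(row) != len(schema) or any(not isinstance(v, t)
                                          for v, t in zip(row, schema)):
            raise ValueError("row does not match table schema")
        self.tables.setdefault(table, []).append(row)

    def select(self, table, pattern):
        # pattern: tuple of values or None (wildcard)
        def matches(row):
            return all(p is None or p == v for p, v in zip(pattern, row))
        return [row for row in self.tables.get(table, []) if matches(row)]

store = Locality()
store.insert("sales", ("branch-A", "2017-03", 1200), (str, str, int))
store.insert("sales", ("branch-B", "2017-03", 800), (str, str, int))
print(store.select("sales", (None, "2017-03", None)))  # per-month aggregation input
```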

  9. Suitability of a Consensual Fuzzy Inference System to Evaluate Suppliers of Strategic Products

    Directory of Open Access Journals (Sweden)

    Nazario Garcia

    2018-01-01

    Full Text Available This paper designs a bidding and supplier evaluation model focused on strategic product procurement and develops the respective evaluation knowledge bases. The model is built using the most relevant variables cited in the reviewed procurement literature and allows two evaluation methods to be compared: a factor weighting method (WM) and a fuzzy inference system (FIS). By consulting an expert panel and using a 2-tuple symbolic translation system, strong fuzzy partitions for all model variables are built. The method, based on central symmetry, makes it possible to obtain the fuzzy label borders from their cores, which have been previously agreed among the experts. The system also lets the experts agree on the fuzzy rules to embed in the FIS. The results show the FIS method's superiority, as it better handles the non-linear behavior and the uncertainty inherent in the supplier evaluation process.
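
    Several records in this listing rely on the 2-tuple symbolic translation of Herrera and Martínez, in which a numeric aggregation result β is carried as a (label, α) pair so no information is lost to rounding. A minimal sketch, with an assumed five-label scale:

```python
# Minimal sketch of the 2-tuple linguistic representation: a value beta on the
# label scale becomes (nearest label, symbolic translation alpha). The label
# names are illustrative assumptions.
LABELS = ["very_low", "low", "medium", "high", "very_high"]  # s_0 .. s_4

def delta(beta):
    # beta in [0, len(LABELS)-1] -> (label, alpha) with alpha in [-0.5, 0.5)
    i = int(round(beta))
    return LABELS[i], beta - i

def delta_inv(label, alpha):
    return LABELS.index(label) + alpha

# aggregate three expert ratings without losing the fractional part
ratings = [delta_inv("high", 0.2), delta_inv("medium", -0.1), delta_inv("high", 0.0)]
print(delta(sum(ratings) / len(ratings)))   # ('high', -0.3)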

  10. An introduction to Python and computer programming

    CERN Document Server

    Zhang, Yue

    2015-01-01

    This book introduces the Python programming language and fundamental concepts in algorithms and computing. Its target audience includes students and engineers with little or no background in programming who need to master a practical programming language and learn the basic thinking of computer science/programming. The main contents come from lecture notes for engineering students from all disciplines and have received high ratings. The materials and their ordering have been adjusted repeatedly according to classroom reception. Compared to alternative textbooks in the market, this book introduces the underlying Python implementation of number, string, list, tuple, dict, function, class, instance and module objects in a consistent and easy-to-understand way, making assignment, function definition, function call, mutability and binding environments understandable inside-out. By giving the abstraction of implementation mechanisms, this book builds a solid understanding of the Python programming language.
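
    The distinctions the blurb highlights (tuple objects, mutability, binding) are easy to show in a few lines; the snippet below is an illustrative example in that spirit, not taken from the book:

```python
# Tuples are immutable bindings to possibly mutable objects, and assignment
# rebinds names rather than copying values.
point = (1, 2)           # a tuple object holding two int references
try:
    point[0] = 10        # tuple slots cannot be rebound element-wise
except TypeError as e:
    print("immutable:", e)

box = ([1, 2],)          # a tuple holding a mutable list
box[0].append(3)         # the tuple is unchanged; the list it refers to mutates
print(box)               # ([1, 2, 3],)

a = b = [0]              # two names bound to one list object
a.append(1)
print(b)                 # [0, 1] -- binding, not copying
```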

  11. Healthcare4VideoStorm: Making Smart Decisions Based on Storm Metrics.

    Science.gov (United States)

    Zhang, Weishan; Duan, Pengcheng; Chen, Xiufeng; Lu, Qinghua

    2016-04-23

    Storm-based stream processing is widely used for real-time large-scale distributed processing. Knowing the run-time status and ensuring performance is critical to providing the expected dependability for some applications, e.g., continuous video processing for security surveillance. Existing scheduling strategies are too coarse-grained to achieve good performance, and they mainly consider network resources while ignoring computing resources. In this paper, we propose Healthcare4Storm, a framework that derives Storm insights from Storm metrics to gain knowledge of the health status of an application, finally ending up with smart scheduling decisions. It takes into account both network and computing resources and conducts scheduling at a fine-grained level using tuples instead of topologies. A comprehensive evaluation shows that the proposed framework performs well and can improve the dependability of Storm-based applications.

  12. Fast detection of the main anatomical structures in digital retinal images based on intra- and inter-structure relational knowledge.

    Science.gov (United States)

    Molina-Casado, José M; Carmona, Enrique J; García-Feijoó, Julián

    2017-10-01

    Anatomical structure detection in retinal images is an open problem. However, most works in the related literature are oriented to detecting each structure individually, or assume the previous detection of one structure which is then used as a reference. The objective of this paper is the simultaneous detection of the main retinal structures (optic disc, macula, network of vessels and vascular bundle) in a fast and robust way. We propose a new methodology to accomplish this objective. It consists of two stages. In an initial stage, a set of operators is applied to the retinal image. Each operator uses intra-structure relational knowledge in order to produce a set of candidate blobs that belong to the desired structure. In a second stage, a set of tuples is created, each of which contains a different combination of the candidate blobs. Next, filtering operators using inter-structure relational knowledge are applied in order to find the winning tuple. A method using template matching and mathematical morphology is implemented following the proposed methodology. A detection counts as a success if the distance between the automatically detected blob center and the actual structure center is less than or equal to one optic disc radius. The success rates obtained on the public databases analyzed were: MESSIDOR (99.33%, 98.58%, 97.92%), DIARETDB1 (96.63%, 100%, 97.75%), DRIONS (100%, n/a, 100%) and ONHSD (100%, 98.85%, 97.70%) for optic disc (OD), macula (M) and vascular bundle (VB), respectively. The overall success rates obtained in this study were: 99.26% (OD), 98.69% (M) and 98.95% (VB). The average processing time per image was 4.16 ± 0.72 s. The main advantage of using inter-structure relational knowledge is the reduction of the number of false positives in the detection process. The implemented method is able to simultaneously detect four structures. It is fast, robust and its detection
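
    The two-stage idea (candidate blobs per structure, then tuples filtered with inter-structure relational knowledge) can be schematized as below; coordinates, distances and thresholds are invented placeholders, not the paper's operators:

```python
# Schematic of candidate-tuple filtering: combine per-structure candidates
# into tuples and keep only those consistent with an inter-structure prior.
from itertools import product
import math

candidates = {
    "optic_disc": [(120, 90), (130, 95)],
    "macula":     [(220, 100), (125, 92)],   # second candidate is implausible
}

def plausible(od, mac, expected=100, tol=30):
    # e.g. the macula should lie at a roughly known distance from the disc
    return abs(math.dist(od, mac) - expected) < tol

tuples = [t for t in product(candidates["optic_disc"], candidates["macula"])
          if plausible(*t)]
print(tuples)   # only (disc, macula) combinations consistent with the prior
```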

  13. Data Auditor: Analyzing Data Quality Using Pattern Tableaux

    Science.gov (United States)

    Srivastava, Divesh

    Monitoring databases maintain configuration and measurement tables about computer systems, such as networks and computing clusters, and serve important business functions, such as troubleshooting customer problems, analyzing equipment failures, planning system upgrades, etc. These databases are prone to many data quality issues: configuration tables may be incorrect due to data entry errors, while measurement tables may be affected by incorrect, missing, duplicate and delayed polls. We describe Data Auditor, a tool for analyzing data quality and exploring data semantics of monitoring databases. Given a user-supplied constraint, such as a boolean predicate expected to be satisfied by every tuple, a functional dependency, or an inclusion dependency, Data Auditor computes "pattern tableaux", which are concise summaries of subsets of the data that satisfy or fail the constraint. We discuss the architecture of Data Auditor, including the supported types of constraints and the tableau generation mechanism. We also show the utility of our approach on an operational network monitoring database.

  14. On the degrees-of-freedom of the MIMO three-way channel with intermittent connectivity

    KAUST Repository

    Chaaban, Anas

    2017-08-29

    The degrees-of-freedom (DoF) of the multi-antenna three-way channel (3WC) with an intermittent node is studied. Special attention is given to the impact of adaptation when the intermittent node has the largest number of antennas. A non-adaptive transmission scheme based on interference alignment, zero-forcing, and erasure-channel treatment is proposed, and its corresponding DoF region is derived. Then, it is shown that this scheme achieves the sum-DoF of the intermittent channel, in addition to the DoF region of the nonintermittent one. Thus, adaptation is not necessary from those perspectives. To the contrary, it is shown that adaptation is necessary for achieving the DoF region of the intermittent case. This is shown by deriving an outer bound for the intermittent channel with nonadaptive encoding, and proposing an adaptive scheme which achieves DoF tuples outside this bound. This highlights the importance of cooperation in this intermittent network.

  15. A DDoS Attack Detection Method Based on SVM in Software Defined Network

    Directory of Open Access Journals (Sweden)

    Jin Ye

    2018-01-01

    Full Text Available The detection of DDoS attacks is an important topic in the field of network security. The emergence of software defined networking (SDN) (Zhang et al., 2018) brings up some novel methods for this topic, in which learning algorithms are adopted to model the attack behavior based on data collected from the SDN controller. However, existing methods such as neural network algorithms are not practical enough to be applied. In this paper, an SDN environment is constructed with the Mininet and Floodlight (Ning et al., 2014) simulation platforms, 6-tuple characteristic values are extracted from the switch flow table, and a DDoS attack model is then built by combining these features with SVM classification algorithms. The experiments show that the average accuracy rate of our method is 95.24% with only a small amount of collected flow data. Our work is of good value for the detection of DDoS attacks in SDN.
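
    A hedged sketch of the idea: train an SVM classifier on 6-tuple features derived from flow-table samples. The feature semantics and the synthetic data below are assumptions for illustration, not the paper's exact features.

```python
# SVM on synthetic 6-tuple flow-table features (normal vs. attack).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# one 6-tuple per flow-table sample, e.g. entry growth rate, pair-flow ratio,
# port entropy, ... (placeholder semantics)
normal = rng.normal(loc=0.3, scale=0.1, size=(500, 6))
attack = rng.normal(loc=0.7, scale=0.1, size=(500, 6))
X = np.vstack([normal, attack])
y = np.array([0] * 500 + [1] * 500)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print("accuracy:", clf.score(Xte, yte))
```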

  16. Specific "scientific" data structures, and their processing

    Directory of Open Access Journals (Sweden)

    Jerzy Karczmarczuk

    2011-09-01

    Full Text Available Programming physicists use, as all programmers do, arrays, lists, tuples, records, etc., and this requires some change in their thought patterns when converting their formulae into code, since the "data structures" operated upon while elaborating a theory and its consequences are rather: power series and Padé approximants, differential forms and other instances of differential algebras, functionals (for the variational calculus), trajectories (solutions of differential equations), Young diagrams and Feynman graphs, etc. Such data is often used in a [semi-]numerical setting, not necessarily a "symbolic" one appropriate for computer algebra packages. Modules adapted to such data may be "just libraries", but often they become specific, embedded sub-languages, typically mapped into object-oriented frameworks with overloaded mathematical operations. Here we present a functional approach to this philosophy. We show how the usage of Haskell datatypes and, fundamental for our tutorial, the application of lazy evaluation make it possible to operate upon such data (in particular the "infinite" sequences) in a natural and comfortable manner.
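
    The tutorial itself works in Haskell; keeping to one language for the examples in this listing, the Python-generator analogue below shows the same lazy-evaluation idea: an "infinite" power series is just a coefficient stream, and operations consume it on demand.

```python
# Lazy power series as coefficient streams.
from itertools import islice

def geometric():
    while True:              # coefficients of 1/(1 - x) = 1 + x + x^2 + ...
        yield 1

def add(f, g):
    for a, b in zip(f, g):   # lazy coefficient-wise sum
        yield a + b

def derivative(f):
    next(f)                  # drop a_0
    for n, a in enumerate(f, start=1):
        yield n * a          # n * a_n becomes the coefficient of x^(n-1)

print(list(islice(add(geometric(), derivative(geometric())), 5)))  # [2, 3, 4, 5, 6]
```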

  17. Multidimensional entropic uncertainty relation based on a commutator matrix in position and momentum spaces

    Science.gov (United States)

    Hertz, Anaelle; Vanbever, Luc; Cerf, Nicolas J.

    2018-01-01

    The uncertainty relation for continuous variables due to Białynicki-Birula and Mycielski [I. Białynicki-Birula and J. Mycielski, Commun. Math. Phys. 44, 129 (1975), 10.1007/BF01608825] expresses the complementarity between two n-tuples of canonically conjugate variables (x1, x2, ..., xn) and (p1, p2, ..., pn) in terms of Shannon differential entropy. Here we consider the generalization to variables that are not canonically conjugate and derive an entropic uncertainty relation expressing the balance between any two n-variable Gaussian projective measurements. The bound on entropies is expressed in terms of the determinant of a matrix of commutators between the measured variables. This uncertainty relation also captures the complementarity between any two incompatible linear canonical transforms, the bound being written in terms of the corresponding symplectic matrices in phase space. Finally, we extend this uncertainty relation to Rényi entropies and also prove a covariance-based uncertainty relation which generalizes the Robertson relation.
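
    For reference, the starting point the abstract cites is the Białynicki-Birula–Mycielski relation; stated below in units with ħ = 1 as a reminder, not as the paper's new commutator-matrix bound:

```latex
% BBM entropic uncertainty relation for n conjugate pairs (\hbar = 1):
h(x_1,\ldots,x_n) + h(p_1,\ldots,p_n) \;\ge\; n \ln(\pi e)
```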

  18. Detecting periodicities with Gaussian processes

    Directory of Open Access Journals (Sweden)

    Nicolas Durrande

    2016-04-01

    Full Text Available We consider the problem of detecting and quantifying the periodic component of a function given noise-corrupted observations of a limited number of input/output tuples. Our approach is based on Gaussian process regression, which provides a flexible non-parametric framework for modelling periodic data. We introduce a novel decomposition of the covariance function as the sum of periodic and aperiodic kernels. This decomposition allows for the creation of sub-models which capture the periodic nature of the signal and its complement. To quantify the periodicity of the signal, we derive a periodicity ratio which reflects the uncertainty in the fitted sub-models. Although the method can be applied to many kernels, we give a special emphasis to the Matérn family, from the expression of the reproducing kernel Hilbert space inner product to the implementation of the associated periodic kernels in a Gaussian process toolkit. The proposed method is illustrated by considering the detection of periodically expressed genes in the arabidopsis genome.
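
    The decomposition idea, a covariance function written as a sum of periodic and aperiodic kernels, can be sketched with scikit-learn; the paper's Matérn constructions and periodicity ratio are not reproduced here, only the general setup:

```python
# GP regression with an additive periodic + aperiodic kernel.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ExpSineSquared, RBF, WhiteKernel

rng = np.random.default_rng(1)
X = np.linspace(0, 10, 80)[:, None]
y = np.sin(2 * np.pi * X[:, 0]) + 0.3 * X[:, 0] + 0.1 * rng.standard_normal(80)

kernel = ExpSineSquared(periodicity=1.0) + RBF(length_scale=3.0) + WhiteKernel(0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
print(gp.kernel_)   # fitted periodic + aperiodic decomposition
```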

  19. A new approach for an efficient human resource appraisal and selection

    Directory of Open Access Journals (Sweden)

    Hachicha Raoudha

    2012-12-01

    Full Text Available The aim of this paper is to provide a decision-making tool for solving a multi-criteria selection problem that can accommodate qualitative details about task requirements and candidates' competences. Our inquiry emphasizes the 2-tuple linguistic representation model as the most suitable tool for handling uncertain and subjective assessments. It is adapted to aggregate linguistic assessments of acquired and required competence resources generated by a group of appraisers. The resulting aggregated objective evaluations are then used as inputs to an extended version of the TOPSIS method. After some customization, a ranking of candidates based on the degree of similarity between required and acquired competence levels is provided. The quality and efficiency of the proposed approach were confirmed through a real-life application in a university context. It ensures better management of the available candidates. Moreover, it helps in facing absenteeism, identifying training needs, and so on.

  20. Optimal Selection Method of Process Patents for Technology Transfer Using Fuzzy Linguistic Computing

    Directory of Open Access Journals (Sweden)

    Gangfeng Wang

    2014-01-01

    Full Text Available Under the open innovation paradigm, technology transfer of process patents is one of the most important mechanisms for manufacturing companies to implement process innovation and enhance their competitive edge. To achieve promising technology transfers, we need to evaluate the feasibility of process patents and optimally select the most appropriate patent according to the actual manufacturing situation. Hence, this paper proposes an optimal selection method for process patents using multiple-criteria decision-making and 2-tuple fuzzy linguistic computing to avoid information loss during the aggregation of evaluations. An evaluation index system for the technology transfer feasibility of process patents is designed initially. Then, a fuzzy linguistic computing approach is applied to aggregate the evaluations of the weights for each criterion and its corresponding subcriteria. Furthermore, performance ratings for subcriteria and fuzzy aggregated ratings of criteria are calculated. Thus, we obtain the overall technology transfer feasibility of each patent alternative. Finally, a case study of aero-engine turbine manufacturing is presented to demonstrate the applicability of the proposed method.

  1. Lightning Fast and Space Efficient Inequality Joins

    KAUST Repository

    Khayyat, Zuhair

    2015-09-01

    Inequality joins, which join relational tables on inequality conditions, are used in various applications. While there have been a wide range of optimization methods for joins in database systems, from algorithms such as sort-merge join and band join, to various indices such as B+-tree, R*-tree and Bitmap, inequality joins have received little attention and queries containing such joins are usually very slow. In this paper, we introduce fast inequality join algorithms. We put columns to be joined in sorted arrays and we use permutation arrays to encode positions of tuples in one sorted array w.r.t. the other sorted array. In contrast to sort-merge join, we use space efficient bit-arrays that enable optimizations, such as Bloom filter indices, for fast computation of the join results. We have implemented a centralized version of these algorithms on top of PostgreSQL, and a distributed version on top of Spark SQL. We have compared against well known optimization techniques for inequality joins and show that our solution is more scalable and several orders of magnitude faster.
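
    A simplified sketch of the core idea for a two-predicate inequality self-join: sort on one attribute, visit tuples in order of the other, and mark processed positions in a bit-array so every earlier-marked position to the left is a join partner. This assumes distinct attribute values and uses an O(n²) scan; the real IEJoin adds permutation arrays, Bloom-filter indices and tie handling.

```python
# Find pairs (r, s) with r.time < s.time AND r.cost > s.cost.
def inequality_self_join(rows):
    # rows: list of (rid, time, cost), values assumed distinct
    n = len(rows)
    by_time = sorted(range(n), key=lambda i: rows[i][1])
    time_rank = {i: r for r, i in enumerate(by_time)}
    marked = [False] * n                      # bit-array over time ranks
    out = []
    for i in sorted(range(n), key=lambda i: -rows[i][2]):   # descending cost
        p = time_rank[i]
        # every marked rank q < p was processed earlier (higher cost) and has
        # smaller time, so it joins with tuple i
        out.extend((rows[by_time[q]][0], rows[i][0]) for q in range(p) if marked[q])
        marked[p] = True
    return out

print(inequality_self_join([("t1", 1, 9), ("t2", 2, 8), ("t3", 3, 7)]))
# [('t1', 't2'), ('t1', 't3'), ('t2', 't3')]
```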

  2. Lightning fast and space efficient inequality joins

    KAUST Repository

    Khayyat, Zuhair

    2015-09-01

    Inequality joins, which join relational tables on inequality conditions, are used in various applications. While there have been a wide range of optimization methods for joins in database systems, from algorithms such as sort-merge join and band join, to various indices such as B+-tree, R*-tree and Bitmap, inequality joins have received little attention and queries containing such joins are usually very slow. In this paper, we introduce fast inequality join algorithms. We put columns to be joined in sorted arrays and we use permutation arrays to encode positions of tuples in one sorted array w.r.t. the other sorted array. In contrast to sort-merge join, we use space efficient bit-arrays that enable optimizations, such as Bloom filter indices, for fast computation of the join results. We have implemented a centralized version of these algorithms on top of PostgreSQL, and a distributed version on top of Spark SQL. We have compared against well known optimization techniques for inequality joins and show that our solution is more scalable and several orders of magnitude faster.

  3. Event generation and production of signal inputs for the search of dark matter mediator signal at a future hadron collider

    CERN Document Server

    Chalise, Darshan

    2017-01-01

    The interaction between Dark Matter particles and Standard Model particles is possible through a force mediated by a Dark Matter (DM)-Standard Model (SM) mediator. If that mediator decays into two jets, the reconstructed invariant mass of the jets will peak at a specific value, in contrast to the smooth QCD background. This analysis is preliminary work towards understanding how changes in detector conditions at the Future Circular Collider affect the sensitivity to the mediator signal. MadGraph 5 was used to produce events with a 30 TeV DM mediator, and Heppy was used to produce flat n-tuples for ROOT analysis. MadAnalysis 5 was then used to produce histograms of the MadGraph events, and PyRoot was used to analyze the Heppy output. Histograms of the invariant mass of the jets, both after event production through MadGraph and after the Heppy analysis, showed a peak at 30 TeV. This verified the production of a 30 TeV mediator during event generation.

  4. The Research of Disease Spots Extraction Based on Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Kangshun Li

    2017-01-01

    Full Text Available Based on the characteristics of maize disease spots in images, this paper designs a two-dimensional histogram segmentation method based on an evolutionary algorithm. Combining the analysis of maize pest and disease images with full consideration of the color and texture characteristics of lesions, the chroma and gray values of each pixel are composed into two-tuples to build a two-dimensional histogram. This solves the problem that one-dimensional histograms cannot clearly separate target and background when the distribution is not bimodal, and it improves the traditional application of two-dimensional histograms to pest-damage lesion extraction. A chromosome coding suited to the characteristics of lesion images is designed, based on a second segmentation with the genetic-algorithm Otsu method. The initial population is determined from the results of the lesion-image analysis, and parallel selection, an optimal-preservation strategy, and an adaptive mutation operator are used to improve search efficiency. Finally, by setting a fluctuation threshold, the best threshold is sought within the fluctuation range, implementing both global and local search.
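
    An illustrative stand-in for the pipeline: pixel-wise (chroma, gray) two-tuples build a 2-D histogram, and a threshold pair is picked by maximizing a between-class separation score. Brute force replaces the paper's genetic-algorithm search, and off-diagonal histogram mass is ignored for simplicity.

```python
# Two-dimensional histogram thresholding on (chroma, gray) two-tuples.
import numpy as np

def two_d_threshold(chroma, gray, bins=16):
    p, _, _ = np.histogram2d(chroma.ravel(), gray.ravel(),
                             bins=bins, range=[[0, 1], [0, 1]])
    p /= p.sum()
    ii, jj = np.indices((bins, bins))
    best_score, best_st = -1.0, (1, 1)
    for s in range(1, bins):
        for t in range(1, bins):
            w0, w1 = p[:s, :t].sum(), p[s:, t:].sum()   # background / lesion mass
            if w0 < 1e-12 or w1 < 1e-12:
                continue
            m0 = np.array([(ii[:s, :t] * p[:s, :t]).sum(),
                           (jj[:s, :t] * p[:s, :t]).sum()]) / w0
            m1 = np.array([(ii[s:, t:] * p[s:, t:]).sum(),
                           (jj[s:, t:] * p[s:, t:]).sum()]) / w1
            score = w0 * w1 * ((m0 - m1) ** 2).sum()    # between-class distance
            if score > best_score:
                best_score, best_st = score, (s, t)
    return best_st

rng = np.random.default_rng(0)
img_c = np.clip(np.concatenate([rng.normal(0.3, 0.05, 1000),
                                rng.normal(0.8, 0.05, 200)]), 0, 1)
img_g = np.clip(np.concatenate([rng.normal(0.4, 0.05, 1000),
                                rng.normal(0.9, 0.05, 200)]), 0, 1)
print(two_d_threshold(img_c, img_g))   # bin indices separating lesion pixels
```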

  5. An Embedded System Dedicated to Intervehicle Communication Applications

    Directory of Open Access Journals (Sweden)

    Zhou Haiying

    2010-01-01

    Full Text Available Overcoming system latency and network delay is essential for intervehicle communication (IVC) applications such as hazard alarming and cooperative driving. This paper proposes a low-cost embedded software system dedicated to such applications. It consists of two basic component layers: an operating system, named HEROS (hybrid event-driven and real-time multitasking operating system), and a communication protocol, named CIVIC (Communication Inter Véhicule Intelligente et Coopérative). HEROS was originally designed for wireless sensor networks (WSNs). It contains a component-based resource-aware kernel and a low-latency tuple-based communication system. Moreover, it provides a configurable event-driven and/or real-time multitasking mechanism for various embedded applications. CIVIC is an autoconfiguration cooperative IVC protocol. It merges proactive and reactive approaches to speed up and optimize location-based routing discovery with high-mobility nodes. This embedded system has been implemented and tested. The experimental results show that the new embedded system has low system latency and network delay with small resource consumption.

  6. Feynman's Operational Calculi: Spectral Theory for Noncommuting Self-adjoint Operators

    International Nuclear Information System (INIS)

    Jefferies, Brian; Johnson, Gerald W.; Nielsen, Lance

    2007-01-01

    The spectral theorem for commuting self-adjoint operators, along with the associated functional (or operational) calculus, is among the most useful and beautiful results of analysis. It is well known that forming a functional calculus for noncommuting self-adjoint operators is far more problematic. The central result of this paper establishes a rich functional calculus for any finite number of noncommuting (i.e. not necessarily commuting) bounded, self-adjoint operators A_1, ..., A_n and associated continuous Borel probability measures μ_1, ..., μ_n on [0,1]. Fix A_1, ..., A_n. Then each choice of an n-tuple (μ_1, ..., μ_n) of measures determines one of Feynman's operational calculi acting on a certain Banach algebra of analytic functions, even when A_1, ..., A_n are just bounded linear operators on a Banach space. The Hilbert space setting along with self-adjointness allows us to extend the operational calculi well beyond the analytic functions. Using results and ideas drawn largely from the proof of our main theorem, we also establish a family of Trotter product type formulas suitable for Feynman's operational calculi

  7. Spectral theory of linear operators and spectral systems in Banach algebras

    CERN Document Server

    Müller, Vladimir

    2003-01-01

    This book is dedicated to the spectral theory of linear operators on Banach spaces and of elements in Banach algebras. It presents a survey of results concerning various types of spectra, both of single elements and of n-tuples of elements. Typical examples are the one-sided spectra, the approximate point, essential, local and Taylor spectrum, and their variants. The theory is presented in a unified, axiomatic and elementary way. Many results appear here for the first time in a monograph. The material is self-contained. Only a basic knowledge of functional analysis, topology, and complex analysis is assumed. The monograph should appeal both to students who would like to learn about spectral theory and to experts in the field. It can also serve as a reference book. The present second edition contains a number of new results, in particular concerning orbits and their relations to the invariant subspace problem.

  8. Case study of open-source enterprise resource planning implementation in a small business

    Science.gov (United States)

    Olson, David L.; Staley, Jesse

    2012-02-01

    Enterprise resource planning (ERP) systems have been recognised as offering great benefit to some organisations, although they are expensive and problematic to implement. The cost and risk make well-developed proprietary systems unaffordable to small businesses. Open-source software (OSS) has become a viable means of producing ERP system products. The question this paper addresses is the feasibility of OSS ERP systems for small businesses. A case is reported involving two efforts to implement freely distributed ERP software products in a small US make-to-order engineering firm. The case emphasises the potential of freely distributed ERP systems, as well as some of the hurdles involved in their implementation. The paper briefly reviews highlights of OSS ERP systems, with the primary focus on reporting the case experiences of efforts to implement ERPLite software and xTuple software. While both systems worked from a technical perspective, both failed due to economic factors. Although these economic conditions led to imperfect results, the case demonstrates the feasibility of OSS ERP for small businesses. Both experiences are evaluated in terms of the risk dimension.

  9. BigDansing

    KAUST Repository

    Khayyat, Zuhair

    2015-06-02

    Data cleansing approaches have usually focused on detecting and fixing errors with little attention to scaling to big datasets. This presents a serious impediment, since data cleansing often involves costly computations such as enumerating pairs of tuples, handling inequality joins, and dealing with user-defined functions. In this paper, we present BigDansing, a Big Data Cleansing system that tackles efficiency, scalability, and ease-of-use issues in data cleansing. The system can run on top of most common general-purpose data processing platforms, ranging from DBMSs to MapReduce-like frameworks. A user-friendly programming interface allows users to express data quality rules both declaratively and procedurally, without requiring awareness of the underlying distributed platform. BigDansing translates these rules into a series of transformations that enable distributed computations and several optimizations, such as shared scans and specialized join operators. Experimental results on both synthetic and real datasets show that BigDansing outperforms existing baseline systems by up to more than two orders of magnitude without sacrificing the quality provided by the repair algorithms.
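
    A toy sketch of one kind of rule such systems evaluate: a functional dependency zip -> city, checked by enumerating conflicting tuple pairs (the costly pairwise computation the abstract mentions). Field names and data are illustrative, not the system's API.

```python
# Detect violations of the functional dependency zip -> city.
from itertools import combinations

rows = [
    {"id": 1, "zip": "10001", "city": "NYC"},
    {"id": 2, "zip": "10001", "city": "New York"},
    {"id": 3, "zip": "60601", "city": "Chicago"},
]

violations = [(r["id"], s["id"])
              for r, s in combinations(rows, 2)
              if r["zip"] == s["zip"] and r["city"] != s["city"]]
print(violations)   # [(1, 2)] -- candidate tuple pair for repair
```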

  10. Approximation of OLAP cubes and rule generation in data warehouses

    Directory of Open Access Journals (Sweden)

    Sami Naouali

    2006-04-01

    Full Text Available This paper presents a new approach toward approximate query answering in data warehouses. The approach is based on an adaptation of rough set theory to multidimensional data, and offers cube exploration and mining facilities. The objective of this work is to integrate approximation mechanisms and associated operators into data cubes in order to produce views that can then be explored using OLAP or data mining techniques. The integration of data approximation capabilities with OLAP techniques offers additional facilities for cube exploration and analysis. The proposed approach allows the user to work either in a restricted mode using a cube lower approximation or in a relaxed mode using cube upper approximation. The former mode is useful when the query output is large, and hence allows the user to focus on a reduced set of fully matching tuples. The latter is useful when a query returns an empty or small answer set, and hence helps relax the query conditions so that a superset of the answer is returned.
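
    The lower/upper approximation mechanism the paper adapts from rough set theory can be shown in miniature: group tuples into indiscernibility classes, then take classes fully inside the answer set (lower) and classes merely intersecting it (upper). Data and attribute names below are illustrative.

```python
# Rough-set lower and upper approximations of a query answer set.
from collections import defaultdict

def approximations(rows, key, target):
    # key: attributes defining indiscernibility classes; target: answer predicate
    classes = defaultdict(list)
    for r in rows:
        classes[tuple(r[k] for k in key)].append(r)
    lower, upper = [], []
    for cls in classes.values():
        hits = [r for r in cls if target(r)]
        if hits:
            upper.extend(cls)          # class intersects the answer set
            if len(hits) == len(cls):
                lower.extend(cls)      # class fully inside the answer set
    return lower, upper

rows = [
    {"region": "east", "product": "tv", "sales": 120},
    {"region": "east", "product": "tv", "sales": 80},
    {"region": "west", "product": "tv", "sales": 200},
]
low, up = approximations(rows, key=("region", "product"),
                         target=lambda r: r["sales"] > 100)
print(len(low), len(up))   # 1 and 3: west class certain, east class only possible
```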

  11. Augmented cognition tool for rapid military decision making.

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Shawn Ellis; Bernard, Michael Lewis; Verzi, Stephen J.; Dubicka, Irene; Vineyard, Craig Michael

    2011-10-01

    This report describes the laboratory directed research and development work to model relevant areas of the brain that associate multi-modal information for long-term storage for the purpose of creating a more effective, and more automated, association mechanism to support rapid decision making. Using the biology and functionality of the hippocampus as an analogy or inspiration, we have developed an artificial neural network architecture to associate k-tuples (paired associates) of multimodal input records. The architecture is composed of coupled unimodal self-organizing neural modules that learn generalizations of unimodal components of the input record. Cross modal associations, stored as a higher-order tensor, are learned incrementally as these generalizations form. Graph algorithms are then applied to the tensor to extract multi-modal association networks formed during learning. Doing so yields a novel approach to data mining for knowledge discovery. This report describes the neurobiological inspiration, architecture, and operational characteristics of our model, and also provides a real world terrorist network example to illustrate the model's functionality.

  12. Silicon diode for measurement of integral neutron dose and method of its production

    International Nuclear Information System (INIS)

    Frank, H.; Seda, J.; Trousil, J.

    1978-01-01

    The silicon diode consists of an N- or P-type silicon plate having a specific resistance exceeding 10 Ω·cm and a minority carrier lifetime exceeding 100 μs. The plate thickness is five to ten times the diffusion length, and the plate consists of layers. Ions of, e.g., boron, at a concentration exceeding 10^14 cm^-2 are implanted into the P+ type silicon layer and a layer of a metal, e.g., nickel, is deposited onto it. Ions of, e.g., phosphorus, at a concentration exceeding 10^14 cm^-2 are implanted in the N+ type layer and a metal layer, e.g., nickel, is again deposited onto it. Implantation proceeds at an ion acceleration voltage of 10 to 200 kV. Metal layer deposition follows, and simultaneously with the annealing of the P+ and N+ type silicon layers, the metal layers are annealed at 600 to 900 °C for 1 to 60 minutes, with a subsequent temperature decrease at a rate of less than 10 °C/min down to a temperature of 300 °C. (J.P.)

  13. Lightning fast and space efficient inequality joins

    KAUST Repository

    Khayyat, Zuhair; Lucia, William; Singh, Meghna; Ouzzani, Mourad; Papotti, Paolo; Quiané-Ruiz, Jorge-Arnulfo; Tang, Nan; Kalnis, Panos

    2015-01-01

    Inequality joins, which join relational tables on inequality conditions, are used in various applications. While there have been a wide range of optimization methods for joins in database systems, from algorithms such as sort-merge join and band join, to various indices such as B+-tree, R*-tree and Bitmap, inequality joins have received little attention and queries containing such joins are usually very slow. In this paper, we introduce fast inequality join algorithms. We put columns to be joined in sorted arrays and we use permutation arrays to encode positions of tuples in one sorted array w.r.t. the other sorted array. In contrast to sort-merge join, we use space efficient bit-arrays that enable optimizations, such as Bloom filter indices, for fast computation of the join results. We have implemented a centralized version of these algorithms on top of PostgreSQL, and a distributed version on top of Spark SQL. We have compared against well known optimization techniques for inequality joins and show that our solution is more scalable and several orders of magnitude faster.

  14. Elie Cartan and pan-geometry of multispatial hyperspace

    International Nuclear Information System (INIS)

    Czajko, Jakub

    2004-01-01

    Élie Cartan proved that the highest dimensionality of any simple geometric space is three, and that the exterior differentiation of a 3D+ geometric object gives a bivector, which may correspond to two 2D surfaces, as if the 3D+ geometric object comprised two 3D objects. Since one cannot increase the dimensionality of a 3D space even though more than four independently varying physical magnitudes do exist, an expansion of dimensionality requires a multispatial hyperspace that contains many simple geometric 3D spaces. The presence of such a hyperspace prompts an entirely new concept of vectors, with an isometric operation of vector multiplication on traditional vectors (3-tuples). This new operation on 3-vectors implies the presence of a 3D mass-based linear vector space and consequently a 9D geometric hyperspace for classical mechanics alone. An outline of an entirely new, synthetic approach to physics and mathematics is also introduced. This synthetic approach could be used to design a computer-aided knowledge extraction system that generates entirely new scientific knowledge

  15. Fault-Tolerant and Elastic Streaming MapReduce with Decentralized Coordination

    Energy Technology Data Exchange (ETDEWEB)

    Kumbhare, Alok [Univ. of Southern California, Los Angeles, CA (United States); Frincu, Marc [Univ. of Southern California, Los Angeles, CA (United States); Simmhan, Yogesh [Indian Inst. of Technology (IIT), Bangalore (India); Prasanna, Viktor K. [Univ. of Southern California, Los Angeles, CA (United States)

    2015-06-29

    The MapReduce programming model, due to its simplicity and scalability, has become an essential tool for processing large data volumes in distributed environments. Recent Stream Processing Systems (SPS) extend this model to provide low-latency analysis of high-velocity continuous data streams. However, integrating MapReduce with streaming poses challenges: first, runtime variations in data characteristics such as data rates and key distribution cause resource overload, which in turn leads to fluctuations in the Quality of Service (QoS); and second, stateful reducers, whose state depends on the complete tuple history, necessitate efficient fault-recovery mechanisms to maintain the desired QoS in the presence of resource failures. We propose an integrated streaming MapReduce architecture leveraging the concept of consistent hashing to support runtime elasticity, along with locality-aware data and state replication to provide efficient load balancing with low-overhead fault tolerance and parallel fault recovery from multiple simultaneous failures. Our evaluation on a private cloud shows up to a 2.8x improvement in peak throughput compared to the Apache Storm SPS, and a low recovery latency of 700-1500 ms from multiple failures.
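
    The consistent-hashing mechanism such elastic designs rely on is compact enough to sketch: keys map to the first node clockwise on a hash ring, so adding or removing a node moves only a small fraction of the keys. A minimal sketch, not the paper's implementation:

```python
# Consistent hashing with virtual nodes.
import bisect, hashlib

def h(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, vnodes=50):
        # each physical node gets many positions on the ring for balance
        self.ring = sorted((h(f"{n}#{v}"), n) for n in nodes for v in range(vnodes))
        self.keys = [k for k, _ in self.ring]

    def node_for(self, key):
        i = bisect.bisect(self.keys, h(key)) % len(self.keys)
        return self.ring[i][1]

ring = Ring(["worker-1", "worker-2", "worker-3"])
print(ring.node_for("tuple-42"))   # stable assignment of tuples to reducers
```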

  16. A web-based approach to data imputation

    KAUST Repository

    Li, Zhixu

    2013-10-24

    In this paper, we present WebPut, a prototype system that adopts a novel web-based approach to the data imputation problem. Towards this, WebPut utilizes the available information in an incomplete database in conjunction with the data consistency principle. Moreover, WebPut extends effective Information Extraction (IE) methods for the purpose of formulating web search queries that are capable of effectively retrieving missing values with high accuracy. WebPut employs a confidence-based scheme that efficiently leverages our suite of data imputation queries to automatically select the most effective imputation query for each missing value. A greedy iterative algorithm is proposed to schedule the imputation order of the different missing values in a database, and in turn the issuing of their corresponding imputation queries, to improve the accuracy and efficiency of WebPut. Moreover, several optimization techniques are proposed to reduce the cost of estimating the confidence of imputation queries at both the tuple level and the database level. Experiments based on several real-world data collections demonstrate not only the effectiveness of WebPut compared to existing approaches, but also the efficiency of our proposed algorithms and optimization techniques. © 2013 Springer Science+Business Media New York.

  17. Incorporating linguistic, probabilistic, and possibilistic information in a risk-based approach for ranking contaminated sites.

    Science.gov (United States)

    Zhang, Kejiang; Achari, Gopal; Pei, Yuansheng

    2010-10-01

    Different types of uncertain information (linguistic, probabilistic, and possibilistic) exist in site characterization. Their representation and propagation significantly influence the management of contaminated sites. In the absence of a framework with which to properly represent and integrate these quantitative and qualitative inputs, decision makers cannot take full advantage of the available and necessary information to identify all the plausible alternatives. A systematic methodology was developed in the present work to incorporate linguistic, probabilistic, and possibilistic information into the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), a subgroup of Multi-Criteria Decision Analysis (MCDA) methods, for ranking contaminated sites. The identification of criteria based on the paradigm of comparative risk assessment provides a rationale for risk-based prioritization. The uncertain linguistic, probabilistic, and possibilistic information identified in characterizing contaminated sites can be properly represented as numerical values, intervals, probability distributions, fuzzy sets or possibility distributions, and linguistic variables according to its nature. These different kinds of representation are first transformed into a 2-tuple linguistic representation domain. The propagation of hybrid uncertainties is then carried out in the same domain. This methodology can use the original site information directly as much as possible. The case study shows that this systematic methodology provides more reasonable results. © 2010 SETAC.

  18. A New Decision-Making Method for Stock Portfolio Selection Based on Computing with Linguistic Assessment

    Directory of Open Access Journals (Sweden)

    Chen-Tung Chen

    2009-01-01

    Full Text Available The purpose of stock portfolio selection is to allocate capital among a large number of stocks in order to bring the most profitable return for investors. In most of the past literature, experts considered the portfolio selection problem based only on past crisp or quantitative data. However, many qualitative and quantitative factors influence stock portfolio selection in real investment situations. It is very important for experts or decision-makers to use their experience and knowledge to predict the performance of each stock and build a stock portfolio. Because the knowledge, experience, and background of each expert are different and vague, different types of 2-tuple linguistic variables are suitable for expressing experts' opinions on the performance of each stock with respect to each criterion. Based on the linguistic evaluations of experts, the linguistic TOPSIS and linguistic ELECTRE methods are combined in this paper into a new decision-making method for dealing with stock selection problems. Once the investment set has been determined, the risk preferences of the investor are considered to calculate the investment ratio of each stock in the set. Finally, an example is implemented to demonstrate the practicability of the proposed method.

  19. Automorphic Forms and Mock Modular Forms in String Theory

    Science.gov (United States)

    Nazaroglu, Caner

    We study a variety of modular invariant objects in relation to string theory. First, we focus on Jacobi forms over generic rank lattices and Siegel forms that appear in N = 2, D = 4 compactifications of the heterotic string with Wilson lines. Constraints from the low-energy spectrum and modularity are employed to deduce the relevant supersymmetric partition functions entirely. This procedure is applied to models that lead to Jacobi forms of index 3, 4, 5 as well as Jacobi forms over the root lattices A2 and A3. These computations are then checked against an explicit orbifold model which can be Higgsed to the models in question. Models with a single Wilson line are then studied in detail, with their relation to the paramodular group Γm as T-duality group made explicit. These results on the heterotic string side are then turned into predictions for geometric invariants using Type II-heterotic duality. Secondly, we study theta functions for indefinite-signature lattices of generic signature. Building on results in the literature for signature (n-1,1) and (n-2,2) lattices, we work out the properties of generalized error functions which we call r-tuple error functions. We then use these functions to build such indefinite theta functions and describe their modular completions.

  20. RBSURFpred: Modeling protein accessible surface area in real and binary space using regularized and optimized regression.

    Science.gov (United States)

    Tarafder, Sumit; Toukir Ahmed, Md; Iqbal, Sumaiya; Tamjidul Hoque, Md; Sohel Rahman, M

    2018-03-14

    Accessible surface area (ASA) of a protein residue is an effective feature for protein structure prediction, binding region identification, fold recognition problems, etc. Improving the prediction of ASA through effective feature variables is a challenging but explorable task, especially in the field of machine learning. Among the existing predictors of ASA, REGAd^3p is a highly accurate ASA predictor based on regularized exact regression with a polynomial kernel of degree 3. In this work, we present a new predictor, RBSURFpred, which extends REGAd^3p along several dimensions by incorporating 58 physicochemical, evolutionary and structural properties into 9-tuple peptides via Chou's general PseAAC, which allowed us to obtain higher accuracies in predicting both real-valued and binary ASA. We have compared RBSURFpred for both real- and binary-space predictions with state-of-the-art predictors such as REGAd^3p and SPIDER2. We have also carried out a rigorous analysis of the performance of RBSURFpred in terms of different amino acids and their properties, and with biologically relevant case studies. The performance of RBSURFpred establishes it as a useful tool for the community. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Degrees-of-Freedom of the MIMO Three-Way Channel with Node-Intermittency

    KAUST Repository

    Neu, Joachim

    2017-08-28

    The characterization of fundamental performance bounds of many-to-many communication systems in which participating nodes are active in an intermittent way is one of the major challenges in communication theory. In order to address this issue, we introduce the multiple-input multiple-output (MIMO) three-way channel (3WC) with an intermittent node and study its degrees-of-freedom (DoF) region and sum-DoF. We devise a non-adaptive encoding scheme based on zero-forcing, interference alignment and erasure coding, and show its DoF region (and thus sum-DoF) optimality for non-intermittent 3WCs and its sum-DoF optimality for (node-)intermittent 3WCs. However, we show by example that in general some DoF tuples in the intermittent 3WC can only be achieved by adaptive schemes, such as multi-hop or decode-forward relaying. This shows that non-adaptive encoding is sufficient for the non-intermittent 3WC and for the sum-DoF of intermittent 3WCs, but adaptive encoding is necessary for the DoF region of intermittent 3WCs. Our work contributes to a better understanding of the fundamental limits of multi-way communication systems with intermittency and the impact of adaptation therein.

  2. Demazure Modules, Fusion Products and Q-Systems

    Science.gov (United States)

    Chari, Vyjayanthi; Venkatesh, R.

    2015-01-01

    In this paper, we introduce a family of indecomposable finite-dimensional graded modules for the current algebra associated to a simple Lie algebra. These modules are indexed by tuples of partitions (ξα), where α varies over a set of positive roots of the Lie algebra, and we assume that they satisfy a natural compatibility condition. In the case when the ξα are all rectangular, for instance, we prove that these modules are Demazure modules in various levels. As a consequence, we see that the defining relations of Demazure modules can be greatly simplified. We use this simplified presentation to relate our results to the fusion products, defined in (Feigin and Loktev in Am Math Soc Transl Ser (2) 194:61-79, 1999), of representations of the current algebra. We prove that the Q-system of (Hatayama et al. in Contemporary Mathematics, vol. 248, pp. 243-291, American Mathematical Society, Providence, 1998) extends to a canonical short exact sequence of fusion products of representations associated to certain special partitions. Finally, in the last section we deal with the case of sl2 and prove that the modules we define are just fusion products of irreducible representations of the associated current algebra, and we give monomial bases for these modules.

  3. Kansas environmental and resource study: A Great Plains model. [land use, image enhancement, winter wheat, agriculture, water resources, and pattern recognition

    Science.gov (United States)

    Haralick, R. M.; Kanemasu, E. T.; Morain, S. A.; Yarger, H. L.; Ulaby, F. T.; Davis, J. C. (Principal Investigator); Bosley, R. J.; Williams, D. L.; Mccauley, J. R.; Mcnaughton, J. L.

    1973-01-01

    The author has identified the following significant results. Improvement in the land use classification accuracy of ERTS-1 MSS multi-images over Kansas can be made by using two distances between neighboring grey-tone N-tuples instead of one distance. Much more information is contained texturally than spectrally in the Kansas image. Ground truth measurements indicate that reflectance ratios of the 545 and 655 nm wavebands provide an index of plant development and possibly physiological stress. Preliminary analysis of MSS channels 4 and 5 substantiates the ground truth interpretation. Results of the land use mapping experiment indicate that ERTS-1 imagery has major potential in regionalization. The ways in which land is utilized within these regions may then be studied more effectively than if no adequate regionalization were available. A model for estimating wheat yield per acre has been applied to acreage estimates derived from ERTS-1 imagery to project the 1973 wheat yields for a ten-county area in southwest Kansas. The results are within 3% of the preharvest estimates for the same area prepared by the USDA. Visual identification of winter wheat is readily achieved by using a temporal sequence of images. Identification can be improved by stratifying the project area into subregions having more or less homogeneous agricultural practices and crop mixes.

  4. On Power Allocation for Parallel Gaussian Broadcast Channels with Common Information

    Directory of Open Access Journals (Sweden)

    Gohary, Ramy H.

    2009-01-01

    Full Text Available This paper considers a broadcast system in which a single transmitter sends a common message and (independent) particular messages to the receivers over unmatched parallel scalar Gaussian subchannels. For this system, the set of all rate tuples that can be achieved via superposition coding and Gaussian signalling (SPCGS) can be parameterized by a set of power loads and partitions, and the boundary of this set can be expressed as the solution of an optimization problem. Although that problem is not convex in the general case, it will be shown that it can be used to obtain tight and efficiently computable inner and outer bounds on the SPCGS rate region. The development of these bounds relies on approximating the original optimization problem by a (convex) Geometric Program (GP), and in addition to generating the bounds, the GP also generates the corresponding power loads and partitions. There are special cases of the general problem that can be precisely formulated in a convex form. In this paper, explicit convex formulations are given for three such cases, namely the case of 2 users, the case in which only particular messages are transmitted (in both of which the SPCGS rate region is the capacity region), and the case in which only the SPCGS sum rate is to be maximized.

  5. SharkDB: an in-memory column-oriented storage for trajectory analysis

    KAUST Repository

    Zheng, Bolong; Wang, Haozhou; Zheng, Kai; Su, Han; Liu, Kuien; Shang, Shuo

    2017-01-01

    The last decade has witnessed the prevalence of sensor and GPS technologies that produce a high volume of trajectory data representing the motion history of moving objects. However, some characteristics of trajectories, such as variable lengths and asynchronous sampling rates, make it difficult to fit them into traditional database systems, which are disk-based and tuple-oriented. Motivated by the success of column stores and the recent development of in-memory databases, we explore the potential of boosting the performance of trajectory data processing by designing a novel trajectory storage within main memory. In contrast to most existing trajectory indexing methods, which keep consecutive samples of the same trajectory in the same disk page, we partition the database into frames in which the positions of all moving objects at the same time instant are stored together and aligned in main memory. We found this column-wise storage to be surprisingly well suited for in-memory computing, since most frames can be stored in highly compressed form, which is pivotal for increasing memory throughput and reducing CPU cache misses. The independence between frames also makes them natural working units when parallelizing data processing in a multi-core environment. Lastly, we run a variety of common trajectory queries on both real and synthetic datasets in order to demonstrate the advantages and study the limitations of our proposed storage.

  6. DataSpread: Unifying Databases and Spreadsheets.

    Science.gov (United States)

    Bendre, Mangesh; Sun, Bofan; Zhang, Ding; Zhou, Xinyan; Chang, Kevin ChenChuan; Parameswaran, Aditya

    2015-08-01

    Spreadsheet software is often the tool of choice for ad-hoc tabular data management, processing, and visualization, especially on tiny data sets. On the other hand, relational database systems offer significant power, expressivity, and efficiency over spreadsheet software for data management, while lacking in ease of use and ad-hoc analysis capabilities. We demonstrate DataSpread, a data exploration tool that holistically unifies databases and spreadsheets. It continues to offer a Microsoft Excel-based spreadsheet front-end, while in parallel managing all the data in a back-end database, specifically, PostgreSQL. DataSpread retains all the advantages of spreadsheets, including ease of use, ad-hoc analysis and visualization capabilities, and a schema-free nature, while also adding the advantages of traditional relational databases, such as scalability and the ability to use arbitrary SQL to import, filter, or join external or internal tables and have the results appear in the spreadsheet. DataSpread needs to reason about and reconcile differences in the notions of schema, the addressing of cells and tuples, and the current "pane" (which exists in spreadsheets but not in traditional databases), and it must support data modifications at both the front-end and the back-end. Our demonstration will center on our first and early prototype of DataSpread, and will give attendees a sense of the enormous data exploration capabilities offered by unifying spreadsheets and databases.

  7. A HYBRID HOPFIELD NEURAL NETWORK AND TABU SEARCH ALGORITHM TO SOLVE ROUTING PROBLEM IN COMMUNICATION NETWORK

    Directory of Open Access Journals (Sweden)

    MANAR Y. KASHMOLA

    2012-06-01

    Full Text Available The development of hybrid algorithms for solving complex optimization problems focuses on enhancing the strengths and compensating for the weaknesses of two or more complementary approaches. The goal is to intelligently combine the key elements of these approaches to find superior solutions to optimization problems. Optimal routing in a communication network is considered a complex optimization problem. In this paper we propose a hybrid Hopfield Neural Network (HNN) and Tabu Search (TS) algorithm, called the hybrid HNN-TS algorithm. The paradigm of this hybridization is embedded: the short-term memory and tabu-restriction features of the TS algorithm are embedded in the HNN model. The short-term memory and tabu restriction control the neuron selection process in the HNN model in order to get around the local-minima problem and find an optimal solution to the complex optimization problem. The proposed algorithm is intended to find the optimal path for packet transmission in the network, which falls within the field of routing problems. The optimal path to be selected depends on a 4-tuple (delay, cost, reliability and capacity). Test results show that the proposed algorithm can find a path with optimal cost in a reasonable number of iterations. They also show that the complexity of the network model is not a problem, since the neuron selection is done heuristically.

  8. A Focused Crawler for Borderlands Situation Information with Geographical Properties of Place Names

    Directory of Open Access Journals (Sweden)

    Dongyang Hou

    2014-09-01

    Full Text Available Place names are an important ingredient of borderlands situation information and play a significant role in collecting such information from the Internet with focused crawlers. However, current focused crawlers treat a place name in the same way as any other common keyword, with no geographical properties. This may reduce the effectiveness of focused crawlers. To solve the problem, this paper first discusses the importance of place names for focused crawlers in terms of location and spatial relations, and then proposes a two-tuple-based topic representation method to express place names and common keywords, respectively. Afterwards, spatial relations between place names are introduced into the calculation of the relevance of given topics and webpages, which makes the calculation more accurate. On this basis, a focused crawler prototype for borderlands situation information collection is designed and implemented. Crawling speed and F-score are adopted to evaluate its efficiency and effectiveness. Experimental results indicate that the efficiency of our proposed focused crawler is consistent with a polite access interval and can meet the daily demand of borderlands situation information collection. Additionally, the F-score value of our proposed focused crawler increases by around 7%, which means that it is more effective than the traditional best-first focused crawler.

  9. (a,k)-Anonymous Scheme for Privacy-Preserving Data Collection in IoT-based Healthcare Services Systems.

    Science.gov (United States)

    Li, Hongtao; Guo, Feng; Zhang, Wenyin; Wang, Jie; Xing, Jinsheng

    2018-02-14

    The wide use of IoT technologies in healthcare services has raised the intelligence level of medical services. However, it also brings potential privacy threats to data collection. In healthcare services systems, health and medical data that contain private information are often transmitted among networks, and such private information should be protected. Therefore, there is a need for a privacy-preserving data collection (PPDC) scheme to protect clients' (patients') data. We adopt the (a,k)-anonymity model as the privacy protection scheme for data collection and propose a novel anonymity-based PPDC method for healthcare services in this paper. The threat model is analyzed in the client-server-to-user (CS2U) model. On the client side, we utilize the (a,k)-anonymity notion to generate anonymous tuples that can resist possible attacks, and adopt a bottom-up clustering method to create clusters that satisfy a base privacy level of (a1,k1)-anonymity. On the server side, we reduce the communication cost through generalization technology, and compress the (a1,k1)-anonymous data through a UPGMA-based cluster combination method to make the data meet a deeper privacy level of (a2,k2)-anonymity (a1 ≥ a2, k2 ≥ k1). Theoretical analysis and experimental results prove that our scheme is effective in privacy preservation and data quality.
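
    The (a,k)-anonymity condition itself is easy to state in code: every cluster must contain at least k records, and each sensitive value may occupy at most a fraction a of its cluster. A hedged sketch with invented field names and data:

```python
# Check the (a,k)-anonymity condition on clustered records.
from collections import Counter

def satisfies_a_k(clusters, a, k):
    for cluster in clusters:
        if len(cluster) < k:
            return False                          # cluster too small
        counts = Counter(rec["diagnosis"] for rec in cluster)
        if max(counts.values()) / len(cluster) > a:
            return False                          # sensitive value too frequent
    return True

clusters = [
    [{"diagnosis": "flu"}, {"diagnosis": "cold"}, {"diagnosis": "flu"}],
    [{"diagnosis": "cold"}, {"diagnosis": "asthma"}, {"diagnosis": "flu"}],
]
print(satisfies_a_k(clusters, a=0.7, k=3))   # True: sizes >= 3, max ratio 2/3
```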

  10. Network Monitoring as a Streaming Analytics Problem

    KAUST Repository

    Gupta, Arpit

    2016-11-02

    Programmable switches make it easier to perform flexible network monitoring queries at line rate, and scalable stream processors make it possible to fuse data streams to answer more sophisticated queries about the network in real-time. Unfortunately, processing such network monitoring queries at high traffic rates requires both the switches and the stream processors to filter the traffic iteratively and adaptively so as to extract only that traffic that is of interest to the query at hand. Others have studied network monitoring in the context of streaming; yet, previous work has not closed the loop in a way that allows network operators to perform streaming analytics for network monitoring applications at scale. To achieve this objective, Sonata allows operators to express a network monitoring query by considering each packet as a tuple and efficiently partitioning each query between the switches and the stream processor through iterative refinement. Sonata extracts only the traffic that pertains to each query, ensuring that the stream processor can scale to traffic rates of several terabits per second. We show with a simple example query involving DNS reflection attacks and traffic traces from one of the world's largest IXPs that Sonata can capture 95% of all traffic pertaining to the query, while reducing the overall data rate by a factor of about 400 and the number of required counters by four orders of magnitude. Copyright 2016 ACM.
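    As a toy illustration of the packets-as-tuples model (ours; Sonata's actual query API and packet schema differ), the following filters large DNS responses and thresholds per-victim counts, the shape of the DNS reflection query mentioned above:

```python
from collections import Counter

# Toy packet tuples: (src_ip, dst_ip, proto, src_port, size).
# The field layout is our assumption for illustration only.
packets = [
    ("10.0.0.1", "192.0.2.7", "udp", 53, 3000),
    ("10.0.0.2", "192.0.2.7", "udp", 53, 2800),
    ("10.0.0.3", "198.51.100.9", "tcp", 443, 400),
]

def dns_reflection_victims(stream, threshold=2):
    """Filter large DNS responses, count per destination, threshold the count."""
    dns = (p for p in stream if p[2] == "udp" and p[3] == 53 and p[4] > 1000)
    counts = Counter(p[1] for p in dns)               # reduce: packets per victim
    return {ip for ip, n in counts.items() if n >= threshold}

print(dns_reflection_victims(packets))                # {'192.0.2.7'}
```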

  11. Intermediate grouping on remotely sensed data using Gestalt algebra

    Science.gov (United States)

    Michaelsen, Eckart

    2014-10-01

    Human observers often achieve striking recognition performance on remotely sensed data unmatched by machine vision algorithms. This holds even for thermal images (IR) or synthetic aperture radar (SAR). Psychologists refer to these capabilities as Gestalt perceptive skills. Gestalt Algebra is a mathematical structure recently proposed for such laws of perceptual grouping. It gives operations for mirror symmetry, continuation in rows and rotational symmetric patterns. Each of these operations forms an aggregate-Gestalt from a tuple of part-Gestalten. Each Gestalt is attributed with a position, an orientation, a rotational frequency, a scale, and an assessment, respectively. Any Gestalt can be combined with any other Gestalt using any of the three operations. Most often the assessment of the new aggregate-Gestalt will be close to zero. Only if the part-Gestalten fit the desired pattern perfectly will the new aggregate-Gestalt be assessed with a value of one. The algebra is suitable in both directions: it may render an organized symmetric mandala using random numbers, or it may recognize deeply hidden visual relationships between meaningful parts of a picture. For the latter, primitives must be obtained from the image by some key-point detector and a threshold. Intelligent search strategies are required for this search in the combinatorial space of possible Gestalt Algebra terms. As examples, maximally assessed Gestalten found in selected aerial images as well as in IR and SAR images are presented.

  12. Optimal choice of word length when comparing two Markov sequences using a χ²-statistic.

    Science.gov (United States)

    Bai, Xin; Tang, Kujin; Ren, Jie; Waterman, Michael; Sun, Fengzhu

    2017-10-03

    Alignment-free sequence comparison using counts of word patterns (grams, k-tuples) has become an active research topic due to the large amount of sequence data from the new sequencing technologies. Genome sequences are frequently modelled by Markov chains and the likelihood ratio test or the corresponding approximate χ²-statistic has been suggested to compare two sequences. However, it is not known how to best choose the word length k in such studies. We develop an optimal strategy to choose k by maximizing the statistical power of detecting differences between two sequences. Let the orders of the Markov chains for the two sequences be r1 and r2, respectively. We show through both simulations and theoretical studies that the optimal k = max(r1, r2) + 1 for both long sequences and next generation sequencing (NGS) read data. The orders of the Markov chains may be unknown and several methods have been developed to estimate the orders of Markov chains based on both long sequences and NGS reads. We study the power loss of the statistics when the estimated orders are used. It is shown that the power loss is minimal for some of the estimators of the orders of Markov chains. Our studies provide guidelines on choosing the optimal word length for the comparison of Markov sequences.
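    For concreteness, a sketch of the word-count comparison (ours). It uses a plain two-sample chi-square over k-tuple counts rather than the paper's Markov-model likelihood-ratio statistic, so treat it as illustrative only:

```python
from collections import Counter

def kmer_counts(seq, k):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def chi2_two_sample(seq1, seq2, k):
    """Plain two-sample chi-square over k-tuple counts (a simplification of
    the Markov-based likelihood-ratio statistic studied in the paper)."""
    c1, c2 = kmer_counts(seq1, k), kmer_counts(seq2, k)
    n1, n2 = sum(c1.values()), sum(c2.values())
    stat = 0.0
    for w in set(c1) | set(c2):
        pooled = (c1[w] + c2[w]) / (n1 + n2)   # expected frequency under H0
        for c, n in ((c1, n1), (c2, n2)):
            exp = n * pooled
            stat += (c[w] - exp) ** 2 / exp
    return stat

# Per the paper's guideline, choose k = max(r1, r2) + 1 for Markov orders r1, r2.
print(chi2_two_sample("ACGTACGTAC", "ACGTTTGTAC", k=2))
```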

  13. SharkDB: an in-memory column-oriented storage for trajectory analysis

    KAUST Repository

    Zheng, Bolong

    2017-05-05

    The last decade has witnessed the prevalence of sensor and GPS technologies that produce a high volume of trajectory data representing the motion history of moving objects. However, some characteristics of trajectories, such as variable lengths and asynchronous sampling rates, make it difficult to fit them into traditional database systems that are disk-based and tuple-oriented. Motivated by the success of column stores and the recent development of in-memory databases, we explore the potential for boosting the performance of trajectory data processing by designing a novel trajectory storage within main memory. In contrast to most existing trajectory indexing methods that keep consecutive samples of the same trajectory in the same disk page, we partition the database into frames in which the positions of all moving objects at the same time instant are stored together and aligned in main memory. We found this column-wise storage to be surprisingly well suited for in-memory computing, since most frames can be stored in highly compressed form, which is pivotal for increasing memory throughput and reducing CPU-cache misses. The independence between frames also makes them natural working units when parallelizing data processing in a multi-core environment. Lastly, we run a variety of common trajectory queries on both real and synthetic datasets in order to demonstrate the advantages and study the limitations of our proposed storage.
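    A toy sketch of the frame idea (ours; SharkDB's actual layout adds compression and parallel scans): the positions of all moving objects at one time instant sit contiguously, so a time-slice query touches a single frame.

```python
import numpy as np

class FrameStore:
    """Frame/column layout in the spirit of SharkDB (our sketch): one frame
    per timestamp holding the positions of all moving objects at that instant."""

    def __init__(self, num_objects):
        self.num_objects = num_objects
        self.frames = {}                       # timestamp -> (num_objects, 2) array

    def insert(self, t, positions):
        assert len(positions) == self.num_objects
        self.frames[t] = np.asarray(positions, dtype=np.float32)

    def range_query(self, t, xmin, xmax, ymin, ymax):
        """Which objects lie in a rectangle at time t? A single frame scan."""
        p = self.frames[t]
        hit = (p[:, 0] >= xmin) & (p[:, 0] <= xmax) & (p[:, 1] >= ymin) & (p[:, 1] <= ymax)
        return np.nonzero(hit)[0]

store = FrameStore(num_objects=3)
store.insert(0, [(0.0, 0.0), (5.0, 5.0), (9.0, 1.0)])
print(store.range_query(0, 4.0, 10.0, 0.0, 6.0))       # [1 2]
```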

  14. Identifying Key Performance Indicators for Holistic Hospital Management with a Modified DEMATEL Approach.

    Science.gov (United States)

    Si, Sheng-Li; You, Xiao-Yue; Liu, Hu-Chen; Huang, Jia

    2017-08-19

    Performance analysis is an important way for hospitals to achieve higher efficiency and effectiveness in providing services to their customers. The performance of the healthcare system can be measured by many indicators, but it is difficult to improve them simultaneously due to the limited resources. A feasible way is to identify the central and influential indicators to improve healthcare performance in a stepwise manner. In this paper, we propose a hybrid multiple criteria decision making (MCDM) approach to identify key performance indicators (KPIs) for holistic hospital management. First, through integrating evidential reasoning approach and interval 2-tuple linguistic variables, various assessments of performance indicators provided by healthcare experts are modeled. Then, the decision making trial and evaluation laboratory (DEMATEL) technique is adopted to build an interactive network and visualize the causal relationships between the performance indicators. Finally, an empirical case study is provided to demonstrate the proposed approach for improving the efficiency of healthcare management. The results show that "accidents/adverse events", "nosocomial infection", "incidents/errors", "number of operations/procedures" are significant influential indicators. Also, the indicators of "length of stay", "bed occupancy" and "financial measures" play important roles in performance evaluation of the healthcare organization. The proposed decision making approach could be considered as a reference for healthcare administrators to enhance the performance of their healthcare institutions.
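    The DEMATEL step itself is standard and easy to sketch. Below is a minimal version of the classical computation on a crisp direct-influence matrix (our illustration; the paper additionally aggregates expert judgments with the evidential reasoning approach and interval 2-tuple linguistic variables before this step):

```python
import numpy as np

def dematel(D):
    """Classical DEMATEL on a direct-influence matrix D."""
    N = D / max(D.sum(axis=1).max(), D.sum(axis=0).max())   # normalize
    T = N @ np.linalg.inv(np.eye(len(D)) - N)               # total relation matrix
    R, C = T.sum(axis=1), T.sum(axis=0)
    return R + C, R - C        # prominence (centrality), relation (cause/effect)

D = np.array([[0, 3, 2],
              [1, 0, 4],
              [2, 1, 0]], dtype=float)
prominence, relation = dematel(D)
print(prominence, relation)    # high prominence = key indicator; relation > 0 = cause
```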

  15. PRIVACY PRESERVING DATA MINING USING MULTIPLE OBJECTIVE OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    V. Shyamala Susan

    2016-10-01

    Full Text Available Privacy preservation is the most targeted issue in information publication, because sensitive data should not be leaked. For this sake, several privacy-preserving data mining algorithms have been proposed. In this work, feature selection using an evolutionary algorithm and data masking coupled with slicing is treated as a multiple-objective optimisation to preserve privacy. To start with, a Genetic Algorithm (GA) is carried out over the datasets to perceive the sensitive attributes and prioritise the attributes for treatment as per their determined sensitivity level. In the next phase, to distort the data, noise is added to the higher-level sensitive values using the Hybrid Data Transformation (HDT) method. In the following phase, a slicing algorithm groups the correlated attributes together and by this means reduces the dimensionality, employing the Advanced Clustering Algorithm (ACA). With the aim of getting the optimal dimensions of buckets, tuple segregation is accomplished by the Metaheuristic Firefly Algorithm (MFA). The experimental results imply that the proposed technique can preserve confidentiality and that the information utility is also high. The slicing algorithm allows the protection of association and usefulness, which results in decreased data dimensionality and information loss. Performance analysis is carried out over OCC 7 and OCC 15, and our optimization method proves its effectiveness over two totally different datasets by showing 92.98% and 96.92% respectively.

  16. An Observation Task Chain Representation Model for Disaster Process-Oriented Remote Sensing Satellite Sensor Planning: A Flood Water Monitoring Application

    Directory of Open Access Journals (Sweden)

    Chao Yang

    2018-03-01

    Full Text Available An accurate and comprehensive representation of an observation task is a prerequisite in disaster monitoring to achieve reliable sensor observation planning. However, the extant disaster event or task information models do not fully satisfy the observation requirements for the accurate and efficient planning of remote-sensing satellite sensors. By considering the modeling requirements for a disaster observation task, we propose an observation task chain (OTChain) representation model that includes four basic OTChain segments and an eight-tuple observation task metadata description structure. A prototype system, namely OTChainManager, is implemented to provide functions for modeling, managing, querying, and visualizing observation tasks. In the case of flood water monitoring, we use a flood remote-sensing satellite sensor observation task for the experiment. The results show that the proposed OTChain representation model can be used to model process-oriented flood disaster observation tasks. By querying and visualizing the flood observation task instances in the Jinsha River Basin, the proposed model can effectively express observation task processes, represent personalized observation constraints, and plan global remote-sensing satellite sensor observations. Compared with typical observation task information models or engines, the proposed OTChain representation model satisfies the information demands of the OTChain and its processes and supports the development of a long time-series sensor observation scheme.

  17. An efficient heuristic method for active feature acquisition and its application to protein-protein interaction prediction

    Directory of Open Access Journals (Sweden)

    Thahir Mohamed

    2012-11-01

    belief of the classification model induced by the acquisition of the feature under consideration. Compared to random selection of the proteins on which the experiments are performed and of the type of experiment that is performed, the heuristic method reduces the number of experiments to as few as 40%. The most notable characteristic of this method is that it does not require re-training of the classification model on every possible combination of instance, feature and feature-value tuples. For this reason, our method is far less computationally expensive than previous AFA strategies. Conclusions The results show that our heuristic method for AFA creates an optimal training set with far fewer features acquired than random acquisition. This shows the value of active feature acquisition in aiding protein-protein interaction prediction, where feature acquisition is costly. Compared to previous methods, the proposed method reduces computational cost while also achieving a better F-score. The proposed method is valuable as it presents a direction for AFA with far less computational expense by removing, for the first time, the need to train a classifier for every combination of instance, feature and feature-value tuples, which would be impractical for several domains.

  18. Multiple network alignment on quantum computers

    Science.gov (United States)

    Daskin, Anmer; Grama, Ananth; Kais, Sabre

    2014-12-01

    Comparative analyses of graph-structured datasets underlie diverse problems. Examples of these problems include identification of conserved functional components (biochemical interactions) across species, structural similarity of large biomolecules, and recurring patterns of interactions in social networks. A large class of such analysis methods quantifies the topological similarity of nodes across networks. The resulting correspondence of nodes across networks, also called node alignment, can be used to identify invariant subgraphs across the input graphs. Given k graphs as input, alignment algorithms use topological information to assign a similarity score to each k-tuple of nodes, with elements (nodes) drawn from each of the input graphs. Nodes are considered similar if their neighbors are also similar. An alternate, equivalent view of these network alignment algorithms is to consider the Kronecker product of the input graphs and to identify high-ranked nodes in the Kronecker product graph. Conventional methods such as PageRank and HITS (Hypertext-Induced Topic Selection) can be used for this purpose. These methods typically require computation of the principal eigenvector of a suitably modified Kronecker product matrix of the input graphs. We adopt this alternate view of the problem to address the problem of multiple network alignment. Using the phase estimation algorithm, we show that the multiple network alignment problem can be efficiently solved on quantum computers. We characterize the accuracy and performance of our method and show that it can deliver exponential speedups over conventional (non-quantum) methods.
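    The conventional baseline that the paper accelerates can be sketched directly (our illustration, for two input graphs): power iteration on the Kronecker product of the adjacency matrices yields similarity scores for node pairs.

```python
import numpy as np

def kron_alignment_scores(A, B, iters=200):
    """Alignment scores from the principal eigenvector of the Kronecker
    product graph of adjacency matrices A and B (the classical baseline;
    the paper replaces this eigenproblem with quantum phase estimation)."""
    K = np.kron(A, B)
    x = np.ones(K.shape[0]) / K.shape[0]
    for _ in range(iters):
        x = K @ x + x          # shift by I to avoid oscillation on bipartite graphs
        x /= np.linalg.norm(x)
    return x.reshape(A.shape[0], B.shape[0])   # scores[i, j]: node i of A vs node j of B

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)   # a 3-node star
B = A.copy()
print(kron_alignment_scores(A, B).round(3))    # the hub-hub pair scores highest
```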

  19. Effectiveness comparison of partially executed t-way test suites generated by existing strategies

    Science.gov (United States)

    Othman, Rozmie R.; Ahmad, Mohd Zamri Zahir; Ali, Mohd Shaiful Aziz Rashid; Zakaria, Hasneeza Liza; Rahman, Md. Mostafijur

    2015-05-01

    Consuming 40 to 50 percent of software development cost, software testing is one of the most resource-consuming activities in the software development lifecycle. To ensure an acceptable level of quality and reliability of a typical software product, it is desirable to test every possible combination of input data under various configurations. Due to the combinatorial explosion problem, exhaustive testing is practically impossible. Resource constraints, costing factors as well as strict time-to-market deadlines are among the main factors that inhibit such consideration. Earlier work suggests that a sampling strategy (i.e. based on t-way parameter interaction, or t-way testing) can effectively reduce the number of test cases without affecting the fault detection capability. However, for a very large system, even a t-way strategy will produce a large test suite that needs to be executed. In the end, only part of the planned test suite can be executed in order to meet the aforementioned constraints. Hence, test engineers need to measure the effectiveness of a partially executed test suite in order to assess the risk they have to take. Motivated by the abovementioned problem, this paper presents an effectiveness comparison of partially executed t-way test suites generated by existing strategies using the tuples coverage method. With it, test engineers can predict the effectiveness of the testing process if only part of the original test cases is executed.
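    Measuring tuple coverage of a partially executed suite reduces to counting which t-way parameter-value combinations the executed tests hit; a minimal sketch (ours, with illustrative domains):

```python
from itertools import combinations, product

def tway_coverage(executed_tests, domains, t=2):
    """Fraction of t-way parameter-value tuples covered by the executed tests.

    executed_tests : list of full test cases, one value per parameter
    domains        : list of value domains, one per parameter
    """
    param_sets = list(combinations(range(len(domains)), t))
    required = sum(len(list(product(*(domains[p] for p in ps)))) for ps in param_sets)
    covered = {(ps, tuple(test[p] for p in ps))
               for test in executed_tests for ps in param_sets}
    return len(covered) / required

domains = [[0, 1], [0, 1], ["a", "b"]]          # 3 parameters, 2 values each
executed = [(0, 0, "a"), (1, 1, "b"), (0, 1, "b")]
print(f"{tway_coverage(executed, domains, t=2):.2f}")   # 0.67: pairwise coverage achieved
```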

  20. Using Grid for the BABAR Experiment

    International Nuclear Information System (INIS)

    Bozzi, C.

    2005-01-01

    The BaBar experiment has been taking data since 1999. In 2001 the computing group started to evaluate the possibility of evolving toward a distributed computing model in a grid environment. We built a prototype system, based on the European Data Grid (EDG), to submit full-scale analysis and Monte Carlo simulation jobs. Computing elements, storage elements, and worker nodes have been installed at SLAC and at various European sites. A BaBar virtual organization (VO) and a test replica catalog (RC) are maintained in Manchester, U.K., and the experiment uses three EDG testbed resource brokers in the U.K. and in Italy. First analysis tests were performed under the assumption that a standard BaBar software release was available at the grid target sites, using the RC to register information about the executable and the produced n-tuples. Hundreds of analysis jobs accessing either Objectivity or Root data files ran on the grid. We tested Monte Carlo production using a farm of the INFN-grid testbed customized to install an Objectivity database and run BaBar simulation software. First simulation production tests were performed using standard Job Description Language commands, and the output files were written to the closest storage element. A package that can be officially distributed to grid sites not specifically customized for BaBar has been prepared. We are studying the possibility of adding a user-friendly interface to access grid services for BaBar

  1. Are Dimensions of Parenting Differentially Linked to Substance Use Across Caucasian and Asian American College Students?

    Science.gov (United States)

    Luk, Jeremy W; Patock-Peckham, Julie A; King, Kevin M

    2015-01-01

    Parental warmth and autonomy granting are commonly thought of as protective factors against substance use among Caucasians. However, limited research has examined whether associations between parenting dimensions and substance use outcomes are the same or different among Asian Americans. A final analytic sample of 839 college students was used to test whether race (Caucasian vs. Asian American) moderated the relations between parenting dimensions and substance use outcomes across Caucasians and Asian Americans. We utilized the Parental Bonding Instrument (Parker, Tupling, & Brown, 1979) to measure maternal and paternal warmth, encouragement of behavioral freedom, and denial of psychological autonomy. Multivariate regression models controlling for covariates including age, gender, and paternal education indicated four significant parenting by race interactions on alcohol problems and/or marijuana use. Specifically, maternal warmth was inversely associated with both alcohol problems and marijuana use among Caucasians but not among Asian Americans. Both maternal and paternal denial of psychological autonomy were positively associated with alcohol problems among Caucasians but not among Asian Americans. Consistent with emerging cross-cultural research, the associations between parenting dimensions and substance use behaviors observed in Caucasian populations may not be readily generalized to Asian Americans. These findings highlight the importance of considering different parenting dimensions in understanding substance use etiology among Asian Americans. Future research should use longitudinal data to replicate these findings across development and seek to identify other parenting dimensions that may be more relevant for Asian American youth.

  2. A Spatio-Temporal Enhanced Metadata Model for Interdisciplinary Instant Point Observations in Smart Cities

    Directory of Open Access Journals (Sweden)

    Nengcheng Chen

    2017-02-01

    Full Text Available Due to the incomprehensive and inconsistent description of spatial and temporal information for city data observed by sensors in various fields, it is a great challenge to share the massive, multi-source and heterogeneous interdisciplinary instant point observation data resources. In this paper, a spatio-temporal enhanced metadata model for point observation data sharing is proposed. The proposed Data Meta-Model (DMM) focuses on the spatio-temporal characteristics and formulates a ten-tuple information description structure to provide a unified and spatio-temporally enhanced description of the point observation data. To verify the feasibility of point observation data sharing based on DMM, a prototype system was established, and the performance improvement of the Sensor Observation Service (SOS) for the instant access and insertion of point observation data was realized through the proposed MongoSOS, a Not Only SQL (NoSQL) SOS based on the MongoDB database with distributed storage capability. For example, the response time of access and insertion for navigation and positioning data can be kept at the millisecond level. Case studies were conducted, including gas concentration monitoring for gas leak emergency response and smart city public vehicle monitoring based on the BeiDou Navigation Satellite System (BDS), used for recording dynamic observation information. The results demonstrated the versatility and extensibility of the DMM, and the spatio-temporally enhanced sharing of interdisciplinary instant point observations in smart cities.

  3. Resource-efficient generation of linear cluster states by linear optics with postselection

    International Nuclear Information System (INIS)

    Uskov, D B; Alsing, P M; Fanto, M L; Szep, A; Smith, A M; Kaplan, L; Kim, R

    2015-01-01

    We report on theoretical research in photonic cluster-state computing. Finding optimal schemes of generating non-classical photonic states is of critical importance for this field as physically implementable photon–photon entangling operations are currently limited to measurement-assisted stochastic transformations. A critical parameter for assessing the efficiency of such transformations is the success probability of a desired measurement outcome. At present there are several experimental groups that are capable of generating multi-photon cluster states carrying more than eight qubits. Separate photonic qubits or small clusters can be fused into a single cluster state by a probabilistic optical CZ gate conditioned on simultaneous detection of all photons, with 1/9 success probability for each gate. This design mechanically follows the original theoretical scheme of cluster state generation proposed more than a decade ago by Raussendorf, Browne and Briegel. The optimality of the destructive CZ gate in application to linear optical cluster state generation has not been analyzed previously. Our results reveal that this method is far from optimal. Employing numerical optimization we have identified that the maximal success probability of fusing n unentangled dual-rail optical qubits into a linear cluster state is equal to (1/2)^(n−1); an m-tuple of photonic Bell pair states, commonly generated via spontaneous parametric down-conversion, can be fused into a single cluster with the maximal success probability of (1/4)^(m−1). (paper)
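    Putting the abstract's numbers side by side (our arithmetic, assuming independent fusion steps): building an n-qubit linear cluster takes n − 1 fusions, so the CZ-gate route succeeds with probability (1/9)^(n−1) against the optimal (1/2)^(n−1).

```latex
\[
P_{\mathrm{CZ}}(n) = \left(\tfrac{1}{9}\right)^{n-1},
\qquad
P_{\mathrm{opt}}(n) = \left(\tfrac{1}{2}\right)^{n-1};
\quad
\text{e.g. } n = 5:\;
P_{\mathrm{CZ}} = \tfrac{1}{6561} \approx 1.5\times 10^{-4},\;
P_{\mathrm{opt}} = \tfrac{1}{16} = 6.25\times 10^{-2}.
\]
```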

  4. Separating relational from item load effects in paired recognition: temporoparietal and middle frontal gyral activity with increased associates, but not items during encoding and retention.

    Science.gov (United States)

    Phillips, Steven; Niki, Kazuhisa

    2002-10-01

    Working memory is affected by items stored and the relations between them. However, separating these factors has been difficult, because increased items usually accompany increased associations/relations. Hence, some have argued, relational effects are reducible to item effects. We overcome this problem by manipulating index length: the fewest number of item positions at which there is a unique item, or tuple of items (if length >1), for every instance in the relational (memory) set. Longer indexes imply greater similarity (number of shared items) between instances and higher load on encoding processes. Subjects were given lists of study pairs and asked to make a recognition judgement. The number of unique items and index length in the three list conditions were: (1) AB, CD: four/one; (2) AB, CD, EF: six/one; and (3) AB, AD, CB: four/two, respectively. Japanese letters were used in Experiments 1 (kanji-ideograms) and 2 (hiragana-phonograms); numbers in Experiment 3; and shapes generated from Fourier descriptors in Experiment 4. Across all materials, right dominant temporoparietal and middle frontal gyral activity was found with increased index length, but not items during study. In Experiment 5, a longer delay was used to isolate retention effects in the absence of visual stimuli. Increased left hemispheric activity was observed in the precuneus, middle frontal gyrus, and superior temporal gyrus with increased index length for the delay period. These results show that relational load is not reducible to item load.

  5. ADASAGE, ADA Application Development System

    International Nuclear Information System (INIS)

    Taylor, M.; Russell, K.; Stewart, H.

    2001-01-01

    normal database development and use, restructuring and/or rebuilding of database files may be required if the schema for a record is modified (new fields are added, changed, or deleted and new keys selected, etc.), or if the index file for a relation becomes damaged or is destroyed. Program REBUILD performs this task. The rebuilding process is invoked by supplying both an old .DFL file name identifying the schema definition under which the current data was written and a current .DFL file name identifying the new schema definition to which the current database must be changed. REBUILD compares the old and current schema definitions for a relation, adjusts the database files accordingly, and re-tags the database files with the new version so that access with the new schema is allowed. The index file for the relation is then rebuilt, and any unused space within the database files removed. 3 - Restrictions on the complexity of the problem: Maxima for each ADASAGE database are: 4,294,967,000 records (tuples) per record size; 32,000 bytes record (tuple) size; 32,000 fixed-length field (attribute) size; 4,294,967,000 variable-length field (attribute) size; 1,000 relations (flat files); 1,000 joined relations; 1,000 views; 500 fields (attributes) per relation; 500 keyed fields per relation; 6,550 forms (windows); 55 significant digits in a number. The user's operating system or hardware may have more stringent limits than these

  6. Relativistic positioning systems: Numerical simulations

    Science.gov (United States)

    Puchades Colmenero, Neus

    The position of users located on the Earth's surface or near it may be found with the classic positioning systems (CPS). Certain information broadcast by the satellites of global navigation systems, such as GPS and GALILEO, may be used for positioning. The CPS are based on the Newtonian formalism, although relativistic post-Newtonian corrections are made when they are necessary. This thesis contributes to the development of a different positioning approach, which is fully relativistic from the beginning. In the relativistic positioning systems (RPS), the space-time position of any user (ship, spacecraft, and so on) can be calculated with the help of four satellites, which broadcast their proper times by means of codified electromagnetic signals. In this thesis, we have simulated satellite 4-tuples of the GPS and GALILEO constellations. If a user receives the signals from four satellites simultaneously, the emission proper times read, after decoding, are the user's "emission coordinates". In order to find the user's "positioning coordinates" in an appropriate almost inertial reference system, there are two possibilities: (a) the explicit relation between positioning and emission coordinates (broadcast by the satellites) is found analytically, or (b) numerical codes are designed to calculate the positioning coordinates from the emission ones. Method (a) is only viable in simple ideal cases, whereas (b) allows us to consider realistic situations. In this thesis, we have designed numerical codes with the essential aim of studying two appropriate RPS, which may be generalized. Sometimes, there are two real users placed in different positions who receive the same proper times from the same satellites; then, we say that there is bifurcation, and additional data are needed to choose the real user position. In this thesis, bifurcation is studied in detail. We have analyzed in depth two RPS models; in both, it is considered that the satellites move in Schwarzschild space-time

  7. On stochastic geometry modeling of cellular uplink transmission with truncated channel inversion power control

    KAUST Repository

    Elsawy, Hesham

    2014-08-01

    Using stochastic geometry, we develop a tractable uplink modeling paradigm for outage probability and spectral efficiency in both single and multi-tier cellular wireless networks. The analysis accounts for per user equipment (UE) power control as well as the maximum power limitations for UEs. More specifically, for interference mitigation and robust uplink communication, each UE is required to control its transmit power such that the average received signal power at its serving base station (BS) is equal to a certain threshold ρo. Due to the limited transmit power, the UEs employ a truncated channel inversion power control policy with a cutoff threshold of ρo. We show that there exists a transfer point in the uplink system performance that depends on the following tuple: BS intensity λ, maximum transmit power of UEs Pu, and ρo. That is, when Pu is a tight operational constraint with respect to (w.r.t.) λ and ρo, the uplink outage probability and spectral efficiency highly depend on the values of λ and ρo. In this case, there exists an optimal cutoff threshold ρo*, which depends on the system parameters, that minimizes the outage probability. On the other hand, when Pu is not a binding operational constraint w.r.t. λ and ρo, the uplink outage probability and spectral efficiency become independent of λ and ρo. We obtain approximate yet accurate simple expressions for outage probability and spectral efficiency, which reduce to closed forms in some special cases. © 2002-2012 IEEE.

  8. The right-hand side of the Jacobi identity: to be naught or not to be ?

    International Nuclear Information System (INIS)

    Kiselev, Arthemy V

    2016-01-01

    The geometric approach to iterated variations of local functionals - e.g., of the (master-)action functional - resulted in an extension of the deformation quantisation technique to the set-up of Poisson models of field theory. It also allowed of a rigorous proof for the main inter-relations between the Batalin-Vilkovisky (BV) Laplacian Δ and variational Schouten bracket [,]. The ad hoc use of these relations had been a known analytic difficulty in the BV-formalism for quantisation of gauge systems; now achieved, the proof does actually not require the assumption of graded-commutativity. Explained in our previous work, geometry's self-regularisation is rendered by Gel'fand's calculus of singular linear integral operators supported on the diagonal. We now illustrate that analytic technique by inspecting the validity mechanism for the graded Jacobi identity which the variational Schouten bracket does satisfy (whence Δ² = 0, i.e., the BV-Laplacian is a differential acting in the algebra of local functionals). By using one tuple of three variational multi-vectors twice, we contrast the new logic of iterated variations - when the right-hand side of Jacobi's identity vanishes altogether - with the old method: interlacing its steps and stops, it could produce some non-zero representative of the trivial class in the top-degree horizontal cohomology. But we then show at once by an elementary counterexample why, in the frames of the old approach that did not rely on Gel'fand's calculus, the BV-Laplacian failed to be a graded derivation of the variational Schouten bracket. (paper)

  9. The right-hand side of the Jacobi identity: to be naught or not to be ?

    Science.gov (United States)

    Kiselev, Arthemy V.

    2016-01-01

    The geometric approach to iterated variations of local functionals - e.g., of the (master-)action functional - resulted in an extension of the deformation quantisation technique to the set-up of Poisson models of field theory. It also allowed of a rigorous proof for the main inter-relations between the Batalin-Vilkovisky (BV) Laplacian Δ and variational Schouten bracket [,]. The ad hoc use of these relations had been a known analytic difficulty in the BV-formalism for quantisation of gauge systems; now achieved, the proof does actually not require the assumption of graded-commutativity. Explained in our previous work, geometry's self-regularisation is rendered by Gel'fand's calculus of singular linear integral operators supported on the diagonal. We now illustrate that analytic technique by inspecting the validity mechanism for the graded Jacobi identity which the variational Schouten bracket does satisfy (whence Δ² = 0, i.e., the BV-Laplacian is a differential acting in the algebra of local functionals). By using one tuple of three variational multi-vectors twice, we contrast the new logic of iterated variations - when the right-hand side of Jacobi's identity vanishes altogether - with the old method: interlacing its steps and stops, it could produce some non-zero representative of the trivial class in the top-degree horizontal cohomology. But we then show at once by an elementary counterexample why, in the frames of the old approach that did not rely on Gel'fand's calculus, the BV-Laplacian failed to be a graded derivation of the variational Schouten bracket.

  10. Emergence of good conduct, scaling and zipf laws in human behavioral sequences in an online world.

    Directory of Open Access Journals (Sweden)

    Stefan Thurner

    Full Text Available We study behavioral action sequences of players in a massive multiplayer online game. In their virtual life, players use eight basic actions which allow them to interact with each other. These actions are communication, trade, establishing or breaking friendships and enmities, attack, and punishment. We measure the probabilities for these actions conditional on previously taken and received actions and find a dramatic increase of negative behavior immediately after receiving negative actions. Similarly, positive behavior is intensified by receiving positive actions. We observe a tendency towards antipersistence in communication sequences. Classifying actions as positive (good) and negative (bad) allows us to define binary 'world lines' of the lives of individuals. Positive and negative actions are persistent and occur in clusters, indicated by large scaling exponents α ~ 0.87 of the mean square displacement of the world lines. For all eight action types we find strong signs of high levels of repetitiveness, especially for negative actions. We partition behavioral sequences into segments of length n (behavioral 'words' and 'motifs') and study their statistical properties. We find two approximate power laws in the word ranking distribution, one with an exponent of κ ~ -1 for ranks up to 100, and another with a lower exponent for higher ranks. The Shannon n-tuple redundancy yields large values and increases with word length, further underscoring the non-trivial statistical properties of behavioral sequences. On the collective, societal level the time series of particular actions per day can be understood by a simple mean-reverting log-normal model.
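    For concreteness, a sketch of the Shannon n-tuple redundancy under its usual definition (our reading; the paper's normalization may differ):

```python
from collections import Counter
from math import log2

def ntuple_redundancy(seq, n, alphabet_size=8):
    """Shannon n-tuple redundancy R_n = 1 - H_n / (n * log2(alphabet_size)),
    where H_n is the empirical entropy of overlapping n-tuples."""
    tuples = [seq[i:i + n] for i in range(len(seq) - n + 1)]
    counts = Counter(tuples)
    total = len(tuples)
    H_n = -sum(c / total * log2(c / total) for c in counts.values())
    return 1 - H_n / (n * log2(alphabet_size))

# Actions coded as letters from the eight basic types (C = communicate, T = trade, ...)
seq = "CCTACCTACCTA"
print(round(ntuple_redundancy(seq, n=3), 3))   # high redundancy for repetitive behavior
```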

  11. A common real time framework for SuperKEKB and Hyper Suprime-Cam at Subaru telescope

    International Nuclear Information System (INIS)

    Lee, S; Itoh, R; Katayama, N; Furusawa, H; Aihara, H; Mineo, S

    2010-01-01

    Real-time data analysis at next-generation experiments is a challenge because of their enormous data rate and size. The SuperKEKB experiment, the upgraded Belle experiment, requires processing 100 times more data than the current one. Offline-level data analysis in the HLT farm is necessary for efficient data reduction. The real-time processing of huge data volumes is also key for the planned dark energy survey using the Subaru telescope. The main camera for the survey, called Hyper Suprime-Cam, consists of 100 CCDs with 8 megapixels each, and the total data size is expected to become comparable with that of SuperKEKB. Online tuning of measurement parameters through real-time processing is planned, a task that was done empirically in the past. We started a joint development of a real-time framework to be shared by both SuperKEKB and Hyper Suprime-Cam. The parallel processing technique is widely adopted in the framework design to utilize a huge number of network-connected PCs with multi-core CPUs. The parallel processing is performed not only in the trivial event-by-event manner, but also in the pipeline of software modules which are dynamically placed over the distributed computing nodes. The object data flow in the framework is realized by the object serializing technique with object persistency. On-the-fly collection of histograms and N-tuples is supported for run-time monitoring. The detailed design and the development status of the framework are presented.

  12. Is there a computable upper bound for the height of a solution of a Diophantine equation with a unique solution in positive integers?

    Directory of Open Access Journals (Sweden)

    Tyszka Apoloniusz

    2017-03-01

    Full Text Available Let Bn = {xi · xj = xk : i, j, k ∈ {1, . . . , n}} ∪ {xi + 1 = xk : i, k ∈ {1, . . . , n}} denote the system of equations in the variables x1, . . . , xn. For a positive integer n, let ξ(n) denote the smallest positive integer b such that for each system of equations S ⊆ Bn with a unique solution in positive integers x1, . . . , xn, this solution belongs to [1, b]^n. Let g(1) = 1, and let g(n + 1) = 2^(2^g(n)) for every positive integer n. We conjecture that ξ(n) ≤ g(2n) for every positive integer n. We prove: (1) the function ξ : N \ {0} → N \ {0} is computable in the limit; (2) if a function f : N \ {0} → N \ {0} has a single-fold Diophantine representation, then there exists a positive integer m such that f(n) < ξ(n) for every n > m; (3) the conjecture implies that there exists an algorithm which takes as input a Diophantine equation D(x1, . . . , xp) = 0 and returns a positive integer d with the following property: for all positive integers a1, . . . , ap, if the tuple (a1, . . . , ap) solely solves the equation D(x1, . . . , xp) = 0 in positive integers, then a1, . . . , ap ≤ d; (4) the conjecture implies that if a set M ⊆ N has a single-fold Diophantine representation, then M is computable; (5) for every integer n > 9, the inequality ξ(n) < (2^(2^(n−5)) − 1)^(2^(n−5)) + 1 implies that 2^(2^(n−5)) + 1 is composite.
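    To make the definition of ξ concrete, here is a small example of ours (not from the abstract): a system S ⊆ B3 whose unique solution in positive integers witnesses ξ(3) ≥ 4.

```latex
\[
S = \{\, x_1 \cdot x_1 = x_1,\; x_1 + 1 = x_2,\; x_2 \cdot x_2 = x_3 \,\} \subseteq B_3
\]
% In positive integers, x_1 * x_1 = x_1 forces x_1 = 1, hence x_2 = 2 and
% x_3 = 4. The unique solution (1, 2, 4) lies in [1, 4]^3, so xi(3) >= 4.
```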

  13. Methods for semi-automated indexing for high precision information retrieval

    Science.gov (United States)

    Berrios, Daniel C.; Cucina, Russell J.; Fagan, Lawrence M.

    2002-01-01

    OBJECTIVE: To evaluate a new system, ISAID (Internet-based Semi-automated Indexing of Documents), and to generate textbook indexes that are more detailed and more useful to readers. DESIGN: Pilot evaluation: simple, nonrandomized trial comparing ISAID with manual indexing methods. Methods evaluation: randomized, cross-over trial comparing three versions of ISAID and usability survey. PARTICIPANTS: Pilot evaluation: two physicians. Methods evaluation: twelve physicians, each of whom used three different versions of the system for a total of 36 indexing sessions. MEASUREMENTS: Total index term tuples generated per document per minute (TPM), with and without adjustment for concordance with other subjects; inter-indexer consistency; ratings of the usability of the ISAID indexing system. RESULTS: Compared with manual methods, ISAID decreased indexing times greatly. Using three versions of ISAID, inter-indexer consistency ranged from 15% to 65% with a mean of 41%, 31%, and 40% for each of three documents. Subjects using the full version of ISAID were faster (average TPM: 5.6) and had higher rates of concordant index generation. There were substantial learning effects, despite our use of a training/run-in phase. Subjects using the full version of ISAID were much faster by the third indexing session (average TPM: 9.1). There was a statistically significant increase in the three-subject concordant indexing rate using the full version of ISAID during the second indexing session (p < 0.05). SUMMARY: Users of the ISAID indexing system create complex, precise, and accurate indexing for full-text documents much faster than users of manual methods. Furthermore, the natural language processing methods that ISAID uses to suggest indexes contribute substantially to increased indexing speed and accuracy.

  14. Systematic analysis of coding and noncoding DNA sequences using methods of statistical linguistics

    Science.gov (United States)

    Mantegna, R. N.; Buldyrev, S. V.; Goldberger, A. L.; Havlin, S.; Peng, C. K.; Simons, M.; Stanley, H. E.

    1995-01-01

    We compare the statistical properties of coding and noncoding regions in eukaryotic and viral DNA sequences by adapting two tests developed for the analysis of natural languages and symbolic sequences. The data set comprises all 30 sequences of length above 50 000 base pairs in GenBank Release No. 81.0, as well as the recently published sequences of C. elegans chromosome III (2.2 Mbp) and yeast chromosome XI (661 Kbp). We find that for the three chromosomes we studied the statistical properties of noncoding regions appear to be closer to those observed in natural languages than those of coding regions. In particular, (i) an n-tuple Zipf analysis of noncoding regions reveals a regime close to power-law behavior while the coding regions show logarithmic behavior over a wide interval, and (ii) an n-gram entropy measurement shows that the noncoding regions have a lower n-gram entropy (and hence a larger "n-gram redundancy") than the coding regions. In contrast to the three chromosomes, we find that for vertebrates such as primates and rodents and for viral DNA, the difference between the statistical properties of coding and noncoding regions is not pronounced and therefore the results of the analyses of the investigated sequences are less conclusive. After noting the intrinsic limitations of the n-gram redundancy analysis, we also briefly discuss the failure of the zeroth- and first-order Markovian models or simple nucleotide repeats to account fully for these "linguistic" features of DNA. Finally, we emphasize that our results by no means prove the existence of a "language" in noncoding DNA.
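    A small sketch of the n-tuple Zipf test (ours): count overlapping n-tuples, rank them by frequency, and inspect the rank-frequency relation on a log-log scale.

```python
from collections import Counter
from math import log10

def zipf_table(seq, n=6):
    """Rank the n-tuples of a DNA sequence by frequency (the 'n-tuple Zipf
    analysis'); power-law-like behavior shows as a straight line in
    log10(rank) versus log10(frequency)."""
    counts = Counter(seq[i:i + n] for i in range(len(seq) - n + 1))
    ranked = counts.most_common()
    return [(r, w, f, log10(r), log10(f)) for r, (w, f) in enumerate(ranked, start=1)]

seq = "ATGCGATACGATTACAGATTACAGATTACA"   # toy sequence; the real analyses used >50 kbp
for rank, word, freq, log_rank, log_freq in zipf_table(seq, n=3)[:5]:
    print(rank, word, freq)
```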

  15. High sensitive RNA detection by one-step RT-PCR using the genetically engineered variant of DNA polymerase with reverse transcriptase activity from hyperthermophilies.

    Science.gov (United States)

    Okano, Hiroyuki; Baba, Misato; Kawato, Katsuhiro; Hidese, Ryota; Yanagihara, Itaru; Kojima, Kenji; Takita, Teisuke; Fujiwara, Shinsuke; Yasukawa, Kiyoshi

    2018-03-01

    One-step RT-PCR has not been widely used, even though some thermostable DNA polymerases with reverse transcriptase (RT) activity have been developed from bacterial and archaeal polymerases, owing to their low cDNA synthesis activity from RNA. In the present study, we developed highly sensitive one-step RT-PCR using the single variant of a family A DNA polymerase with RT activity, K4pol L329A (L329A), from the hyperthermophilic bacterium Thermotoga petrophila K4, or the 16-tuple variant of a family B DNA polymerase with RT activity, RTX, from the hyperthermophilic archaeon Thermococcus kodakarensis. Optimization of the reaction conditions revealed that the cDNA synthesis and PCR activities of K4pol L329A and RTX were highly affected by the concentrations of MgCl2 and Mn(OCOCH3)2 as well as by those of K4pol L329A or RTX themselves. Under the optimized conditions, 300 copies/μl of target RNA in 10 μl reaction volumes were successfully detected by one-step RT-PCR with K4pol L329A or RTX, which is almost as sensitive as the current RT-PCR approach using a retroviral RT and a thermostable DNA polymerase. Considering that K4pol L329A and RTX are stable even at 90-100°C, our results suggest that one-step RT-PCR with K4pol L329A or RTX is more advantageous than the current approach. Copyright © 2017 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.

  16. A data fusion approach to indications and warnings of terrorist attacks

    Science.gov (United States)

    McDaniel, David; Schaefer, Gregory

    2014-05-01

    Indications and Warning (I&W) of terrorist attacks, particularly IED attacks, requires detection of networks of agents and patterns of behavior. Social Network Analysis tries to detect a network; activity analysis tries to detect anomalous activities. This work builds on both to detect elements of an activity model of terrorist attack activity - the agents, resources, networks, and behaviors. The activity model is expressed as RDF triple statements whose tuple positions are elements or subsets of a formal ontology for activity models. The advantage of a model is that its elements are interdependent, and evidence for or against one will influence the others, so that there is a multiplier effect. The advantage of the formality is that detection can occur hierarchically, that is, at different levels of abstraction. The model matching is expressed as a likelihood ratio between input text and the model triples. The likelihood ratio is designed to be analogous to the track correlation likelihood ratios common in JDL fusion level 1. This required development of a semantic distance metric for positive and null hypotheses as well as for complex objects. The metric uses the Web 1Terabyte database of one- to five-gram frequencies for priors. This size requires the use of big data technologies, so a Hadoop cluster is used in conjunction with OpenNLP natural language and Mahout clustering software. Distributed data fusion MapReduce jobs distribute parts of the data fusion problem to the Hadoop nodes. For the purposes of this initial testing, open source models and text inputs of similar complexity to terrorist events were used as surrogates for the intended counter-terrorist application.

  17. Quasirandom geometric networks from low-discrepancy sequences

    Science.gov (United States)

    Estrada, Ernesto

    2017-08-01

    We define quasirandom geometric networks using low-discrepancy sequences, such as Halton, Sobol, and Niederreiter. The networks are built in d dimensions by considering the d-tuples of digits generated by these sequences as the coordinates of the vertices of the networks in the d-dimensional unit hypercube I^d. Then, two vertices are connected by an edge if they are at a distance smaller than a connection radius. We investigate computationally 11 network-theoretic properties of two-dimensional quasirandom networks and compare them with analogous random geometric networks. We also study their degree distributions and their spectral density distributions. We conclude from this intensive computational study that, in terms of the uniformity of the distribution of the vertices in the unit square, the quasirandom networks look more random than the random geometric networks. We include an analysis of potential strategies for generating higher-dimensional quasirandom networks, where it is known that some of the low-discrepancy sequences are highly correlated. In this respect, we conclude that up to dimension 20, the use of scrambling, skipping and leaping strategies generates quasirandom networks with the desired properties of uniformity. Finally, we consider a diffusive process taking place on the nodes and edges of the quasirandom and random geometric graphs. We show that the diffusion time is shorter in the quasirandom graphs as a consequence of their larger structural homogeneity. In the random geometric graphs the diffusion produces clusters of concentration that make the process slower. Such clusters are a direct consequence of the heterogeneous and irregular distribution of the nodes in the unit square on which the generation of random geometric graphs is based.
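    A compact sketch of the construction for d = 2 (ours): take Halton points in bases 2 and 3 as vertex coordinates in the unit square and connect vertices closer than the connection radius.

```python
import itertools
import math

def halton(i, base):
    """i-th element of the van der Corput sequence in the given base."""
    f, x = 1.0, 0.0
    while i > 0:
        f /= base
        x += f * (i % base)
        i //= base
    return x

def quasirandom_geometric_graph(n, r):
    """2-D quasirandom geometric network: Halton points (bases 2, 3) in the
    unit square, edges between vertices closer than the connection radius r."""
    pts = [(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)]
    edges = [(u, v) for u, v in itertools.combinations(range(n), 2)
             if math.dist(pts[u], pts[v]) < r]
    return pts, edges

pts, edges = quasirandom_geometric_graph(n=100, r=0.15)
print(len(edges), "edges;", 2 * len(edges) / 100, "average degree")
```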

  18. Database structure for plasma modeling programs

    International Nuclear Information System (INIS)

    Dufresne, M.; Silvester, P.P.

    1993-01-01

    Continuum plasma models often use a finite element (FE) formulation. Another approach is simulation models based on particle-in-cell (PIC) formulation. The model equations generally include four nonlinear differential equations specifying the plasma parameters. In simulation a large number of equations must be integrated iteratively to determine the plasma evolution from an initial state. The complexity of the resulting programs is a combination of the physics involved and the numerical method used. The data structure requirements of plasma programs are stated by defining suitable abstract data types. These abstractions are then reduced to data structures and a group of associated algorithms. These are implemented in an object oriented language (C++) as object classes. Base classes encapsulate data management into a group of common functions such as input-output management, instance variable updating and selection of objects by Boolean operations on their instance variables. Operations are thereby isolated from specific element types and uniformity of treatment is guaranteed. Creation of the data structures and associated functions for a particular plasma model is reduced merely to defining the finite element matrices for each equation, or the equations of motion for PIC models. Changes in numerical method or equation alterations are readily accommodated through the mechanism of inheritance, without modification of the data management software. The central data type is an n-relation implemented as a tuple of variable internal structure. Any finite element program may be described in terms of five relational tables: nodes, boundary conditions, sources, material/particle descriptions, and elements. Equivalently, plasma simulation programs may be described using four relational tables: cells, boundary conditions, sources, and particle descriptions

  19. Multi-material decomposition of spectral CT images

    Science.gov (United States)

    Mendonça, Paulo R. S.; Bhotika, Rahul; Maddah, Mahnaz; Thomsen, Brian; Dutta, Sandeep; Licato, Paul E.; Joshi, Mukta C.

    2010-04-01

    Spectral Computed Tomography (Spectral CT), and in particular fast kVp switching dual-energy computed tomography, is an imaging modality that extends the capabilities of conventional computed tomography (CT). Spectral CT enables the estimation of the full linear attenuation curve of the imaged subject at each voxel in the CT volume, instead of a scalar image in Hounsfield units. Because the space of linear attenuation curves in the energy ranges of medical applications can be accurately described through a two-dimensional manifold, this decomposition procedure would be, in principle, limited to two materials. This paper describes an algorithm that overcomes this limitation, allowing for the estimation of N-tuples of material-decomposed images. The algorithm works by assuming that the mixing of substances and tissue types in the human body has the physicochemical properties of an ideal solution, which yields a model for the density of the imaged material mix. Under this model the mass attenuation curve of each voxel in the image can be estimated, immediately resulting in a material-decomposed image triplet. Decomposition into an arbitrary number of pre-selected materials can be achieved by automatically selecting adequate triplets from an application-specific material library. The decomposition is expressed in terms of the volume fractions of each constituent material in the mix; this provides for a straightforward, physically meaningful interpretation of the data. One important application of this technique is in the digital removal of contrast agent from a dual-energy exam, producing a virtual nonenhanced image, as well as in the quantification of the concentration of contrast observed in a targeted region, thus providing an accurate measure of tissue perfusion.
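    The per-voxel decomposition step can be sketched as a small constrained linear solve (ours): with a two-energy measurement and a three-material library, the ideal-solution assumption (volume fractions sum to one) closes the system. The attenuation numbers below are made up for illustration; a real library triplet is calibrated per application.

```python
import numpy as np

# Columns: linear attenuation of three library materials at the two energies.
A = np.array([[0.30, 0.20, 0.50],    # low-kVp attenuation of (water, fat, iodine mix)
              [0.25, 0.18, 0.35]])   # high-kVp attenuation
measured = np.array([0.32, 0.26])    # this voxel's attenuation pair

# Augment with the ideal-solution constraint: volume fractions sum to 1.
M = np.vstack([A, np.ones(3)])
b = np.append(measured, 1.0)
fractions, *_ = np.linalg.lstsq(M, b, rcond=None)
print(fractions.round(3), fractions.sum().round(3))   # [0.9 0. 0.1] 1.0
```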

  20. Impact of spike train autostructure on probability distribution of joint spike events.

    Science.gov (United States)

    Pipa, Gordon; Grün, Sonja; van Vreeswijk, Carl

    2013-05-01

    The discussion of whether temporally coordinated spiking activity really exists and whether it is relevant has been heated over the past few years. To investigate this issue, several approaches have been taken to determine whether synchronized events occur significantly above chance, that is, whether they occur more often than expected if the neurons fire independently. Most investigations ignore or destroy the autostructure of the spiking activity of individual cells or assume Poissonian spiking as a model. Such methods that ignore the autostructure can significantly bias the coincidence statistics. Here, we study the influence of the autostructure on the probability distribution of coincident spiking events between tuples of mutually independent non-Poisson renewal processes. In particular, we consider two types of renewal processes that were suggested as appropriate models of experimental spike trains: a gamma and a log-normal process. For a gamma process, we characterize the shape of the distribution analytically with the Fano factor (FFc). In addition, we perform Monte Carlo estimations to derive the full shape of the distribution and the probability for false positives if a different process type is assumed than was actually present. We also determine how manipulations of such spike trains, here dithering as used for the generation of surrogate data, change the distribution of coincident events and influence the significance estimation. We find, first, that the width of the coincidence count distribution and its FFc depend critically and in a nontrivial way on the detailed properties of the structure of the spike trains, as characterized by the coefficient of variation CV. Second, the dependence of the FFc on the CV is complex and mostly nonmonotonic. Third, spike dithering, even if as small as a fraction of the interspike interval, can falsify the inference on coordinated firing.
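    A minimal Monte Carlo sketch in the spirit of the paper's estimations (ours; the paper's coincidence definition and parameters may differ): simulate independent gamma renewal trains with a given CV, bin them, and look at the distribution of joint spike events.

```python
import numpy as np

rng = np.random.default_rng(1)

def gamma_spike_train(rate, cv, duration):
    """Renewal process with gamma ISIs: shape k = 1/CV^2, mean ISI = 1/rate."""
    shape = 1.0 / cv**2
    isis = rng.gamma(shape, (1.0 / rate) / shape, size=int(rate * duration * 2))
    times = np.cumsum(isis)
    return times[times < duration]

def coincidence_counts(n_trials=1000, rate=20.0, cv=0.5, duration=1.0, binwidth=0.005):
    """Per-trial counts of joint spike events of two independent trains."""
    edges = np.arange(0.0, duration + binwidth, binwidth)
    counts = np.empty(n_trials)
    for i in range(n_trials):
        b1, _ = np.histogram(gamma_spike_train(rate, cv, duration), edges)
        b2, _ = np.histogram(gamma_spike_train(rate, cv, duration), edges)
        counts[i] = np.sum((b1 > 0) & (b2 > 0))   # bins where both trains fired
    return counts

c = coincidence_counts()
print(c.mean(), c.var() / c.mean())   # empirical Fano factor of the coincidence count
```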

  1. Magnetic MIMO Signal Processing and Optimization for Wireless Power Transfer

    Science.gov (United States)

    Yang, Gang; Moghadam, Mohammad R. Vedady; Zhang, Rui

    2017-06-01

    In magnetic resonant coupling (MRC) enabled multiple-input multiple-output (MIMO) wireless power transfer (WPT) systems, multiple transmitters (TXs), each with a single coil, are used to enhance the efficiency of simultaneous power transfer to multiple single-coil receivers (RXs) by constructively combining their induced magnetic fields at the RXs, a technique termed "magnetic beamforming". In this paper, we study the optimal magnetic beamforming design in a multi-user MIMO MRC-WPT system. We introduce the multi-user power region that constitutes all the achievable power tuples for all RXs, subject to the given total power constraint over all TXs as well as their individual peak voltage and current constraints. We characterize each boundary point of the power region by maximizing the sum power deliverable to all RXs subject to their minimum harvested power constraints. For the special case without the TX peak voltage and current constraints, we derive the optimal TX current allocation in closed form for the single-RX setup as well as that for the multi-RX setup. In general, the problem is a non-convex quadratically constrained quadratic program (QCQP), which is difficult to solve. For the case of a single RX, we show that the semidefinite relaxation (SDR) of the problem is tight. For the general case with multiple RXs, based on SDR we obtain two approximate solutions by applying time-sharing and randomization, respectively. Moreover, for practical implementation of magnetic beamforming, we propose a novel signal processing method to estimate the magnetic MIMO channel due to the mutual inductances between TXs and RXs. Numerical results show that our proposed magnetic channel estimation and adaptive beamforming schemes are practically effective, and can significantly improve the power transfer efficiency and multi-user performance trade-off in MIMO MRC-WPT systems.

  2. Forecasting Japan's Physician Shortage in 2035 as the First Full-Fledged Aged Society

    Science.gov (United States)

    Yamaguchi, Rui; Matsumura, Tomoko; Murashige, Naoko; Kodama, Yuko; Minayo, Satoru; Imai, Kohzoh; Kami, Masahiro

    2012-01-01

    Introduction: Japan is rapidly becoming a full-fledged aged society, and physician shortage is a significant concern. The Japanese government has increased the number of medical school enrollments since 2008, but some researchers warn that this increase could lead to a physician surplus in the future. It is unknown how many physicians will be required to accommodate future healthcare needs. Materials and Methods: We simulated changes in the age/sex composition of the population, fatalities (the number of fatalities over five consecutive years), and the number of physicians from 2010 to 2035. Two indicators were defined: fatalities per physician and fatalities per physician working hour, based on data on the working hours of physicians for each tuple of sex and age group. We estimated the necessary number of physicians in 2035 and the number of new physicians needed to maintain the 2010 indicator levels. Results: The number of physicians per 1,000 population is predicted to rise from 2.00 in 2010 to 3.14 in 2035. The number of physicians aged 60 years or older is expected to increase from 55,375 (20% of physicians) to 141,711 (36%). In 2010 and 2035, fatalities per physician were 23.1 and 24.0 for the total population, and 13.9 and 19.2 for those aged 75 years or older, respectively. Fatalities per physician working hour are predicted to rise from 0.128 to 0.138. If working hours are limited to 48 hours per week in 2035, the number of fatalities per physician working hour is expected to be 0.196, and the number of new physicians must be increased by 53% over the current pace. Discussion: The number of physicians per population continues to rise, but the estimated supply will not fulfill the demand for healthcare in the aging society. Strategies to increase the number of physicians and improve working conditions are urgently needed. PMID:23233868

  3. Spin-adapted open-shell random phase approximation and time-dependent density functional theory. I. Theory.

    Science.gov (United States)

    Li, Zhendong; Liu, Wenjian

    2010-08-14

    The spin-adaptation of single-reference quantum chemical methods for excited states of open-shell systems has been nontrivial. The primary reason is that the configuration space, generated by a truncated rank of excitations from only one component of a reference multiplet, is spin-incomplete. Those "missing" configurations are of higher ranks and can, in principle, be recaptured by a particular class of excitation operators. However, the resulting formalisms are then quite involved and there are situations [e.g., time-dependent density functional theory (TD-DFT) under the adiabatic approximation] that prevent one from doing so. To solve this issue, we propose here a tensor-coupling scheme that invokes all the components of a reference multiplet (i.e., a tensor reference) rather than increases the excitation ranks. A minimal spin-adapted n-tuply excited configuration space can readily be constructed by tensor products between the n-tuple tensor excitation operators and the chosen tensor reference. Further combined with the tensor equation-of-motion formalism, very compact expressions for excitation energies can be obtained. As a first application of this general idea, a spin-adapted open-shell random phase approximation is first developed. The so-called "translation rule" is then adopted to formulate a spin-adapted, restricted open-shell Kohn-Sham (ROKS)-based TD-DFT (ROKS-TD-DFT). Here, a particular symmetry structure has to be imposed on the exchange-correlation kernel. While the standard ROKS-TD-DFT can access only excited states due to singlet-coupled single excitations, i.e., only some of the singly excited states of the same spin (S(i)) as the reference, the new scheme can capture all the excited states of spin S(i)-1, S(i), or S(i)+1 due to both singlet- and triplet-coupled single excitations. The actual implementation and computation are very much like the (spin-contaminated) unrestricted Kohn-Sham-based TD-DFT. It is also shown that spin-contaminated spin

  4. Geospatial Image Stream Processing: Models, techniques, and applications in remote sensing change detection

    Science.gov (United States)

    Rueda-Velasquez, Carlos Alberto

    Detection of changes in environmental phenomena using remotely sensed data is a major requirement in the Earth sciences, especially in natural disaster related scenarios where real-time detection plays a crucial role in the saving of human lives and the preservation of natural resources. Although various approaches formulated to model multidimensional data can in principle be applied to the inherent complexity of remotely sensed geospatial data, there are still challenging peculiarities that demand a precise characterization in the context of change detection, particularly in scenarios of fast changes. In the same vein, geospatial image streams do not fit appropriately in the standard Data Stream Management System (DSMS) approach because these systems mainly deal with tuple-based streams. Recognizing the necessity for a systematic effort to address the above issues, the work presented in this thesis is a concrete step toward the foundation and construction of an integrated Geospatial Image Stream Processing framework, GISP. First, we present a data and metadata model for remotely sensed image streams. We introduce a precise characterization of images and image streams in the context of remotely sensed geospatial data. On this foundation, we define spatially-aware temporal operators with a consistent semantics for change analysis tasks. We address the change detection problem in settings where multiple image stream sources are available, and thus we introduce an architectural design for the processing of geospatial image streams from multiple sources. With the aim of targeting collaborative scientific environments, we construct a realization of our architecture based on Kepler, a robust and widely used scientific workflow management system, as the underlying computational support; and open data and Web interface standards, as a means to facilitate the interoperability of GISP instances with other processing infrastructures and client applications. We demonstrate our

  5. Testing biological hypotheses with embodied robots: adaptations, accidents, and by-products in the evolution of vertebrates

    Directory of Open Access Journals (Sweden)

    Sonia F Roberts

    2014-11-01

    Evolutionary robotics allows biologists to test hypotheses about extinct animals. We modeled some of the first vertebrates, jawless fishes, in order to study the evolution of the trait after which vertebrates are named: vertebrae. We tested the hypothesis that vertebrae are an adaptation for enhanced feeding and fleeing performance. We created a population of autonomous embodied robots, Preyro, in which the number of vertebrae, N, was free to evolve. In addition, two other traits, the span of the caudal fin, b, and the predator detection threshold, ζ, a proxy for the lateral line sensory system, were also allowed to evolve. These three traits were chosen because they evolved early in vertebrates, are all potentially important in feeding and fleeing, and vary in form among species. Preyro took on individual identities in a given generation as defined by the population's six diploid genotypes, Gi. Each Gi was a 3-tuple, with each element an integer specifying N, b, and ζ. The small size of the population allowed genetic drift to operate in concert with random mutation and mating; the presence of these mechanisms of chance provided an opportunity for N to evolve by accident. The presence of three evolvable traits provided an opportunity for direct selection on b and/or ζ to evolve N as a by-product through linked trait correlation. In selection trials, different Gi embodied in Preyro attempted to feed at a light source and then flee to avoid a predator robot in pursuit. The fitness of each Gi was calculated from five different types of performance: speed, acceleration, distance to the light, distance to the predator, and the number of predator escapes initiated. In each generation, we measured the selection differential, the selection gradient, the strength of chance, and the indirect correlation selection gradient. These metrics allowed us to understand the relative contributions of the three mechanisms: direct selection, chance, and indirect
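
    The evolutionary loop over 3-tuple genotypes can be sketched schematically. In the Python toy below, the trait ranges, truncation selection, and the fitness proxy are all invented assumptions; in the actual study, fitness came from embodied robot trials, not from a formula.

```python
import random

random.seed(42)

# Trait ranges and the fitness proxy below are invented; in the study,
# fitness came from robot feeding/fleeing trials, not from a formula.
KEYS = ("N", "b", "zeta")
BOUNDS = {"N": (0, 12), "b": (1, 10), "zeta": (1, 10)}

def random_genotype():
    return tuple(random.randint(*BOUNDS[k]) for k in KEYS)

def mutate(g, p=0.1):
    """Random +/-1 mutation of each integer element, kept within bounds."""
    return tuple(
        min(max(v + random.choice((-1, 1)), BOUNDS[k][0]), BOUNDS[k][1])
        if random.random() < p else v
        for v, k in zip(g, KEYS))

def fitness(g):
    n, b, zeta = g
    # toy proxy: vertebrae and fin span aid fleeing; noise stands in for trials
    return 0.5 * n + 0.3 * b - 0.2 * zeta + random.gauss(0, 0.5)

population = [random_genotype() for _ in range(6)]   # six genotypes, as above
for generation in range(20):
    parents = sorted(population, key=fitness, reverse=True)[:3]
    population = [mutate(random.choice(parents)) for _ in range(6)]
print("final population:", population)
```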

  6. Françafrique and Ethics in International Relations

    Directory of Open Access Journals (Sweden)

    Vasily Rudolfovich Filippov

    2017-12-01

    The article discusses the corruption ties between the political elites of France and a number of sovereign African states that were part of the French colonial empire until the middle of the last century. It analyzes the reasons that prompted the first President of the Fifth Republic, General Charles de Gaulle, to begin building a special military-political phenomenon, later named "Françafrique". The nature of this phenomenon is revealed as a special combination of latent lobbying networks of military-political and economic influence of the Elysee Palace in the countries of the Black Continent, on the one hand, and as a specific mechanism for implementing the French version of neocolonialism in political practice, on the other. The author argues that the threat of losing all of France's military-strategic, political and, in particular, economic privileges in the newly independent African countries prompted the Elysee Palace to resort to semi-legal, illegal, and often criminal methods of defending French interests in the region. In practice, this meant the construction of a system of integral dependence of the formally independent countries of Tropical Africa on the former metropole, and the formation of a tuple of client states with corrupt power elites. The latter were designed to provide the French energy giants with virtually unlimited access to strategic natural resources and control over the political parties of these countries. The main emphasis in this French model of neocolonialism was placed on the wholesale bribery of the emerging political elites of African countries. When the direct bribery of African politicians proved impossible or insufficient, the French special services resorted to such methods of maintaining control as blackmail, political assassinations, or direct military aggression. Particular attention is paid to the problem of the influence of corrupt African leaders, in their turn

  7. Artificial intelligence in a technological production system of the set quality

    Directory of Open Access Journals (Sweden)

    V. I. Karpov

    2017-01-01

    This article considers an expert system (ES) as a subsystem of the management information system for the production of sausage products of a given quality. The typical structure of an automated expert system is adapted to the set of interrelated operations of the technological process for producing cooked sausages. The development of the two main blocks of this expert system is described: a database and a knowledge base, which together form the information space. The ES operates as a sequence of steps, at each of which a rule is selected from the database and applied to the current contents of the working set. The cycle ends when the target claim is confirmed or denied. In our case, the system is designed as a forward-chaining system, which searches the known facts for a conclusion that follows from them. If such a conclusion is found, it is entered into working memory. The knowledge base of the expert system is created as a set of separate entities. This set of entities allows one to generate the objects of study, the rules they must conform to, and recommendations for meeting those rules. A set of such entities with their attributes and relationships can be represented as a set of tuples. To implement this approach, an automated expert system for controlling the technological process of meat and sausage production was developed: the program complex (PC) "MulTimit Expert". The effectiveness of the developed expert system in controlling the technology of sausage products of a given quality is illustrated by an example of identifying technology defects in the formulation of the cooked sausage "Capital", which contains large amounts of fatty raw materials. Overall, the results of the research showed that the recommendations of the developed expert system make it possible to improve the quality of the ready-to-eat meat product and increase the water-retention coefficient characterizing the moisture retention capacity of the
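
    The rule cycle described above is ordinary forward chaining, which can be sketched in a few lines of Python. The facts, rules, and goal below are invented placeholders, not entries from the MulTimit Expert knowledge base.

```python
# Each rule is a (premises, conclusion) tuple; the cycle fires rules against
# the working set until the target claim is confirmed or no rule applies.
rules = [
    ({"high_fat_raw_material", "low_water_retention"}, "add_phosphate_blend"),
    ({"add_phosphate_blend"}, "recheck_moisture_capacity"),
]
working_set = {"high_fat_raw_material", "low_water_retention"}
goal = "recheck_moisture_capacity"

changed = True
while changed and goal not in working_set:
    changed = False
    for premises, conclusion in rules:
        # forward chaining: fire when all premises hold and the conclusion is new
        if premises <= working_set and conclusion not in working_set:
            working_set.add(conclusion)
            changed = True

print(goal in working_set, working_set)
```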

  8. Using Xrootd to Federate Regional Storage

    International Nuclear Information System (INIS)

    Bauerdick, L; Benjamin, D; Bloom, K; Bockelman, B; Bradley, D; Dasu, S; Ernst, M; Ito, H; Rind, O; Gardner, R; Vukotic, I; Hanushevsky, A; Lesny, D; McGuigan, P; McKee, S; Severini, H; Sfiligoi, I; Tadel, M; Würthwein, F; Williams, S

    2012-01-01

    While the LHC data movement systems have demonstrated the ability to move data at the necessary throughput, we have identified two weaknesses: the latency for physicists to access data and the complexity of the tools involved. To address these, both ATLAS and CMS have begun to federate regional storage systems using Xrootd. Xrootd, referring to a protocol and implementation, allows us to provide data access to all disk-resident data from a single virtual endpoint. This “redirector” discovers the actual location of the data and redirects the client to the appropriate site. The approach is particularly advantageous since typically the redirection requires much less than 500 milliseconds and the Xrootd client is conveniently built into LHC physicists’ analysis tools. Currently, there are three regional storage federations - a US ATLAS region, a European CMS region, and a US CMS region. The US ATLAS and US CMS regions include their respective Tier 1, Tier 2 and some Tier 3 facilities; a large percentage of experimental data is available via the federation. Additionally, US ATLAS has begun studying low-latency regional federations of close-by sites. From the base idea of federating storage behind an endpoint, the implementations and use cases diverge. The CMS software framework is capable of efficiently processing data over high-latency links, so using the remote site directly is comparable to accessing local data. The ATLAS processing model allows a broad spectrum of user applications with varying degrees of performance with regard to latency; a particular focus has been optimizing n-tuple analysis. Both VOs use GSI security. ATLAS has developed a mapping of VOMS roles to specific file system authorizations, while CMS has developed callouts to the site's mapping service. Each federation presents a global namespace to users. For ATLAS, the global-to-local mapping is based on a heuristic-based lookup from the site's local file catalog, while CMS does the mapping

  10. The Role of Discrete Global Grid Systems in the Global Statistical Geospatial Framework

    Science.gov (United States)

    Purss, M. B. J.; Peterson, P.; Minchin, S. A.; Bermudez, L. E.

    2016-12-01

    The United Nations Committee of Experts on Global Geospatial Information Management (UN-GGIM) has proposed the development of a Global Statistical Geospatial Framework (GSGF) as a mechanism for the establishment of common analytical systems that enable the integration of statistical and geospatial information. Conventional coordinate reference systems address the globe with a continuous field of points suitable for repeatable navigation and analytical geometry. While this continuous field is represented on a computer in a digitized and discrete fashion by tuples of fixed-precision floating point values, it is a non-trivial exercise to relate point observations spatially referenced in this way to areal coverages on the surface of the Earth. The GSGF states the need to move to gridded data delivery and the importance of using common geographies and geocoding. The challenges associated with meeting these goals are not new, and there has been a significant effort within the geospatial community over many years to develop nested gridding standards to tackle these issues. These efforts have recently culminated in the development of a Discrete Global Grid Systems (DGGS) standard under the auspices of the Open Geospatial Consortium (OGC). DGGS provide a fixed areal-based geospatial reference frame for the persistent location of measured Earth observations, feature interpretations, and modelled predictions. DGGS address the entire planet by partitioning it into a discrete hierarchical tessellation of progressively finer resolution cells, which are referenced by a unique index that facilitates rapid computation, query and analysis. The geometry and location of the cell is the principal aspect of a DGGS. Data integration, decomposition, and aggregation are optimised in the DGGS hierarchical structure and can be exploited for efficient multi-source data processing, storage, discovery, transmission, visualization, computation, analysis, and modelling. During
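
    Hierarchical cell indexing is the core mechanism, and a toy sketch conveys it. The Python below uses a plain rectangular lat/lon quadtree purely for illustration (an assumption; OGC-conformant DGGS use equal-area tessellations of a polyhedron), but it shows the key property that a coarser parent cell's index is a prefix of every finer descendant's index, which is what makes aggregation and query fast.

```python
def cell_index(lat, lon, resolution):
    """Toy hierarchical index: bisect a lat/lon box `resolution` times.
    Standards-conformant DGGS use equal-area polyhedral tessellations;
    this rectangular quadtree only illustrates hierarchical addressing."""
    digits = []
    lat_lo, lat_hi, lon_lo, lon_hi = -90.0, 90.0, -180.0, 180.0
    for _ in range(resolution):
        lat_mid = (lat_lo + lat_hi) / 2
        lon_mid = (lon_lo + lon_hi) / 2
        digits.append(str((lat >= lat_mid) * 2 + (lon >= lon_mid)))
        lat_lo, lat_hi = (lat_mid, lat_hi) if lat >= lat_mid else (lat_lo, lat_mid)
        lon_lo, lon_hi = (lon_mid, lon_hi) if lon >= lon_mid else (lon_lo, lon_mid)
    return "".join(digits)

# coarser cells are prefixes of finer ones, so aggregation is a prefix match
print(cell_index(52.52, 13.40, 8))   # "32003021"
print(cell_index(52.52, 13.40, 4))   # "3200", the resolution-4 ancestor
```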

  11. Footprint Representation of Planetary Remote Sensing Data

    Science.gov (United States)

    Walter, S. H. G.; Gasselt, S. V.; Michael, G.; Neukum, G.

    The geometric outline of remote sensing image data, the so-called footprint, can be represented as a number of coordinate tuples. These polygons are associated with corresponding attribute information such as orbit name, ground and image resolution, solar longitude, and illumination conditions, forming a powerful basis for the classification of planetary experiment data. Speed, handling, and extended capabilities are the reasons for using geodatabases to store and access these data types. Techniques for such a spatial database of footprint data are demonstrated using the Relational Database Management System (RDBMS) PostgreSQL, spatially enabled by the PostGIS extension. As examples, footprints of the HRSC and OMEGA instruments, both onboard ESA's Mars Express orbiter, are generated and connected to attribute information. The aim is to provide high-resolution footprints of the OMEGA instrument to the science community for the first time and make them available for web-based mapping applications like the "Planetary Interactive GIS-on-the-Web Analyzable Database" (PIGWAD) produced by the USGS. Map overlays with HRSC or other instruments like MOC and THEMIS (footprint maps are already available for these instruments and can be integrated into the database) allow on-the-fly intersection and comparison as well as extended statistics of the data. Footprint polygons are generated one by one using standard software provided by the instrument teams. Attribute data is calculated and stored together with the geometric information. In the case of HRSC, the coordinates of the footprints are already available in the VICAR label of each image file. Using the VICAR RTL and PostgreSQL's libpq C library, they are loaded into the database using the Well-Known Text (WKT) notation of the Open Geospatial Consortium, Inc. (OGC). For the OMEGA instrument, image data is read using IDL routines developed and distributed by the OMEGA team. Image outlines are exported together with relevant attribute
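
    The load-and-intersect pattern looks roughly as follows. This is a hedged Python/psycopg2 sketch rather than the authors' libpq/VICAR tooling; the table name, columns, connection string, and sample values are invented, while ST_GeomFromText and the WKT polygon notation are standard PostGIS/OGC.

```python
import psycopg2  # assumes a PostGIS-enabled PostgreSQL server is reachable

conn = psycopg2.connect("dbname=planetary user=postgres")  # invented DSN
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS footprints (
        orbit_name      text,
        ground_res_m    real,
        solar_longitude real,
        outline         geometry(Polygon, 4326)
    )""")

# one footprint polygon in OGC Well-Known Text, with invented attributes
wkt = "POLYGON((130.1 -4.2, 131.0 -4.2, 131.0 -3.5, 130.1 -3.5, 130.1 -4.2))"
cur.execute(
    "INSERT INTO footprints VALUES (%s, %s, %s, ST_GeomFromText(%s, 4326))",
    ("h0037_0000", 12.5, 334.8, wkt))

# on-the-fly intersection of coverage, e.g. for HRSC/OMEGA map overlays
cur.execute("""
    SELECT a.orbit_name, b.orbit_name
    FROM footprints a JOIN footprints b
      ON a.orbit_name < b.orbit_name AND ST_Intersects(a.outline, b.outline)""")
print(cur.fetchall())
conn.commit()
conn.close()
```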

  12. Cluster algebras in mathematical physics

    International Nuclear Information System (INIS)

    Francesco, Philippe Di; Gekhtman, Michael; Kuniba, Atsuo; Yamazaki, Masahito

    2014-01-01

    This special issue of Journal of Physics A: Mathematical and Theoretical contains reviews and original research articles on cluster algebras and their applications to mathematical physics. Cluster algebras were introduced by S Fomin and A Zelevinsky around 2000 as a tool for studying total positivity and dual canonical bases in Lie theory. Since then the theory has found diverse applications in mathematics and mathematical physics. Cluster algebras are axiomatically defined commutative rings equipped with a distinguished set of generators (cluster variables) subdivided into overlapping subsets (clusters) of the same cardinality subject to certain polynomial relations. A cluster algebra of rank n can be viewed as a subring of the field of rational functions in n variables. Rather than being presented, at the outset, by a complete set of generators and relations, it is constructed from the initial seed via an iterative procedure called mutation producing new seeds successively to generate the whole algebra. A seed consists of an n-tuple of rational functions called cluster variables and an exchange matrix controlling the mutation. Relations of cluster algebra type can be observed in many areas of mathematics (Plücker and Ptolemy relations, Stokes curves and wall-crossing phenomena, Feynman integrals, Somos sequences and Hirota equations to name just a few examples). The cluster variables enjoy a remarkable combinatorial pattern; in particular, they exhibit the Laurent phenomenon: they are expressed as Laurent polynomials rather than more general rational functions in terms of the cluster variables in any seed. These characteristic features are often referred to as the cluster algebra structure. In the last decade, it became apparent that cluster structures are ubiquitous in mathematical physics. Examples include supersymmetric gauge theories, Poisson geometry, integrable systems, statistical mechanics, fusion products in infinite dimensional algebras, dilogarithm
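
    The mutation step on the exchange matrix is simple enough to state in code. Below is a short Python sketch of matrix mutation in one common sign convention (an assumption; equivalent formulations exist, and the cluster-variable exchange that produces the Laurent polynomials is omitted). The rank-2 example checks the defining property that mutating twice in the same direction is an involution.

```python
import numpy as np

def mutate(B, k):
    """Mutation of an exchange matrix B in direction k (one common convention):
    B'[i, j] = -B[i, j]                                  if i == k or j == k,
             = B[i, j] + sign(B[i, k]) * max(B[i, k] * B[k, j], 0)  otherwise."""
    n = B.shape[0]
    Bp = B.copy()
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                Bp[i, j] = -B[i, j]
            else:
                Bp[i, j] = B[i, j] + np.sign(B[i, k]) * max(B[i, k] * B[k, j], 0)
    return Bp

# rank-2 example: mutation in the same direction is an involution
B = np.array([[0, 2], [-2, 0]])
B1 = mutate(B, 0)
assert (mutate(B1, 0) == B).all()
print(B1)
```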

  13. Intelligent microchip networks: an agent-on-chip synthesis framework for the design of smart and robust sensor networks

    Science.gov (United States)

    Bosse, Stefan

    2013-05-01

    , interaction (communication), and mobility features are modelled and specified on a machine-independent abstract programming level using a state-based agent behaviour language (APL). With this APL, a high-level agent compiler is able to synthesize a hardware model (RTL, VHDL), a software model (C, ML), or a simulation model (XML) suitable for simulating a multi-agent system with the SeSAm simulator framework. Agent communication is provided by a simple tuple-space database implemented at node level, providing fault-tolerant access to global data. A novel synthesis development kit (SynDK), based on a graph-structured database approach, is introduced to support the rapid development of compilers and synthesis tools, used for example for the design and implementation of the APL compiler.
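
    The tuple-space communication model can be illustrated with a minimal Linda-style sketch in Python. This is an invented toy, not the on-chip database (a real agent-on-chip implementation would live in node-local memory and hardware): agents publish tuples with out() and consume matching tuples with a blocking inp(), where None acts as a wildcard.

```python
import threading

class TupleSpace:
    """Linda-style toy: out() publishes, inp() blocks until a tuple matches
    the pattern (None = wildcard) and removes it."""
    def __init__(self):
        self._tuples = []
        self._cv = threading.Condition()

    def out(self, tup):
        with self._cv:
            self._tuples.append(tup)
            self._cv.notify_all()

    def _match(self, pattern):
        for t in self._tuples:
            if len(t) == len(pattern) and all(
                    p is None or p == v for p, v in zip(pattern, t)):
                return t
        return None

    def inp(self, pattern):
        with self._cv:
            while (t := self._match(pattern)) is None:
                self._cv.wait()
            self._tuples.remove(t)
            return t

space = TupleSpace()
space.out(("sensor", 7, 21.5))          # one agent publishes a reading
print(space.inp(("sensor", 7, None)))   # another consumes it by pattern
```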

  14. Algorithms for Learning Preferences for Sets of Objects

    Science.gov (United States)

    Wagstaff, Kiri L.; desJardins, Marie; Eaton, Eric

    2010-01-01

    A method is being developed that provides for an artificial-intelligence system to learn a user's preferences for sets of objects and to thereafter automatically select subsets of objects according to those preferences. The method was originally intended to enable automated selection, from among large sets of images acquired by instruments aboard spacecraft, of image subsets considered to be scientifically valuable enough to justify use of limited communication resources for transmission to Earth. The method is also applicable to other sets of objects: examples of sets of objects considered in the development of the method include food menus, radio-station music playlists, and assortments of colored blocks for creating mosaics. The method does not require the user to perform the often-difficult task of quantitatively specifying preferences; instead, the user provides examples of preferred sets of objects. This method goes beyond related prior artificial-intelligence methods for learning which individual items are preferred by the user: this method supports a concept of set-based preferences, which include not only preferences for individual items but also preferences regarding types and degrees of diversity of items in a set. Consideration of diversity in this method involves recognition that members of a set may interact with each other in the sense that, when considered together, they may be regarded as being complementary, redundant, or incompatible to various degrees. The effects of such interactions are loosely summarized in the term portfolio effect. The learning method relies on a preference representation language, denoted DD-PREF, to express set-based preferences. In DD-PREF, a preference is represented by a tuple that includes quality (depth) functions to estimate how desired a specific value is, weights for each feature preference, the desired diversity of feature values, and the relative importance of diversity versus depth. The system applies statistical
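
    The shape of such a preference tuple can be sketched as a data structure. The Python below is a schematic rendering only: the field names, the set-scoring formula, and the diversity measure are our own invented stand-ins, not the DD-PREF language itself, which is defined in the cited work.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class FeaturePreference:
    quality: Callable[[float], float]  # depth: how desired a value is
    weight: float                      # importance of this feature
    diversity: float                   # 0 = uniform set, 1 = maximally varied

def set_score(items: Sequence[dict], prefs: dict, alpha: float) -> float:
    """Score a candidate set; alpha trades depth against diversity match."""
    score = 0.0
    for feat, p in prefs.items():
        values = [item[feat] for item in items]
        depth = sum(p.quality(v) for v in values) / len(values)
        spread = (max(values) - min(values)) / (abs(max(values)) + 1e-9)
        match = 1.0 - abs(p.diversity - spread)   # toy diversity agreement
        score += p.weight * (alpha * depth + (1 - alpha) * match)
    return score

# e.g. prefer low cloud cover (depth) and widely spread latitudes (diversity)
prefs = {"cloud_cover": FeaturePreference(lambda v: 1 - v, 0.7, 0.2),
         "latitude": FeaturePreference(lambda v: 1.0, 0.3, 0.9)}
images = [{"cloud_cover": 0.1, "latitude": 10.0},
          {"cloud_cover": 0.2, "latitude": 60.0}]
print(round(set_score(images, prefs, alpha=0.6), 3))
```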

  15. A Fault Oblivious Extreme-Scale Execution Environment

    Energy Technology Data Exchange (ETDEWEB)

    McKie, Jim

    2014-11-20

    The FOX project, funded under the ASCR X-stack I program, developed systems software and runtime libraries for a new approach to data and work distribution for massively parallel, fault oblivious application execution. Our work was motivated by the premise that exascale computing systems will provide a thousand-fold increase in parallelism and a proportional increase in failure rate relative to today's machines. To deliver the capability of exascale hardware, the systems software must provide the infrastructure to support existing applications while simultaneously enabling efficient execution of new programming models that naturally express dynamic, adaptive, irregular computation; coupled simulations; and massive data analysis in a highly unreliable hardware environment with billions of threads of execution. Our OS research has prototyped new methods to provide efficient resource sharing, synchronization, and protection in a many-core compute node. We have experimented with alternative task/dataflow programming models and shown scalability in some cases to hundreds of thousands of cores. Much of our software is in active development through open source projects. Concepts from FOX are being pursued in next generation exascale operating systems. Our OS work focused on adaptive, application-tailored OS services optimized for multi- to many-core processors. We developed a new operating system, NIX, that supports role-based allocation of cores to processes, which was released as open source. We contributed to the IBM FusedOS project, which promoted the concept of latency-optimized and throughput-optimized cores. We built a task queue library based on a distributed, fault-tolerant key-value store and identified scaling issues. A second fault-tolerant task parallel library was developed, based on the Linda tuple space model, which used low-level interconnect primitives for optimized communication. We designed fault tolerance mechanisms for task parallel computations

  16. Camera Trajectory from Wide Baseline Images

    Science.gov (United States)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    It has been suggested to draw 5-tuples from the list of tentative matches ordered ascendingly by the distance of their descriptors, which may help to reduce the number of samples in RANSAC. From each 5-tuple, the relative orientation is computed by solving the 5-point minimal relative orientation problem for calibrated cameras. Often, there are several models that are supported by a large number of matches. Thus the chance that the correct model, even if it has the largest support, will be found by running a single RANSAC is small. Prior work suggested generating models by randomized sampling as in RANSAC but using soft (kernel) voting for a parameter instead of looking for the maximal support. The best model is then selected as the one with the parameter closest to the maximum in the accumulator space. In our case, we vote in a two-dimensional accumulator for the estimated camera motion direction. However, unlike that work, we do not cast votes directly for each sampled epipolar geometry but for the best epipolar geometries recovered by ordered sampling in RANSAC. With our technique, we could go up to 98.5% contamination by mismatches with effort comparable to what simple RANSAC needs for 84% contamination. The relative camera orientation with the motion direction closest to the maximum in the voting space is finally selected. As already mentioned in the first paragraph, camera trajectory estimates have wide use. In earlier work we introduced a technique for measuring the size of camera translation relative to the observed scene, which uses the dominant apical angle computed at the reconstructed scene points and is robust against mismatches. The experiments demonstrated that the measure can be used to improve the robustness of camera path computation and object recognition for methods that use a geometric constraint, e.g. the ground plane, such as for the detection of pedestrians. Using the camera trajectories, perspective cutouts with stabilized horizon are constructed and an
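
    The soft-voting idea can be reduced to a one-parameter toy. The Python sketch below is a deliberately simplified stand-in (assumptions: hypotheses are drawn directly from scalar direction samples rather than from 5-point epipolar geometry solutions, and all constants are arbitrary): each randomized sample casts a Gaussian kernel vote into an accumulator over motion direction, and the accumulator maximum, not the maximal inlier support, selects the estimate.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in data: scalar motion directions instead of epipolar geometries.
true_direction = 0.8                          # radians, unknown to the estimator
inliers = true_direction + rng.normal(0, 0.05, 120)
outliers = rng.uniform(-np.pi, np.pi, 380)    # heavy mismatch contamination
samples = np.concatenate([inliers, outliers])

bins = np.linspace(-np.pi, np.pi, 181)
centers = (bins[:-1] + bins[1:]) / 2
accumulator = np.zeros(len(centers))

for _ in range(500):                          # randomized sampling, as in RANSAC
    hypothesis = rng.choice(samples)          # stand-in for a 5-point solution
    # soft (kernel) vote: Gaussian weight spread over nearby direction bins
    accumulator += np.exp(-0.5 * ((centers - hypothesis) / 0.1) ** 2)

estimate = centers[np.argmax(accumulator)]
print(f"estimated direction: {estimate:.2f} rad (true: {true_direction} rad)")
```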

  17. Lowering the Barrier to Cross-Disciplinary Scientific Data Access via a Brokering Service Built Around a Unified Data Model

    Science.gov (United States)

    Lindholm, D. M.; Wilson, A.

    2012-12-01

    The steps many scientific data users go through to use data (after discovering it) can be rather tedious, even when dealing with datasets within their own discipline. Accessing data across domains often seems intractable. We present here, LaTiS, an Open Source brokering solution that bridges the gap between the source data and the user's code by defining a unified data model plus a plugin framework for "adapters" to read data from their native source, "filters" to perform server side data processing, and "writers" to output any number of desired formats or streaming protocols. A great deal of work is being done in the informatics community to promote multi-disciplinary science with a focus on search and discovery based on metadata - information about the data. The goal of LaTiS is to go that last step to provide a uniform interface to read the dataset into computer programs and other applications once it has been identified. The LaTiS solution for integrating a wide variety of data models is to return to mathematical fundamentals. The LaTiS data model emphasizes functional relationships between variables. For example, a time series of temperature measurements can be thought of as a function that maps a time to a temperature. With just three constructs: "Scalar" for a single variable, "Tuple" for a collection of variables, and "Function" to represent a set of independent and dependent variables, the LaTiS data model can represent most scientific datasets at a low level that enables uniform data access. Higher level abstractions can be built on top of the basic model to add more meaningful semantics for specific user communities. LaTiS defines its data model in terms of the Unified Modeling Language (UML). It also defines a very thin Java Interface that can be implemented by numerous existing data interfaces (e.g. NetCDF-Java) such that client code can access any dataset via the Java API, independent of the underlying data access mechanism. LaTiS also provides a
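
    The three constructs map naturally onto code. Below is a minimal Python sketch with invented method signatures (the real LaTiS model is specified in UML with a thin Java interface): a time series of temperature is a Function from time Scalars to temperature Scalars, as in the abstract's example.

```python
class Scalar:
    def __init__(self, name, value):
        self.name, self.value = name, value

class Tuple:
    def __init__(self, *variables):
        self.variables = variables

class Function:
    """A sampled function: (domain sample, codomain sample) pairs."""
    def __init__(self, samples):
        self.samples = samples                  # [(Scalar, Scalar-or-Tuple)]
    def __call__(self, x):
        return next(c for d, c in self.samples if d.value == x)

# a time series of temperature is a function from time to temperature
series = Function([(Scalar("time", t), Scalar("temperature", 20 + t))
                   for t in range(3)])
print(series(2).value)   # -> 22
```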

  18. Large-Scale, Parallel, Multi-Sensor Data Fusion in the Cloud

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.

    2012-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover and access multiple datasets from remote sites, find the space/time "matchups" between instrument swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To efficiently assemble such decade-scale datasets in a timely manner, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. "SciReduce" is a Hadoop-like parallel analysis system, programmed in parallel Python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, in which simple tuples (keys & values) are passed between the map and reduce functions, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Thus, SciReduce uses the native datatypes (geolocated grids, swaths, and points) that geo-scientists are familiar with. We are deploying within Sci
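
    The contrast between key/value tuples and bundles of named arrays is easy to sketch. The following in-process Python toy (function names and data invented; SciReduce itself runs across Cloud nodes) passes dicts of named numpy arrays through map and reduce steps to fuse a per-cell mean across granules.

```python
import numpy as np
from functools import reduce

def map_granule(granule):
    """Map step: emit a bundle of named arrays (partial sum and count)."""
    t = granule["temperature"]
    return {"sum": t, "count": np.ones_like(t)}

def combine(a, b):
    """Reduce step: merge two bundles field by field."""
    return {k: a[k] + b[k] for k in a}

granules = [{"temperature": np.array([280.0, 285.0, 290.0]) + i}
            for i in range(4)]
total = reduce(combine, map(map_granule, granules))
print(total["sum"] / total["count"])   # per-cell mean across all granules
```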

  19. Using Scientific Thinking and Intelligent Aggregation to Tame Big Data

    Science.gov (United States)

    Strelitz, R. A.; Yuen, D. A.

    2013-12-01

    a concomitant avoidance of empirical constructs. First, we not only display the full basic equation derived from the laws of conservation and restrictions to smoothness and continuity, we take care to break down the equations into their component terms, each a distinct process and, when put in non-dimensional form, with a dimensionless coefficient that contains both significance and scale terms. We define a computational regime as a region in space-time for which the same tuple of dimensionless numbers is active and/or dominant. One of the unfortunate side effects of big data and big computing is the growing dominance of the Eulerian viewpoint over the more traditional Lagrangian, you-are-there stance. In the mind of the scientist, our starting point, there is not only the foundation of equations and idealized forms and solutions, but also Lagrangian diagrams and cartoons of interacting processes. In our mock-ups and tests, we make liberal use of multiple, linked windows to reveal links between disparate views. Using brushing, tracking, and highlighting, we permit the scientists to create and test hypotheses from the particular to the global with a minimum number of interactions.

  20. Toward a view-oriented approach for aligning RDF-based biomedical repositories.

    Science.gov (United States)

    Anguita, A; García-Remesal, M; de la Iglesia, D; Graf, N; Maojo, V

    2015-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Managing Interoperability and Complexity in Health Systems". The need for complementary access to multiple RDF databases has fostered new lines of research, but also entailed new challenges due to data representation disparities. While several approaches for RDF-based database integration have been proposed, those focused on schema alignment have become the most widely adopted. All state-of-the-art solutions for aligning RDF-based sources resort to a simple technique inherited from legacy relational database integration methods. This technique - known as element-to-element (e2e) mapping - is based on establishing 1:1 mappings between single primitive elements - e.g. concepts, attributes, relationships, etc. - belonging to the source and target schemas. However, due to the intrinsic nature of RDF - a representation language based on defining tuples - one may find RDF elements whose semantics vary dramatically when combined into a view involving other RDF elements - i.e. they depend on their context. The latter cannot be adequately represented in the target schema by the traditional e2e approach, which fails to properly address this issue without explicitly modifying the target ontology and thus lacks the required expressiveness for reflecting the intended semantics in the alignment information. Our objective is to enhance existing RDF schema alignment techniques with a mechanism to properly represent elements with context-dependent semantics, thus enabling users to perform more expressive alignments, including scenarios that cannot be adequately addressed by the existing approaches. Instead of establishing 1:1 correspondences between single primitive elements of the schemas, we propose adopting a view-based approach, targeted at establishing mapping relationships between RDF subgraphs - which can be regarded as the equivalent of views in traditional
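
    A view-based mapping can be mimicked with a SPARQL CONSTRUCT over a subgraph. The Python/rdflib sketch below is purely illustrative (the namespaces, terms, and clinical example are invented, and this is a generic technique, not the authors' system): the meaning of ex:value depends on the measurement kind it is attached to, so the mapping is expressed over the whole subgraph rather than element-to-element.

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/src#")   # invented source vocabulary

g = Graph()
g.add((EX.p1, EX.hasMeasurement, EX.m1))
g.add((EX.m1, EX.kind, Literal("tumor_size")))
g.add((EX.m1, EX.value, Literal(12)))

# ex:value only means "tumor size in mm" in the context of ex:kind, so the
# alignment is expressed over the subgraph (a view), not element-to-element.
view = g.query("""
    PREFIX ex: <http://example.org/src#>
    PREFIX tg: <http://example.org/tgt#>
    CONSTRUCT { ?p tg:tumorSizeMm ?v }
    WHERE {
        ?p ex:hasMeasurement ?m .
        ?m ex:kind "tumor_size" ; ex:value ?v .
    }
""")
for triple in view:
    print(triple)
```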

  1. Automated Cache Performance Analysis And Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-23

    , cache behavior could only be measured reliably in the aggregate across tens or hundreds of thousands of instructions. With the newest iteration of PEBS technology, cache events can be tied to a tuple of instruction pointer, target address (for both loads and stores), memory hierarchy level, and observed latency. With this information we can now begin asking questions regarding the efficiency of not only regions of code, but how these regions interact with particular data structures and how these interactions evolve over time. In the short term, this information will be vital for performance analysts understanding and optimizing the behavior of their codes for the memory hierarchy. In the future, we can begin to ask how data layouts might be changed to improve performance and, for a particular application, what the theoretical optimal performance might be. The overall benefit to be produced by this effort was a commercial-quality, easy-to-use, and scalable performance tool that allows both beginner and experienced parallel programmers to automatically tune their applications for optimal cache usage. Effective use of such a tool can literally save weeks of performance tuning effort. Easy to use: with the proposed innovations, finding and fixing memory performance issues becomes more automated, hiding most if not all of the required performance-engineering expertise "under the hood" of the Open|SpeedShop performance tool. One of the biggest public benefits from the proposed innovations is that it makes performance analysis usable by a larger group of application developers. Intuitive reporting of results: the Open|SpeedShop performance analysis tool has a rich set of intuitive yet detailed reports for presenting performance results to application developers. Our goal was to leverage this existing technology to present the results from our memory performance addition to Open|SpeedShop. Suitable for experts as well as novices: application performance is getting more difficult
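
    To make the tuple concrete, here is a toy Python aggregation over hypothetical PEBS-style samples (the addresses, levels, and latencies are invented; real samples come from the hardware via the kernel's profiling interface): grouping by instruction pointer shows which code sites suffer long-latency misses and where in the hierarchy they resolve.

```python
from collections import defaultdict

# Invented PEBS-style samples: (instruction pointer, target address,
# memory hierarchy level, observed latency in cycles).
samples = [
    (0x400f3a, 0x7f10c0, "L2", 14),
    (0x400f3a, 0x7f10c8, "DRAM", 210),
    (0x401022, 0x7f2000, "L1", 4),
    (0x400f3a, 0x7f10d0, "DRAM", 230),
]

by_site = defaultdict(list)
for ip, addr, level, latency in samples:
    by_site[ip].append((level, latency))

for ip, hits in sorted(by_site.items()):
    mean = sum(lat for _, lat in hits) / len(hits)
    worst = max(hits, key=lambda h: h[1])
    print(f"{ip:#x}: {len(hits)} samples, mean {mean:.0f} cycles, "
          f"worst {worst[1]} cycles ({worst[0]})")
```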