WorldWideScience

Sample records for government databases featuring

  1. Applications of GIS and database technologies to manage a Karst Feature Database

    Science.gov (United States)

    Gao, Y.; Tipping, R.G.; Alexander, E.C.

    2006-01-01

    This paper describes the management of a Karst Feature Database (KFD) in Minnesota. Two sets of applications, in both GIS and a Database Management System (DBMS), have been developed to manage the KFD of Minnesota and to enhance its usability. Structured Query Language (SQL) was used to handle database transactions and to support the functionality of the user interfaces. The Database Administrator (DBA) authorized users with different access permissions to enhance the security of the database. Database consistency and recovery are accomplished by creating data logs and maintaining backups on a regular basis. The working database provides guidelines and management tools for future studies of karst features in Minnesota. The methodology of designing this DBMS is applicable to developing GIS-based databases for analyzing and managing geomorphic and hydrologic datasets at both regional and local scales. The short-term goal of this research is to develop a regional KFD for the Upper Mississippi Valley Karst; the long-term goal is to expand this database to manage and study karst features at national and global scales.
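
    The SQL-based transaction handling described above can be illustrated with a small relational sketch; the table name, columns, and sample rows below are hypothetical stand-ins, not the actual Minnesota KFD schema.

```python
import sqlite3

# Hypothetical miniature of a karst feature table; the real KFD schema
# is not reproduced here.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE karst_feature (
        feature_id   INTEGER PRIMARY KEY,
        feature_type TEXT NOT NULL,   -- e.g. 'sinkhole', 'spring'
        county       TEXT,
        utm_easting  REAL,
        utm_northing REAL
    )
""")
cur.executemany(
    "INSERT INTO karst_feature (feature_type, county, utm_easting, utm_northing) "
    "VALUES (?, ?, ?, ?)",
    [("sinkhole", "Fillmore", 567100.0, 4837200.0),
     ("spring",   "Olmsted",  551300.0, 4876900.0),
     ("sinkhole", "Fillmore", 568400.0, 4836100.0)],
)
conn.commit()

# The kind of query a user interface might issue against the database
cur.execute(
    "SELECT county, COUNT(*) FROM karst_feature "
    "WHERE feature_type = ? GROUP BY county",
    ("sinkhole",),
)
print(cur.fetchall())  # [('Fillmore', 2)]
```

    An in-memory SQLite database stands in for the production DBMS; the same parameterized-query pattern applies to any SQL backend.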

  2. Corporate governance and audit features: SMEs evidence

    OpenAIRE

    Al-Najjar, Basil

    2018-01-01

    Purpose: The purpose of this paper is to investigate the effect of corporate governance factors on audit features, namely audit fees and the selection of Big 4 audit firms, within the UK SME context. Design/methodology/approach: The author uses different regression models to investigate the impact of corporate governance characteristics on audit features, employing cross-sectional time series models as well as a two-stage least squares technique. In addition, the author has used l...

  3. Construction of ideas and practice for 'nuclear geology featured database'

    International Nuclear Information System (INIS)

    Hu Guanglin; Feng Kai

    2010-01-01

    East China Institute of Technology trains personnel in the area of nuclear resource exploration; it is a multidisciplinary institute with a nuclear focus. At present, our library has built several collection systems focusing on uranium and geology. The library decided to organize its resources to construct a nuclear geology featured database and put it into use as soon as possible. The 'Nuclear Geology Featured Database' was designed under the principles of uniqueness, standardization, completeness, practicality, security and respect for intellectual property rights. The database contains 'maps and tables', 'periodical theses', 'dissertations', 'conference papers', 'newspapers', 'books', etc. The types of literature mainly include monographs, periodicals, dissertations, conference papers and newspapers, as well as videos. The database can be searched by title, author and full text, and will gradually become a more authoritative nuclear geology database for study. (authors)

  4. Databases in the Central Government: State-of-the-Art and the Future

    Science.gov (United States)

    Ohashi, Tomohiro

    The Management and Coordination Agency, Prime Minister's Office, conducted a questionnaire survey of all Japanese Ministries and Agencies in November 1985 on the present status of databases produced, or planned, by the central government. According to the results, 132 databases had been produced by 19 Ministries and Agencies. Many of these databases were held by the Defence Agency, the Ministry of Construction, the Ministry of Agriculture, Forestry & Fisheries, and the Ministry of International Trade & Industry, in the fields of architecture & civil engineering, science & technology, R & D, agriculture, forestry and fishery. However, only 39 percent of the produced databases were available to other Ministries and Agencies; 60 percent were unavailable to them, being in-house databases and so forth. The outline of the survey results is reported, and the databases produced by the central government are introduced under the headings of (1) databases commonly used by all Ministries and Agencies, (2) integrated databases, (3) statistical databases and (4) bibliographic databases. Future problems are also described from the viewpoints of technology development and mutual use of databases.

  5. A Ruby API to query the Ensembl database for genomic features.

    Science.gov (United States)

    Strozzi, Francesco; Aerts, Jan

    2011-04-01

    The Ensembl database makes genomic features available via its Genome Browser. It is also possible to access the underlying data through a Perl API for advanced querying. We have developed a full-featured Ruby API to the Ensembl databases, providing the same functionality as the Perl interface with additional features. A single Ruby API is used to access different releases of the Ensembl databases and is also able to query multi-species databases. Most functionality of the API is provided using the ActiveRecord pattern. The library depends on introspection to make it release independent. The API is available through the Rubygem system and can be installed with the command gem install ruby-ensembl-api.

  6. Essential Features of Responsible Governance of Agricultural Biotechnology.

    Science.gov (United States)

    Hartley, Sarah; Gillund, Frøydis; van Hove, Lilian; Wickson, Fern

    2016-05-01

    Agricultural biotechnology continues to generate considerable controversy. We argue that to address this controversy, serious changes to governance are needed. The new wave of genomic tools and products (e.g., CRISPR, gene drives, RNAi, synthetic biology, and genetically modified [GM] insects and fish) provides a particularly useful opportunity to reflect on and revise agricultural biotechnology governance. In response, we present five essential features to advance more socially responsible forms of governance. In presenting these, we hope to stimulate further debate and action towards improved forms of governance, particularly as these new genomic tools and products continue to emerge.

  7. "There's so Much Data": Exploring the Realities of Data-Based School Governance

    Science.gov (United States)

    Selwyn, Neil

    2016-01-01

    Educational governance is commonly predicated around the generation, collation and processing of data through digital technologies. Drawing upon an empirical study of two Australian secondary schools, this paper explores the different forms of data-based governance that are being enacted by school leaders, managers, administrators and teachers.…

  8. Quantitative imaging features: extension of the oncology medical image database

    Science.gov (United States)

    Patel, M. N.; Looney, P. T.; Young, K. C.; Halling-Brown, M. D.

    2015-03-01

    Radiological imaging is fundamental within the healthcare industry and has become routinely adopted for diagnosis, disease monitoring and treatment planning. With the advent of digital imaging modalities and the rapid growth in both diagnostic and therapeutic imaging, the ability to harness this large influx of data is of paramount importance. The Oncology Medical Image Database (OMI-DB) was created to provide a centralized, fully annotated dataset for research. The database contains both processed and unprocessed images, associated data, annotations and, where applicable, expert-determined ground truths describing features of interest. Medical imaging provides the ability to detect and localize many changes that are important in determining whether a disease is present or a therapy is effective, by depicting alterations in anatomic, physiologic, biochemical or molecular processes. Quantitative imaging features are sensitive, specific, accurate and reproducible imaging measures of these changes. Here, we describe an extension to the OMI-DB whereby a range of imaging features and descriptors are pre-calculated using a high-throughput approach. The ability to calculate multiple imaging features from the acquired images is valuable and facilitates further research applications investigating detection, prognosis, and classification. The resultant data store contains more than 10 million quantitative features as well as features derived from CAD predictions. These data can be used to build predictive models to aid image classification and treatment response assessment, as well as to identify prognostic imaging biomarkers.

  9. The LAILAPS search engine: a feature model for relevance ranking in life science databases.

    Science.gov (United States)

    Lange, Matthias; Spies, Karl; Colmsee, Christian; Flemming, Steffen; Klapperstück, Matthias; Scholz, Uwe

    2010-03-25

    Efficient and effective information retrieval in the life sciences is one of the most pressing challenges in bioinformatics. The incredible growth of life science databases into a vast network of interconnected information systems is both a big challenge and a great opportunity for life science research. The knowledge found on the Web, in particular in life-science databases, is a valuable major resource. In order to bring it to the scientist's desktop, well-performing search engines are essential. Here, neither the response time nor the number of results is decisive; for millions of query results, the most crucial factor is relevance ranking. In this paper, we present a feature model for relevance ranking in life science databases and its implementation in the LAILAPS search engine. Motivated by observing user behavior during the inspection of search engine results, we condensed a set of 9 relevance-discriminating features. These features are intuitively used by scientists, who briefly screen database entries for potential relevance. The features are both sufficient to estimate potential relevance and efficiently quantifiable. The derivation of a relevance prediction function that computes the relevance from these features constitutes a regression problem. To solve this problem, we used artificial neural networks trained with a reference set of relevant database entries for 19 protein queries. Supporting a flexible text index and a simple data import format, these concepts are implemented in the LAILAPS search engine. It can easily be used both as a search engine for comprehensive integrated life science databases and for small in-house project databases. LAILAPS is publicly available for SWISSPROT data at http://lailaps.ipk-gatersleben.de.
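
    The ranking pipeline the abstract outlines (a small feature vector per database entry, mapped by a trained neural network to a relevance score) can be sketched as follows; the nine features, the network weights, and the entries are illustrative assumptions, not the actual LAILAPS model.

```python
import numpy as np

# Sketch of neural-network relevance ranking: each database entry is
# reduced to a vector of relevance-discriminating features, and a
# feed-forward network maps that vector to a scalar relevance score.
# Weights here are random stand-ins for a trained model.
rng = np.random.default_rng(0)
n_features = 9

W1 = rng.normal(size=(n_features, 5))   # input -> hidden
b1 = np.zeros(5)
w2 = rng.normal(size=5)                 # hidden -> score

def relevance(x):
    h = np.tanh(x @ W1 + b1)            # hidden-layer activations
    return float(h @ w2)                # scalar relevance score

# Three hypothetical database entries described by their feature vectors
entries = {
    "entry_a": rng.random(n_features),
    "entry_b": rng.random(n_features),
    "entry_c": rng.random(n_features),
}
ranked = sorted(entries, key=lambda k: relevance(entries[k]), reverse=True)
print(ranked)  # entry names ordered by predicted relevance
```

    In the real system the weights would be learned by regression against reference judgments such as the 19 protein queries mentioned above.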

  10. The Government Finance Database: A Common Resource for Quantitative Research in Public Financial Analysis.

    Science.gov (United States)

    Pierson, Kawika; Hand, Michael L; Thompson, Fred

    2015-01-01

    Quantitative public financial management research focused on local governments is limited by the absence of a common database for empirical analysis. While the U.S. Census Bureau distributes government finance data that some scholars have utilized, the arduous process of collecting, interpreting, and organizing the data has made its adoption prohibitive and inconsistent. In this article we offer a single, coherent resource that contains all of the government financial data from 1967-2012, uses easy-to-understand natural-language variable names, and will be extended as new data become available.

  11. A database of semantic features for chosen concepts (Attested in 8- to 10-year-old Czech pupils)

    Directory of Open Access Journals (Sweden)

    Konečná Kristýna

    2017-06-01

    In this paper, a database of semantic features is presented. 104 nominal concepts from 13 semantic categories were described by young Czech schoolchildren, who were asked to respond in writing to the question "what is it, what does it mean?" by listing different kinds of properties for each concept. Their responses were broken down into semantic features and the database was prepared using a set of pre-established rules. The method of database design, with an emphasis on the way features were recorded, is described in detail within this article. The data were statistically analysed and interpreted, and the results, along with database usage methodologies, are discussed. The goal of this research is to produce a complex database to be used in future research relating to semantic features, and it has therefore been published online for use by the wider academic community. At present, published databases are based on data gathered from adult English and Czech speakers; participation in this study, however, was limited specifically to young Czech-speaking children. This database is thus unique in that it provides important insight into this specific age and language group's conceptual knowledge. The research is inspired primarily by research papers concerning semantic feature production obtained from adult English speakers (McRae, de Sa, and Seidenberg, 1997; McRae, Cree, Seidenberg, and McNorgan, 2005; Vinson and Vigliocco, 2008).

  12. Automatic feature extraction in large fusion databases by using deep learning approach

    Energy Technology Data Exchange (ETDEWEB)

    Farias, Gonzalo, E-mail: gonzalo.farias@ucv.cl [Pontificia Universidad Católica de Valparaíso, Valparaíso (Chile); Dormido-Canto, Sebastián [Departamento de Informática y Automática, UNED, Madrid (Spain); Vega, Jesús; Rattá, Giuseppe [Asociación EURATOM/CIEMAT Para Fusión, CIEMAT, Madrid (Spain); Vargas, Héctor; Hermosilla, Gabriel; Alfaro, Luis; Valencia, Agustín [Pontificia Universidad Católica de Valparaíso, Valparaíso (Chile)

    2016-11-15

    Highlights: • Feature extraction is a critical stage in any machine learning algorithm. • The dimensionality of the problem can be reduced enormously by selecting suitable attributes. • Despite its importance, feature extraction is commonly done manually by trial and error. • Recent advances in deep learning offer an encouraging way to find a good feature representation automatically. • In this article, deep learning is applied to the TJ-II fusion database to obtain more robust and accurate classifiers than in previous work. - Abstract: Feature extraction is one of the most important machine learning issues. Finding suitable attributes of datasets can enormously reduce the dimensionality of the input space and, from a computational point of view, can help all of the subsequent steps of pattern recognition problems, such as classification or information retrieval. However, the feature extraction step is usually performed manually. Moreover, depending on the type of data, a wide range of feature extraction methods may apply, so selecting appropriate techniques normally takes a long time. This work describes the use of recent advances in deep learning to find a good feature representation automatically. The implementation of a special neural network called a sparse autoencoder and its application to two classification problems of the TJ-II fusion database is shown in detail. Results show that it is possible to obtain robust classifiers with a high success rate, even though the feature space is reduced to less than 0.02% of the original one.
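
    As a rough numerical sketch of the sparse autoencoder objective the record refers to: a reconstruction error plus a KL-divergence penalty that drives the mean hidden activation toward a small sparsity target. The dimensions, weights, and hyperparameters below are illustrative, not the TJ-II configuration.

```python
import numpy as np

# Toy sparse-autoencoder forward pass and loss on synthetic data.
rng = np.random.default_rng(1)
X = rng.random((16, 32))            # 16 samples, 32 raw input features

n_hidden, rho, beta = 8, 0.05, 3.0  # code size, sparsity target, penalty weight
W_enc = rng.normal(scale=0.1, size=(32, n_hidden))
W_dec = rng.normal(scale=0.1, size=(n_hidden, 32))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

H = sigmoid(X @ W_enc)              # compressed feature representation
X_hat = H @ W_dec                   # reconstruction of the input

recon = np.mean((X - X_hat) ** 2)   # reconstruction error
rho_hat = H.mean(axis=0)            # mean activation of each hidden unit
kl = np.sum(rho * np.log(rho / rho_hat)
            + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
loss = recon + beta * kl            # objective minimized during training
print(round(loss, 4))
```

    Training would minimize this loss over W_enc and W_dec by gradient descent; the learned hidden layer H then serves as the automatically extracted feature vector.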

  13. Automatic feature extraction in large fusion databases by using deep learning approach

    International Nuclear Information System (INIS)

    Farias, Gonzalo; Dormido-Canto, Sebastián; Vega, Jesús; Rattá, Giuseppe; Vargas, Héctor; Hermosilla, Gabriel; Alfaro, Luis; Valencia, Agustín

    2016-01-01

    Highlights: • Feature extraction is a critical stage in any machine learning algorithm. • The dimensionality of the problem can be reduced enormously by selecting suitable attributes. • Despite its importance, feature extraction is commonly done manually by trial and error. • Recent advances in deep learning offer an encouraging way to find a good feature representation automatically. • In this article, deep learning is applied to the TJ-II fusion database to obtain more robust and accurate classifiers than in previous work. - Abstract: Feature extraction is one of the most important machine learning issues. Finding suitable attributes of datasets can enormously reduce the dimensionality of the input space and, from a computational point of view, can help all of the subsequent steps of pattern recognition problems, such as classification or information retrieval. However, the feature extraction step is usually performed manually. Moreover, depending on the type of data, a wide range of feature extraction methods may apply, so selecting appropriate techniques normally takes a long time. This work describes the use of recent advances in deep learning to find a good feature representation automatically. The implementation of a special neural network called a sparse autoencoder and its application to two classification problems of the TJ-II fusion database is shown in detail. Results show that it is possible to obtain robust classifiers with a high success rate, even though the feature space is reduced to less than 0.02% of the original one.

  14. Accuracy of the Government-Owned Contractor-Occupied Real Property in the Military Departments' Real Property Databases

    National Research Council Canada - National Science Library

    2000-01-01

    .... Accurate reporting of real property on the Military Departments' real property databases is essential to DoD and the Federal Government receiving favorable audit opinions on their financial statements...

  15. Achievement motivation features of female servants of government and local self-government bodies

    Directory of Open Access Journals (Sweden)

    T.U. Kulakovsk

    2018-03-01

    The article makes a theoretical analysis of the features of professional performance of civil servants and officials of local self-government bodies. A significant level of staff turnover among civil servants, and a significant number of registered unemployed among persons who had previously worked in these structures, were revealed. A theoretical analysis of psychological research testifies to a direct correlation between the level of achievement motivation and success in entrepreneurial activity. The purpose of the study is therefore to examine the peculiarities of achievement motivation of civil servants and officials of local self-government bodies, given the possibility of engaging dismissed civil servants in entrepreneurial activities. An empirical study of the achievement motivation of female employees has been conducted. A statistically significant inverse correlation between the level of achievement motivation and the length of civil service has been established. The paper substantiates the need for further research aimed at uncovering the factors responsible for this inverse correlation.

  16. Features of Interaction of Business and Government in the Form of Public-Private Partnership

    Directory of Open Access Journals (Sweden)

    Oksana N. Taranenko

    2016-12-01

    Today, modernization of the relations between government and the business sector is an important issue, particularly relevant in the context of financial globalization in the transition to a market economy. The paper discusses the theoretical concept of public-private partnership as a form of business organization combining the functional features of an independent firm or company and of the government, whose implementation is driven by the need to ensure the production of important benefits in various areas, as well as the features of the interaction of business and government. It is proposed to define public-private partnership as a special system of relations between economic agents, and to determine the features that distinguish this form of interaction, as a partnership, from other forms of interaction. The authors also consider public-private partnership in terms of the coordination of relations between government and business, identify the basic principles of interaction between the participants and the main advantages each participant brings to a joint project, and identify areas in which partnerships can support the sustainable development of the country's economy. The paper describes the problems associated with implementing projects within the public-private partnership system, suggests ways to address them, and discusses the main advantages and disadvantages for participants such as the government and business.

  17. Integrated reporting and board features

    Directory of Open Access Journals (Sweden)

    Rares HURGHIS

    2017-02-01

    In the last two decades the concept of sustainability reporting has gained importance in companies' annual reports, a trend which is also embedded in integrated reporting. Issuing an integrated report has become a necessity, because the report explains to investors how the organization creates value over time. The governance structure, more exactly the board of directors, decides whether or not the company will issue an integrated report. Are there, then, certain features of the board that might influence the issuing of an integrated report? Do companies which issue an integrated report share certain features of the governance structure? Looking for an answer to these questions, we test for possible correlations between a disclosure index and corporate governance structure characteristics, on a sample of companies participating in the International Integrated Reporting Council Examples Database. The results highlight that only the size of the board influences the extent to which the issued integrated report is in accordance with the International Framework.

  18. The benefits of a product-independent lexical database with formal word features

    NARCIS (Netherlands)

    Froon, Johanna; Froon, Janneke; de Jong, Franciska M.G.

    Dictionaries can be used as a basis for lexicon development for NLP applications. However, it often takes a lot of pre-processing before they are usable. In the last 5 years a product-independent database of formal word features has been developed on the basis of the Van Dale dictionaries for Dutch.

  19. Thermodynamic database for proteins: features and applications.

    Science.gov (United States)

    Gromiha, M Michael; Sarai, Akinori

    2010-01-01

    We have developed a thermodynamic database for proteins and mutants, ProTherm, which is a collection of a large number of thermodynamic data on protein stability, along with sequence and structure information, experimental methods and conditions, and literature information. It is a valuable resource for understanding and predicting the stability of proteins, and it can be accessed at http://www.gibk26.bse.kyutech.ac.jp/jouhou/Protherm/protherm.html . ProTherm has several features, including various search, display, and sorting options and visualization tools. We have analyzed the data in ProTherm to examine the relationship among the thermodynamics, structure, and function of proteins. We describe progress on the development of methods for understanding/predicting protein stability, such as (i) the relationship between the stability of protein mutants and amino acid properties, (ii) the average assignment method, (iii) empirical energy functions, (iv) torsion, distance, and contact potentials, and (v) machine learning techniques. A list of online resources for predicting protein stability is also provided.

  20. How Do Ownership Features Affect Corporate Governance Disclosure? – The Case of the Banking System

    Directory of Open Access Journals (Sweden)

    Cristina Stefanescu

    2013-04-01

    The purpose of our empirical study is to assess the relationship between ownership features and the level of disclosure for banking institutions listed on the London Stock Exchange, based on the general statement that disclosure and the quality of the corporate governance system are two closely related concepts: the higher the level of transparency, the better the quality of corporate governance practices. The research methodology used to achieve our goal is based on econometric analysis using statistical tools (correlations for identifying the relationships and regressions for assessing them), all performed using SPSS software. We developed a disclosure index and considered structure and concentration as features for assessing ownership. The results of the analysis reveal significant positive influences of all tested features on the level of disclosure, thus confirming our assumption that the higher the quality of ownership, the higher the level of disclosure. Unlike prior studies, which focused on various corporate governance features, our paper adds value by testing ownership only. Moreover, because the banking system has been little explored on this topic, this empirical study further enriches the research literature.

  1. Method to assess the temporal persistence of potential biometric features: Application to oculomotor, gait, face and brain structure databases

    Science.gov (United States)

    Nixon, Mark S.; Komogortsev, Oleg V.

    2017-01-01

    We introduce the intraclass correlation coefficient (ICC) to the biometric community as an index of the temporal persistence, or stability, of a single biometric feature. It requires, as input, a feature on an interval or ratio scale which is reasonably normally distributed, and it can only be calculated if each subject is tested on 2 or more occasions. For a biometric system with multiple features available for selection, the ICC can be used to measure the relative stability of each feature. We show, for 14 distinct data sets (1 synthetic, 8 eye-movement-related, 2 gait-related, 2 face-recognition-related, and 1 brain-structure-related), that selecting the most stable features, based on the ICC, generally resulted in the best biometric performance. Analyses based on using only the most stable features produced superior Rank-1-Identification Rate (Rank-1-IR) performance in 12 of 14 databases (p = 0.0065, one-tailed), when compared to other sets of features, including the set of all features. For Equal Error Rate (EER), using a subset of only high-ICC features also produced superior performance in 12 of 14 databases (p = 0.0065, one-tailed). In general, then, for our databases, prescreening potential biometric features and choosing only highly reliable features yields better performance than choosing lower-ICC features or all features combined. We also determined that, as the ICC of a group of features increases, the median of the genuine similarity score distribution increases and the spread of this distribution decreases. There were no statistically significant similar relationships for the impostor distributions. We believe that the ICC will find many uses in biometric research. In the case of eye-movement-driven biometrics, the use of reliable features, as measured by the ICC, allowed us to achieve authentication performance with EER = 2.01%, which was not possible before. PMID:28575030
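
    A minimal sketch of the ICC as a stability index, using the one-way random-effects form ICC(1,1) on synthetic repeated measurements (the paper may use a different ICC variant):

```python
import numpy as np

def icc_1_1(x):
    """One-way random-effects ICC(1,1); x is (n_subjects, k_sessions)."""
    n, k = x.shape
    grand = x.mean()
    # Mean square between subjects and mean square within subjects
    msb = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)
    msw = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Synthetic feature values: 50 subjects measured on 2 occasions.
rng = np.random.default_rng(0)
subject_effect = rng.normal(size=(50, 1))
stable = subject_effect + 0.1 * rng.normal(size=(50, 2))    # low session noise
unstable = subject_effect + 2.0 * rng.normal(size=(50, 2))  # high session noise

print(icc_1_1(stable) > icc_1_1(unstable))  # True: the stable feature scores higher
```

    In a biometric pipeline, features would be ranked by this coefficient and only the high-ICC ones retained.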

  2. System and method employing a minimum distance and a load feature database to identify electric load types of different electric loads

    Science.gov (United States)

    Lu, Bin; Yang, Yi; Sharma, Santosh K; Zambare, Prachi; Madane, Mayura A

    2014-12-23

    A method identifies electric load types of a plurality of different electric loads. The method includes providing a load feature database of a plurality of different electric load types, each of the different electric load types including a first load feature vector having at least four different load features; sensing a voltage signal and a current signal for each of the different electric loads; determining a second load feature vector comprising at least four different load features from the sensed voltage signal and the sensed current signal for a corresponding one of the different electric loads; and identifying by a processor one of the different electric load types by determining a minimum distance of the second load feature vector to the first load feature vector of the different electric load types of the load feature database.
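
    The minimum-distance identification step can be sketched as a nearest-vector lookup; the load types and feature values below are made-up illustrations, not data from the patent.

```python
import numpy as np

# Hypothetical load feature database: one stored feature vector
# (at least four features derived from voltage/current) per load type.
load_feature_db = {
    "resistive_heater":   np.array([1.00, 0.02, 0.01, 0.00]),
    "motor":              np.array([0.85, 0.40, 0.10, 0.05]),
    "switch_mode_supply": np.array([0.60, 0.10, 0.80, 0.30]),
}

def identify(measured):
    """Return the load type whose stored vector is nearest (Euclidean)."""
    return min(load_feature_db,
               key=lambda t: np.linalg.norm(load_feature_db[t] - measured))

print(identify(np.array([0.82, 0.38, 0.12, 0.04])))  # motor
```
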

  3. Update History of This Database - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Update history of the Trypanosomes Database: 2014/05/07 - the contact information is corrected; the features and manner of utilization of the database are corrected. 2014/02/04 - the Trypanosomes Database English archive site is opened. 2011/04/04 - the Trypanosomes Database ( http://www.tanpaku.org/tdb/ ) is opened.

  4. System and method employing a self-organizing map load feature database to identify electric load types of different electric loads

    Science.gov (United States)

    Lu, Bin; Harley, Ronald G.; Du, Liang; Yang, Yi; Sharma, Santosh K.; Zambare, Prachi; Madane, Mayura A.

    2014-06-17

    A method identifies electric load types of a plurality of different electric loads. The method includes providing a self-organizing map load feature database of a plurality of different electric load types and a plurality of neurons, each of the load types corresponding to a number of the neurons; employing a weight vector for each of the neurons; sensing a voltage signal and a current signal for each of the loads; determining a load feature vector including at least four different load features from the sensed voltage signal and the sensed current signal for a corresponding one of the loads; and identifying by a processor one of the load types by relating the load feature vector to the neurons of the database by identifying the weight vector of one of the neurons corresponding to the one of the load types that is a minimal distance to the load feature vector.
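
    The self-organizing-map variant replaces the single stored vector per load type with many labeled neurons; classification returns the label of the best-matching (nearest-weight) neuron. The neurons and labels below are illustrative stand-ins, not data from the patent.

```python
import numpy as np

# Hypothetical trained SOM: each neuron has a load-type label and a
# weight vector of (at least four) voltage/current-derived features.
neurons = [
    ("motor",  np.array([0.85, 0.40, 0.10, 0.05])),
    ("motor",  np.array([0.80, 0.45, 0.12, 0.06])),
    ("heater", np.array([1.00, 0.02, 0.01, 0.00])),
    ("heater", np.array([0.98, 0.03, 0.02, 0.01])),
]

def classify(feature_vec):
    """Find the best-matching unit and return its load-type label."""
    label, _ = min(neurons,
                   key=lambda nw: np.linalg.norm(nw[1] - feature_vec))
    return label

print(classify(np.array([0.99, 0.02, 0.01, 0.00])))  # heater
```

    Allowing several neurons per type lets the map capture variation within a load class that a single centroid cannot.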

  5. Medical research using governments' health claims databases: with or without patients' consent?

    Science.gov (United States)

    Tsai, Feng-Jen; Junod, Valérie

    2018-03-01

    Taking advantage of its single-payer, universal insurance system, Taiwan has leveraged its exhaustive database of health claims data for research purposes. Researchers can apply to receive access to pseudonymized (coded) medical data about insured patients, notably their diagnoses, health status and treatments. In view of the strict safeguards implemented, the Taiwanese government considers that this research use does not require patients' consent (either in the form of an opt-in or in the form of an opt-out). A group of non-governmental organizations has challenged this view in the Taiwanese Courts, but to no avail. The present article reviews the arguments both against and in favor of patients' consent for re-use of their data in research. It concludes that offering patients an opt-out would be appropriate as it would best balance the important interests at issue.

  6. Generalized Database Management System Support for Numeric Database Environments.

    Science.gov (United States)

    Dominick, Wayne D.; Weathers, Peggy G.

    1982-01-01

    This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…

  7. The development of large-scale de-identified biomedical databases in the age of genomics-principles and challenges.

    Science.gov (United States)

    Dankar, Fida K; Ptitsyn, Andrey; Dankar, Samar K

    2018-04-10

    Contemporary biomedical databases include a wide range of information types from various observational and instrumental sources. Among the most important features that unite biomedical databases across the field are high volume of information and high potential to cause damage through data corruption, loss of performance, and loss of patient privacy. Thus, issues of data governance and privacy protection are essential for the construction of data depositories for biomedical research and healthcare. In this paper, we discuss various challenges of data governance in the context of population genome projects. The various challenges along with best practices and current research efforts are discussed through the steps of data collection, storage, sharing, analysis, and knowledge dissemination.

  8. Feature constrained compressed sensing CT image reconstruction from incomplete data via robust principal component analysis of the database

    International Nuclear Information System (INIS)

    Wu, Dufan; Li, Liang; Zhang, Li

    2013-01-01

    In computed tomography (CT), incomplete data problems such as limited angle projections often cause artifacts in the reconstruction results. Additional prior knowledge of the image has shown the potential for better results, such as a prior image constrained compressed sensing algorithm. While a pre-full-scan of the same patient is not always available, massive well-reconstructed images of different patients can be easily obtained from clinical multi-slice helical CTs. In this paper, a feature constrained compressed sensing (FCCS) image reconstruction algorithm was proposed to improve the image quality by using the prior knowledge extracted from the clinical database. The database consists of instances which are similar to the target image but not necessarily the same. Robust principal component analysis is employed to retrieve features of the training images to sparsify the target image. The features form a low-dimensional linear space, and a constraint on the distance between the image and the space is used. A bi-criterion convex program which combines the feature constraint and total variation constraint is proposed for the reconstruction procedure, and a flexible method is adopted to obtain a good solution. Numerical simulations on both phantom and real clinical patient images were performed to validate our algorithm. Promising results are shown for limited angle problems. (paper)
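
    The feature constraint at the heart of FCCS is a distance between the target image and a low-dimensional linear space spanned by database features. A minimal sketch of that distance, assuming an orthonormal basis: in the paper the basis would come from robust PCA of the training images, whereas here it is hand-made and the vectors are toy 3-D stand-ins for images.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def distance_to_feature_space(x, basis):
    """Distance from vector x to the span of an orthonormal feature basis.

    Project x onto each basis vector, accumulate the projection, and
    measure the norm of the residual that lies outside the subspace.
    """
    proj = [0.0] * len(x)
    for b in basis:
        c = dot(x, b)
        proj = [p + c * bi for p, bi in zip(proj, b)]
    return norm([xi - pi for xi, pi in zip(x, proj)])

# Hypothetical orthonormal 2-D feature space inside R^3.
basis = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
print(distance_to_feature_space([3.0, 4.0, 5.0], basis))  # 5.0: only the z-component lies outside
```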

  9. Governance and oversight of researcher access to electronic health data: the role of the Independent Scientific Advisory Committee for MHRA database research, 2006-2015.

    Science.gov (United States)

    Waller, P; Cassell, J A; Saunders, M H; Stevens, R

    2017-03-01

    In order to promote understanding of UK governance and assurance relating to electronic health records research, we present and discuss the role of the Independent Scientific Advisory Committee (ISAC) for MHRA database research in evaluating protocols proposing the use of the Clinical Practice Research Datalink. We describe the development of the Committee's activities between 2006 and 2015, alongside growth in data linkage and wider national electronic health records programmes, including the application and assessment processes, and our approach to undertaking this work. Our model can provide independence, challenge and support to data providers such as the Clinical Practice Research Datalink database which has been used for well over 1,000 medical research projects. ISAC's role in scientific oversight ensures feasible and scientifically acceptable plans are in place, while having both lay and professional membership addresses governance issues in order to protect the integrity of the database and ensure that public confidence is maintained.

  10. Software listing: CHEMTOX database

    International Nuclear Information System (INIS)

    Moskowitz, P.D.

    1993-01-01

    Initially launched in 1983, the CHEMTOX Database was among the first microcomputer databases containing hazardous chemical information. The database is used in many industries and government agencies in more than 17 countries. Updated quarterly, the CHEMTOX Database provides detailed environmental and safety information on 7500-plus hazardous substances covered by dozens of regulatory and advisory sources. This brief listing describes the method of accessing data and provides ordering information for those wishing to obtain the CHEMTOX Database

  11. Object feature extraction and recognition model

    International Nuclear Information System (INIS)

    Wan Min; Xiang Rujian; Wan Yongxing

    2001-01-01

    The characteristics of objects, especially flying objects, are analyzed, which include characteristics of spectrum, image and motion. Feature extraction is also achieved. To improve the speed of object recognition, a feature database is used to simplify the data in the source database. The feature vs. object relationship maps are stored in the feature database. An object recognition model based on the feature database is presented, and the way to achieve object recognition is also explained
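
    A simplified sketch of how feature-vs-object relationship maps can drive recognition. The feature names, object classes, and set-intersection strategy are assumptions for illustration, not the authors' model:

```python
# Hypothetical feature-vs-object relationship map: each feature key lists
# the objects it is consistent with (spectrum, image, and motion features).
FEATURE_MAP = {
    "broad_spectrum":  {"aircraft", "missile"},
    "narrow_spectrum": {"balloon"},
    "high_speed":      {"missile"},
    "low_speed":       {"aircraft", "balloon"},
    "small_image":     {"missile", "balloon"},
}

def recognize(observed_features):
    """Intersect the candidate object sets of every observed feature."""
    candidates = None
    for f in observed_features:
        objs = FEATURE_MAP.get(f, set())
        candidates = objs if candidates is None else candidates & objs
    return candidates or set()

print(recognize(["broad_spectrum", "high_speed"]))  # {'missile'}
```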

  12. Features of TMR for a Successful Clinical and Research Database

    OpenAIRE

    Pryor, David B.; Stead, William W.; Hammond, W. Edward; Califf, Robert M.; Rosati, Robert A.

    1982-01-01

    A database can be used for clinical practice and for research. The design of the database is important if both uses are to succeed. A clinical database must be efficient and flexible. A research database requires consistent observations recorded in a format which permits complete recall of the experience. In addition, the database should be designed to distinguish between missing data and negative responses, and to minimize transcription errors during the recording process.
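
    The distinction between missing data and negative responses can be made concrete with a three-valued code. This is a hypothetical sketch, not TMR's actual schema:

```python
from enum import Enum

class Finding(Enum):
    PRESENT = "present"        # observation made, condition present
    ABSENT = "absent"          # observation made, condition absent (negative)
    NOT_RECORDED = "missing"   # no observation made

# Storing None for "not asked" would conflate the last two cases;
# an explicit code keeps them apart for later research queries.
record = {"chest_pain": Finding.ABSENT, "diabetes": Finding.NOT_RECORDED}

def negatives(rec):
    return [k for k, v in rec.items() if v is Finding.ABSENT]

def missing(rec):
    return [k for k, v in rec.items() if v is Finding.NOT_RECORDED]

print(negatives(record), missing(record))  # ['chest_pain'] ['diabetes']
```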

  13. Data-Science Analysis of the Macro-scale Features Governing the Corrosion to Crack Transition in AA7050-T7451

    Science.gov (United States)

    Co, Noelle Easter C.; Brown, Donald E.; Burns, James T.

    2018-05-01

    This study applies data science approaches (random forest and logistic regression) to determine the extent to which macro-scale corrosion damage features govern the crack formation behavior in AA7050-T7451. Each corrosion morphology has a set of corresponding predictor variables (pit depth, volume, area, diameter, pit density, total fissure length, surface roughness metrics, etc.) describing the shape of the corrosion damage. The values of the predictor variables are obtained from white light interferometry, x-ray tomography, and scanning electron microscope imaging of the corrosion damage. A permutation test is employed to assess the significance of the logistic and random forest model predictions. Results indicate minimal relationship between the macro-scale corrosion feature predictor variables and fatigue crack initiation. These findings suggest that the macro-scale corrosion features and their interactions do not solely govern the crack formation behavior. While these results do not imply that the macro-features have no impact, they do suggest that additional parameters must be considered to rigorously inform the crack formation location.
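
    The permutation test used to assess model significance can be illustrated in miniature. Everything here is hypothetical: a toy threshold rule stands in for the random forest and logistic regression models, and the pit depths and crack outcomes are invented.

```python
import random

def accuracy(xs, labels, threshold):
    """Predict crack formation when the corrosion metric exceeds threshold."""
    return sum((x > threshold) == y for x, y in zip(xs, labels)) / len(xs)

def permutation_p_value(xs, labels, threshold, n_perm=2000, seed=0):
    """Fraction of label shufflings that score at least as well as the real labels."""
    rng = random.Random(seed)
    observed = accuracy(xs, labels, threshold)
    hits = 0
    for _ in range(n_perm):
        shuffled = labels[:]
        rng.shuffle(shuffled)
        if accuracy(xs, shuffled, threshold) >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical pit depths (um) and crack-initiation outcomes.
pit_depth = [12, 35, 18, 40, 22, 38, 15, 31]
cracked   = [False, True, False, True, False, True, False, True]
print(permutation_p_value(pit_depth, cracked, threshold=25))
```

    A small p-value would indicate that the predictor-outcome relationship is unlikely to arise from chance labelings; the paper's near-chance results correspond to large p-values.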

  14. NoSQL databases

    OpenAIRE

    Mrozek, Jakub

    2012-01-01

    This thesis deals with database systems referred to as NoSQL databases. In the second chapter, I explain basic terms and the theory of database systems. A short explanation is dedicated to database systems based on the relational data model and the SQL standardized query language. Chapter Three explains the concept and history of the NoSQL databases, and also presents database models, major features and the use of NoSQL databases in comparison with traditional database systems. In the fourth ...

  15. Xylella fastidiosa comparative genomic database is an information resource to explore the annotation, genomic features, and biology of different strains

    Directory of Open Access Journals (Sweden)

    Alessandro M. Varani

    2012-01-01

    The Xylella fastidiosa comparative genomic database is a scientific resource with the aim to provide a user-friendly interface for accessing high-quality manually curated genomic annotation and comparative sequence analysis, as well as for identifying and mapping prophage-like elements, a marked feature of Xylella genomes. Here we describe a database and tools for exploring the biology of this important plant pathogen. The hallmarks of this database are the high quality genomic annotation, the functional and comparative genomic analysis and the identification and mapping of prophage-like elements. It is available from web site http://www.xylella.lncc.br.

  16. Enhancing facial features by using clear facial features

    Science.gov (United States)

    Rofoo, Fanar Fareed Hanna

    2017-09-01

    The similarity of features between individuals of the same ethnicity motivated this project. The idea is to extract the features of a clear facial image and impose them on a blurred facial image of the same ethnic origin, as an approach to enhancing the blurred image. A database of clear images was assembled, containing 30 individuals divided equally into five ethnicities: Arab, African, Chinese, European and Indian. Software was built to pre-process the images in order to align the features of the clear and blurred images. Features of a clear facial image, or of a template built from several clear facial images, were extracted using the wavelet transform and imposed on the blurred image using the inverse wavelet transform. The results of this approach were not satisfactory, as the features did not all align together: in most cases the eyes were aligned but the nose or mouth were not. In the next approach the features were treated separately, but in some cases a blocky effect appeared on the features because no closely matching features were available. In general, the small database did not allow the goal results to be achieved, because of the limited number of available individuals. Colour information and feature similarity could be investigated further to achieve better results, given a larger database and closer matches within each ethnicity.

  17. KALIMER database development (database configuration and design methodology)

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Kwon, Young Min; Lee, Young Bum; Chang, Won Pyo; Hahn, Do Hee

    2001-10-01

    The KALIMER Database is an advanced database for the integrated management of Liquid Metal Reactor design technology development using Web applications. It consists of a Results Database, an Inter-Office Communication (IOC) system, a 3D CAD database, a Team Cooperation system, and Reserved Documents. The Results Database holds the research results of Phase II of Liquid Metal Reactor Design Technology Development under the mid- and long-term nuclear R and D program. The IOC is a linkage control system between sub-projects, used to share and integrate research results for KALIMER. The 3D CAD database provides a schematic design overview of KALIMER. The Team Cooperation system informs team members of research cooperation and meetings. Finally, the Reserved Documents component manages collected data and other documents accumulated since the project began. This report describes the hardware and software features and the database design methodology for KALIMER.

  18. Features Of The Local Government Development In Pre-Revolutionary Russia

    Directory of Open Access Journals (Sweden)

    Lyubov I. Rogacheva

    2014-09-01

    In the present article, the historical development of municipal government institutions in pre-revolutionary Russia is analyzed. The author emphasizes that at different stages of the historical development of Russian statehood, a great variety of forms and institutions of self-government emerged. History shows that municipal government in Russian conditions cannot develop fully under a weakening central government: without the support of a strong state, municipal government has little chance, but a strong central government may also suppress or crush self-government. In Russian history the balance between public administration and self-government was quite often tipped in favor of the former, which may be explained by the political and geographical factors of the country's development that made a strong centralized government necessary. The reforms of the mid-XIX century sharpened the question of reforming municipal government and of creating a new system in place of the imperfect and outdated former structure. The abolition of serfdom, which changed the legal status of a considerable part of the population, created the need to involve the rural estates in solving various economic tasks at the municipal level. This task was addressed to a certain degree by the reforms of the second half of the XIX century. The model created in this period, however, was not ideal; it suffered from many defects long inherent in the Russian system of state and public relations. At the beginning of the XX century, the reform of municipal government was again actively discussed by scholars and public figures, and numerous publications appeared on the subject. Revolutionary events, however, made questions of municipal government almost irrelevant for a long time.

  19. Governing Forest Ecosystem Services for Sustainable Environmental Governance: A Review

    Directory of Open Access Journals (Sweden)

    Shankar Adhikari

    2018-05-01

    Governing forest ecosystem services as a forest socio-ecological system is an evolving concept in the face of different environmental and social challenges. Therefore, different modes of ecosystem governance such as hierarchical, scientific–technical, and adaptive–collaborative governance have been developed. Although each form of governance offers important features, no one form on its own is sufficient to attain sustainable environmental governance (SEG). Thus, the blending of important features of each mode of governance could contribute to SEG, through a combination of both hierarchical and collaborative governance systems supported by scientifically and technically aided knowledge. This should be further reinforced by the broad engagement of stakeholders to ensure the improved well-being of both ecosystems and humans. Several forms of governance and forest management measures, including sustainable forest management, forest certification, and payment for ecosystem services mechanisms, are also contributing to that end. While issues around commodification and putting a price on nature are still contested due to the complex relationship between different services, if these limitations are taken into account, the governance of forest ecosystem services will serve as a means of effective environmental governance and the sustainable management of forest resources. Therefore, forest ecosystem services governance has a promising future for SEG, provided limitations are tackled with due care in future governance endeavors.

  20. Integrating adaptive governance and participatory multicriteria methods: a framework for climate adaptation governance

    Directory of Open Access Journals (Sweden)

    Stefania Munaretto

    2014-06-01

    Climate adaptation is a dynamic social and institutional process where the governance dimension is receiving growing attention. Adaptive governance is an approach that promises to reduce uncertainty by improving the knowledge base for decision making. As uncertainty is an inherent feature of climate adaptation, adaptive governance seems to be a promising approach for improving climate adaptation governance. However, the adaptive governance literature has so far paid little attention to decision-making tools and methods, and the literature on the governance of adaptation is in its infancy in this regard. We argue that climate adaptation governance would benefit from systematic and yet flexible decision-making tools and methods such as participatory multicriteria methods for the evaluation of adaptation options, and that these methods can be linked to key adaptive governance principles. Moving from these premises, we propose a framework that integrates key adaptive governance features into participatory multicriteria methods for the governance of climate adaptation.

  1. DOE technology information management system database study report

    Energy Technology Data Exchange (ETDEWEB)

    Widing, M.A.; Blodgett, D.W.; Braun, M.D.; Jusko, M.J.; Keisler, J.M.; Love, R.J.; Robinson, G.L. [Argonne National Lab., IL (United States). Decision and Information Sciences Div.

    1994-11-01

    To support the missions of the US Department of Energy (DOE) Special Technologies Program, Argonne National Laboratory is defining the requirements for an automated software system that will search electronic databases on technology. This report examines the work done and results to date. Argonne studied existing commercial and government sources of technology databases in five general areas: on-line services, patent database sources, government sources, aerospace technology sources, and general technology sources. First, it conducted a preliminary investigation of these sources to obtain information on the content, cost, frequency of updates, and other aspects of their databases. The Laboratory then performed detailed examinations of at least one source in each area. On this basis, Argonne recommended which databases should be incorporated in DOE's Technology Information Management System.

  2. Reach Address Database (RAD)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Reach Address Database (RAD) stores the reach address of each Water Program feature that has been linked to the underlying surface water features (streams,...

  3. Kansas Cartographic Database (KCD)

    Data.gov (United States)

    Kansas Data Access and Support Center — The Kansas Cartographic Database (KCD) is an exact digital representation of selected features from the USGS 7.5 minute topographic map series. Features that are...

  4. Brasilia’s Database Administrators

    Directory of Open Access Journals (Sweden)

    Jane Adriana

    2016-06-01

    Database administration has gained an essential role in the management of new database technologies. Different data models are being created to support enormous data volumes, beyond the traditional relational database. These new models are called NoSQL (Not only SQL) databases. The adoption of best practices and procedures has become essential for the operation of database management systems. Thus, this paper investigates some of the techniques and tools used by database administrators. The study highlights features and particularities of databases within the area of Brasilia, the Capital of Brazil. The results point to which new technologies regarding database management are currently the most relevant, as well as the central issues in this area.

  5. Feature Extraction Using Fractal Codes

    NARCIS (Netherlands)

    B.A.M. Schouten (Ben); P.M. de Zeeuw (Paul)

    1999-01-01

    Fast and successful searching for an object in a multimedia database is a highly desirable functionality. Several approaches to content based retrieval for multimedia databases can be found in the literature [9,10,12,14,17]. The approach we consider is feature extraction. A feature can

  6. An Internet of Things Approach for Extracting Featured Data Using AIS Database: An Application Based on the Viewpoint of Connected Ships

    Directory of Open Access Journals (Sweden)

    Wei He

    2017-09-01

    Automatic Identification System (AIS), as a major data source of navigational data, is widely used in the application of connected ships for the purpose of implementing maritime situation awareness and evaluating maritime transportation. Efficiently extracting featured data from the AIS database is always a challenging and time-consuming task for maritime administrators and researchers. In this paper, a novel approach was proposed to extract massive featured data from the AIS database. An Evidential Reasoning rule-based methodology was proposed to simulate the manual procedure of extracting routes from the AIS database. First, the frequency distributions of ship dynamic attributes, such as the mean and variance of Speed over Ground and Course over Ground, are obtained according to the verified AIS data samples. Subsequently, the correlations between the attributes and the belief degrees of the categories are established based on likelihood modeling. In this case, the attributes were characterized into several pieces of evidence, and the evidence can be combined with the Evidential Reasoning rule. In addition, the weight coefficients were trained in a nonlinear optimization model to extract the AIS data more accurately. A real-life case study was conducted at an intersection waterway, Yangtze River, Wuhan, China. The results show that the proposed methodology is able to extract data very precisely.
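
    The evidence-combination idea can be sketched as below. Note that this normalized-product fusion is a simplified stand-in for the full Evidential Reasoning rule, which additionally carries evidence weights and unassigned belief; the route categories and belief degrees are invented for the example.

```python
def combine(evidence):
    """Fuse per-attribute belief degrees over route categories by
    normalized product -- a simplified stand-in for the ER rule."""
    categories = evidence[0].keys()
    combined = {c: 1.0 for c in categories}
    for beliefs in evidence:
        for c in categories:
            combined[c] *= beliefs[c]
    total = sum(combined.values())
    return {c: v / total for c, v in combined.items()}

# Hypothetical belief degrees that an AIS track belongs to each route,
# as might come from the SOG and COG likelihood models.
sog_evidence = {"route_A": 0.7, "route_B": 0.3}
cog_evidence = {"route_A": 0.6, "route_B": 0.4}
fused = combine([sog_evidence, cog_evidence])
print(fused)  # route_A dominates: 0.42 / (0.42 + 0.12) = 0.778
```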

  7. Feature extraction using fractal codes

    NARCIS (Netherlands)

    B.A.M. Ben Schouten; Paul M. de Zeeuw

    1999-01-01

    Fast and successful searching for an object in a multimedia database is a highly desirable functionality. Several approaches to content based retrieval for multimedia databases can be found in the literature [9,10,12,14,17]. The approach we consider is feature extraction. A feature can be seen as a

  8. A community effort to construct a gravity database for the United States and an associated Web portal

    Science.gov (United States)

    Keller, Gordon R.; Hildenbrand, T.G.; Kucks, R.; Webring, M.; Briesacher, A.; Rujawitz, K.; Hittleman, A.M.; Roman, D.R.; Winester, D.; Aldouri, R.; Seeley, J.; Rasillo, J.; Torres, R.; Hinze, W. J.; Gates, A.; Kreinovich, V.; Salayandia, L.

    2006-01-01

    Potential field data (gravity and magnetic measurements) are both useful and cost-effective tools for many geologic investigations. Significant amounts of these data are traditionally in the public domain. A new magnetic database for North America was released in 2002, and as a result, a cooperative effort between government agencies, industry, and universities to compile an upgraded digital gravity anomaly database, grid, and map for the conterminous United States was initiated and is the subject of this paper. This database is being crafted into a data system that is accessible through a Web portal. This data system features the database, software tools, and convenient access. The Web portal will enhance the quality and quantity of data contributed to the gravity database that will be a shared community resource. The system's totally digital nature ensures that it will be flexible so that it can grow and evolve as new data, processing procedures, and modeling and visualization tools become available. Another goal of this Web-based data system is facilitation of the efforts of researchers and students who wish to collect data from regions currently not represented adequately in the database. The primary goal of upgrading the United States gravity database and this data system is to provide more reliable data that support societal and scientific investigations of national importance. An additional motivation is the international intent to compile an enhanced North American gravity database, which is critical to understanding regional geologic features, the tectonic evolution of the continent, and other issues that cross national boundaries. © 2006 Geological Society of America. All rights reserved.

  9. Migration Between NoSQL Databases

    OpenAIRE

    Opačak, Damir

    2013-01-01

    The thesis discusses the differences and, consequently, potential problems that may arise when migrating between different types of NoSQL databases. The first chapters introduce the reader to the issues of relational databases and present the beginnings of NoSQL databases. The following chapters present different types of NoSQL databases and some of their representatives with the aim to show specific features of NoSQL databases and the fact that each of them was developed to solve specifi...

  10. Identifying the relevant features of the National Digital Cadastral Database (NDCDB) for spatial analysis by using the Delphi Technique

    Science.gov (United States)

    Halim, N. Z. A.; Sulaiman, S. A.; Talib, K.; Ng, E. G.

    2018-02-01

    This paper explains the process carried out in identifying the relevant features of the National Digital Cadastral Database (NDCDB) for spatial analysis. The research was initially a part of a larger research exercise to identify the significance of NDCDB from the legal, technical, role and land-based analysis perspectives. The research methodology of applying the Delphi technique is substantially discussed in this paper. A heterogeneous panel of 14 experts was created to determine the importance of NDCDB from the technical relevance standpoint. Three statements describing the relevant features of NDCDB for spatial analysis were established after three rounds of consensus building. It highlighted the NDCDB’s characteristics such as its spatial accuracy, functions, and criteria as a facilitating tool for spatial analysis. By recognising the relevant features of NDCDB for spatial analysis in this study, practical application of NDCDB for various analysis and purpose can be widely implemented.

  11. Governing climate change transnationally: assessing the evidence from a database of sixty initiatives

    OpenAIRE

    Harriet Bulkeley; Liliana Andonova; Karin Bäckstrand; Michele Betsill; Daniel Compagnon; Rosaleen Duffy; Ans Kolk; Matthew Hoffmann; David Levy; Peter Newell; Tori Milledge; Matthew Paterson; Philipp Pattberg; Stacy VanDeveer

    2012-01-01

    With this paper we present an analysis of sixty transnational governance initiatives and assess the implications for our understanding of the roles of public and private actors, the legitimacy of governance ‘beyond’ the state, and the North–South dimensions of governing climate change. In the first part of the paper we examine the notion of transnational governance and its applicability in the climate change arena, reflecting on the history and emergence of transnational governance initiative...

  12. A prototype feature system for feature retrieval using relationships

    Science.gov (United States)

    Choi, J.; Usery, E.L.

    2009-01-01

    Using a feature data model, geographic phenomena can be represented effectively by integrating space, theme, and time. This paper extends and implements a feature data model that supports query and visualization of geographic features using their non-spatial and temporal relationships. A prototype feature-oriented geographic information system (FOGIS) is then developed, and a storage scheme for features, named the Feature Database, is designed. Buildings from the U.S. Marine Corps Base, Camp Lejeune, North Carolina and subways in Chicago, Illinois are used to test the developed system. The results of the applications show the strength of the feature data model and the developed system FOGIS when non-spatial and temporal relationships are utilized to retrieve and visualize individual features.
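
    A minimal sketch of feature retrieval through non-spatial (part-of) and temporal relationships, in the spirit of FOGIS. The Feature class, attribute names, and records are assumptions for illustration; geometry is omitted.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Feature:
    """A feature integrates theme and time; geometry is omitted in this sketch."""
    name: str
    theme: str
    built: int                     # temporal attribute
    part_of: Optional[str] = None  # non-spatial relationship

FEATURES = [
    Feature("Building_12", "barracks", 1955, part_of="Camp_Lejeune"),
    Feature("Building_40", "armory", 1978, part_of="Camp_Lejeune"),
    Feature("Red_Line", "subway", 1943, part_of="Chicago_Transit"),
]

def query(part_of: Optional[str] = None,
          built_before: Optional[int] = None) -> List[str]:
    """Retrieve features by their non-spatial and temporal relationships."""
    out = FEATURES
    if part_of is not None:
        out = [f for f in out if f.part_of == part_of]
    if built_before is not None:
        out = [f for f in out if f.built < built_before]
    return [f.name for f in out]

print(query(part_of="Camp_Lejeune", built_before=1970))  # ['Building_12']
```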

  13. Conceptual considerations for CBM databases

    Energy Technology Data Exchange (ETDEWEB)

    Akishina, E. P.; Aleksandrov, E. I.; Aleksandrov, I. N.; Filozova, I. A.; Ivanov, V. V.; Zrelov, P. V. [Lab. of Information Technologies, JINR, Dubna (Russian Federation); Friese, V.; Mueller, W. [GSI, Darmstadt (Germany)

    2014-07-01

    We consider a concept of databases for the CBM experiment. For this purpose, an analysis of the databases for large experiments at the LHC at CERN has been performed. Special features of various DBMS utilized in physical experiments, including relational and object-oriented DBMS as the most applicable ones for the tasks of these experiments, were analyzed. A set of databases for the CBM experiment, DBMS for their developments as well as use cases for the considered databases are suggested.

  14. Conceptual considerations for CBM databases

    International Nuclear Information System (INIS)

    Akishina, E.P.; Aleksandrov, E.I.; Aleksandrov, I.N.; Filozova, I.A.; Ivanov, V.V.; Zrelov, P.V.; Friese, V.; Mueller, W.

    2014-01-01

    We consider a concept of databases for the CBM experiment. For this purpose, an analysis of the databases for large experiments at the LHC at CERN has been performed. Special features of various DBMS utilized in physical experiments, including relational and object-oriented DBMS as the most applicable ones for the tasks of these experiments, were analyzed. A set of databases for the CBM experiment, DBMS for their developments as well as use cases for the considered databases are suggested.

  15. Integrating adaptive governance and participatory multicriteria methods: a framework for climate adaptation governance

    NARCIS (Netherlands)

    Munaretto, S.; Siciliano, G.; Turvani, M.

    2014-01-01

    Climate adaptation is a dynamic social and institutional process where the governance dimension is receiving growing attention. Adaptive governance is an approach that promises to reduce uncertainty by improving the knowledge base for decision making. As uncertainty is an inherent feature of climate

  16. Searching LOGIN, the Local Government Information Network.

    Science.gov (United States)

    Jack, Robert F.

    1984-01-01

    Describes a computer-based information retrieval and electronic messaging system produced by Control Data Corporation now being used by government agencies and other organizations. Background of Local Government Information Network (LOGIN), database structure, types of LOGIN units, searching LOGIN (intersect, display, and list commands), and how…

  17. Multi-level governance of forest resources (Editorial to the special feature)

    Directory of Open Access Journals (Sweden)

    Esther Mwangi

    2012-08-01

    A major challenge for many researchers and practitioners relates to how to recognize and address cross-scale dynamics in space and over time in order to design and implement effective governance arrangements. This editorial provides an overview of the concept of multi-level governance (MLG). In particular, we highlight definitional issues and why the concept matters, as well as more practical concerns related to the processes and structure of multi-level governance. It is increasingly clear that multi-level governance of forest resources involves complex interactions of state, private and civil society actors at various levels, and institutions linking higher levels of social and political organization. Local communities are increasingly connected to global networks and influences. This creates new opportunities to learn and address problems but may also introduce new pressures and risks. We conclude by stressing the need for a more complex approach to the varieties of MLG to better understand how policies work as instruments of governance and to organize communities within systems of power and authority.

  18. Analysis of TRMM-LIS Lightning and Related Microphysics Using a Cell-Scale Database

    Science.gov (United States)

    Leroy, Anita; Petersen, Walter A.

    2010-01-01

    Previous studies of tropical lightning activity using Tropical Rainfall Measurement Mission (TRMM) Lightning Imaging Sensor (LIS) data performed analyses of lightning behavior over mesoscale "feature" scales or over uniform grids. In order to study lightning and the governing ice microphysics intrinsic to thunderstorms at a more process-specific scale (i.e., the scale over which electrification processes and lightning occur in a "unit" thunderstorm), a new convective cell-scale database was developed by analyzing and refining the University of Utah's Precipitation Features database and retaining precipitation data parameters computed from the TRMM precipitation radar (PR), microwave imager (TMI) and LIS instruments. The resulting database was used to conduct a limited four-year study of tropical continental convection occurring over the Amazon Basin, Congo, Maritime Continent and the western Pacific Ocean. The analysis reveals expected strong correlations between lightning flash counts per cell and ice proxies, such as ice water path, minimum and average 85 GHz brightness temperatures, and 18 dBZ echo-top heights above the freezing level in all regimes, as well as regime-specific relationships between lightning flash counts and PR-derived surface rainfall rates. Additionally, radar CFADs were used to partition the 3D structure of cells in each regime at different flash counts. The resulting cell-scale analyses are compared to previous mesoscale feature and gridded studies wherever possible.

  19. A Case for Database Filesystems

    Energy Technology Data Exchange (ETDEWEB)

    Adams, P A; Hax, J C

    2009-05-13

    Data-intensive science is offering new challenges and opportunities for Information Technology, and for traditional relational databases in particular. Database filesystems offer the potential to store Level Zero data and analyze Level 1 and Level 3 data within the same database system [2]. Scientific data is typically composed of both unstructured files and scalar data. Oracle SecureFiles is a new database filesystem feature in Oracle Database 11g that is specifically engineered to deliver high performance and scalability for storing unstructured or file data inside the Oracle database. SecureFiles presents the best of both the filesystem and the database worlds for unstructured content. Data stored inside SecureFiles can be queried or written at performance levels comparable to those of traditional filesystems while retaining the advantages of the Oracle database.

  20. Governing climate change transnationally: assessing the evidence from a database of sixty initiatives

    NARCIS (Netherlands)

    Bulkeley, H.; Andanova, L.; Bäckstrand, K.; Betsill, M.; Compagnon, D.; Duffy, R.; Kolk, A.; Hoffmann, M.; Levy, D.; Newell, P.; Milledge, T.; Paterson, M.; Pattberg, P.; VanDeveer, S.

    2012-01-01

    With this paper we present an analysis of sixty transnational governance initiatives and assess the implications for our understanding of the roles of public and private actors, the legitimacy of governance ‘beyond’ the state, and the North-South dimensions of governing climate change. In the first

  1. County and Parish Boundaries - COUNTY_GOVERNMENT_BOUNDARIES_IDHS_IN: Governmental Boundaries Maintained by County Agencies in Indiana (Indiana Department of Homeland Security, Polygon feature class)

    Data.gov (United States)

    NSGIC State | GIS Inventory — COUNTY_GOVERNMENT_BOUNDARIES_IDHS_IN is a polygon feature class that contains governmental boundaries maintained by county agencies in Indiana, provided by personnel...

  2. The flux database concerted action

    International Nuclear Information System (INIS)

    Mitchell, N.G.; Donnelly, C.E.

    1999-01-01

    This paper summarizes the background to the UIR action on the development of a flux database for radionuclide transfer in soil-plant systems. The action is discussed in terms of the objectives, the deliverables and the progress achieved so far by the flux database working group. The paper describes the background to the current initiative and outlines specific features of the database and supporting documentation. Particular emphasis is placed on the proforma used for data entry, on the database help file and on the approach adopted to indicate data quality. Refs. 3 (author)

  3. Karst database development in Minnesota: Design and data assembly

    Science.gov (United States)

    Gao, Y.; Alexander, E.C.; Tipping, R.G.

    2005-01-01

    The Karst Feature Database (KFD) of Minnesota is a relational GIS-based Database Management System (DBMS). Previous karst feature datasets used inconsistent attributes to describe karst features in different areas of Minnesota. Existing metadata were modified and standardized to represent comprehensive metadata for all the karst features in Minnesota. Microsoft Access 2000 and ArcView 3.2 were used to develop this working database. Existing county and sub-county karst feature datasets have been assembled into the KFD, which is capable of visualizing and analyzing the entire data set. By November 17, 2002, 11,682 karst features were stored in the KFD of Minnesota. Data tables are stored in a Microsoft Access 2000 DBMS and linked to corresponding ArcView applications. The current KFD of Minnesota has been moved from a Windows NT server to a Windows 2000 Citrix server accessible to researchers and planners through networked interfaces. © Springer-Verlag 2005.
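
    The relational design described above can be illustrated with a minimal sketch in Python's stdlib sqlite3. The table name, attributes, and sample rows below are hypothetical illustrations, not the KFD's actual schema or data:

    ```python
    import sqlite3

    # Illustrative schema only; the KFD's actual metadata standard is not reproduced here.
    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE karst_feature (
        feature_id   INTEGER PRIMARY KEY,
        feature_type TEXT,      -- e.g. sinkhole, spring, stream sink
        county       TEXT,
        utm_easting  REAL,
        utm_northing REAL)""")
    con.executemany(
        "INSERT INTO karst_feature VALUES (?, ?, ?, ?, ?)",
        [(1, "sinkhole", "Fillmore", 567200.0, 4837100.0),
         (2, "spring",   "Fillmore", 568950.0, 4836400.0),
         (3, "sinkhole", "Olmsted",  551300.0, 4874800.0)])

    # The kind of SQL used to manipulate and summarize a feature inventory.
    rows = con.execute(
        "SELECT feature_type, COUNT(*) FROM karst_feature "
        "GROUP BY feature_type ORDER BY feature_type").fetchall()
    print(rows)   # [('sinkhole', 2), ('spring', 1)]
    ```

    Linking such tables to a GIS front end then amounts to joining on the coordinate columns.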

  4. Dynamically Integrating OSM Data into a Borderland Database

    Directory of Open Access Journals (Sweden)

    Xiaoguang Zhou

    2015-09-01

    Full Text Available Spatial data are fundamental for borderland analyses of geography, natural resources, demography, politics, economy, and culture. As the spatial data used in borderland research usually cover the borderland regions of several neighboring countries, it is difficult for any one research institution or government to collect them. Volunteered Geographic Information (VGI) is a highly successful method for acquiring timely and detailed global spatial data at a very low cost, and is therefore a reasonable source of borderland spatial data. OpenStreetMap (OSM) is known as the most successful VGI resource. However, OSM's data model is far different from the traditional geographic information model, so the OSM data must be converted into the scientist's customized data model. Because the real world changes rapidly, the converted data must be updated incrementally. Therefore, this paper presents a method used to dynamically integrate OSM data into a borderland database. In this method, a basic transformation rule base is formed by comparing the OSM Map Feature description document and the destination model definitions. Using the basic rules, the main features can be automatically converted to the destination model. A human-computer interactive model transformation and a rule/automatic-remember mechanism are developed to transfer the unusual features that cannot be handled by the basic rules to the target model and to remember the reusable rules automatically. To keep the borderland database current, the global OsmChange daily diff file is used to extract the change-only information for the research region. To extract the changed objects in the region under study, the relationship between each changed object and the research region is analyzed considering the evolution of the involved objects. In addition, five rules are determined to select the objects and integrate the changed objects with multi-versions over time. The objects…
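
    The change-only extraction step can be sketched as follows. The .osc fragment, the `changes_in_region` helper, and the bounding box are illustrative assumptions; the paper's full method also resolves object relationships and multi-version integration, which is not shown:

    ```python
    import xml.etree.ElementTree as ET

    # A minimal OsmChange (.osc) fragment; the real daily diffs follow the same
    # <create>/<modify>/<delete> layout.
    OSC = """<osmChange version="0.6">
      <create><node id="1" lat="22.50" lon="103.90"/></create>
      <modify><node id="2" lat="22.10" lon="104.20"/></modify>
      <delete><node id="3" lat="40.00" lon="-75.00"/></delete>
    </osmChange>"""

    def changes_in_region(osc_xml, south, west, north, east):
        """Extract change-only node information falling inside a bounding box."""
        root = ET.fromstring(osc_xml)
        result = {"create": [], "modify": [], "delete": []}
        for action in root:                      # <create>, <modify>, <delete>
            for node in action.iter("node"):
                lat, lon = float(node.get("lat")), float(node.get("lon"))
                if south <= lat <= north and west <= lon <= east:
                    result[action.tag].append(int(node.get("id")))
        return result

    # Hypothetical bounding box loosely covering a borderland study region.
    changes = changes_in_region(OSC, 21.0, 102.0, 24.0, 106.0)
    print(changes)   # nodes 1 and 2 fall inside the region; node 3 does not
    ```

    A production pipeline would additionally handle ways and relations, whose membership, not coordinates, determines whether they touch the region.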

  5. RODOS database adapter

    International Nuclear Information System (INIS)

    Xie Gang

    1995-11-01

    Integrated data management is an essential aspect of many automated information systems such as RODOS, a real-time on-line decision support system for nuclear emergency management. In particular, the application software must provide access management to different commercial database systems. This report presents the tools necessary for adapting embedded SQL applications to both HP-ALLBASE/SQL and CA-Ingres/SQL databases. The design of the database adapter and the concept of the RODOS embedded SQL syntax are discussed by considering some of the most important features of SQL functions and identifying significant differences between SQL implementations. Finally, the software developed and the administrator's and installation guides are described. (orig.) [de

  6. Enforcing Privacy in Cloud Databases

    OpenAIRE

    Moghadam, Somayeh Sobati; Darmont, Jérôme; Gavin, Gérald

    2017-01-01

    International audience; Outsourcing databases, i.e., resorting to Database-as-a-Service (DBaaS), is nowadays a popular choice due to the elasticity, availability, scalability and pay-as-you-go features of cloud computing. However, most data are sensitive to some extent, and data privacy remains one of the top concerns to DBaaS users, for obvious legal and competitive reasons.In this paper, we survey the mechanisms that aim at making databases secure in a cloud environment, and discuss current...

  7. Individual Identification Using Linear Projection of Heartbeat Features

    Directory of Open Access Journals (Sweden)

    Yogendra Narain Singh

    2014-01-01

    Full Text Available This paper presents a novel method to use the electrocardiogram (ECG) signal as a biometric for individual identification. The ECG characterization is performed using an automated approach consisting of analytical and appearance methods. The analytical method extracts the fiducial features from heartbeats while the appearance method extracts the morphological features from the ECG trace. We linearly project the extracted features into a subspace of lower dimension using an orthogonal basis that represents the most significant features for distinguishing heartbeats among the subjects. Results demonstrate that the proposed characterization of the ECG signal and the subsequently derived eigenbeat features are insensitive to signal variations and nonsignal artifacts. The proposed system utilizing the ECG biometric method achieves the best identification rates of 85.7% for the subjects of the MIT-BIH arrhythmia database and 92.49% for the healthy subjects of our IIT (BHU) database. These results are significantly better than the classification accuracies of 79.55% and 84.9%, reported using a support vector machine on the tested subjects of the MIT-BIH arrhythmia database and our IIT (BHU) database, respectively.
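
    The linear projection onto an orthogonal basis can be sketched as standard PCA via SVD; the synthetic feature matrix and the function name `eigen_projection` are assumptions for illustration, not the authors' code:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical heartbeat feature matrix: 3 subjects x 20 beats x 12 extracted
    # features (fiducial intervals plus morphological measurements).
    X = np.vstack([rng.normal(loc=m, scale=0.3, size=(20, 12))
                   for m in (0.0, 1.0, 2.0)])

    def eigen_projection(X, k):
        """Project features onto the top-k orthogonal basis vectors (PCA via SVD)."""
        Xc = X - X.mean(axis=0)              # center the feature matrix
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ Vt[:k].T                 # coordinates in the k-dim subspace

    Z = eigen_projection(X, k=3)
    print(Z.shape)   # each heartbeat is now described by 3 "eigenbeat" coordinates
    ```

    A nearest-neighbour or SVM classifier would then operate on `Z` rather than on the raw features.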

  8. Analysis of functionality free CASE-tools databases design

    Directory of Open Access Journals (Sweden)

    A. V. Gavrilov

    2016-01-01

    Full Text Available The introduction of CASE technologies for database design into the educational process requires significant expenditure by the institution for the purchase of software. A possible solution could be the use of free software equivalents. At the same time, this kind of substitution should be based on an even-handed comparison of the functional characteristics and operational features of these programs. The purpose of the article is a review of free and non-profit CASE-tools for database design, as well as their classification based on an analysis of their functionality. Materials from the official websites of the tool developers were used in writing this article. Evaluation of the functional characteristics of CASE-tools for database design was made exclusively empirically, through direct work with the software products. The analysis of the tools' functionality allows two categories of CASE-tools for database design to be distinguished. The first category includes systems with a basic set of features and tools. The most important basic functions of these systems are: management of connections to database servers, visual tools to create and modify database objects (tables, views, triggers, procedures), the ability to enter and edit data in table mode, user and privilege management tools, an SQL code editor, and data export/import facilities. CASE systems in the first category can be used to design and develop simple databases and manage data, as well as to administer database servers. A distinctive feature of the second category of CASE-tools for database design (full-featured systems) is the presence of a visual designer that allows the construction of a database model and the automatic creation of the database on the server based on this model. CASE systems in this category can be used for the design and development of databases of any structural complexity, as well as for database server administration. The article concludes that the…

  9. ASEAN Mineral Database and Information System (AMDIS)

    Science.gov (United States)

    Okubo, Y.; Ohno, T.; Bandibas, J. C.; Wakita, K.; Oki, Y.; Takahashi, Y.

    2014-12-01

    AMDIS was officially launched at the Fourth ASEAN Ministerial Meeting on Minerals on 28 November 2013. In cooperation with the Geological Survey of Japan, the web-based GIS was developed using Free and Open Source Software (FOSS) and the Open Geospatial Consortium (OGC) standards. The system is composed of the local databases and the centralized GIS. The local databases created and updated using the centralized GIS are accessible from the portal site. The system introduces distinct advantages over traditional GIS: global reach, a large number of users, better cross-platform capability, no charge for users, no charge for the provider, ease of use, and unified updates. By raising the transparency of mineral information to mining companies and to the public, AMDIS shows that mineral resources are abundant throughout the ASEAN region; however, there are many data gaps. We understand that such problems occur because of insufficient governance of mineral resources. The mineral governance we refer to is a concept that strengthens and maximizes the capacity and systems of the government institutions that manage the minerals sector. The elements of mineral governance include a) strengthening of the information infrastructure facility, b) technological and legal capacities of state-owned mining companies to fully engage with mining sponsors, c) government-led management of mining projects by supporting the project implementation units, d) government capacity in mineral management such as the control and monitoring of mining operations, and e) facilitation of regional and local development plans and their implementation with the private sector.

  10. Database Software for the 1990s.

    Science.gov (United States)

    Beiser, Karl

    1990-01-01

    Examines trends in the design of database management systems for microcomputers and predicts developments that may occur in the next decade. Possible developments are discussed in the areas of user interfaces, database programing, library systems, the use of MARC data, CD-ROM applications, artificial intelligence features, HyperCard, and…

  11. A new feature constituting approach to detection of vocal fold pathology

    Science.gov (United States)

    Hariharan, M.; Polat, Kemal; Yaacob, Sazali

    2014-08-01

    In the last two decades, non-invasive methods based on acoustic analysis of the voice signal have proved to be an excellent and reliable tool for diagnosing vocal fold pathologies. This paper proposes a new feature vector based on the wavelet packet transform and singular value decomposition for the detection of vocal fold pathology. k-means clustering based feature weighting is proposed to increase the distinguishing power of the proposed features. In this work, two databases are used: the Massachusetts Eye and Ear Infirmary (MEEI) voice disorders database and the MAPACI speech pathology database. Four different supervised classifiers, k-nearest neighbour (k-NN), least-squares support vector machine, probabilistic neural network and general regression neural network, are employed for testing the proposed features. The experimental results show that the proposed features give a very promising classification accuracy of 100% for both the MEEI database and the MAPACI speech pathology database.
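
    The k-means-based feature weighting can be sketched as below, assuming one common variant in which each feature is scaled by the ratio of its overall mean to the mean of its k-means cluster centers; the exact recipe used in the paper may differ:

    ```python
    import numpy as np

    def kmeans_1d(x, k=2, iters=20, seed=0):
        """Tiny Lloyd's algorithm on a single feature column."""
        rng = np.random.default_rng(seed)
        centers = rng.choice(x, size=k, replace=False)
        for _ in range(iters):
            labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = x[labels == j].mean()
        return centers

    def kmeans_feature_weighting(X, k=2):
        """Scale each feature by mean(feature) / mean(its cluster centers)."""
        Xw = np.empty_like(X, dtype=float)
        for j in range(X.shape[1]):
            centers = kmeans_1d(X[:, j].astype(float), k)
            weight = X[:, j].mean() / centers.mean()
            Xw[:, j] = X[:, j] * weight
        return Xw

    # Hypothetical positive-valued acoustic features: 30 voice samples x 4 features.
    rng = np.random.default_rng(0)
    X = np.abs(rng.normal(loc=5.0, scale=1.0, size=(30, 4)))
    Xw = kmeans_feature_weighting(X)
    ```

    The weighted matrix `Xw` would then be fed to the supervised classifiers in place of `X`.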

  12. The offshore hydrocarbon releases (HCR) database

    International Nuclear Information System (INIS)

    Bruce, R.A.P.

    1995-01-01

    Following Cullen Recommendation 39 which states that: ''The regulatory body should be responsible for maintaining a database with regard to hydrocarbon leaks, spills, and ignitions in the Industry and for the benefit of Industry'', HSE Offshore Safety Division (HSE-OSD) has now been operating the Hydrocarbon Releases (HCR) Database for approximately 3 years. This paper deals with the reporting of Offshore Hydrocarbon Releases, the setting up of the HCR Database, the collection of associated equipment population data, and the main features and benefits of the database, including discussion on the latest output information. (author)

  13. Report of the SRC working party on databases and database management systems

    International Nuclear Information System (INIS)

    Crennell, K.M.

    1980-10-01

    An SRC working party, set up to consider the subject of support for databases within the SRC, was asked to identify interested individuals and user communities, establish which features of database management systems they felt were desirable, arrange demonstrations of possible systems and then make recommendations for systems, funding and likely manpower requirements. This report describes the activities and lists the recommendations of the working party, and contains a list of databases maintained or proposed by those who replied to a questionnaire. (author)

  14. Key Features of Governance in Brazilian Science and Technology Parks

    Directory of Open Access Journals (Sweden)

    Milton Correia Sampaio Filho

    2017-09-01

    Full Text Available The state of Brazilian Science and Technology Parks' (STPs) operation led to this field research. Even with a public policy of stimulus and support from associations, nothing has been mapped on the dissemination of results (economic growth and regional development). This scenario prompts the question: what are the governance characteristics of Brazilian Science and Technology Parks? An empirical field study was conducted, with the possibility of replication supported by the registration of the selection criteria for the multiple cases and by detailed documentation of the research and data collection. Eight STPs (TECNOPUC - Porto Alegre, Valetec - Novo Hamburgo, Tecnosinos - Sao Leopoldo, Unicamp, CIATEC and TECHNOPARK - Campinas, Rio Park - Rio de Janeiro and SergipeTec) participated in the research. The results and considerations regarding the research question allow one to infer that the limited effectiveness of governance (without qualitative or quantitative performance indicators) is possibly caused by tensions characterized by elements such as heterogeneity in the characteristics of the organizations that are part of STPs, lack of consensus on common goals, pressure forces and influences affecting trust, nonconformity to standards, and personal and organizational preferences. Leadership relations dominated by the government and/or companies can negatively influence the STP's performance as a whole.

  15. Australian Government Information Resources

    OpenAIRE

    Chapman, Bert

    2017-01-01

    Provides an overview of Australian Government information resources. Features content from Australian Government agency websites such as the Department of Environment and Energy, Department of Defence, Australian National Maritime Museum, ANZAC Memorial in Sydney, Department of Immigration & Border Protection, Australian Bureau of Statistics, Australian Dept. of Agriculture and Water Resources, Australian Parliament, Australian Treasury, Australian Transport Safety Board, and Australian Parl...

  16. Mobile object retrieval in server-based image databases

    Science.gov (United States)

    Manger, D.; Pagel, F.; Widak, H.

    2013-05-01

    The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information of the images on site, image retrieval systems are becoming more and more popular as a way to search for similar objects in one's own image database. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database, or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, the scalability to large image databases is addressed with the popular bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface presenting the most similar images of the database and highlighting the visual information that is common with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.
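
    The server-side bag-of-words model can be sketched as follows: local descriptors are quantized against a k-means vocabulary and images are compared by the cosine similarity of their word histograms. The descriptor dimensionality, vocabulary size, and helper names are illustrative, and the paper's state-of-the-art extensions are omitted:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def kmeans(X, k, iters=15):
        """Plain Lloyd's algorithm to learn the visual vocabulary."""
        centers = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        return centers

    def bow_histogram(descriptors, vocab):
        """Quantize local descriptors against the vocabulary, L1-normalize."""
        words = np.argmin(((descriptors[:, None, :] - vocab[None]) ** 2).sum(-1), axis=1)
        hist = np.bincount(words, minlength=len(vocab)).astype(float)
        return hist / hist.sum()

    # Hypothetical 64-D local descriptors (SIFT-like) pooled from a training set.
    train_desc = rng.normal(size=(500, 64))
    vocab = kmeans(train_desc, k=32)          # the "visual words"

    query = bow_histogram(rng.normal(size=(80, 64)), vocab)
    db_img = bow_histogram(rng.normal(size=(120, 64)), vocab)
    similarity = query @ db_img / (np.linalg.norm(query) * np.linalg.norm(db_img))
    ```

    In a real backend the database histograms are indexed with an inverted file so that only images sharing visual words with the query are scored.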

  17. Geo-spatial Service and Application based on National E-government Network Platform and Cloud

    Science.gov (United States)

    Meng, X.; Deng, Y.; Li, H.; Yao, L.; Shi, J.

    2014-04-01

    With the acceleration of China's informatization process, the party and government have taken substantive strides in advancing the development and application of digital technology, which promotes the evolution of e-government and its informatization. Meanwhile, as a service mode based on innovative resources, cloud computing can connect huge resource pools together to provide a variety of IT services, and has become a relatively mature technical pattern through further studies and massive practical applications. Based on cloud computing technology and the national e-government network platform, the "National Natural Resources and Geospatial Database (NRGD)" project integrated and transformed natural resources and geospatial information dispersed across various sectors and regions, established a logically unified and physically dispersed fundamental database, and developed a national integrated information database system supporting major e-government applications. Cross-sector e-government applications and services are realized to provide long-term, stable and standardized natural resources and geospatial fundamental information products and services for national e-government and public users.

  18. Yucca Mountain digital database

    International Nuclear Information System (INIS)

    Daudt, C.R.; Hinze, W.J.

    1992-01-01

    This paper discusses the Yucca Mountain Digital Database (DDB) which is a digital, PC-based geographical database of geoscience-related characteristics of the proposed high-level waste (HLW) repository site of Yucca Mountain, Nevada. It was created to provide the US Nuclear Regulatory Commission's (NRC) Advisory Committee on Nuclear Waste (ACNW) and its staff with a visual perspective of geological, geophysical, and hydrological features at the Yucca Mountain site as discussed in the Department of Energy's (DOE) pre-licensing reports

  19. Local Government System in Japan

    Directory of Open Access Journals (Sweden)

    Vladimir V. Redko

    2016-12-01

    Full Text Available The article is devoted to the activities of local government in Japan. Particular attention is drawn to the legal framework and the material basis for the functioning of local self-government bodies. The system of local self-government is considered as a special form of self-government with specific functions and meaning: a system of municipal management and delegation of authority, as well as the features of interaction between civil and governmental levels. The designation of cities with special status, as well as the financial structure of local government in Japan, is considered in detail.

  20. Federal Advisory Committee Act (FACA) Database-Complete-Raw

    Data.gov (United States)

    General Services Administration — The Federal Advisory Committee Act (FACA) database is used by Federal agencies to continuously manage an average of 1,000 advisory committees government-wide. This...

  1. Domain Regeneration for Cross-Database Micro-Expression Recognition

    Science.gov (United States)

    Zong, Yuan; Zheng, Wenming; Huang, Xiaohua; Shi, Jingang; Cui, Zhen; Zhao, Guoying

    2018-05-01

    In this paper, we investigate the cross-database micro-expression recognition problem, where the training and testing samples come from two different micro-expression databases. Under this setting, the training and testing samples have different feature distributions, and hence the performance of most existing micro-expression recognition methods may decrease greatly. To solve this problem, we propose a simple yet effective method called the Target Sample Re-Generator (TSRG). Using TSRG, we are able to re-generate the samples from the target micro-expression database so that the re-generated target samples share the same or similar feature distributions with the original source samples. For this reason, we can then use the classifier learned on the labeled source samples to accurately predict the micro-expression categories of the unlabeled target samples. To evaluate the performance of the proposed TSRG method, extensive cross-database micro-expression recognition experiments based on the SMIC and CASME II databases are conducted. Compared with recent state-of-the-art cross-database emotion recognition methods, the proposed TSRG achieves more promising results.
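
    TSRG itself learns a sample re-generator; as a much simpler stand-in for the same idea, the sketch below merely aligns the per-feature mean and standard deviation of target samples to the source statistics (a z-score alignment, not the authors' method; all data are synthetic):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic source and target micro-expression features with mismatched
    # distributions, standing in for two different databases.
    source = rng.normal(loc=0.0, scale=1.0, size=(200, 16))
    target = rng.normal(loc=3.0, scale=2.0, size=(150, 16))

    def align_to_source(target, source):
        """Shift/rescale target features to match source per-feature mean and std."""
        t = (target - target.mean(axis=0)) / target.std(axis=0)
        return t * source.std(axis=0) + source.mean(axis=0)

    aligned = align_to_source(target, source)
    ```

    A classifier trained on `source` can then be applied to `aligned` with matched first- and second-order statistics, which is the weakest form of the distribution matching TSRG aims for.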

  2. Move Over, Word Processors--Here Come the Databases.

    Science.gov (United States)

    Olds, Henry F., Jr.; Dickenson, Anne

    1985-01-01

    Discusses the use of beginning, intermediate, and advanced databases for instructional purposes. A table listing seven databases with information on ease of use, smoothness of operation, data capacity, speed, source, and program features is included. (JN)

  3. Economic and Structural Database for the MEDPRO Project

    OpenAIRE

    Paroussos, Leonidas; Tsani, Stella; Vrontisi, Zoi

    2013-01-01

    This report presents the economic and structural database compiled for the MEDPRO project. The database includes governance, infrastructure, finance, environment, energy, agricultural data and development indicators for the 11 southern and eastern Mediterranean countries (SEMCs) studied in the MEDPRO project. The report further details the data and the methods used for the construction of social accounting, bilateral trade, consumption and investment matrices for each of the SEMCs.

  4. i-Genome: A database to summarize oligonucleotide data in genomes

    Directory of Open Access Journals (Sweden)

    Chang Yu-Chung

    2004-10-01

    Full Text Available Abstract Background Information on the occurrence of sequence features in genomes is crucial to comparative genomics, evolutionary analysis, the analysis of regulatory sequences and the quantitative evaluation of sequences. Computing the frequencies and occurrences of a pattern in complete genomes is time-consuming. Results The proposed database provides information about sequence features generated by exhaustive computation over the sequences of the complete genome. The repetitive elements in eukaryotic genomes, such as LINEs, SINEs, Alu and LTR, are obtained from Repbase. The database supports various complete genomes including human, yeast, worm, and 128 microbial genomes. Conclusions This investigation presents and implements an efficient computational approach to accumulating the occurrences of oligonucleotides or patterns in complete genomes. A database is established to maintain the information on these sequence features, including the distributions of oligonucleotides, the gene distribution, the distribution of repetitive elements in genomes and the occurrences of the oligonucleotides. The database provides a more effective and efficient way to access the repetitive features in genomes.
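
    The exhaustive counting of oligonucleotide occurrences can be sketched with a sliding window; the sequence and function name are illustrative:

    ```python
    from collections import Counter

    def oligo_counts(sequence, k):
        """Occurrences of every length-k oligonucleotide in a genome sequence."""
        seq = sequence.upper()
        return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

    genome = "ATGCGATGCGAT"          # toy stand-in for a complete genome
    counts = oligo_counts(genome, 4)
    print(counts["ATGC"])           # ATGC occurs at positions 0 and 5 -> 2
    ```

    Frequencies follow by dividing each count by the total number of windows, `len(genome) - k + 1`; precomputing these tables for every k is exactly what makes the database queries fast.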

  5. Dutch urban governance: multi-level of multi-scalar?

    NARCIS (Netherlands)

    Kokx, J.M.C.; Kempen, R. van

    2010-01-01

    Many accounts of urban governance emphasize municipal and neighbourhood scales, featuring local participation, social cohesion and the relationship between local government and residents. By contrast, our focus is the vertical governance processes of integrated urban policies. We concentrate on the

  6. TU-A-12A-04: Quantitative Texture Features Calculated in Lung Tissue From CT Scans Demonstrate Consistency Between Two Databases From Different Institutions

    International Nuclear Information System (INIS)

    Cunliffe, A; Armato, S; Castillo, R; Pham, N; Guerrero, T; Al-Hallaq, H

    2014-01-01

    Purpose: To evaluate the consistency of computed tomography (CT) scan texture features, previously identified as stable in a healthy patient cohort, in esophageal cancer patient CT scans. Methods: 116 patients receiving radiation therapy (median dose: 50.4 Gy) for esophageal cancer were retrospectively identified. For each patient, diagnostic-quality pre-therapy (0-183 days) and post-therapy (5-120 days) scans (mean voxel size: 0.8 mm × 0.8 mm × 2.5 mm) and a treatment planning scan and associated dose map were collected. An average of 501 32×32-pixel ROIs were placed randomly in the lungs of each pre-therapy scan. ROI centers were mapped to corresponding locations in post-therapy and planning scans using the displacement vector field output by demons deformable registration. Only ROIs with mean dose <5 Gy were analyzed, as these were expected to contain minimal post-treatment damage. 140 texture features were calculated in pre-therapy and post-therapy scan ROIs and compared using Bland-Altman analysis. For each feature, the mean feature value change and the distance spanned by the 95% limits of agreement were normalized to the mean feature value, yielding normalized range of agreement (nRoA) and normalized bias (nBias). Using Wilcoxon signed rank tests, nRoA and nBias were compared with values computed previously in 27 healthy patient scans (mean voxel size: 0.67 mm × 0.67 mm × 1 mm) acquired at a different institution. Results: nRoA was significantly (p<0.001) larger in cancer patients than healthy patients. Differences in nBias were not significant (p=0.23). The 20 features identified previously as having nRoA<20% for healthy patients had the lowest nRoA values in the current database, with an average increase of 5.6%. Conclusion: Despite differences in CT scanner type, scan resolution, and patient health status, the same 20 features remained stable (i.e., low variability and bias) in the absence of disease changes for databases from two institutions. Identification of…
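
    The nBias/nRoA computation described above can be sketched directly from the Bland-Altman definitions; the sample values and function name are hypothetical:

    ```python
    import statistics

    def bland_altman_normalized(pre, post):
        """Normalized bias and normalized 95% range of agreement for one feature."""
        diffs = [b - a for a, b in zip(pre, post)]
        mean_val = statistics.mean(pre + post)   # mean feature value for normalization
        bias = statistics.mean(diffs)
        sd = statistics.stdev(diffs)
        n_bias = bias / mean_val
        n_roa = (2 * 1.96 * sd) / mean_val       # width spanned by bias +/- 1.96*sd
        return n_bias, n_roa

    # Hypothetical pre- and post-therapy values of one texture feature in low-dose ROIs.
    n_bias, n_roa = bland_altman_normalized([10.0, 11.0, 12.0, 13.0],
                                            [10.5, 11.2, 12.1, 13.4])
    ```

    A feature is then flagged as stable when its nRoA stays below the chosen threshold (20% in the study above) and its nBias is small.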

  7. THE DEVELOPMENT OF THE YUCCA MOUNTAIN PROJECT FEATURE, EVENT, AND PROCESS (FEP) DATABASE

    International Nuclear Information System (INIS)

    Freeze, G.; Swift, P.; Brodsky, N.

    2000-01-01

    A Total System Performance Assessment for Site Recommendation (TSPA-SR) has recently been completed (CRWMS M&O, 2000b) for the potential high-level waste repository at the Yucca Mountain site. The TSPA-SR is an integrated model of scenarios and processes relevant to the postclosure performance of the potential repository. The TSPA-SR scenarios and model components in turn include representations of all features, events, and processes (FEPs) identified as being relevant (i.e., screened in) for analysis. The process of identifying, classifying, and screening potentially relevant FEPs thus provides a critical foundation for scenario development and TSPA analyses for the Yucca Mountain site (Swift et al., 1999). The objectives of this paper are to describe (a) the identification and classification of the comprehensive list of FEPs potentially relevant to the postclosure performance of the potential Yucca Mountain repository, and (b) the development, structure, and use of an electronic database for storing and retrieving screening information about the inclusion and/or exclusion of these Yucca Mountain FEPs in the TSPA-SR. The FEPs approach to scenario development is not unique to the Yucca Mountain Project (YMP). General systematic approaches are summarized in NEA (1992). The application of the FEPs approach in several other international radioactive waste disposal programs is summarized in NEA (1999).

  8. Grantees Guide to Research Databases at IDRC

    International Development Research Centre (IDRC) Digital Library (Canada)

    ... Creating search alerts; IDRC Digital Library (IDL); Key contacts. Commercial databases conditions of use: these resources are governed by license agreements which restrict use to IDRC employees and grantees taking ...

  9. Electronic Commerce: Government Services in the New Millennium.

    Science.gov (United States)

    Maxwell, Terrence A., Ed.

    1998-01-01

    This newsletter features innovations in resource management and information technology to support New York State government. The newsletter contains the following six sections: (1) "Electronic Commerce: Government Services in the New Millennium" -- examining the need for government involvement in electronic commerce policy and…

  10. The plant phenological online database (PPODB): an online database for long-term phenological data

    Science.gov (United States)

    Dierenbach, Jonas; Badeck, Franz-W.; Schaber, Jörg

    2013-09-01

    We present an online database that provides unrestricted and free access to over 16 million plant phenological observations from over 8,000 stations in Central Europe between the years 1880 and 2009. Unique features are (1) a flexible and unrestricted access to a full-fledged database, allowing for a wide range of individual queries and data retrieval, (2) historical data for Germany before 1951 ranging back to 1880, and (3) more than 480 curated long-term time series covering more than 100 years for individual phenological phases and plants combined over Natural Regions in Germany. Time series for single stations or Natural Regions can be accessed through a user-friendly graphical geo-referenced interface. The joint databases made available with the plant phenological database PPODB render accessible an important data source for further analyses of long-term changes in phenology. The database can be accessed via www.ppodb.de .

  11. Development of Information Technology of Object-relational Databases Design

    Directory of Open Access Journals (Sweden)

    Valentyn A. Filatov

    2012-12-01

    Full Text Available The article is concerned with the development of an information technology for object-relational database design and with the study of object features in infological and logical database schemas, their entities, and their connections.

  12. Final Results of Shuttle MMOD Impact Database

    Science.gov (United States)

    Hyde, J. L.; Christiansen, E. L.; Lear, D. M.

    2015-01-01

    The Shuttle Hypervelocity Impact Database documents damage features on each Orbiter thought to be from micrometeoroids (MM) or orbital debris (OD). Data is divided into tables for crew module windows, payload bay door radiators and thermal protection systems along with other miscellaneous regions. The combined number of records in the database is nearly 3000. Each database record provides impact feature dimensions, location on the vehicle and relevant mission information. Additional detail on the type and size of particle that produced the damage site is provided when sampling data and definitive spectroscopic analysis results are available. Guidelines are described which were used in determining whether impact damage is from micrometeoroid or orbital debris impact based on the findings from scanning electron microscopy chemical analysis. Relationships assumed when converting from observed feature sizes in different shuttle materials to particle sizes will be presented. A small number of significant impacts on the windows, radiators and wing leading edge will be highlighted and discussed in detail, including the hypervelocity impact testing performed to estimate particle sizes that produced the damage.

  13. INTERNATIONAL GOVERNMENT SECURITIES: SPECIFIC FUNCTIONING

    Directory of Open Access Journals (Sweden)

    N. Versal

    2013-11-01

    Full Text Available This paper discloses the features of the international government securities market during 1993–2012: the main players are the developed countries (Western Europe, Canada, USA), with an increasing role for developing countries; debt crises have a negative impact not only on the development of the international government securities market but also on the international capital market as a whole; and debt crises are not a spontaneous phenomenon, usually occurring when growth in GDP is inadequate relative to increasing government debt.

  14. The CEBAF Element Database and Related Operational Software

    Energy Technology Data Exchange (ETDEWEB)

    Larrieu, Theodore [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Slominski, Christopher [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Keesee, Marie [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Turner, Dennison [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Joyce, Michele [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States)

    2015-09-01

    The newly commissioned 12GeV CEBAF accelerator relies on a flexible, scalable and comprehensive database to define the accelerator. This database delivers the configuration for CEBAF operational tools, including hardware checkout, the downloadable optics model, control screens, and much more. The presentation will describe the flexible design of the CEBAF Element Database (CED), its features and assorted use case examples.

  15. GOVERNMENT ELECTRONIC PURCHASING: AN ASSESSMENT OF BRAZILIAN STATE GOVERNMENTS' E-PROCUREMENT WEBSITES

    OpenAIRE

    Alves, Tomaz Rodrigo; Universidade de São Paulo; Souza, Cesar Alexandre; Universidade de São Paulo

    2011-01-01

    One of the electronic government applications that has been developing fastest in Brazil is e-procurement, a field that allows relatively objective measurement of price cuts and of the savings generated by reduced bureaucracy. This research evaluated the quality of the e-procurement portals of the 26 Brazilian state governments and the Federal District, considering primarily features that could be useful to suppliers. In order to do so, a scoring method was developed, in which ...

  16. Learning and clean-up in a large scale music database

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Lehn-Schiøler, Tue; Petersen, Kaare Brandt

    2007-01-01

    We have collected a database of musical features from radio broadcasts (N > 100,000). The database poses a number of hard modeling challenges, including segmentation problems and missing metadata. We describe our efforts towards cleaning the database using signal processing and machine learning...

  17. Enacting Governance through Strategy

    DEFF Research Database (Denmark)

    Brandtner, Christof; Höllerer, Markus A.; Meyer, Renate E.

    2017-01-01

    of strategy documents in city administration addresses these challenges. Our central claim is that strategy documents can be understood as a distinct discursive device through which local governments enact aspired governance configurations. We illustrate our argument empirically using two prominent examples...... that, while showing similar features and characteristics, are anchored in different administrative traditions and institutional frameworks: the city administrations of Sydney, Australia, and Vienna, Austria. The contribution of the paper is to show how strategy documents enact governance configurations...... along four core dimensions: the setting in space and time, the definition of the public, the framing of the res publica and legitimacy issues. Moreover, our comparative analysis of Sydney and Vienna gives evidence of differences in governance configurations enacted through strategy documents....

  18. performance evaluation of feature sets of minutiae quadruplets

    African Journals Online (AJOL)

    databases. This shows that the evaluation of algorithms on just one or two databases is not sufficient to confirm the performance of techniques, as they may be database-dependent. Much work was done to find a feature set that would have good performance across three FVC databases of the FVC 2000, 2002 and 2004 ...

  19. Professional iOS database application programming

    CERN Document Server

    Alessi, Patrick

    2013-01-01

    Updated and revised coverage that includes the latest versions of iOS and Xcode Whether you're a novice or experienced developer, you will want to dive into this updated resource on database application programming for the iPhone and iPad. Packed with more than 50 percent new and revised material - including completely rebuilt code, screenshots, and full coverage of new features pertaining to database programming and enterprise integration in iOS 6 - this must-have book intends to continue the precedent set by the previous edition by helping thousands of developers master database

  20. Proceedings of the Canadian Solar Industries Association Solar Forum 2005 : sunny days ahead : a forum on solar energy for government officials

    International Nuclear Information System (INIS)

    2006-01-01

    Solar energy is the fastest growing energy source in the world. Government involvement is critical in the deployment of solar energy. This forum focused on the application of solar energy in government facilities. The forum was divided into 3 sessions: (1) solar technologies and markets; (2) government initiatives that support solar energy; and (3) the use of solar energy on government facilities in Canada. The current state of solar technologies and products in Canada was reviewed. Solar thermal markets were discussed with reference to passive solar energy and photovoltaic applications. On-site solar generation for federal facilities was discussed, and various federal initiatives were reviewed. Issues concerning Ontario's standard offer contract program were discussed. Government users and buyers of solar products spoke of their experiences in using solar energy and the challenges that were faced. The role that solar energy can play in reducing government costs was discussed, as well as the impact of solar energy on the environment. Opportunities and barriers to the use of solar energy in Canada were explored. The conference featured 14 presentations, of which 2 have been catalogued separately for inclusion in this database. refs., tabs., figs

  1. The ESID Online Database network.

    Science.gov (United States)

    Guzman, D; Veit, D; Knerr, V; Kindle, G; Gathmann, B; Eades-Perner, A M; Grimbacher, B

    2007-03-01

    Primary immunodeficiencies (PIDs) belong to the group of rare diseases. The European Society for Immunodeficiencies (ESID) is establishing an innovative European patient and research database network for continuous long-term documentation of patients, in order to improve the diagnosis, classification, prognosis and therapy of PIDs. The ESID Online Database is a web-based system aimed at data storage, data entry, reporting and the import of pre-existing data sources in an enterprise business-to-business integration (B2B). The online database is based on Java 2 Enterprise System (J2EE) with high-standard security features, which comply with data protection laws and the demands of a modern research platform. The ESID Online Database is accessible via the official website (http://www.esid.org/). Supplementary data are available at Bioinformatics online.

  2. The PEP-II project-wide database

    International Nuclear Information System (INIS)

    Chan, A.; Calish, S.; Crane, G.; MacGregor, I.; Meyer, S.; Wong, J.

    1995-05-01

    The PEP-II Project Database is a tool for monitoring the technical and documentation aspects of this accelerator construction. It holds the PEP-II design specifications, fabrication and installation data in one integrated system. Key pieces of the database include the machine parameter list, magnet and vacuum fabrication data. CAD drawings, publications and documentation, survey and alignment data and property control. The database can be extended to contain information required for the operations phase of the accelerator and detector. Features such as viewing CAD drawing graphics from the database will be implemented in the future. This central Oracle database on a UNIX server is built using ORACLE Case tools. Users at the three collaborating laboratories (SLAC, LBL, LLNL) can access the data remotely, using various desktop computer platforms and graphical interfaces

  3. Zebrafish Database: Customizable, Free, and Open-Source Solution for Facility Management.

    Science.gov (United States)

    Yakulov, Toma Antonov; Walz, Gerd

    2015-12-01

    Zebrafish Database is a web-based customizable database solution, which can be easily adapted to serve both single laboratories and facilities housing thousands of zebrafish lines. The database allows the users to keep track of details regarding the various genomic features, zebrafish lines, zebrafish batches, and their respective locations. Advanced search and reporting options are available. Unique features are the ability to upload files and images that are associated with the respective records and an integrated calendar component that supports multiple calendars and categories. Built on the basis of the Joomla content management system, the Zebrafish Database is easily extendable without the need for advanced programming skills.

  4. Corporate governance cycles during transition

    DEFF Research Database (Denmark)

    Jones, Derek C.; Mygind, Niels

    2004-01-01

    Pressures for restructuring produce strong impulses for ownership changes. There is limited possibility for external finance because of the embryonic development of the banking system and the capital markets during early transition. The governance cycle is also influenced by specific features of the institutional, cultural...... is faster in Estonia and this can be explained by the relatively fast pace of institutional change and evolution of important governance institutions, including tough bankruptcy legislation and advances in the financial system. JEL-codes: G3, J5, P2, P3 Keywords: corporate governance, life

  5. Brain Tumor Database, a free relational database for collection and analysis of brain tumor patient information.

    Science.gov (United States)

    Bergamino, Maurizio; Hamilton, David J; Castelletti, Lara; Barletta, Laura; Castellan, Lucio

    2015-03-01

    In this study, we describe the development and utilization of a relational database designed to manage the clinical and radiological data of patients with brain tumors. The Brain Tumor Database was implemented using MySQL v.5.0, while the graphical user interface was created using PHP and HTML, thus making it easily accessible through a web browser. This web-based approach allows for multiple institutions to potentially access the database. The BT Database can record brain tumor patient information (e.g. clinical features, anatomical attributes, and radiological characteristics) and be used for clinical and research purposes. Analytic tools to automatically generate statistics and different plots are provided. The BT Database is a free and powerful user-friendly tool with a wide range of possible clinical and research applications in neurology and neurosurgery. The BT Database graphical user interface source code and manual are freely available at http://tumorsdatabase.altervista.org. © The Author(s) 2013.
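As a rough illustration of the kind of relational schema such a tool manages (clinical features, anatomical attributes, radiological characteristics keyed to patients), here is a hypothetical miniature using Python's built-in sqlite3 as a stand-in for the MySQL backend. All table and column names are invented for the example; they are not the BT Database's actual schema:

```python
import sqlite3

# Hypothetical miniature of a brain-tumor patient schema (names invented
# for illustration; the published BT Database schema may differ).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (
    patient_id INTEGER PRIMARY KEY,
    sex        TEXT,
    birth_year INTEGER
);
CREATE TABLE tumor (
    tumor_id   INTEGER PRIMARY KEY,
    patient_id INTEGER NOT NULL REFERENCES patient(patient_id),
    histology  TEXT,   -- clinical feature
    location   TEXT,   -- anatomical attribute
    volume_ml  REAL    -- radiological characteristic
);
""")
conn.execute("INSERT INTO patient VALUES (1, 'F', 1970)")
conn.execute("INSERT INTO tumor VALUES (1, 1, 'glioblastoma', 'frontal lobe', 12.4)")

# the kind of aggregate a built-in reporting/statistics tool might run
row = conn.execute("""
    SELECT p.patient_id, COUNT(t.tumor_id), AVG(t.volume_ml)
    FROM patient p JOIN tumor t USING (patient_id)
    GROUP BY p.patient_id
""").fetchone()
print(row)
```

Keeping clinical and radiological attributes in separate, foreign-key-linked tables is what makes the automatic statistics and plots described above straightforward to generate with SQL aggregates.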

  6. The governance of adaptation: choices, reasons, and effects. Introduction to the Special Feature

    NARCIS (Netherlands)

    Huitema, D.; Adger, W.N.; Berkhout, F.G.H.; Massey, E.E.; Mazmanian, D.; Munaretto, S.; Plummer, R.; Termeer, C.C.J.A.M.

    2016-01-01

    The governance of climate adaptation involves the collective efforts of multiple societal actors to address problems, or to reap the benefits, associated with impacts of climate change. Governing involves the creation of institutions, rules and organizations, and the selection of normative

  7. DIGITAL FLOOD INSURANCE RATE MAP DATABASE, LEXINGTON-FAYETTE URBAN COUNTY GOVERNMENT, KENTUCKY

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  8. Cross-interdisciplinary insights into adaptive governance and resilience

    Directory of Open Access Journals (Sweden)

    Craig Anthony (Tony) Arnold

    2017-12-01

    Full Text Available The Adaptive Water Governance project is an interdisciplinary collaborative synthesis project aimed at identifying the features of adaptive governance in complex social-ecological institutional systems to manage for water-basin resilience. We conducted a systematic qualitative meta-analysis of the project's first set of published interdisciplinary studies, six North American basin resilience assessments. We sought to develop new knowledge that transcends each study, concerning two categories of variables: (1) the drivers of change in complex water-basin systems that affect systemic resilience; and (2) the features of adaptive governance. We have identified the pervasive themes, concepts, and variables of the systemic-change drivers and adaptive-governance features from these six interdisciplinary texts using qualitative methods of inductive textual analysis and synthesis. We produced synthesis frameworks for understanding the patterns that emerged from the basin assessment texts, as well as comprehensive lists of the variables that these studies uniformly or nearly uniformly addressed. These study results are cross-interdisciplinary in the sense that they identify patterns and knowledge that transcend several diverse interdisciplinary studies. These relevant and potentially generalizable insights form a foundation for future research on the dynamics of complex social-ecological institutional systems and how they could be governed adaptively.

  9. Artificially intelligent recognition of Arabic speaker using voice print-based local features

    Science.gov (United States)

    Mahmood, Awais; Alsulaiman, Mansour; Muhammad, Ghulam; Akram, Sheeraz

    2016-11-01

    Local features for any pattern recognition system are based on the information extracted locally. In this paper, a local feature extraction technique was developed. This feature was extracted in the time-frequency plane by taking the moving average on the diagonal directions of the time-frequency plane. This feature captured the time-frequency events, producing a unique pattern for each speaker that can be viewed as a voice print of the speaker. Hence, we referred to this technique as the voice print-based local feature. The proposed feature was compared to other features, including the mel-frequency cepstral coefficient (MFCC), for speaker recognition using two different databases. One of the databases used in the comparison is a subset of an LDC database that consisted of two short sentences uttered by 182 speakers. The proposed feature attained a 98.35% recognition rate compared to 96.7% for MFCC using the LDC subset.
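The diagonal moving-average idea can be sketched roughly as follows. This is an assumption-laden reconstruction: the abstract does not specify the authors' window length or how the diagonal averages are aggregated into the final voice print, so both are invented here:

```python
import numpy as np

def diagonal_moving_average(tf_plane, win=3):
    """Smooth a time-frequency plane along its diagonal directions.

    Illustrative take on the voice print-based local feature: one moving
    average per diagonal (window length `win` is an assumed parameter).
    tf_plane: 2-D array, e.g. a spectrogram (frequency x time).
    Returns a list of averaged vectors, one per sufficiently long diagonal.
    """
    kernel = np.ones(win) / win
    rows, cols = tf_plane.shape
    feats = []
    for off in range(-(rows - 1), cols):
        d = np.diagonal(tf_plane, offset=off)
        if d.size >= win:  # skip corner diagonals shorter than the window
            feats.append(np.convolve(d, kernel, mode="valid"))
    return feats

# tiny stand-in for a spectrogram
S = np.arange(16, dtype=float).reshape(4, 4)
feats = diagonal_moving_average(S, win=3)
print(len(feats), feats[0])
```

A real pipeline would apply this to a spectrogram of the utterance and feed the concatenated averages to the recognizer in place of (or alongside) MFCCs.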

  10. The government of life

    DEFF Research Database (Denmark)

    Villadsen, Kaspar; Wahlberg, Ayo

    2015-01-01

    . Subsequent research on biopolitics and governmentality has tended to separate the concepts, differentiating into distinct research traditions, each with different intellectual pathways. We propose to bring these conceptual innovations together to understand contemporary problems of the government of life...... of death power, the interplay of sovereignty, discipline and security, governmentalization through medical normalization, and ‘securitization’ of life as circulations and open series. The article also introduces this special feature on the government of life, in which significant scholars explore issues......

  11. O-ODM Framework for Object-Relational Databases

    Directory of Open Access Journals (Sweden)

    Carlos Alberto Rombaldo Jr

    2012-09-01

    Full Text Available Object-relational databases introduce new features which allow manipulating objects in databases. At present, many DBMSs offer resources to manipulate objects in the database, but most application developers simply map classes to relational tables, failing to exploit the strengths of the O-R model. The lack of tools to aid database design contributes to this situation. This work presents O-ODM (Object-Object Database Mapping), a persistence framework that maps objects from OO applications to database objects. Persistence frameworks have been used to aid developers by managing all access to the DBMS. This kind of tool allows developers to persist objects without solid knowledge of DBMSs and their specific languages, improving developers' productivity, mainly when a different DBMS is used. The results of some experiments using O-ODM are shown.
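The general idea of a persistence framework that maps application objects to database records can be sketched in a few lines. This is illustrative only: it is not the O-ODM API, and the `Mapper`/`Album` names are invented; sqlite3 stands in for whatever DBMS the framework targets:

```python
import sqlite3
from dataclasses import dataclass, fields, astuple

# Toy persistence mapper in the spirit of an object-database mapping
# framework: the application works with objects, the mapper owns all SQL.
@dataclass
class Album:
    id: int
    title: str

class Mapper:
    def __init__(self, conn, cls):
        self.conn, self.cls = conn, cls
        cols = ", ".join(f.name for f in fields(cls))
        conn.execute(f"CREATE TABLE IF NOT EXISTS {cls.__name__} ({cols})")

    def save(self, obj):
        marks = ", ".join("?" for _ in fields(self.cls))
        self.conn.execute(
            f"INSERT INTO {self.cls.__name__} VALUES ({marks})", astuple(obj))

    def load(self, obj_id):
        row = self.conn.execute(
            f"SELECT * FROM {self.cls.__name__} WHERE id = ?", (obj_id,)
        ).fetchone()
        return self.cls(*row)

conn = sqlite3.connect(":memory:")
mapper = Mapper(conn, Album)
mapper.save(Album(1, "Kind of Blue"))
print(mapper.load(1))
```

The point of such a tool is exactly what the abstract describes: the developer persists and retrieves objects without writing DBMS-specific SQL, so swapping the backend only changes the mapper.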

  12. Experience in running relational databases on clustered storage

    CERN Document Server

    Aparicio, Ruben Gaspar

    2015-01-01

    For the past eight years, the CERN IT Database group has based its backend storage on a NAS (Network-Attached Storage) architecture, providing database access via the NFS (Network File System) protocol. In the last two and a half years, our storage has evolved from a scale-up architecture to a scale-out one. This paper describes our setup and a set of functionalities providing key features to other services such as Database on Demand [1] and the CERN Oracle backup and recovery service. It also outlines a possible direction of evolution that storage for databases could follow.

  13. Quantifying the effects of IT-governance rules

    NARCIS (Netherlands)

    Verhoef, C.

    2007-01-01

    Via quantitative analyses of large IT-portfolio databases, we detected unique data patterns pointing to certain IT-governance rules and styles, plus their sometimes nonintuitive and negative side-effects. We grouped the most important patterns in seven categories and highlighted them separately.

  14. Palmprint Based Verification System Using SURF Features

    Science.gov (United States)

    Srinivas, Badrinath G.; Gupta, Phalguni

    This paper describes the design and development of a prototype of a robust biometric verification system. The system uses features of the human hand extracted with the Speeded Up Robust Features (SURF) operator. The hand image is acquired using a low-cost scanner. The extracted palmprint region is robust to hand translation and rotation on the scanner. The system is tested on the IITK database of 200 images and the PolyU database of 7751 images. The system is found to be robust with respect to translation and rotation. It has an FAR of 0.02%, an FRR of 0.01%, and an accuracy of 99.98%, and can be a suitable system for civilian applications and high-security environments.
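Verification by local-descriptor matching of this kind can be sketched generically. SURF itself lives in OpenCV's non-free module, so this example matches stand-in random descriptors with a Lowe-style nearest-neighbour ratio test; the matching rule and the 0.8 threshold are assumptions for illustration, not the paper's actual matcher:

```python
import numpy as np

def match_score(desc_a, desc_b, ratio=0.8):
    """Fraction of descriptors in desc_a whose nearest neighbour in desc_b
    passes a Lowe-style ratio test. Stand-in for matching SURF descriptors
    between a probe palmprint and an enrolled template."""
    # pairwise Euclidean distances, shape (len(desc_a), len(desc_b))
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    d.sort(axis=1)  # per row: d[:, 0] = nearest, d[:, 1] = second nearest
    good = d[:, 0] < ratio * d[:, 1]
    return good.mean()

rng = np.random.default_rng(1)
enrolled = rng.normal(size=(50, 64))                          # template descriptors
probe_same = enrolled + rng.normal(scale=0.05, size=(50, 64))  # same hand, slight noise
probe_diff = rng.normal(size=(50, 64))                         # different hand
s_same = match_score(probe_same, enrolled)
s_diff = match_score(probe_diff, enrolled)
print(s_same, s_diff)
```

Thresholding such a score is what turns descriptor matching into an accept/reject verification decision, and moving the threshold trades FAR against FRR.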

  15. Oil and gas field database

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young In; Han, Jung Kuy [Korea Institute of Geology Mining and Materials, Taejon (Korea)

    1998-12-01

    As agreed by the Second Meeting of the Expert Group of Minerals and Energy Exploration and Development in Seoul, Korea, 'The Construction of Database on the Oil and Gas Fields in the APEC Region' is now under way as a GEMEED database project for 1998. This project is supported by Korean government funds and the cooperation of GEMEED colleagues and experts. During this year, we have constructed the home page menu (topics) and added the data items on the oil and gas fields. These items include name of field, discovery year, depth, the number of wells, average production (b/d), cumulative production, and API gravity. The web site shows that the total number of oil and gas fields in the APEC region is 47,201. The number of oil and gas fields by member economy is shown in the table. World oil and gas statistics, including reserve, production, consumption, and trade information, were added to the database for the users' convenience. (author). 13 refs., tabs., figs.

  17. Integrating the Allen Brain Institute Cell Types Database into Automated Neuroscience Workflow.

    Science.gov (United States)

    Stockton, David B; Santamaria, Fidel

    2017-10-01

    We developed software tools to download, extract features, and organize the Cell Types Database from the Allen Brain Institute (ABI) in order to integrate its whole cell patch clamp characterization data into the automated modeling/data analysis cycle. To expand the potential user base we employed both Python and MATLAB. The basic set of tools downloads selected raw data and extracts cell, sweep, and spike features, using ABI's feature extraction code. To facilitate data manipulation we added a tool to build a local specialized database of raw data plus extracted features. Finally, to maximize automation, we extended our NeuroManager workflow automation suite to include these tools plus a separate investigation database. The extended suite allows the user to integrate ABI experimental and modeling data into an automated workflow deployed on heterogeneous computer infrastructures, from local servers, to high performance computing environments, to the cloud. Since our approach is focused on workflow procedures our tools can be modified to interact with the increasing number of neuroscience databases being developed to cover all scales and properties of the nervous system.

  18. Climate change governance

    Energy Technology Data Exchange (ETDEWEB)

    Knieling, Joerg [HafenCity Univ. Hamburg (Germany). Urban Planning and Regional Development; Leal Filho, Walter (eds.) [HAW Hamburg (Germany). Research and Transfer Centre Applications of Life Science

    2013-07-01

    Climate change is a cause for concern both globally and locally. In order for it to be tackled holistically, its governance is an important topic needing scientific and practical consideration. Climate change governance is an emerging area, and one which is closely related to state and public administrative systems and the behaviour of private actors, including the business sector, as well as the civil society and non-governmental organisations. Questions of climate change governance deal both with mitigation and adaptation whilst at the same time trying to devise effective ways of managing the consequences of these measures across the different sectors. Many books have been produced on general matters related to climate change, such as climate modelling, temperature variations, sea level rise, but, to date, very few publications have addressed the political, economic and social elements of climate change and their links with governance. This book will address this gap. Furthermore, a particular feature of this book is that it not only presents different perspectives on climate change governance, but it also introduces theoretical approaches and brings these together with practical examples which show how main principles may be implemented in practice.

  19. Intellectual capital performance and cash-based incentive payments for executive directors: Impact of remuneration committee and corporate governance features

    Directory of Open Access Journals (Sweden)

    J-L. W. Mitchell Van der Zahn

    2005-11-01

    Full Text Available We use a sample of 964 executive directors representing 354 Singapore publicly listed firms to examine linkage between firm performance and cash-based bonus payments. As a pooled OLS regression model may hide different models that characterize subsets of observations we use latent class analysis to further examine the data and to identify more specifically the influence of corporate governance features. Our latent class analysis results indicate that remuneration committees with members having their interests better aligned with shareholders (such as presence of a significant owner appear more likely to consider the incremental value of tying executive director compensation to intellectual capital performance. Remuneration committees with a lower risk of influence from managerial power were also found to be more likely to support a compensation linkage for executive directors to intellectual capital performance. The influence of the remuneration committee features is evident for both entrepreneurial and traditional firms. Overall, our findings are consistent with both the optimal-contract pricing and managerial power views of executive compensation setting.

  20. Spatial database of mining-related features in 2001 at selected phosphate mines, Bannock, Bear Lake, Bingham, and Caribou Counties, Idaho

    Science.gov (United States)

    Moyle, Phillip R.; Kayser, Helen Z.

    2006-01-01

    This report describes the spatial database, PHOSMINE01, and the processes used to delineate mining-related features (active and inactive/historical) in the core of the southeastern Idaho phosphate resource area. The spatial data have varying degrees of accuracy and attribution detail. Classification of areas by type of mining-related activity at active mines is generally detailed; however, for many of the closed or inactive mines the spatial coverage does not differentiate mining-related surface disturbance features. Nineteen phosphate mine sites are included in the study: three active phosphate mines - Enoch Valley (nearing closure), Rasmussen Ridge, and Smoky Canyon - and 16 inactive (or historical) phosphate mines - Ballard, Champ, Conda, Diamond Gulch, Dry Valley, Gay, Georgetown Canyon, Henry, Home Canyon, Lanes Creek, Maybe Canyon, Mountain Fuel, Trail Canyon, Rattlesnake, Waterloo, and Wooley Valley. Approximately 6,000 ha (15,000 ac), or 60 km² (23 mi²), of phosphate mining-related surface disturbance are documented in the spatial coverage. Spatial data for the inactive mines are current because no major changes have occurred; however, the spatial data for active mines were derived from digital maps prepared in early 2001, and therefore recent activity is not included. The inactive Gay Mine has the largest total area of disturbance, 1,900 ha (4,700 ac) or about 19 km² (7.4 mi²). It encompasses over three times the disturbance area of the next largest mine, the Conda Mine with 610 ha (1,500 ac), and it is nearly four times the area of the Smoky Canyon Mine, the largest of the active mines with about 550 ha (1,400 ac). The wide range of phosphate mining-related surface disturbance features (141) from various industry maps was reduced to 15 types or features based on a generic classification system used for this study: mine pit; backfilled mine pit; waste rock dump; adit and waste rock dump; ore stockpile; topsoil stockpile; tailings or tailings pond; sediment

  1. Digital database of mining-related features at selected historic and active phosphate mines, Bannock, Bear Lake, Bingham, and Caribou counties, Idaho

    Science.gov (United States)

    Causey, J. Douglas; Moyle, Phillip R.

    2001-01-01

    This report provides a description of data and processes used to produce a spatial database that delineates mining-related features in areas of historic and active phosphate mining in the core of the southeastern Idaho phosphate resource area. The data have varying degrees of accuracy and attribution detail. Classification of areas by type of mining-related activity at active mines is generally detailed; however, the spatial coverage does not differentiate mining-related surface disturbance features at many of the closed or inactive mines. Nineteen phosphate mine sites are included in the study. A total of 5,728 hc (14,154 ac), or more than 57 km2 (22 mi2), of phosphate mining-related surface disturbance are documented in the spatial coverage of the core of the southeast Idaho phosphate resource area. The study includes 4 active phosphate mines—Dry Valley, Enoch Valley, Rasmussen Ridge, and Smoky Canyon—and 15 historic phosphate mines—Ballard, Champ, Conda, Diamond Gulch, Gay, Georgetown Canyon, Henry, Home Canyon, Lanes Creek, Maybe Canyon, Mountain Fuel, Trail Canyon, Rattlesnake Canyon, Waterloo, and Wooley Valley. Spatial data on the inactive historic mines are relatively up to date; however, spatially described areas for active mines are based on digital maps prepared in early 1999. The inactive Gay mine has the largest total area of disturbance: 1,917 hc (4,736 ac) or about 19 km2 (7.4 mi2). It encompasses over three times the disturbance area of the next largest mine, the Conda mine with 607 hc (1,504 ac), and it is nearly four times the area of the Smoky Canyon mine, the largest of the active mines with 497 hc (1,228 ac). The wide range of phosphate mining-related surface disturbance features (approximately 80) was reduced to 13 types or features used in this study—adit and pit, backfilled mine pit, facilities, mine pit, ore stockpile, railroad, road, sediment catchment, tailings or tailings pond, topsoil stockpile, water reservoir, and disturbed

  2. UbSRD: The Ubiquitin Structural Relational Database.

    Science.gov (United States)

    Harrison, Joseph S; Jacobs, Tim M; Houlihan, Kevin; Van Doorslaer, Koenraad; Kuhlman, Brian

    2016-02-22

    The structurally defined ubiquitin-like homology fold (UBL) can engage in several unique protein-protein interactions and many of these complexes have been characterized with high-resolution techniques. Using Rosetta's structural classification tools, we have created the Ubiquitin Structural Relational Database (UbSRD), an SQL database of features for all 509 UBL-containing structures in the PDB, allowing users to browse these structures by protein-protein interaction and providing a platform for quantitative analysis of structural features. We used UbSRD to define the recognition features of ubiquitin (UBQ) and SUMO observed in the PDB and the orientation of the UBQ tail while interacting with certain types of proteins. While some of the interaction surfaces on UBQ and SUMO overlap, each molecule has distinct features that aid in molecular discrimination. Additionally, we find that the UBQ tail is malleable and can adopt a variety of conformations upon binding. UbSRD is accessible as an online resource at rosettadesign.med.unc.edu/ubsrd. Copyright © 2015 Elsevier Ltd. All rights reserved.
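As a rough illustration of how an SQL feature database like UbSRD supports browsing structures by interaction type, the sketch below builds a tiny relational table and queries it. The schema, column names, and rows are invented for this example and are not taken from the actual UbSRD resource.

```python
import sqlite3

# Hypothetical schema loosely modelled on a structural feature database;
# table, columns, and rows are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE complexes (
        pdb_id    TEXT,
        ubl_type  TEXT,   -- e.g. 'UBQ' or 'SUMO'
        partner   TEXT,   -- class of interacting protein
        tail_conf TEXT    -- observed tail conformation
    )""")
rows = [
    ("1UBQ", "UBQ",  "none",     "extended"),
    ("2G45", "UBQ",  "UBD",      "bent"),
    ("1A5R", "SUMO", "SIM",      "extended"),
    ("2IO0", "UBQ",  "protease", "extended"),
]
conn.executemany("INSERT INTO complexes VALUES (?,?,?,?)", rows)

# Browse structures by protein-protein interaction partner.
def structures_with_partner(partner):
    cur = conn.execute(
        "SELECT pdb_id FROM complexes WHERE partner = ? ORDER BY pdb_id",
        (partner,))
    return [r[0] for r in cur]

print(structures_with_partner("UBD"))  # → ['2G45']
```

The same table also supports the kind of aggregate questions the abstract mentions (e.g. grouping tail conformations by partner type) with ordinary SQL `GROUP BY` queries.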

  3. The measurement of employee engagement in government institutions

    Directory of Open Access Journals (Sweden)

    Martins, N.

    2016-07-01

    Full Text Available Employee engagement has consistently been rated as one of the top issues on chief executive officers’ lists of priorities and is a main focus of attention of both academics and human resources practitioners. A number of studies focus on employee engagement in the private sector; however, relatively few focus on employee engagement in government institutions. The aim of this study was twofold: firstly, to determine the validity and reliability of the employee engagement instrument for government institutions; secondly, to determine whether any significant differences could be detected between the employee engagement levels of the various biographical groups that participated in the survey. A quantitative research study was conducted using the database of a research company. The database in question is made up of 285 000 business people from various industries and sizes of business who occupy different roles, reflecting the profile of the South African working population. A total of 4 099 employees, of which 427 represented government institutions, completed the employee engagement questionnaire. The results confirmed the validity and reliability of the questionnaire for government institutions, but with a slightly different structure. Some biographical groupings indicated that they experience employee engagement in a significantly different way. The results indicate that the younger employees, together with top and senior management, experience the highest levels of engagement in government institutions. The significance of these results is that not all biographical groups’ engagement levels can be managed equally.

  4. Spectroscopic databases - A tool for structure elucidation

    Energy Technology Data Exchange (ETDEWEB)

    Luksch, P [Fachinformationszentrum Karlsruhe, Gesellschaft fuer Wissenschaftlich-Technische Information mbH, Eggenstein-Leopoldshafen (Germany)

    1990-05-01

    Spectroscopic databases have developed into useful tools in the process of structure elucidation. Besides conventional library searches, new intelligent programs have been added that are able to predict structural features from measured spectra or to simulate spectra for a given structure. The example of the C13NMR/IR database developed at BASF and available on STN is used to illustrate the present capabilities of online databases. New developments in the field of spectrum simulation and methods for the prediction of complete structures from spectroscopic information are reviewed. (author). 10 refs, 5 figs.

  5. A web-based database for EPR centers in semiconductors

    International Nuclear Information System (INIS)

    Umeda, T.; Hagiwara, S.; Katagiri, M.; Mizuochi, N.; Isoya, J.

    2006-01-01

    We have developed a web-based database system for electron paramagnetic resonance (EPR) centers in semiconductors. The database is available to anyone at http://www.kc.tsukuba.ac.jp/div-media/epr/. It currently has more than 300 records of the spin-Hamiltonian parameters for major known EPR centers. Users can upload their own new records to the database or use the simulation tools powered by EPR-NMR(C). Here, we describe the features and objectives of this database and mention some future plans.

  6. Second-Tier Database for Ecosystem Focus, 2000-2001 Annual Report.

    Energy Technology Data Exchange (ETDEWEB)

    Van Holmes, Chris; Muongchanh, Christine; Anderson, James J. (University of Washington, School of Aquatic and Fishery Sciences, Seattle, WA)

    2001-11-01

    The Second-Tier Database for Ecosystem Focus (Contract 00004124) provides direct and timely public access to Columbia Basin environmental, operational, fishery and riverine data resources for federal, state, public and private entities. The Second-Tier Database known as Data Access in Realtime (DART) does not duplicate services provided by other government entities in the region. Rather, it integrates public data for effective access, consideration and application.

  7. 2008 Availability and Utilization of Electronic Information Databases ...

    African Journals Online (AJOL)

    Gbaje E.S

    electronic information databases include; research work, to update knowledge in their field of interest and Current awareness. ... be read by a computer device. CD ROMs are ... business and government innovation. Its ... technologies, ideas and management practices ..... sources of information and storage devices bring.

  8. The Governmentality of Meta-governance : Identifying Theoretical and Empirical Challenges of Network Governance in the Political Field of Security and Beyond

    OpenAIRE

    Larsson, Oscar

    2015-01-01

    Meta-governance recently emerged in the field of governance as a new approach which claims that its use enables modern states to overcome problems associated with network governance. This thesis shares the view that networks are an important feature of contemporary politics which must be taken seriously, but it also maintains that networks pose substantial analytical and political challenges. It proceeds to investigate the potential possibilities and problems associated with meta-governance o...

  9. News and Features Updates from USA.gov

    Data.gov (United States)

    General Services Administration — Stay on top of important government news and information with the USA.gov Updates: News and Features RSS feed. We'll update this feed when we add news and featured...

  10. National Levee Database, series information for the current inventory of the Nation's levees.

    Data.gov (United States)

    Federal Geographic Data Committee — The National Levee Database is the authoritative database describing the location and condition of the Nation’s levees. The database contains 21 feature classes...

  11. Odense Pharmacoepidemiological Database (OPED)

    DEFF Research Database (Denmark)

    Hallas, Jesper; Poulsen, Maja Hellfritzsch; Hansen, Morten Rix

    2017-01-01

    The Odense University Pharmacoepidemiological Database (OPED) is a prescription database established in 1990 by the University of Southern Denmark, covering reimbursed prescriptions from the county of Funen in Denmark and the region of Southern Denmark (1.2 million inhabitants). It is still active and thereby has more than 25 years of continuous coverage. In this MiniReview, we review its history, content, quality, coverage, governance and some of its uses. OPED's data include the Danish Civil Registration Number (CPR), which enables unambiguous linkage with virtually all other health-related registers in Denmark. Among its research uses, we review record-linkage studies of drug effects, advanced drug utilization studies, some examples of method development and use of OPED as a sampling frame to recruit patients for field studies or clinical trials. With the advent of other, more comprehensive...

  12. Corporate governance and control in Russian banks

    OpenAIRE

    Vernikov, Andrei

    2007-01-01

    The Working Paper examines peculiarities of the Russian model of corporate governance and control in the banking sector. The study relies upon theoretical as well as applied research of corporate governance in Russian commercial banks featuring different forms of ownership. We focus on real interests of all stakeholders, namely bank and stock market regulators, bank owners, investors, top managers and other insiders. The Anglo-American concept of corporate governance, based on agency theor...

  13. Sagace: A web-based search engine for biomedical databases in Japan

    Directory of Open Access Journals (Sweden)

    Morita Mizuki

    2012-10-01

    Full Text Available Abstract Background In the big data era, biomedical research continues to generate a large amount of data, and the generated information is often stored in a database and made publicly available. Although combining data from multiple databases should accelerate further studies, the current number of life sciences databases is too large to grasp the features and contents of each database. Findings We have developed Sagace, a web-based search engine that enables users to retrieve information from a range of biological databases (such as gene expression profiles and proteomics data) and biological resource banks (such as mouse models of disease and cell lines). With Sagace, users can search more than 300 databases in Japan. Sagace offers features tailored to biomedical research, including manually tuned ranking, faceted navigation to refine search results, and rich snippets constructed with retrieved metadata for each database entry. Conclusions Sagace will be valuable for experts who are involved in biomedical research and drug development in both academia and industry. Sagace is freely available at http://sagace.nibio.go.jp/en/.

  14. Autism genetic database (AGD): a comprehensive database including autism susceptibility gene-CNVs integrated with known noncoding RNAs and fragile sites

    Directory of Open Access Journals (Sweden)

    Talebizadeh Zohreh

    2009-09-01

    Full Text Available Abstract Background Autism is a highly heritable complex neurodevelopmental disorder; therefore, identifying its genetic basis has been challenging. To date, numerous susceptibility genes and chromosomal abnormalities have been reported in association with autism, but most discoveries either fail to be replicated or account for a small effect. Thus, in most cases the underlying causative genetic mechanisms are not fully understood. In the present work, the Autism Genetic Database (AGD) was developed as a literature-driven, web-based, and easy-to-access database designed with the aim of creating a comprehensive repository for all the currently reported genes and genomic copy number variations (CNVs) associated with autism, in order to further facilitate the assessment of these autism susceptibility genetic factors. Description AGD is a relational database that organizes data resulting from exhaustive literature searches for reported susceptibility genes and CNVs associated with autism. Furthermore, genomic information about human fragile sites and noncoding RNAs was also downloaded and parsed from miRBase, snoRNA-LBME-db, piRNABank, and the MIT/ICBP siRNA database. A web client genome browser enables viewing of the features, while a web client query tool provides access to more specific information for the features. When applicable, links to external databases including GenBank, PubMed, miRBase, snoRNA-LBME-db, piRNABank, and the MIT siRNA database are provided. Conclusion AGD comprises a comprehensive list of susceptibility genes and copy number variations reported to date in association with autism, as well as all known human noncoding RNA genes and fragile sites. Such a unique and inclusive autism genetic database will facilitate the evaluation of autism susceptibility factors in relation to known human noncoding RNAs and fragile sites, impacting on human diseases. As a result, this new autism database offers a valuable tool for the research

  15. Web Application Software for Ground Operations Planning Database (GOPDb) Management

    Science.gov (United States)

    Lanham, Clifton; Kallner, Shawn; Gernand, Jeffrey

    2013-01-01

    A Web application facilitates collaborative development of the ground operations planning document. This will reduce costs and development time for new programs by incorporating the data governance, access control, and revision tracking of the ground operations planning data. Ground Operations Planning requires the creation and maintenance of detailed timelines and documentation. The GOPDb Web application was created using state-of-the-art Web 2.0 technologies, and was deployed as SaaS (Software as a Service), with an emphasis on data governance and security needs. Application access is managed using two-factor authentication, with data write permissions tied to user roles and responsibilities. Multiple instances of the application can be deployed on a Web server to meet the robust needs for multiple, future programs with minimal additional cost. This innovation features high availability and scalability, with no additional software that needs to be bought or installed. For data governance and security (data quality, management, business process management, and risk management for data handling), the software uses NAMS. No local copy/cloning of data is permitted. Data change log/tracking is addressed, as well as collaboration, work flow, and process standardization. The software provides on-line documentation and detailed Web-based help. There are multiple ways that this software can be deployed on a Web server to meet ground operations planning needs for future programs. The software could be used to support commercial crew ground operations planning, as well as commercial payload/satellite ground operations planning. The application source code and database schema are owned by NASA.

  16. Factors governing the deep ventilation of the Red Sea

    KAUST Repository

    Papadopoulos, Vassilis P.; Zhan, Peng; Sofianos, Sarantis S.; Raitsos, Dionysios E.; Qurban, Mohammed; Abualnaja, Yasser; Bower, Amy; Kontoyiannis, Harilaos; Pavlidou, Alexandra; Asharaf T.T., Mohamed; Zarokanellos, Nikolaos; Hoteit, Ibrahim

    2015-01-01

    A variety of data based on hydrographic measurements, satellite observations, reanalysis databases, and meteorological observations are used to explore the interannual variability and factors governing the deep water formation in the northern Red

  17. The ABC (Analysing Biomolecular Contacts) database

    Directory of Open Access Journals (Sweden)

    Walter Peter

    2007-03-01

    Full Text Available As protein-protein interactions are one of the basic mechanisms in most cellular processes, it is desirable to understand the molecular details of protein-protein contacts and ultimately to be able to predict which proteins interact. Interface areas on a protein surface that are involved in protein interactions exhibit certain characteristics. Therefore, several attempts have been made to distinguish protein interactions from each other and to categorize them. One way of classifying them is into transient and permanent interactions. Previously, two of the authors analysed several properties of transient complexes, such as the amino acid and secondary structure element composition and pairing preferences. Certainly, interfaces can be characterized by many more possible attributes, and this is a subject of intense ongoing research. Although several freely available online databases exist that illuminate various aspects of protein-protein interactions, we decided to construct a new database collecting all desired interface features and allowing for facile selection of subsets of complexes. As the database server we used MySQL, and the program logic was written in Java. Furthermore, several class extensions and tools such as Jmol were included to visualize the interfaces, and JFreeChart for the representation of diagrams and statistics. The contact data are automatically generated from standard PDB files by a Tcl/Tk script running through the molecular visualization package VMD. Currently the database contains 536 interfaces extracted from 479 PDB files, and it can be queried by various types of parameters. Here, we describe the database design and demonstrate its usefulness with a number of selected features.

  18. The Effects of Environmental, Social and Governance on the Corporate Performance of Malaysian Government-Linked Companies

    Directory of Open Access Journals (Sweden)

    Kweh Qian Long

    2017-01-01

    Full Text Available This study examines the impacts of ESG on the corporate performance of government-linked companies (GLCs) in Malaysia. For the period 2006-2012, ESG disclosure data were extracted from the Sustainalytics ESG performance reports, while financial data were obtained from the Bloomberg database. Data envelopment analysis (DEA) was used to estimate efficiency in the first stage; a regression analysis was performed to test the relationship between ESG and efficiency in the second stage. The empirical results of this study show that GLCs focused more on governance disclosures, followed by social and environmental aspects. Moreover, governance improves firm efficiency, but social and environmental factors have no similar effect. In conclusion, this study adds to the limited literature on ESG and informs the relevant stakeholders about the ESG components that matter for financial and investment decisions.

  19. Complexity in Vocational Education and Training Governance

    Science.gov (United States)

    Oliver, Damian

    2010-01-01

    Complexity is a feature common to all vocational education and training (VET) governance arrangements, due to the wide range of students VET systems cater for and the number of stakeholders involved in both decision making and funding and financing. In this article, Pierre and Peters' framework of governance is used to examine complexity in VET…

  20. Grantees Guide for Research Database at IDRC (English)

    International Development Research Centre (IDRC) Digital Library (Canada)

    Commercial databases conditions of use. These resources are governed by license agreements which restrict use to IDRC employees and grantees taking part in projects funded by IDRC. It is the responsibility of each user to use these products only for individual, noncommercial use without systematically downloading ...

  1. The HITRAN 2008 molecular spectroscopic database

    International Nuclear Information System (INIS)

    Rothman, L.S.; Gordon, I.E.; Barbe, A.; Benner, D.Chris; Bernath, P.F.; Birk, M.; Boudon, V.; Brown, L.R.; Campargue, A.; Champion, J.-P.; Chance, K.; Coudert, L.H.; Dana, V.; Devi, V.M.; Fally, S.; Flaud, J.-M.

    2009-01-01

    This paper describes the status of the 2008 edition of the HITRAN molecular spectroscopic database. The new edition is the first official public release since the 2004 edition, although a number of crucial updates had been made available online since 2004. The HITRAN compilation consists of several components that serve as input for radiative-transfer calculation codes: individual line parameters for the microwave through visible spectra of molecules in the gas phase; absorption cross-sections for molecules having dense spectral features, i.e. spectra in which the individual lines are not resolved; individual line parameters and absorption cross-sections for bands in the ultraviolet; refractive indices of aerosols, tables and files of general properties associated with the database; and database management software. The line-by-line portion of the database contains spectroscopic parameters for 42 molecules including many of their isotopologues.

  2. Contributions to Logical Database Design

    Directory of Open Access Journals (Sweden)

    Vitalie COTELEA

    2012-01-01

    Full Text Available This paper treats the problems arising at the stage of logical database design. It comprises a synthesis of the most common inference models for functional dependencies, deals with the problems of building covers for sets of functional dependencies, synthesizes the normal forms, presents trends regarding normalization algorithms, and gives their time complexity. In addition, it presents a summary of the best-known key-search algorithms and deals with issues of analysis and testing of relational schemas. It also summarizes and compares the different features of recognition of acyclic database schemas.
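The basic inference step underlying the cover construction, normal-form tests, and key search surveyed above is the closure of an attribute set under a set of functional dependencies. A minimal sketch (the naive fixed-point algorithm, not any specific algorithm from the paper):

```python
# Closure of an attribute set under functional dependencies (FDs).
# fds is an iterable of (lhs, rhs) pairs, each a set of attribute names.
def closure(attrs, fds):
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # If the left-hand side is contained in the closure so far,
            # the right-hand side is implied and is added.
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

# Example: R(A, B, C, D) with A -> B and B -> C.
fds = [({"A"}, {"B"}), ({"B"}, {"C"})]
print(closure({"A"}, fds))  # {A}+ = {'A', 'B', 'C'}; A is not a key, since D is missing
```

From this primitive, a key test is immediate: a set K is a superkey of R exactly when `closure(K, fds)` equals the full attribute set of R.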

  3. The Sequenced Angiosperm Genomes and Genome Databases.

    Science.gov (United States)

    Chen, Fei; Dong, Wei; Zhang, Jiawei; Guo, Xinyue; Chen, Junhao; Wang, Zhengjia; Lin, Zhenguo; Tang, Haibao; Zhang, Liangsheng

    2018-01-01

    Angiosperms, the flowering plants, provide essential resources for human life, such as food, energy, oxygen, and materials. They have also shaped the evolution of humans, animals, and the planet itself. Despite the numerous advances in genome sequencing and reporting, no review covers all the released angiosperm genomes and the genome databases available for data sharing. Based on the rapid advances and innovations in database construction in the last few years, here we provide a comprehensive review of three major types of angiosperm genome databases: databases for a single species, for a specific angiosperm clade, and for multiple angiosperm species. The scope, tools, and data of each type of database and their features are concisely discussed. Genome databases for a single species or a clade of species are especially popular with specific groups of researchers, while a regularly updated comprehensive database is more powerful for addressing major scientific questions at the genome scale. Considering the low coverage of flowering plants in any available database, we propose construction of a comprehensive database to facilitate large-scale comparative studies of angiosperm genomes and to promote collaborative studies of important questions in plant biology.

  4. Developing an Online Database of National and Sub-National Clean Energy Policies

    Energy Technology Data Exchange (ETDEWEB)

    Haynes, R.; Cross, S.; Heinemann, A.; Booth, S.

    2014-06-01

    The Database of State Incentives for Renewables and Efficiency (DSIRE) was established in 1995 to provide summaries of energy efficiency and renewable energy policies offered by the federal and state governments. This primer provides an overview of the major policy, research, and technical topics to be considered when creating a similar clean energy policy database and website.

  5. W-transform method for feature-oriented multiresolution image retrieval

    Energy Technology Data Exchange (ETDEWEB)

    Kwong, M.K.; Lin, B. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.

    1995-07-01

    Image database management is important in the development of multimedia technology, since an enormous number of digital images is likely to be generated within the next few decades as computers, television, VCRs, cable, telephony, and various imaging devices are integrated. Effective image indexing and retrieval systems are urgently needed so that images can be easily organized, searched, transmitted, and presented. Here, the authors present a local-feature-oriented image indexing and retrieval method based on Kwong and Tang's W-transform. Multiresolution histogram comparison is an effective method for content-based image indexing and retrieval. However, most recent approaches perform multiresolution analysis on whole images and do not exploit the local features present in them. Since the W-transform is distinguished by its ability to handle images of arbitrary size, with no periodicity assumptions, it provides a natural tool for analyzing local image features and building indexing systems based on such features. In this approach, histograms of the local features of images are used in the indexing system. The system not only can retrieve images that are similar or identical to the query images but can also retrieve images that contain features specified in the query images, even if the retrieved images as a whole are very different from the query images. The local-feature-oriented method also provides a speed advantage over the global multiresolution histogram comparison method. The feature-oriented approach is expected to be applicable to managing large-scale image systems such as video databases and medical image databases.
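The local-versus-global distinction above can be sketched in a few lines: instead of one histogram per image, a histogram is computed per tile, so a query can match a single local feature even when the whole images differ. This sketch omits the W-transform step entirely and uses plain intensity histograms with histogram-intersection similarity; tile size, bin count, and the toy image are all invented for illustration.

```python
# Per-tile intensity histograms as a stand-in for local W-transform features.
def tile_histograms(img, tile=2, bins=4, levels=16):
    h, w = len(img), len(img[0])
    feats = []
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            hist = [0] * bins
            for i in range(r, min(r + tile, h)):
                for j in range(c, min(c + tile, w)):
                    hist[img[i][j] * bins // levels] += 1
            feats.append(hist)
    return feats

def intersection(h1, h2):
    # Histogram intersection similarity, normalized to [0, 1] by the query mass.
    return sum(min(a, b) for a, b in zip(h1, h2)) / max(1, sum(h1))

def local_score(query_hist, image_feats):
    # Query by local feature: the best-matching tile decides the score.
    return max(intersection(query_hist, h) for h in image_feats)

# 4x4 toy image with one bright 2x2 patch (intensity 15) on a dark background.
img = [[0, 0, 15, 15],
       [0, 0, 15, 15],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
feats = tile_histograms(img)
print(local_score([0, 0, 0, 4], feats))  # bright-patch query matches a tile: → 1.0
```

A global histogram of the same image would dilute the bright patch among the dark pixels; the per-tile index keeps it retrievable, which is the point the abstract makes.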

  6. The Flux Database Concerted Action (invited paper)

    International Nuclear Information System (INIS)

    Mitchell, N.G.; Donnelly, C.E.

    2000-01-01

    The background to the IUR action on the development of a flux database for radionuclide transfer in soil-plant systems is summarised. The action is discussed in terms of the objectives, the deliverables and the progress achieved by the flux database working group. The paper describes the background to the current initiative, outlines specific features of the database and supporting documentation, and presents findings from the working group's activities. The aim of the IUR flux database working group is to bring together researchers to collate data from current experimental studies investigating aspects of radionuclide transfer in soil-plant systems. The database will incorporate parameters describing the time-dependent transfer of radionuclides between soil, plant and animal compartments. Work under the EC Concerted Action considers soil-plant interactions. This initiative has become known as the radionuclide flux database. It is emphasised that the word flux is used in this case simply to indicate the flow of radionuclides between compartments in time. (author)

  7. Exploiting relational database technology in a GIS

    Science.gov (United States)

    Batty, Peter

    1992-05-01

    All systems for managing data face common problems such as backup, recovery, auditing, security, data integrity, and concurrent update. Other challenges include the ability to share data easily between applications and to distribute data across several computers, while continuing to manage the problems already mentioned. Geographic information systems are no exception and need to tackle all these issues. Standard relational database-management systems (RDBMSs) provide many features to help solve the issues mentioned so far. This paper describes how the IBM geoManager product approaches these issues by storing all its geographic data in a standard RDBMS in order to take advantage of such features. Areas in which standard RDBMS functions need to be extended are highlighted, and the way in which geoManager does this is explained. The performance implications of storing all data in the relational database are discussed. An important distinction, which needs to be made when considering the applicability of relational database technology to GIS, is between the storage and management of geographic data and the manipulation and analysis of geographic data.
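One common way to store geographic data in a plain RDBMS, in the spirit of the approach described above, is to keep the full geometry as text alongside indexed bounding-box columns, so ordinary SQL answers spatial window queries. The sketch below uses SQLite as a stand-in for a full RDBMS; the schema and parcel data are invented for illustration and are not geoManager's actual design.

```python
import sqlite3

# Features table: indexed bounding-box columns plus geometry kept as WKT text.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE features (
        id   INTEGER PRIMARY KEY,
        name TEXT,
        xmin REAL, ymin REAL, xmax REAL, ymax REAL,
        wkt  TEXT
    )""")
conn.execute("CREATE INDEX bbox_idx ON features (xmin, xmax, ymin, ymax)")
conn.executemany(
    "INSERT INTO features (name, xmin, ymin, xmax, ymax, wkt) "
    "VALUES (?,?,?,?,?,?)",
    [("parcel-a", 0, 0, 10, 10, "POLYGON((0 0,10 0,10 10,0 10,0 0))"),
     ("parcel-b", 50, 50, 60, 60, "POLYGON((50 50,60 50,60 60,50 60,50 50))")])

def window_query(x1, y1, x2, y2):
    """Names of features whose bounding box overlaps the query window."""
    cur = conn.execute(
        "SELECT name FROM features "
        "WHERE xmax >= ? AND xmin <= ? AND ymax >= ? AND ymin <= ?",
        (x1, x2, y1, y2))
    return [r[0] for r in cur]

print(window_query(5, 5, 20, 20))  # → ['parcel-a']
```

This reflects the paper's distinction: storage and management (backup, integrity, concurrency) come free from the RDBMS, while geometric manipulation and analysis of the WKT geometries still needs GIS-specific code on top.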

  8. KEUNTUNGAN PENGGUNAAN EXTERNAL FUNCTION PADA DATABASE POSTGRESQL

    Directory of Open Access Journals (Sweden)

    Nahrun Hartono

    2015-02-01

    Full Text Available A database is the central data store of an information system, and many database systems are available, both commercial and free. PostgreSQL is one of the free database systems, and it has powerful features; one feature that distinguishes it from other systems is user-defined functions, which users can write themselves in a specific programming language and then call for execution. In this research, the writer benchmarks data insertion using an INSERT query and an external-function query, both on the terminal/command prompt and in a web browser. The trials found that on the terminal/command prompt the average insertion time was 185.293 ms with the INSERT query and 129.52 ms with the external-function query, while in the web browser the average insertion time was 168.363 ms with the INSERT query and 145.64 ms with the external-function query. There was a difference between the two queries, but the important point to note is that an external function is a command defined independently by the user, which lets users define exactly the commands they need.
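The mechanism discussed above, registering a function written by the user so that SQL can call it, can be sketched with SQLite's equivalent facility from Python. SQLite stands in for PostgreSQL here (in PostgreSQL itself this would be `CREATE FUNCTION`); the table and function names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT, price REAL, qty INTEGER)")

# Register a host-language function under the SQL name line_total,
# analogous in spirit to a PostgreSQL user-defined function.
conn.create_function("line_total", 2, lambda price, qty: price * qty)

conn.execute("INSERT INTO items VALUES (?, ?, ?)", ("widget", 2.5, 4))

# SQL can now call the user-defined function like a built-in.
total = conn.execute("SELECT line_total(price, qty) FROM items").fetchone()[0]
print(total)  # → 10.0
```

Wrapping an insert (or any logic) in such a function moves work into a single server-side call, which is the effect the benchmark above measures.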

  9. The Network Configuration of an Object Relational Database Management System

    Science.gov (United States)

    Diaz, Philip; Harris, W. C.

    2000-01-01

    The networking and implementation of the Oracle Database Management System (ODBMS) requires developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.

  10. Harvesting Covert Networks: The Case Study of the iMiner Database

    DEFF Research Database (Denmark)

    Memon, Nasrullah; Wiil, Uffe Kock; Alhajj, Reda

    2011-01-01

    was incorporated in the iMiner prototype tool, which makes use of investigative data mining techniques to analyse data. This paper will present the developed framework along with the form and structure of the terrorist data in the database. Selected cases will be referenced to highlight the effectiveness of the i...... collected by intelligence agencies and government organisations is inaccessible to researchers. To counter the information scarcity, we designed and built a database of terrorist-related data and information by harvesting such data from publicly available authenticated websites. The database...

  11. The Web-Database Connection Tools for Sharing Information on the Campus Intranet.

    Science.gov (United States)

    Thibeault, Nancy E.

    This paper evaluates four tools for creating World Wide Web pages that interface with Microsoft Access databases: DB Gateway, Internet Database Assistant (IDBA), Microsoft Internet Database Connector (IDC), and Cold Fusion. The system requirements and features of each tool are discussed. A sample application, "The Virtual Help Desk"…

  12. Physics analysis database for the DIII-D tokamak

    International Nuclear Information System (INIS)

    Schissel, D.P.; Bramson, G.; DeBoo, J.C.

    1986-01-01

    The authors report on a centralized database for handling reduced data for physics analysis implemented for the DIII-D tokamak. Each database record corresponds to a specific snapshot in time for a selected discharge. Features of the database environment include automatic updating, data integrity checks, and data traceability. Reduced data from each diagnostic comprises a dedicated data bank (a subset of the database) with quality assurance provided by a physicist. These data banks will be used to create profile banks which will be input to a transport code to create a transport bank. Access to the database is initially through FORTRAN programs. One user interface, PLOTN, is a command driven program to select and display data subsets. Another user interface, PROF, compares and displays profiles. The database is implemented on a Digital Equipment Corporation VAX 8600 running VMS

  13. Exploration of a Vision for Actor Database Systems

    DEFF Research Database (Denmark)

    Shah, Vivek

    of these services. Existing popular approaches to building these services either use an in-memory database system or an actor runtime. We observe that these approaches have complementary strengths and weaknesses. In this dissertation, we propose the integration of actor programming models in database systems....... In doing so, we lay down a vision for a new class of systems called actor database systems. To explore this vision, this dissertation crystallizes the notion of an actor database system by defining its feature set in light of current application and hardware trends. In order to explore the viability...... of the outlined vision, a new programming model named Reactors has been designed to enrich classic relational database programming models with logical actor programming constructs. To support the reactor programming model, a high-performance in-memory multi-core OLTP database system named REACTDB has been built...

  14. CANDID: Comparison algorithm for navigating digital image databases

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, P.M.; Cannon, T.M.

    1994-02-21

    In this paper, we propose a method for calculating the similarity between two digital images. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized distance between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to an example target image. This algorithm is applied to the problem of search and retrieval for a database containing pulmonary CT imagery, and experimental results are provided.
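
A minimal sketch of the signature idea described above, under simplifying assumptions: each "image" is reduced to a normalized grey-level histogram (a global signature), and signatures are compared with a distance between the two probability distributions. Euclidean distance is used here as an illustrative stand-in; the paper's exact distance measure is not reproduced.

```python
import math
import random

def signature(pixels, bins=16):
    # Global signature: a normalized histogram over grey levels 0-255
    hist = [0] * bins
    for p in pixels:
        hist[p * bins // 256] += 1
    total = float(len(pixels))
    return [h / total for h in hist]  # probability density over bins

def distance(a, b):
    # Illustrative normalized distance between two signatures
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical target image and database of candidate images
rng = random.Random(0)
target = signature([rng.randrange(256) for _ in range(5000)])
db = {"img%d" % i: signature([rng.randrange(256) for _ in range(5000)])
      for i in range(4)}

# Retrieve the database image most similar to the target
best = min(db, key=lambda k: distance(target, db[k]))
```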

  15. Pro Oracle database 11g RAC on Linux

    CERN Document Server

    Shaw, Steve

    2010-01-01

    Pro Oracle Database 11g RAC on Linux provides full-life-cycle guidance on implementing Oracle Real Application Clusters in a Linux environment. Real Application Clusters, commonly abbreviated as RAC, is Oracle's industry-leading architecture for scalable and fault-tolerant databases. RAC allows you to scale up and down by simply adding and subtracting inexpensive Linux servers. Redundancy provided by those multiple, inexpensive servers is the basis for the failover and other fault-tolerance features that RAC provides. Written by authors well-known for their talent with RAC, Pro Oracle Database

  16. The International Nucleotide Sequence Database Collaboration.

    Science.gov (United States)

    Cochrane, Guy; Karsch-Mizrachi, Ilene; Nakamura, Yasukazu

    2011-01-01

    Under the International Nucleotide Sequence Database Collaboration (INSDC; http://www.insdc.org), globally comprehensive public domain nucleotide sequence is captured, preserved and presented. The partners of this long-standing collaboration work closely together to provide data formats and conventions that enable consistent data submission to their databases and support regular data exchange around the globe. Clearly defined policy and governance in relation to free access to data and relationships with journal publishers have positioned INSDC databases as a key provider of the scientific record and a core foundation for the global bioinformatics data infrastructure. While growth in sequence data volumes comes no longer as a surprise to INSDC partners, the uptake of next-generation sequencing technology by mainstream science that we have witnessed in recent years brings a step-change to growth, necessarily making a clear mark on INSDC strategy. In this article, we introduce the INSDC, outline data growth patterns and comment on the challenges of increased growth.

  17. Science.gov: gateway to government science information.

    Science.gov (United States)

    Fitzpatrick, Roberta Bronson

    2010-01-01

    Science.gov is a portal to more than 40 scientific databases and 200 million pages of science information via a single query. It connects users to science information and research results from the U.S. government. This column will provide readers with an overview of the resource, as well as basic search hints.

  18. The IVTANTHERMO-Online database for thermodynamic properties of individual substances with web interface

    Science.gov (United States)

    Belov, G. V.; Dyachkov, S. A.; Levashov, P. R.; Lomonosov, I. V.; Minakov, D. V.; Morozov, I. V.; Sineva, M. A.; Smirnov, V. N.

    2018-01-01

    The database structure, main features, and user interface of the IVTANTHERMO-Online system are reviewed. This system continues the series of IVTANTHERMO packages developed at JIHT RAS. It includes the database of thermodynamic properties of individual substances and related software for analysis of experimental results, data fitting, and calculation and estimation of thermodynamic functions and thermochemical quantities. In contrast to previous IVTANTHERMO versions, it has a new extensible database design, a client-server architecture, and a user-friendly web interface with a number of new features for online and offline data processing.

  19. The Politics of Governance Architectures

    DEFF Research Database (Denmark)

    Borrás, Susana; Radaelli, Claudio M.

    2011-01-01

    Governance architectures are strategic and long-term institutional arrangements of international organizations exhibiting three features; namely, they address strategic and long-term problems in a holistic manner, they set substantive output-oriented goals, and they are implemented through...... not being identified as an object of study on its own right. We define the Lisbon Strategy as a case of governance architecture, raising questions about its creation, evolution and impact at the national level. We tackle these questions by drawing on institutional theories about emergence and change...

  20. Using a database to manage resolution of comments on standards

    International Nuclear Information System (INIS)

    Holloran, R.W.; Kelley, R.P.

    1995-01-01

    Features of production systems that would enhance the development and implementation of procedures and other standards were first suggested in 1988; that earlier work described how a database could provide the features sought for managing the content of structured documents such as standards and procedures. This paper describes enhancements of the database that manage the more complex links associated with resolution of comments. Displaying the linked information on a computer display aids comment resolvers. A hardcopy report generated by the database permits others to independently evaluate the resolution of comments in context with the original text of the standard, the comment, and the revised text of the standard. Because the links are maintained by the database, consistency between the agreed-upon resolutions and the text of the standard can be maintained throughout subsequent reviews of the standard. Each link is bidirectional; i.e., the relationship between any two documents can be viewed from the perspective of either document.
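
The bidirectional-link idea described above can be sketched with a single link table whose rows are traversable from either endpoint; the record names here are hypothetical.

```python
# Each tuple relates two documents (a comment, a section of the standard,
# or a resolution). One stored row answers the question from both sides,
# so no separate reverse-link table is needed.
links = [
    ("comment-12", "standard-section-4.2"),
    ("comment-13", "standard-section-4.2"),
    ("comment-12", "resolution-7"),
]

def related(item):
    # Traverse links from either endpoint of each pair
    out = [b for a, b in links if a == item]
    out += [a for a, b in links if b == item]
    return out

# From the standard's perspective: which comments address section 4.2?
comments = related("standard-section-4.2")
# From the comment's perspective: what is comment-12 linked to?
targets = related("comment-12")
```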

  1. The multiple meanings of global health governance: a call for conceptual clarity.

    Science.gov (United States)

    Lee, Kelley; Kamradt-Scott, Adam

    2014-04-28

    The term global health governance (GHG) is now widely used, with over one thousand works published in the scholarly literature, almost all since 2002. Amid this rapid growth there is considerable variation in how the term is defined and applied, generating confusion as to the boundaries of the subject, the perceived problems in practice, and the goals to be achieved through institutional reform. This paper is based on the results of a separate scoping study of peer reviewed GHG research from 1990 onwards which undertook keyword searches of public health and social science databases. Additional works, notably books, book chapters and scholarly articles, not currently indexed, were identified through Web of Science citation searches. After removing duplicates, book reviews, commentaries and editorials, we reviewed the remaining 250 scholarly works in terms of how the concept of GHG is applied. More specifically, we identify what is claimed as constituting GHG, how it is problematised, the institutional features of GHG, and what forms and functions are deemed ideal. After examining the broader notion of global governance and increasingly ubiquitous term "global health", the paper identifies three ontological variations in GHG scholarship - the scope of institutional arrangements, strengths and weaknesses of existing institutions, and the ideal form and function of GHG. This has produced three common, yet distinct, meanings of GHG that have emerged - globalisation and health governance, global governance and health, and governance for global health. There is a need to clarify ontological and definitional distinctions in GHG scholarship and practice, and be critically reflexive of their normative underpinnings. This will enable greater precision in describing existing institutional arrangements, as well as serve as a prerequisite for a fuller debate about the desired nature of GHG.

  2. The multiple meanings of global health governance: a call for conceptual clarity

    Science.gov (United States)

    2014-01-01

    Background The term global health governance (GHG) is now widely used, with over one thousand works published in the scholarly literature, almost all since 2002. Amid this rapid growth there is considerable variation in how the term is defined and applied, generating confusion as to the boundaries of the subject, the perceived problems in practice, and the goals to be achieved through institutional reform. Methodology This paper is based on the results of a separate scoping study of peer reviewed GHG research from 1990 onwards which undertook keyword searches of public health and social science databases. Additional works, notably books, book chapters and scholarly articles, not currently indexed, were identified through Web of Science citation searches. After removing duplicates, book reviews, commentaries and editorials, we reviewed the remaining 250 scholarly works in terms of how the concept of GHG is applied. More specifically, we identify what is claimed as constituting GHG, how it is problematised, the institutional features of GHG, and what forms and functions are deemed ideal. Results After examining the broader notion of global governance and increasingly ubiquitous term “global health”, the paper identifies three ontological variations in GHG scholarship - the scope of institutional arrangements, strengths and weaknesses of existing institutions, and the ideal form and function of GHG. This has produced three common, yet distinct, meanings of GHG that have emerged – globalisation and health governance, global governance and health, and governance for global health. Conclusions There is a need to clarify ontological and definitional distinctions in GHG scholarship and practice, and be critically reflexive of their normative underpinnings. This will enable greater precision in describing existing institutional arrangements, as well as serve as a prerequisite for a fuller debate about the desired nature of GHG. PMID:24775919

  3. Supervised Learning for Detection of Duplicates in Genomic Sequence Databases.

    Directory of Open Access Journals (Sweden)

    Qingyu Chen

    Full Text Available First identified as an issue in 1996, duplication in biological databases introduces redundancy and even leads to inconsistency when contradictory information appears. The amount of data makes purely manual de-duplication impractical, and existing automatic systems cannot detect duplicates as precisely as can experts. Supervised learning has the potential to address such problems by building automatic systems that learn from expert curation to detect duplicates precisely and efficiently. While machine learning is a mature approach in other duplicate detection contexts, it has seen only preliminary application in genomic sequence databases. We developed and evaluated a supervised duplicate detection method based on an expert curated dataset of duplicates, containing over one million pairs across five organisms derived from genomic sequence databases. We selected 22 features to represent distinct attributes of the database records, and developed a binary model and a multi-class model. Both models achieve promising performance; under cross-validation, the binary model had over 90% accuracy in each of the five organisms, while the multi-class model maintains high accuracy and is more robust in generalisation. We performed an ablation study to quantify the impact of different sequence record features, finding that features derived from meta-data, sequence identity, and alignment quality impact performance most strongly. The study demonstrates machine learning can be an effective additional tool for de-duplication of genomic sequence databases. All data are available as described in the supplementary material.

  4. Supervised Learning for Detection of Duplicates in Genomic Sequence Databases.

    Science.gov (United States)

    Chen, Qingyu; Zobel, Justin; Zhang, Xiuzhen; Verspoor, Karin

    2016-01-01

    First identified as an issue in 1996, duplication in biological databases introduces redundancy and even leads to inconsistency when contradictory information appears. The amount of data makes purely manual de-duplication impractical, and existing automatic systems cannot detect duplicates as precisely as can experts. Supervised learning has the potential to address such problems by building automatic systems that learn from expert curation to detect duplicates precisely and efficiently. While machine learning is a mature approach in other duplicate detection contexts, it has seen only preliminary application in genomic sequence databases. We developed and evaluated a supervised duplicate detection method based on an expert curated dataset of duplicates, containing over one million pairs across five organisms derived from genomic sequence databases. We selected 22 features to represent distinct attributes of the database records, and developed a binary model and a multi-class model. Both models achieve promising performance; under cross-validation, the binary model had over 90% accuracy in each of the five organisms, while the multi-class model maintains high accuracy and is more robust in generalisation. We performed an ablation study to quantify the impact of different sequence record features, finding that features derived from meta-data, sequence identity, and alignment quality impact performance most strongly. The study demonstrates machine learning can be an effective additional tool for de-duplication of genomic sequence databases. All data are available as described in the supplementary material.
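
A toy sketch of the kind of record-pair features the study describes, under stated assumptions: the 22 features and the trained models are not reproduced here. Two illustrative features (meta-data similarity and an alignment-free stand-in for sequence identity) are computed for a pair of hypothetical records, and a simple threshold stands in for the binary classifier.

```python
from difflib import SequenceMatcher

def pair_features(rec_a, rec_b):
    # Feature 1: similarity of the free-text description (meta-data)
    desc = SequenceMatcher(None, rec_a["description"],
                           rec_b["description"]).ratio()
    # Feature 2: approximate sequence identity (alignment-free stand-in)
    seq = SequenceMatcher(None, rec_a["sequence"],
                          rec_b["sequence"]).ratio()
    return {"description_similarity": desc, "sequence_identity": seq}

def is_duplicate(features, threshold=0.9):
    # Stand-in for the trained binary model: flag a pair as a duplicate
    # when all its features exceed a threshold.
    return min(features.values()) >= threshold

# Hypothetical sequence records differing only in punctuation
a = {"description": "E. coli gene X, complete cds",
     "sequence": "ATGGCTAGCTAGCTA"}
b = {"description": "E. coli gene X complete cds",
     "sequence": "ATGGCTAGCTAGCTA"}
feats = pair_features(a, b)
```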

  5. The Steward Observatory asteroid relational database

    Science.gov (United States)

    Sykes, Mark V.; Alvarezdelcastillo, Elizabeth M.

    1991-01-01

    The Steward Observatory Asteroid Relational Database (SOARD) was created as a flexible tool for undertaking studies of asteroid populations and sub-populations, to probe the biases intrinsic to asteroid databases, to ascertain the completeness of data pertaining to specific problems, to aid in the development of observational programs, and to develop pedagogical materials. To date, SOARD has compiled an extensive list of data available on asteroids and made it accessible through a single menu-driven database program. Users may obtain tailored lists of asteroid properties for any subset of asteroids or output files which are suitable for plotting spectral data on individual asteroids. The program has online help as well as user and programmer documentation manuals. The SOARD already has provided data to fulfill requests by members of the astronomical community. The SOARD continues to grow as data is added to the database and new features are added to the program.

  6. Image Processing and Features Extraction of Fingerprint Images ...

    African Journals Online (AJOL)

    To demonstrate the importance of the image processing of fingerprint images prior to image enrolment or comparison, the set of fingerprint images in databases (a) and (b) of the FVC (Fingerprint Verification Competition) 2000 database were analyzed using a features extraction algorithm. This paper presents the results of ...

  7. Forensic Automatic Speaker Recognition Based on Likelihood Ratio Using Acoustic-phonetic Features Measured Automatically

    Directory of Open Access Journals (Sweden)

    Huapeng Wang

    2015-01-01

    Full Text Available Forensic speaker recognition is experiencing a remarkable paradigm shift in terms of the evaluation framework and presentation of voice evidence. This paper proposes a new method of forensic automatic speaker recognition using the likelihood ratio framework to quantify the strength of voice evidence. The proposed method uses a reference database to calculate the within- and between-speaker variability. Some acoustic-phonetic features are extracted automatically using the software VoiceSauce. The effectiveness of the approach was tested using two Mandarin databases: a mobile telephone database and a landline database. The experiment's results indicate that these acoustic-phonetic features do have some discriminating potential and are worth trying in discrimination. The automatic acoustic-phonetic features have acceptable discriminative performance and can provide more reliable results in evidence analysis when fused with other kinds of voice features.
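
A sketch of the likelihood-ratio framework described above, with made-up numbers: the within-speaker (same-origin) and between-speaker (different-origin) variability of a single acoustic-phonetic feature are modeled as Gaussians, as if estimated from a reference database, and the strength of evidence for an observed feature value is their density ratio.

```python
from statistics import NormalDist

# Hypothetical distributions for one feature (e.g. a formant frequency, Hz):
# same-speaker values cluster tightly around the suspect's mean, while
# different-speaker values spread more widely.
within = NormalDist(mu=120.0, sigma=5.0)    # same-speaker variability
between = NormalDist(mu=120.0, sigma=20.0)  # between-speaker variability

def likelihood_ratio(x):
    # LR > 1 supports the same-speaker hypothesis;
    # LR < 1 supports the different-speaker hypothesis.
    return within.pdf(x) / between.pdf(x)

lr_close = likelihood_ratio(121.0)  # observed value typical of the suspect
lr_far = likelihood_ratio(150.0)    # observed value atypical of the suspect
```

In practice the distributions are multivariate and estimated from the reference database; the single-feature Gaussians here only illustrate the ratio itself.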

  8. Design and Implementation of the CEBAF Element Database

    International Nuclear Information System (INIS)

    Larrieu, Theodore; Slominski, Christopher; Joyce, Michele

    2011-01-01

    With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a first step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting front-end computers to building controls screens. A particular requirement influencing the CED design is that it must provide consistent access to not only present, but also future, and eventually past, configurations of the CEBAF accelerator. To accomplish this, an introspective database schema was designed that allows new elements, element types, and element properties to be defined on-the-fly without changing table structure. When used in conjunction with the Oracle Workspace Manager, it allows users to seamlessly query data from any time in the database history with the exact same tools as they use for querying the present configuration. Users can also check out workspaces and use them as staging areas for upcoming machine configurations. All access to the CED is through a well-documented API that is translated automatically from the original C++ into native libraries for scripting languages such as Perl, PHP, and Tcl, making access to the CED easy and ubiquitous. Notice: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177. The U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce this manuscript for U.S. Government purposes.
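
A minimal sketch of the "introspective" idea described above: element types and their properties live in data rows rather than in table structure, so new kinds of elements can be defined without an ALTER TABLE. This is a generic entity-attribute-value illustration; the CED's actual schema and names are not reproduced here.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE element_type (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE element      (id INTEGER PRIMARY KEY,
                               type_id INTEGER, name TEXT);
    CREATE TABLE property     (element_id INTEGER, name TEXT, value TEXT);
""")

# A new element type, an element, and its properties are all defined with
# plain INSERTs -- no schema change needed. The names are hypothetical.
conn.execute("INSERT INTO element_type VALUES (1, 'Quadrupole')")
conn.execute("INSERT INTO element VALUES (1, 1, 'MQA1S01')")
conn.execute("INSERT INTO property VALUES (1, 'length_m', '0.3')")
conn.execute("INSERT INTO property VALUES (1, 'field_gradient', '2.1')")

# Reassemble the element's properties from the value rows
props = dict(conn.execute(
    "SELECT name, value FROM property WHERE element_id = 1"))
```

Historical and future configurations (handled in the CED by Oracle Workspace Manager) are outside the scope of this sketch.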

  9. A survey of the current status of web-based databases indexing Iranian journals.

    Science.gov (United States)

    Merat, Shahin; Khatibzadeh, Shahab; Mesgarpour, Bita; Malekzadeh, Reza

    2009-05-01

    The scientific output of Iran has been increasing rapidly in recent years. Unfortunately, most papers are published in journals which are not indexed by popular indexing systems and many of them are in Persian without English translation. This makes the results of Iranian scientific research unavailable to other researchers, including Iranians. The aim of this study was to evaluate the quality of current web-based databases indexing scientific articles published in Iran. We identified web-based databases which indexed scientific journals published in Iran using popular search engines. The sites were then subjected to a series of tests to evaluate their coverage, search capabilities, stability, accuracy of information, consistency, accessibility, ease of use, and other features. Results were compared with each other to identify strengths and shortcomings of each site. Five web sites were identified. None had complete coverage of scientific Iranian journals. The search capabilities were less than optimal in most sites. English translations of research titles, author names, keywords, and abstracts of Persian-language articles did not follow standards. Some sites did not cover abstracts. Numerous typing errors make searches ineffective and citation indexing unreliable. None of the currently available indexing sites are capable of presenting Iranian research to the international scientific community. The government should intervene by enforcing policies designed to facilitate indexing through a systematic approach. The policies should address Iranian journals, authors, and indexing sites. Iranian journals should be required to provide their indexing data, including references, electronically; authors should provide correct indexing information to journals; and indexing sites should improve their software to meet standards set by the government.

  10. Developing the Vectorial Glance: Infrastructural Inversion for the New Agenda on Government Information Systems

    NARCIS (Netherlands)

    Pelizza, Annalisa

    2016-01-01

    Integrating information systems (IS) has become a key goal for governments worldwide. Systems of “authentic registers,” for instance, provide government agencies with information from databases acknowledged as the only legitimate sources of data. Concerns are thus arising about the risks for

  11. The GLIMS Glacier Database

    Science.gov (United States)

    Raup, B. H.; Khalsa, S. S.; Armstrong, R.

    2007-12-01

    The Global Land Ice Measurements from Space (GLIMS) project has built a geospatial and temporal database of glacier data, composed of glacier outlines and various scalar attributes. These data are being derived primarily from satellite imagery, such as from ASTER and Landsat. Each "snapshot" of a glacier is from a specific time, and the database is designed to store multiple snapshots representative of different times. We have implemented two web-based interfaces to the database; one enables exploration of the data via interactive maps (web map server), while the other allows searches based on text-field constraints. The web map server is an Open Geospatial Consortium (OGC) compliant Web Map Server (WMS) and Web Feature Server (WFS). This means that other web sites can display glacier layers from our site over the Internet, or retrieve glacier features in vector format. All components of the system are implemented using Open Source software: Linux, PostgreSQL, PostGIS (geospatial extensions to the database), MapServer (WMS and WFS), and several supporting components such as Proj.4 (a geographic projection library) and PHP. These tools are robust and provide a flexible and powerful framework for web mapping applications. As a service to the GLIMS community, the database contains metadata on all ASTER imagery acquired over glacierized terrain. Reduced-resolution versions of the images (browse imagery) can be viewed either as a layer in the MapServer application, or overlaid on the virtual globe within Google Earth. The interactive map application allows the user to constrain by time what data appear on the map. For example, ASTER or glacier outlines from 2002 only, or from Autumn in any year, can be displayed. The system allows users to download their selected glacier data in a choice of formats. The results of a query based on spatial selection (using a mouse) or text-field constraints can be downloaded in any of these formats: ESRI shapefiles, KML (Google Earth), Map
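
A sketch of how a client might request a time-constrained layer from an OGC-compliant Web Map Server like the one described above. The base URL and layer name are hypothetical; the query parameters follow the standard WMS GetMap request.

```python
from urllib.parse import urlencode

# Standard WMS 1.1.1 GetMap parameters; server URL and LAYERS value
# are made up for illustration.
params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "glacier_outlines",     # hypothetical layer name
    "STYLES": "",
    "SRS": "EPSG:4326",
    "BBOX": "-180,-90,180,90",
    "WIDTH": "800",
    "HEIGHT": "400",
    "FORMAT": "image/png",
    "TIME": "2002-01-01/2002-12-31",  # restrict rendered data by date
}
url = "https://example.org/glims/wms?" + urlencode(params)
```

Fetching `url` would return a PNG map of only those glacier outlines whose snapshot falls inside the requested time interval.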

  12. A data model and database for high-resolution pathology analytical image informatics.

    Science.gov (United States)

    Wang, Fusheng; Kong, Jun; Cooper, Lee; Pan, Tony; Kurc, Tahsin; Chen, Wenjin; Sharma, Ashish; Niedermayr, Cristobal; Oh, Tae W; Brat, Daniel; Farris, Alton B; Foran, David J; Saltz, Joel

    2011-01-01

    The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, which addresses these challenges, and demonstrates its implementation in a relational database system. This paper describes a data model, referred to as Pathology Analytic Imaging Standards (PAIS), and a database implementation, which are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). (1) Development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information. (2) Development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slides tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole slides and TMAs within several minutes. Hence, it is becoming

  13. A data model and database for high-resolution pathology analytical image informatics

    Directory of Open Access Journals (Sweden)

    Fusheng Wang

    2011-01-01

    Full Text Available Background: The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, which addresses these challenges, and demonstrates its implementation in a relational database system. Context: This paper describes a data model, referred to as Pathology Analytic Imaging Standards (PAIS), and a database implementation, which are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). Aims: (1) Development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information. (2) Development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. Settings and Design: The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slide tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole

  14. Governance, agricultural intensification, and land sparing in tropical South America.

    Science.gov (United States)

    Ceddia, Michele Graziano; Bardsley, Nicholas Oliver; Gomez-y-Paloma, Sergio; Sedlacek, Sabine

    2014-05-20

    In this paper we address two topical questions: How do the quality of governance and agricultural intensification impact on spatial expansion of agriculture? Which aspects of governance are more likely to ensure that agricultural intensification allows sparing land for nature? Using data from the Food and Agriculture Organization, the World Bank, the World Database on Protected Areas, and the Yale Center for Environmental Law and Policy, we estimate a panel data model for six South American countries and quantify the effects of major determinants of agricultural land expansion, including various dimensions of governance, over the period 1970-2006. The results indicate that the effect of agricultural intensification on agricultural expansion is conditional on the quality and type of governance. When considering conventional aspects of governance, agricultural intensification leads to an expansion of agricultural area when governance scores are high. When looking specifically at environmental aspects of governance, intensification leads to a spatial contraction of agriculture when governance scores are high, signaling a sustainable intensification process.

  15. Analysis of Patent Databases Using VxInsight

    Energy Technology Data Exchange (ETDEWEB)

    BOYACK,KEVIN W.; WYLIE,BRIAN N.; DAVIDSON,GEORGE S.; JOHNSON,DAVID K.

    2000-12-12

    We present the application of a new knowledge visualization tool, VxInsight, to the mapping and analysis of patent databases. Patent data are mined and placed in a database; relationships between the patents are identified, primarily using the citation and classification structures; and the patents are then clustered using a proprietary force-directed placement algorithm. Related patents cluster together to produce a 3-D landscape view of the tens of thousands of patents. The user can navigate the landscape by zooming into or out of regions of interest. Querying the underlying database places a colored marker on each patent matching the query. Automatically generated labels, showing landscape content, update continually upon zooming. Optionally, citation links between patents may be shown on the landscape. The combination of these features enables powerful analyses of patent databases.
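    VxInsight's placement algorithm is proprietary, but the general flavor of force-directed placement can be sketched: all documents repel each other while cited/related pairs attract, so related patents drift into clusters. The constants and the toy "citation" cliques below are illustrative only.

```python
import math
import random

def layout(n, similar_pairs, steps=200):
    """Generic force-directed placement: all nodes repel, linked nodes attract."""
    random.seed(0)
    pos = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(n)]
    for _ in range(steps):
        force = [[0.0, 0.0] for _ in range(n)]
        for i in range(n):                 # pairwise repulsion, ~1/distance
            for j in range(n):
                if i == j:
                    continue
                dx = pos[i][0] - pos[j][0]
                dy = pos[i][1] - pos[j][1]
                d2 = dx * dx + dy * dy + 1e-9
                force[i][0] += dx / d2
                force[i][1] += dy / d2
        for i, j in similar_pairs:         # spring attraction along links
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            force[i][0] += 0.05 * dx
            force[i][1] += 0.05 * dy
            force[j][0] -= 0.05 * dx
            force[j][1] -= 0.05 * dy
        for i in range(n):                 # damped position update
            pos[i][0] += 0.1 * force[i][0]
            pos[i][1] += 0.1 * force[i][1]
    return pos

# Two 3-"patent" cliques linked by citations: they settle into two
# spatially separated clusters on the landscape.
pos = layout(6, [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)])
```

    Real patent landscapes add density-based terrain rendering and level-of-detail labeling on top of such a layout.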

  16. Recent hospital charity care controversies highlight ambiguities and outdated features of government regulations.

    Science.gov (United States)

    MacKelvie, Charles; Apolskis, Michael; Unland, James J

    2005-01-01

    For years the hospital industry has been embroiled in controversies involving pricing, charity care, and collection practices. Unfortunately, Medicare regulations and policies governing hospital charge-setting and collection practices have not helped bring much clarity to the situation, nor has related CMS and OIG guidance. Coordinated effort by hospitals and regulatory bodies can help clarify unclear government regulation of charity care, pricing, and collections and end potentially destructive controversies that sap valuable time, energy, and resources from efforts addressing much graver long-term threats to hospital viability.

  17. Extending GIS Technology to Study Karst Features of Southeastern Minnesota

    Science.gov (United States)

    Gao, Y.; Tipping, R. G.; Alexander, E. C.; Alexander, S. C.

    2001-12-01

    This paper summarizes ongoing research on karst feature distribution of southeastern Minnesota. The main goals of this interdisciplinary research are: 1) to look for large-scale patterns in the rate and distribution of sinkhole development; 2) to conduct statistical tests of hypotheses about the formation of sinkholes; 3) to create management tools for land-use managers and planners; and 4) to deliver geomorphic and hydrogeologic criteria for making scientifically valid land-use policies and ethical decisions in karst areas of southeastern Minnesota. Existing county and sub-county karst feature datasets of southeastern Minnesota have been assembled into a large GIS-based database capable of analyzing the entire data set. The central database management system (DBMS) is a relational GIS-based system interacting with three modules: GIS, statistical, and hydrogeologic. ArcInfo and ArcView were used to generate a series of 2D and 3D maps depicting karst feature distributions in southeastern Minnesota. IRIS Explorer™ was used to produce detailed 3D maps and animations using data exported from the GIS-based database. Nearest-neighbor analysis has been used to test sinkhole distributions in different topographic and geologic settings. All current nearest-neighbor analyses indicate that sinkholes in southeastern Minnesota are not evenly distributed (i.e., they tend to be clustered). More detailed statistical methods such as cluster analysis, histograms, probability estimation, correlation, and regression have been used to study the spatial distributions of some mapped karst features of southeastern Minnesota. A sinkhole probability map for Goodhue County has been constructed based on sinkhole distribution, bedrock geology, depth to bedrock, GIS buffer analysis, and nearest-neighbor analysis. A series of karst features for Winona County, including sinkholes, springs, seeps, stream sinks, and outcrops, has been mapped and entered into the Karst Feature Database.
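    The nearest-neighbor test referred to above is commonly computed as the Clark-Evans index: the ratio of the observed mean nearest-neighbor distance to the distance expected under complete spatial randomness, with values well below 1 indicating clustering. A minimal sketch with made-up coordinates (not actual Minnesota sinkhole data):

```python
import math

def nearest_neighbor_index(points, area):
    """Clark-Evans index R = d_obs / d_exp.
    R < 1 suggests clustering, R ≈ 1 randomness, R > 1 dispersion."""
    n = len(points)
    d_obs = sum(
        min(math.dist(points[i], points[j]) for j in range(n) if j != i)
        for i in range(n)
    ) / n
    d_exp = 0.5 / math.sqrt(n / area)  # expectation under spatial randomness
    return d_obs / d_exp

# Two tight groups of hypothetical sinkholes in a 100 x 100 study area:
clustered = [(1, 1), (1, 2), (2, 1), (90, 90), (90, 91), (91, 90)]
print(nearest_neighbor_index(clustered, area=100 * 100))  # well below 1
```

    Significance testing would additionally compare R against its sampling distribution, and edge corrections matter for small study areas.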

  18. Database-independent, database-dependent, and extended interpretation of peptide mass spectra in VEMS V2.0

    DEFF Research Database (Denmark)

    Matthiesen, Rune; Bunkenborg, Jakob; Stensballe, Allan

    2004-01-01

    , and generation of protein and peptide databases. VEMS V2.0 has been developed into a fast tool for combining database-independent and -dependent protein assignments in an extended analysis of MS/MS-peptide data. MS or MS/MS data can be directly recalibrated after the first search by fitting the data to the best...... search result using polynomial equations. The score function is an improvement of known scoring algorithms and can be adapted for any MS instrument type. In addition, VEMS offers a novel statistical model for evaluating the significance of the protein assignment. The novel features are illustrated...

  19. Semantic feature extraction for interior environment understanding and retrieval

    Science.gov (United States)

    Lei, Zhibin; Liang, Yufeng

    1998-12-01

    In this paper, we propose a novel system of semantic feature extraction and retrieval for interior design and decoration application. The system, V2ID (Virtual Visual Interior Design), uses colored texture and spatial edge layout to obtain simple information about global room environment. We address the domain-specific segmentation problem in our application and present techniques for obtaining semantic features from a room environment. We also discuss heuristics for making use of these features (color, texture, edge layout, and shape), to retrieve objects from an existing database. The final resynthesized room environment, with the original scene and objects from the database, is created for the purpose of animation and virtual walk-through.

  20. Proposal on state affairs and government system for next government

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-11-15

    The contents of this book are: an introduction to the need for and purpose of the study; building an ecosystem to create jobs; consolidating social innovation to improve quality of life; constructing infrastructure to revitalize the regions; building a new digital ecosystem; establishing an effective intellectual property system; creating future competitiveness through science and technology; fostering creative talent; and a comprehensive discussion of the reorganization of the government system, covering its need, principles, features, difficulties, and the state of the discussion.

  1. Proposal on state affairs and government system for next government

    International Nuclear Information System (INIS)

    2012-11-01

    The contents of this book are: an introduction to the need for and purpose of the study; building an ecosystem to create jobs; consolidating social innovation to improve quality of life; constructing infrastructure to revitalize the regions; building a new digital ecosystem; establishing an effective intellectual property system; creating future competitiveness through science and technology; fostering creative talent; and a comprehensive discussion of the reorganization of the government system, covering its need, principles, features, difficulties, and the state of the discussion.

  2. THE EVOLUTION OF LOCAL GOVERNMENT AND SELF-GOVERNMENT IN RUSSIA

    Directory of Open Access Journals (Sweden)

    Tatiana Yashchuk

    2017-01-01

    was delegated at the local level. The local authorities were given resources to implement it, and so a system of local budgets was built. The most successful period of local government activity came in the 1920s, when the city and the district were regarded as the territorial foundation of local government, and a scientific field formed to study the features of local government under Soviet conditions. In the 1930s, government was centralized: the development of the cities was subordinated to the problems of industrialization, and the development of the rural areas to the problems of collectivization; state policy did not consider the interests of local communities. The liberalization of the political regime in the late 1950s led to a revival of the idea of decentralization, but decentralization of government was treated solely as an economic rather than a social and political problem, an understanding that persisted until the end of the Soviet period. The lack of a stable historical tradition of local government negatively affects the municipal development of the Russian Federation.

  3. Experiences with automated categorization in e-government information retrieval

    DEFF Research Database (Denmark)

    Jonasen, Tanja Svarre; Lykke, Marianne

    2014-01-01

    High-precision search results are essential for supporting e-government employees’ information tasks. Prior studies have shown that existing features of e-government retrieval systems need improvement in terms of search facilities (e.g., Goh et al. 2008), navigation (e.g., de Jong and Lentz 2006)...

  4. Insertion algorithms for network model database management systems

    Science.gov (United States)

    Mamadolimov, Abdurashid; Khikmat, Saburov

    2017-12-01

    The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, forms a partial order. When a database is large and a query comparison is expensive, the efficiency requirement for managing algorithms is to minimize the number of query comparisons. We consider the updating operation for network model database management systems and develop a new sequential algorithm for it. We also suggest a distributed version of the algorithm.
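    The partial-order constraint on the schema means the type graph must stay acyclic as relationship types are added. A minimal sketch of that invariant check (the type names are illustrative, and this is a generic cycle test, not the paper's algorithm):

```python
def creates_cycle(edges, new_edge):
    """Return True if adding new_edge = (owner, member) would close a cycle,
    violating the partial-order property of the schema graph."""
    graph = {}
    for a, b in edges + [new_edge]:
        graph.setdefault(a, []).append(b)
    # DFS from the member type: a path back to the owner type means a cycle.
    stack, seen = [new_edge[1]], set()
    while stack:
        node = stack.pop()
        if node == new_edge[0]:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, []))
    return False

schema = [("department", "employee"), ("project", "employee")]
print(creates_cycle(schema, ("employee", "timesheet")))   # False: still a DAG
print(creates_cycle(schema, ("employee", "department")))  # True: closes a cycle
```

    Keeping the schema a partial order is what lets insertion algorithms reason about owner/member order without revisiting records.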

  5. Database management system for large container inspection system

    International Nuclear Information System (INIS)

    Gao Wenhuan; Li Zheng; Kang Kejun; Song Binshan; Liu Fang

    1998-01-01

    Large Container Inspection System (LCIS) based on radiation imaging technology is a powerful tool for the Customs to check the contents inside a large container without opening it. The author discusses a database application system, as a part of the Signal and Image System (SIS), for the LCIS. The basic requirements analysis was done first. Then the selections of computer hardware, the operating system, and the database management system were made according to the state of the technology and the products available on the market. Based on the above considerations, a database application system with central management and distributed operation features has been implemented.

  6. An overview of clinical governance policies, practices and initiatives.

    Science.gov (United States)

    Braithwaite, Jeffrey; Travaglia, Joanne F

    2008-02-01

    To map the emergence of, and define, clinical governance; to discuss current best practices, and to explore the implications of these for boards of directors and executives wishing to promote a clinical governance approach in their health services. Review and analysis of the published and grey literature on clinical governance from 1966 to 2006. Medline and CINAHL databases, key journals and websites were systematically searched. Central issues were identified in the literature as key to effective clinical governance. These include: ensuring that links are made between health services' clinical and corporate governance; the use of clinical governance to promote quality and safety through a focus on quality assurance and continuous improvement; the creation of clinical governance structures to improve safety and quality and manage risk and performance; the development of strategies to ensure the effective exchange of data, knowledge and expertise; and the sponsoring of a patient-centred approach to service delivery. A comprehensive approach to clinical governance necessarily includes the active participation of boards and executives in sponsoring and promoting clinical governance as a quality and safety strategy. Although this is still a relatively recent development, the signs are promising.

  7. PostgreSQL in the database landscape

    CERN Multimedia

    CERN. Geneva; Riggs, Simon

    2013-01-01

    This presentation covers PostgreSQL and its main highlights in two parts: PostgreSQL today, by Harald Armin Massa, which explores the functionalities and capabilities of PostgreSQL, points out differences to other available databases, and gives information about how the PostgreSQL project ensures the quality of this software; and PostgreSQL and Extremely Large Databases, by Simon Riggs, presenting an outlook on what is happening with PostgreSQL and Extremely Large Databases. About the speakers: Simon Riggs is founder and CTO of 2ndQuadrant. He is working in the AXLE project. He works as an architect and developer of new features for PostgreSQL, setting technical directions for 2ndQuadrant, and as a database systems architect for 2ndQuadrant customers. Simon is the author of PostgreSQL 9 Admin Cookbook and a committer to the PostgreSQL project. Harald Armin Massa studied computers and economics; he has been self-employed since 1999, doing software development in Python and ...

  8. Geologic structure mapping database Spent Fuel Test - Climax, Nevada Test Site

    International Nuclear Information System (INIS)

    Yow, J.L. Jr.

    1984-01-01

    Information on over 2500 discontinuities mapped at the SFT-C is contained in the geologic structure mapping database. Over 1800 of these features include complete descriptions of their orientations. This database is now available for use by other researchers. 6 references, 3 figures, 2 tables

  9. REDIdb: the RNA editing database.

    Science.gov (United States)

    Picardi, Ernesto; Regina, Teresa Maria Rosaria; Brennicke, Axel; Quagliariello, Carla

    2007-01-01

    The RNA Editing Database (REDIdb) is an interactive, web-based database created and designed with the aim to allocate RNA editing events such as substitutions, insertions and deletions occurring in a wide range of organisms. The database contains both fully and partially sequenced DNA molecules for which editing information is available either by experimental inspection (in vitro) or by computational detection (in silico). Each record of REDIdb is organized in a specific flat-file containing a description of the main characteristics of the entry, a feature table with the editing events and related details and a sequence zone with both the genomic sequence and the corresponding edited transcript. REDIdb is a relational database in which the browsing and identification of editing sites has been simplified by means of two facilities to either graphically display genomic or cDNA sequences or to show the corresponding alignment. In both cases, all editing sites are highlighted in colour and their relative positions are detailed by mousing over. New editing positions can be directly submitted to REDIdb after a user-specific registration to obtain authorized secure access. This first version of REDIdb database stores 9964 editing events and can be freely queried at http://biologia.unical.it/py_script/search.html.

  10. JDD, Inc. Database

    Science.gov (United States)

    Miller, David A., Jr.

    2004-01-01

    JDD, Inc. is a maintenance and custodial contracting company whose mission is to provide their clients in the private and government sectors "quality construction, construction management and cleaning services in the most efficient and cost effective manners" (JDD, Inc. Mission Statement). This company provides facilities support for Fort Riley in Fort Riley, Kansas, and for the NASA John H. Glenn Research Center at Lewis Field in Cleveland, Ohio. JDD, Inc. is owned and operated by James Vaughn, who started as a painter at NASA Glenn and has been working here for the past seventeen years. This summer I worked under Devan Anderson, who is the safety manager for JDD, Inc. in the Logistics and Technical Information Division (LTID) at Glenn Research Center. The LTID provides all transportation, secretarial, and security needs, and contract management of these various services, for the center. As a safety manager, my mentor provides Occupational Safety and Health Administration (OSHA) compliance for all JDD, Inc. employees and handles all other issues (Environmental Protection Agency issues, workers' compensation, safety and health training) relating to job safety. My summer assignment was not considered "groundbreaking research" like many other summer interns have done in the past, but it is just as important and beneficial to JDD, Inc. I initially created a database using Microsoft Excel to classify and categorize data pertaining to the numerous safety training certification courses instructed by our safety manager during the course of the fiscal year. This early portion of the database consisted only of data from the training certification courses (training field index, and which employees were present at or absent from these training courses). Once I completed this phase of the database, I decided to expand the database and add as many dimensions to it as possible. Throughout the last seven weeks, I have been compiling more data from day to day operations and been adding the

  11. An online database of nuclear electromagnetic moments

    International Nuclear Information System (INIS)

    Mertzimekis, T.J.; Stamou, K.; Psaltis, A.

    2016-01-01

    Measurements of nuclear magnetic dipole and electric quadrupole moments are considered quite important for the understanding of nuclear structure both near and far from the valley of stability. The recent advent of radioactive beams has resulted in a plethora of new, continuously flowing experimental data on nuclear structure, including nuclear moments, which hinders information management. A new, dedicated, public and user-friendly online database (http://magneticmoments.info) has been created comprising experimental data on nuclear electromagnetic moments. The present database supersedes existing printed compilations, including also non-evaluated series of data and relevant metadata, while putting strong emphasis on bimonthly updates. The scope, features and extensions of the database are reported.

  12. Hmrbase: a database of hormones and their receptors

    Science.gov (United States)

    Rashid, Mamoon; Singla, Deepak; Sharma, Arun; Kumar, Manish; Raghava, Gajendra PS

    2009-01-01

    Background Hormones are signaling molecules that play vital roles in various life processes, like growth and differentiation, physiology, and reproduction. These molecules are mostly secreted by endocrine glands, and transported to target organs through the bloodstream. Deficient, or excessive, levels of hormones are associated with several diseases such as cancer, osteoporosis, diabetes etc. Thus, it is important to collect and compile information about hormones and their receptors. Description This manuscript describes a database called Hmrbase which has been developed for managing information about hormones and their receptors. It is a highly curated database for which information has been collected from the literature and the public databases. The current version of Hmrbase contains comprehensive information about ~2000 hormones, e.g., about their function, source organism, receptors, mature sequences, structures etc. Hmrbase also contains information about ~3000 hormone receptors, in terms of amino acid sequences, subcellular localizations, ligands, and post-translational modifications etc. One of the major features of this database is that it provides data about ~4100 hormone-receptor pairs. A number of online tools have been integrated into the database, to provide the facilities like keyword search, structure-based search, mapping of a given peptide(s) on the hormone/receptor sequence, sequence similarity search. This database also provides a number of external links to other resources/databases in order to help in the retrieving of further related information. Conclusion Owing to the high impact of endocrine research in the biomedical sciences, the Hmrbase could become a leading data portal for researchers. The salient features of Hmrbase are hormone-receptor pair-related information, mapping of peptide stretches on the protein sequences of hormones and receptors, Pfam domain annotations, categorical browsing options, online data submission, Drug

  13. Guilt by Association: The 13 Micron Dust Emission Feature and Its Correlation to Other Gas and Dust Features

    Science.gov (United States)

    Sloan, G. C.; Kraemer, Kathleen E.; Goebel, J. H.; Price, Stephan D.

    2003-09-01

    A study of all full-scan spectra of optically thin oxygen-rich circumstellar dust shells in the database produced by the Short Wavelength Spectrometer on ISO reveals that the strength of several infrared spectral features correlates with the strength of the 13 μm dust feature. These correlated features include dust features at 19.8 and 28.1 μm and the bands produced by warm carbon dioxide molecules (the strongest of which are at 13.9, 15.0, and 16.2 μm). The database does not provide any evidence for a correlation of the 13 μm feature with a dust feature at 32 μm, and it is more likely that a weak emission feature at 16.8 μm arises from carbon dioxide gas rather than dust. The correlated dust features at 13, 20, and 28 μm tend to be stronger with respect to the total dust emission in semiregular and irregular variables associated with the asymptotic giant branch than in Mira variables or supergiants. This family of dust features also tends to be stronger in systems with lower infrared excesses and thus lower mass-loss rates. We hypothesize that the dust features arise from crystalline forms of alumina (13 μm) and silicates (20 and 28 μm). Based on observations with the ISO, a European Space Agency (ESA) project with instruments funded by ESA member states (especially the Principal Investigator countries: France, Germany, the Netherlands, and the United Kingdom) and with the participation of the Institute of Space and Astronautical Science (ISAS) and the National Aeronautics and Space Administration (NASA).

  14. Development of Vision Based Multiview Gait Recognition System with MMUGait Database

    Directory of Open Access Journals (Sweden)

    Hu Ng

    2014-01-01

    Full Text Available This paper describes the acquisition setup and development of a new gait database, MMUGait. This database consists of 82 subjects walking under normal condition and 19 subjects walking with 11 covariate factors, which were captured under two views. This paper also proposes a multiview model-based gait recognition system with joint detection approach that performs well under different walking trajectories and covariate factors, which include self-occluded or external occluded silhouettes. In the proposed system, the process begins by enhancing the human silhouette to remove the artifacts. Next, the width and height of the body are obtained. Subsequently, the joint angular trajectories are determined once the body joints are automatically detected. Lastly, crotch height and step-size of the walking subject are determined. The extracted features are smoothened by Gaussian filter to eliminate the effect of outliers. The extracted features are normalized with linear scaling, which is followed by feature selection prior to the classification process. The classification experiments carried out on MMUGait database were benchmarked against the SOTON Small DB from University of Southampton. Results showed correct classification rate above 90% for all the databases. The proposed approach is found to outperform other approaches on SOTON Small DB in most cases.
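    The smoothing and normalization steps described above can be sketched as follows. This is a generic stand-in (pure-Python Gaussian kernel smoothing plus min-max linear scaling), not the paper's implementation, and the knee-angle values are hypothetical:

```python
import math

def gaussian_smooth(signal, sigma=1.0):
    """Smooth a 1-D feature trajectory with a truncated Gaussian kernel,
    attenuating outliers as the Gaussian filtering step does."""
    radius = int(3 * sigma)
    kernel = [math.exp(-(i ** 2) / (2 * sigma ** 2)) for i in range(-radius, radius + 1)]
    total = sum(kernel)
    kernel = [k / total for k in kernel]
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), len(signal) - 1)  # clamp at borders
            acc += w * signal[idx]
        out.append(acc)
    return out

def min_max_scale(values, lo=0.0, hi=1.0):
    """Linear scaling of a feature vector to [lo, hi]."""
    vmin, vmax = min(values), max(values)
    return [lo + (v - vmin) * (hi - lo) / (vmax - vmin) for v in values]

# Hypothetical joint-angle trajectory (degrees) with one outlier spike.
angles = [10, 12, 11, 60, 13, 12, 11]
smoothed = gaussian_smooth(angles, sigma=1.0)
scaled = min_max_scale(smoothed)
```

    In practice a library routine (e.g., a SciPy 1-D Gaussian filter) would replace the hand-rolled kernel loop, but the effect on the feature vector is the same.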

  15. Governance, agricultural intensification, and land sparing in tropical South America

    OpenAIRE

    CEDDIA Michele Graziano; BARDSLEY N. O.; GOMEZ Y PALOMA Sergio; SEDLACEK S

    2014-01-01

    In this paper we address two topical questions: How do the quality of governance and agricultural intensification impact on spatial expansion of agriculture? Which aspects of governance are more likely to ensure that agricultural intensification allows sparing land for nature? Using data from the Food and Agriculture Organization, the World Bank, the World Database on Protected Areas, and the Yale Center for Environmental Law and Policy, we estimate a panel data model for six South A...

  16. The role of automated categorization in e-government information retrieval

    DEFF Research Database (Denmark)

    Jonasen, Tanja Svarre; Lykke, Marianne

    2013-01-01

    High-precision search results are essential for helping e-government employees complete work-based tasks. Prior studies have shown that existing features of e-government systems need improvement in terms of search facilities, navigation, and metadata adoption. This paper investigates how automated...

  17. Human Ageing Genomic Resources: new and updated databases

    Science.gov (United States)

    Tacutu, Robi; Thornton, Daniel; Johnson, Emily; Budovsky, Arie; Barardo, Diogo; Craig, Thomas; Diana, Eugene; Lehmann, Gilad; Toren, Dmitri; Wang, Jingwei; Fraifeld, Vadim E

    2018-01-01

    Abstract In spite of a growing body of research and data, human ageing remains a poorly understood process. Over 10 years ago we developed the Human Ageing Genomic Resources (HAGR), a collection of databases and tools for studying the biology and genetics of ageing. Here, we present HAGR’s main functionalities, highlighting new additions and improvements. HAGR consists of six core databases: (i) the GenAge database of ageing-related genes, in turn composed of a dataset of >300 human ageing-related genes and a dataset with >2000 genes associated with ageing or longevity in model organisms; (ii) the AnAge database of animal ageing and longevity, featuring >4000 species; (iii) the GenDR database with >200 genes associated with the life-extending effects of dietary restriction; (iv) the LongevityMap database of human genetic association studies of longevity with >500 entries; (v) the DrugAge database with >400 ageing or longevity-associated drugs or compounds; (vi) the CellAge database with >200 genes associated with cell senescence. All our databases are manually curated by experts and regularly updated to ensure high-quality data. Cross-links across our databases and to external resources help researchers locate and integrate relevant information. HAGR is freely available online (http://genomics.senescence.info/). PMID:29121237

  18. Databases for highway inventories. Proposal for a new model

    Energy Technology Data Exchange (ETDEWEB)

    Perez Casan, J.A.

    2016-07-01

    Database models for highway inventories are based on classical schemes for relational databases: many related tables, in which the database designer establishes, a priori, every detail that they consider relevant for inventory management. This kind of database presents several problems. First, adapting the model and its applications when new database features appear is difficult. In addition, the different needs of different sets of road inventory users are difficult to fulfil with these schemes. For example, maintenance management services, road authorities and emergency services have different needs. In addition, this kind of database cannot be adapted to new scenarios, such as other countries and regions (that may classify roads or name certain elements differently). The problem is more complex if the language used in these scenarios is not the same as that used in the database design. In addition, technicians need a long time to learn to use the database efficiently. This paper proposes a flexible, multilanguage and multipurpose database model, which gives an effective and simple solution to the aforementioned problems. (Author)
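    One common way to get the flexibility this proposal asks for is to stop fixing every inventory attribute as a column and instead store attributes as rows (an entity-attribute-value layout), with a separate table for per-language labels. A minimal sketch with SQLite; the table, element, and attribute names are illustrative, not taken from the paper:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE element   (id INTEGER PRIMARY KEY, kind TEXT);
    CREATE TABLE attribute (element_id INTEGER, name TEXT, value TEXT);
    CREATE TABLE label     (name TEXT, lang TEXT, text TEXT);
""")
db.execute("INSERT INTO element VALUES (1, 'guardrail')")
db.executemany("INSERT INTO attribute VALUES (?, ?, ?)",
               [(1, "length_m", "120"), (1, "material", "steel")])
# Per-language display labels keep the model multilanguage without schema changes.
db.executemany("INSERT INTO label VALUES (?, ?, ?)",
               [("guardrail", "en", "guardrail"), ("guardrail", "es", "barrera")])

# A new attribute for a new user group needs no schema change, only a new row:
db.execute("INSERT INTO attribute VALUES (1, 'install_year', '2009')")

rows = db.execute(
    "SELECT name, value FROM attribute WHERE element_id = 1 ORDER BY name"
).fetchall()
print(rows)
```

    The trade-off is that queries become joins over the attribute table, so this layout suits heterogeneous, evolving inventories better than fixed, heavily analyzed ones.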

  19. Government influence on international trade in uranium

    International Nuclear Information System (INIS)

    1978-01-01

    The subject is dealt with in sections entitled: introduction (history of uncertainty in the uranium market, opposition to nuclear power); unsatisfactory features of today's trade conditions (including discussion of restrictions on production, exports and imports); desirable principles governing international trade in uranium, apart from the non-proliferation issue (limitation on governmental intervention for economic purposes, reservation of adequate uranium resources in exporting countries, government export price control); and desirable principles for achieving balance between security of supply and non-proliferation (need for consensus, reprocessing and fast breeder reactors, principles guiding government controls established for non-proliferation purposes). (U.K.)

  20. Societal Drivers of European Water Governance: A Comparison of Urban River Restoration Practices in France and Germany

    Directory of Open Access Journals (Sweden)

    Aude Zingraff-Hamed

    2017-03-01

    Full Text Available European water governance took a decisive turn with the formulation of the Water Framework Directive (WFD), which demands the restoration of all water bodies that have not achieved sufficient ecological status. Urban rivers are particularly impaired by human activities, and their restoration is motivated by multiple ecological and societal drivers, such as the requirements of laws and legislation and citizens' need for a better quality of life. In this study we investigated the relative influence of socio-political and socio-cultural drivers on urban river restoration by comparing projects from different policy contexts and cultural norms to cross-fertilize knowledge. A database of 75 projects in major French and German cities was compiled to apply (a) a comparative statistical analysis of main project features, i.e., motivation, goals, measures, morphological status, and project date; and (b) a qualitative textual analysis of project descriptions and titles. The results showed that, despite a powerful European directive, urban river restoration projects retain national specificities. The WFD drives German urban river restoration more strongly than French. This study showed the limits of macro-level governance and the influence of micro-level governance driven by societal aspects such as nature perception and the relationships between humans and rivers.

  1. Governance, agricultural intensification, and land sparing in tropical South America

    Science.gov (United States)

    Ceddia, Michele Graziano; Bardsley, Nicholas Oliver; Gomez-y-Paloma, Sergio; Sedlacek, Sabine

    2014-01-01

    In this paper we address two topical questions: How do the quality of governance and agricultural intensification impact on spatial expansion of agriculture? Which aspects of governance are more likely to ensure that agricultural intensification allows sparing land for nature? Using data from the Food and Agriculture Organization, the World Bank, the World Database on Protected Areas, and the Yale Center for Environmental Law and Policy, we estimate a panel data model for six South American countries and quantify the effects of major determinants of agricultural land expansion, including various dimensions of governance, over the period 1970–2006. The results indicate that the effect of agricultural intensification on agricultural expansion is conditional on the quality and type of governance. When considering conventional aspects of governance, agricultural intensification leads to an expansion of agricultural area when governance scores are high. When looking specifically at environmental aspects of governance, intensification leads to a spatial contraction of agriculture when governance scores are high, signaling a sustainable intensification process. PMID:24799696

  2. Real Time Baseball Database

    Science.gov (United States)

    Fukue, Yasuhiro

    The author describes the system outline, features, and operation of the "Nikkan Sports Realtime Baseball Database", which was developed and operated by Nikkan Sports Shimbun, K. K. The system enables numerical data of professional baseball games to be input as the games proceed and updates the data in real time, just in time. Besides serving as a supporting tool for preparing newspapers, it is also available to broadcasting media and to general users through NTT dial Q2 and other channels.

  3. The HITRAN 2004 molecular spectroscopic database

    Energy Technology Data Exchange (ETDEWEB)

    Rothman, L.S. [Harvard-Smithsonian Center for Astrophysics, Atomic and Molecular Physics Division, Cambridge, MA 02138 (United States)]. E-mail: lrothman@cfa.harvard.edu; Jacquemart, D. [Harvard-Smithsonian Center for Astrophysics, Atomic and Molecular Physics Division, Cambridge, MA 02138 (United States); Barbe, A. [Universite de Reims-Champagne-Ardenne, Groupe de Spectrometrie Moleculaire et Atmospherique, 51062 Reims (France)] (and others)

    2005-12-01

    This paper describes the status of the 2004 edition of the HITRAN molecular spectroscopic database. The HITRAN compilation consists of several components that serve as input for radiative transfer calculation codes: individual line parameters for the microwave through visible spectra of molecules in the gas phase; absorption cross-sections for molecules having dense spectral features, i.e., spectra in which the individual lines are unresolvable; individual line parameters and absorption cross-sections for bands in the ultra-violet; refractive indices of aerosols; tables and files of general properties associated with the database; and database management software. The line-by-line portion of the database contains spectroscopic parameters for 39 molecules including many of their isotopologues. The format of the section of the database on individual line parameters of HITRAN has undergone the most extensive enhancement in almost two decades. It now lists the Einstein A-coefficients, statistical weights of the upper and lower levels of the transitions, a better system for the representation of quantum identifications, and enhanced referencing and uncertainty codes. In addition, there is a provision for making corrections to the broadening of line transitions due to line mixing.

  4. The HITRAN 2004 molecular spectroscopic database

    International Nuclear Information System (INIS)

    Rothman, L.S.; Jacquemart, D.; Barbe, A.

    2005-01-01

    This paper describes the status of the 2004 edition of the HITRAN molecular spectroscopic database. The HITRAN compilation consists of several components that serve as input for radiative transfer calculation codes: individual line parameters for the microwave through visible spectra of molecules in the gas phase; absorption cross-sections for molecules having dense spectral features, i.e., spectra in which the individual lines are unresolvable; individual line parameters and absorption cross-sections for bands in the ultra-violet; refractive indices of aerosols; tables and files of general properties associated with the database; and database management software. The line-by-line portion of the database contains spectroscopic parameters for 39 molecules including many of their isotopologues. The format of the section of the database on individual line parameters of HITRAN has undergone the most extensive enhancement in almost two decades. It now lists the Einstein A-coefficients, statistical weights of the upper and lower levels of the transitions, a better system for the representation of quantum identifications, and enhanced referencing and uncertainty codes. In addition, there is a provision for making corrections to the broadening of line transitions due to line mixing.

  5. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Trypanosomes Database Database Description General information of database Database name Trypanosomes Database...stitute of Genetics Research Organization of Information and Systems Yata 1111, Mishima, Shizuoka 411-8540, JAPAN E mail: Database...y Name: Trypanosoma Taxonomy ID: 5690 Taxonomy Name: Homo sapiens Taxonomy ID: 9606 Database description The... Article title: Author name(s): Journal: External Links: Original website information Database maintenance s...DB (Protein Data Bank) KEGG PATHWAY Database DrugPort Entry list Available Query search Available Web servic

  6. Database usage and performance for the Fermilab Run II experiments

    International Nuclear Information System (INIS)

    Bonham, D.; Box, D.; Gallas, E.; Guo, Y.; Jetton, R.; Kovich, S.; Kowalkowski, J.; Kumar, A.; Litvintsev, D.; Lueking, L.; Stanfield, N.; Trumbo, J.; Vittone-Wiersma, M.; White, S.P.; Wicklund, E.; Yasuda, T.; Maksimovic, P.

    2004-01-01

    The Run II experiments at Fermilab, CDF and D0, have extensive database needs covering many areas of their online and offline operations. Delivering data to users and processing farms worldwide has presented major challenges to both experiments. The range of applications employing databases includes calibration (conditions), trigger information, run configuration, run quality, luminosity, data management, and others. Oracle is the primary database product used for these applications at Fermilab, and some of its advanced features have been employed, such as table partitioning and replication. There is also experience with open-source database products such as MySQL for secondary databases used, for example, in monitoring. Tools employed for monitoring the operation and diagnosing problems are also described.

  7. An examination of the factors governing the development of karst topography in the Cumberland Valley of Pennsylvania

    International Nuclear Information System (INIS)

    Ackerman, R.V.

    1993-01-01

    The landscape of the Cumberland Valley of south-central Pennsylvania is dominated by karst topography. A study was initiated to determine whether the development of karst was controlled primarily by geologic structure or by lithologic differences. Existing data concerning the geographic locations of karst features, the hydrogeologic characteristics of the Cumberland Valley, and the chemistry of the eleven carbonate formations within the 518 km² study area were compiled. Data concerning 366 mapped sinkholes and over 9,000 additional karst features, and their relations to the structural, lithological, and spatial characteristics of the study area, were collected and compiled into the database. Other factors contributing to karst development, such as groundwater flow, soil and colluvium characteristics, and geographic distribution, were considered. The data suggest that structure dominates lithology in the development of karst features within the study area. Structural features such as fractures, joints and folds, which create secondary porosity, are prerequisite for solution of the carbonate bedrock. Joint systems, fold axes, igneous intrusions, caves, springs and groundwater flow have a significant impact on the development of karst features. The role of faults proved inconclusive. There are a greater number of karst features per unit area in areas of purer limestones (units with a lower percentage of acid-insoluble residue). Lithological variations impact karst development only when structural features are present to provide the secondary porosity that enhances chemical weathering. The distribution of karst features and the geologic factors governing their development and distribution should be taken into account when land-use decisions in karst terrains are made.

  8. Database requirements for the Advanced Test Accelerator project

    International Nuclear Information System (INIS)

    Chambers, F.W.

    1984-01-01

    The database requirements for the Advanced Test Accelerator (ATA) project are outlined. ATA is a state-of-the-art electron accelerator capable of producing energetic (50 million electron volt), high current (10,000 ampere), short pulse (70 billionths of a second) beams of electrons for a wide variety of applications. Databasing is required for two applications. First, the description of the configuration of the facility itself requires an extended database. Second, experimental data gathered from the facility must be organized and managed to ensure its full utilization. The two applications are intimately related, since the acquisition and analysis of experimental data require knowledge of the system configuration. This report reviews the needs of the ATA program and the current implementation, intentions, and desires. These database applications have several unique aspects which are of interest and will be highlighted. The features desired in an ultimate database system are outlined. 3 references, 5 figures

  9. Action Recognition by Joint Spatial-Temporal Motion Feature

    Directory of Open Access Journals (Sweden)

    Weihua Zhang

    2013-01-01

    Full Text Available This paper introduces a method for human action recognition based on the extraction of optical-flow motion features. Automatic spatial and temporal alignments are combined in order to encourage the temporal consistence of each action by an enhanced dynamic time warping (DTW) algorithm. At the same time, a fast method based on a coarse-to-fine DTW constraint is introduced to improve computational performance without reducing accuracy. The main contributions of this study include (1) a joint spatial-temporal multiresolution optical-flow computation method which can encode more informative motion information than recently proposed methods, (2) an enhanced DTW method to improve the temporal consistence of motion in action recognition, and (3) a coarse-to-fine DTW constraint on motion-feature pyramids to speed up recognition. Using this method, high recognition accuracy is achieved on different action databases such as the Weizmann and KTH databases.
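    The enhanced, coarse-to-fine DTW variant of the paper is not reproduced here; as a reference point, the classic DTW recurrence it builds on can be sketched in pure Python (illustrative only, 1-D sequences with absolute-difference local cost):

```python
# Minimal dynamic time warping (DTW) sketch; the paper's enhanced,
# coarse-to-fine variant adds constraints on top of this recurrence.
def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) DTW distance between two 1-D sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = DTW distance between prefixes a[:i] and b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0: warping absorbs the repeat
```

    The coarse-to-fine constraint of the paper restricts the (i, j) cells evaluated in the inner loop to a band around a path found at a lower resolution, which is where the speed-up comes from.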

  10. Learning lessons from Natech accidents - the eNATECH accident database

    Science.gov (United States)

    Krausmann, Elisabeth; Girgin, Serkan

    2016-04-01

    When natural hazards impact industrial facilities that house or process hazardous materials, fires, explosions and toxic releases can occur. This type of accident is commonly referred to as a Natech accident. In order to prevent the recurrence of accidents or to better mitigate their consequences, lessons-learned studies using available accident data are usually carried out. Through post-accident analysis, conclusions can be drawn on the most common damage and failure modes and hazmat release paths, particularly vulnerable storage and process equipment, and the hazardous materials most commonly involved in these types of accidents. These analyses also lend themselves to identifying technical and organisational risk-reduction measures that require improvement or are missing. Industrial accident databases are commonly used for retrieving sets of Natech accident case histories for further analysis. These databases contain accident data from the open literature, government authorities or in-company sources. The quality of reported information is not uniform and exhibits different levels of detail and accuracy. This is due to the difficulty of finding qualified information sources, especially in situations where accident reporting by industry or by authorities is not compulsory, e.g. when spill quantities are below the reporting threshold. Data collection then has to rely on voluntary record keeping, often by non-experts. The level of detail is particularly non-uniform for Natech accident data, depending on whether the consequences of the Natech event were major or minor, and whether comprehensive information was available for reporting. In addition to the reporting bias towards high-consequence events, industrial accident databases frequently lack information on the severity of the triggering natural hazard, as well as on the failure modes that led to the hazmat release. This makes it difficult to reconstruct the dynamics of the accident and renders the development of

  11. Optimized feature subsets for epileptic seizure prediction studies.

    Science.gov (United States)

    Direito, Bruno; Ventura, Francisco; Teixeira, César; Dourado, António

    2011-01-01

    The reduction of the number of EEG features given as inputs to epileptic seizure predictors is a needed step towards the development of a transportable device for real-time warning. This paper presents a comparative study of three feature selection methods based on Support Vector Machines: Minimum-Redundancy Maximum-Relevance, Recursive Feature Elimination, and Genetic Algorithms. The results show that, for three patients of the European Database on Epilepsy, the most important univariate features are related to spectral information and statistical moments.

  12. Base Carbone. Documentation about the emission factors of the Base CarboneR database

    International Nuclear Information System (INIS)

    2014-01-01

    The Base Carbone® is a public database of emission factors, as required for carrying out carbon accounting exercises. It is administered by ADEME, but its governance involves many stakeholders and it can be added to freely. The articulation and convergence of environmental regulations require data homogenization, and the Base Carbone® proposes to be this centralized data source. Today, it is the reference database for article 75 of the Grenelle II Act. It is also entirely consistent with article L1341-3 of the French Transport Code and the default values of the European emission quotas exchange system. The data of the Base Carbone® can be freely consulted by all. Furthermore, the originality of this tool is that it enables third parties to propose their own data (a feature scheduled for February 2015). These data are then assessed for quality and transparency, then validated or refused for incorporation into the Base Carbone®. Lastly, a forum (planned for February 2015) will enable users to ask questions about the data or to contest them. The administration of the Base Carbone® is handled by ADEME. However, its orientation and the data that it contains are validated by a governance committee incorporating various public and private stakeholders. Lastly, transparency is one of the keystones of the Base Carbone®: the documentation details the hypotheses underlying the construction of all the data in the base and refers to the studies that enabled their construction. This document brings together the different versions of the Base Carbone® documentation: the most recent version (v11.5) and the previous version (v11.0), which is split into two parts dealing with the general case and with the specific case of overseas territories.

  13. Institutional Fit and River Basin Governance: a New Approach Using Multiple Composite Measures

    Directory of Open Access Journals (Sweden)

    Louis Lebel

    2013-03-01

    Full Text Available The notion that effective environmental governance depends in part on achieving a reasonable fit between institutional arrangements and the features of ecosystems and their interconnections with users has been central to much thinking about social-ecological systems for more than a decade. Based on expert consultations, this study proposes a set of six dimensions of fit for water governance regimes and then empirically explores variation in measures of these in 28 case studies of national parts of river basins in Europe, Asia, Latin America, and Africa, drawing on a database compiled by the Twin2Go project. The six measures capture different but potentially important dimensions of fit: allocation, integration, conservation, basinization, participation, and adaptation. Based on combinations of responses to a standard questionnaire filled in by groups of experts in each basin, we derived quantitative measures for each indicator. Substantial variation in these measures of fit was apparent among basins in developing and developed countries. Geographical location is not a barrier to high institutional fit, but within basins different measures of fit often diverge. This suggests it is difficult, but not impossible, to simultaneously achieve a high fit against multiple challenging conditions. Comparing multidimensional fit profiles gives a sense of how well water governance regimes are equipped for dealing with a range of natural resource and use-related conditions and suggests areas for priority intervention. The findings of this study thus confirm and help explain previous work concluding that context is important for understanding the variable consequences of institutional reform on water governance practices as well as on social and environmental outcomes.

  14. Mining Outlier Data in Mobile Internet-Based Large Real-Time Databases

    Directory of Open Access Journals (Sweden)

    Xin Liu

    2018-01-01

    Full Text Available Mining outlier data guarantees access security and data scheduling in parallel databases and maintains high-performance operation of real-time databases. Traditional mining methods generate abundant interference data and suffer reduced accuracy, efficiency, and stability, causing severe deficiencies. This paper proposes a new method for mining outlier data, which is used to analyze real-time data features, obtain magnitude-spectrum models of outlier data, establish a decision-tree information-chain transmission model for outlier data in the mobile Internet, obtain the information flow of internal outlier data in the information chain of a large real-time database, and cluster the data. From the local characteristic time-scale parameters of the information flow, the phase features of the outlier data before filtering are obtained; the decision-tree outlier-classification feature-filtering algorithm is adopted to acquire signals for analysis of the instantaneous amplitude and to obtain the phase-frequency characteristics of the outlier data. Wavelet-transform threshold denoising is combined with signal denoising to analyze the data offset, to correct the constructed detection filter model, and to realize outlier data mining. The simulation suggests that the method captures the characteristic response distribution of outlier data; reduces response time, iteration frequency, and mining error rate; and improves mining adaptation and coverage, showing good mining outcomes.

  15. BioWarehouse: a bioinformatics database warehouse toolkit.

    Science.gov (United States)

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David W J; Tenenbaum, Jessica D; Karp, Peter D

    2006-03-23

    This article addresses the problem of interoperation of heterogeneous bioinformatics databases. We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. BioWarehouse embodies significant progress on the database integration problem for bioinformatics.
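    The warehouse approach makes the abstract's closing query (enzyme activities with no known sequence) a single SQL join once all sources live in one schema. The sketch below uses Python's built-in SQLite and invented table and column names (`enzyme_activity`, `protein_seq`) as simplified stand-ins, not BioWarehouse's actual schema:

```python
import sqlite3

# Miniature "warehouse": two source-derived tables in one database,
# queried together with plain SQL. Names are illustrative only.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE enzyme_activity (ec_number TEXT PRIMARY KEY, name TEXT);
CREATE TABLE protein_seq    (id INTEGER PRIMARY KEY, ec_number TEXT);
INSERT INTO enzyme_activity VALUES ('1.1.1.1', 'alcohol dehydrogenase');
INSERT INTO enzyme_activity VALUES ('9.9.9.9', 'hypothetical orphan enzyme');
INSERT INTO protein_seq VALUES (1, '1.1.1.1');
""")

# Enzyme activities with no associated sequence: the "orphan enzyme"
# question from the abstract, answered by an anti-join.
orphans = con.execute("""
    SELECT ea.ec_number FROM enzyme_activity ea
    LEFT JOIN protein_seq ps ON ps.ec_number = ea.ec_number
    WHERE ps.id IS NULL
""").fetchall()
print(orphans)  # [('9.9.9.9',)]
```

    The point of the example is the one the abstract makes: once heterogeneous sources share a relational schema, cross-source questions become ordinary SQL rather than bespoke integration code.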

  16. Constructing an XML database of linguistics data

    Directory of Open Access Journals (Sweden)

    J H Kroeze

    2010-04-01

    Full Text Available A language-oriented, multi-dimensional database of the linguistic characteristics of the Hebrew text of the Old Testament can enable researchers to do ad hoc queries. XML is a suitable technology for transforming free text into a database. A clause's word order can be kept intact while other features, such as syntactic and semantic functions, can be marked as elements or attributes. The elements or attributes from the XML "database" can be accessed and processed by a 4th-generation programming language, such as Visual Basic. XML is explored as an option to build an exploitable database of linguistic data by representing inherently multi-dimensional data, including syntactic and semantic analyses of free text.
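    A minimal sketch of the idea, with hypothetical element and attribute names (the article's actual markup scheme may differ): word order is preserved by document order, while syntactic and semantic functions become attributes that a host language can query.

```python
import xml.etree.ElementTree as ET

# Illustrative clause markup: document order keeps the original word order,
# attributes carry the analyses. Element/attribute names are invented here.
xml_text = """
<clause>
  <word syntax="predicate" semantics="action">created</word>
  <word syntax="subject"   semantics="agent">God</word>
</clause>
"""

root = ET.fromstring(xml_text)
# Ad hoc query: all words functioning as subject, regardless of position.
subjects = [w.text for w in root.findall("word") if w.get("syntax") == "subject"]
print(subjects)  # ['God']
```

    The same traversal could be written in any 4th-generation language with an XML parser, which is the article's point about Visual Basic.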

  17. Database Description - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us SKIP Stemcell Database Database Description General information of database Database name SKIP Stemcell Database...rsity Journal Search: Contact address http://www.skip.med.keio.ac.jp/en/contact/ Database classification Human Genes and Diseases Dat...abase classification Stemcell Article Organism Taxonomy Name: Homo sapiens Taxonomy ID: 9606 Database...ks: Original website information Database maintenance site Center for Medical Genetics, School of medicine, ...lable Web services Not available URL of Web services - Need for user registration Not available About This Database Database

  18. Cadastral Database Positional Accuracy Improvement

    Science.gov (United States)

    Hashim, N. M.; Omar, A. H.; Ramli, S. N. M.; Omar, K. M.; Din, N.

    2017-10-01

    Positional Accuracy Improvement (PAI) is the process of refining the geometry of features in a geospatial dataset to improve their actual positions. The actual position relates to the absolute position in a specific coordinate system and to the relation with neighbouring features. With the growth of spatial technology, especially Geographical Information Systems (GIS) and Global Navigation Satellite Systems (GNSS), a PAI campaign is inevitable, especially for legacy cadastral databases. Integrating a legacy dataset with a higher-accuracy dataset such as GNSS observations is a potential solution for improving the legacy dataset. However, merely integrating both datasets will lead to a distortion of the relative geometry; the improved dataset should be further treated to minimize inherent errors and to fit the new, accurate dataset. The main focus of this study is to describe a method of angular-based Least Squares Adjustment (LSA) for the PAI process of a legacy dataset. The existing high-accuracy dataset, known as the National Digital Cadastral Database (NDCDB), is then used as a benchmark to validate the results. It was found that the proposed technique is highly feasible for positional accuracy improvement of legacy spatial datasets.

  19. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Arabidopsis Phenome Database Database Description General information of database Database n... BioResource Center Hiroshi Masuya Database classification Plant databases - Arabidopsis thaliana Organism T...axonomy Name: Arabidopsis thaliana Taxonomy ID: 3702 Database description The Arabidopsis thaliana phenome i...heir effective application. We developed the new Arabidopsis Phenome Database integrating two novel database...seful materials for their experimental research. The other, the “Database of Curated Plant Phenome” focusing

  20. A performance evaluation of in-memory databases

    Directory of Open Access Journals (Sweden)

    Abdullah Talha Kabakus

    2017-10-01

    Full Text Available The popularity of NoSQL databases has increased due to the need for (1) processing vast amounts of data faster than relational database management systems by taking advantage of a highly scalable architecture, (2) a flexible (schema-free) data structure, and (3) low latency and high performance. Although memory usage is not a major criterion for evaluating the performance of algorithms, since these databases serve data from memory, their memory usage is also measured alongside the time taken to complete each operation, to reveal which one uses memory most efficiently. Currently there exist over 225 NoSQL databases that provide different features and characteristics, so it is necessary to reveal which one provides better performance for different data operations. In this paper, we experiment with widely used in-memory databases to measure their performance in terms of (1) the time taken to complete operations and (2) how efficiently they use memory during operations. As per the results reported in this paper, there is no database that provides the best performance for all data operations. The results also show that even though an RDBMS stores its data in memory, its overall performance is worse than that of the NoSQL databases.
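    The measurement methodology (time each class of operation separately) can be sketched in a few lines; here a plain Python dict stands in for an in-memory key-value store, purely for illustration. A real comparison, as in the article, would drive the actual engines over their client APIs and also sample process memory.

```python
import time

# Toy micro-benchmark: separately time the write and read paths against an
# in-process dict standing in for an in-memory store (illustrative only).
def bench(n=10_000):
    store = {}
    t0 = time.perf_counter()
    for i in range(n):
        store[f"key:{i}"] = i                          # write path
    t_write = time.perf_counter() - t0

    t0 = time.perf_counter()
    total = sum(store[f"key:{i}"] for i in range(n))   # read path
    t_read = time.perf_counter() - t0
    return t_write, t_read, total

w, r, total = bench()
print(f"writes: {w:.4f}s  reads: {r:.4f}s")
```

    Timing each operation class in isolation is what lets the article conclude that no single engine wins on all operations: the rankings can differ between the write and read columns.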

  1. Design of database management system for 60Co container inspection system

    International Nuclear Information System (INIS)

    Liu Jinhui; Wu Zhifang

    2007-01-01

    The functions of the database management system were designed according to the features of the cobalt-60 container inspection system, and the corresponding software was constructed, including database querying and searching. The database operation program is built on Microsoft SQL Server and Visual C++ under Windows 2000. The software implements database querying, image and graph display, statistics, report forms and their printing, interface design, etc. It is powerful and flexible for operation and information querying, and has been successfully used in the real database management system of the cobalt-60 container inspection system. (authors)

  2. Open Geoscience Database

    Science.gov (United States)

    Bashev, A.

    2012-04-01

    Currently there is an enormous number of various geoscience databases. Unfortunately, the only users of the majority of these databases are their elaborators. There are several reasons for that: incompatibility, the specificity of tasks and objects, and so on. However, the main obstacles to wide usage of geoscience databases are complexity for elaborators and complication for users. The complexity of the architecture leads to high costs that block public access. The complication prevents users from understanding when and how to use the database. Only databases associated with GoogleMaps don't have these drawbacks, but they could hardly be called "geoscience". Nevertheless, an open and simple geoscience database is necessary, at least for educational purposes (see our abstract for ESSI20/EOS12). We developed a database and a web interface to work with it, and it is now accessible at maps.sch192.ru. In this database a result is a value of a parameter (no matter which) at a station with a certain position, associated with metadata: the date when the result was obtained; the type of station (lake, soil, etc.); and the contributor that sent the result. Each contributor has their own profile, which allows the reliability of the data to be estimated. The results can be represented on a GoogleMaps space image as a point at a certain position, coloured according to the value of the parameter. There are default colour scales, and each registered user can create their own scale. The results can also be extracted as a *.csv file. For both types of representation one can select the data by date, object type, parameter type, area, and contributor. The data are uploaded in *.csv format: Name of the station; Lattitude(dd.dddddd); Longitude(ddd.dddddd); Station type; Parameter type; Parameter value; Date(yyyy-mm-dd). The contributor is recognised while entering. This is the minimal set of features required to connect a value of a parameter with a position and see the results. All the complicated data
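    The semicolon-delimited upload format described above can be parsed with the standard library. The sample record and the internal field names below are invented for illustration; only the field order and the `;` delimiter come from the abstract.

```python
import csv
import io

# One invented record in the upload format:
# name; latitude; longitude; station type; parameter type; value; date
sample = "Lake Baikal;53.558900;108.164900;lake;pH;7.4;2011-08-15\n"

fields = ["name", "lat", "lon", "station_type", "parameter", "value", "date"]
reader = csv.DictReader(io.StringIO(sample), fieldnames=fields, delimiter=";")

# Convert the numeric columns so the values can be plotted/filtered.
rows = [{**row,
         "lat": float(row["lat"]),
         "lon": float(row["lon"]),
         "value": float(row["value"])} for row in reader]

print(rows[0]["name"], rows[0]["lat"])  # Lake Baikal 53.5589
```

    Keeping the upload format this flat is what makes the database "simple": one line per result, with position, parameter, value, and date all in fixed positions.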

  3. The CMS ECAL database services for detector control and monitoring

    International Nuclear Information System (INIS)

    Arcidiacono, Roberta; Marone, Matteo; Badgett, William

    2010-01-01

    In this paper we describe the database services for the control and monitoring of the electromagnetic calorimeter of the CMS experiment at the LHC. After a general description of the software infrastructure, we present the organization of the tables in the database, which has been designed to simplify the development of software interfaces. This is achieved by including in the database the description of each relevant table. We also give some estimates of the final size and performance of the system.

  4. Benchmarking database performance for genomic data.

    Science.gov (United States)

    Khushi, Matloob

    2015-06-01

    Genomic regions represent features such as gene annotations, transcription factor binding sites and epigenetic modifications. Performing various genomic operations, such as identifying overlapping/non-overlapping regions or nearest gene annotations, is a common research need. The data can be saved in a database system for easy management; however, there is no comprehensive built-in database algorithm at present to identify overlapping regions. Therefore I have developed a novel region-mapping (RegMap) SQL-based algorithm to perform genomic operations and have benchmarked the performance of different databases. Benchmarking identified that PostgreSQL extracts overlapping regions much faster than MySQL. Insertion and data uploads in PostgreSQL were also better, although the general searching capability of both databases was almost equivalent. In addition, using the algorithm pair-wise, overlaps of >1000 datasets of transcription factor binding sites and histone marks, collected from previous publications, were reported, and it was found that HNF4G significantly co-locates with cohesin subunit STAG1 (SA1). © 2015 Wiley Periodicals, Inc.
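    The core of any SQL region-overlap operation is the standard interval-intersection predicate; RegMap's specific optimizations are not published in this abstract, so the sketch below shows only that predicate, using Python's built-in SQLite and invented table names (half-open [start, stop) coordinates assumed):

```python
import sqlite3

# Two illustrative region sets; schema and data are invented for the example.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tf_sites (chrom TEXT, start INT, stop INT);
CREATE TABLE histone  (chrom TEXT, start INT, stop INT);
INSERT INTO tf_sites VALUES ('chr1', 100, 200), ('chr1', 500, 600);
INSERT INTO histone  VALUES ('chr1', 150, 250), ('chr2', 100, 200);
""")

# Intervals a and b overlap iff a.start < b.stop AND b.start < a.stop
# (and they lie on the same chromosome).
overlaps = con.execute("""
    SELECT a.start, a.stop, b.start, b.stop
    FROM tf_sites a JOIN histone b
      ON a.chrom = b.chrom AND a.start < b.stop AND b.start < a.stop
""").fetchall()
print(overlaps)  # [(100, 200, 150, 250)]
```

    On large tables this join is where engines diverge: an index usable for the range predicate (or a specialized scheme such as binning) determines whether the query scans or seeks, which is the kind of difference the benchmark measures.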

  5. SIMS: addressing the problem of heterogeneity in databases

    Science.gov (United States)

    Arens, Yigal

    1997-02-01

    The heterogeneity of remotely accessible databases -- with respect to contents, query language, semantics, organization, etc. -- presents serious obstacles to convenient querying. The SIMS (single interface to multiple sources) system addresses this global integration problem. It does so by defining a single language for describing the domain about which information is stored in the databases and using this language as the query language. Each database to which SIMS is to provide access is modeled using this language. The model describes a database's contents, organization, and other relevant features. SIMS uses these models, together with a planning system drawing on techniques from artificial intelligence, to decompose a given user's high-level query into a series of queries against the databases and other data manipulation steps. The retrieval plan is constructed so as to minimize data movement over the network and maximize parallelism to increase execution speed. SIMS can recover from network failures during plan execution by obtaining data from alternate sources, when possible. SIMS has been demonstrated in the domains of medical informatics and logistics, using real databases.

  6. A Study on the Korea Database Industry Promotion Act Legislation

    Directory of Open Access Journals (Sweden)

    Bae, Seoung-Hun

    2013-09-01

    Full Text Available The Database Industry Promotion Act was proposed at the National Assembly plenary session on July 26, 2012, and since then it has been in the process of enactment in consultation with all the governmental departments concerned. The recent trend of economic globalization and smart device innovation suggests new opportunities and challenges for all industries. The database industry is also facing a new phase in an era of smart innovation. Korea is at a moment of opportunity to take an innovative approach to promoting the database industry. Korea should set up a national policy to promote the database industry for citizens, government, and research institutions, as well as enterprises. Above all, the Database Industry Promotion Act could play a great role in promoting the social infrastructure to enhance the capacity of small and medium-sized enterprises. This article discusses the background of the development of the Database Industry Promotion Act and its legislative processes in order to clarify its legal characteristics, including the meaning of the act. In addition, this article explains individual items related to the overall structure of the Database Industry Promotion Act. Finally, this article reviews the economic effects of the database industry now and in the future.

  7. Recent Developments in German Corporate Governance

    NARCIS (Netherlands)

    Goergen, M.; Manjon, M.C.; Renneboog, L.D.R.

    2004-01-01

    We contrast the features of the German corporate governance system with those of other systems and discuss the recent regulatory initiatives. For example, the rules on insider trading and anti-trust have been strengthened. The Restructuring Act has been revised to prevent minority shareholders from

  8. SmallSat Database

    Science.gov (United States)

    Petropulos, Dolores; Bittner, David; Murawski, Robert; Golden, Bert

    2015-01-01

    The SmallSat has unrealized potential in both private industry and the federal government. Currently over 70 companies, 50 universities and 17 governmental agencies are involved in SmallSat research and development. In 1994, the U.S. Army Missile and Defense mapped the moon using smallSat imagery. Since then smart phones have introduced this imagery to people around the world as diverse industries watched this trend. The deployment cost of smallSats is also greatly reduced compared to traditional satellites, because multiple units can be deployed in a single mission. Imaging payloads have become more sophisticated, smaller and lighter. In addition, the growth of small technology obtained from private industries has led to the more widespread use of smallSats. This includes greater revisit rates in imagery, significantly lower costs, the ability to update technology more frequently and the ability to decrease vulnerability to enemy attacks. The popularity of smallSats shows a changing mentality in this fast-paced world of tomorrow. What impact has this created on the NASA communication networks now and in future years? In this project, we are developing the SmallSat Relational Database, which can support a simulation of smallSats within the NASA SCaN Compatibility Environment for Networks and Integrated Communications (SCENIC) Modeling and Simulation Lab. The NASA Space Communications and Networks (SCaN) Program can use this modeling to project required network support needs in the next 10 to 15 years. The SmallSat Relational Database could model smallSats just as the other SCaN databases model the more traditional larger satellites, with a few exceptions, one being that the SmallSat Database is designed to be built-to-order. The SmallSat database holds various hardware configurations that can be used to model a smallSat. It will require significant effort to develop, as the research material can only be populated by hand to obtain the unique data.

  9. Spatialising Agricultural Water Governance Data in Polycentric Regimes

    Directory of Open Access Journals (Sweden)

    Faith Sternlieb

    2015-06-01

    Full Text Available Water governance in the Colorado River Basin (CRB) is based on a historical and complex set of policies, legal decisions, and operational guidelines called the Law of the River. Behind the complex institutional structure lies an intricate web of data on water, most of which are hydrogeological in nature. However, we posit that in order to realise sustainable water governance, management efforts must also address data on water governance. Therefore, our central research question is: what is the role of water governance data in water governance, as it pertains to agriculture? First, we lay out the digital landscape and theoretical framework that justify the development of the Colorado River Basin Water Governance Relational Database. Then, we conduct an analysis of water-sharing policies within the Law of the River to identify and categorise boundaries. By operationalising a boundary typology in a geographic information system, we found that data on agricultural water governance have little to no current role in water governance due to scale discrepancies, insufficient availability and collection of data, and lack of standardisation. In addition, agricultural water governance in the CRB was found to exhibit polycentric patterns. However, unlike the flexible and adaptive nature of some polycentric systems, polycentric data sets may pose challenges to water governance due to limited information regarding organisational changes, policy developments, and special interests. This study advances the science-policy dialogue in four ways: (1) by emphasising the salience of data on water governance; (2) by incorporating water governance data in water governance and policy decisions; (3) by demonstrating the value of integrating data types; and (4) by engaging users through geo-visualisation.

  10. A new karren feature: hummocky karren

    Directory of Open Access Journals (Sweden)

    Plan Lukas

    2012-01-01

    Full Text Available Karren are small-scale landforms on karst surfaces and many types have been described so far. Here we present an apparently new feature which was found on the Hochschwab karst massif in the Northern Calcareous Alps of Austria. So far only a few outcrops, each of less than 1 m² and within a very restricted area, have been found. Morphometric analysis reveals that the karren consist of a randomly distributed, dispersed assemblage of small hummocks with depressions in between. The mean distance between neighbouring hummocks is 4 to 5 cm and the mean height is 0.85 cm. Longitudinal sections are gently sinuous. The occurrences are delimited by thin soil cover with grassy vegetation, and the karren continue below that vegetation cover. Therefore, it is clear that the features have formed subcutaneously. Corroded fissures where water could infiltrate into the epikarst are absent. The bedrock lithology is Middle Triassic limestone of the Wetterstein Formation in lagoonal facies. Geological structures do not govern the feature: the surface is not a bedding plane, and small joints and fractures do not govern the arrangement of the hummocks. Thin-section analysis of rock texture and dolomite components shows that there is no compositional difference between hummocks and depressions. Geochemical analyses show that the limestone is very pure, with a very low content of magnesia; slightly higher magnesia contents at the hummock surfaces are significant. The data obtained so far indicate only that some dissolution mechanism, and not any rock property, governs the irregular array. As no descriptions of comparable features exist in the literature, the name “hummocky karren” is suggested for this type of karren landform.

  11. Does filler database size influence identification accuracy?

    Science.gov (United States)

    Bergold, Amanda N; Heaton, Paul

    2018-06-01

    Police departments increasingly use large photo databases to select lineup fillers using facial recognition software, but this technological shift's implications have been largely unexplored in eyewitness research. Database use, particularly if coupled with facial matching software, could enable lineup constructors to increase filler-suspect similarity and thus enhance eyewitness accuracy (Fitzgerald, Oriet, Price, & Charman, 2013). However, with a large pool of potential fillers, such technologies might theoretically produce lineup fillers too similar to the suspect (Fitzgerald, Oriet, & Price, 2015; Luus & Wells, 1991; Wells, Rydell, & Seelau, 1993). This research proposes a new factor, filler database size, as a lineup feature affecting eyewitness accuracy. In a facial recognition experiment, we select lineup fillers in a legally realistic manner using facial matching software applied to filler databases of 5,000, 25,000, and 125,000 photos, and find that larger databases are associated with a higher objective similarity rating between suspects and fillers and lower overall identification accuracy. In target-present lineups, witnesses viewing lineups created from the larger databases were less likely to make correct identifications and more likely to select known innocent fillers. When the target was absent, database size was associated with a lower rate of correct rejections and a higher rate of filler identifications. Higher algorithmic similarity ratings were also associated with decreases in eyewitness identification accuracy. The results suggest that using facial matching software to select fillers from large photograph databases may reduce identification accuracy, and provide support for filler database size as a meaningful system variable. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  12. Emotion recognition based on multiple order features using fractional Fourier transform

    Science.gov (United States)

    Ren, Bo; Liu, Deyin; Qi, Lin

    2017-07-01

    To address the shortcomings of recent algorithms based on the two-dimensional fractional Fourier transform (2D-FrFT), this paper proposes an emotion recognition method based on multiple-order features. Most existing methods utilize features from a single order or a couple of orders of the 2D-FrFT. However, different orders of the 2D-FrFT contribute differently to feature extraction for emotion recognition, and combining these features can enhance the performance of an emotion recognition system. The proposed approach extracts numerous features in different orders of the 2D-FrFT along the x-axis and y-axis directions, and uses their statistical magnitudes as the final feature vectors for recognition. A Support Vector Machine (SVM) is utilized for classification, and the RML Emotion database and the Cohn-Kanade (CK) database are used for the experiments. The experimental results demonstrate the effectiveness of the proposed method.

  13. Database on wind characteristics - contents of database bank

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, G.C.; Hansen, K.S.

    2004-06-01

    The main objective of IEA R and D Wind Annex XVII - Database on Wind Characteristics - has been to provide wind energy planners, designers and researchers, as well as the international wind engineering community in general, with a source of actual wind field data (time series and resource data) observed in a wide range of different wind climates and terrain types. Connected to an extension of the initial Annex period, the scope of the continuation was widened to include support to the international wind turbine standardisation efforts. The project partners are Sweden, Norway, U.S.A., The Netherlands and Denmark, with Denmark as the Operating Agent. The reporting of the continuation of Annex XVII falls into two separate parts. Part one accounts in detail for the available data in the established database bank, and part two describes various data analyses performed with the overall purpose of improving the design load cases of relevance to wind turbine structures. The present report constitutes the second part of the Annex XVII reporting. Both fatigue and extreme load aspects are dealt with, however with the main emphasis on the latter. The work has been supported by The Ministry of Environment and Energy, Danish Energy Agency, The Netherlands Agency for Energy and the Environment (NOVEM), The Norwegian Water Resources and Energy Administration (NVE), The Swedish National Energy Administration (STEM) and The Government of the United States of America. (au)

  14. BioWarehouse: a bioinformatics database warehouse toolkit

    Directory of Open Access Journals (Sweden)

    Stringer-Calvert David WJ

    2006-03-01

    Full Text Available Abstract Background This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and Java languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion BioWarehouse embodies significant progress on the
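The warehouse idea can be caricatured in a few lines: loaders normalise sources with different native layouts into one shared relational schema, after which a single SQL query spans both. All table, column and record names below are invented for illustration, not BioWarehouse's actual schema.

```python
import sqlite3

# One warehouse with a single shared schema for all sources.
wh = sqlite3.connect(":memory:")
wh.execute("""CREATE TABLE protein
              (name TEXT, organism TEXT, source_db TEXT)""")

# Loader 1: a KEGG-like record keyed by organism code.
kegg_like = [("hsa", "TP53")]
wh.executemany("INSERT INTO protein VALUES (?, 'human', 'KEGG')",
               [(p,) for _, p in kegg_like])

# Loader 2: a UniProt-like record with a different field layout;
# the loader normalises it into the same schema.
uniprot_like = [{"entry": "P04637-alt", "species": "human"}]
wh.executemany("INSERT INTO protein VALUES (?, ?, 'UniProt')",
               [(r["entry"], r["species"]) for r in uniprot_like])

# One multi-source query over the unified schema.
rows = wh.execute("""SELECT source_db, name FROM protein
                     WHERE organism = 'human' ORDER BY source_db""").fetchall()
print(rows)  # [('KEGG', 'TP53'), ('UniProt', 'P04637-alt')]
```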

  15. On the Governance of Social Science Research

    DEFF Research Database (Denmark)

    Linneberg, Mai Skjøtt; Nørreklit, Hanne; Schröder, Philipp J.H.

    2009-01-01

    The majority of social science research is conducted within public or semi-public institutions, such as universities. Over the past decades, these institutions have experienced substantial changes in governance structures and an increased focus on performance contracts. Obviously, the new...... structures do not enter into a governance vacuum but replace existing profession-based governance structures. The present paper has a two-fold purpose. First, we map the key features and problems of a profession-based governance system focussing on principal-agent issues and motivational drivers. Second, we...... study the implications of the current changes in the social science research landscape along with central aspects of mechanism design, validity, employee motivation as well as the ability to establish socially optimal resource allocations. We identify a number of potential problems that may come along...

  16. Database resources for the tuberculosis community.

    Science.gov (United States)

    Lew, Jocelyne M; Mao, Chunhong; Shukla, Maulik; Warren, Andrew; Will, Rebecca; Kuznetsov, Dmitry; Xenarios, Ioannis; Robertson, Brian D; Gordon, Stephen V; Schnappinger, Dirk; Cole, Stewart T; Sobral, Bruno

    2013-01-01

    Access to online repositories for genomic and associated "-omics" datasets is now an essential part of everyday research activity. It is important therefore that the Tuberculosis community is aware of the databases and tools available to them online, as well as for the database hosts to know what the needs of the research community are. One of the goals of the Tuberculosis Annotation Jamboree, held in Washington DC on March 7th-8th 2012, was therefore to provide an overview of the current status of three key Tuberculosis resources, TubercuList (tuberculist.epfl.ch), TB Database (www.tbdb.org), and Pathosystems Resource Integration Center (PATRIC, www.patricbrc.org). Here we summarize some key updates and upcoming features in TubercuList, and provide an overview of the PATRIC site and its online tools for pathogen RNA-Seq analysis. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. 'Good Governance' dan 'Governability'

    Directory of Open Access Journals (Sweden)

    - Pratikno

    2005-03-01

    Full Text Available The article endeavors to trace the origin of the governance concept, its dominant meanings and discourse, and its implications for governability. The central role of government in governing processes has predominantly been adopted. The concept of governance emerged precisely in the context of the failure of government as the key player in regulation, economic redistribution and political participation. Governance therefore aims to emphasize patterns of governing based both on democratic mechanisms and on sound development management. However, practices of this good governance concept, which are mainly adopted and promoted by donor states and agencies, tend to degrade state and/or government authority and legitimacy. The traditional function of the state as sole facilitator of equal societal, political and legal membership among citizens has been diminished. The logic of fair competition has been substituted almost completely by the logic of free competition in nearly all sectors of public life. The concept and practices of good governance have resulted in decayed state authority and failed states, which in turn created a condition of "ungovernability". By promoting democratic and humane governance, the article accordingly encourages discourse to reinstall and bring the idea of the accountable state back in.

  18. Word-level recognition of multifont Arabic text using a feature vector matching approach

    Science.gov (United States)

    Erlandson, Erik J.; Trenkle, John M.; Vogt, Robert C., III

    1996-03-01

    Many text recognition systems recognize text imagery at the character level and assemble words from the recognized characters. An alternative approach is to recognize text imagery at the word level, without analyzing individual characters. This approach avoids the problem of individual character segmentation, and can overcome local errors in character recognition. A word-level recognition system for machine-printed Arabic text has been implemented. Arabic is a script language, and is therefore difficult to segment at the character level. Character segmentation has been avoided by recognizing text imagery of complete words. The Arabic recognition system computes a vector of image-morphological features on a query word image. This vector is matched against a precomputed database of vectors from a lexicon of Arabic words. Vectors from the database with the highest match score are returned as hypotheses for the unknown image. Several feature vectors may be stored for each word in the database. Database feature vectors generated using multiple fonts and noise models allow the system to be tuned to its input stream. Used in conjunction with database pruning techniques, this Arabic recognition system has obtained promising word recognition rates on low-quality multifont text imagery.
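A hedged sketch of the vector-matching step described above. The paper's image-morphological features are not specified here, so words are represented by assumed fixed-length vectors, with several vectors per word standing in for the multiple fonts and noise models.

```python
import math

def match_word(query_vec, database):
    """Rank lexicon entries by similarity to the query feature vector.

    `database` maps a word to a list of feature vectors (one per
    font/noise model, as described in the abstract).
    """
    def distance(a, b):
        # Euclidean distance; the paper's actual match score is not given.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    scored = []
    for word, vectors in database.items():
        # A word's score is that of its closest stored vector, so a match
        # against any font variant suffices.
        best = min(distance(query_vec, v) for v in vectors)
        scored.append((best, word))
    scored.sort()
    return [word for _, word in scored]  # best hypothesis first

# Invented 3-dimensional feature vectors for three lexicon words.
lexicon = {
    "kitab":   [[0.9, 0.1, 0.4], [0.8, 0.2, 0.5]],  # two font variants
    "qalam":   [[0.1, 0.9, 0.2]],
    "madrasa": [[0.5, 0.5, 0.9]],
}
print(match_word([0.85, 0.15, 0.45], lexicon)[0])  # kitab
```

Returning a ranked list rather than a single word is what lets downstream language models or pruning techniques re-score the hypotheses.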

  19. Banking governance: New Approaches

    Directory of Open Access Journals (Sweden)

    Victor Mihăiţă Duţă

    2016-11-01

    Full Text Available Banks are companies like any other. However, banks are distinguished by certain intrinsic characteristics that have a different impact on the motivation of stakeholders. Among these features, we mention: partnership and shareholder governance agreements; heavy regulation; bank assets as the main source of banking opacity and information asymmetry; and a moral hazard problem between the bank and its depositors.

  20. Scalable Database Design of End-Game Model with Decoupled Countermeasure and Threat Information

    Science.gov (United States)

    2017-11-01

    the Army Modular Active Protection System (MAPS) program to provide end-to-end APS modeling and simulation capabilities. The SSES simulation features...research project of scalable database design was initiated in support of SSES modularization efforts with respect to 4 major software components... Acronyms: Iron Curtain; KE: kinetic energy; MAPS: Modular Active Protective System; OLE DB: object linking and embedding database; RDB: relational database; RPG

  1. BioCarian: search engine for exploratory searches in heterogeneous biological databases.

    Science.gov (United States)

    Zaki, Nazar; Tennakoon, Chandana

    2017-10-02

    There are a large number of biological databases publicly available to scientists on the web, as well as many private databases generated in the course of research projects. These databases exist in a wide variety of formats. Web standards have evolved in recent times, and semantic web technologies are now available to interconnect diverse and heterogeneous sources of data. Therefore, integration and querying of biological databases can be facilitated by techniques used in the semantic web. Heterogeneous databases can be converted into Resource Description Framework (RDF) and queried using the SPARQL language. Searching for exact queries in these databases is trivial. However, exploratory searches need customized solutions, especially when multiple databases are involved. This process is cumbersome and time consuming for those without a sufficient background in computer science. In this context, a search engine facilitating exploratory searches of databases would be of great help to the scientific community. We present BioCarian, an efficient and user-friendly search engine for performing exploratory searches on biological databases. The search engine is an interface for SPARQL queries over RDF databases. We note that many of the databases can be converted to tabular form. We first convert the tabular databases to RDF. The search engine provides a graphical interface based on facets to explore the converted databases. The facet interface is more advanced than conventional facets: it allows complex queries to be constructed, and has additional features like ranking of facet values based on several criteria, visually indicating the relevance of a facet value and presenting the most important facet values when a large number of choices are available. For advanced users, SPARQL queries can be run directly on the databases. Using this feature, users will be able to incorporate federated searches of SPARQL endpoints.
We used the search engine to do an exploratory search
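As a rough illustration of the tabular-to-triples-and-facets idea (BioCarian's own RDF conversion and SPARQL layer are not shown here; the rows, predicates and counting scheme below are invented), a table can be flattened into subject-predicate-object triples and a facet ranked by how often each value occurs:

```python
from collections import Counter

# An invented tabular dataset: one row per gene.
rows = [
    {"gene": "TP53",  "organism": "human", "pathway": "apoptosis"},
    {"gene": "MDM2",  "organism": "human", "pathway": "apoptosis"},
    {"gene": "cdc25", "organism": "yeast", "pathway": "cell cycle"},
]

# Flatten each non-key column into (subject, predicate, object) triples,
# the shape an RDF store would hold.
triples = [(r["gene"], pred, r[pred])
           for r in rows for pred in ("organism", "pathway")]

def facet_counts(triples, predicate):
    # Rank facet values by frequency: the ordering a facet panel can use
    # to present the most important values first when choices are many.
    return Counter(obj for _, pred, obj in triples
                   if pred == predicate).most_common()

print(facet_counts(triples, "organism"))  # [('human', 2), ('yeast', 1)]
```

A real deployment would issue the equivalent aggregation as a SPARQL `GROUP BY`/`COUNT` query against the RDF store rather than counting in application code.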

  2. A Stakeholder Approach to Media Governance

    DEFF Research Database (Denmark)

    Lund, Anker Brink

    2016-01-01

    Historically, government regulation has significantly impacted the room for manoeuvre enjoyed by media managers, especially in public service media but increasingly also in privately owned firms. Currently stakeholders of many different kinds attempt to influence media industries, using a number...... of the world arguably features the most complex and continuous development in these aspects. Our particular interest is media governance, which is not understood as an external given but considered a premise of strategic management. It is argued that to secure an appropriate remit for an industry...... or firm that guarantees a longer-term licence to operate, media managers must engage different audiences and authorities in relation to restrictive as well as prescriptive regulation. Achieving that requires approaching media governance from a stakeholder perspective, which inherently involves a broad...

  3. Performance assessment of EMR systems based on post-relational database.

    Science.gov (United States)

    Yu, Hai-Yan; Li, Jing-Song; Zhang, Xiao-Guang; Tian, Yu; Suzuki, Muneou; Araki, Kenji

    2012-08-01

    Post-relational databases provide high performance and are currently widely used in American hospitals. As few hospital information systems (HIS) in either China or Japan are based on post-relational databases, here we introduce a new-generation electronic medical records (EMR) system called Hygeia, which was developed with the post-relational database Caché and the latest platform Ensemble. Utilizing the benefits of a post-relational database, Hygeia is equipped with an "integration" feature that allows all system users to access data, with a fast response time, anywhere and at any time. Performance tests of databases in EMR systems were implemented in both China and Japan. First, a comparison test was conducted between the post-relational database Caché and the relational database Oracle, each embedded in the EMR system of a medium-sized first-class hospital in China. Second, a user terminal test was done on the EMR system Izanami, which is based on the same database, Caché, and operates efficiently at the Miyazaki University Hospital in Japan. The results proved that the post-relational database Caché works faster than the relational database Oracle and showed perfect performance in the real-time EMR system.
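The response-time comparison described above can be sketched generically. The study's actual Caché and Oracle test harness is not described in the abstract, so this hypothetical snippet times repeated queries against an in-memory SQLite database; the same timing loop applies to any database reachable from Python.

```python
import sqlite3
import time

def time_query(con, sql, repeats=100):
    # Mean wall-clock seconds per query over `repeats` executions.
    start = time.perf_counter()
    for _ in range(repeats):
        con.execute(sql).fetchall()
    return (time.perf_counter() - start) / repeats

# Invented stand-in workload: a table of 10,000 records.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")
con.executemany("INSERT INTO records VALUES (?, ?)",
                [(i, f"note-{i}") for i in range(10_000)])

mean = time_query(con, "SELECT payload FROM records WHERE id = 4242")
print(f"mean query time: {mean * 1e6:.1f} us")
```

Running the identical query set against each engine under test, rather than different queries, is what makes such response-time comparisons meaningful.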

  4. ArcGIS 9.3 ed i database spaziali: gli scenari di utilizzo

    Directory of Open Access Journals (Sweden)

    Francesco Bartoli

    2009-03-01

    Full Text Available ArcGIS 9.3 and spatial databases: application scenarios. The latest news from ESRI suggests that it will soon be possible to link to the PostgreSQL database. This resulted in a collaboration between the PostGIS geometry model and SDO_GEOMETRY, the Oracle database's hierarchical and spatial design. This had a direct impact on the PMI review and the business models of local governments. ArcSDE would be replaced by Zig-Gis 2.0, providing greater offerings to the GIS community. Harnessing this system will take advantage of human resources to aid in the design of potent conceptual data models. Further funds are still required to promote the product under a prominent license.

  5. Advantages and disadvantages of relational and non-relational (NoSQL) databases for analytical tasks

    OpenAIRE

    Klapač, Milan

    2015-01-01

    This work focuses on NoSQL databases, their use for analytical tasks and on comparison of NoSQL databases with relational and OLAP databases. The aim is to analyse the benefits of NoSQL databases and their use for analytical purposes. The first part presents the basic principles of Business Intelligence, Data Warehousing, and Big Data. The second part deals with the key features of relational and NoSQL databases. The last part of the thesis describes the properties of four basic types of NoSQ...

  6. Particle swarm optimization based feature enhancement and feature selection for improved emotion recognition in speech and glottal signals.

    Science.gov (United States)

    Muthusamy, Hariharan; Polat, Kemal; Yaacob, Sazali

    2015-01-01

    In recent years, many research works have been published using speech-related features for speech emotion recognition; however, recent studies show that there is a strong correlation between emotional states and glottal features. In this work, Mel-frequency cepstral coefficients (MFCCs), linear predictive cepstral coefficients (LPCCs), perceptual linear predictive (PLP) features, gammatone filter outputs, timbral texture features, stationary wavelet transform based timbral texture features and relative wavelet packet energy and entropy features were extracted from the emotional speech (ES) signals and their glottal waveforms (GW). Particle swarm optimization based clustering (PSOC) and wrapper based particle swarm optimization (WPSO) were proposed to enhance the discerning ability of the features and to select the discriminating features, respectively. Three different emotional speech databases were utilized to gauge the proposed method. An extreme learning machine (ELM) was employed to classify the different types of emotions. Different experiments were conducted, and the results show that the proposed method significantly improves speech emotion recognition performance compared to previous works published in the literature.
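A toy sketch of wrapper-style binary PSO for feature selection. The paper's PSOC/WPSO details and its ELM classifier are not reproduced; the fitness function below is an invented class-separation score standing in for classifier accuracy, and all data are synthetic.

```python
import math
import random

random.seed(0)

def fitness(mask, data):
    """Invented stand-in fitness: mean class-mean separation of the
    selected features (a real wrapper would use classifier accuracy)."""
    chosen = [i for i, bit in enumerate(mask) if bit]
    if not chosen:
        return 0.0
    a, b = data  # two lists of feature vectors, one per emotion class
    sep = sum(abs(sum(v[i] for v in a) / len(a) - sum(v[i] for v in b) / len(b))
              for i in chosen)
    return sep / len(chosen)  # averaging penalises uninformative extras

def binary_pso(data, n_features, n_particles=10, iters=30):
    def sigmoid(x):  # squash velocity into a bit-set probability
        return 1.0 / (1.0 + math.exp(-x))
    pos = [[random.randint(0, 1) for _ in range(n_features)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest, key=lambda m: fitness(m, data))[:]
    for _ in range(iters):
        for p in range(n_particles):
            for d in range(n_features):
                r1, r2 = random.random(), random.random()
                vel[p][d] += 2 * r1 * (pbest[p][d] - pos[p][d]) \
                           + 2 * r2 * (gbest[d] - pos[p][d])
                pos[p][d] = 1 if random.random() < sigmoid(vel[p][d]) else 0
            if fitness(pos[p], data) > fitness(pbest[p], data):
                pbest[p] = pos[p][:]
        gbest = max(pbest + [gbest], key=lambda m: fitness(m, data))[:]
    return gbest

# Synthetic toy data: feature 0 separates the classes, feature 1 is noise.
class_a = [[1.0, 0.5], [0.9, 0.4]]
class_b = [[0.0, 0.5], [0.1, 0.6]]
print(binary_pso((class_a, class_b), n_features=2))
```

With this data the best mask is `[1, 0]`: keeping only the discriminative feature scores 0.9, while adding the noise feature drags the average down to 0.5.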

  7. Experience with CANDID: Comparison algorithm for navigating digital image databases

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, P.; Cannon, M.

    1994-10-01

    This paper presents results from the authors' experience with CANDID (Comparison Algorithm for Navigating Digital Image Databases), which was designed to facilitate image retrieval by content using a query-by-example methodology. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized similarity measure between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to a user-provided example image. Results for three test applications are included.
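In the spirit of CANDID's signature matching, a minimal sketch: each image is reduced to a global signature (here an assumed 4-bin histogram) and signatures are compared with a normalised similarity measure. Histogram intersection is used below; the paper's exact measure is not specified in this abstract.

```python
def normalise(hist):
    # Turn raw counts into a discrete probability distribution.
    total = float(sum(hist))
    return [h / total for h in hist]

def similarity(sig_a, sig_b):
    # Histogram intersection of normalised signatures: 1.0 means
    # identical distributions, 0.0 means disjoint ones.
    return sum(min(a, b) for a, b in zip(normalise(sig_a), normalise(sig_b)))

# Invented 4-bin intensity signatures.
query    = [8, 2, 0, 0]   # mostly dark pixels
match    = [7, 3, 0, 0]
nonmatch = [0, 0, 3, 7]
print(round(similarity(query, match), 3))     # 0.9
print(round(similarity(query, nonmatch), 3))  # 0.0
```

Query-by-example retrieval then amounts to ranking every stored signature by this score against the example image's signature.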

  8. Updated Palaeotsunami Database for Aotearoa/New Zealand

    Science.gov (United States)

    Gadsby, M. R.; Goff, J. R.; King, D. N.; Robbins, J.; Duesing, U.; Franz, T.; Borrero, J. C.; Watkins, A.

    2016-12-01

    The updated configuration, design, and implementation of a national palaeotsunami (pre-historic tsunami) database for Aotearoa/New Zealand (A/NZ) is near completion. This tool enables correlation of events along different stretches of the NZ coastline, provides information on frequency and extent of local, regional and distant-source tsunamis, and delivers detailed information on the science and proxies used to identify the deposits. In A/NZ a plethora of data, scientific research and experience surrounds palaeotsunami deposits, but much of this information has been difficult to locate, has variable reporting standards, and lacked quality assurance. The original database was created by Professor James Goff while working at the National Institute of Water & Atmospheric Research in A/NZ, but has subsequently been updated during his tenure at the University of New South Wales. The updating and establishment of the national database was funded by the Ministry of Civil Defence and Emergency Management (MCDEM), led by Environment Canterbury Regional Council, and supported by all 16 regions of A/NZ's local government. Creation of a single database has consolidated a wide range of published and unpublished research contributions from many science providers on palaeotsunamis in A/NZ. The information is now easily accessible and quality assured and allows examination of frequency, extent and correlation of events. This provides authoritative scientific support for coastal-marine planning and risk management. The database will complement the GNS New Zealand Historical Database, and contributes to a heightened public awareness of tsunami by being a "one-stop-shop" for information on past tsunami impacts. There is scope for this to become an international database, enabling the pacific-wide correlation of large events, as well as identifying smaller regional ones. The Australian research community has already expressed an interest, and the database is also compatible with a

  9. Face recognition using slow feature analysis and contourlet transform

    Science.gov (United States)

    Wang, Yuehao; Peng, Lingling; Zhe, Fuchuan

    2018-04-01

    In this paper we propose a novel face recognition approach based on slow feature analysis (SFA) in the contourlet transform domain. The method first uses the contourlet transform to decompose a face image into low-frequency and high-frequency parts, and then takes advantage of slow feature analysis for facial feature extraction. We name the new method, which combines slow feature analysis and the contourlet transform, CT-SFA. The experimental results on international standard face databases demonstrate that the new face recognition method is effective and competitive.
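
    The slow-feature step can be sketched in its simplest linear form: whiten the input, then keep the directions whose temporal derivative has the smallest variance. This is a generic numpy illustration, not the paper's CT-SFA implementation, and the toy 5-channel signal below is invented.

```python
import numpy as np

def linear_sfa(X, n_features=2):
    """Minimal linear slow feature analysis (SFA) sketch.

    X: array of shape (T, D), a multivariate time series (e.g. coefficients
    from a subband decomposition). Returns the n_features slowest outputs.
    """
    # 1. Center and whiten the input.
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    eigval, eigvec = np.linalg.eigh(cov)
    keep = eigval > 1e-10                      # guard against rank deficiency
    W = eigvec[:, keep] / np.sqrt(eigval[keep])
    Z = Xc @ W

    # 2. Covariance of the temporal derivative (finite differences).
    dZ = np.diff(Z, axis=0)
    dcov = dZ.T @ dZ / len(dZ)

    # 3. Slowest directions = eigenvectors with the smallest eigenvalues.
    dval, dvec = np.linalg.eigh(dcov)          # ascending eigenvalue order
    return Z @ dvec[:, :n_features]

# Toy input: a slow sinusoid mixed into 5 channels plus fast noise.
t = np.linspace(0, 4 * np.pi, 500)
rng = np.random.default_rng(0)
X = np.outer(np.sin(t), rng.normal(size=5)) + 0.1 * rng.normal(size=(500, 5))
slow = linear_sfa(X, n_features=1)
print(slow.shape)  # (500, 1)
```

    In CT-SFA the inputs to this step would be contourlet subband coefficients rather than raw pixels.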

  10. A Polygon and Point-Based Approach to Matching Geospatial Features

    Directory of Open Access Journals (Sweden)

    Juan J. Ruiz-Lendínez

    2017-12-01

    Full Text Available A methodology for matching bidimensional entities is presented in this paper. The matching is proposed for both area and point features extracted from geographical databases. The procedure used to obtain homologous entities is a two-step process: the first matching, polygon to polygon (inter-element matching), is obtained by means of a genetic algorithm that allows the classifying of area features from two geographical databases. After this, we apply a point to point matching (intra-element matching) based on the comparison of changes in their turning functions. This study shows that genetic algorithms are suitable for matching polygon features even if these features are quite different. Our results show up to 40% of matched polygons with differences in geometrical attributes. With regard to the matching of points (the vertices of homologous polygons), the function and threshold values proposed in this paper yield a useful method for obtaining precise vertex matching.
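
    A turning function, the representation used for the intra-element matching, records the accumulated turning angle of a polygon's boundary against normalized arc length. A minimal numpy sketch (the paper's comparison function and thresholds are not reproduced here):

```python
import numpy as np

def turning_function(poly):
    """Turning function of a closed polygon given as a CCW vertex list.

    Returns (arc_lengths, cumulative_angles): normalized arc length at each
    vertex and the accumulated exterior turning angle, a representation that
    can be compared between candidate homologous polygons.
    """
    poly = np.asarray(poly, dtype=float)
    edges = np.roll(poly, -1, axis=0) - poly           # edge vectors
    lengths = np.linalg.norm(edges, axis=1)
    headings = np.arctan2(edges[:, 1], edges[:, 0])
    # Exterior angle at each vertex, wrapped to (-pi, pi].
    turns = np.diff(headings, append=headings[:1])
    turns = (turns + np.pi) % (2 * np.pi) - np.pi
    arc = np.cumsum(lengths) / lengths.sum()           # normalized arc length
    return arc, np.cumsum(turns)

# A unit square turns 90 degrees at every vertex.
arc, angles = turning_function([(0, 0), (1, 0), (1, 1), (0, 1)])
print(np.degrees(angles))  # [ 90. 180. 270. 360.]
```

    Two polygons can then be compared by a distance between their turning functions, which is invariant to translation, rotation and scale.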

  11. ATLAS database application enhancements using Oracle 11g

    International Nuclear Information System (INIS)

    Dimitrov, G; Canali, L; Blaszczyk, M; Sorokoletov, R

    2012-01-01

    The ATLAS experiment at LHC relies on databases for detector online data-taking; storage and retrieval of configurations, calibrations and alignments; post data-taking analysis; file management over the grid; job submission and management; and condition data replication to remote sites. The Oracle Relational Database Management System (RDBMS) has addressed the ATLAS database requirements to a great extent for many years. Ten database clusters are currently deployed for the needs of the different applications, divided into production, integration and standby databases. The data volume, complexity and demands from the users are increasing steadily with time. Nowadays more than 20 TB of data are stored in the ATLAS production Oracle databases at CERN (not including the index overhead), but the most impressive number is the 260 hosted database schemas (in the most common case, each schema is related to a dedicated client application with its own requirements). At the beginning of 2012 all ATLAS databases at CERN were upgraded to the newest Oracle version at the time: Oracle 11g Release 2. Oracle 11g comes with several key improvements compared to previous database engine versions. In this work we present our evaluation of the most relevant new features of Oracle 11g of interest for ATLAS applications and use cases. Notably, we report on the performance and scalability enhancements obtained in production since the Oracle 11g deployment during Q1 2012, and we outline plans for future work in this area.

  12. Database Description - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Yeast Interacting Proteins Database — Database Description. General information: Database name: Yeast Interacting Proteins Database; Alternative name: -; DOI: 10.18908/lsdba.nbdc00742-000; Creator: C...; ...-ken 277-8561; Tel: +81-4-7136-3989; FAX: +81-4-7136-3979. Database classification: ...; Organism: Saccharomyces cerevisiae (Taxonomy ID: 4932). Database description: information on interactions and related information obtained... Reference: Proc Natl Acad Sci U S A. 2001 Apr 10;98(8):4569-74. Epub 2001 Mar 13.

  13. BioModels Database: a repository of mathematical models of biological processes.

    Science.gov (United States)

    Chelliah, Vijayalakshmi; Laibe, Camille; Le Novère, Nicolas

    2013-01-01

    BioModels Database is a public online resource that allows storing and sharing of published, peer-reviewed quantitative, dynamic models of biological processes. The model components and behaviour are thoroughly checked to correspond to the original publication and manually curated to ensure reliability. Furthermore, the model elements are annotated with terms from controlled vocabularies as well as linked to relevant external data resources. This greatly helps in model interpretation and reuse. Models are stored in SBML format, but are accepted in SBML and CellML formats, and are available for download in various other common formats such as BioPAX, Octave, SciLab, VCML, XPP and PDF, in addition to SBML. The reaction network diagram of the models is also available in several formats. BioModels Database features a search engine, which provides simple and more advanced searches. Features such as online simulation and creation of smaller models (submodels) from selected model elements of a larger one are provided. BioModels Database can be accessed both via a web interface and programmatically via web services. New models are made available in BioModels Database through regular releases, about every 4 months.

  14. SVM-RFE based feature selection and Taguchi parameters optimization for multiclass SVM classifier.

    Science.gov (United States)

    Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W M; Li, R K; Jiang, Bo-Ru

    2014-01-01

    Recently, the support vector machine (SVM) has shown excellent performance in classification and prediction and is widely used in disease diagnosis and medical assistance. However, SVM functions well only on two-group classification problems. This study combines feature selection and SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order by explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was combined with the SVM classifier in order to optimize the parameters C and γ and increase classification accuracy for multiclass classification. The experimental results show that the classification accuracy can exceed 95% after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases.
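
    The SVM-RFE step can be sketched with scikit-learn: a linear SVM's weights rank the features, and the least important ones are eliminated recursively. The Iris dataset stands in for the Dermatology/Zoo data, and the Taguchi search over C and γ is not reproduced; this is an assumed workflow, not the paper's code.

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)          # stand-in for the Dermatology/Zoo data

# Rank features by recursively eliminating the least important ones,
# as judged by the weights of a linear SVM.
selector = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=2, step=1)
selector.fit(X, y)
print("kept features:", selector.support_)

# Accuracy of a multiclass (one-vs-one) RBF SVM on the reduced feature set;
# C and gamma would be tuned (e.g. by the Taguchi method) in the full study.
scores = cross_val_score(SVC(C=1.0, gamma="scale"), X[:, selector.support_], y, cv=5)
print("mean CV accuracy: %.3f" % scores.mean())
```
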

  15. Three dimensional pattern recognition using feature-based indexing and rule-based search

    Science.gov (United States)

    Lee, Jae-Kyu

    In flexible automated manufacturing, robots can perform routine operations as well as recover from atypical events, provided that process-relevant information is available to the robot controller. Real time vision is among the most versatile sensing tools, yet the reliability of machine-based scene interpretation can be questionable. The effort described here is focused on the development of machine-based vision methods to support autonomous nuclear fuel manufacturing operations in hot cells. This thesis presents a method to efficiently recognize 3D objects from 2D images based on feature-based indexing. Object recognition is the identification of correspondences between parts of a current scene and stored views of known objects, using chains of segments or indexing vectors. To create indexed object models, characteristic model image features are extracted during preprocessing. Feature vectors representing model object contours are acquired from several points of view around each object and stored. Recognition is the process of matching stored views with features or patterns detected in a test scene. Two sets of algorithms were developed, one for preprocessing and indexed database creation, and one for pattern searching and matching during recognition. At recognition time, those indexing vectors with the highest match probability are retrieved from the model image database, using a nearest neighbor search algorithm. The nearest neighbor search predicts the best possible match candidates. Extended searches are guided by a search strategy that employs knowledge-base (KB) selection criteria. The knowledge-based system simplifies the recognition process and minimizes the number of iterations and memory usage. Novel contributions include the use of a feature-based indexing data structure together with a knowledge base. 
Both components improve the efficiency of the recognition process by structuring the database of object features more effectively and by reducing database size.
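
    The retrieval step described above can be illustrated with a spatial index over stored indexing vectors. This is a generic scipy cKDTree sketch under invented data (random 16-dimensional feature vectors and object labels), not the thesis's actual data structure.

```python
import numpy as np
from scipy.spatial import cKDTree

# Illustrative only: each stored model view is reduced to a fixed-length
# indexing vector; recognition retrieves the nearest stored vectors.
rng = np.random.default_rng(42)
model_vectors = rng.random((1000, 16))      # database of indexed model views
labels = rng.integers(0, 12, size=1000)     # object id for each stored view

tree = cKDTree(model_vectors)               # spatial index over the database

# A test-scene feature vector: query the k best match candidates, which a
# rule-based (knowledge-base) stage would then verify and disambiguate.
query = model_vectors[7] + 0.01 * rng.normal(size=16)
dist, idx = tree.query(query, k=5)
print("candidate object ids:", labels[idx])
```

    The nearest-neighbor search returns only match candidates; as in the thesis, a knowledge-based stage would guide any extended search among them.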

  16. Cyclebase 3.0: a multi-organism database on cell-cycle regulation and phenotypes

    DEFF Research Database (Denmark)

    Santos Delgado, Alberto; Wernersson, Rasmus; Jensen, Lars Juhl

    2015-01-01

    3.0, we have updated the content of the database to reflect changes to genome annotation, added new mRNAand protein expression data, and integrated cell-cycle phenotype information from high-content screens and model-organism databases. The new version of Cyclebase also features a new web interface...

  17. Database system selection for marketing strategies support in information systems

    Directory of Open Access Journals (Sweden)

    František Dařena

    2007-01-01

    Full Text Available In today’s dynamically changing environment marketing has a significant role. Creating successful marketing strategies requires a large amount of high-quality information of various kinds and data types. A powerful database management system is a necessary condition for supporting the creation of marketing strategies. The paper briefly describes the field of marketing strategies and specifies the features that should be provided by database systems in connection with the support of these strategies. Major commercial (Oracle, DB2, MS SQL, Sybase) and open-source (PostgreSQL, MySQL, Firebird) databases are then examined from the point of view of accordance with these characteristics, and a comparison is made. The results are useful for making the decision before acquiring a database system during the specification of an information system’s hardware architecture.

  18. COMPARATIVE ANALYSIS OF RUSSIA’S AND CHINA’S PARTICIPATING IN GLOBAL GOVERNANCE INSTITUTIONS EXPERIENCE

    Directory of Open Access Journals (Sweden)

    Vladimir E. Petrovskiy

    2015-01-01

    Full Text Available The article deals with the comparative analysis of Russian and Chinese participation in the current system of global governance, and in its reform. The author views participation of the respective countries in the system of global governance as part of their foreign policy and foreign policy strategy. He shows common and distinctive features of conceptual and practical approaches towards global governance defined by specific features of Russia’s and China’s history, economic development, political culture and traditions. Based on this comparative analysis, the author speculates on the future trends of participation of the two countries in the global governance system, in the spheres of global economy and international security, and on the future trends of their policy coordination in these respective areas.

  19. Comparison of open source database systems (characteristics, limits of usage)

    OpenAIRE

    Husárik, Braňko

    2008-01-01

    The goal of this work is to compare selected open source database systems (Ingres, PostgreSQL, Firebird, MySQL). The first part of the work is focused on the history and present situation of the companies developing these products. The second part contains a comparison of a certain group of specific features and limits. A benchmark of selected operations forms a separate part of the work. The possibilities of using the mentioned database systems are summarized at the end of the work.

  20. What determines the informativeness of firms' explanations for deviations from the Dutch corporate governance code?

    NARCIS (Netherlands)

    Hooghiemstra, R.B.H.

    2012-01-01

    The comply-or-explain principle is a common feature of corporate governance codes. While prior studies investigated compliance with corporate governance codes as well as the effects of compliance on firm behaviour and performance, explanations for deviations from a corporate governance code remain

  1. Auditory ERB like admissible wavelet packet features for TIMIT phoneme recognition

    Directory of Open Access Journals (Sweden)

    P.K. Sahu

    2014-09-01

    Full Text Available In recent years the wavelet transform has been found to be an effective tool for time–frequency analysis. The wavelet transform has been used for feature extraction in speech recognition applications and has proved to be an effective technique for unvoiced phoneme classification. In this paper a new filter structure using admissible wavelet packets is analyzed for English phoneme recognition. These filters have the benefit of frequency band spacing similar to the auditory Equivalent Rectangular Bandwidth (ERB) scale, whose central frequencies are distributed evenly along the frequency response of the human cochlea. A new set of features is derived using the wavelet packet transform's multi-resolution capabilities and found to be better than conventional features for unvoiced phoneme problems. Noise samples from the NOISEX-92 database were used to prepare an artificial noisy database to test the robustness of the wavelet-based features.
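
    A wavelet-packet feature vector of this kind can be sketched with PyWavelets. A full uniform tree is used here purely for illustration; the paper's admissible, ERB-spaced tree is not reproduced, and the test signal is invented.

```python
import numpy as np
import pywt

# Invented test signal: a 50 Hz tone over 1 s at 1024 samples, plus noise.
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 50 * np.linspace(0, 1, 1024)) + 0.1 * rng.normal(size=1024)

# Full wavelet packet decomposition to depth 4 (16 subbands).
wp = pywt.WaveletPacket(data=signal, wavelet="db4", mode="symmetric", maxlevel=4)
bands = wp.get_level(4, order="freq")          # frequency-ordered subbands

# Log subband energies as a crude phoneme feature vector.
features = np.log([np.sum(node.data ** 2) + 1e-12 for node in bands])
print(features.shape)  # (16,)
```

    An ERB-like structure would instead decompose only selected nodes further, giving narrow bands at low frequencies and wide bands at high frequencies.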

  2. Multimodality medical image database for temporal lobe epilepsy

    Science.gov (United States)

    Siadat, Mohammad-Reza; Soltanian-Zadeh, Hamid; Fotouhi, Farshad A.; Elisevich, Kost

    2003-05-01

    This paper presents the development of a human brain multi-modality database for surgical candidacy determination in temporal lobe epilepsy. The focus of the paper is on content-based image management, navigation and retrieval. Several medical image-processing methods, including our newly developed segmentation method, are utilized for information extraction/correlation and indexing. The input data includes T1-, T2-weighted and FLAIR MRI and ictal/interictal SPECT modalities with associated clinical data and EEG data analysis. The database can answer queries regarding issues such as the correlation between the attribute X of the entity Y and the outcome of a temporal lobe epilepsy surgery. The entity Y can be a brain anatomical structure such as the hippocampus. The attribute X can be either a functionality feature of the anatomical structure Y, calculated with SPECT modalities, such as signal average, or a volumetric/morphological feature of the entity Y such as volume or average curvature. The outcome of the surgery can be any surgery assessment such as non-verbal Wechsler memory quotient. A determination is made regarding surgical candidacy by analysis of both textual and image data. The current database system suggests a surgical determination for cases with a relatively small hippocampus and a high signal intensity average on FLAIR images within the hippocampus. This indication matches the neurosurgeons' expectations/observations. Moreover, as the database becomes more populated with patient profiles and individual surgical outcomes, data mining methods may uncover partially invisible correlations between the contents of different modalities of data and the outcome of the surgery.
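
    The kind of query described (relate attribute X of structure Y to a surgical outcome) can be sketched relationally. All table, column names and values below are invented for illustration; the paper's actual schema is not shown.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Hypothetical schema: per-patient structure attributes and outcomes.
    CREATE TABLE structure (patient_id INTEGER, name TEXT, volume REAL, flair_mean REAL);
    CREATE TABLE outcome   (patient_id INTEGER, memory_quotient REAL);
    INSERT INTO structure VALUES (1, 'hippocampus', 2.1, 180.0), (2, 'hippocampus', 3.4, 110.0);
    INSERT INTO outcome   VALUES (1, 95.0), (2, 82.0);
""")

# "Small hippocampus with high FLAIR signal" candidates, joined to outcomes.
rows = con.execute("""
    SELECT s.patient_id, s.volume, s.flair_mean, o.memory_quotient
    FROM structure s JOIN outcome o ON s.patient_id = o.patient_id
    WHERE s.name = 'hippocampus' AND s.volume < 2.5 AND s.flair_mean > 150
""").fetchall()
print(rows)  # [(1, 2.1, 180.0, 95.0)]
```
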

  3. Whole of Government Accounts

    DEFF Research Database (Denmark)

    Pontoppidan, Caroline Aggestam; Chow, Danny; Day, Ronald

    In our comparative study, we surveyed an emerging literature on the use of consolidation in government accounting and develop a research agenda. We find heterogeneous approaches to the development of consolidation models across the five countries (Australia, New Zealand, UK, Canada and Sweden...... of financial reporting (GAAP)-based reforms when compared with budget-centric systems of accounting, which dominate government decision-making. At a trans-national level, there is a need to examine the embedded or implicit contests or ‘trials of strength’ between nations and/or institutions jockeying...... for influence. We highlight three arenas where such contests are being played out: 1. Statistical versus GAAP notions of accounting value, which features in all accounting debates over the merits and costs of ex-ante versus ex-post notions of value (i.e., the relevance versus reliability debate); 2. Private...

  4. High-integrity databases for helicopter operations

    Science.gov (United States)

    Pschierer, Christian; Schiefele, Jens; Lüthy, Juerg

    2009-05-01

    Helicopter Emergency Medical Service (HEMS) missions impose a high workload on pilots due to short preparation time, operations in low-level flight, and landings in unknown areas. The research project PILAS, a cooperation between Eurocopter, Diehl Avionics, DLR, EADS, Euro Telematik, ESG, Jeppesen, and the Universities of Darmstadt and Munich, funded by the German government, approached this problem by researching a pilot assistance system which supports the pilots during all phases of flight. The databases required for the specified helicopter missions include different types of topological and cultural data for graphical display on the SVS system, AMDB data for operations at airports and helipads, and navigation data for IFR segments. The most critical databases for the PILAS system, however, are highly accurate terrain and obstacle data. While RTCA DO-276 specifies high accuracies and integrities only for the areas around airports, HEMS helicopters typically operate outside of these controlled areas and thus require highly reliable terrain and obstacle data for their designated response areas. This data has been generated by a LIDAR scan of the specified test region. Obstacles have been extracted into a vector format. This paper includes a short overview of the complete PILAS system and then focuses on the generation of the required high quality databases.

  5. CyMSatDB: The Globe Artichoke (Cynara cardunculus var. scolymus) Microsatellite Database

    DEFF Research Database (Denmark)

    Portis, Ezio; Portis, Flavio; Valente, Luisa

    2015-01-01

    and for the construction of the first microsatellite marker database CyMSatDB (Cynara cardunculus MicroSatellite DataBase). Both perfect and compound SSRs were in-silico mined using the SciRoKo SSR-search module (http://kofler.or.at/bioinformatics/SciRoKo). On the whole, about 295,000 SSR motifs were identified which also...... Kbp), which represent 725 Mb of genomic sequence. Scaffolds were genetically anchored using a low-coverage genotyping by sequencing (GBS) of a mapping population and 17 pseudomolecules were reconstructed. Pseudomolecules as well as unmapped scaffolds were used for the bulk mining of SSR markers...... in a MySQL database and provides an effective and responsive interface developed in PHP. To cater to the customized needs of the wet lab, a novel automated primer-design tool has been added. The user-defined primer-design feature has a great advantage in terms of precise selection from...

  6. Network-based statistical comparison of citation topology of bibliographic databases

    Science.gov (United States)

    Šubelj, Lovro; Fiala, Dalibor; Bajec, Marko

    2014-01-01

    Modern bibliographic databases provide the basis for scientific research and its evaluation. While their content and structure differ substantially, there exist only informal notions on their reliability. Here we compare the topological consistency of citation networks extracted from six popular bibliographic databases including Web of Science, CiteSeer and arXiv.org. The networks are assessed through a rich set of local and global graph statistics. We first reveal statistically significant inconsistencies between some of the databases with respect to individual statistics. For example, the introduced field bow-tie decomposition of DBLP Computer Science Bibliography substantially differs from the rest due to the coverage of the database, while the citation information within arXiv.org is the most exhaustive. Finally, we compare the databases over multiple graph statistics using the critical difference diagram. The citation topology of DBLP Computer Science Bibliography is the least consistent with the rest, while, not surprisingly, Web of Science is significantly more reliable from the perspective of consistency. This work can serve either as a reference for scholars in bibliometrics and scientometrics or as a scientific evaluation guideline for governments and research agencies. PMID:25263231
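
    The kind of local and global graph statistics used for such a comparison can be sketched with networkx on a toy directed graph (a random stand-in, not an actual citation network from the paper):

```python
import networkx as nx

# Stand-in "citation network": a random directed graph on 200 nodes.
G = nx.gnp_random_graph(200, 0.05, seed=1, directed=True)

stats = {
    "nodes": G.number_of_nodes(),
    "edges": G.number_of_edges(),
    "mean_in_degree": sum(d for _, d in G.in_degree()) / G.number_of_nodes(),
    "clustering": nx.average_clustering(G.to_undirected()),   # local statistic
    "largest_scc": len(max(nx.strongly_connected_components(G), key=len)),
}
for name, value in stats.items():
    print(f"{name}: {value}")
```

    Comparing several databases then reduces to computing such a statistics vector per extracted network and ranking the databases per statistic, as the critical difference diagram in the paper does.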

  7. School governing body election deficiencies – deliberative ...

    African Journals Online (AJOL)

    Undemocratic features in the election process result in the election of unsuitable or incompetent candidates, which has a detrimental effect on the governance of public schools. It is therefore recommended that a new set of nationally uniform SGB election regulations, which allows for transparent deliberation between ...

  8. Update History of This Database - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Arabidopsis Phenome Database — Update History of This Database. 2017/02/27: the Arabidopsis Phenome Database English archive site is opened; the Arabidopsis Phenome Database (http://jphenome.info/?page_id=95) is opened.

  9. Software-Enabled Distributed Network Governance: The PopMedNet Experience.

    Science.gov (United States)

    Davies, Melanie; Erickson, Kyle; Wyner, Zachary; Malenfant, Jessica; Rosen, Rob; Brown, Jeffrey

    2016-01-01

    The expanded availability of electronic health information has led to increased interest in distributed health data research networks. The distributed research network model leaves data with and under the control of the data holder. Data holders, network coordinating centers, and researchers have distinct needs and challenges within this model. The concerns of network stakeholders are addressed in the design and governance models of the PopMedNet software platform. PopMedNet features include distributed querying, customizable workflows, and auditing and search capabilities. Its flexible role-based access control system enables the enforcement of varying governance policies. Four case studies describe how PopMedNet is used to enforce network governance models. Trust is an essential component of a distributed research network and must be built before data partners may be willing to participate further. The complexity of the PopMedNet system must be managed as networks grow and new data, analytic methods, and querying approaches are developed. The PopMedNet software platform supports a variety of network structures, governance models, and research activities through customizable features designed to meet the needs of network stakeholders.
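
    The role-based access control the platform description implies can be sketched minimally; the roles, actions and mapping below are invented for illustration, not PopMedNet's actual policy model.

```python
# Hypothetical role-to-permission mapping for a distributed research network.
ROLE_PERMISSIONS = {
    "investigator": {"submit_query", "view_results"},
    "data_partner": {"review_query", "approve_query", "view_results"},
    "coordinator":  {"submit_query", "review_query", "view_results", "audit"},
}

def allowed(role: str, action: str) -> bool:
    """A governance policy check: may this role perform this action?"""
    return action in ROLE_PERMISSIONS.get(role, set())

print(allowed("data_partner", "approve_query"))  # True
print(allowed("investigator", "audit"))          # False
```

    Varying governance models then amount to varying the mapping, which is what a configurable role-based system makes possible without code changes.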

  10. Legacy2Drupal - Conversion of an existing oceanographic relational database to a semantically enabled Drupal content management system

    Science.gov (United States)

    Maffei, A. R.; Chandler, C. L.; Work, T.; Allen, J.; Groman, R. C.; Fox, P. A.

    2009-12-01

    Content Management Systems (CMSs) provide powerful features that can be of use to oceanographic (and other geo-science) data managers. However, in many instances, geo-science data management offices have previously designed customized schemas for their metadata. The WHOI Ocean Informatics initiative and the NSF-funded Biological and Chemical Oceanography Data Management Office (BCO-DMO) have jointly sponsored a project to port an existing relational database containing oceanographic metadata, along with an existing interface coded in Cold Fusion middleware, to a Drupal6 Content Management System. The goal was to translate all the existing database tables, input forms, website reports, and other features present in the existing system to employ Drupal CMS features. The replacement features include Drupal content types, CCK node-reference fields, themes, RDB, SPARQL, workflow, and a number of other supporting modules. Strategic use of some Drupal6 CMS features enables three separate but complementary interfaces that provide access to oceanographic research metadata via the MySQL database: 1) a Drupal6-powered front-end; 2) a standard SQL port (used to provide a Mapserver interface to the metadata and data); and 3) a SPARQL port (feeding a new faceted search capability being developed). Future plans include the creation of science ontologies, by scientist/technologist teams, that will drive the semantically-enabled faceted search capabilities planned for the site. Incorporation of semantic technologies included in the future Drupal 7 core release is also anticipated. Using a public domain CMS as opposed to proprietary middleware, and taking advantage of the many features of Drupal 6 that are designed to support semantically-enabled interfaces, will help prepare the BCO-DMO database for interoperability with other ecosystem databases.

  11. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...

  12. SNaX: A Database of Supernova X-Ray Light Curves

    Energy Technology Data Exchange (ETDEWEB)

    Ross, Mathias; Dwarkadas, Vikram V., E-mail: Mathias_Ross@msn.com, E-mail: vikram@oddjob.uchicago.edu [Astronomy and Astrophysics, University of Chicago, 5640 S Ellis Avenue, ERC 569, Chicago, IL 60637 (United States)

    2017-06-01

    We present the Supernova X-ray Database (SNaX), a compilation of the X-ray data from young supernovae (SNe). The database includes the X-ray fluxes and luminosities of young SNe, from days to years after outburst. The original goal and intent of this study was to present a database of Type IIn SNe (SNe IIn), which we have accomplished. Our ongoing goal is to expand the database to include all SNe for which published data are available. The database interface allows one to search for SNe using various criteria, plot all or selected data points, and download both the data and the plot. The plotting facility allows for significant customization. There is also a facility for the user to submit data that can be directly incorporated into the database. We include an option to fit the decay of any given SN light curve with a power-law. The database includes a conversion of most data points to a common 0.3–8 keV band so that SN light curves may be directly compared with each other. A mailing list has been set up to disseminate information about the database. We outline the structure and function of the database, describe its various features, and outline the plans for future expansion.
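
    The database's power-law fit of a light-curve decay, L(t) ∝ t^(−b), can be done in log-log space with an ordinary linear fit. A hedged numpy sketch with invented epochs and luminosities (not SNaX data or its fitting code):

```python
import numpy as np

# Hypothetical epochs (days since outburst) and 0.3-8 keV luminosities
# following an exact power law with decay index b = 0.9.
t = np.array([30.0, 100.0, 300.0, 1000.0])
L = 1e39 * (t / 100.0) ** -0.9

# log L = log A - b * log t, so a degree-1 polyfit recovers -b as the slope.
slope, intercept = np.polyfit(np.log(t), np.log(L), 1)
print("decay index b = %.2f" % -slope)  # decay index b = 0.90
```

    Converting all points to a common 0.3–8 keV band, as the database does, is what makes such fits comparable across supernovae.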

  13. The STEP database through the end-users eyes--USABILITY STUDY.

    Science.gov (United States)

    Salunke, Smita; Tuleu, Catherine

    2015-08-15

    The user-designed database of Safety and Toxicity of Excipients for Paediatrics ("STEP") was created to address the shared need of the drug development community to access relevant information on excipients effortlessly. Usability testing was performed to validate whether the database satisfies the needs of the end-users. An evaluation framework was developed to assess usability. The participants performed scenario-based tasks and provided feedback and post-session usability ratings. Failure Mode and Effects Analysis (FMEA) was performed to prioritize the problems and improvements to the STEP database design and functionalities. The study revealed several design vulnerabilities. Tasks such as limiting the results, running complex queries, locating data and registering to access the database were challenging. The three critical attributes identified as having an impact on the usability of the STEP database were (1) content and presentation, (2) the navigation and search features, and (3) potential end-users. The evaluation framework proved to be an effective method for evaluating database effectiveness and user satisfaction. This study provides strong initial support for the usability of the STEP database. Recommendations will be incorporated into the refinement of the database to improve its usability and increase user participation towards the advancement of the database. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. SNaX: A Database of Supernova X-Ray Light Curves

    International Nuclear Information System (INIS)

    Ross, Mathias; Dwarkadas, Vikram V.

    2017-01-01

    We present the Supernova X-ray Database (SNaX), a compilation of the X-ray data from young supernovae (SNe). The database includes the X-ray fluxes and luminosities of young SNe, from days to years after outburst. The original goal and intent of this study was to present a database of Type IIn SNe (SNe IIn), which we have accomplished. Our ongoing goal is to expand the database to include all SNe for which published data are available. The database interface allows one to search for SNe using various criteria, plot all or selected data points, and download both the data and the plot. The plotting facility allows for significant customization. There is also a facility for the user to submit data that can be directly incorporated into the database. We include an option to fit the decay of any given SN light curve with a power-law. The database includes a conversion of most data points to a common 0.3–8 keV band so that SN light curves may be directly compared with each other. A mailing list has been set up to disseminate information about the database. We outline the structure and function of the database, describe its various features, and outline the plans for future expansion.

  15. Update History of This Database - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available SKIP Stemcell Database — Update History of This Database. 2017/03/13: the SKIP Stemcell Database English archive site is opened. 2013/03/29: the SKIP Stemcell Database (https://www.skip.med.keio.ac.jp/SKIPSearch/top?lang=en) is opened.

  16. The Unified Database for BM@N experiment data handling

    Science.gov (United States)

    Gertsenberger, Konstantin; Rogachevsky, Oleg

    2018-04-01

The article describes the developed Unified Database designed as a comprehensive relational data storage for the BM@N experiment at the Joint Institute for Nuclear Research in Dubna. The BM@N experiment, which is one of the main elements of the first stage of the NICA project, is a fixed target experiment at extracted Nuclotron beams of the Laboratory of High Energy Physics (LHEP JINR). The structure and purposes of the BM@N setup are briefly presented. The article considers the scheme of the Unified Database, its attributes and implemented features in detail. The use of the developed BM@N database provides correct multi-user access to actual information of the experiment for data processing. It stores information on the experiment runs, detectors and their geometries, different configuration, calibration and algorithm parameters used in offline data processing. User interfaces, an important part of any database, are also presented.
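A toy version of such a relational run/detector store can be sketched with SQLite. The table and column names below are invented for illustration and are not the actual Unified Database schema.

```python
import sqlite3

# Hypothetical miniature of an experiment run/detector store; names are
# illustrative, not the real BM@N Unified Database schema.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE run_(run_number INTEGER PRIMARY KEY, beam TEXT, energy_gev REAL);
CREATE TABLE detector_(name TEXT PRIMARY KEY, geometry_tag TEXT);
INSERT INTO run_ VALUES (77, 'Ar', 3.2);
INSERT INTO detector_ VALUES ('GEM', 'v1');
""")
# Offline processing would look up run conditions like this:
beam, energy = con.execute(
    "SELECT beam, energy_gev FROM run_ WHERE run_number = 77").fetchone()
print(beam, energy)  # Ar 3.2
```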

  17. Governing sexual behaviour through humanitarian codes of conduct.

    Science.gov (United States)

    Matti, Stephanie

    2015-10-01

    Since 2001, there has been a growing consensus that sexual exploitation and abuse of intended beneficiaries by humanitarian workers is a real and widespread problem that requires governance. Codes of conduct have been promoted as a key mechanism for governing the sexual behaviour of humanitarian workers and, ultimately, preventing sexual exploitation and abuse (PSEA). This article presents a systematic study of PSEA codes of conduct adopted by humanitarian non-governmental organisations (NGOs) and how they govern the sexual behaviour of humanitarian workers. It draws on Foucault's analytics of governance and speech act theory to examine the findings of a survey of references to codes of conduct made on the websites of 100 humanitarian NGOs, and to analyse some features of the organisation-specific PSEA codes identified. © 2015 The Author(s). Disasters © Overseas Development Institute, 2015.

  18. Adding Conflict Resolution Features to a Query Language for Database Federations

    Directory of Open Access Journals (Sweden)

    Kai-Uwe Sattler

    2000-11-01

    Full Text Available A main problem of data integration is the treatment of conflicts caused by different modeling of real-world entities, different data models or simply by different representations of one and the same object. During the integration phase these conflicts have to be identified and resolved as part of the mapping between local and global schemata. Therefore, conflict resolution affects the definition of the integrated view as well as query transformation and evaluation. In this paper we present a SQL extension for defining and querying database federations. This language addresses in particular the resolution of integration conflicts by providing mechanisms for mapping attributes, restructuring relations as well as extended integration operations. Finally, the application of these resolution strategies is briefly explained by presenting a simple conflict resolution method.
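The paper's SQL extension itself is not reproduced here, but the underlying idea (mapping conflicting local representations onto one global attribute) can be shown with plain SQL as a stand-in:

```python
import sqlite3

# Illustration of a representation conflict (price in dollars vs. cents)
# resolved in a global view; plain-SQL stand-in, not the paper's extension.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE src_a(id INTEGER, price_usd REAL);
CREATE TABLE src_b(id INTEGER, price_cents INTEGER);
INSERT INTO src_a VALUES (1, 9.99);
INSERT INTO src_b VALUES (2, 1250);
""")
rows = con.execute("""
SELECT id, price_usd         AS price FROM src_a
UNION ALL
SELECT id, price_cents/100.0 AS price FROM src_b
ORDER BY id
""").fetchall()
print(rows)  # [(1, 9.99), (2, 12.5)]
```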

  19. VerSeDa: vertebrate secretome database.

    Science.gov (United States)

    Cortazar, Ana R; Oguiza, José A; Aransay, Ana M; Lavín, José L

    2017-01-01

    Based on current tools, de novo secretome (full set of proteins secreted by an organism) prediction is a time-consuming bioinformatic task that requires a multifactorial analysis in order to obtain reliable in silico predictions. Hence, to accelerate this process and offer researchers a reliable repository where secretome information can be obtained for vertebrates and model organisms, we have developed VerSeDa (Vertebrate Secretome Database). This freely available database stores information about proteins that are predicted to be secreted through the classical and non-classical mechanisms, for the wide range of vertebrate species deposited at the NCBI, UCSC and ENSEMBL sites. To our knowledge, VerSeDa is the only state-of-the-art database designed to store secretome data from multiple vertebrate genomes, thus saving a considerable amount of time otherwise spent predicting protein features that can be retrieved from this repository directly. VerSeDa is freely available at http://genomics.cicbiogune.es/VerSeDa/index.php. © The Author(s) 2017. Published by Oxford University Press.

  20. High-throughput STR analysis for DNA database using direct PCR.

    Science.gov (United States)

    Sim, Jeong Eun; Park, Su Jeong; Lee, Han Chul; Kim, Se-Yong; Kim, Jong Yeol; Lee, Seung Hwan

    2013-07-01

    Since the Korean criminal DNA database was launched in 2010, we have focused on establishing an automated DNA database profiling system that analyzes short tandem repeat loci in a high-throughput and cost-effective manner. We established a DNA database profiling system without DNA purification using a direct PCR buffer system. The quality of direct PCR procedures was compared with that of conventional PCR system under their respective optimized conditions. The results revealed not only perfect concordance but also an excellent PCR success rate, good electropherogram quality, and an optimal intra/inter-loci peak height ratio. In particular, the proportion of DNA extraction required due to direct PCR failure could be minimized to <3%. In conclusion, the newly developed direct PCR system can be adopted for automated DNA database profiling systems to replace or supplement conventional PCR system in a time- and cost-saving manner. © 2013 American Academy of Forensic Sciences Published 2013. This article is a U.S. Government work and is in the public domain in the U.S.A.

  1. Global Water Governance in the Context of Global and Multilevel Governance: Its Need, Form, and Challenges

    Directory of Open Access Journals (Sweden)

    Joyeeta Gupta

    2013-12-01

    Full Text Available To complement this Special Feature on global water governance, we focused on a generic challenge at the global level, namely, the degree to which water issues need to be dealt with in a centralized, concentrated, and hierarchical manner. We examined water ecosystem services and their impact on human well-being, the role of policies, indirect and direct drivers in influencing these services, and the administrative level(s) at which the provision of services and potential trade-offs can be dealt with. We applied a politics of scale perspective to understand motivations for defining a problem at the global or local level and show that the multilevel approach to water governance is evolving and inevitable. We argue that a centralized overarching governance system for water is unlikely and possibly undesirable; however, there is a need for a high-level think tank and leadership to develop a cosmopolitan perspective to promote sustainable water development.

  2. Enhanced annotations and features for comparing thousands of Pseudomonas genomes in the Pseudomonas genome database.

    Science.gov (United States)

    Winsor, Geoffrey L; Griffiths, Emma J; Lo, Raymond; Dhillon, Bhavjinder K; Shay, Julie A; Brinkman, Fiona S L

    2016-01-04

    The Pseudomonas Genome Database (http://www.pseudomonas.com) is well known for the application of community-based annotation approaches for producing a high-quality Pseudomonas aeruginosa PAO1 genome annotation, and facilitating whole-genome comparative analyses with other Pseudomonas strains. To aid analysis of potentially thousands of complete and draft genome assemblies, this database and analysis platform was upgraded to integrate curated genome annotations and isolate metadata with enhanced tools for larger scale comparative analysis and visualization. Manually curated gene annotations are supplemented with improved computational analyses that help identify putative drug targets and vaccine candidates or assist with evolutionary studies by identifying orthologs, pathogen-associated genes and genomic islands. The database schema has been updated to integrate isolate metadata that will facilitate more powerful analysis of genomes across datasets in the future. We continue to place an emphasis on providing high-quality updates to gene annotations through regular review of the scientific literature and using community-based approaches including a major new Pseudomonas community initiative for the assignment of high-quality gene ontology terms to genes. As we further expand from thousands of genomes, we plan to provide enhancements that will aid data visualization and analysis arising from whole-genome comparative studies including more pan-genome and population-based approaches. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  3. Lula’s government in search of direction

    Directory of Open Access Journals (Sweden)

    Cláudio GONÇALVES COUTO

    2011-01-01

    Full Text Available Lula’s government has just completed a third of its term in office and is now undergoing a spell of crisis and aimlessness. Popularity rates are falling, the economy remains in the doldrums and irresolution seems to be the main feature of the current administration. Lula came to power leading a party whose performance was always marked by staunchness in opposition and the preaching of immaculate public morals. Despite all this, the government lacks clarity concerning the shaping of a state-building project extending beyond self-righteousness and antagonism.

  4. Database design and database administration for a kindergarten

    OpenAIRE

    Vítek, Daniel

    2009-01-01

    The bachelor thesis deals with the creation of a database design for a standard kindergarten, the installation of the designed database into the database system Oracle Database 10g Express Edition, and a demonstration of administration tasks in this database system. The design was verified by means of a purpose-built access application.

  5. Interactive Multi-Instrument Database of Solar Flares

    Science.gov (United States)

    Ranjan, Shubha S.; Spaulding, Ryan; Deardorff, Donald G.

    2018-01-01

    The fundamental motivation of the project is that the scientific output of solar research can be greatly enhanced by better exploitation of the existing solar/heliosphere space-data products jointly with ground-based observations. Our primary focus is on developing a specific innovative methodology based on recent advances in "big data" intelligent databases applied to the growing amount of high-spatial and multi-wavelength resolution, high-cadence data from NASA's missions and supporting ground-based observatories. Our flare database is not simply a manually searchable time-based catalog of events or list of web links pointing to data. It is a preprocessed metadata repository enabling fast search and automatic identification of all recorded flares sharing a specifiable set of characteristics, features, and parameters. The result is a new and unique database of solar flares and data search and classification tools for the Heliophysics community, enabling multi-instrument/multi-wavelength investigations of flare physics and supporting further development of flare-prediction methodologies.

  6. RTDB: A memory resident real-time object database

    International Nuclear Information System (INIS)

    Nogiec, Jerzy M.; Desavouret, Eugene

    2003-01-01

    RTDB is a fast, memory-resident object database with built-in support for distribution. It constitutes an attractive alternative for architecting real-time solutions with multiple, possibly distributed, processes or agents sharing data. RTDB offers both direct and navigational access to stored objects, with local and remote random access by object identifiers, and immediate direct access via object indices. The database supports transparent access to objects stored in multiple collaborating dispersed databases and includes a built-in cache mechanism that allows for keeping local copies of remote objects, with specifiable invalidation deadlines. Additional features of RTDB include a trigger mechanism on objects that allows for issuing events or activating handlers when objects are accessed or modified and a very fast, attribute-based search/query mechanism. The overall architecture and application of RTDB in a control and monitoring system are presented.
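The trigger and attribute-search mechanisms described above can be sketched in miniature. The class below is an invented illustration of the general design (object store plus modification handlers plus a value index), not RTDB's actual API.

```python
# Minimal sketch (assumed design, not RTDB's actual API): an in-memory object
# store that fires handlers when objects are modified, plus an index for
# attribute-based lookup.
class ObjectStore:
    def __init__(self):
        self._objects = {}
        self._handlers = []   # called on every put (the "trigger" mechanism)
        self._index = {}      # attribute value -> set of object identifiers

    def on_modify(self, handler):
        self._handlers.append(handler)

    def put(self, oid, obj):
        self._objects[oid] = obj
        for value in obj.values():
            self._index.setdefault(value, set()).add(oid)
        for handler in self._handlers:
            handler(oid, obj)

    def get(self, oid):
        return self._objects[oid]

    def find(self, value):    # fast attribute-based search via the index
        return sorted(self._index.get(value, ()))

events = []
store = ObjectStore()
store.on_modify(lambda oid, obj: events.append(oid))
store.put("magnet1", {"status": "ramping"})
print(store.find("ramping"), events)  # ['magnet1'] ['magnet1']
```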

  7. Unsupervised feature learning for autonomous rock image classification

    Science.gov (United States)

    Shu, Lei; McIsaac, Kenneth; Osinski, Gordon R.; Francis, Raymond

    2017-09-01

    Autonomous rock image classification can enhance the capability of robots for geological detection and enlarge the scientific returns, both in investigation on Earth and planetary surface exploration on Mars. Since rock textural images are usually inhomogeneous and hand-crafted features are not always reliable, we propose an unsupervised feature learning method to autonomously learn the feature representation for rock images. In our tests, rock image classification using the learned features shows that the learned features can outperform manually selected features. Self-taught learning is also proposed to learn the feature representation from a large database of unlabelled rock images of mixed class. The learned features can then be used repeatedly for classification of any subclass. This takes advantage of the large dataset of unlabelled rock images and learns a general feature representation for many kinds of rocks. We show experimental results supporting the feasibility of self-taught learning on rock images.
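The self-taught idea can be sketched with an assumed k-means pipeline (the paper's actual method may differ): learn a dictionary from unlabelled patches, then represent any image by its distances to the learned centroids.

```python
import numpy as np

# Minimal sketch of self-taught feature learning (assumed pipeline, not the
# paper's network): learn a codebook from unlabelled patches with k-means,
# then represent any patch by its distances to the learned centroids.
rng = np.random.default_rng(0)

def kmeans(data, k, iters=20):
    centroids = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        labels = ((data[:, None] - centroids[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = data[labels == j].mean(0)
    return centroids

unlabelled = rng.normal(size=(200, 16))        # stand-in for rock-image patches
codebook = kmeans(unlabelled, k=8)             # learned without any labels
patch = unlabelled[0]
feature = ((patch - codebook) ** 2).sum(-1)    # 8-dim learned representation
print(feature.shape)  # (8,)
```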

  8. UbiProt: a database of ubiquitylated proteins

    Directory of Open Access Journals (Sweden)

    Kondratieva Ekaterina V

    2007-04-01

    Full Text Available Abstract Background Post-translational protein modification with ubiquitin, or ubiquitylation, is one of the hottest topics in modern biology due to its dramatic impact on diverse metabolic pathways and involvement in the pathogenesis of severe human diseases. A great number of eukaryotic proteins have been found to be ubiquitylated. However, data about particular ubiquitylated proteins remain rather scattered. Description To fill a general need for collecting and systematizing experimental data concerning ubiquitylation we have developed a new resource, UbiProt Database, a knowledgebase of ubiquitylated proteins. The database contains retrievable information about overall characteristics of a particular protein, ubiquitylation features, related ubiquitylation and de-ubiquitylation machinery and literature references reflecting experimental evidence of ubiquitylation. UbiProt is available at http://ubiprot.org.ru for free. Conclusion UbiProt Database is a public resource offering comprehensive information on ubiquitylated proteins. The resource can serve as a general reference source both for researchers in the ubiquitin field and those who deal with particular ubiquitylated proteins of interest. Further development of the UbiProt Database is expected to be of common interest for research groups involved in studies of the ubiquitin system.

  9. Global health governance as shared health governance.

    Science.gov (United States)

    Ruger, Jennifer Prah

    2012-07-01

    With the exception of key 'proven successes' in global health, the current regime of global health governance can be understood as transnational and national actors pursuing their own interests under a rational actor model of international cooperation, which fails to provide sufficient justification for an obligation to assist in meeting the health needs of others. An ethical commitment to providing all with the ability to be healthy is required. This article develops select components of an alternative model of shared health governance (SHG), which aims to provide a 'road map,' 'focal points' and 'the glue' among various global health actors to better effectuate cooperation on universal ethical principles for an alternative global health equilibrium. Key features of SHG include public moral norms as shared authoritative standards; ethical commitments, shared goals and role allocation; shared sovereignty and constitutional commitments; legitimacy and accountability; country-level attention to international health relations. A framework of social agreement based on 'overlapping consensus' is contrasted against one based on self-interested political bargaining. A global health constitution delineating duties and obligations of global health actors and a global institute of health and medicine for holding actors responsible are proposed. Indicators for empirical assessment of select SHG principles are described. Global health actors, including states, must work together to correct and avert global health injustices through a framework of SHG based on shared ethical commitments.

  10. EMEN2: an object oriented database and electronic lab notebook.

    Science.gov (United States)

    Rees, Ian; Langley, Ed; Chiu, Wah; Ludtke, Steven J

    2013-02-01

    Transmission electron microscopy and associated methods, such as single particle analysis, two-dimensional crystallography, helical reconstruction, and tomography, are highly data-intensive experimental sciences, which also have substantial variability in experimental technique. Object-oriented databases present an attractive alternative to traditional relational databases for situations where the experiments themselves are continually evolving. We present EMEN2, an easy to use object-oriented database with a highly flexible infrastructure originally targeted for transmission electron microscopy and tomography, which has been extended to be adaptable for use in virtually any experimental science. It is a pure object-oriented database designed for easy adoption in diverse laboratory environments and does not require professional database administration. It includes a full featured, dynamic web interface in addition to APIs for programmatic access. EMEN2 installations currently support roughly 800 scientists worldwide with over half a million experimental records and over 20 TB of experimental data. The software is freely available with complete source.

  11. Database Description - Open TG-GATEs Pathological Image Database | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available Open TG-GATEs Pathological Image Database - Database Description. Database name: Open TG-GATEs Pathological Image Database; Alternative name: -; DOI: 10.18908/lsdba.nbdc00954-0...; Contact: ...iomedical Innovation, 7-6-8, Saito-asagi, Ibaraki-city, Osaka 567-0085, Japan, TEL: 81-72-641-9826; Database classification: Toxicogenomics Database; Organism taxonomy name: Rattus norvegi...

  12. Adaptive weighted local textural features for illumination, expression, and occlusion invariant face recognition

    Science.gov (United States)

    Cui, Chen; Asari, Vijayan K.

    2014-03-01

    Biometric features such as fingerprints, iris patterns, and face features help to identify people and restrict access to secure areas by performing advanced pattern analysis and matching. Face recognition is one of the most promising biometric methodologies for human identification in a non-cooperative security environment. However, the recognition results obtained by face recognition systems are affected by several variations that may happen to the patterns in an unrestricted environment. As a result, several algorithms have been developed for extracting different facial features for face recognition. Due to the various possible challenges of data captured at different lighting conditions, viewing angles, facial expressions, and partial occlusions in natural environmental conditions, automatic facial recognition still remains a difficult issue that needs to be resolved. In this paper, we propose a novel approach to tackling some of these issues by analyzing the local textural descriptions for facial feature representation. The textural information is extracted by an enhanced local binary pattern (ELBP) description of all the local regions of the face. The relationship of each pixel with respect to its neighborhood is extracted and employed to calculate the new representation. ELBP reconstructs a much better textural feature extraction vector from an original gray level image in different lighting conditions. The dimensionality of the texture image is reduced by principal component analysis performed on each local face region. Each low dimensional vector representing a local region is now weighted based on the significance of the sub-region. The weight of each sub-region is determined by employing the local variance estimate of the respective region, which represents the significance of the region. The final facial textural feature vector is obtained by concatenating the reduced dimensional weight sets of all the modules (sub-regions) of the face image.
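For reference, the baseline local binary pattern that ELBP builds on can be computed as follows. The enhanced variant itself is not reproduced; the patch values are made up for the example.

```python
import numpy as np

# Sketch of the plain 3x3 local binary pattern (the baseline the paper's
# ELBP extends; the enhanced variant is not reproduced here).
def lbp_code(patch3x3):
    center = patch3x3[1, 1]
    # clockwise neighbours starting at the top-left pixel
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [int(patch3x3[r, c] >= center) for r, c in order]
    # pack the 8 threshold bits into one code (first neighbour = LSB)
    return sum(b << i for i, b in enumerate(bits))

patch = np.array([[5, 9, 1],
                  [4, 6, 7],
                  [2, 6, 8]])
print(lbp_code(patch))  # 58
```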

  13. Red Lesion Detection Using Dynamic Shape Features for Diabetic Retinopathy Screening.

    Science.gov (United States)

    Seoud, Lama; Hurtut, Thomas; Chelbi, Jihed; Cheriet, Farida; Langlois, J M Pierre

    2016-04-01

    The development of an automatic telemedicine system for computer-aided screening and grading of diabetic retinopathy depends on reliable detection of retinal lesions in fundus images. In this paper, a novel method for automatic detection of both microaneurysms and hemorrhages in color fundus images is described and validated. The main contribution is a new set of shape features, called Dynamic Shape Features, that do not require precise segmentation of the regions to be classified. These features represent the evolution of the shape during image flooding and allow discrimination between lesions and vessel segments. The method is validated per-lesion and per-image using six databases, four of which are publicly available. It proves to be robust with respect to variability in image resolution, quality and acquisition system. On the Retinopathy Online Challenge's database, the method achieves a FROC score of 0.420 which ranks it fourth. On the Messidor database, when detecting images with diabetic retinopathy, the proposed method achieves an area under the ROC curve of 0.899, comparable to the score of human experts, and it outperforms state-of-the-art approaches.

  14. Comparison of Automatic Classifiers’ Performances using Word-based Feature Extraction Techniques in an E-government setting

    OpenAIRE

    Marin Rodenas, Alfonso

    2011-01-01

    Project carried out through a mobility programme at KUNGLIGA TEKNISKA HÖGSKOLAN, STOCKHOLM. Nowadays, email is commonly used by citizens to communicate with their government. Among the received emails, governments deal with common queries and subjects that handling officers have to answer manually. Automatic classification of incoming emails increases communication efficiency by reducing the delay between a query and its response. This thesis t...

  15. Transformation of hand-shape features for a biometric identification approach.

    Science.gov (United States)

    Travieso, Carlos M; Briceño, Juan Carlos; Alonso, Jesús B

    2012-01-01

    The present work presents a biometric identification system for hand shape identification. The different contours have been coded based on angular descriptions forming a Markov chain descriptor. Discrete Hidden Markov Models (DHMM), each representing a target identification class, have been trained with such chains. Features have been calculated from a kernel based on the HMM parameter descriptors. Finally, supervised Support Vector Machines were used to classify parameters from the DHMM kernel. First, the system was modelled using 60 users to tune the DHMM and DHMM_kernel+SVM configuration parameters and finally, the system was checked with the whole database (GPDS database, 144 users with 10 samples per class). Our experiments have obtained similar results in both cases, demonstrating a scalable, stable and robust system. Our experiments have achieved an upper success rate of 99.87% for the GPDS database using three hand samples per class in training mode, and seven hand samples in test mode. Secondly, the authors have verified their algorithms using another independent and public database (the UST database). Our approach has reached 100% and 99.92% success for right and left hand, respectively; showing the robustness and independence of our algorithms. This success was found using as features the transformation of 100 points hand shape with our DHMM kernel, and as classifier Support Vector Machines with linear separating functions, with similar success.
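The angular contour coding can be illustrated with a simplified, assumed version: sample the contour, quantize the direction of each step into a small alphabet, and feed the resulting symbol chain to a discrete HMM. The function below is a hypothetical sketch, not the paper's exact descriptor.

```python
import numpy as np

# Rough sketch (assumed preprocessing, not the paper's exact coding): turn a
# sampled contour into a sequence of quantized step directions, the kind of
# symbol chain a discrete HMM could be trained on.
def angle_chain(points, n_symbols=8):
    pts = np.asarray(points, dtype=float)
    diffs = np.diff(pts, axis=0)                   # vector of each contour step
    angles = np.arctan2(diffs[:, 1], diffs[:, 0])  # direction in (-pi, pi]
    bins = ((angles + np.pi) / (2 * np.pi) * n_symbols).astype(int) % n_symbols
    return bins.tolist()

# A diamond-shaped toy contour: four steps at 45-degree diagonals.
diamond = [(0, 0), (1, 1), (0, 2), (-1, 1), (0, 0)]
print(angle_chain(diamond, n_symbols=4))  # [2, 3, 0, 1]
```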

  16. Transformation of Hand-Shape Features for a Biometric Identification Approach

    Directory of Open Access Journals (Sweden)

    Jesús B. Alonso

    2012-01-01

    Full Text Available The present work presents a biometric identification system for hand shape identification. The different contours have been coded based on angular descriptions forming a Markov chain descriptor. Discrete Hidden Markov Models (DHMM), each representing a target identification class, have been trained with such chains. Features have been calculated from a kernel based on the HMM parameter descriptors. Finally, supervised Support Vector Machines were used to classify parameters from the DHMM kernel. First, the system was modelled using 60 users to tune the DHMM and DHMM_kernel+SVM configuration parameters and finally, the system was checked with the whole database (GPDS database, 144 users with 10 samples per class). Our experiments have obtained similar results in both cases, demonstrating a scalable, stable and robust system. Our experiments have achieved an upper success rate of 99.87% for the GPDS database using three hand samples per class in training mode, and seven hand samples in test mode. Secondly, the authors have verified their algorithms using another independent and public database (the UST database). Our approach has reached 100% and 99.92% success for right and left hand, respectively; showing the robustness and independence of our algorithms. This success was found using as features the transformation of 100 points hand shape with our DHMM kernel, and as classifier Support Vector Machines with linear separating functions, with similar success.

  17. Meaning Of The Term "Corruption Offense" As A Feature Of The Public Prosecutor's Supervision Over The Legislation On The Corruption Counteraction In The Municipal Governments Execution

    Directory of Open Access Journals (Sweden)

    Kseniya D. Okuneva

    2014-12-01

    Full Text Available This article analyses theoretical and practical aspects of the definition of a corruption offense as a characteristic feature of the methodology of prosecutorial supervision over the execution of anti-corruption legislation in local government. Federal Law of Jan. 17, 1992 No. 2202-1 "On the Procuracy of the Russian Federation" (Article 21) establishes the public prosecutor's supervision over the execution of legislation on combating corruption in local government, which constitutes a special sub-cluster of supervision. The analysis builds on general theoretical techniques of prosecutorial supervision, taking into account the specific and complex nature of prosecutors' anti-corruption activities in this area. The author emphasizes the characteristics of the corruption offense, as well as aspects of legal responsibility: measures of state coercion of a personal, financial or organizational nature are applied to the offender, in accordance with the law, for the offense committed, and the person who committed the offense is obliged to submit to such measures. The author concludes that the specifics of the corruption offenses that are the subject of prosecutorial supervision over the execution of anti-corruption legislation in local government are determined by the special status of the offense's subjects, as well as by the content of legal prohibitions and legal responsibilities in the field of anti-corruption at the municipal level.

  18. Evaluation of relational database products for the VAX

    International Nuclear Information System (INIS)

    Kannan, K.L.

    1985-11-01

    Four commercially available database products for the VAX/VMS operating system were evaluated for relative performance and ease of use. The products were DATATRIEVE, INGRES, Rdb, and S1032. Performance was measured in terms of elapsed time, CPU time, direct I/O counts, buffered I/O counts, and page faults. Ease of use is more subjective and has not been quantified here; however, discussion and tables of features as well as query syntax are included. This report describes the environment in which these products were evaluated and the characteristics of the databases used. All comparisons must be interpreted in the context of this setting.

  19. Evaluation of relational database products for the VAX

    Energy Technology Data Exchange (ETDEWEB)

    Kannan, K.L.

    1985-11-01

    Four commercially available database products for the VAX/VMS operating system were evaluated for relative performance and ease of use. The products were DATATRIEVE, INGRES, Rdb, and S1032. Performance was measured in terms of elapsed time, CPU time, direct I/O counts, buffered I/O counts, and page faults. Ease of use is more subjective and has not been quantified here; however, discussion and tables of features as well as query syntax are included. This report describes the environment in which these products were evaluated and the characteristics of the databases used. All comparisons must be interpreted in the context of this setting.

  20. CoR's White Paper on Multilevel Governance - Advantages and Disadvantages

    OpenAIRE

    Gal, Diana; Brie, Mircea

    2011-01-01

    Through the concept of multilevel governance, the European Union can be seen as a political system with interconnected institutions that exist at multiple levels and have unique policy features. The White Paper on Multilevel Governance reflects the determination to "Build Europe in partnership" and sets two main strategic objectives: encouraging participation in the European process and reinforcing the efficiency of Community action. The fact that public interest in European elections is decreasing, whils...

  1. Face recognition algorithm using extended vector quantization histogram features.

    Science.gov (United States)

    Yan, Yan; Lee, Feifei; Wu, Xueqian; Chen, Qiu

    2018-01-01

    In this paper, we propose a face recognition algorithm based on a combination of vector quantization (VQ) and Markov stationary features (MSF). The VQ algorithm has been shown to be an effective method for generating features; it extracts a codevector histogram as a facial feature representation for face recognition. Still, the VQ histogram features are unable to convey spatial structural information, which to some extent limits their usefulness in discrimination. To alleviate this limitation of VQ histograms, we utilize Markov stationary features (MSF) to extend the VQ histogram-based features so as to add spatial structural information. We demonstrate the effectiveness of our proposed algorithm by achieving recognition results superior to those of several state-of-the-art methods on publicly available face databases.
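A minimal sketch of the VQ-histogram feature follows (an assumed pipeline for illustration; the MSF extension and the actual codebook training are not shown): quantize image blocks against a small codebook and use the normalized codevector histogram as the feature.

```python
import numpy as np

# Illustrative sketch (assumed pipeline): assign each image block to its
# nearest codevector and use the codevector histogram as the feature.
# The Markov stationary feature (MSF) extension is not shown.
def vq_histogram(blocks, codebook):
    # squared Euclidean distance from every block to every codevector
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    codes = d.argmin(axis=1)
    hist = np.bincount(codes, minlength=len(codebook))
    return hist / hist.sum()   # normalized histogram feature

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
blocks = np.array([[0.1, 0.1], [0.9, 1.0], [0.1, 0.9], [1.1, 0.9]])
hist = vq_histogram(blocks, codebook)
print(hist.tolist())  # [0.25, 0.5, 0.25]
```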

  2. Large Science Databases – Are Cloud Services Ready for Them?

    Directory of Open Access Journals (Sweden)

    Ani Thakar

    2011-01-01

    Full Text Available We report on attempts to put an astronomical database – the Sloan Digital Sky Survey science archive – in the cloud. We find that it currently ranges from frustrating to impossible to migrate a complex SQL Server database into cloud service offerings such as Amazon (EC2) and Microsoft (SQL Azure). Certainly it is impossible to migrate a large database in excess of a TB, but even with (much) smaller databases, the limitations of cloud services make it very difficult to migrate the data to the cloud without making changes to the schema and settings that would degrade performance and/or make the data unusable. Preliminary performance comparisons show a large performance discrepancy with the Amazon cloud version of the SDSS database. These difficulties suggest that much work and coordination needs to occur between cloud service providers and their potential clients before science databases – not just large ones but even smaller databases that make extensive use of advanced database features for performance and usability – can successfully and effectively be deployed in the cloud. We describe a powerful new computational instrument that we are developing in the interim – the Data-Scope – that will enable fast and efficient analysis of the largest (petabyte-scale) scientific datasets.

  3. How do board of directors affect corporate governance disclosure? – the case of banking system

    OpenAIRE

    Stefanescu Cristina Alexandrina

    2013-01-01

    The purpose of our empirical study is to assess the relationship between board of directors’ features and the level of disclosure in the European Union banking environment, based on the general premise that disclosure and the quality of the corporate governance system are two closely related concepts - the higher the level of transparency, the better the quality of corporate governance practices. The main features considered for assessing board of directors quality were: independence, size, educa...

  4. Planform: an application and database of graph-encoded planarian regenerative experiments.

    Science.gov (United States)

    Lobo, Daniel; Malone, Taylor J; Levin, Michael

    2013-04-15

    Understanding the mechanisms governing the regeneration capabilities of many organisms is a fundamental interest in biology and medicine. An ever-increasing number of manipulation and molecular experiments are attempting to discover a comprehensive model for regeneration, with the planarian flatworm being one of the most important model species. Despite much effort, no comprehensive, constructive, mechanistic models exist yet, and it is now clear that computational tools are needed to mine this huge dataset. However, until now, there has been no database of regenerative experiments, and the current genotype-phenotype ontologies and databases are based on textual descriptions, which are not understandable by computers. To overcome these difficulties, we present here Planform (Planarian formalization), a manually curated database and software tool for planarian regenerative experiments, based on a mathematical graph formalism. The database contains more than a thousand experiments from the main publications in the planarian literature. The software tool provides the user with a graphical interface to easily interact with and mine the database. The presented system is a valuable resource for the regeneration community and, more importantly, will pave the way for the application of novel artificial intelligence tools to extract knowledge from this dataset. The database and software tool are freely available at http://planform.daniel-lobo.com.
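The idea of a graph encoding of experiments can be illustrated with a toy example: nodes are body regions with attributes, edges are physical adjacencies, and a manipulation maps one morphology graph to another. The names, fields and "amputation" operation below are invented for illustration and are not Planform's actual schema:

```python
# Toy graph encoding of a planarian morphology (illustrative only).
morphology = {
    "nodes": {
        "head":  {"organs": ["brain", "eyes"]},
        "trunk": {"organs": ["pharynx"]},
        "tail":  {"organs": []},
    },
    "edges": [("head", "trunk"), ("trunk", "tail")],
}

def apply_amputation(morph, removed):
    """Return the post-amputation morphology graph: drop the removed
    regions and any adjacency that touched them."""
    nodes = {n: a for n, a in morph["nodes"].items() if n not in removed}
    edges = [(u, v) for u, v in morph["edges"]
             if u in nodes and v in nodes]
    return {"nodes": nodes, "edges": edges}

# An experiment record links a manipulation to its resulting morphology,
# which is what makes the dataset machine-minable.
experiment = {
    "manipulation": "amputation",
    "removed": ["head"],
    "resultant": apply_amputation(morphology, ["head"]),
}
```

Because both the starting and resulting morphologies are graphs rather than free text, a program can compare experiments directly, which is the property the abstract emphasizes.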

  5. Coproductive capacities: rethinking science-governance relations in a diverse world

    Directory of Open Access Journals (Sweden)

    Lorrae E. van Kerkhoff

    2015-03-01

    Full Text Available Tackling major environmental change issues requires effective partnerships between science and governance, but relatively little work in this area has examined the diversity of settings from which such partnerships may, or may not, emerge. In this special feature we draw on experiences from around the world to demonstrate and investigate the consequences of diverse capacities and capabilities in bringing science and governance together. We propose the concept of coproductive capacities as a useful new lens through which to examine these relations. Coproductive capacity is "the combination of scientific resources and governance capability that shapes the extent to which a society, at various levels, can operationalize relationships between scientific and public, private, and civil society institutions and actors to effect scientifically-informed social change." This recasts the relationships between science and society from notions of "gaps" to notions of interconnectedness and interplay (coproduction), alongside the societal foundations that shape what is or is not possible in that dynamic connection (capacities). The articles in this special feature apply this concept to reveal social, political, and institutional conditions that both support and inhibit high-quality environmental governance as global issues are tackled in particular places. Across these articles we suggest that five themes emerge as important to understanding coproductive capacity: history, experience, and perceptions; quality of relationships (especially in suboptimal settings); disjuncts across scales; power, interests, and legitimacy; and alternative pathways for environmental governance. Taking a coproductive capacities perspective can help us identify which interventions may best enable scientifically informed, but locally sensitive approaches to environmental governance.

  6. Governance of Clubs and Firms with Cultural Dimensions

    NARCIS (Netherlands)

    van den Brink, J.R.; Ruys, P.H.M.; Semenov, R.

    1999-01-01

    The neoclassical way to cope with firms providing services, or with clubs procuring services, is restricted by the lack of institutional features. An institutional approach is introduced that requires a cooperative governance to realize the potential value-production by firms, or to realize the

  7. Flashflood-related mortality in southern France: first results from a new database

    Directory of Open Access Journals (Sweden)

    Vinet Freddy

    2016-01-01

    Full Text Available Over the last 25 years, flash floods in the South of France have killed almost 250 people. The protection of exposed populations is a priority for the French government and a goal of the 2007 European flood directive. However, no accurate database gathering flood fatalities in France previously existed; fatalities are assumed to be rare and largely accidental, mainly due to individual behaviour. A Ph.D. project has initiated the building of a database gathering a detailed analysis of the circumstances of death and the profiles of the deceased (age, gender…). The study area covers the French Mediterranean departments prone to flash floods over the period 1988-2015. This presentation details the main features of the sample: 244 fatalities collected from newspapers and completed with field surveys of police services and municipalities. The sample is broken down between major events, which account for two thirds of the fatalities, and “small” events (34 % of the fatalities). Deaths at home account for 35 % of the total number of fatalities, mainly during major events; 30 % of fatalities are related to vehicles. The last part of the work examines the relations between fatalities and prevention, and how better knowledge of flood-related deaths can help to improve flood prevention. The given example shows the relationship between flood forecasting and fatalities: half of the deaths took place in small watersheds (<150 km²). This emphasizes the need to disseminate a complementary flash flood forecasting system based on forecasted rainfall depth and adapted to small watersheds.

  8. Improved medical image modality classification using a combination of visual and textual features.

    Science.gov (United States)

    Dimitrovski, Ivica; Kocev, Dragi; Kitanovski, Ivan; Loskovska, Suzana; Džeroski, Sašo

    2015-01-01

    In this paper, we present the approach that we applied to the medical modality classification tasks at the ImageCLEF evaluation forum. More specifically, we used the modality classification databases from the ImageCLEF competitions in 2011, 2012 and 2013, described by four visual feature types and one textual feature type, and combinations thereof. As visual features we used local binary patterns, color and edge directivity descriptors, fuzzy color and texture histograms, and the scale-invariant feature transform (SIFT, and its variant opponentSIFT); as the textual feature we used the standard bag-of-words representation coupled with TF-IDF weighting. The results from the extensive experimental evaluation identify SIFT and opponentSIFT as the best-performing features for modality classification. Next, low-level fusion of the visual features improves the predictive performance of the classifiers: because the different features capture different aspects of an image, their combination offers a more complete representation of the visual content. Moreover, adding textual features further increases predictive performance. Finally, the results obtained with our approach are the best reported on these databases so far.
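Low-level (early) fusion of modalities amounts to normalizing each feature vector and concatenating the results before classification. A stdlib-only sketch, with a toy TF-IDF stand-in for the textual representation (the documents and "visual" descriptor values are invented):

```python
import math
from collections import Counter

def l2_normalize(vec):
    n = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / n for x in vec]

def tfidf(docs):
    """Minimal bag-of-words TF-IDF over a tiny corpus, standing in
    for the paper's textual features."""
    vocab = sorted({w for d in docs for w in d.split()})
    df = Counter(w for d in docs for w in set(d.split()))
    n = len(docs)
    out = []
    for d in docs:
        tf = Counter(d.split())
        out.append([tf[w] * math.log((1 + n) / (1 + df[w]))
                    for w in vocab])
    return out, vocab

def early_fusion(visual_feats, text_feats):
    """Low-level fusion: L2-normalize each modality, then concatenate
    into one vector per image for a single downstream classifier."""
    return [l2_normalize(v) + l2_normalize(t)
            for v, t in zip(visual_feats, text_feats)]

docs = ["ct scan chest", "mr brain scan"]        # captions (invented)
visual = [[3.0, 1.0], [0.5, 2.0]]                # tiny visual descriptors
text, vocab = tfidf(docs)
fused = early_fusion(visual, text)
```

Normalizing per modality before concatenation keeps one modality's scale from dominating the fused representation, which is one reason early fusion of heterogeneous features can help.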

  9. The role of law in adaptive governance

    Directory of Open Access Journals (Sweden)

    Barbara A. Cosens

    2017-03-01

    Full Text Available The term "governance" encompasses both governmental and nongovernmental participation in collective choice and action. Law dictates the structure, boundaries, rules, and processes within which governmental action takes place, and in doing so becomes one of the focal points for analysis of barriers to adaptation as the effects of climate change are felt. Adaptive governance must therefore contemplate a level of flexibility and evolution in governmental action beyond that currently found in the heavily administrative governments of many democracies. Nevertheless, over time, law itself has proven highly adaptive in western systems of government, evolving to address and even facilitate the emergence of new social norms (such as the rights of women and minorities or to provide remedies for emerging problems (such as pollution. Thus, there is no question that law can adapt, evolve, and be reformed to make room for adaptive governance. In doing this, not only may barriers be removed, but law may be adjusted to facilitate adaptive governance and to aid in institutionalizing new and emerging approaches to governance. The key is to do so in a way that also enhances legitimacy, accountability, and justice, or else such reforms will never be adopted by democratic societies, or if adopted, will destabilize those societies. By identifying those aspects of the frameworks for adaptive governance reviewed in the introduction to this special feature relevant to the legal system, we present guidelines for evaluating the role of law in environmental governance to identify the ways in which law can be used, adapted, and reformed to facilitate adaptive governance and to do so in a way that enhances the legitimacy of governmental action.

  10. Performance Assessment for e-Government Services: An Experience Report

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yan; Zhu, Liming; Gorton, Ian

    2007-08-14

    The transformation and integration of government services, enabled by the use of new technologies such as application servers and Web services, is fundamental to reducing the cost of government and improving service outcomes for citizens. Many core government information systems comprise applications running on legacy mainframes, databases and transaction processing monitors. As governments worldwide provide the general public with direct access over the Internet to these legacy applications, the back-end systems may be exposed to workloads well above their original design parameters. This creates a significant risk of high-profile failures for government agencies whose newly integrated systems become overloaded. In this paper we describe how we conducted a performance assessment of a business-critical, Internet-facing Web service that integrated new and legacy systems from two Australian Government agencies. We leveraged prototype tools from our own research along with known techniques in performance modeling. We were able to clearly demonstrate that the existing hardware and software would be adequate to handle the predicted workload for the next financial year. We were also able to perform ‘what-if’ analysis and predict how the system would perform under alternative scaling strategies. We conclude by summarizing the lessons learnt, including the importance of architecture visibility, benchmarking data quality, and measurement feasibility in the face of outsourcing, privacy legislation and cross-agency involvement.
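One classic technique for this kind of what-if capacity analysis is an M/M/c queueing model with the Erlang C formula, which predicts mean queueing delay from arrival rate, service rate and server count. The rates below are invented, and the paper's own performance models may well differ:

```python
import math

def erlang_c_wait(arrival_rate, service_rate, servers):
    """Mean wait in queue (seconds) for an M/M/c system via Erlang C.
    Rates are requests/sec; an unstable system returns infinity."""
    a = arrival_rate / service_rate              # offered load (Erlangs)
    if a >= servers:
        return math.inf                          # queue grows without bound
    rho = a / servers
    summation = sum(a ** k / math.factorial(k) for k in range(servers))
    top = a ** servers / (math.factorial(servers) * (1 - rho))
    p_wait = top / (summation + top)             # Erlang C: P(request waits)
    return p_wait / (servers * service_rate - arrival_rate)

# What-if: load doubles next year -- does doubling the servers hold
# the line on latency? (8 req/s today, 5 req/s per-server capacity)
today = erlang_c_wait(8.0, 5.0, 2)
doubled_load_doubled_servers = erlang_c_wait(16.0, 5.0, 4)
```

A useful property this model exposes: at equal utilization, larger server pools wait less, so doubling both load and servers actually improves mean delay.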

  11. Update History of This Database - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Yeast Interacting Proteins Database: Update History of This Database. 2010/03/29: the Yeast Interacting Proteins Database English archive site was opened. 2000/12/4: the Yeast Interacting Proteins Database ( http://itolab.cb.k.u-tokyo.ac.jp/Y2H/ ) was released.

  12. Notre Dame Nuclear Database: A New Chart of Nuclides

    Science.gov (United States)

    Lee, Kevin; Khouw, Timothy; Fasano, Patrick; Mumpower, Matthew; Aprahamian, Ani

    2014-09-01

    Nuclear data is critical to research fields from medicine to astrophysics. We are creating a database, the Notre Dame Nuclear Database, which can store theoretical and experimental datasets. We place emphasis on metadata storage and user interaction with the database. In addition to the specific nuclear datum, users are able to search by author(s), the facility where the measurements were made, the institution of that facility, and the device or method/technique used. We also allow users to interact with the database through online search, an interactive nuclide chart, and a command line interface. The nuclide chart is a more descriptive version of the periodic table that can be used to visualize nuclear properties such as half-lives and mass. We achieve this by using D3 (Data Driven Documents), HTML, and CSS3 to plot the nuclides and color them accordingly. Search capabilities can be applied dynamically to the chart by using Python to communicate with MySQL, allowing for customization. Users can save the customized chart they create to any image format. These features provide a unique approach for researchers to interface with nuclear data. We report on the current progress of this project and present a working demo that highlights each of the aforementioned features. To our knowledge, this is the first time these technologies have been combined to make nuclear data accessible in so easy and fully detailed a manner, and we will release the system as open-source software.
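The colour-coding of a nuclide chart by half-life can be sketched as simple log-scale binning of half-lives into display colours; the bin edges, colours and half-life values below are illustrative choices, not the project's actual scheme:

```python
import math

# Toy half-life data in seconds (values are illustrative only).
HALF_LIVES = {"n-1": 611.0, "C-14": 1.8e11, "U-238": 1.4e17, "Be-8": 8.2e-17}

# Half-life bins -> display colors, mimicking how a chart renderer
# might color-code stability (bin edges are an arbitrary choice here).
BINS = [(1e-6, "red"),       # essentially unbound / prompt decay
        (1.0, "orange"),     # short-lived
        (1e9, "yellow"),     # lab-scale lifetimes
        (math.inf, "green")] # quasi-stable

def color_for(nuclide):
    """Map a nuclide to the color of the first bin its half-life fits."""
    t = HALF_LIVES[nuclide]
    for edge, color in BINS:
        if t < edge:
            return color
```

In the actual system this mapping would be computed from a MySQL query result and handed to D3 as a CSS fill per chart cell; the binning logic itself is the same either way.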

  13. The Establishment of the SAR images database System Based on Oracle and ArcSDE

    International Nuclear Information System (INIS)

    Zhou, Jijin; Li, Zhen; Chen, Quan; Tian, Bangsen

    2014-01-01

    Synthetic aperture radar (SAR) is a kind of microwave imaging system with the advantages of multi-band, multi-polarization and multi-angle observation. At present, there is no SAR image database system based on typical features; to address problems in interpretation and identification, a new SAR image database system of typical features is an urgent need of current development. In this article, a SAR image database system based on Oracle and ArcSDE was constructed. The main works involved are as follows: (1) SAR image data were calibrated and corrected radiometrically and geometrically; in addition, fully polarimetric images were processed into the coherency matrix [T] to preserve the polarimetric information. (2) After analyzing multiple spaceborne SAR images, the metadata table was defined as: IMAGEID; name of feature; latitude and longitude; sensor name; range and azimuth resolution; etc. (3) Through a comparison between GeoRaster and ArcSDE, the results showed that ArcSDE is the more appropriate technology for storing images in a central database. The system stores and manages multisource SAR image data well; reflects scattering, geometry, polarization, band and angle characteristics; and, combining an analysis of the database's managed objects and service objects, focuses on data browsing and data retrieval. According to the analysis of SAR image characteristics such as scattering, polarization, incidence angle and waveband, different weights can be assigned to these characteristics, forming an interpretation tool that provides an efficient platform for interpretation.
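The metadata table described in step (2) can be sketched in SQL; SQLite stands in here for the production Oracle/ArcSDE stack, and the column names and sample row are illustrative guesses at the schema, not the actual DDL:

```python
import sqlite3

# In-memory stand-in for the SAR metadata table (illustrative schema).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sar_metadata (
        image_id      INTEGER PRIMARY KEY,
        feature_name  TEXT,   -- name of the typical feature imaged
        latitude      REAL,
        longitude     REAL,
        sensor        TEXT,
        range_res_m   REAL,   -- range resolution (m)
        azimuth_res_m REAL    -- azimuth resolution (m)
    )""")
conn.execute(
    "INSERT INTO sar_metadata VALUES "
    "(1, 'airport', 39.9, 116.4, 'RADARSAT-2', 3.0, 3.0)")

# Retrieval by metadata attribute, as the data-retrieval feature implies.
rows = conn.execute(
    "SELECT feature_name, sensor FROM sar_metadata "
    "WHERE range_res_m <= 5").fetchall()
```

Keeping sensor, resolution and geolocation as queryable columns is what enables the attribute-based browsing and retrieval the abstract highlights.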

  14. Digital Education Governance: Data Visualization, Predictive Analytics, and "Real-Time" Policy Instruments

    Science.gov (United States)

    Williamson, Ben

    2016-01-01

    Educational institutions and governing practices are increasingly augmented with digital database technologies that function as new kinds of policy instruments. This article surveys and maps the landscape of digital policy instrumentation in education and provides two detailed case studies of new digital data systems. The Learning Curve is a…

  15. Experience using a distributed object oriented database for a DAQ system

    International Nuclear Information System (INIS)

    Bee, C.P.; Eshghi, S.; Jones, R.

    1996-01-01

    To configure the RD13 data acquisition system, we need many parameters which describe the various hardware and software components. Such information has been defined using an entity-relation model and stored in a commercial memory-resident database. During the last year, Itasca, an object-oriented database management system (OODB), was chosen as a replacement database system. We have ported the existing databases (hardware and software configurations, run parameters, etc.) to Itasca and integrated it with the run control system. We believe that it is possible to use an OODB in real-time environments such as DAQ systems. In this paper, we present our experience and impressions: why we wanted to change from an entity-relational approach, some useful features of Itasca, and the issues we met during this project, including integration of the database into an existing distributed environment and the factors which influence performance. (author)

  16. Database Description - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. General information of database: Database name: RMOS. Contact: Shoshi Kikuchi (Research Unit). Database classification: Plant databases - Rice Microarray Data and other Gene Expression Databases. Organism: Oryza sativa (Taxonomy ID: 4530). Referenced databases: Rice Expression Database (RED), Rice full-length cDNA Database (KOME), Rice Genome Integrated Map Database (INE), Rice Mutant Panel Database (Tos17), Rice Genome Annotation Database.

  17. Standardization of XML Database Exchanges and the James Webb Space Telescope Experience

    Science.gov (United States)

    Gal-Edd, Jonathan; Detter, Ryan; Jones, Ron; Fatig, Curtis C.

    2007-01-01

    Personnel from the National Aeronautics and Space Administration (NASA) James Webb Space Telescope (JWST) Project have been working with various standards communities such as the Object Management Group (OMG) and the Consultative Committee for Space Data Systems (CCSDS) to assist in the definition of a common eXtensible Markup Language (XML) database exchange format. The CCSDS and OMG standards are intended for the exchange of core command and telemetry information, not for all the database information needed to operate a NASA space mission. The mission-specific database, containing all the information needed for a space mission, is translated from/to the standard using a translator. The standard is meant to provide a system that encompasses 90% of the information needed for command and telemetry processing. This paper discusses standardization of the XML database exchange format, the tools used, and the JWST experience, as well as future work with XML standards groups, both commercial and government.

  18. Governance of Interoperability in Intergovernmental Services - Towards an Empirical Taxonomy

    Directory of Open Access Journals (Sweden)

    Herbert Kubicek

    2008-12-01

    Full Text Available High-quality and convenient online delivery of governmental services often requires the seamless exchange of data between two or more government agencies. Smooth data exchange, in turn, requires interoperability of the databases and workflows in the agencies involved. Interoperability (IOP) is a complex issue covering purely technical aspects such as transmission protocols and data exchange formats, content-related semantic aspects such as identifiers and the meaning of codes, and organizational, contractual or legal issues. Starting from IOP frameworks which provide classifications of what has to be standardized, this paper, based on an ongoing research project, adopts a political and managerial view and tries to clarify the governance of achieving IOP, i.e. where and by whom IOP standards are developed and established and how they are put into operation. By analyzing 32 cases of successful implementation of IOP in E-Government services within the European Union, empirical indicators for different aspects of governance are proposed and applied to develop an empirical taxonomy of different types of IOP governance, which can be used for future comparative research regarding success factors, barriers, etc.

  19. Product information representation for feature conversion and implementation of group technology automated coding

    Science.gov (United States)

    Medland, A. J.; Zhu, Guowang; Gao, Jian; Sun, Jian

    1996-03-01

    Feature conversion, also called feature transformation or feature mapping, is defined as the process of converting features from one view of an object to another view of the object. In a relatively simple implementation, the design features are automatically converted, for each application, into features specific to that application; all modifications have to be made via the design features. This is the approach that has attracted the most attention until now. In the ideal situation, however, conversions directly from application views to the design view, and to other application views, are also possible. In this paper, some difficulties faced in feature conversion are discussed. A new representation scheme for feature-based part models has been proposed for the purpose of one-way feature conversion. The part models consist of five levels of abstraction: an assembly level and its attributes, single parts and their attributes, single features and their attributes, a level containing the geometric reference element, and finally one for detailed geometry. One implementation of feature conversion for rotational components within GT (Group Technology) has already been undertaken, using an automated coding procedure operating on a design-feature database. This database was generated by a feature-based design system, and the GT coding scheme used in this paper is a specific scheme created for a textile machine manufacturing plant. The feature conversion techniques presented here are only in their early stages of development, and further research is underway.
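Automated GT coding from a design-feature database can be sketched as a table-driven digit lookup: each design feature of a rotational part selects one digit of the code. The feature slots and digit tables below are invented for illustration and are not the plant-specific scheme used in the paper:

```python
# Invented digit tables: each slot of a rotational part's design-feature
# record contributes one digit of the GT code.
DIGIT_TABLES = {
    "external_shape": {"cylinder": "1", "stepped": "2", "threaded": "3"},
    "internal_shape": {"none": "0", "through_hole": "1", "blind_hole": "2"},
    "plane_machining": {"none": "0", "keyway": "1", "slot": "2"},
}
SLOT_ORDER = ("external_shape", "internal_shape", "plane_machining")

def gt_code(feature_record):
    """Derive a GT code string from a design-feature record by looking
    up each feature in its slot's digit table, in a fixed slot order."""
    return "".join(DIGIT_TABLES[slot][feature_record[slot]]
                   for slot in SLOT_ORDER)

part = {"external_shape": "stepped",
        "internal_shape": "through_hole",
        "plane_machining": "keyway"}
```

Because the lookup operates on the design-feature database directly, coding needs no manual interpretation of drawings, which is the point of automating GT coding from a feature-based design system.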

  20. Databases on biotechnology and biosafety of GMOs.

    Science.gov (United States)

    Degrassi, Giuliano; Alexandrova, Nevena; Ripandelli, Decio

    2003-01-01

    Due to the involvement of scientific, industrial, commercial and public sectors of society, the complexity of the issues concerning the safety of genetically modified organisms (GMOs) for the environment, agriculture, and human and animal health calls for a wide coverage of information. Accordingly, development of the field of biotechnology, along with concerns related to the fate of released GMOs, has led to a rapid development of tools for disseminating such information. As a result, there is a growing number of databases aimed at collecting and storing information related to GMOs. Most of the sites deal with information on environmental releases, field trials, transgenes and related sequences, regulations and legislation, risk assessment documents, and literature. Databases are mainly established and managed by scientific, national or international authorities, and are addressed towards scientists, government officials, policy makers, consumers, farmers, environmental groups and civil society representatives. This complexity can lead to an overlapping of information. The purpose of the present review is to analyse the relevant databases currently available on the web, providing comments on their vastly different information and on the structure of the sites pertaining to different users. A preliminary overview on the development of these sites during the last decade, at both the national and international level, is also provided.

  1. Recent updates and developments to plant genome size databases

    Science.gov (United States)

    Garcia, Sònia; Leitch, Ilia J.; Anadon-Rosell, Alba; Canela, Miguel Á.; Gálvez, Francisco; Garnatje, Teresa; Gras, Airy; Hidalgo, Oriane; Johnston, Emmeline; Mas de Xaxars, Gemma; Pellicer, Jaume; Siljak-Yakovlev, Sonja; Vallès, Joan; Vitales, Daniel; Bennett, Michael D.

    2014-01-01

    Two plant genome size databases have been recently updated and/or extended: the Plant DNA C-values database (http://data.kew.org/cvalues), and GSAD, the Genome Size in Asteraceae database (http://www.asteraceaegenomesize.com). While the first provides information on nuclear DNA contents across land plants and some algal groups, the second is focused on one of the largest and most economically important angiosperm families, Asteraceae. Genome size data have numerous applications: they can be used in comparative studies on genome evolution, or as a tool to appraise the cost of whole-genome sequencing programs. The growing interest in genome size and increasing rate of data accumulation has necessitated the continued update of these databases. Currently, the Plant DNA C-values database (Release 6.0, Dec. 2012) contains data for 8510 species, while GSAD has 1219 species (Release 2.0, June 2013), representing increases of 17 and 51%, respectively, in the number of species with genome size data, compared with previous releases. Here we provide overviews of the most recent releases of each database, and outline new features of GSAD. The latter include (i) a tool to visually compare genome size data between species, (ii) the option to export data and (iii) a webpage containing information about flow cytometry protocols. PMID:24288377

  2. Croatian Cadastre Database Modelling

    Directory of Open Access Journals (Sweden)

    Zvonko Biljecki

    2013-04-01

    Full Text Available The Cadastral Data Model has been developed as part of a larger programme to improve the products and production environment of the Croatian Cadastral Service of the State Geodetic Administration (SGA). The goal of the project was to create a cadastral data model conforming to the relevant standards and specifications in the field of geoinformation (GI) adopted by the international organisations for standardisation competent for GI (ISO/TC 211 and OpenGIS), and their implementations. The main guidelines during the project were object-oriented conceptual modelling of the updated user requirements and a "new" cadastral data model designed by the SGA - Faculty of Geodesy - Geofoto LLC project team. The UML of the conceptual model is given for all feature categories and is described only at class level. The next step was the UML technical model, which was developed from the UML conceptual model; the technical model integrates the different UML schemas into one unified schema. XML (eXtensible Markup Language) was applied for the XML description of the UML models, and the XML schema was then transferred into a GML (Geography Markup Language) application schema. With this procedure we have completely described the behaviour of each cadastral feature and the rules for the transfer and storage of cadastral features into the database.

  3. Nencki Genomics Database--Ensembl funcgen enhanced with intersections, user data and genome-wide TFBS motifs.

    Science.gov (United States)

    Krystkowiak, Izabella; Lenart, Jakub; Debski, Konrad; Kuterba, Piotr; Petas, Michal; Kaminska, Bozena; Dabrowski, Michal

    2013-01-01

    We present the Nencki Genomics Database, which extends the functionality of Ensembl Regulatory Build (funcgen) for the three species: human, mouse and rat. The key enhancements over Ensembl funcgen include the following: (i) a user can add private data, analyze them alongside the public data and manage access rights; (ii) inside the database, we provide efficient procedures for computing intersections between regulatory features and for mapping them to the genes. To Ensembl funcgen-derived data, which include data from ENCODE, we add information on conserved non-coding (putative regulatory) sequences, and on genome-wide occurrence of transcription factor binding site motifs from the current versions of two major motif libraries, namely, Jaspar and Transfac. The intersections and mapping to the genes are pre-computed for the public data, and the result of any procedure run on the data added by the users is stored back into the database, thus incrementally increasing the body of pre-computed data. As the Ensembl funcgen schema for the rat is currently not populated, our database is the first database of regulatory features for this frequently used laboratory animal. The database is accessible without registration using the mysql client: mysql -h database.nencki-genomics.org -u public. Registration is required only to add or access private data. A WSDL webservice provides access to the database from any SOAP client, including the Taverna Workbench with a graphical user interface.
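The interval-overlap computation behind such feature-to-gene intersections can be sketched in a few lines; the quadratic loop and the sample coordinates are illustrative only (the database itself presumably precomputes this with indexed SQL procedures):

```python
def intersect(features, genes):
    """Return (feature_id, gene_id) pairs whose [start, end) genomic
    intervals overlap -- the intersection-and-mapping step that the
    database precomputes server-side. Coordinates are invented."""
    pairs = []
    for fid, fs, fe in features:
        for gid, gs, ge in genes:
            if fs < ge and gs < fe:        # half-open interval overlap
                pairs.append((fid, gid))
    return pairs

# Toy regulatory features (e.g. ChIP-seq peaks) and gene bodies.
features = [("peak1", 100, 200), ("peak2", 500, 650)]
genes = [("geneA", 150, 400), ("geneB", 600, 900)]
overlaps = intersect(features, genes)
```

Precomputing and storing these pairs for the public data, and storing the result of each user-triggered run back into the database, is what lets repeated queries avoid recomputing the overlap test.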

  4. Evaluating e-Government and Good Governance Correlation

    Directory of Open Access Journals (Sweden)

    Suhardi Suhardi

    2016-03-01

    Full Text Available Assessing the achievement of Indonesian government institutions in implementing e-government has been conducted since around a decade ago. Several national assessments are available with almost the same ranking results. There is an agreement that the ultimate goal of e-government implementation is to achieve good government governance (GGG, while success stories of e-government require good governance practices. This study explored the correlation between e-government achievement and GGG achievement in Indonesia. Spearman’s rank correlation was used to characterize the relationship strength between e-government assessment results and good governance assessment results. The data were collected from institutions that participated in e-government and good governance assessments. The results showed that the correlation between these two entities is not very strong. Most cases showed that e-government implementation and the achievement of good governance have only a moderate positive correlation and none of the studied cases indicated a significant connection. This result can be attributed to the lack of emphasis on goals achievement in the assessments. Thus, it is recommended that future Indonesian e-government assessments should involve impact indicators.
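Spearman's rank correlation, the statistic used in the study, is simply the Pearson correlation of the two rank vectors. A stdlib-only sketch with invented assessment scores (the study's actual data are not reproduced here):

```python
def ranks(xs):
    """1-based average ranks; ties share the mean of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented e-government vs. good-governance scores for five institutions.
egov = [72, 55, 90, 60, 81]
ggg = [68, 50, 70, 66, 79]
rho = spearman(egov, ggg)
```

Because it works on ranks, Spearman's rho captures any monotone association between the two assessments without assuming the scores are on comparable linear scales.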

  5. THE IMPACT OF CORPORATE GOVERNANCE QUALITY ON COMPANIES

    Directory of Open Access Journals (Sweden)

    IONESCU ALIN

    2015-08-01

    Full Text Available Corporate governance is a current topic of considerable importance in the economic research of recent decades, all the more so as, in most developed and developing countries, companies listed on stock exchanges are required to adopt and implement several national and international recommendations regarding corporate practices. In recent years, given the maturity of the financial systems of developed countries, the attention of international organizations and researchers has focused especially on analyzing the corporate governance concept in developing countries. The main purpose of this paper is to estimate the impact of corporate governance quality on the performance of companies, based on data provided by the World Bank database (www.enterprisesurveys.org) for 82 developing countries around the world. In this regard, using principal components analysis, two synthetic indicators were constructed: one describing corporate governance quality and one describing company performance in the analyzed countries. In assessing the level of corporate governance quality, aspects considered relevant in the literature were taken into account, such as company type, innovation, corporate social responsibility, transparency and workforce quality, while corporate performance was defined and quantified in terms of annual real sales growth, labor productivity growth and capacity utilization. In this context, the impact of corporate governance quality on firm performance was tested using the generalized linear model framework, and the main result of the study is that, in the analyzed countries, the companies' performance index is significantly influenced by the corporate governance quality index.
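Building a composite index from several indicators via principal components can be sketched as power iteration on the correlation matrix of standardized indicators: the leading eigenvector supplies the indicator weights. The indicators and values below are invented, and the paper's exact PCA and GLM specification may differ:

```python
import math

def standardize(cols):
    """Z-score each indicator column (population standard deviation)."""
    out = []
    for col in cols:
        m = sum(col) / len(col)
        sd = math.sqrt(sum((x - m) ** 2 for x in col) / len(col)) or 1.0
        out.append([(x - m) / sd for x in col])
    return out

def first_pc_weights(cols, iters=200):
    """Leading eigenvector of the correlation matrix via power
    iteration; its entries are the indicator weights of the index."""
    k, n = len(cols), len(cols[0])
    corr = [[sum(a * b for a, b in zip(cols[i], cols[j])) / n
             for j in range(k)] for i in range(k)]
    w = [1.0] * k
    for _ in range(iters):
        w = [sum(corr[i][j] * w[j] for j in range(k)) for i in range(k)]
        norm = math.sqrt(sum(x * x for x in w))
        w = [x / norm for x in w]
    return w

def composite_index(cols):
    """Score each observation by its projection on the first PC."""
    z = standardize(cols)
    w = first_pc_weights(z)
    n = len(cols[0])
    return [sum(w[i] * z[i][t] for i in range(len(z))) for t in range(n)]

# Invented governance indicators for five countries.
transparency = [3.0, 5.0, 2.0, 4.0, 6.0]
csr          = [2.5, 4.5, 2.0, 3.5, 5.0]
index = composite_index([transparency, csr])
```

The resulting index could then serve as the explanatory variable in a GLM of the performance index, which is the role the governance quality index plays in the study.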

  6. Maintaining clinical governance when giving telephone advice.

    Science.gov (United States)

    Alazawi, William; Agarwal, Kosh; Suddle, Abid; Aluvihare, Varuna; Heneghan, Michael A

    2013-10-01

    Delivering excellent healthcare depends on accurate communication between professionals who may be in different locations. Frequently, the first point of contact with the liver unit at King's College Hospital (KCH) is through a telephone call to a specialist registrar or liver fellow, for whom no case notes are available in which to record information. The aim of this study was to improve the clinical governance of telephone referrals and to generate contemporaneous records that could be easily retrieved and audited. An electronic database for telephone referrals and advice was designed and made securely available to registrars in our unit. Service development in a tertiary liver centre that receives referrals from across the UK and Europe. Demographic and clinical data were recorded prospectively and analysed retrospectively. Data from 350 calls were entered during 5 months. The information included the nature and origin of the call (200 from 75 different institutions), disease burden and severity of disease among the patients discussed with KCH, and outcome of the call. The majority of cases were discussed with consultants or arrangements were made for formal review at KCH. A telephone referrals and advice database provides clinical governance, serves as a quality indicator and forms a contemporaneous record at the referral centre. Activity data and knowledge of disease burden help to tailor services to the needs of referrers and commissioners. We recommend implementation of similar models in other centres that give extramural verbal advice.

  7. Solvent Handbook Database System user's manual

    International Nuclear Information System (INIS)

    1993-03-01

    Industrial solvents and cleaners are used in maintenance facilities to remove wax, grease, oil, carbon, machining fluids, solder fluxes, mold release, and various other contaminants from parts, and to prepare the surfaces of various metals. However, because of growing environmental and worker-safety concerns, government regulations have already excluded the use of some chemicals and have restricted the use of halogenated hydrocarbons because they affect the ozone layer and may cause cancer. The Solvent Handbook Database System lets you view information on solvents and cleaners, including test results on cleaning performance, air emissions, recycling and recovery, corrosion, and non-metals compatibility. Company and product safety information is also available.

  8. Extracting Databases from Dark Data with DeepDive.

    Science.gov (United States)

    Zhang, Ce; Shin, Jaeho; Ré, Christopher; Cafarella, Michael; Niu, Feng

    2016-01-01

    DeepDive is a system for extracting relational databases from dark data: the mass of text, tables, and images that are widely collected and stored but which cannot be exploited by standard relational tools. If the information in dark data - scientific papers, Web classified ads, customer service notes, and so on - were instead in a relational database, it would give analysts a massive and valuable new set of "big data." DeepDive is distinctive when compared to previous information extraction systems in its ability to obtain very high precision and recall at reasonable engineering cost; in a number of applications, we have used DeepDive to create databases with accuracy that matches that of human annotators. To date we have successfully deployed DeepDive to create data-centric applications for insurance, materials science, genomics, paleontology, law enforcement, and others. The data unlocked by DeepDive represents a massive opportunity for industry, government, and scientific researchers. DeepDive is enabled by an unusual design that combines large-scale probabilistic inference with a novel developer interaction cycle. This design is enabled by several core innovations around probabilistic training and inference.

  9. The role of law in adaptive governance

    Science.gov (United States)

    Cosens, Barbara A.; Craig, Robin K.; Hirsch, Shana Lee; Arnold, Craig Anthony (Tony); Benson, Melinda H.; DeCaro, Daniel A.; Garmestani, Ahjond S.; Gosnell, Hannah; Ruhl, J.B.; Schlager, Edella

    2018-01-01

    The term “governance” encompasses both governmental and nongovernmental participation in collective choice and action. Law dictates the structure, boundaries, rules, and processes within which governmental action takes place, and in doing so becomes one of the focal points for analysis of barriers to adaptation as the effects of climate change are felt. Adaptive governance must therefore contemplate a level of flexibility and evolution in governmental action beyond that currently found in the heavily administrative governments of many democracies. Nevertheless, over time, law itself has proven highly adaptive in western systems of government, evolving to address and even facilitate the emergence of new social norms (such as the rights of women and minorities) or to provide remedies for emerging problems (such as pollution). Thus, there is no question that law can adapt, evolve, and be reformed to make room for adaptive governance. In doing this, not only may barriers be removed, but law may be adjusted to facilitate adaptive governance and to aid in institutionalizing new and emerging approaches to governance. The key is to do so in a way that also enhances legitimacy, accountability, and justice, or else such reforms will never be adopted by democratic societies, or if adopted, will destabilize those societies. By identifying those aspects of the frameworks for adaptive governance reviewed in the introduction to this special feature relevant to the legal system, we present guidelines for evaluating the role of law in environmental governance to identify the ways in which law can be used, adapted, and reformed to facilitate adaptive governance and to do so in a way that enhances the legitimacy of governmental action. PMID:29780426

  10. On Applicability of Tunable Filter Bank Based Feature for Ear Biometrics: A Study from Constrained to Unconstrained.

    Science.gov (United States)

    Chowdhury, Debbrota Paul; Bakshi, Sambit; Guo, Guodong; Sa, Pankaj Kumar

    2017-11-27

    In this paper, an overall framework is presented for person verification using ear biometrics, with a tunable filter bank as the local feature extractor. The tunable filter bank, based on a half-band polynomial of 14th order, extracts distinct features from ear images while maintaining its frequency selectivity property. To demonstrate the applicability of the tunable filter bank to ear biometrics, recognition tests have been performed on available constrained databases (AMI, WPUT, IITD) and the unconstrained UERC database. Experiments have been conducted applying the tunable-filter-based feature extractor to subparts of the ear, with four and six subdivisions of the ear image. Analysis of the experimental results shows that the tunable filter distinguishes ear features on par with the state-of-the-art features used for ear recognition. Accuracies of 70.58%, 67.01%, 81.98%, and 57.75% have been achieved on the AMI, WPUT, IITD, and UERC databases, respectively, using the Canberra distance as the underlying measure of separation. These performances indicate that the tunable filter is a candidate for recognizing humans from ear images.
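The reported accuracies use the Canberra distance as the separation measure. A minimal sketch of Canberra-distance matching, with an illustrative `verify` wrapper and threshold that are not part of the paper's actual pipeline, might look like:

```python
import numpy as np

def canberra(u, v):
    """Canberra distance between two feature vectors; coordinate pairs
    with a zero denominator contribute nothing (the usual convention)."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    denom = np.abs(u) + np.abs(v)
    mask = denom > 0
    return float(np.sum(np.abs(u - v)[mask] / denom[mask]))

def verify(probe, enrolled, threshold):
    """Accept the claimed identity if the probe feature vector is close
    enough to the enrolled template (threshold is illustrative)."""
    return canberra(probe, enrolled) < threshold
```

Canberra weights each coordinate's difference by the pair's magnitude, which makes it sensitive to small features near zero — one reason it is a common choice for comparing normalized filter-bank responses.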

  11. Database Description - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name SAHG Alternative nam...h: Contact address Chie Motono Tel : +81-3-3599-8067 E-mail : Database classification Structure Databases - ...e databases - Protein properties Organism Taxonomy Name: Homo sapiens Taxonomy ID: 9606 Database description... Links: Original website information Database maintenance site The Molecular Profiling Research Center for D...stration Not available About This Database Database Description Download License Update History of This Database Site Policy | Contact Us Database Description - SAHG | LSDB Archive ...

  12. Event-governed and verbally-governed behavior.

    Science.gov (United States)

    Vargas, E A

    1988-01-01

    A number of statements prescribe behavior: apothegms, maxims, proverbs, instructions, and so on. These differing guides to conduct present varieties of the dictionary definition of "rules." The term "rules" thus defines a category of language usage. Such a term, and its derivative, "rule-governed," does not address a controlling relation in the analysis of verbal behavior. The prevailing confounding of a category of language with a category of verbal behavior appears related to a lack of understanding of what distinguishes verbal behavior from other behavior. Verbal behavior is a behavior-behavior relation in which events are contacted through the mediation of another organism's behavior, specifically shaped for such mediation by a verbal community. It contrasts with behavior that contacts events directly and is shaped directly by the features of those events. Thus we may distinguish two large classes of behavior according to whether behavior is controlled by events or controlled verbally. However, the functional controls operative in the two classes do not differ.

  13. URS DataBase: universe of RNA structures and their motifs.

    Science.gov (United States)

    Baulin, Eugene; Yacovlev, Victor; Khachko, Denis; Spirin, Sergei; Roytberg, Mikhail

    2016-01-01

    The Universe of RNA Structures DataBase (URSDB) stores information obtained from all RNA-containing PDB entries (2935 entries in October 2015). The content of the database is updated regularly. The database consists of 51 tables containing indexed data on various elements of the RNA structures. The database provides a web interface allowing the user to select a subset of structures with desired features and to obtain various statistical data for the selected subset or for all structures. In particular, one can easily obtain statistics on geometric parameters of base pairs, on structural motifs (stems, loops, etc.) or on different types of pseudoknots. The user can also view and get information on an individual structure or its selected parts, e.g. RNA-protein hydrogen bonds. URSDB employs a new, original definition of loops in RNA structures. That definition fits both pseudoknot-free and pseudoknotted secondary structures and coincides with the classical definition in the case of pseudoknot-free structures. To our knowledge, URSDB is the first database supporting searches based on the topological classification of pseudoknots and on an extended loop classification. Database URL: http://server3.lpm.org.ru/urs/. © The Author(s) 2016. Published by Oxford University Press.

  14. Fiscal 1998 research report. Construction model project of the human sensory database; 1998 nendo ningen kankaku database kochiku model jigyo seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    This report summarizes the fiscal 1998 research results on construction of the human sensory database. The human sensory database for evaluating working environments was constructed on the basis of measurements of human sensory data (stress and fatigue) from 400 examinees in the field (transport, control room and office) and in a laboratory. Using the newly developed standard measurement protocol for evaluating summer clothing (shirt, slacks and underwear), a database was constructed from the evaluation experiment results and the comparative experiment results on the human physiological and sensory data of aged and young people. The database features easy retrieval of the relevant information corresponding to task requirements and use purposes. To evaluate the mass data with large time variation, read for each scene according to the use purpose, a data detection support technique was adopted that attends to physical and psychological variable phases and mind and body events. The meaning of a reaction and hints for necessary measures are shown for every phase and event. (NEDO)

  15. Economising subsidies for green housing features: A stated preference approach

    Directory of Open Access Journals (Sweden)

    Yung Yau

    2014-12-01

    Full Text Available In light of the enormous amounts of energy and resources consumed by housing development and operations, many governments have started recognising the urgent need to promote green or eco-friendly housing with the aim of achieving sustainable development. Apart from regulations, governments can offer incentives to developers to provide green features in their developments by offering subsidies in various forms. However, such subsidisation is often uneconomical. In theory, market forces can lead to green housing provision without any government intervention if market players are willing to pay extra for the green features of housing. Against this background, this article presents the findings of a study that compared potential homebuyers' willingness to pay (WTP) for various green housing features, based on a structured questionnaire survey in Macau. The housing attributes under investigation included the use of green materials (e.g., sustainable forest products) and construction methods (e.g., prefabrication), energy-efficient technologies (e.g., LED lighting) and water-saving devices (e.g., grey-water recycling systems). Results indicate that the respondents' WTP was mainly motivated by economic incentives: green housing attributes that can offer direct financial benefits corresponded to greater WTP. The policy implications of the research findings then follow.

  16. An Object-Relational Ifc Storage Model Based on Oracle Database

    Science.gov (United States)

    Li, Hang; Liu, Hua; Liu, Yong; Wang, Yuan

    2016-06-01

    As building models become increasingly complicated, the level of collaboration across professionals is attracting more attention in the architecture, engineering and construction (AEC) industry. To support this change, buildingSMART developed the Industry Foundation Classes (IFC) to facilitate interoperability between software platforms. However, IFC data are currently shared as text files, which has significant drawbacks. In this paper, considering the object-based inheritance hierarchy of IFC and the storage features of different database management systems (DBMS), we propose a novel object-relational storage model that uses an Oracle database to store IFC data. First, we establish mapping rules between the data types in the IFC specification and the Oracle database. Second, we design the IFC database according to the relationships among IFC entities. Third, we parse the IFC file and extract the IFC data. Lastly, we store the IFC data in the corresponding tables of the IFC database. In our experiments, three different building models are used to demonstrate the effectiveness of the storage model. A comparison of the experimental statistics shows that IFC data are preserved without loss during data exchange.
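The abstract's mapping idea — one relational table per IFC entity type, populated from parsed STEP-file instances — can be sketched as follows, using SQLite as a stand-in for Oracle; the table, columns and sample instance line are illustrative, not the paper's schema:

```python
import sqlite3

# One table per IFC entity type, keyed by the instance id ("#12") from
# the STEP file. SQLite stands in for Oracle in this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE IfcWall (
    step_id   INTEGER PRIMARY KEY,  -- instance number from the STEP file
    global_id TEXT NOT NULL,        -- the entity's GlobalId attribute
    name      TEXT)""")

# A parsed STEP line such as
#   #12=IFCWALL('2O2Fr$t4X7Zf8NOew3FLOH',$,'Wall-1',...);
# would yield one row in the IfcWall table:
conn.execute("INSERT INTO IfcWall VALUES (12, '2O2Fr$t4X7Zf8NOew3FLOH', 'Wall-1')")

row = conn.execute("SELECT name FROM IfcWall WHERE step_id = 12").fetchone()
```

Keeping the STEP instance number as the primary key preserves the references between entities (`#12` cited by other instances) when the rest of the file is loaded.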

  17. The National Landslide Database of Great Britain: Acquisition, communication and the role of social media

    Science.gov (United States)

    Pennington, Catherine; Freeborough, Katy; Dashwood, Claire; Dijkstra, Tom; Lawrie, Kenneth

    2015-11-01

    The British Geological Survey (BGS) is the national geological agency for Great Britain that provides geoscientific information to government, other institutions and the public. The National Landslide Database has been developed by the BGS and is the focus for national geohazard research for landslides in Great Britain. The history and structure of the geospatial database and associated Geographical Information System (GIS) are explained, along with the future developments of the database and its applications. The database is the most extensive source of information on landslides in Great Britain with over 17,000 records of landslide events to date, each documented as fully as possible for inland, coastal and artificial slopes. Data are gathered through a range of procedures, including: incorporation of other databases; automated trawling of current and historical scientific literature and media reports; new field- and desk-based mapping technologies with digital data capture, and using citizen science through social media and other online resources. This information is invaluable for directing the investigation, prevention and mitigation of areas of unstable ground in accordance with Government planning policy guidelines. The national landslide susceptibility map (GeoSure) and a national landslide domains map currently under development, as well as regional mapping campaigns, rely heavily on the information contained within the landslide database. Assessing susceptibility to landsliding requires knowledge of the distribution of failures, an understanding of causative factors, their spatial distribution and likely impacts, whilst understanding the frequency and types of landsliding present is integral to modelling how rainfall will influence the stability of a region. 
Communication of landslide data through the Natural Hazard Partnership (NHP) and Hazard Impact Model contributes to national hazard mitigation and disaster risk reduction with respect to weather and

  18. Database Description - PSCDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name PSCDB Alternative n...rial Science and Technology (AIST) Takayuki Amemiya E-mail: Database classification Structure Databases - Protein structure Database...554-D558. External Links: Original website information Database maintenance site Graduate School of Informat...available URL of Web services - Need for user registration Not available About This Database Database Descri...ption Download License Update History of This Database Site Policy | Contact Us Database Description - PSCDB | LSDB Archive ...

  19. An Oracle(c) database for the AMS experiment

    International Nuclear Information System (INIS)

    Boschini, M.; Gervasi, M.; Grandi, D.; Rancoita, P.G.; Trombetta, L.; Usoskin, I.G.

    1999-01-01

    We present hardware and software technologies implemented for the AMS Milano Data Center. Goal of the AMS Milano Data Center is to provide data collected during the STS-91 Space Shuttle flight to users and to provide a User Interface as well to manage the data properly. Data are stored in a database that provides high level query and retrieval features, the support being a magneto-optical juke-box. We describe the use of proprietary software (Oracle(c)) as well as custom-written software to enhance access performances. In particular we underscore the use of the Oracle Call Interfaces as a powerful tool to interface the database and the operating system in a natural way

  20. Database Description - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name ASTRA Alternative n...tics Journal Search: Contact address Database classification Nucleotide Sequence Databases - Gene structure,...3702 Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description The database represents classified p...(10):1211-6. External Links: Original website information Database maintenance site National Institute of Ad... for user registration Not available About This Database Database Description Dow

  1. Volcanoes of the World: Reconfiguring a scientific database to meet new goals and expectations

    Science.gov (United States)

    Venzke, Edward; Andrews, Ben; Cottrell, Elizabeth

    2015-04-01

    The Smithsonian Global Volcanism Program's (GVP) database of Holocene volcanoes and eruptions, Volcanoes of the World (VOTW), originated in 1971 and was largely populated with content from the IAVCEI Catalog of Active Volcanoes of the World and some independent datasets. Volcanic activity reported by Smithsonian's Bulletin of the Global Volcanism Network and USGS/SI Weekly Activity Reports (and their predecessors), published research, and other varied sources has expanded the database significantly over the years. Three editions of the VOTW were published in book form, creating a catalog with new ways to display data that included regional directories, a gazetteer, and a 10,000-year chronology of eruptions. The widespread dissemination of the data in electronic media since the first GVP website in 1995 has created new challenges and opportunities for this unique collection of information. To better meet current and future goals and expectations, we have recently transitioned VOTW into a SQL Server database. This process included significant schema changes to the previous relational database, data auditing, and content review. We replaced a disparate, confusing, and changeable volcano numbering system with unique and permanent volcano numbers. We reconfigured the structures for recording eruption data to allow greater flexibility in describing the complexity of observed activity, adding the ability to distinguish episodes within eruptions (in time and space) and events (including dates) rather than characteristics that take place during an episode. We have added a reference link field in multiple tables to enable attribution of sources at finer levels of detail.
We now store and connect synonyms and feature names in a more consistent manner, which will allow for morphological features to be given unique numbers and linked to specific eruptions or samples; if the designated overall volcano name is also a morphological feature, it is then also listed and described as

  2. GPCALMA: A Tool For Mammography With A GRID-Connected Distributed Database

    International Nuclear Information System (INIS)

    Bottigli, U.; Golosio, B.; Masala, G.L.; Oliva, P.; Stumbo, S.; Cerello, P.; Cheran, S.; Delogu, P.; Fantacci, M.E.; Retico, A.; Fauci, F.; Magro, R.; Raso, G.; Lauria, A.; Palmiero, R.; Lopez Torres, E.; Tangaro, S.

    2003-01-01

    The GPCALMA (Grid Platform for Computer Assisted Library for MAmmography) collaboration involves several physics departments, INFN (National Institute of Nuclear Physics) sections, and Italian hospitals. The aim of this collaboration is to develop a tool that can help radiologists in the early detection of breast cancer. GPCALMA has built a large distributed database of digitised mammographic images (about 5500 images corresponding to 1650 patients) and developed CAD (Computer Aided Detection) software integrated in a station that can also be used to acquire new images, serve as an archive and perform statistical analyses. The images (18x24 cm2, digitised by a CCD linear scanner with an 85 μm pitch and 4096 gray levels) are completely described: pathological images carry a characterisation consistent with the radiologist's diagnosis and histological data, while non-pathological images correspond to patients with a follow-up of at least three years. The distributed database is realised by connecting all the hospitals and research centres with GRID technology. In each hospital, the digital images of local patients are stored in the local database. Using the GRID connection, GPCALMA will allow each node to work on distributed database data as well as local database data. Using its database, the GPCALMA tools perform several analyses. Texture analysis, i.e. automated classification into adipose, dense or glandular texture, can be provided by the system. The GPCALMA software also allows classification of pathological features, in particular analysis of massive lesions (both opacities and spiculated lesions) and of microcalcification clusters. The detection of pathological features is performed by neural network software that selects areas showing a given 'suspicion level' of lesion occurrence. The performance of the GPCALMA system will be presented in terms of ROC (Receiver Operating Characteristic) curves. The results of the GPCALMA system as 'second reader' will also
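The system's performance is reported as ROC curves. A generic sketch of how such a curve is computed from CAD suspicion scores — not the GPCALMA code itself — is:

```python
def roc_points(scores, labels):
    """Return (FPR, TPR) pairs obtained by sweeping a decision threshold
    over the suspicion scores, from most to least suspicious."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)            # number of true lesions
    neg = len(labels) - pos      # number of healthy cases
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points
```

The area under the resulting curve is the usual single-number summary for comparing a CAD "second reader" against human performance.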

  3. Mission critical database for SPS accelerator measurements

    CERN Document Server

    Billen, R; Laugier, I; Reguero, I; Segura, N

    1995-01-01

    In order to maintain efficient control over the hadron and lepton beams in CERN's SPS accelerator, measurements are of vital importance. Beam parameters such as intensities, positions and losses need to be rapidly available in the SPS control room to allow the operators to monitor, judge and act on beam physics conditions. For the 1994 SPS startup, a completely new and redesigned measurement system based on client and server C programs running on UNIX workstations was introduced. The kernel of this new measurement system is an on-line ORACLE database. The NIAM method was used for the database design, along with a technique that tags synchronized data with timeslots instead of timestamps. Great attention was paid to proper storage allocation for tables and indices, since this has a major impact on the efficiency of the database given its time-critical nature. Many new features of Oracle7 were exploited to reduce the surrounding software. During the 1994 SPS physics run, this new measurement system was commission...
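The technique of tagging synchronized data with timeslots rather than raw timestamps can be sketched as a simple quantization, so that all measurements taken within the same machine cycle share one key; the slot length below is an arbitrary illustrative value, not the actual SPS cycle time:

```python
def to_timeslot(timestamp_s, slot_length_s=1.2):
    """Map a raw acquisition timestamp (seconds) to a discrete slot
    index. Measurements that belong to the same slot get the same key,
    which makes joining synchronized readings a plain equality join."""
    return int(timestamp_s // slot_length_s)
```

With timestamps, two readings of the same beam cycle rarely compare equal; with slot indices they do, which simplifies both indexing and correlation queries.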

  4. Frameworks to assess health systems governance: a systematic review.

    Science.gov (United States)

    Pyone, Thidar; Smith, Helen; van den Broek, Nynke

    2017-06-01

    Governance of the health system is a relatively new concept and there are gaps in understanding what health system governance is and how it could be assessed. We conducted a systematic review of the literature to describe the concept of governance and the theories underpinning it as applied to health systems, and to identify which frameworks are available and have been applied to assess health systems governance. Frameworks were reviewed to understand how the principles of governance might be operationalized at different levels of a health system. Electronic databases and web portals of international institutions concerned with governance were searched for publications in English for the period January 1994 to February 2016. Sixteen frameworks developed to assess governance in the health system were identified and are described. Of these, six frameworks were developed based on theories from new institutional economics; three are primarily informed by political science and public management disciplines; three arise from the development literature and four use multidisciplinary approaches. Only five of the identified frameworks have been applied. These used the principal-agent theory, the theory of common pool resources, North's institutional analysis and cybernetics theory. Governance is a practice, dependent on arrangements set at the political or national level, but it needs to be operationalized by individuals at lower levels in the health system; multi-level frameworks acknowledge this. Three frameworks were used to assess governance at all levels of the health system. Health system governance is complex and difficult to assess; the concept of governance originates from different disciplines and is multidimensional. There is a need to validate and apply existing frameworks and share lessons learnt regarding which frameworks work well in which settings. 
A comprehensive assessment of governance could enable policy makers to prioritize solutions for problems identified

  5. MEIMAN: Database exploring Medicinal and Edible insects of Manipur.

    Science.gov (United States)

    Shantibala, Tourangbam; Lokeshwari, Rajkumari; Thingnam, Gourshyam; Somkuwar, Bharat Gopalrao

    2012-01-01

    We have developed MEIMAN, a unique database on the medicinal and edible insects of Manipur, comprising 51 insect species documented through two years of extensive surveys and questionnaires. MEIMAN provides integrated access to insect species through a sophisticated web interface with the following capabilities: a) graphical interface of seasonality, b) method of preparation, c) form of use (edible and medicinal), d) habitat, e) medicinal uses, f) commercial importance and g) economic status. This database will be useful for scientific validation and the updating of traditional wisdom in bioprospecting. It will be useful in analyzing insect biodiversity for the development of virgin resources and their industrialization. Further, these features suit detailed investigation of potential medicinal and edible insects, making MEIMAN a powerful tool for sustainable management. The database is available for free at www.ibsd.gov.in/meiman.

  6. Database Description - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ase Description General information of database Database name RPD Alternative name Rice Proteome Database...titute of Crop Science, National Agriculture and Food Research Organization Setsuko Komatsu E-mail: Database... classification Proteomics Resources Plant databases - Rice Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database... description Rice Proteome Database contains information on protei...and entered in the Rice Proteome Database. The database is searchable by keyword,

  7. Database Description - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name PLACE Alternative name A Database...Kannondai, Tsukuba, Ibaraki 305-8602, Japan National Institute of Agrobiological Sciences E-mail : Databas...e classification Plant databases Organism Taxonomy Name: Tracheophyta Taxonomy ID: 58023 Database...99, Vol.27, No.1 :297-300 External Links: Original website information Database maintenance site National In...- Need for user registration Not available About This Database Database Descripti

  8. THE GLOBAL GOVERNANCE PROBLEM AND THE ROLE OF THE INTERNATIONAL MONETARY FUND

    Directory of Open Access Journals (Sweden)

    Sidorova E. A.

    2015-06-01

    Full Text Available Currently, globalization is permeating more and more areas of human activity, which makes the formation of complex mechanisms and principles of global governance an important question. The article analyzes the essence, subjects and mechanisms of implementation of global economic governance. Moreover, it investigates the role and current state of the International Monetary Fund (IMF) in the global economy. In conclusion, it clarifies the relationship between the IMF and the processes of global governance. The research has shown that it is necessary to create within the IMF a more representative, economically and politically balanced system of global governance of world monetary and financial relations, as part of the emerging mechanisms of global economic governance. This article extends knowledge about the role of the IMF in the forming system of global governance.

  9. A Feature Subtraction Method for Image Based Kinship Verification under Uncontrolled Environments

    DEFF Research Database (Denmark)

    Duan, Xiaodong; Tan, Zheng-Hua

    2015-01-01

    The most fundamental problem of local feature based kinship verification methods is that a local feature can capture the variations of environmental conditions and the differences between two persons having a kin relation, which can significantly decrease the performance. To address this problem...... the feature distance between face image pairs with kinship and maximize the distance between non-kinship pairs. Based on the subtracted feature, the verification is realized through a simple Gaussian based distance comparison method. Experiments on two public databases show that the feature subtraction method...
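The subtracted-feature idea with a Gaussian-based comparison can be sketched as scoring a pair by how well its feature difference fits a Gaussian fitted to known kin pairs; `mu` and `sigma` here are illustrative stand-ins for the learned parameters, and this is a simplification of, not a reproduction of, the paper's method:

```python
import numpy as np

def gaussian_kin_score(f1, f2, mu=0.0, sigma=1.0):
    """Score a face-image pair by how well the subtracted feature
    f1 - f2 fits an isotropic Gaussian (mu, sigma) fitted to known kin
    pairs. Higher (less negative) scores mean a more kin-like pair."""
    d = np.asarray(f1, dtype=float) - np.asarray(f2, dtype=float)
    z = (d - mu) / sigma
    # Log-likelihood under the Gaussian, up to an additive constant
    return float(-0.5 * np.sum(z * z))
```

A verification decision then reduces to thresholding this score: pairs whose subtracted feature lies near the kin-pair mean are accepted.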

  10. Annotation-based feature extraction from sets of SBML models.

    Science.gov (United States)

    Alm, Rebekka; Waltemath, Dagmar; Wolfien, Markus; Wolkenhauer, Olaf; Henkel, Ron

    2015-01-01

    Model repositories such as BioModels Database provide computational models of biological systems for the scientific community. These models contain rich semantic annotations that link model entities to concepts in well-established bio-ontologies such as Gene Ontology. Consequently, thematically similar models are likely to share similar annotations. Based on this assumption, we argue that semantic annotations are a suitable tool to characterize sets of models. These characteristics improve model classification, allow the identification of additional features for model retrieval tasks, and enable the comparison of sets of models. In this paper we discuss four methods for annotation-based feature extraction from model sets. We tested all methods on sets of models in SBML format which were composed from BioModels Database. To characterize each of these sets, we analyzed and extracted concepts from three frequently used ontologies, namely Gene Ontology, ChEBI and SBO. We find that three of the four methods are suitable for determining characteristic features of arbitrary sets of models: the selected features vary depending on the underlying model set, and they are specific to the chosen model set. We show that the identified features map to concepts that are higher up in the hierarchy of the ontologies than the concepts used for model annotations. Our analysis also reveals that the information content of concepts in ontologies and their usage for model annotation do not correlate. Annotation-based feature extraction enables the comparison of model sets, as opposed to existing methods for model-to-keyword comparison or model-to-model comparison.
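The general idea of characterizing a model set by its shared annotations can be sketched as a term-enrichment computation. This is an illustrative sketch only, not one of the paper's four methods: the term labels are toy placeholders, and the frequency-ratio scoring is an assumption.

```python
from collections import Counter

def characteristic_terms(model_set, background, top_n=3):
    """Rank annotation terms by how over-represented they are in a model
    set relative to a background collection (plain frequency ratio)."""
    set_counts = Counter(t for model in model_set for t in model)
    bg_counts = Counter(t for model in background for t in model)
    n_set = sum(set_counts.values())
    n_bg = sum(bg_counts.values())
    scores = {
        # terms absent from the background get a pseudocount of 1
        term: (count / n_set) / (bg_counts.get(term, 1) / n_bg)
        for term, count in set_counts.items()
    }
    ranked = sorted(scores, key=lambda t: (-scores[t], t))
    return ranked[:top_n]

# Toy model sets: each model is represented by its set of ontology-term labels.
glycolysis_models = [{"GO:A", "GO:B"}, {"GO:A"}]
all_models = glycolysis_models + [{"GO:C", "GO:B"}, {"GO:C"}]
```

In practice the term labels would be GO, ChEBI, or SBO identifiers harvested from the RDF annotations inside the SBML files.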

  11. Database Description - JSNP | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name JSNP Alternative nam...n Science and Technology Agency Creator Affiliation: Contact address E-mail : Database...sapiens Taxonomy ID: 9606 Database description A database of about 197,000 polymorphisms in Japanese populat...1):605-610 External Links: Original website information Database maintenance site Institute of Medical Scien...er registration Not available About This Database Database Description Download License Update History of This Database

  12. Database Description - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ase Description General information of database Database name RED Alternative name Rice Expression Database...enome Research Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice Database classifi...cation Microarray, Gene Expression Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database descripti... Article title: Rice Expression Database: the gateway to rice functional genomics...nt Science (2002) Dec 7 (12):563-564 External Links: Original website information Database maintenance site

  13. [Discussion of the implementation of MIMIC database in emergency medical study].

    Science.gov (United States)

    Li, Kaiyuan; Feng, Cong; Jia, Lijing; Chen, Li; Pan, Fei; Li, Tanshi

    2018-05-01

    We introduce the Medical Information Mart for Intensive Care (MIMIC) database and, based on the features of MIMIC and on updated studies both domestic and overseas, put forward the feasibility and necessity of introducing medical big data into emergency research. We then discuss the role of the MIMIC database in emergency clinical studies, as well as the principles and key points of experimental design and implementation under medical big data circumstances. The implementation of the MIMIC database in emergency medical research provides a brand-new field for the early diagnosis, risk warning and prognosis of critical illness; however, there are also limitations. To meet the era of big data, an emergency medical database in accordance with our national conditions is needed, which will provide new energy for the development of emergency medicine.

  14. Database Description - ConfC | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name ConfC Alternative name Database...amotsu Noguchi Tel: 042-495-8736 E-mail: Database classification Structure Database...s - Protein structure Structure Databases - Small molecules Structure Databases - Nucleic acid structure Database... services - Need for user registration - About This Database Database Description Download License Update History of This Database... Site Policy | Contact Us Database Description - ConfC | LSDB Archive ...

  15. Modern trends in international researches in the sphere of electronic governance (in the case of publications of the international journal Electronic Government

    Directory of Open Access Journals (Sweden)

    V. M. Dreshpak

    2017-08-01

    Full Text Available Current trends in the study of electronic government by the world scientific community are revealed in this article through an analysis of publications in the international journal Electronic Government. A peculiarity of modern research in this sphere is that electronic government changes constantly and dynamically under the influence of many factors, modernizing together with the development of information and communication technologies and social relations. This calls for a greater degree of integration of Ukrainian public administration research with the global scientific context, more active introduction of foreign research materials on electronic government into Ukrainian scientific use, and study of foreign approaches to publication in the field's scientific periodicals. The survey was conducted on the basis of Electronic Government, an International Journal, which has been published since 2004 in the UK, is indexed in the international scientometric database Scopus, and publishes materials in the sphere of «Public administration». The key topics of the journal concern current practice and studies of various aspects of electronic government in different countries. In particular, the analyzed publications of 2015 - 2017 provide a broad picture of the state of e-government in the world and reveal the specific problems of different states according to the level of development of their e-government capacities. The authors of these articles are scientists from 24 countries who have studied the problems of electronic government in 14 states as well as global problems of electronic government. For example, the journal focuses on issues related to the technological, social and humanitarian components of the functioning and development of electronic governance, issues of methodology and methods of implementation of

  16. Personal Leadership Development within Master of Public Governance

    DEFF Research Database (Denmark)

    Meier, Frank; Tangkjær, Christian

    2013-01-01

    The purpose of this study is to explore how managerial agency is constructed through three relational strategies: i. between self and institutional context, ii. between self and social context, and iii. between self and oneself. The empirical source is a database of assignments by some 270 students......, participating in a one-year Personal Leadership Development course within a Master of Public Governance 2009 – 2012. The context of the study is the accelerated changes in the Danish Public Sector, and how these changes impact managers and their organisations under dominant management discourses, New Public...... Management and New Public Governance etc. The empirical analysis – initiated in this paper – explores whether a Žižekian approach can make sense of the managers' ‘fantastic’ reliance on leadership and management tools and concepts to complete the (likewise) fantastic promises of organisational change brought......

  17. A feature dictionary supporting a multi-domain medical knowledge base.

    Science.gov (United States)

    Naeymi-Rad, F

    1989-01-01

    Because different terminology is used by physicians of different specialties in different locations to refer to the same feature (signs, symptoms, test results), it is essential that our knowledge development tools provide a means to access a common pool of terms. This paper discusses the design of an online medical dictionary that provides a solution to this problem for developers of multi-domain knowledge bases for MEDAS (Medical Emergency Decision Assistance System). Our Feature Dictionary supports phrase equivalents for features, feature interactions, feature classifications, and translations to the binary features generated by the expert during knowledge creation. It is also used in the conversion of a domain knowledge to the database used by the MEDAS inference diagnostic sessions. The Feature Dictionary also provides capabilities for complex queries across multiple domains using the supported relations. The Feature Dictionary supports three methods for feature representation: (1) for binary features, (2) for continuous valued features, and (3) for derived features.
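A feature dictionary of the kind described above (phrase equivalents resolved to canonical features, each with one of the three representation methods) can be sketched as a small lookup structure. All feature names, phrases, and fields here are hypothetical illustrations, not the actual MEDAS dictionary schema.

```python
# Hypothetical records: canonical feature name -> phrase equivalents plus
# one of the three representation methods described above.
FEATURES = {
    "fever":       {"phrases": {"fever", "pyrexia", "elevated temperature"},
                    "kind": "binary"},
    "heart_rate":  {"phrases": {"heart rate", "pulse"},
                    "kind": "continuous"},
    "shock_index": {"phrases": {"shock index"},
                    "kind": "derived"},   # computed from other features
}

def canonical(phrase):
    """Resolve a specialty- or site-specific phrase to its canonical feature."""
    phrase = phrase.strip().lower()
    for name, record in FEATURES.items():
        if phrase in record["phrases"]:
            return name
    return None

def representation(name):
    """Return the representation method (binary/continuous/derived) of a feature."""
    return FEATURES[name]["kind"] if name in FEATURES else None
```

A query across domains would then operate on canonical names rather than on the free-text phrases each specialty uses.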

  18. An object-oriented language-database integration model: The composition filters approach

    NARCIS (Netherlands)

    Aksit, Mehmet; Bergmans, Lodewijk; Vural, Sinan; Vural, S.

    1991-01-01

    This paper introduces a new model, based on so-called object-composition filters, that uniformly integrates database-like features into an object-oriented language. The focus is on providing persistent dynamic data structures, data sharing, transactions, multiple views and associative access,

  19. An Object-Oriented Language-Database Integration Model: The Composition-Filters Approach

    NARCIS (Netherlands)

    Aksit, Mehmet; Bergmans, Lodewijk; Vural, S.; Vural, Sinan; Lehrmann Madsen, O.

    1992-01-01

    This paper introduces a new model, based on so-called object-composition filters, that uniformly integrates database-like features into an object-oriented language. The focus is on providing persistent dynamic data structures, data sharing, transactions, multiple views and associative access,

  20. Database management systems understanding and applying database technology

    CERN Document Server

    Gorman, Michael M

    1991-01-01

    Database Management Systems: Understanding and Applying Database Technology focuses on the processes, methodologies, techniques, and approaches involved in database management systems (DBMSs). The book first takes a look at ANSI database standards and DBMS applications and components. Discussions focus on application components and DBMS components, implementing the dynamic relationship application, problems and benefits of dynamic relationship DBMSs, the nature of a dynamic relationship application, ANSI/NDL, and DBMS standards. The manuscript then ponders on logical database, interrogation, and phy

  1. The LAILAPS Search Engine: Relevance Ranking in Life Science Databases

    Directory of Open Access Journals (Sweden)

    Lange Matthias

    2010-06-01

    Full Text Available Search engines and retrieval systems are popular tools on the life science desktop. The manual inspection of hundreds of database entries that reflect a life science concept or fact is time-intensive daily work; here it is not the number of query results that matters, but their relevance. In this paper, we present the LAILAPS search engine for life science databases. The concept is to combine a novel feature model for relevance ranking, a machine learning approach to model user relevance profiles, ranking improvement by user feedback tracking, and an intuitive and slim web user interface that estimates relevance rank by tracking user interactions. Queries are formulated as simple keyword lists and are expanded with synonyms. Supporting a flexible text index and a simple data import format, LAILAPS can easily be used both as a search engine for comprehensive integrated life science databases and for small in-house project databases.
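The query-expansion step described above (keyword lists expanded by synonyms before ranking) can be sketched as follows. The synonym table and the match-count scoring are illustrative assumptions, not LAILAPS's actual feature model.

```python
# Toy synonym table; a real system would derive synonyms from its databases.
SYNONYMS = {"gene": {"gene", "locus"}, "barley": {"barley", "hordeum"}}

def expand(keywords):
    """Expand a keyword list with known synonyms (lower-cased)."""
    terms = set()
    for kw in keywords:
        terms |= SYNONYMS.get(kw.lower(), {kw.lower()})
    return terms

def rank(entries, keywords):
    """Return entries ordered by how many expanded query terms they contain,
    dropping entries that match no term at all."""
    terms = expand(keywords)
    scored = [(sum(t in e.lower() for t in terms), e) for e in entries]
    return [e for score, e in sorted(scored, key=lambda p: -p[0]) if score > 0]
```

The expansion lets a query for "barley gene" retrieve entries that only mention "Hordeum" or "locus".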

  2. Design and implementation of the NPOI database and website

    Science.gov (United States)

    Newman, K.; Jorgensen, A. M.; Landavazo, M.; Sun, B.; Hutter, D. J.; Armstrong, J. T.; Mozurkewich, David; Elias, N.; van Belle, G. T.; Schmitt, H. R.; Baines, E. K.

    2014-07-01

    The Navy Precision Optical Interferometer (NPOI) has been recording astronomical observations for nearly two decades, with hundreds of thousands of individual observations recorded to date for a total data volume of many terabytes. To make maximum use of the NPOI data it is necessary to organize them in an easily searchable manner and to be able to extract essential diagnostic information that allows users to quickly gauge data quality and suitability for a specific science investigation. This is the motivation for creating a comprehensive database of observation metadata as well as, at least, reduced data products. The NPOI database is implemented in MySQL using standard database tools and interfaces. The use of standard database tools allows us to focus on top-level database and interface implementation and to take advantage of standard features such as backup, remote access, mirroring, and complex queries, which would otherwise be time-consuming to implement. A website was created to give scientists a user-friendly interface for searching the database. It allows the user to select various metadata to search on and to decide how and which results are displayed. This streamlines searches, making it easier and quicker for scientists to find the information they are looking for. The website supports multiple browsers and devices. In this paper we present the design of the NPOI database and website, and give examples of their use.
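A metadata search of the kind the website issues against the MySQL backend might look like the following sketch. It uses Python's built-in sqlite3 as a stand-in for MySQL, and the table and column names are invented for illustration; the actual NPOI schema is not described in this abstract.

```python
import sqlite3

# Invented schema: one row of searchable metadata per observation.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE observations (
    id INTEGER PRIMARY KEY, target TEXT, obs_date TEXT, n_scans INTEGER)""")
conn.executemany(
    "INSERT INTO observations(target, obs_date, n_scans) VALUES (?, ?, ?)",
    [("FKV0193", "2013-05-01", 42), ("HR4689", "2013-05-02", 7)])

def search(conn, target=None, min_scans=0):
    """Build a parameterized query from the user's search-form selections."""
    sql = "SELECT target, obs_date FROM observations WHERE n_scans >= ?"
    args = [min_scans]
    if target is not None:
        sql += " AND target = ?"
        args.append(target)
    return conn.execute(sql, args).fetchall()
```

Parameterized queries keep the web front-end safe from SQL injection while still letting users combine arbitrary metadata filters.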

  3. Expert system for quality control in the INIS database

    International Nuclear Information System (INIS)

    Todeschini, C.; Tolstenkov, A.

    1990-05-01

    An expert system developed to identify input items to INIS database with a high probability of containing errors is described. The system employs a Knowledge Base constructed by the interpretation of a large number of intellectual choices or expert decisions made by human indexers and incorporated in the INIS database. On the basis of the descriptor indexing, the system checks the correctness of the categorization. A notable feature of the system is its capability of self improvement by the continuous updating of the Knowledge Base. The expert system has also been found to be extremely useful in identifying documents with poor indexing. 3 refs, 9 figs
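The descriptor-based category check can be sketched as a simple voting scheme over a knowledge base learned from previously indexed items. The descriptors, category codes, and threshold below are invented for illustration; the actual INIS Knowledge Base construction is more involved.

```python
from collections import defaultdict

def build_kb(records):
    """Learn descriptor -> category counts from already-indexed records.
    Each record is (set_of_descriptors, assigned_category)."""
    kb = defaultdict(lambda: defaultdict(int))
    for descriptors, category in records:
        for d in descriptors:
            kb[d][category] += 1
    return kb

def suspicious(kb, descriptors, category, threshold=0.5):
    """Flag an input item whose descriptors mostly point at a category
    other than the one assigned, i.e. a likely indexing error."""
    votes = defaultdict(float)
    for d in descriptors:
        total = sum(kb[d].values()) or 1
        for c, n in kb[d].items():
            votes[c] += n / total
    if not votes:
        return False  # unknown descriptors: nothing to compare against
    best = max(votes, key=votes.get)
    return best != category and votes[best] / sum(votes.values()) > threshold
```

Re-running `build_kb` as newly indexed items arrive mirrors the self-improvement property noted above: the Knowledge Base is continuously updated from accepted human decisions.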

  4. Expert system for quality control in the INIS database

    Energy Technology Data Exchange (ETDEWEB)

    Todeschini, C; Tolstenkov, A [International Atomic Energy Agency, Vienna (Austria)

    1990-05-01

    An expert system developed to identify input items to INIS database with a high probability of containing errors is described. The system employs a Knowledge Base constructed by the interpretation of a large number of intellectual choices or expert decisions made by human indexers and incorporated in the INIS database. On the basis of the descriptor indexing, the system checks the correctness of the categorization. A notable feature of the system is its capability of self improvement by the continuous updating of the Knowledge Base. The expert system has also been found to be extremely useful in identifying documents with poor indexing. 3 refs, 9 figs.

  5. Filling Terrorism Gaps: VEOs, Evaluating Databases, and Applying Risk Terrain Modeling to Terrorism

    Energy Technology Data Exchange (ETDEWEB)

    Hagan, Ross F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-08-29

    This paper aims to address three issues: the lack of literature differentiating terrorism and violent extremist organizations (VEOs), terrorism incident databases, and the applicability of Risk Terrain Modeling (RTM) to terrorism. Current open source literature and publicly available government sources do not differentiate between terrorism and VEOs; furthermore, they fail to define them. Addressing the lack of a comprehensive comparison of existing terrorism data sources, a matrix comparing a dozen terrorism databases is constructed, providing insight toward the array of data available. RTM, a method for spatial risk analysis at a micro level, has some applicability to terrorism research, particularly for studies looking at risk indicators of terrorism. Leveraging attack data from multiple databases, combined with RTM, offers one avenue for closing existing research gaps in terrorism literature.

  6. Database Description - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ase Description General information of database Database name RMG Alternative name ...raki 305-8602, Japan National Institute of Agrobiological Sciences E-mail : Database... classification Nucleotide Sequence Databases Organism Taxonomy Name: Oryza sativa Japonica Group Taxonomy ID: 39947 Database...rnal: Mol Genet Genomics (2002) 268: 434–445 External Links: Original website information Database...available URL of Web services - Need for user registration Not available About This Database Database Descri

  7. PairWise Neighbours database: overlaps and spacers among prokaryote genomes

    Directory of Open Access Journals (Sweden)

    Garcia-Vallvé Santiago

    2009-06-01

    Full Text Available Abstract Background Although prokaryotes live in a variety of habitats and possess different metabolic and genomic complexity, they have several genomic architectural features in common. Overlapping genes are a common feature of prokaryote genomes. Overlap lengths tend to be short, because longer overlaps carry a greater risk of deleterious mutations. The spacers between genes also tend to be short, owing to the tendency to reduce non-coding DNA in prokaryotes; however, they must be long enough to maintain essential regulatory signals such as the Shine-Dalgarno (SD) sequence, which is responsible for efficient translation. Description PairWise Neighbours is an interactive and intuitive database for retrieving information about the spacers and overlapping genes in bacterial and archaeal genomes. It contains 1,956,294 gene pairs from 678 fully sequenced prokaryote genomes and is freely available at the URL http://genomes.urv.cat/pwneigh. This database provides information about the overlaps and their conservation across species. Furthermore, it allows wide analysis of the intergenic regions, providing useful information such as the location and strength of the SD sequence. Conclusion Experiments and bioinformatic analyses rely on correct annotation of the initiation site; therefore, a database that studies the overlaps and spacers among prokaryotes is desirable. The PairWise Neighbours database permits reliability analysis of the overlapping structures and the study of SD presence and location among adjacent genes, which may help to check the annotation of initiation sites.
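Given adjacent gene coordinates, the overlap/spacer bookkeeping such a database performs reduces to simple arithmetic, sketched here under the assumption of 1-based inclusive coordinates on the same replicon.

```python
def spacer_or_overlap(gene_a, gene_b):
    """For adjacent genes given as (start, end) 1-based inclusive coordinates
    with gene_a upstream of gene_b, return ('spacer', n) for an n-bp
    intergenic gap or ('overlap', n) when the two genes share n bp."""
    (_, end_a), (start_b, _) = gene_a, gene_b
    gap = start_b - end_a - 1
    if gap >= 0:
        return ("spacer", gap)
    return ("overlap", -gap)
```

For example, a 4-bp overlap is what the common ATGA arrangement produces, where the stop codon of one gene overlaps the start codon of the next.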

  8. Building an integrated neurodegenerative disease database at an academic health center.

    Science.gov (United States)

    Xie, Sharon X; Baek, Young; Grossman, Murray; Arnold, Steven E; Karlawish, Jason; Siderowf, Andrew; Hurtig, Howard; Elman, Lauren; McCluskey, Leo; Van Deerlin, Vivianna; Lee, Virginia M-Y; Trojanowski, John Q

    2011-07-01

    It is becoming increasingly important to study common and distinct etiologies, clinical and pathological features, and mechanisms related to neurodegenerative diseases such as Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and frontotemporal lobar degeneration. These comparative studies rely on powerful database tools to quickly generate data sets that match the diverse and complementary criteria they set. In this article, we present a novel integrated neurodegenerative disease (INDD) database, which was developed at the University of Pennsylvania (Penn) with the help of a consortium of Penn investigators. Because the work of these investigators is based on Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and frontotemporal lobar degeneration, it allowed us to achieve the goal of developing an INDD database for these major neurodegenerative disorders. We used the Microsoft SQL server as a platform, with built-in "backwards" functionality to provide Access as a frontend client to interface with the database. We used PHP Hypertext Preprocessor to create the "frontend" web interface and then used a master lookup table to integrate the individual neurodegenerative disease databases. We also present methods of data entry, database security, database backups, and database audit trails for this INDD database. Using the INDD database, we compared the results of a biomarker study with those obtained by the alternative approach of querying the individual databases separately. We have demonstrated that the Penn INDD database has the ability to query multiple database tables from a single console with high accuracy and reliability. The INDD database provides a powerful tool for generating data sets in comparative studies on several neurodegenerative diseases. Copyright © 2011 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.

  9. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  10. THE NASA AMES POLYCYCLIC AROMATIC HYDROCARBON INFRARED SPECTROSCOPIC DATABASE: THE COMPUTED SPECTRA

    International Nuclear Information System (INIS)

    Bauschlicher, C. W.; Ricca, A.; Boersma, C.; Mattioda, A. L.; Cami, J.; Peeters, E.; Allamandola, L. J.; Sanchez de Armas, F.; Puerta Saborido, G.; Hudgins, D. M.

    2010-01-01

    The astronomical emission features, formerly known as the unidentified infrared bands, are now commonly ascribed to polycyclic aromatic hydrocarbons (PAHs). The laboratory experiments and computational modeling done at the NASA Ames Research Center to create a collection of PAH IR spectra relevant to test and refine the PAH hypothesis have been assembled into a spectroscopic database. This database now contains over 800 PAH spectra spanning 2-2000 μm (5000-5 cm⁻¹). These data are now available on the World Wide Web at www.astrochem.org/pahdb. This paper presents an overview of the computational spectra in the database and the tools developed to analyze and interpret astronomical spectra using the database. A description of the online and offline user tools available on the Web site is also presented.

  11. Database Description - DGBY | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name DGBY Alternative name Database...EL: +81-29-838-8066 E-mail: Database classification Microarray Data and other Gene Expression Databases Orga...nism Taxonomy Name: Saccharomyces cerevisiae Taxonomy ID: 4932 Database descripti...-called phenomics). We uploaded these data on this website which is designated DGBY(Database for Gene expres...ma J, Ando A, Takagi H. Journal: Yeast. 2008 Mar;25(3):179-90. External Links: Original website information Database

  12. Database Description - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name KOME Alternative nam... Sciences Plant Genome Research Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice ...Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description Information about approximately ...Hayashizaki Y, Kikuchi S. Journal: PLoS One. 2007 Nov 28; 2(11):e1235. External Links: Original website information Database...OS) Rice mutant panel database (Tos17) A Database of Plant Cis-acting Regulatory

  13. ProCarDB: a database of bacterial carotenoids.

    Science.gov (United States)

    Nupur, L N U; Vats, Asheema; Dhanda, Sandeep Kumar; Raghava, Gajendra P S; Pinnaka, Anil Kumar; Kumar, Ashwani

    2016-05-26

    Carotenoids have important functions in bacteria, ranging from harvesting light energy to neutralizing oxidants and acting as virulence factors. However, information pertaining to carotenoids is scattered throughout the literature, and information about the genes/proteins involved in their biosynthesis has increased tremendously in the post-genomic era. A web server providing information about microbial carotenoids in a structured manner is therefore required and will be a valuable resource for the scientific community working with microbial carotenoids. Here, we have created a manually curated, open-access, comprehensive compilation of bacterial carotenoids named ProCarDB (Prokaryotic Carotenoid Database). ProCarDB includes 304 unique carotenoids arising from 50 biosynthetic pathways distributed among 611 prokaryotes. ProCarDB provides important information on carotenoids, such as 2D and 3D structures, molecular weight, molecular formula, SMILES, InChI, InChIKey, IUPAC name, KEGG Id, PubChem Id, and ChEBI Id. The database also provides NMR, UV-vis absorption, IR, MS and HPLC data that play key roles in the identification of carotenoids. An important feature of this database is the extension of biosynthetic pathways from the literature and through the presence of the genes/enzymes in different organisms. The information contained in the database was mined from published literature and from databases such as KEGG, PubChem, ChEBI, LipidBank, LPSN, and Uniprot. The database integrates user-friendly browsing and searching with carotenoid analysis tools to help the user. We believe that this database will serve as a major information centre for researchers working on bacterial carotenoids.

  14. Enhancements to the Redmine Database Metrics Plug in

    Science.gov (United States)

    2017-08-01

    management web application has been adopted within the US Army Research Laboratory's Computational and Information Sciences Directorate as a database...project management web application. The Redmine plug-in enabled the use of the numerous, powerful features of the web application. The many... • Selectable export of citations/references by type, writing style, and FY • Enhanced naming convention options for

  15. Thermodynamic database of multi-component Mg alloys and its application to solidification and heat treatment

    Directory of Open Access Journals (Sweden)

    Guanglong Xu

    2016-12-01

    Full Text Available An overview of a thermodynamic database of multi-component Mg alloys is given in this work. The database includes thermodynamic descriptions for 145 binary systems and 48 ternary systems in the 23-component (Mg–Ag–Al–Ca–Ce–Cu–Fe–Gd–K–La–Li–Mn–Na–Nd–Ni–Pr–Si–Sn–Sr–Th–Y–Zn–Zr) system. First, the major computational and experimental tools used to establish the thermodynamic database of Mg alloys are briefly described. Subsequently, representative binary and ternary systems are shown to demonstrate the major features of the database. Finally, application of the thermodynamic database to solidification simulation and to the selection of heat treatment schedules is described.

  16. E-Government Partnerships Across Levels of Government

    OpenAIRE

    Charbit, Claire; Michalun, Varinia

    2009-01-01

    E-Government Partnerships across Levels of Government is an overview of the challenges of, and approaches to, creating collaborative and cooperative partnerships across levels of government for e-government development and implementation.

  17. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  18. On the advancement of highly cited research in China: An analysis of the Highly Cited database.

    Science.gov (United States)

    Li, John Tianci

    2018-01-01

    This study investigates the progress of highly cited research in China from 2001 to 2016 through an analysis of the Highly Cited database. The Highly Cited database, compiled by Clarivate Analytics, comprises the world's most influential researchers in the 22 Essential Science Indicator fields catalogued by the Web of Science, and is considered an international standard for measuring national and institutional highly cited research output. Overall, we found a consistent and substantial increase in Highly Cited Researchers from China during this timespan. The Chinese institutions with the most Highly Cited Researchers (the Chinese Academy of Sciences, Tsinghua University, Peking University, Zhejiang University, the University of Science and Technology of China, and BGI Shenzhen) are all top-ten universities or primary government research institutions. Further evaluation of separate fields of research and of government funding data from the National Natural Science Foundation of China revealed disproportionate growth efficiencies among the separate divisions of the National Natural Science Foundation. The most development occurred in the fields of Chemistry, Materials Sciences, and Engineering, whereas the least development occurred in Economics and Business, Health Sciences, and Life Sciences.

  19. Finger vein recognition based on the hyperinformation feature

    Science.gov (United States)

    Xi, Xiaoming; Yang, Gongping; Yin, Yilong; Yang, Lu

    2014-01-01

    The finger vein is a promising biometric pattern for personal identification due to its advantages over other existing biometrics. In finger vein recognition, feature extraction is a critical step, and many feature extraction methods have been proposed to extract the gray, texture, or shape of the finger vein. We treat these as low-level features and present a high-level feature extraction framework. Under this framework, a base attribute is first defined to represent the characteristics of a certain subcategory of a subject. Then, for an image, the correlation coefficient is used to construct the high-level feature, which reflects the correlation between this image and all base attributes. Since the high-level feature can reveal characteristics of more subcategories and contains more discriminative information, we call it the hyperinformation feature (HIF). Compared with low-level features, which only represent the characteristics of one subcategory, HIF is more powerful and robust. In order to demonstrate the potential of the proposed framework, we provide a case study on extracting HIF. We conduct comprehensive experiments on our databases to show the generality of the proposed framework and the efficiency of HIF. Experimental results show that HIF significantly outperforms the low-level features.
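The construction of the high-level feature from correlation coefficients can be sketched as below. Computing a Pearson correlation between one image's low-level feature vector and each base attribute follows the description above; the vectors themselves are toy data, and the sketch assumes non-constant vectors of equal length.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length, non-constant vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def hif(low_level, base_attributes):
    """High-level feature: the vector of correlations between one image's
    low-level feature vector and every base attribute."""
    return [pearson(low_level, b) for b in base_attributes]
```

Verification then reduces to comparing HIF vectors (e.g. by a distance measure) rather than comparing the raw low-level features directly.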

  20. The Feature Extraction Based on Texture Image Information for Emotion Sensing in Speech

    Directory of Open Access Journals (Sweden)

    Kun-Ching Wang

    2014-09-01

    Full Text Available In this paper, we present a novel texture image feature for Emotion Sensing in Speech (ESS). This idea is based on the fact that texture images carry emotion-related information. The feature extraction is derived from the time-frequency representation of spectrogram images. First, we transform the spectrogram into a recognizable image. Next, we use a cubic curve to enhance the image contrast. Then, the texture image information (TII) derived from the spectrogram image is extracted using Laws' masks to characterize the emotional state. In order to evaluate the effectiveness of the proposed emotion recognition in different languages, we use two open emotional databases, the Berlin Emotional Speech Database (EMO-DB) and the eNTERFACE corpus, and one self-recorded database (KHUSC-EmoDB), to evaluate cross-corpus performance. The results of the proposed ESS system are presented using a support vector machine (SVM) as the classifier. Experimental results show that the proposed TII-based feature extraction, inspired by visual perception, can provide significant classification for ESS systems. The two-dimensional (2-D) TII feature can discriminate between different emotions in visual expressions beyond the conveyed pitch and formant tracks. In addition, de-noising in 2-D images can be completed more easily than de-noising in 1-D speech.
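
    The texture-energy step can be illustrated with a minimal sketch. The 1-D kernels L5, E5 and S5 are the standard Laws vectors; the 8x8 "spectrogram patches" below are invented toy data, and the mean absolute response is one common choice of energy statistic (the paper's exact post-processing may differ).

```python
# Laws' 1-D kernels: L5 (level), E5 (edge), S5 (spot); 2-D masks are
# their outer products.
L5 = [1, 4, 6, 4, 1]
E5 = [-1, -2, 0, 2, 1]
S5 = [-1, 0, 2, 0, -1]

def outer(u, v):
    return [[a * b for b in v] for a in u]

def conv2d_valid(img, kernel):
    # Plain 2-D correlation in 'valid' mode (no padding).
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(kernel[ki][kj] * img[i + ki][j + kj]
                           for ki in range(kh) for kj in range(kw)))
        out.append(row)
    return out

def texture_energy(img, mask):
    # One common Laws energy statistic: mean absolute filter response.
    vals = [abs(v) for r in conv2d_valid(img, mask) for v in r]
    return sum(vals) / len(vals)

# Toy 8x8 "spectrogram patches": a flat patch and one with a vertical edge.
flat = [[5] * 8 for _ in range(8)]
edge = [[0] * 4 + [1] * 4 for _ in range(8)]
mask = outer(L5, E5)   # L5E5: smooth in one direction, edge-detect in the other
e_flat = texture_energy(flat, mask)
e_edge = texture_energy(edge, mask)
```

    Because E5 sums to zero, a uniform patch produces zero energy, while the patch containing an edge produces a large response; a bank of such masks yields the TII feature vector.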

  1. A web-based data visualization tool for the MIMIC-II database.

    Science.gov (United States)

    Lee, Joon; Ribey, Evan; Wallace, James R

    2016-02-04

    Although MIMIC-II, a public intensive care database, has been recognized as an invaluable resource by many medical researchers worldwide, becoming a proficient MIMIC-II researcher requires knowledge of SQL programming and an understanding of the MIMIC-II database schema. These are challenging requirements, especially for health researchers and clinicians who may have limited computer proficiency. To overcome this challenge, our objective was to create an interactive, web-based MIMIC-II data visualization tool that first-time MIMIC-II users can easily use to explore the database. The tool offers two main features: Explore and Compare. The Explore feature enables the user to select a patient cohort within MIMIC-II and visualize the distributions of various administrative, demographic, and clinical variables within the selected cohort. The Compare feature enables the user to select two patient cohorts and visually compare them with respect to a variety of variables. The tool is also helpful to experienced MIMIC-II researchers, who can use it to substantially accelerate the cumbersome and time-consuming steps of writing SQL queries and manually visualizing extracted data. Any interested researcher can use the MIMIC-II data visualization tool for free to quickly and conveniently conduct a preliminary investigation of MIMIC-II with a few mouse clicks. Researchers can also use the tool to learn the characteristics of the MIMIC-II patients. Since it is not yet possible to conduct multivariable regression inside the tool, future work includes adding analytics capabilities. The next version of the tool will also aim to utilize MIMIC-III, which contains more data.
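
    The kind of SQL cohort query the tool spares first-time users from writing can be sketched against a toy schema. The admissions table and its columns below are hypothetical stand-ins, not the real MIMIC-II schema.

```python
import sqlite3

# Toy stand-in for a MIMIC-II-style admissions table; the real schema differs.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE admissions (
    patient_id INTEGER, age REAL, gender TEXT, icu_stay_days REAL)""")
conn.executemany(
    "INSERT INTO admissions VALUES (?, ?, ?, ?)",
    [(1, 67.0, 'M', 3.2), (2, 45.5, 'F', 1.1),
     (3, 72.3, 'F', 8.4), (4, 58.0, 'M', 2.0)])

# A typical cohort selection a first-time user would otherwise write by hand:
# patients over 60, with a summary statistic for the cohort.
row = conn.execute("""
    SELECT COUNT(*), AVG(icu_stay_days)
    FROM admissions
    WHERE age > 60
""").fetchone()
cohort_size, mean_stay = row
```

    The visualization tool effectively generates and runs such queries behind its Explore and Compare interfaces, then plots the resulting distributions.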

  2. 75 FR 64742 - Government in the Sunshine Act Meeting Notice

    Science.gov (United States)

    2010-10-20

    ... INTERNATIONAL TRADE COMMISSION [USITC SE-10-029] Government in the Sunshine Act Meeting Notice AGENCY HOLDING THE MEETING: United States International Trade Commission. TIME AND DATE: October 22, 2010... Wireless Communications Devices Featuring Digital Cameras, and Components Thereof). In accordance with...

  3. RaftProt: mammalian lipid raft proteome database.

    Science.gov (United States)

    Shah, Anup; Chen, David; Boda, Akash R; Foster, Leonard J; Davis, Melissa J; Hill, Michelle M

    2015-01-01

    RaftProt (http://lipid-raft-database.di.uq.edu.au/) is a database of mammalian lipid raft-associated proteins as reported in high-throughput mass spectrometry studies. Lipid rafts are specialized membrane microdomains enriched in cholesterol and sphingolipids, thought to act as dynamic signalling and sorting platforms. Given their fundamental roles in cellular regulation, there is a plethora of information on the size, composition and regulation of these membrane microdomains, including a large number of proteomics studies. To facilitate the mining and analysis of published lipid raft proteomics studies, we have developed the searchable database RaftProt. In addition to browsing the studies, performing basic queries by protein and gene names, and searching experiments by cell, tissue and organism, we have implemented several advanced features to facilitate data mining. To address the issue of potential bias due to the biochemical preparation procedures used, we have captured the lipid raft preparation methods and implemented an advanced search option for methodology and sample treatment conditions, such as cholesterol depletion. Furthermore, we have identified a list of high-confidence proteins, and enabled searching only from this list of likely bona fide lipid raft proteins. Given the apparent biological importance of lipid rafts and their associated proteins, this database constitutes a key resource for the scientific community. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  4. Solar Sail Propulsion Technology Readiness Level Database

    Science.gov (United States)

    Adams, Charles L.

    2004-01-01

    The NASA In-Space Propulsion Technology (ISPT) Projects Office has been sponsoring two solar sail system design and development hardware demonstration activities over the past 20 months. Able Engineering Company (AEC) of Goleta, CA is leading one team and L'Garde, Inc. of Tustin, CA is leading the other. Component, subsystem and system fabrication and testing have been completed successfully. The goal of these activities is to advance the technology readiness level (TRL) of solar sail propulsion from 3 towards 6 by 2006. These activities will culminate in the deployment and testing of 20-meter solar sail system ground demonstration hardware in the 30-meter-diameter thermal-vacuum chamber at NASA Glenn Plum Brook in 2005. This paper describes the features of a computer database system that documents the results of the solar sail development activities to date. Illustrations of the hardware components and systems, test results, analytical models, relevant space environment definitions and the current TRL assessment, as stored and manipulated within the database, are presented. This database could serve as a central repository for all data related to the advancement of solar sail technology sponsored by the ISPT, providing an up-to-date assessment of the TRL of this technology. Current plans are to eventually make the database available to the solar sail community through the Space Transportation Information Network (STIN).

  5. Interactive Exploration for Continuously Expanding Neuron Databases.

    Science.gov (United States)

    Li, Zhongyu; Metaxas, Dimitris N; Lu, Aidong; Zhang, Shaoting

    2017-02-15

    This paper proposes a novel framework to help biologists explore and analyze neurons based on retrieval of data from neuron morphological databases. In recent years, continuously expanding neuron databases have provided a rich source of information to associate neuronal morphologies with their functional properties. We design a coarse-to-fine framework for efficient and effective data retrieval from large-scale neuron databases. At the coarse level, for efficiency at large scale, we employ a binary coding method to compress morphological features into binary codes of tens of bits. Short binary codes allow for real-time similarity searching in Hamming space. Because the neuron databases are continuously expanding, it is inefficient to re-train the binary coding model from scratch when adding new neurons. To solve this problem, we extend binary coding with online updating schemes, which consider only the newly added neurons and update the model on the fly, without accessing the whole neuron database. At the fine-grained level, we introduce domain experts/users into the framework, who can give relevance feedback on the binary-coding-based retrieval results. This interactive strategy can improve retrieval performance by re-ranking the coarse results, where we design a new similarity measure that takes the feedback into account. Our framework is validated on more than 17,000 neuron cells, showing promising retrieval accuracy and efficiency. Moreover, we demonstrate its use case in assisting biologists to identify and explore unknown neurons. Copyright © 2017 Elsevier Inc. All rights reserved.
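
    Retrieval in Hamming space over short binary codes can be sketched as follows. The 8-bit codes are invented toy data (the paper uses tens of bits), and appending a code stands in for the online updating scheme, which in the real system also updates the coding model itself.

```python
def hamming(a, b):
    # Hamming distance between two binary codes stored as ints.
    return bin(a ^ b).count("1")

def search(query_code, db_codes, k=3):
    # Linear scan in Hamming space; for codes of tens of bits this is
    # effectively real-time even over large databases.
    ranked = sorted(range(len(db_codes)),
                    key=lambda i: hamming(query_code, db_codes[i]))
    return ranked[:k]

# Hypothetical 8-bit codes for five stored neurons.
db = [0b10110010, 0b10110011, 0b01001100, 0b11110000, 0b00001111]
query = 0b10110110

# Online expansion: a newly added neuron's code is appended without
# re-encoding the existing entries (a toy stand-in for online updating).
db.append(0b10110111)
top = search(query, db)
```

    The coarse ranking returned here is what the fine-grained stage re-ranks using relevance feedback from domain experts.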

  6. Database Description - SSBD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name SSBD Alternative nam...ss 2-2-3 Minatojima-minamimachi, Chuo-ku, Kobe 650-0047, Japan, RIKEN Quantitative Biology Center Shuichi Onami E-mail: Database... classification Other Molecular Biology Databases Database classification Dynamic databa...elegans Taxonomy ID: 6239 Taxonomy Name: Escherichia coli Taxonomy ID: 562 Database description Systems Scie...i Onami Journal: Bioinformatics/April, 2015/Volume 31, Issue 7 External Links: Original website information Database

  7. Database Description - GETDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name GETDB Alternative n...ame Gal4 Enhancer Trap Insertion Database DOI 10.18908/lsdba.nbdc00236-000 Creator Creator Name: Shigeo Haya... Chuo-ku, Kobe 650-0047 Tel: +81-78-306-3185 FAX: +81-78-306-3183 E-mail: Database classification Expression... Invertebrate genome database Organism Taxonomy Name: Drosophila melanogaster Taxonomy ID: 7227 Database des...riginal website information Database maintenance site Drosophila Genetic Resource

  8. JICST Factual DatabaseJICST Chemical Substance Safety Regulation Database

    Science.gov (United States)

    Abe, Atsushi; Sohma, Tohru

    The JICST Chemical Substance Safety Regulation Database is based on the Database of Safety Laws for Chemical Compounds constructed by the Japan Chemical Industry Ecology-Toxicology & Information Center (JETOC), sponsored by the Science and Technology Agency in 1987. JICST modified the JETOC database system, added data, and started the online service through JOIS-F (JICST Online Information Service-Factual database) in January 1990. The JICST database comprises eighty-three laws and fourteen hundred compounds. The authors outline the database, data items, files and search commands. An example online session is presented.

  9. Database Access through Java Technologies

    Directory of Open Access Journals (Sweden)

    Nicolae MERCIOIU

    2010-09-01

    Full Text Available As a high-level development environment, the Java technologies offer support for the development of distributed, platform-independent applications, providing a robust set of methods to access databases and to create software components on the server side as well as on the client side. Analyzing the evolution of Java tools for data access, we notice that these tools evolved from simple methods that permitted queries, insertion, update and deletion of data to advanced implementations such as distributed transactions, cursors and batch files. The client-server architecture allows, through JDBC (Java Database Connectivity), the execution of SQL (Structured Query Language) instructions and the manipulation of the results in an independent and consistent manner. The JDBC API (Application Programming Interface) creates the level of abstraction needed to allow SQL queries to be issued against any DBMS (Database Management System). The native JDBC driver, the ODBC (Open Database Connectivity)-JDBC bridge, and the classes and interfaces of the JDBC API are described. The four steps needed to build a JDBC-driven application are presented briefly, emphasizing the way each step has to be accomplished and the expected results. In each step there are evaluations of the characteristics of the database systems and the way the JDBC programming interface adapts to each one. The data types provided by the SQL2 and SQL3 standards are analyzed by comparison with the Java data types, emphasizing the discrepancies between those and the SQL types, but also the methods that allow conversion between different types of data through the methods of the ResultSet object. Next, starting from the role of metadata and studying the Java programming interfaces that allow the querying of result sets, we describe the advanced features of data mining with JDBC. As an alternative to result sets, the Rowsets add new functionalities that

  10. Smart Images Search based on Visual Features Fusion

    International Nuclear Information System (INIS)

    Saad, M.H.

    2013-01-01

    is more accurate in retrieving images even in distortion cases such as geometric deformations and noise. The third proposed approach uses a modified region-based segmentation scheme that provides efficient segmentation results and treats over-segmentation problems. This approach segments an image into regions that work as local descriptors. It integrates the global feature vector, which is used in the first approach, with the segmented regions as local features. A spatial graph is constructed from the segmented regions and a greedy graph matching algorithm is applied to determine the final image rank. The proposed approaches are tested on standard image databases such as the Wang and UCID databases, as well as on our deformed Wang image database. Finally, the third approach is tested on breast cancer images retrieved from the Mammographic Image Analysis Society. Experimental work shows that the proposed approaches improve the precision and recall of retrieval results compared to other approaches reported in the thesis.

  11. Neighbors Based Discriminative Feature Difference Learning for Kinship Verification

    DEFF Research Database (Denmark)

    Duan, Xiaodong; Tan, Zheng-Hua

    2015-01-01

    In this paper, we present a discriminative feature difference learning method for facial image based kinship verification. To transform feature difference of an image pair to be discriminative for kinship verification, a linear transformation matrix for feature difference between an image pair...... than the commonly used feature concatenation, leading to a low complexity. Furthermore, there is no positive semi-definitive constrain on the transformation matrix while there is in metric learning methods, leading to an easy solution for the transformation matrix. Experimental results on two public...... databases show that the proposed method combined with a SVM classification method outperforms or is comparable to state-of-the-art kinship verification methods. © Springer International Publishing AG, Part of Springer Science+Business Media...

  12. Local Feature Learning for Face Recognition under Varying Poses

    DEFF Research Database (Denmark)

    Duan, Xiaodong; Tan, Zheng-Hua

    2015-01-01

    In this paper, we present a local feature learning method for face recognition to deal with varying poses. As opposed to the commonly used approaches of recovering frontal face images from profile views, the proposed method extracts the subject related part from a local feature by removing the pose...... related part in it on the basis of a pose feature. The method has a closed-form solution, hence being time efficient. For performance evaluation, cross pose face recognition experiments are conducted on two public face recognition databases FERET and FEI. The proposed method shows a significant...... recognition improvement under varying poses over general local feature approaches and outperforms or is comparable with related state-of-the-art pose invariant face recognition approaches. Copyright ©2015 by IEEE....

  13. GigaDB: announcing the GigaScience database

    Directory of Open Access Journals (Sweden)

    Sneddon Tam P

    2012-07-01

    Full Text Available Abstract With the launch of GigaScience journal, here we provide insight into the accompanying database GigaDB, which allows the integration of manuscript publication with supporting data and tools. Reinforcing and upholding GigaScience’s goals to promote open-data and reproducibility of research, GigaDB also aims to provide a home, when a suitable public repository does not exist, for the supporting data or tools featured in the journal and beyond.

  14. The Structural Ceramics Database: Technical Foundations

    Science.gov (United States)

    Munro, R. G.; Hwang, F. Y.; Hubbard, C. R.

    1989-01-01

    The development of a computerized database on advanced structural ceramics can play a critical role in fostering the widespread use of ceramics in industry and in advanced technologies. A computerized database may be the most effective means of accelerating technology development by enabling new materials to be incorporated into designs far more rapidly than would have been possible with traditional information transfer processes. Faster, more efficient access to critical data is the basis for creating this technological advantage. Further, a computerized database provides the means for a more consistent treatment of data, greater quality control and product reliability, and improved continuity of research and development programs. A preliminary system has been completed as phase one of an ongoing program to establish the Structural Ceramics Database system. The system is designed to be used on personal computers. Developed in a modular design, the preliminary system is focused on the thermal properties of monolithic ceramics. The initial modules consist of materials specification, thermal expansion, thermal conductivity, thermal diffusivity, specific heat, thermal shock resistance, and a bibliography of data references. Query and output programs also have been developed for use with these modules. The latter program elements, along with the database modules, will be subjected to several stages of testing and refinement in the second phase of this effort. The goal of the refinement process will be the establishment of this system as a user-friendly prototype. Three primary considerations provide the guidelines to the system’s development: (1) The user’s needs; (2) The nature of materials properties; and (3) The requirements of the programming language. The present report discusses the manner and rationale by which each of these considerations leads to specific features in the design of the system. PMID:28053397

  15. E-government and employment services a case study in effectiveness

    CERN Document Server

    Fugini, Maria Grazia; Valles, Ramon Salvador

    2014-01-01

    This book explores the factors that affect the efficiency and effectiveness of electronic government (e-Government) by analyzing two employment-service systems in Italy and Catalonia: the Borsa Lavoro Lombardia Portal (Lombardy Employment Services Portal) and the Servei d'Ocupació de Catalunya (Catalan Employment Services Portal). The evaluation methodology used in the case studies and the related set of technical, social, and economic indicators are clearly described. The technological and organizational features of the two systems are then compared and their impacts assessed.

  16. Pembangunan Database Destinasi Pariwisata Indonesia Pengumpulan dan Pengolahan Data Tahap I

    Directory of Open Access Journals (Sweden)

    Yosafati Hulu

    2014-12-01

    Full Text Available Considering the increasing need of local governments and communities to develop tourism destinations in the era of autonomy, the need to select appropriate attractions according to their respective criteria, and the needs of travel and hotel businesses to offer attractions suited to potential tourists, it is necessary to develop a database of tourist destinations in Indonesia that can serve these needs. The database being built is web-based, widely accessible, and capable of storing complete information about Indonesian tourism destinations in a comprehensive, systematic, and structured way. Attractions in the database are classifiable by attributes: location (island name, province, district), type/tourism product, how to reach the attraction, cost, and various informal information such as the ins and outs of local attractions as reported by local communities or tourists. This study is a continuation of previous research, the second of three planned phases. Phase two focuses on the collection and processing of data as well as testing and refinement of the model design and database structure created in Phase I. The study was conducted in stages: (1) design of the model and database structure, (2) development of a web-based program, (3) installation and hosting, (4) data collection, (5) data processing and data entry, and (6) evaluation and improvement/completion.

  17. Features of Chaotic Transients in Excitable Media Governed by Spiral and Scroll Waves

    Science.gov (United States)

    Lilienkamp, Thomas; Christoph, Jan; Parlitz, Ulrich

    2017-08-01

    In excitable media, chaotic dynamics governed by spiral or scroll waves is often not persistent but transient. Using extensive simulations employing different mathematical models we identify a specific type-II supertransient by an exponential increase of transient lifetimes with the system size in 2D and an investigation of the dynamics (number and lifetime of spiral waves, Kaplan-Yorke dimension). In 3D, simulations exhibit an increase of transient lifetimes and filament lengths only above a critical thickness. Finally, potential implications for understanding cardiac arrhythmias are discussed.

  18. Database Description - KAIKOcDNA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us KAIKOcDNA Database Description General information of database Database name KAIKOcDNA Alter...National Institute of Agrobiological Sciences Akiya Jouraku E-mail : Database cla...ssification Nucleotide Sequence Databases Organism Taxonomy Name: Bombyx mori Taxonomy ID: 7091 Database des...rnal: G3 (Bethesda) / 2013, Sep / vol.9 External Links: Original website information Database maintenance si...available URL of Web services - Need for user registration Not available About This Database Database

  19. Download - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Trypanosomes Database Download First of all, please read the license of this database. Data ...1.4 KB) Simple search and download Downlaod via FTP FTP server is sometimes jammed. If it is, access [here]. About This Database Data...base Description Download License Update History of This Database Site Policy | Contact Us Download - Trypanosomes Database | LSDB Archive ...

  20. License - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Arabidopsis Phenome Database License License to Use This Database Last updated : 2017/02/27 You may use this database...cense specifies the license terms regarding the use of this database and the requirements you must follow in using this database.... The license for this database is specified in the Creative ...Commons Attribution-Share Alike 4.0 International . If you use data from this database, please be sure attribute this database...ative Commons Attribution-Share Alike 4.0 International is found here . With regard to this database, you ar

  1. Hindi vowel classification using QCN-MFCC features

    Directory of Open Access Journals (Sweden)

    Shipra Mishra

    2016-09-01

    Full Text Available In the presence of environmental noise, speakers tend to increase their vocal effort to improve the audibility of their voice. This involuntary adjustment is known as the Lombard effect (LE). Due to LE the signal-to-noise ratio of speech increases, but at the same time the loudness, pitch and duration of phonemes change. Hence, the accuracy of automatic speech recognition systems degrades. In this paper, the effect of unsupervised equalization of the Lombard effect is investigated for a Hindi vowel classification task using the Hindi database designed at TIFR Mumbai, India. The proposed Quantile-based Dynamic Cepstral Normalization MFCC (QCN-MFCC) features, along with baseline MFCC features, have been used for vowel classification. A Hidden Markov Model (HMM) is used as the classifier. It is observed that QCN-MFCC features give a maximum improvement of 5.97% and 5% over MFCC features for the context-dependent and context-independent cases respectively. It is also observed that QCN-MFCC features give improvements of 13% and 11.5% over MFCC features for context-dependent and context-independent classification of mid vowels.
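
    A rough sketch of quantile-based normalization: each cepstral dimension is shifted and scaled by the midpoint and spread of its low/high quantiles rather than by mean and variance. The 5th/95th percentile choice and the toy two-dimensional cepstra below are assumptions for illustration, not the paper's exact formulation.

```python
def quantile(sorted_vals, q):
    # Simple linear-interpolation quantile on a pre-sorted list.
    pos = q * (len(sorted_vals) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(sorted_vals) - 1)
    frac = pos - lo
    return sorted_vals[lo] * (1 - frac) + sorted_vals[hi] * frac

def qcn(frames, q_low=0.05, q_high=0.95):
    # Normalize each cepstral dimension by the midpoint and spread of its
    # low/high quantiles, instead of the mean/variance used by CMVN.
    dims = len(frames[0])
    out = [list(f) for f in frames]
    for d in range(dims):
        track = sorted(f[d] for f in frames)
        ql, qh = quantile(track, q_low), quantile(track, q_high)
        mid, spread = (ql + qh) / 2.0, (qh - ql) or 1.0
        for f in out:
            f[d] = (f[d] - mid) / spread
    return out

# Two-dimensional toy cepstra: a clean track and the same track with a
# Lombard-style offset and gain applied to every coefficient.
clean = [[0.0, 1.0], [1.0, 2.0], [2.0, 3.0], [3.0, 4.0], [4.0, 5.0]]
lombard = [[2 * c0 + 5, 2 * c1 + 5] for c0, c1 in clean]
normalized = qcn(lombard)
```

    Because the normalization is affine-invariant per dimension, `qcn(clean)` and `qcn(lombard)` coincide, which is the sense in which the Lombard shift is equalized.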

  2. Portable database driven control system for SPEAR

    International Nuclear Information System (INIS)

    Howry, S.; Gromme, T.; King, A.; Sullenberger, M.

    1985-04-01

    The new computer control system software for SPEAR is presented as a transfer from the PEP system. Features of the target ring (SPEAR) such as symmetries, magnet groupings, etc., are all contained in a design file which is read by both people and computer. People use it as documentation; a program reads it to generate the database structure, which becomes the center of communication for all the software. Geometric information, such as element positions and lengths, and CAMAC I/O routing information is entered into the database as it is developed. Since application processes refer only to the database and since they do so only in generic terms, almost all of this software (representing more than fifteen man-years) is transferred with few changes. Operator console menus (touchpanels) are also transferred with only superficial changes for the same reasons. The system is modular: the CAMAC I/O software is all in one process; the menu control software is a process; the ring optics model and the orbit model are separate processes, each of which runs concurrently with about 15 others in the multiprogramming environment of the VAX/VMS operating system. 10 refs., 1 fig

  3. Portable database driven control system for SPEAR

    Energy Technology Data Exchange (ETDEWEB)

    Howry, S.; Gromme, T.; King, A.; Sullenberger, M.

    1985-04-01

    The new computer control system software for SPEAR is presented as a transfer from the PEP system. Features of the target ring (SPEAR) such as symmetries, magnet groupings, etc., are all contained in a design file which is read by both people and computer. People use it as documentation; a program reads it to generate the database structure, which becomes the center of communication for all the software. Geometric information, such as element positions and lengths, and CAMAC I/O routing information is entered into the database as it is developed. Since application processes refer only to the database and since they do so only in generic terms, almost all of this software (representing more than fifteen man-years) is transferred with few changes. Operator console menus (touchpanels) are also transferred with only superficial changes for the same reasons. The system is modular: the CAMAC I/O software is all in one process; the menu control software is a process; the ring optics model and the orbit model are separate processes, each of which runs concurrently with about 15 others in the multiprogramming environment of the VAX/VMS operating system. 10 refs., 1 fig.

  4. Constitutional “World Views”, Global Governance and International Relations Theory

    NARCIS (Netherlands)

    Larik, J.E.

    2014-01-01

    This paper addresses the constitutional entrenchment of foreign policy preferences, or “world views”, from the vantage point of International Relations theory. Empirically, norms that sketch out certain visions of global governance have become a popular feature of constitutional design. The paper

  5. Typed Sets as a Basis for Object-Oriented Database Schemas

    NARCIS (Netherlands)

    Balsters, H.; de By, R.A.; Zicari, R.

    The object-oriented data model TM is a language that is based on the formal theory of FM, a typed language with object-oriented features such as attributes and methods in the presence of subtyping. The general (typed) set constructs of FM allow one to deal with (database) constraints in TM. The

  6. FEATURE EXTRACTION FOR EMG BASED PROSTHESES CONTROL

    Directory of Open Access Journals (Sweden)

    R. Aishwarya

    2013-01-01

    Full Text Available The control of a prosthetic limb would be more effective if it were based on Surface Electromyogram (SEMG) signals from remnant muscles. The analysis of SEMG signals depends on a number of factors, such as amplitude as well as time- and frequency-domain properties. Time series analysis using an Auto Regressive (AR) model, and the mean frequency, which is tolerant to white Gaussian noise, are used as feature extraction techniques. The EMG histogram is used as another feature vector that was seen to give more distinct classification. The work was done with an SEMG dataset obtained from the NINAPRO DATABASE, a resource for the biorobotics community. Eight classes of hand movements (hand open, hand close, wrist extension, wrist flexion, pointing index, ulnar deviation, thumbs up, thumb opposite to little finger) are taken into consideration and feature vectors are extracted. The feature vectors can be given to an artificial neural network for further classification in controlling the prosthetic arm, which is not dealt with in this paper.
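
    The EMG histogram feature can be sketched directly: count samples per amplitude bin over a window and normalize. The bin count, amplitude range, and the two toy SEMG windows below are assumptions for illustration, not values from the NINAPRO dataset.

```python
def emg_histogram(window, bins=8, lo=-1.0, hi=1.0):
    # EMG histogram feature: count samples per amplitude bin, normalized
    # to a probability distribution over the window.
    counts = [0] * bins
    width = (hi - lo) / bins
    for s in window:
        idx = min(int((s - lo) / width), bins - 1)
        idx = max(idx, 0)          # clamp out-of-range samples
        counts[idx] += 1
    n = len(window)
    return [c / n for c in counts]

# Hypothetical SEMG windows: a weak contraction (small amplitudes) and a
# strong one (amplitudes pushed toward the range edges).
weak = [0.05, -0.1, 0.02, -0.04, 0.08, -0.06, 0.01, -0.02]
strong = [0.9, -0.85, 0.7, -0.95, 0.8, -0.75, 0.88, -0.9]
h_weak = emg_histogram(weak)
h_strong = emg_histogram(strong)
```

    The weak window concentrates mass in the central bins while the strong one fills the outer bins, which is why the histogram separates movement classes more distinctly than amplitude alone.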

  7. Comparison of the effectiveness of alternative feature sets in shape retrieval of multicomponent images

    Science.gov (United States)

    Eakins, John P.; Edwards, Jonathan D.; Riley, K. Jonathan; Rosin, Paul L.

    2001-01-01

    Many different kinds of features have been used as the basis for shape retrieval from image databases. This paper investigates the relative effectiveness of several types of global shape feature, both singly and in combination. The features compared include well-established descriptors such as Fourier coefficients and moment invariants, as well as recently-proposed measures of triangularity and ellipticity. Experiments were conducted within the framework of the ARTISAN shape retrieval system, and retrieval effectiveness assessed on a database of over 10,000 images, using 24 queries and associated ground truth supplied by the UK Patent Office. Our experiments revealed only minor differences in retrieval effectiveness between different measures, suggesting that a wide variety of shape feature combinations can provide adequate discriminating power for effective shape retrieval in multi-component image collections such as trademark registries. Marked differences between measures were observed for some individual queries, suggesting that there could be considerable scope for improving retrieval effectiveness by providing users with an improved framework for searching multi-dimensional feature space.
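
    One of the compared descriptor families, Fourier coefficients of the boundary, can be sketched as follows. The square contour is toy data; dropping the DC term and normalizing by the first coefficient are the usual steps that make the descriptor translation- and scale-invariant.

```python
import cmath

def fourier_descriptors(contour, n_coeffs=4):
    # Treat boundary points (x, y) as complex numbers and take DFT
    # coefficients for k = 1 .. n_coeffs. The k = 0 "DC" term, which
    # encodes position, is skipped for translation invariance.
    z = [complex(x, y) for x, y in contour]
    N = len(z)
    coeffs = [sum(z[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                  for n in range(N)) / N
              for k in range(1, n_coeffs + 1)]
    norm = abs(coeffs[0]) or 1.0   # divide by |c1| for scale invariance
    return [abs(c) / norm for c in coeffs]

# A square contour and the same square translated and uniformly scaled.
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
moved = [(3 * x + 10, 3 * y - 4) for x, y in square]
fd1 = fourier_descriptors(square)
fd2 = fourier_descriptors(moved)
```

    Using magnitudes also discards the starting-point and rotation phase, so the two contours produce identical descriptors, which is exactly the invariance such global shape features rely on for trademark matching.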

  8. A computational platform to maintain and migrate manual functional annotations for BioCyc databases.

    Science.gov (United States)

    Walsh, Jesse R; Sen, Taner Z; Dickerson, Julie A

    2014-10-12

    BioCyc databases are an important resource for information on biological pathways and genomic data. Such databases represent the accumulation of biological data, some of which has been manually curated from the literature. An essential feature of these databases is the continuing data integration as new knowledge is discovered. As functional annotations are improved, scalable methods are needed for curators to manage annotations without detailed knowledge of the specific design of the BioCyc database. We have developed CycTools, a software tool which allows curators to maintain functional annotations in a model organism database. This tool builds on existing software to improve and simplify the import of user-provided annotation data into BioCyc databases. Additionally, CycTools automatically resolves synonyms and alternate identifiers contained within the database into the appropriate internal identifiers. Automating steps in the manual data entry process can improve curation efforts for major biological databases. The functionality of CycTools is demonstrated by transferring GO term annotations from MaizeCyc to matching proteins in CornCyc, both maize metabolic pathway databases available at MaizeGDB, and by creating strain-specific databases for metabolic engineering.

  9. Transparency and Accountability of Government Regulations as an Integral Part of Social Responsibility Effectiveness

    Directory of Open Access Journals (Sweden)

    Elena A. Frolova

    2016-09-01

    Full Text Available In the paper the author's view on the role of government in promoting social responsibility of business and the individual is described. The main features of the socio-economic situation in Russia today are presented (horizontal and vertical mobility of the population, a small number of organizations and extra-centralized public authorities, and the predominance of personal relations between economic agents). The necessity of increasing the role of individuals and businesses in the social system is substantiated and the basic directions of activity are suggested (prosocial preferences, interpersonal trust, redistribution of social responsibility). Transparency and accountability of public authorities are a very powerful tool to improve the quality of governance, and they are among the important conditions for social responsibility as well as for economic performance in modern Russia. The legitimacy of government is a multidimensional issue, and if we take into account Russian specifics it is necessary to point out public control, the quality of formal institutions, and the effectiveness of enforcement mechanisms. Good governance is also important for enhancing the quality of regulation.

  10. Database Description - AcEST | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name AcEST Alternative n...hi, Tokyo-to 192-0397 Tel: +81-42-677-1111(ext.3654) E-mail: Database classificat...eneris Taxonomy ID: 13818 Database description This is a database of EST sequences of Adiantum capillus-vene...(3): 223-227. External Links: Original website information Database maintenance site Plant Environmental Res...base Database Description Download License Update History of This Database Site Policy | Contact Us Database Description - AcEST | LSDB Archive ...

  11. What governs governance, and how does it evolve? The sociology of governance-in-action.

    Science.gov (United States)

    Fox, Nick J; Ward, Katie J

    2008-09-01

    Governance addresses a wide range of issues including social, economic and political continuity, security and integrity, individual and collective safety and the liberty and rights to self-actualization of citizens. Questions to be answered include how governance can be achieved and sustained within a social context imbued with cultural values and in which power is distributed unevenly and dynamically, and how governance impacts on individuals and institutions. Drawing on Gramscian notions of hegemony and consent, and recent political science literatures on regulation and meta-regulation, this paper develops a sociological model of governance that emphasizes a dynamic and responsive governance in action. Empirical data from a study of pharmaceutical governance is used to show how multiple institutions and actors are involved in sustaining effective governance. The model addresses issues of how governance is sustained in the face of change, why governance of practices varies from setting to setting, and how governance is achieved without legislation.

  12. License - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us SKIP Stemcell Database License License to Use This Database Last updated : 2017/03/13 You may use this database...specifies the license terms regarding the use of this database and the requirements you must follow in using this database.... The license for this database is specified in the Creative Common...s Attribution-Share Alike 4.0 International . If you use data from this database, please be sure attribute this database...al ... . The summary of the Creative Commons Attribution-Share Alike 4.0 International is found here . With regard to this database

  13. Web 2.0 and Participatory Governance

    Directory of Open Access Journals (Sweden)

    Peter Towbin

    2009-09-01

    Full Text Available By integrating a Geographic Information System (GIS) into a web portal, we allow a multi-way dialog between Hong Kong's citizens and planning officials. Alternative development plans for Lantau (Hong Kong's largest island) can be analyzed through interactive maps, which allow citizens to compare and comment on specific geo-referenced features. Lantau Island's extensive nature reserves, which offer protected nesting grounds for numerous bird species and other ecological and recreational services, are being weighed against extensive economic development. This experiment in open governance within China will also serve as a laboratory to study qualitative differences in citizen learning between online dialog and face-to-face group deliberation. Our experiments will explore resolutions to a classical economic paradox from social choice theory, and point to potential improvements in contemporary efforts to bring open and responsive government through information technology.

  14. KALIMER database development

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    The KALIMER database is an advanced, Web-based database for the integrated management of liquid-metal reactor design technology development. It is composed of a results database, an Inter-Office Communication (IOC) system, a 3D CAD database, and a reserved documents database. The results database stores research results from all phases of liquid-metal reactor design technology development under mid- and long-term nuclear R and D. IOC is a linkage control system between sub-projects for sharing and integrating research results for KALIMER. The 3D CAD database gives a schematic overview of the KALIMER design structure. The reserved documents database was developed to manage the documents and reports produced over the course of the project.

  15. KALIMER database development

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    The KALIMER database is an advanced, Web-based database for the integrated management of liquid-metal reactor design technology development. It is composed of a results database, an Inter-Office Communication (IOC) system, a 3D CAD database, and a reserved documents database. The results database stores research results from all phases of liquid-metal reactor design technology development under mid- and long-term nuclear R and D. IOC is a linkage control system between sub-projects for sharing and integrating research results for KALIMER. The 3D CAD database gives a schematic overview of the KALIMER design structure. The reserved documents database was developed to manage the documents and reports produced over the course of the project.

  16. Optimization of an individual re-identification modeling process using biometric features

    Energy Technology Data Exchange (ETDEWEB)

    Heredia-Langner, Alejandro; Amidan, Brett G.; Matzner, Shari; Jarman, Kristin H.

    2014-09-24

    We present results from the optimization of a re-identification process using two sets of biometric data obtained from the Civilian American and European Surface Anthropometry Resource Project (CAESAR) database. The datasets contain real measurements of features for 2378 individuals in standing (43 features) and seated (16 features) positions. A genetic algorithm (GA) was used to search a large combinatorial space in which different features are available between the probe (seated) and gallery (standing) datasets. Results show that optimized model predictions obtained using fewer than half of the 43 gallery features, and data from roughly 16% of the available individuals, produce better re-identification rates than two other approaches that use all the information available.
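    A genetic algorithm over feature-subset bitmasks, as used in the study, can be sketched as follows. The fitness function here is a stand-in (the real objective was the re-identification rate), and the population size, mutation rate and `TARGET` mask are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES = 43                          # gallery (standing) features
TARGET = rng.random(N_FEATURES) < 0.4    # stand-in for "truly informative" features

def fitness(mask):
    """Placeholder objective: reward informative features, penalize subset size.
    The study's actual objective was the re-identification rate."""
    return float((mask & TARGET).sum()) - 0.2 * float(mask.sum())

def evolve(pop_size=30, generations=40, p_mut=0.02):
    pop = rng.random((pop_size, N_FEATURES)) < 0.5   # random bitmask population
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # keep best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = int(rng.integers(1, N_FEATURES))    # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(N_FEATURES) < p_mut   # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()]

best_subset = evolve()   # boolean mask over the 43 gallery features
```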

  17. Semantic memory: a feature-based analysis and new norms for Italian.

    Science.gov (United States)

    Montefinese, Maria; Ambrosini, Ettore; Fairfield, Beth; Mammarella, Nicola

    2013-06-01

    Semantic norms for properties produced by native speakers are valuable tools for researchers interested in the structure of semantic memory and in category-specific semantic deficits in individuals following brain damage. The aims of this study were threefold. First, we sought to extend existing semantic norms by adopting an empirical approach to category (Exp. 1) and concept (Exp. 2) selection, in order to obtain a more representative set of semantic memory features. Second, we extensively outlined a new set of semantic production norms collected from Italian native speakers for 120 artifactual and natural basic-level concepts, using numerous measures and statistics following a feature-listing task (Exp. 3b). Finally, we aimed to create a new publicly accessible database, since only a few existing databases are publicly available online.

  18. Database Description - RPSD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name RPSD Alternative nam...e Rice Protein Structure Database DOI 10.18908/lsdba.nbdc00749-000 Creator Creator Name: Toshimasa Yamazaki ... Ibaraki 305-8602, Japan National Institute of Agrobiological Sciences Toshimasa Yamazaki E-mail : Databas...e classification Structure Databases - Protein structure Organism Taxonomy Name: Or...or name(s): Journal: External Links: Original website information Database maintenance site National Institu

  19. Database Description - FANTOM5 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us FANTOM5 Database Description General information of database Database name FANTOM5 Alternati...me: Rattus norvegicus Taxonomy ID: 10116 Taxonomy Name: Macaca mulatta Taxonomy ID: 9544 Database descriptio...l Links: Original website information Database maintenance site RIKEN Center for Life Science Technologies, ...ilable Web services Not available URL of Web services - Need for user registration Not available About This Database Database... Description Download License Update History of This Database Site Policy | Contact Us Database Description - FANTOM5 | LSDB Archive ...

  20. Governance codes: facts or fictions? A study of governance codes in Colombia

    Directory of Open Access Journals (Sweden)

    Julián Benavides Franco

    2010-10-01

    Full Text Available This article studies the effects on accounting performance and financing decisions of Colombian firms after issuing a corporate governance code. We assemble a database of Colombian issuers and test the hypotheses of improved performance and higher leverage after issuing a code. The results show that the firms' return on assets improves by more than 1% after the code's introduction; the effect is amplified by the code's quality. Additionally, the firms' leverage increased by more than 5% when code quality was factored into the analysis. These results suggest that the controlling parties' commitment to self-restraint, by reducing their private benefits and/or the expropriation of non-controlling parties through the code's introduction, is indeed an effective measure, and that the financial markets agree, increasing the supply of funds to the firms.

  1. [Relational database for urinary stone ambulatory consultation. Assessment of initial outcomes].

    Science.gov (United States)

    Sáenz Medina, J; Páez Borda, A; Crespo Martinez, L; Gómez Dos Santos, V; Barrado, C; Durán Poveda, M

    2010-05-01

    To create a relational database for monitoring lithiasic patients. We describe the architectural details and the initial results of the statistical analysis. Microsoft Access 2002 was used as the template. Four different tables were constructed to gather demographic data (table 1), clinical and laboratory findings (table 2), stone features (table 3) and therapeutic approach (table 4). For a reliability analysis of the database, the number of correctly stored data items was gathered. To evaluate the performance of the database, a prospective analysis was conducted, from May 2004 to August 2009, on 171 stone-free patients after treatment (ESWL, surgery or medical) from a total of 511 patients stored in the database. Lithiasic status (stone free or stone relapse) was used as the primary end point, while demographic factors (age, gender), lithiasic history, upper urinary tract alterations and characteristics of the stone (side, location, composition and size) were considered as predictive factors. A univariate analysis was conducted initially using the chi-square test and supplemented by Kaplan-Meier estimates for time to stone recurrence. A multiple Cox proportional hazards regression model was generated to jointly assess the prognostic value of the demographic factors and the predictive value of stone characteristics. For the reliability analysis, 22,084 data items were available corresponding to 702 consultations on 511 patients. Analysis of the data showed a recurrence rate of 85.4% (146/171; median time to recurrence 608 days, range 70-1758). In the univariate and multivariate analyses, none of the factors under consideration had a significant effect on the recurrence rate (p=ns). The relational database is useful for monitoring patients with urolithiasis. It allows easy control and updating, as well as data storage for later use. The analysis conducted for its evaluation showed no influence of demographic factors or stone features on stone recurrence.
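    The four-table layout described in the abstract can be sketched as a relational schema. This is an illustrative reconstruction in SQLite rather than the authors' Access design; all table and column names are assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE demographics (            -- table 1
    patient_id INTEGER PRIMARY KEY, age INTEGER, gender TEXT);
CREATE TABLE clinical (                -- table 2
    visit_id INTEGER PRIMARY KEY, patient_id INTEGER REFERENCES demographics,
    visit_date TEXT, lab_findings TEXT);
CREATE TABLE stone (                   -- table 3
    stone_id INTEGER PRIMARY KEY, patient_id INTEGER REFERENCES demographics,
    side TEXT, location TEXT, composition TEXT, size_mm REAL);
CREATE TABLE treatment (               -- table 4
    treatment_id INTEGER PRIMARY KEY, patient_id INTEGER REFERENCES demographics,
    modality TEXT, stone_free INTEGER);
""")
conn.execute("INSERT INTO demographics VALUES (1, 54, 'M')")
conn.execute("INSERT INTO stone VALUES (1, 1, 'left', 'renal pelvis', 'calcium oxalate', 8.5)")
conn.execute("INSERT INTO treatment VALUES (1, 1, 'ESWL', 1)")

# joining the tables recovers a per-patient record for follow-up analysis
rows = conn.execute("""
    SELECT d.patient_id, s.composition, t.modality
    FROM demographics d JOIN stone s USING (patient_id)
    JOIN treatment t USING (patient_id)
    WHERE t.stone_free = 1""").fetchall()
```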

  2. Asynchronous data change notification between database server and accelerator control systems

    International Nuclear Information System (INIS)

    Fu, Wenge; Nemesure, Seth; Morris, J.

    2012-01-01

    Database data change notification (DCN) is a commonly used feature: it allows a client to be informed when data have been changed on the server side by another client. Not all database management systems (DBMSs) provide an explicit DCN mechanism. Even for those DBMSs which support DCN (such as Oracle and MS SQL Server), some server-side and/or client-side programming may be required to make the DCN system work. This makes the setup of DCN between a database server and interested clients tedious and time-consuming. In accelerator control systems, there are many well-established client/server software architectures (such as CDEV, EPICS, and ADO) that can be used to implement data reflection servers that transfer data asynchronously to any client using the standard SET/GET API. This paper describes a method for using such a data reflection server to set up asynchronous DCN (ADCN) between a DBMS and clients. This method works well for all DBMSs which provide database trigger functionality. (authors)
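    The trigger-based change log the paper relies on can be demonstrated in miniature with SQLite (the paper targets systems such as Oracle; SQLite is used here only because it is self-contained). The `settings` and `change_log` table names are hypothetical; a reflection server would poll the log and push new rows to its clients.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE settings (name TEXT PRIMARY KEY, value TEXT);
CREATE TABLE change_log (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    tbl TEXT, row_name TEXT, new_value TEXT);
-- the trigger records every update so clients can be notified asynchronously
CREATE TRIGGER settings_dcn AFTER UPDATE ON settings
BEGIN
    INSERT INTO change_log (tbl, row_name, new_value)
    VALUES ('settings', NEW.name, NEW.value);
END;
""")
conn.execute("INSERT INTO settings VALUES ('beam_current', '10')")
conn.execute("UPDATE settings SET value = '12' WHERE name = 'beam_current'")

def poll_changes(conn, last_seen_id):
    """What a data reflection server would do periodically: fetch rows
    the client has not yet seen and forward them via SET/GET."""
    return conn.execute(
        "SELECT id, tbl, row_name, new_value FROM change_log WHERE id > ?",
        (last_seen_id,)).fetchall()

changes = poll_changes(conn, 0)   # only the UPDATE fired the trigger
```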

  3. The Role of Law in Adaptive Governance | Science Inventory ...

    Science.gov (United States)

    The term “governance” encompasses both governmental and nongovernmental participation in collective choice and action. Law dictates the structure, boundaries, rules, and processes within which governmental action takes place, and in doing so becomes one of the focal points for analysis of barriers to adaptation as the effects of climate change are felt. Adaptive governance must therefore contemplate a level of flexibility and evolution in governmental action beyond that currently found in the heavily administrative governments of many democracies. Nevertheless, over time, law itself has proven highly adaptive in western systems of government, evolving to address and even facilitate the emergence of new social norms (such as the rights of women and minorities) or to provide remedies for emerging problems (such as pollution). Thus, there is no question that law can adapt, evolve, and be reformed to make room for adaptive governance. In doing this, not only may barriers be removed, but law may be adjusted to facilitate adaptive governance and to aid in institutionalizing new and emerging approaches to governance. The key is to do so in a way that also enhances legitimacy, accountability, and justice, or else such reforms will never be adopted by democratic societies, or if adopted, will destabilize those societies. By identifying those aspects of the frameworks for adaptive governance reviewed in the introduction to this special feature relevant to the legal sy

  4. The problem of causality in corporate governance research: The case of governance indexes and firm valuation

    Directory of Open Access Journals (Sweden)

    Jimmy A. Saravia

    2017-09-01

    Full Text Available In recent years the problem of the determination of causality has become an increasingly important question in the field of corporate governance. This paper reviews the contemporary literature on the topic of causality; specifically, it examines the literature that investigates the causal relationship between corporate governance indexes and firm valuation, and finds that the current approach is to attempt to determine causality empirically, and that the problem remains unresolved. After explaining the reasons why it is not possible to determine causality using real-world data without falling prey to a logical fallacy, this paper discusses a traditional approach used in science to deal with the problem. In particular, the paper argues that the appropriate approach is to build theories, with causality featuring as a part of those theories, and then to test those theories for both logical and empirical consistency.

  5. An Aphasia Database on the Internet

    DEFF Research Database (Denmark)

    Axer, Hubertus; Jantzen, Jan; Graf von Keyserlingk, Diedrich

    2000-01-01

    A web-based software model was developed as an example for data mining in aphasiology. It is used for educating medical and engineering students. It is based upon a database of 254 aphasic patients which contains the diagnosis of the aphasia type, profiles of an aphasia test battery (Aachen Aphasia Test), and some further clinical information. In addition, the cerebral lesion profiles of 147 of these cases were standardized by transferring the coordinates of the lesions to a 3D reference brain based upon the ACPC coordinate system. Two artificial neural networks were used to perform a classification of the aphasia type. First, a coarse classification was achieved by using an assessment of spontaneous speech of the patient, which produced correct results in 87% of the test cases. Data analysis tools were used to select four features of the 30 available test features to yield a more accurate...

  6. Moral, ethical, and realist dilemmas of transnational governance of migration

    NARCIS (Netherlands)

    Bader, V.

    2012-01-01

    A core feature of the emerging international governance of migration is the reliance on knowledge and science in elaborating policies. Yet how scientists and researchers can productively contribute to policy making is unclear. This is the result of knowledge uncertainty, the complexity of migration

  7. Exploiting database technology for object based event storage and retrieval

    International Nuclear Information System (INIS)

    Rawat, Anil; Rajan, Alpana; Tomar, Shailendra Singh; Bansal, Anurag

    2005-01-01

    This paper discusses the storage and retrieval of experimental data in relational databases. Physics experiments carried out using reactors and particle accelerators generate huge amounts of data. Also, most data analysis and simulation programs are developed using object-oriented programming concepts. Hence, one of the most important design features of an experiment-related software framework is the way object persistency is handled. We discuss these issues in the light of the module developed by us for storing C++ objects in relational databases such as Oracle. This module was developed under the POOL persistency framework being developed for the LHC grid at CERN. (author)

  8. Corporate social responsibility practice of Malaysian public listed government-linked companies: A dimensional analysis

    Directory of Open Access Journals (Sweden)

    Lim Boon Keong

    2018-06-01

    Full Text Available This paper examines the corporate social responsibility (CSR) practices of Malaysian public-listed government-linked companies (GLCs) using a dimensional analysis. Four dimensions of CSR activities, namely community, employees, environment and governance, are investigated to study the latest CSR practice of GLCs in 2016. Each dimension is divided into three subcategories to further examine the performance of GLCs in a particular CSR area. This is the first paper in Malaysia to use CSR ratings (obtained from the CSRHub database) as a proxy for CSR practice; none of the past literature has been found to adopt this approach. The findings show that Malaysian public-listed GLCs performed better in the community, employees and environment dimensions, while tending to underperform in the governance dimension.

  9. The influence of corporate governance on project governance

    OpenAIRE

    Gonda, Pavel

    2011-01-01

    This work identifies the interaction between corporate governance and project management in project governance. It begins with an introduction to the basics of corporate governance and various corporate governance principles in chosen countries and organizations. It then introduces the theoretical background of project governance and its connection to corporate governance. In the practical part, the work analyzes the level of compliance with the Swiss code of best practice in a chosen company. The results con...

  10. Engineering governance: introducing a governance meta framework.

    OpenAIRE

    Brand, N.; Beens, B.; Vuuregge, E.; Batenburg, R.

    2011-01-01

    There is a need for a framework that depicts strategic choices within an organisation with regard to potential governance structures. The governance meta framework provides the necessary structure in the current developments of governance. Performance as well as conformance are embedded in this framework and provide the balance for all governance domains. (aut.ref.)

  11. Music genre classification using temporal domain features

    Science.gov (United States)

    Shiu, Yu; Kuo, C.-C. Jay

    2004-10-01

    Music genre provides an efficient way to index songs in a music database, and can be used as an effective means to retrieve music of a similar type, i.e. content-based music retrieval. In addition to other features, the temporal-domain features of a music signal are exploited in this research so as to increase the classification rate. Three temporal techniques are examined in depth. First, the hidden Markov model (HMM) is used to emulate the time-varying properties of music signals. Second, to further increase the classification rate, we propose another feature set that focuses on the residual part of music signals. Third, the overall classification rate is enhanced by classifying smaller segments from a test material individually and making the decision via majority voting. Experimental results are given to demonstrate the performance of the proposed techniques.
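    The third technique, per-segment classification followed by majority voting, is straightforward to sketch. The segment length and genre labels below are placeholders; in the paper each segment label would come from the HMM-based classifier.

```python
from collections import Counter

def split_segments(signal, seg_len):
    """Split a signal into non-overlapping segments of seg_len samples."""
    return [signal[i:i + seg_len]
            for i in range(0, len(signal) - seg_len + 1, seg_len)]

def classify_song(segment_labels):
    """Combine per-segment genre predictions by majority vote."""
    return Counter(segment_labels).most_common(1)[0][0]

# e.g. five segment-level predictions from a per-segment classifier
song_genre = classify_song(["rock", "rock", "jazz", "rock", "pop"])
```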

  12. Investigation of efficient features for image recognition by neural networks.

    Science.gov (United States)

    Goltsev, Alexander; Gritsenko, Vladimir

    2012-04-01

    In the paper, effective and simple features for image recognition (named LiRA-features) are investigated in the task of handwritten digit recognition. Two neural network classifiers are considered: a modified 3-layer perceptron LiRA and a modular assembly neural network. A method of feature selection is proposed that analyses the connection weights formed in the preliminary learning process of a neural network classifier. In experiments using the MNIST database of handwritten digits, the feature selection procedure allows the number of features to be reduced (from 60,000 to 7,000) while preserving comparable recognition capability and accelerating computations. An experimental comparison between the LiRA perceptron and the modular assembly neural network shows that the recognition capability of the modular assembly neural network is somewhat better. Copyright © 2011 Elsevier Ltd. All rights reserved.
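    The selection principle, ranking input features by the connection weights they acquire during preliminary training, can be illustrated as follows. The weight matrix here is synthetic; in the paper the weights come from a trained LiRA perceptron.

```python
import numpy as np

def select_features(weights, keep):
    """Rank input features by the summed magnitude of their outgoing
    connection weights and keep the `keep` strongest ones."""
    relevance = np.abs(weights).sum(axis=1)   # weights: (n_features, n_outputs)
    return np.sort(np.argsort(relevance)[::-1][:keep])

# synthetic weight matrix standing in for a trained classifier's first layer
rng = np.random.default_rng(1)
W = rng.normal(size=(60, 10))
W[:5] *= 10.0                    # make the first five features dominate
selected = select_features(W, keep=5)
```

    Discarding the low-relevance columns shrinks both the feature vectors and the classifier without retraining from scratch.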

  13. Feature hashing for fast image retrieval

    Science.gov (United States)

    Yan, Lingyu; Fu, Jiarun; Zhang, Hongxin; Yuan, Lu; Xu, Hui

    2018-03-01

    Current research on content-based image retrieval mainly focuses on robust feature extraction. However, due to the exponential growth of online images, it is necessary to consider searching among large-scale image collections, which is very time-consuming and unscalable. Hence, we need to pay much attention to the efficiency of image retrieval. In this paper, we propose a feature hashing method for image retrieval which not only generates a compact fingerprint for image representation, but also prevents large semantic loss during the hashing process. To generate the fingerprint, an objective function of semantic loss is constructed and minimized, combining the influence of both the neighborhood structure of the feature data and the mapping error. Since machine-learning-based hashing effectively preserves the neighborhood structure of the data, it yields visual words with strong discriminability. Furthermore, the generated binary codes make building image representations low-complexity, rendering retrieval efficient and scalable to large-scale databases. Experimental results show the good performance of our approach.
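    One common way to realize such compact binary fingerprints is random-hyperplane hashing; the paper's learned hash preserves neighborhood structure better, but the retrieval mechanics are the same. The code length, dimensionality and test vectors below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
N_BITS, DIM = 32, 128
planes = rng.normal(size=(N_BITS, DIM))   # fixed random hyperplanes

def fingerprint(feature_vec):
    """Hash a real-valued feature vector to a compact binary code:
    one bit per hyperplane, set by the sign of the projection."""
    return (planes @ feature_vec > 0).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.count_nonzero(a != b))

# nearby feature vectors should receive nearby codes
base = rng.normal(size=DIM)
near = base + 0.01 * rng.normal(size=DIM)   # slightly perturbed copy
far = rng.normal(size=DIM)                  # unrelated vector
```

    Retrieval then reduces to ranking database fingerprints by Hamming distance to the query's code.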

  14. Exploration of available feature detection and identification systems and their performance on radiographs

    Science.gov (United States)

    Wantuch, Andrew C.; Vita, Joshua A.; Jimenez, Edward S.; Bray, Iliana E.

    2016-10-01

    Despite object detection, recognition, and identification being very active areas of computer vision research, many of the available tools to aid in these processes are designed with only photographs in mind. Although some algorithms used specifically for feature detection and identification may not take explicit advantage of the colors available in an image, they still under-perform on radiographs, which are grayscale images. We are especially interested in the robustness of these algorithms, specifically their performance on a preexisting database of X-ray radiographs in compressed JPEG form, with multiple ways of describing pixel information. We review various aspects of the performance of available feature detection and identification systems, including MATLAB's Computer Vision Toolbox, VLFeat, and OpenCV, on our non-ideal database. In the process, we explore possible reasons for the algorithms' lessened ability to detect and identify features in X-ray radiographs.

  15. Reldata - a tool for reliability database management

    International Nuclear Information System (INIS)

    Vinod, Gopika; Saraf, R.K.; Babar, A.K.; Sanyasi Rao, V.V.S.; Tharani, Rajiv

    2000-01-01

    Component failure, repair and maintenance data are a very important element of any Probabilistic Safety Assessment study. The credibility of the results of such a study is enhanced if the data used are generated from the operating experience of similar power plants. Towards this objective, a computerised database was designed, with fields such as date and time of failure, component name, failure mode, failure cause, means of failure detection, reactor operating power status, repair times, down time, etc. This leads to the evaluation of plant-specific failure rates and on-demand failure probability/unavailability for all components. Systematic data updating can provide real-time component reliability parameter statistics and trend analysis, and this helps in planning maintenance strategies. A software package, RELDATA, has been developed which incorporates the database management and data analysis methods. This report describes the software features and the underlying methodology in detail. (author)
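    The plant-specific quantities mentioned above follow from standard point estimates; the report's exact estimators are not reproduced here, so the formulas below are the textbook ones (failures per operating hour, and steady-state unavailability from the failure rate and mean repair time).

```python
def failure_rate(n_failures, operating_hours):
    """Point estimate of the component failure rate (per hour)."""
    return n_failures / operating_hours

def unavailability(rate_per_hour, mean_repair_hours):
    """Steady-state unavailability q = lambda*MTTR / (1 + lambda*MTTR)."""
    x = rate_per_hour * mean_repair_hours
    return x / (1.0 + x)

# e.g. 3 failures over 30,000 component-hours, 10 h mean repair time
lam = failure_rate(3, 30000.0)
q = unavailability(lam, 10.0)
```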

  16. Database Description - DMPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name DMPD Alternative nam...e Dynamic Macrophage Pathway CSML Database DOI 10.18908/lsdba.nbdc00558-000 Creator Creator Name: Masao Naga...ty of Tokyo 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639 Tel: +81-3-5449-5615 FAX: +83-3-5449-5442 E-mail: Database...606 Taxonomy Name: Mammalia Taxonomy ID: 40674 Database description DMPD collects...e(s) Article title: Author name(s): Journal: External Links: Original website information Database maintenan

  17. Health data research in New Zealand: updating the ethical governance framework.

    Science.gov (United States)

    Ballantyne, Angela; Style, Rochelle

    2017-10-27

    Demand for health data for secondary research is increasing, both in New Zealand and worldwide. The New Zealand government has established a large research database, the Integrated Data Infrastructure (IDI), which facilitates research, and an independent ministerial advisory group, the Data Futures Partnership (DFP), to engage with citizens, the private sector and non-government organisations (NGOs) to facilitate trusted data use and strengthen the data ecosystem in New Zealand. We commend these steps but argue that key strategies for effective health-data governance remain absent in New Zealand. In particular, we argue in favour of the establishment of: (1) a specialist Health and Disability Ethics Committee (HDEC) to review applications for secondary-use data research; (2) a public registry of approved secondary-use research projects (similar to a clinical trials registry); and (3) detailed guidelines for the review and approval of secondary-use data research. We present an ethical framework based on the values of public interest, trust and transparency to justify these innovations.

  18. Database Dump - fRNAdb | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us fRNAdb Database Dump Data detail Data name Database Dump DOI 10.18908/lsdba.nbdc00452-002 De... data (tab separeted text) Data file File name: Database_Dump File URL: ftp://ftp....biosciencedbc.jp/archive/frnadb/LATEST/Database_Dump File size: 673 MB Simple search URL - Data acquisition...s. Data analysis method - Number of data entries 4 files - About This Database Database Description Download... License Update History of This Database Site Policy | Contact Us Database Dump - fRNAdb | LSDB Archive ...

  19. Improving mass candidate detection in mammograms via feature maxima propagation and local feature selection.

    Science.gov (United States)

    Melendez, Jaime; Sánchez, Clara I; van Ginneken, Bram; Karssemeijer, Nico

    2014-08-01

    Mass candidate detection is a crucial component of multistep computer-aided detection (CAD) systems. It is usually performed by combining several local features by means of a classifier. When these features are processed on a per-image-location basis (e.g., for each pixel), mismatching problems may arise while constructing feature vectors for classification, which is especially true when the behavior expected from the evaluated features is a peaked response due to the presence of a mass. In this study, two of these problems, consisting of maxima misalignment and differences of maxima spread, are identified and two solutions are proposed. The first proposed method, feature maxima propagation, reproduces feature maxima through their neighboring locations. The second method, local feature selection, combines different subsets of features for different feature vectors associated with image locations. Both methods are applied independently and together. The proposed methods are included in a mammogram-based CAD system intended for mass detection in screening. Experiments are carried out with a database of 382 digital cases. Sensitivity is assessed at two sets of operating points. The first one is the interval of 3.5-15 false positives per image (FPs/image), which is typical for mass candidate detection. The second one is 1 FP/image, which allows to estimate the quality of the mass candidate detector's output for use in subsequent steps of the CAD system. The best results are obtained when the proposed methods are applied together. In that case, the mean sensitivity in the interval of 3.5-15 FPs/image significantly increases from 0.926 to 0.958 (p < 0.0002). At the lower rate of 1 FP/image, the mean sensitivity improves from 0.628 to 0.734 (p < 0.0002). Given the improved detection performance, the authors believe that the strategies proposed in this paper can render mass candidate detection approaches based on image location classification more robust to feature

  20. Face detection on distorted images using perceptual quality-aware features

    Science.gov (United States)

    Gunasekar, Suriya; Ghosh, Joydeep; Bovik, Alan C.

    2014-02-01

    We quantify the degradation in performance of a popular and effective face detector when human-perceived image quality is degraded by distortions due to additive white gaussian noise, gaussian blur or JPEG compression. It is observed that, within a certain range of perceived image quality, a modest increase in image quality can drastically improve face detection performance. These results can be used to guide resource or bandwidth allocation in a communication/delivery system that is associated with face detection tasks. A new face detector based on QualHOG features is also proposed that augments face-indicative HOG features with perceptual quality-aware spatial Natural Scene Statistics (NSS) features, yielding improved tolerance against image distortions. The new detector provides statistically significant improvements over a strong baseline on a large database of face images representing a wide range of distortions. To facilitate this study, we created a new Distorted Face Database, containing face and non-face patches from images impaired by a variety of common distortion types and levels. This new dataset is available for download and further experimentation at www.ideal.ece.utexas.edu/˜suriya/DFD/.

  1. Two Search Techniques within a Human Pedigree Database

    OpenAIRE

    Gersting, J. M.; Conneally, P. M.; Rogers, K.

    1982-01-01

    This paper presents the basic features of two search techniques from MEGADATS-2 (MEdical Genetics Acquisition and DAta Transfer System), a system for collecting, storing, retrieving and plotting human family pedigrees. The individual search provides a quick method for locating an individual in the pedigree database. This search uses a modified soundex coding and an inverted file structure based on a composite key. The navigational search uses a set of pedigree traversal operations (individual...

  2. Database for fusion devices and associated fuel systems

    International Nuclear Information System (INIS)

    Woolgar, P.W.

    1983-03-01

    A computerized database storage and retrieval system has been set up for fusion devices and the associated fusion fuel systems which should be a useful tool for the CFFTP program and other users. The features of the Wang 'Alliance' system are discussed for this application, as well as some of the limitations of the system. Recommendations are made on the operation, upkeep and further development that should take place to implement and maintain the system

  3. Oracle database 12c release 2 in-memory tips and techniques for maximum performance

    CERN Document Server

    Banerjee, Joyjeet

    2017-01-01

    This Oracle Press guide shows, step-by-step, how to optimize database performance and cut transaction processing time using Oracle Database 12c Release 2 In-Memory. Oracle Database 12c Release 2 In-Memory: Tips and Techniques for Maximum Performance features hands-on instructions, best practices, and expert tips from an Oracle enterprise architect. You will learn how to deploy the software, use In-Memory Advisor, build queries, and interoperate with Oracle RAC and Multitenant. A complete chapter of case studies illustrates real-world applications. • Configure Oracle Database 12c and construct In-Memory enabled databases • Edit and control In-Memory options from the graphical interface • Implement In-Memory with Oracle Real Application Clusters • Use the In-Memory Advisor to determine what objects to keep In-Memory • Optimize In-Memory queries using groups, expressions, and aggregations • Maximize performance using Oracle Exadata Database Machine and In-Memory option • Use Swingbench to create d...

  4. Toward An Unstructured Mesh Database

    Science.gov (United States)

    Rezaei Mahdiraji, Alireza; Baumann, Peter Peter

    2014-05-01

    Unstructured meshes are used in several application domains such as earth sciences (e.g., seismology), medicine, oceanography, cli- mate modeling, GIS as approximate representations of physical objects. Meshes subdivide a domain into smaller geometric elements (called cells) which are glued together by incidence relationships. The subdivision of a domain allows computational manipulation of complicated physical structures. For instance, seismologists model earthquakes using elastic wave propagation solvers on hexahedral meshes. The hexahedral con- tains several hundred millions of grid points and millions of hexahedral cells. Each vertex node in the hexahedrals stores a multitude of data fields. To run simulation on such meshes, one needs to iterate over all the cells, iterate over incident cells to a given cell, retrieve coordinates of cells, assign data values to cells, etc. Although meshes are used in many application domains, to the best of our knowledge there is no database vendor that support unstructured mesh features. Currently, the main tool for querying and manipulating unstructured meshes are mesh libraries, e.g., CGAL and GRAL. Mesh li- braries are dedicated libraries which includes mesh algorithms and can be run on mesh representations. The libraries do not scale with dataset size, do not have declarative query language, and need deep C++ knowledge for query implementations. Furthermore, due to high coupling between the implementations and input file structure, the implementations are less reusable and costly to maintain. A dedicated mesh database offers the following advantages: 1) declarative querying, 2) ease of maintenance, 3) hiding mesh storage structure from applications, and 4) transparent query optimization. To design a mesh database, the first challenge is to define a suitable generic data model for unstructured meshes. 
We proposed ImG-Complexes data model as a generic topological mesh data model which extends incidence graph model to multi

  5. Hierarchical Fuzzy Feature Similarity Combination for Presentation Slide Retrieval

    Directory of Open Access Journals (Sweden)

    A. Kushki

    2009-02-01

    Full Text Available This paper proposes a novel XML-based system for retrieval of presentation slides to address the growing data mining needs in presentation archives for educational and scholarly settings. In particular, contextual information, such as structural and formatting features, is extracted from the open format XML representation of presentation slides. In response to a textual user query, each extracted feature is used to compute a fuzzy relevance score for each slide in the database. The fuzzy scores from the various features are then combined through a hierarchical scheme to generate a single relevance score per slide. Various fusion operators and their properties are examined with respect to their effect on retrieval performance. Experimental results indicate a significant increase in retrieval performance measured in terms of precision-recall. The improvements are attributed to both the incorporation of the contextual features and the hierarchical feature combination scheme.

  6. FMA Roundtable on New Developments in European Corporate Governance

    DEFF Research Database (Denmark)

    Elson, Charles; Berglund, Tom; Rapp, Marc Steffen

    2017-01-01

    In this discussion that took place in Helsinki last June, three European financial economists and a leading authority on U.S. corporate governance consider the relative strengths and weaknesses of the world's two main corporate financing and governance systems: the Anglo-American market...... to address the question: can we expect one of these two systems to prevail over time, or will both systems continue to coexist, while seeking to adopt some of the most valuable aspects of the other? The consensus was that, in Germany as well as continental Europe, corporate financing and governance practices......-based system, with its dispersed share ownership, lots of takeovers, and an otherwise vigorous market for corporate control; and the relationship-based, or “main bank,” system associated with Japan, Germany, and continental Europe generally. The distinguishing features of the relationship-based system...

  7. Color Texture Image Retrieval Based on Local Extrema Features and Riemannian Distance

    Directory of Open Access Journals (Sweden)

    Minh-Tan Pham

    2017-10-01

    Full Text Available A novel efficient method for content-based image retrieval (CBIR is developed in this paper using both texture and color features. Our motivation is to represent and characterize an input image by a set of local descriptors extracted from characteristic points (i.e., keypoints within the image. Then, dissimilarity measure between images is calculated based on the geometric distance between the topological feature spaces (i.e., manifolds formed by the sets of local descriptors generated from each image of the database. In this work, we propose to extract and use the local extrema pixels as our feature points. Then, the so-called local extrema-based descriptor (LED is generated for each keypoint by integrating all color, spatial as well as gradient information captured by its nearest local extrema. Hence, each image is encoded by an LED feature point cloud and Riemannian distances between these point clouds enable us to tackle CBIR. Experiments performed on several color texture databases including Vistex, STex, color Brodazt, USPtex and Outex TC-00013 using the proposed approach provide very efficient and competitive results compared to the state-of-the-art methods.

  8. CORE-Hom: a powerful and exhaustive database of clinical trials in homeopathy.

    Science.gov (United States)

    Clausen, Jürgen; Moss, Sian; Tournier, Alexander; Lüdtke, Rainer; Albrecht, Henning

    2014-10-01

    The CORE-Hom database was created to answer the need for a reliable and publicly available source of information in the field of clinical research in homeopathy. As of May 2014 it held 1048 entries of clinical trials, observational studies and surveys in the field of homeopathy, including second publications and re-analyses. 352 of the trials referenced in the database were published in peer reviewed journals, 198 of which were randomised controlled trials. The most often used remedies were Arnica montana (n = 103) and Traumeel(®) (n = 40). The most studied medical conditions were respiratory tract infections (n = 126) and traumatic injuries (n = 110). The aim of this article is to introduce the database to the public, describing and explaining the interface, features and content of the CORE-Hom database. Copyright © 2014 The Faculty of Homeopathy. Published by Elsevier Ltd. All rights reserved.

  9. SPANG: a SPARQL client supporting generation and reuse of queries for distributed RDF databases.

    Science.gov (United States)

    Chiba, Hirokazu; Uchiyama, Ikuo

    2017-02-08

    Toward improved interoperability of distributed biological databases, an increasing number of datasets have been published in the standardized Resource Description Framework (RDF). Although the powerful SPARQL Protocol and RDF Query Language (SPARQL) provides a basis for exploiting RDF databases, writing SPARQL code is burdensome for users including bioinformaticians. Thus, an easy-to-use interface is necessary. We developed SPANG, a SPARQL client that has unique features for querying RDF datasets. SPANG dynamically generates typical SPARQL queries according to specified arguments. It can also call SPARQL template libraries constructed in a local system or published on the Web. Further, it enables combinatorial execution of multiple queries, each with a distinct target database. These features facilitate easy and effective access to RDF datasets and integrative analysis of distributed data. SPANG helps users to exploit RDF datasets by generation and reuse of SPARQL queries through a simple interface. This client will enhance integrative exploitation of biological RDF datasets distributed across the Web. This software package is freely available at http://purl.org/net/spang .

  10. Development of the severe accident risk information database management system SARD

    International Nuclear Information System (INIS)

    Ahn, Kwang Il; Kim, Dong Ha

    2003-01-01

    The main purpose of this report is to introduce essential features and functions of a severe accident risk information management system, SARD (Severe Accident Risk Database Management System) version 1.0, which has been developed in Korea Atomic Energy Research Institute, and database management and data retrieval procedures through the system. The present database management system has powerful capabilities that can store automatically and manage systematically the plant-specific severe accident analysis results for core damage sequences leading to severe accidents, and search intelligently the related severe accident risk information. For that purpose, the present database system mainly takes into account the plant-specific severe accident sequences obtained from the Level 2 Probabilistic Safety Assessments (PSAs), base case analysis results for various severe accident sequences (such as code responses and summary for key-event timings), and related sensitivity analysis results for key input parameters/models employed in the severe accident codes. Accordingly, the present database system can be effectively applied in supporting the Level 2 PSA of similar plants, for fast prediction and intelligent retrieval of the required severe accident risk information for the specific plant whose information was previously stored in the database system, and development of plant-specific severe accident management strategies

  11. Development of the severe accident risk information database management system SARD

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Kwang Il; Kim, Dong Ha

    2003-01-01

    The main purpose of this report is to introduce essential features and functions of a severe accident risk information management system, SARD (Severe Accident Risk Database Management System) version 1.0, which has been developed in Korea Atomic Energy Research Institute, and database management and data retrieval procedures through the system. The present database management system has powerful capabilities that can store automatically and manage systematically the plant-specific severe accident analysis results for core damage sequences leading to severe accidents, and search intelligently the related severe accident risk information. For that purpose, the present database system mainly takes into account the plant-specific severe accident sequences obtained from the Level 2 Probabilistic Safety Assessments (PSAs), base case analysis results for various severe accident sequences (such as code responses and summary for key-event timings), and related sensitivity analysis results for key input parameters/models employed in the severe accident codes. Accordingly, the present database system can be effectively applied in supporting the Level 2 PSA of similar plants, for fast prediction and intelligent retrieval of the required severe accident risk information for the specific plant whose information was previously stored in the database system, and development of plant-specific severe accident management strategies.

  12. Analysis And Database Design from Import And Export Reporting in Company in Indonesia

    Directory of Open Access Journals (Sweden)

    Novan Zulkarnain

    2016-03-01

    Full Text Available Director General of Customs and Excise (DJBC is a government agency that oversees exports and imports in Indonesia. Companies that receive exemption and tax returns are required to wipe Orkan  activities export and import by using IT-based reporting. This study aimed to analyze and design  databases to support the reporting of customs based report format for Director General of Customs and Excise No. PER-09/BC/2014. Data collection used was the Fact Finding Techniques consisted of studying the documents, interviews, observation, and literature study. The methods used for Design Database System is DB-SDLC (System Development Life Cycle Database, namely: Conceptual Design, Design Logical and Physical Design. The result obtained is ERD (Entity Relationship Diagram that can be used in the development of Customs Reporting System in companies throughout Indonesia. In conclusions, ERD has been able to meet all the reporting elements of customs.

  13. Quantification of microstructural features in α/β titanium alloys

    International Nuclear Information System (INIS)

    Tiley, J.; Searles, T.; Lee, E.; Kar, S.; Banerjee, R.; Russ, J.C.; Fraser, H.L.

    2004-01-01

    Mechanical properties of α/β Ti alloys are closely related to their microstructure. The complexity of the microstructural features involved makes it rather difficult to develop models for predicting properties of these alloys. Developing predictive rules-based models for α/β Ti alloys requires a huge database consisting of quantified microstructural data. This in turn requires the development of rigorous stereological procedures capable of quantifying the various microstructural features of interest imaged using optical and scanning electron microscopy (SEM) micrographs. In the present paper, rigorous stereological procedures have been developed for quantifying four important microstructural features in these alloys: thickness of Widmanstaetten α laths, colony scale factor, prior β grain size, and volume fraction of Widmanstaetten α laths

  14. DOT Online Database

    Science.gov (United States)

    Page Home Table of Contents Contents Search Database Search Login Login Databases Advisory Circulars accessed by clicking below: Full-Text WebSearch Databases Database Records Date Advisory Circulars 2092 5 data collection and distribution policies. Document Database Website provided by MicroSearch

  15. USDA food and nutrient databases provide the infrastructure for food and nutrition research, policy, and practice.

    Science.gov (United States)

    Ahuja, Jaspreet K C; Moshfegh, Alanna J; Holden, Joanne M; Harris, Ellen

    2013-02-01

    The USDA food and nutrient databases provide the basic infrastructure for food and nutrition research, nutrition monitoring, policy, and dietary practice. They have had a long history that goes back to 1892 and are unique, as they are the only databases available in the public domain that perform these functions. There are 4 major food and nutrient databases released by the Beltsville Human Nutrition Research Center (BHNRC), part of the USDA's Agricultural Research Service. These include the USDA National Nutrient Database for Standard Reference, the Dietary Supplement Ingredient Database, the Food and Nutrient Database for Dietary Studies, and the USDA Food Patterns Equivalents Database. The users of the databases are diverse and include federal agencies, the food industry, health professionals, restaurants, software application developers, academia and research organizations, international organizations, and foreign governments, among others. Many of these users have partnered with BHNRC to leverage funds and/or scientific expertise to work toward common goals. The use of the databases has increased tremendously in the past few years, especially the breadth of uses. These new uses of the data are bound to increase with the increased availability of technology and public health emphasis on diet-related measures such as sodium and energy reduction. Hence, continued improvement of the databases is important, so that they can better address these challenges and provide reliable and accurate data.

  16. A Database of Interplanetary and Interstellar Dust Detected by the Wind Spacecraft

    Science.gov (United States)

    Malaspina, David M.; Wilson, Lynn B., III

    2016-01-01

    It was recently discovered that the WAVES instrument on the Wind spacecraft has been detecting, in situ, interplanetary and interstellar dust of approximately 1 micron radius for the past 22 years. These data have the potential to enable advances in the study of cosmic dust and dust-plasma coupling within the heliosphere due to several unique properties: the Wind dust database spans two full solar cycles; it contains over 107,000 dust detections; it contains information about dust grain direction of motion; it contains data exclusively from the space environment within 350 Earth radii of Earth; and it overlaps by 12 years with the Ulysses dust database. Further, changes to the WAVES antenna response and the plasma environment traversed by Wind over the lifetime of the Wind mission create an opportunity for these data to inform investigations of the physics governing the coupling of dust impacts on spacecraft surfaces to electric field antennas. A Wind dust database has been created to make the Wind dust data easily accessible to the heliophysics community and other researchers. This work describes the motivation, methodology, contents, and accessibility of the Wind dust database.

  17. WEB-BASED DATABASE ON RENEWAL TECHNOLOGIES ...

    Science.gov (United States)

    As U.S. utilities continue to shore up their aging infrastructure, renewal needs now represent over 43% of annual expenditures compared to new construction for drinking water distribution and wastewater collection systems (Underground Construction [UC], 2016). An increased understanding of renewal options will ultimately assist drinking water utilities in reducing water loss and help wastewater utilities to address infiltration and inflow issues in a cost-effective manner. It will also help to extend the service lives of both drinking water and wastewater mains. This research effort involved collecting case studies on the use of various trenchless pipeline renewal methods and providing the information in an online searchable database. The overall objective was to further support technology transfer and information sharing regarding emerging and innovative renewal technologies for water and wastewater mains. The result of this research is a Web-based, searchable database that utility personnel can use to obtain technology performance and cost data, as well as case study references. The renewal case studies include: technologies used; the conditions under which the technology was implemented; costs; lessons learned; and utility contact information. The online database also features a data mining tool for automated review of the technologies selected and cost data. Based on a review of the case study results and industry data, several findings are presented on tren

  18. MycoDB, a global database of plant response to mycorrhizal fungi

    Science.gov (United States)

    Chaudhary, V. Bala; Rúa, Megan A.; Antoninka, Anita; Bever, James D.; Cannon, Jeffery; Craig, Ashley; Duchicela, Jessica; Frame, Alicia; Gardes, Monique; Gehring, Catherine; Ha, Michelle; Hart, Miranda; Hopkins, Jacob; Ji, Baoming; Johnson, Nancy Collins; Kaonongbua, Wittaya; Karst, Justine; Koide, Roger T.; Lamit, Louis J.; Meadow, James; Milligan, Brook G.; Moore, John C.; Pendergast, Thomas H., IV; Piculell, Bridget; Ramsby, Blake; Simard, Suzanne; Shrestha, Shubha; Umbanhowar, James; Viechtbauer, Wolfgang; Walters, Lawrence; Wilson, Gail W. T.; Zee, Peter C.; Hoeksema, Jason D.

    2016-05-01

    Plants form belowground associations with mycorrhizal fungi in one of the most common symbioses on Earth. However, few large-scale generalizations exist for the structure and function of mycorrhizal symbioses, as the nature of this relationship varies from mutualistic to parasitic and is largely context-dependent. We announce the public release of MycoDB, a database of 4,010 studies (from 438 unique publications) to aid in multi-factor meta-analyses elucidating the ecological and evolutionary context in which mycorrhizal fungi alter plant productivity. Over 10 years with nearly 80 collaborators, we compiled data on the response of plant biomass to mycorrhizal fungal inoculation, including meta-analysis metrics and 24 additional explanatory variables that describe the biotic and abiotic context of each study. We also include phylogenetic trees for all plants and fungi in the database. To our knowledge, MycoDB is the largest ecological meta-analysis database. We aim to share these data to highlight significant gaps in mycorrhizal research and encourage synthesis to explore the ecological and evolutionary generalities that govern mycorrhizal functioning in ecosystems.

  19. 48 CFR 225.7303-3 - Government-to-government agreements.

    Science.gov (United States)

    2010-10-01

    ... Military Sales 225.7303-3 Government-to-government agreements. If a government-to-government agreement... support of a specifically defined weapon system, major end item, or support item, contains language in conflict with the provisions of this section, the language of the government-to-government agreement...

  20. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  1. Cross-Subsidies and Government Transfers: Impacts on Electricity Service Quality in Colombia

    Directory of Open Access Journals (Sweden)

    Fan Li

    2018-05-01

    Full Text Available An affordable and reliable supply of electricity service is essential to encourage sustainable social development in developing countries. Colombia uses cross-subsidies to prompt electricity usage for poor households. This raises the issue of whether charging lower prices to poor households, while boosting their consumption, induces utilities to lower the quality of service received by them. This paper uses unique databases and examines how underfunded cross-subsidies affect perceived electricity service quality across consumer groups. Results indicate that when facing financial deficits, utilities provide lower perceived service quality to subsidized consumers than to residents paying surcharges. The difference in perceived quality across consumer groups is reduced by an increase in the amount of (external government transfers. To prompt electricity consumption by the poor, the Colombian government should fund subsidies, strengthen quality regulation, and increase the transparency and reliability of government transfers.

  2. Government-to-Government E-Government: A Case Study of a Federal Financial Program

    Science.gov (United States)

    Faokunla, Olumide Adegboyega

    2012-01-01

    The problem with the study of the concept of electronic government (e-Gov) is that scholars in the field have not adequately explored various dimensions of the concept. Literature on e-Gov is replete with works on the form of government to consumer e-Gov. Much less work had been done on the government to government (G2G) e-Gov. This qualitative…

  3. Rainwater Harvesting and Social Networks: Visualising Interactions for Niche Governance, Resilience and Sustainability

    Directory of Open Access Journals (Sweden)

    Sarah Ward

    2016-11-01

    Full Text Available Visualising interactions across urban water systems to explore transition and change processes requires the development of methods and models at different scales. This paper contributes a model representing the network interactions of rainwater harvesting (RWH infrastructure innovators and other organisations in the UK RWH niche to identify how resilience and sustainability feature within niche governance in practice. The RWH network interaction model was constructed using a modified participatory social network analysis (SNA. The SNA was further analysed through the application of a two-part analytical framework based on niche management and the safe, resilient and sustainable (‘Safe and SuRe’ framework. Weak interactions between some RWH infrastructure innovators and other organisations highlighted reliance on a limited number of persuaders to influence the regime and landscape, which were underrepresented. Features from niche creation and management were exhibited by the RWH network interaction model, though some observed characteristics were not represented. Additional Safe and SuRe features were identified covering diverse innovation, responsivity, no protection, unconverged expectations, primary influencers, polycentric or adaptive governance and multiple learning-types. These features enable RWH infrastructure innovators and other organisations to reflect on improving resilience and sustainability, though further research in other sectors would be useful to verify and validate observation of the seven features.

  4. Technical Aspects of Interfacing MUMPS to an External SQL Relational Database Management System

    Science.gov (United States)

    Kuzmak, Peter M.; Walters, Richard F.; Penrod, Gail

    1988-01-01

    This paper describes an interface connecting InterSystems MUMPS (M/VX) to an external relational DBMS, the SYBASE Database Management System. The interface enables MUMPS to operate in a relational environment and gives the MUMPS language full access to a complete set of SQL commands. MUMPS generates SQL statements as ASCII text and sends them to the RDBMS. The RDBMS executes the statements and returns ASCII results to MUMPS. The interface suggests that the language features of MUMPS make it an attractive tool for use in the relational database environment. The approach described in this paper separates MUMPS from the relational database. Positioning the relational database outside of MUMPS promotes data sharing and permits a number of different options to be used for working with the data. Other languages like C, FORTRAN, and COBOL can access the RDBMS database. Advanced tools provided by the relational database vendor can also be used. SYBASE is an advanced high-performance transaction-oriented relational database management system for the VAX/VMS and UNIX operating systems. SYBASE is designed using a distributed open-systems architecture, and is relatively easy to interface with MUMPS.

  5. Computer-Aided Diagnosis for Breast Ultrasound Using Computerized BI-RADS Features and Machine Learning Methods.

    Science.gov (United States)

    Shan, Juan; Alam, S Kaisar; Garra, Brian; Zhang, Yingtao; Ahmed, Tahira

    2016-04-01

    This work identifies effective computable features from the Breast Imaging Reporting and Data System (BI-RADS), to develop a computer-aided diagnosis (CAD) system for breast ultrasound. Computerized features corresponding to ultrasound BI-RADS categories were designed and tested using a database of 283 pathology-proven benign and malignant lesions. Features were selected based on classification performance using a "bottom-up" approach for different machine learning methods, including decision tree, artificial neural network, random forest and support vector machine. Using 10-fold cross-validation on the database of 283 cases, the highest area under the receiver operating characteristic (ROC) curve (AUC) was 0.84 from a support vector machine with 77.7% overall accuracy; the highest overall accuracy, 78.5%, was from a random forest with an AUC of 0.83. Lesion margin and orientation were optimum features common to all of the different machine learning methods. These features can be used in CAD systems to help distinguish benign from worrisome lesions. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. All rights reserved.
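    The 10-fold cross-validation protocol described above can be sketched as follows. This is a minimal stand-in using synthetic data and a nearest-centroid classifier, not the paper's BI-RADS features or its SVM/random-forest models.

    ```python
    import numpy as np

    def cv_accuracy(X, y, k=10, seed=0):
        """10-fold cross-validated accuracy of a nearest-centroid
        classifier, illustrating only the evaluation protocol."""
        rng = np.random.default_rng(seed)
        folds = np.array_split(rng.permutation(len(X)), k)
        correct = 0
        for i, test in enumerate(folds):
            train = np.concatenate([f for j, f in enumerate(folds) if j != i])
            c0 = X[train][y[train] == 0].mean(axis=0)   # "benign" centroid
            c1 = X[train][y[train] == 1].mean(axis=0)   # "malignant" centroid
            pred = (np.linalg.norm(X[test] - c1, axis=1)
                    < np.linalg.norm(X[test] - c0, axis=1)).astype(int)
            correct += int((pred == y[test]).sum())
        return correct / len(X)

    # Synthetic two-class "lesion feature" data standing in for the 283 cases.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0.0, 1.0, (140, 2)), rng.normal(2.0, 1.0, (143, 2))])
    y = np.array([0] * 140 + [1] * 143)
    acc = cv_accuracy(X, y)
    print(round(acc, 3))
    ```

    Every case is used for testing exactly once, which is what makes the reported accuracy and AUC estimates less dependent on a single train/test split.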

  6. Interactive governance

    DEFF Research Database (Denmark)

    Sørensen, Eva; Torfing, Jacob; Peters, B. Guy

    Governance has become one of the most commonly used concepts in contemporary political science. It is, however, often used to mean a variety of different things. This book helps to clarify this conceptual muddle by concentrating on one variety of governance: interactive governance. The authors argue that although the state may remain important for many aspects of governing, interactions between state and society represent an important, and perhaps increasingly important, dimension of governance. These interactions may be with social actors such as networks, with market actors or with other governments, but all these forms represent means of governing involving mixtures of state action with the actions of other entities. This book explores thoroughly this meaning of governance, and links it to broader questions of governance. In the process of explicating this dimension of governance the authors also...

  7. Database Description - eSOL | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description: general information. Database name: eSOL. Creator affiliation: The Research and Development of Biological Databases Project, National Institute of Genetics. Address: 4259 Nagatsuta-cho, Midori-ku, Yokohama, Kanagawa 226-8501, Japan; Tel.: +81-45-924-5785. Database classification: Protein sequence databases - Protein properties. Organism: Escherichia coli (Taxonomy ID: 562). Reference: Proc Natl Acad Sci U S A. 2009 Mar 17;106(11):4201-6. External links: original website information; database maintenance site.

  8. "Mr. Database" : Jim Gray and the History of Database Technologies.

    Science.gov (United States)

    Hanwahr, Nils C

    2017-12-01

    Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the developments of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e.g., leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.

  9. Mathematics for Databases

    NARCIS (Netherlands)

    ir. Sander van Laar

    2007-01-01

    A formal description of a database consists of the description of the relations (tables) of the database together with the constraints that must hold on the database. Furthermore the contents of a database can be retrieved using queries. These constraints and queries for databases can very well be

  10. Alternatives to relational database: comparison of NoSQL and XML approaches for clinical data storage.

    Science.gov (United States)

    Lee, Ken Ka-Yin; Tang, Wai-Choi; Choi, Kup-Sze

    2013-04-01

    Clinical data are dynamic in nature, often arranged hierarchically and stored as free text and numbers. Effective management of clinical data and the transformation of the data into structured format for data analysis are therefore challenging issues in electronic health records development. Despite the popularity of relational databases, the scalability of the NoSQL database model and the document-centric data structure of XML databases appear to be promising features for effective clinical data management. In this paper, three database approaches--NoSQL, XML-enabled and native XML--are investigated to evaluate their suitability for structured clinical data. The database query performance is reported, together with our experience in the databases development. The results show that NoSQL database is the best choice for query speed, whereas XML databases are advantageous in terms of scalability, flexibility and extensibility, which are essential to cope with the characteristics of clinical data. While NoSQL and XML technologies are relatively new compared to the conventional relational database, both of them demonstrate potential to become a key database technology for clinical data management as the technology further advances. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
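    The contrast between document-centric and XML storage of hierarchical clinical data can be illustrated with Python's standard library. The record fields below are invented for illustration; neither the schema nor the query style is taken from the paper.

    ```python
    import json
    import xml.etree.ElementTree as ET

    # One clinical record, hierarchical by nature: the same data as a
    # JSON document (NoSQL/document style) and as XML.
    record = {"patient": {"id": "P001",
                          "labs": [{"test": "glucose", "value": 5.4},
                                   {"test": "hba1c",  "value": 6.1}]}}
    doc = json.dumps(record)

    xml_doc = """<patient id="P001">
      <lab test="glucose" value="5.4"/>
      <lab test="hba1c" value="6.1"/>
    </patient>"""

    # Document-style query: navigate the parsed JSON directly.
    labs = json.loads(doc)["patient"]["labs"]
    glucose = next(l["value"] for l in labs if l["test"] == "glucose")

    # XML query: XPath-style navigation over the same content.
    root = ET.fromstring(xml_doc)
    hba1c = float(root.find("./lab[@test='hba1c']").get("value"))

    print(glucose, hba1c)  # → 5.4 6.1
    ```

    Both representations keep the hierarchy intact without forcing it into flat relational tables, which is the property the paper argues makes them attractive for clinical data.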

  11. Representation and Metrics Extraction from Feature Basis: An Object Oriented Approach

    Directory of Open Access Journals (Sweden)

    Fausto Neri da Silva Vanin

    2010-10-01

    Full Text Available This tutorial presents an object oriented approach to data reading and metrics extraction from feature basis. Structural issues about basis are discussed first, then Object Oriented Programming (OOP) is applied to modeling the main elements in this context. The model implementation is then discussed using C++ as programming language. To validate the proposed model, we apply it to some feature basis from the University of California, Irvine Machine Learning Database.
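    A minimal analogue of the object-oriented reading model can be sketched as follows. The tutorial implements its model in C++; this Python sketch uses illustrative class and method names that are not the paper's.

    ```python
    import csv
    import io

    class FeatureBasis:
        """Minimal object-oriented reader for a delimited feature basis,
        with one example metric extracted from it."""
        def __init__(self, text, has_class_column=True):
            rows = [r for r in csv.reader(io.StringIO(text)) if r]
            if has_class_column:
                self.features = [[float(v) for v in r[:-1]] for r in rows]
                self.classes = [r[-1] for r in rows]
            else:
                self.features = [[float(v) for v in r] for r in rows]
                self.classes = []

        def n_instances(self):
            return len(self.features)

        def feature_mean(self, j):
            """A simple metric over the basis: the mean of feature j."""
            return sum(r[j] for r in self.features) / len(self.features)

    # A toy basis in UCI-style CSV layout: feature values, then the class label.
    data = "5.1,3.5,setosa\n4.9,3.0,setosa\n6.2,2.9,virginica\n"
    basis = FeatureBasis(data)
    print(basis.n_instances(), round(basis.feature_mean(0), 2))  # → 3 5.4
    ```

    Encapsulating the parsing and the metrics in one class mirrors the tutorial's point: the structural issues of the basis are handled once, and metric extraction becomes a method call.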

  12. Improved binary dragonfly optimization algorithm and wavelet packet based non-linear features for infant cry classification.

    Science.gov (United States)

    Hariharan, M; Sindhu, R; Vijean, Vikneswaran; Yazid, Haniza; Nadarajaw, Thiyagar; Yaacob, Sazali; Polat, Kemal

    2018-03-01

    The infant cry signal carries several levels of information about the reason for crying (hunger, pain, sleepiness and discomfort) or the pathological status (asphyxia, deafness, jaundice, prematurity, autism, etc.) of an infant, and is therefore suited for early diagnosis. In this work, a combination of wavelet packet based features and an Improved Binary Dragonfly Optimization based feature selection method was proposed to classify the different types of infant cry signals. Cry signals from two different databases were utilized. The first database contains 507 cry samples of normal (N), 340 cry samples of asphyxia (A), 879 cry samples of deaf (D), 350 cry samples of hungry (H) and 192 cry samples of pain (P). The second database contains 513 cry samples of jaundice (J), 531 samples of premature (Prem) and 45 samples of normal (N). Wavelet packet transform based energy and non-linear entropies (496 features), Linear Predictive Coding (LPC) based cepstral features (56 features) and Mel-frequency Cepstral Coefficients (MFCCs) (16 features) were extracted. The combined feature set consists of 568 features. To overcome the curse of dimensionality, the improved binary dragonfly optimization algorithm (IBDFO) was proposed to select the most salient attributes or features. Finally, an Extreme Learning Machine (ELM) kernel classifier was used to classify the different types of infant cry signals using all the features as well as the highly informative features only. Several experiments on two-class and multi-class classification of cry signals were conducted. In binary or two-class experiments, maximum accuracies of 90.18% for H vs P, 100% for A vs N, 100% for D vs N and 97.61% for J vs Prem were achieved using the features selected by IBDFO (only 204 features out of 568). For the classification of multiple cry signals (the multi-class problem), the selected features could differentiate between three classes (N, A & D) with an accuracy of 100% and seven classes with an accuracy of 97.62%. The experimental
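    The flavour of wavelet-packet energy and entropy features described above can be sketched with a plain Haar wavelet packet in NumPy. This is a simplified stand-in for the paper's 568-feature set; the decomposition level and entropy choice here are illustrative.

    ```python
    import numpy as np

    def haar_step(x):
        """One level of the orthonormal Haar transform: approximation
        and detail halves."""
        a = (x[0::2] + x[1::2]) / np.sqrt(2)
        d = (x[0::2] - x[1::2]) / np.sqrt(2)
        return a, d

    def wavelet_packet_leaves(x, level):
        """Full wavelet packet decomposition: both halves are split
        again at each level, giving 2**level leaf sub-bands."""
        nodes = [np.asarray(x, dtype=float)]
        for _ in range(level):
            nodes = [half for n in nodes for half in haar_step(n)]
        return nodes

    def energy_entropy_features(x, level=3):
        """Per-sub-band energies plus a Shannon entropy over the
        normalized energy distribution."""
        energies = np.array([np.sum(n ** 2)
                             for n in wavelet_packet_leaves(x, level)])
        p = energies / energies.sum()
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
        return energies, entropy

    t = np.linspace(0, 1, 512, endpoint=False)
    signal = np.sin(2 * np.pi * 40 * t)        # a stand-in for a cry frame
    energies, ent = energy_entropy_features(signal, level=3)
    print(len(energies), round(float(ent), 2))
    ```

    Because the Haar transform is orthonormal, the sub-band energies sum to the signal energy; how unevenly that energy is spread across sub-bands is exactly what the entropy feature measures.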

  13. Improving school governance through participative democracy and the law

    Directory of Open Access Journals (Sweden)

    Marius H Smit

    2011-01-01

    Full Text Available There is an inextricable link between democracy, education and the law. After 15 years of constitutional democracy, the alarming percentage of dysfunctional schools raises questions about the efficacy of the system of local school governance. We report on the findings of quantitative and qualitative research on the democratisation of schools and the education system in North-West Province. Several undemocratic features are attributable to systemic weaknesses of traditional models of democracy as well as the misapplication of democratic and legal principles. The findings of the qualitative study confirmed that parents often mistake participatory democracy for political democracy and misunderstand the role of the school governing body to be a political forum. Despite the shortcomings, the majority of the respondents agreed that parental participation improves school effectiveness and that the decentralised model of local school governance should continue. Recommendations to effect the inculcation of substantive democratic knowledge, values and attitudes into school governance are based on the theory of deliberative democracy and the principles of responsiveness, accountability and justification of decisions through rational discourse.

  14. Evidence-based health care: its place within clinical governance.

    Science.gov (United States)

    McSherry, R; Haddock, J

    This article explores the principles of evidence-based practice and its role in achieving quality improvements within the clinical governance framework advocated by the recent White Papers 'The New NHS: Modern, Dependable' (Department of Health (DoH), 1997) and 'A First Class Service: Quality in the New NHS' (DoH, 1998a). Within these White Papers there is an emphasis on improving quality of care, treatment and services through employing the principles of clinical governance. A major feature of clinical governance is guaranteeing quality to the public and the NHS, and ensuring that clinical, managerial and educational practice is based on scientific evidence. This article also examines what evidence-based practice is and what processes are required to promote effective healthcare interventions. The authors also look at how clinical governance relates to other methods/systems involved in clinical effectiveness. Finally, the importance for nurses and other healthcare professionals of familiarizing themselves with the development of critical appraisal skills, and their implications for developing evidence-based practice, is emphasized.

  15. Differentiation of several interstitial lung disease patterns in HRCT images using support vector machine: role of databases on performance

    Science.gov (United States)

    Kale, Mandar; Mukhopadhyay, Sudipta; Dash, Jatindra K.; Garg, Mandeep; Khandelwal, Niranjan

    2016-03-01

    Interstitial lung disease (ILD) is a complicated group of pulmonary disorders. High Resolution Computed Tomography (HRCT) is considered the best imaging technique for the analysis of different pulmonary disorders. HRCT findings can be categorised into several patterns, viz. consolidation, emphysema, ground glass opacity, nodular, normal, etc., based on their texture-like appearance. Clinicians often find it difficult to diagnose these patterns because of their complex nature. In such a scenario, a computer-aided diagnosis system could help clinicians to identify the patterns. Several approaches have been proposed for the classification of ILD patterns, including the computation of textural features and the training/testing of classifiers such as artificial neural networks (ANN) and support vector machines (SVM). In this paper, wavelet features are calculated from two different ILD databases, the publicly available MedGIFT ILD database and a private ILD database, followed by performance evaluation of ANN and SVM classifiers in terms of average accuracy. It is found that the average classification accuracy of SVM is greater than that of ANN when trained and tested on the same database. The investigation was continued further to test the variation in classifier accuracy when training and testing are performed with alternate databases, and when the classifier is trained and tested with a database formed by merging samples from the same class from the two individual databases. The average classification accuracy drops when two independent databases are used for training and testing respectively. There is significant improvement in average accuracy when the classifiers are trained and tested with the merged database. This infers the dependency of classification accuracy on the training data. It is observed that SVM outperforms ANN when the same database is used for training and testing.
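    The dependence of accuracy on training data reported above can be reproduced in miniature with synthetic data: training on one "database" and testing on another with shifted feature statistics lowers accuracy, while merging the databases recovers it. A nearest-centroid classifier stands in for the paper's ANN/SVM, and the feature shift is an invented proxy for acquisition differences.

    ```python
    import numpy as np

    def centroid_fit(X, y):
        """Class centroids as a minimal classifier model."""
        return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

    def centroid_accuracy(model, X, y):
        labels = list(model)
        centers = np.stack([model[c] for c in labels])
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        pred = np.array([labels[i] for i in np.argmin(dists, axis=1)])
        return float(np.mean(pred == y))

    rng = np.random.default_rng(0)

    def make_db(shift):
        """Two 'texture pattern' classes; `shift` mimics acquisition
        differences between two HRCT databases."""
        X = np.vstack([rng.normal(0 + shift, 1, (100, 4)),
                       rng.normal(3 + shift, 1, (100, 4))])
        y = np.array([0] * 100 + [1] * 100)
        return X, y

    Xa, ya = make_db(0.0)     # database A
    Xb, yb = make_db(1.5)     # database B, shifted feature statistics

    same_db = centroid_accuracy(centroid_fit(Xa, ya), Xa, ya)
    cross_db = centroid_accuracy(centroid_fit(Xa, ya), Xb, yb)
    merged = centroid_accuracy(centroid_fit(np.vstack([Xa, Xb]),
                                            np.concatenate([ya, yb])), Xb, yb)
    print(same_db, cross_db, merged)
    ```

    The cross-database accuracy falls between chance and the same-database figure, and training on the merged data pulls the centroids toward both databases, which is the qualitative pattern the paper reports.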

  16. Applying TOGAF for e-government implementation based on service oriented architecture methodology towards good government governance

    Science.gov (United States)

    Hodijah, A.; Sundari, S.; Nugraha, A. C.

    2018-05-01

    As a local government agency that performs public services, the General Government Office has already utilized the Reporting Information System of Local Government Implementation (E-LPPD). However, E-LPPD has upgrade limitations: its integration processes cannot accommodate the General Government Office's needs in achieving Good Government Governance (GGG), while success stories show that the ultimate goal of e-government implementation requires good governance practices. Citizens currently demand public services comparable to those of the private sector, which calls for service innovation that utilizes the legacy system in a service-based e-government implementation. Service Oriented Architecture (SOA) is used to redefine business processes as a set of IT-enabled services, and the Enterprise Architecture of The Open Group Architecture Framework (TOGAF) serves as a comprehensive approach to redefining business processes as service innovation towards GGG. This paper takes as a case study the Performance Evaluation of Local Government Implementation (EKPPD) system in the General Government Office. The results show that TOGAF will guide the development of integrated business processes for the EKPPD system that fit good governance practices to attain GGG, with the SOA methodology as the technical approach.

  17. School characteristics influencing the implementation of a data-based decision making intervention

    NARCIS (Netherlands)

    van Geel, Marieke Johanna Maria; Visscher, Arend J.; Teunis, B.

    2017-01-01

    There is an increasing global emphasis on using data for decision making, with a growing body of research on interventions aimed at implementing and sustaining data-based decision making (DBDM) in schools. Yet, little is known about the school features that facilitate or hinder the implementation of

  18. Do PES Improve the Governance of Forest Restoration?

    Directory of Open Access Journals (Sweden)

    Romain Pirard

    2014-03-01

    Full Text Available Payments for Environmental Services (PES) are praised as innovative policy instruments, and they influence the governance of forest restoration efforts in two major ways. The first is the establishment of multi-stakeholder agencies as intermediary bodies between funders and planters to manage the funds and to distribute incentives to planters. The second implication is that specific contracts assign objectives to land users in the form of conditions for payments that are believed to increase the chances for sustained impacts on the ground. These implications are important in the assessment of the potential of PES to operate as new and effective funding schemes for forest restoration. They are analyzed by looking at two prominent payments for watershed services programs in Indonesia, Cidanau (Banten province, in Java) and West Lombok (Eastern Indonesia), with combined economic and political science approaches. We derive lessons for the governance of funding efforts (e.g., multi-stakeholder agencies are not a guarantee of success; mixed results are obtained from a reliance on mandatory funding with ad hoc regulations, as opposed to voluntary contributions by the service beneficiary) and for the governance of financial expenditure (e.g., an absolute need for evaluation procedures for the internal governance of farmer groups). Furthermore, we observe that these governance features provide no guarantee that restoration plots with the highest relevance for ecosystem services are targeted by the PES.

  19. Overcoming the Confucian psychological barrier in government cyberspace.

    Science.gov (United States)

    Lee, Ook; Gong, Sung Jin

    2004-02-01

    The Confucian tradition still dictates the behavior of many people in East Asian countries such as South Korea. Even in e-mail communication, people try their best to show signs of respect which is required by the Confucian tradition. This psychological barrier can be detrimental to the development of democracy as people are educated not to challenge opinions of elders or bosses. After a long military dictatorship, South Korea has emerged as a newly democratized nation where the Confucian tradition is less emphasized. However, this tradition dies hard, and citizens are still afraid of offending government officials who have the power to affect lives of citizens. In light of creating a more democratic society, the e-government project has been implemented, and one of the features of cyber-government is to give citizens a place in cyberspace to express their concerns. Even though citizens have to use their real names, it is found that those who wrote messages in the bulletin board of the city of Seoul government's web pages tend not to use terms that are often used in e-mails for the purpose of expressing respect. A survey was conducted, and results show that people were able to overcome the Confucian psychological barrier in government cyberspace. Self-efficacy is proposed to explain this phenomenon.

  20. Use of the PISCES Database: power plant aqueous stream compositions

    International Nuclear Information System (INIS)

    Behrens, G.P.; Orr, D.A.; Wetherold, R.G.; O'Neil, B.T.

    1996-01-01

    The Power Plant Integrated Systems: Chemical Emissions Studies (PISCES) Database sponsored by the Electric Power Research Institute is a powerful tool for evaluating and comparing the level of trace substances in power plant process streams. In this paper, data are presented on the level of several selected trace metals found in a few of the aqueous streams present in power plants. A brief discussion of other features of the Database is presented. The majority of the data is for coal fired power plants, with only 5% pertaining to oil and gas. Sources of pollution include: ash streams; cooling water; coal pile runoff; FGD liquids; makeup water; and wastewater. 11 refs., 10 figs., 1 tab

  1. DSSTOX STRUCTURE-SEARCHABLE PUBLIC TOXICITY DATABASE NETWORK: CURRENT PROGRESS AND NEW INITIATIVES TO IMPROVE CHEMO-BIOINFORMATICS CAPABILITIES

    Science.gov (United States)

    The EPA DSSTox website (http://www.epa.gov/nheerl/dsstox) publishes standardized, structure-annotated toxicity databases, covering a broad range of toxicity disciplines. Each DSSTox database features documentation written in collaboration with the source authors and toxicity expe...

  2. Construction Method of the Topographical Features Model for Underwater Terrain Navigation

    Directory of Open Access Journals (Sweden)

    Wang Lihui

    2015-09-01

    Full Text Available The terrain database is the reference basis for an autonomous underwater vehicle (AUV) to implement underwater terrain navigation (UTN) functions, and is an important part of building a topographical features model for UTN. To investigate the feasibility and correlation of a variety of terrain parameters as terrain navigation information metrics, this paper describes and analyzes underwater terrain features and the calculation method for topography parameters. A comprehensive evaluation method for terrain navigation information is proposed, and an underwater navigation information analysis model, associated with topographic features, is constructed. Simulation results show that the underwater terrain features are associated with UTN information directly or indirectly, and also directly affect the terrain matching capture probability and the positioning accuracy.

  3. Database development and management

    CERN Document Server

    Chao, Lee

    2006-01-01

    Introduction to Database Systems: Functions of a Database; Database Management System; Database Components; Database Development Process. Conceptual Design and Data Modeling: Introduction to Database Design Process; Understanding Business Process; Entity-Relationship Data Model; Representing Business Process with Entity-Relationship Model. Table Structure and Normalization: Introduction to Tables; Table Normalization. Transforming Data Models to Relational Databases: DBMS Selection; Transforming Data Models to Relational Databases; Enforcing Constraints; Creating Database for Business Process. Physical Design and Database

  4. Dynamic Agricultural Land Unit Profile Database Generation using Landsat Time Series Images

    Science.gov (United States)

    Torres-Rua, A. F.; McKee, M.

    2012-12-01

    Agriculture requires a continuous supply of inputs to production, while providing final or intermediate outputs or products (food, forage, industrial uses, etc.). Government and other economic agents are interested in the continuity of this process and make decisions based on the available information about current conditions within the agriculture area. From a government point of view, it is important that the input-output chain in agriculture for a given area be enhanced over time, while any possible abrupt disruption is minimized or constrained within the variation tolerance of the input-output chain. The stability of the exchange of inputs and outputs becomes even more important in disaster-affected zones, where government programs will look to restore the area to social and economic conditions equal to or better than those before the occurrence of the disaster. From an economic perspective, potential and existing input providers require up-to-date, precise information about the agriculture area to determine present and future inputs and stock amounts. From another perspective, agriculture output acquirers might want to apply their own criteria to sort out present and future providers (farmers or irrigators) based on the management done during the irrigation season. In the last 20 years geospatial information has become available for large areas of the globe, providing accurate, unbiased historical records of actual agriculture conditions at individual land units for small and large agricultural areas. This data, adequately processed and stored in any database format, can provide invaluable information for government and economic interests. Despite the availability of the geospatial imagery records, limited or no geospatial-based information about past and current farming conditions at the level of individual land units exists for many agricultural areas in the world. The absence of this information challenges the work of policy makers to evaluate previous or current

  5. Multiscale deep features learning for land-use scene recognition

    Science.gov (United States)

    Yuan, Baohua; Li, Shijin; Li, Ning

    2018-01-01

    The features extracted from deep convolutional neural networks (CNNs) have shown their promise as generic descriptors for land-use scene recognition. However, most of the work directly adopts the deep features for the classification of remote sensing images, and does not encode the deep features for improving their discriminative power, which can affect the performance of deep feature representations. To address this issue, we propose an effective framework, LASC-CNN, obtained by locality-constrained affine subspace coding (LASC) pooling of a CNN filter bank. LASC-CNN obtains more discriminative deep features than directly extracted from CNNs. Furthermore, LASC-CNN builds on the top convolutional layers of CNNs, which can incorporate multiscale information and regions of arbitrary resolution and sizes. Our experiments have been conducted using two widely used remote sensing image databases, and the results show that the proposed method significantly improves the performance when compared to other state-of-the-art methods.
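    The idea of building one descriptor from several convolutional layers can be sketched in NumPy. This shows only multiscale pooling and concatenation of feature maps, not the LASC coding itself; the map shapes are invented for illustration.

    ```python
    import numpy as np

    def avg_pool(fmap, size):
        """Average-pool a (C, H, W) feature map into a (C, size, size) grid."""
        C, H, W = fmap.shape
        hs = np.array_split(np.arange(H), size)
        ws = np.array_split(np.arange(W), size)
        return np.array([[[fmap[c][np.ix_(h, w)].mean() for w in ws]
                          for h in hs] for c in range(C)])

    def multiscale_descriptor(conv_maps):
        """Concatenate pooled responses from several convolutional layers,
        so coarse and fine scales both contribute to one fixed-length
        descriptor regardless of the input resolution."""
        return np.concatenate([avg_pool(m, 2).ravel() for m in conv_maps])

    # Two stand-in layer outputs at different spatial resolutions.
    rng = np.random.default_rng(0)
    maps = [rng.random((8, 28, 28)), rng.random((16, 14, 14))]
    desc = multiscale_descriptor(maps)
    print(desc.shape)  # → (96,)
    ```

    Because pooling collapses each map to a fixed grid before concatenation, regions of arbitrary resolution and size produce descriptors of the same length, which is the property the paper exploits.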

  6. Database reliability engineering designing and operating resilient database systems

    CERN Document Server

    Campbell, Laine

    2018-01-01

    The infrastructure-as-code revolution in IT is also affecting database administration. With this practical book, developers, system administrators, and junior to mid-level DBAs will learn how the modern practice of site reliability engineering applies to the craft of database architecture and operations. Authors Laine Campbell and Charity Majors provide a framework for professionals looking to join the ranks of today’s database reliability engineers (DBRE). You’ll begin by exploring core operational concepts that DBREs need to master. Then you’ll examine a wide range of database persistence options, including how to implement key technologies to provide resilient, scalable, and performant data storage and retrieval. With a firm foundation in database reliability engineering, you’ll be ready to dive into the architecture and operations of any modern database. This book covers: Service-level requirements and risk management Building and evolving an architecture for operational visibility ...

  7. User's manual (UM) for the enhanced logistics intratheater support tool (ELIST) database utility segment version 8.1.0.0 for solaris 7.; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.

    2002-01-01

    This document is the User's Manual (UM) for the Enhanced Logistics Intratheater Support Tool (ELIST) Database Utility Segment. It tells how to use its features to administer ELIST database user accounts.

  8. Solving Relational Database Problems with ORDBMS in an Advanced Database Course

    Science.gov (United States)

    Wang, Ming

    2011-01-01

    This paper introduces how to use the object-relational database management system (ORDBMS) to solve relational database (RDB) problems in an advanced database course. The purpose of the paper is to provide a guideline for database instructors who desire to incorporate the ORDB technology in their traditional database courses. The paper presents…

  9. Engineering governance: introducing a governance meta framework.

    NARCIS (Netherlands)

    Brand, N.; Beens, B.; Vuuregge, E.; Batenburg, R.

    2011-01-01

    There is a need for a framework that depicts strategic choices within an organisation with regard to potential governance structures. The governance meta framework provides the necessary structure in the current developments of governance. Performance as well as conformance are embedded in this

  10. Government and governance strategies in medical tourism

    NARCIS (Netherlands)

    Ormond, M.E.; Mainil, T.

    2015-01-01

    This chapter provides an overview of current government and governance strategies relative to medical tourism development and management around the world. Most studies on medical tourism have privileged national governments as key actors in medical tourism regulation and, in some cases, even

  11. BDHI: a French national database on historical floods

    Directory of Open Access Journals (Sweden)

    Lang Michel

    2016-01-01

    Full Text Available The paper describes the various features of the BDHI database (objects, functions, content). This document database provides document sheets on historical floods from various sources: technical reports from water authorities, scientific accounts (meteorology, hydrology, hydraulics...), post-disaster reports, newspaper or book extracts... It is complemented by fact sheets on flood events, which provide a summary text on significant past floods: location, date and duration, type of flood, extent, probability, adverse consequences. A search engine is provided for information search based on time (specific date or period), on location (district, basin, city) or on thematic topic (document type, flood type, flood magnitude, flood impact...). We conclude with some future challenges in relation to the next cycle of the Floods Directive (2016-2022), with the inventory of past floods which had significant adverse impacts. Which flood events need to be integrated (new ones later than 2011 and/or previous floods that had not yet been selected)? How can the process of historical data integration be extended at a local scale, with an adequate process of validation? How can the use of the BDHI database be promoted in relation with the development of the culture of risk?

  12. A portable database-driven control system for SPEAR

    International Nuclear Information System (INIS)

    Howry, S.; Gromme, T.; King, A.; Sullenberger, M.

    1985-01-01

    The new computer control system software for SPEAR is presented as a transfer from the PEP system. Features of the target ring (SPEAR) such as symmetries, magnet groupings, etc., are all contained in a design file which is read by both people and computer. People use it as documentation; a program reads it to generate the database structure, which becomes the center of communication for all the software. Geometric information, such as element positions and lengths, and CAMAC I/O routing information is entered into the database as it is developed. Since application processes refer only to the database and since they do so only in generic terms, almost all of this software (representing more than fifteen man-years) is transferred with few changes. Operator console menus (touchpanels) are also transferred with only superficial changes for the same reasons. The system is modular: the CAMAC I/O software is all in one process; the menu control software is a process; the ring optics model and the orbit model are separate processes, each of which runs concurrently with about 15 others in the multiprogramming environment of the VAX/VMS operating system
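    The design-file-to-database idea can be sketched as follows. The file syntax, element names and attributes here are invented for illustration, since the abstract does not give SPEAR's actual format; the point is that one human-readable file drives the generated database structure.

    ```python
    # A toy design file: readable by people as documentation, and
    # parseable by a program to generate the database structure.
    design = """
    element QF1  type=quad  position=12.5  length=0.5
    element BD2  type=bend  position=15.0  length=1.2
    """

    def build_database(text):
        """Turn design-file lines into a dict-of-dicts 'database' keyed
        by element name, the structure all other processes refer to."""
        db = {}
        for line in text.strip().splitlines():
            parts = line.split()
            name = parts[1]
            db[name] = {k: v for k, v in (p.split("=") for p in parts[2:])}
        return db

    db = build_database(design)
    print(db["QF1"]["length"], db["BD2"]["type"])  # → 0.5 bend
    ```

    Keeping every application process on the far side of this generated structure, referring to elements only in generic terms, is what made the software portable between PEP and SPEAR.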

  13. A new online database of nuclear electromagnetic moments

    Science.gov (United States)

    Mertzimekis, Theo J.

    2017-09-01

    Nuclear electromagnetic (EM) moments, i.e., the magnetic dipole and electric quadrupole moments, provide important information about nuclear structure. As with other types of experimental data available to the community, measurements of nuclear EM moments have been organized systematically in compilations since the dawn of nuclear science. However, the wealth of recent moment measurements with radioactive beams, as well as earlier existing measurements, lacks an online, easy-to-access, systematically organized presence to disseminate information to researchers. In addition, available printed compilations suffer from a rather long life cycle, falling behind experimental measurements published in journals or elsewhere. A new online database (http://magneticmoments.info) focusing on nuclear EM moments has recently been developed to disseminate experimental data to the community. The database includes non-evaluated experimental data on nuclear EM moments, with strong emphasis on frequent updates (the life cycle is 3 months) and direct connection to the sources via DOI and NSR hyperlinks. It has recently been integrated into IAEA LiveChart [1], but can also be found as a standalone webapp [2]. A detailed review of the database features, as well as plans for further development and expansion in the near future, is presented.

  14. Which type of government revenue leads government expenditure?

    OpenAIRE

    Abdi, Zeinab; Masih, Mansur

    2014-01-01

    Malaysia is a developing Islamic state that has faced a government budget deficit since 1998. It is undeniable that a budget deficit, or an inability to cover government spending, is not viewed positively by external parties. The optimum level of the government budget is the state where government spending is fully offset by government revenue, which can be achieved through an increase in tax revenue or a decrease in spending. The paper aims to discover the existence of a theoretical relationship betwe...

  15. Linking international trademark databases to inform IP research and policy

    Energy Technology Data Exchange (ETDEWEB)

    Petrie, P.

    2016-07-01

    Researchers and policy makers are concerned with many international issues regarding trademarks, such as trademark squatting, cluttering, and dilution. Trademark application data can provide an evidence base to inform government policy regarding these issues, and can also produce quantitative insights into economic trends and brand dynamics. Currently, national trademark databases can provide insight into economic and brand dynamics at the national level, but gaining such insight at an international level is more difficult due to a lack of internationally linked trademark data. We are in the process of building a harmonised international trademark database (the “Patstat of trademarks”), in which equivalent trademarks have been identified across national offices. We have developed a pilot database that incorporates 6.4 million U.S., 1.3 million Australian, and 0.5 million New Zealand trademark applications, spanning over 100 years. The database will be extended to incorporate trademark data from other participating intellectual property (IP) offices as they join the project. Confirmed partners include the United Kingdom, WIPO, and OHIM. We will continue to expand the scope of the project, and intend to include many more IP offices from around the world. In addition to building the pilot database, we have developed a linking algorithm that identifies equivalent trademarks (TMs) across the three jurisdictions. The algorithm can currently be applied to all applications that contain TM text, i.e. around 96% of all applications. In its current state, the algorithm successfully identifies ~97% of equivalent TMs that are known to be linked a priori, as they share an international registration number through the Madrid Protocol. When complete, the internationally linked trademark database will be a valuable resource for researchers and policy-makers in fields such as econometrics, intellectual property rights, and brand policy. (Author)
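
    The text-based linking step might be sketched as follows: normalize the mark text and group applications that share it across offices. This is a toy illustration under assumed normalization rules; the project's actual algorithm is not described in the abstract and presumably also weighs applicant names, classes, and filing dates.

```python
import re
from collections import defaultdict


def normalize(tm_text):
    """Case-fold and strip punctuation/whitespace so trivially different
    renderings of the same mark compare equal (an assumed rule)."""
    return re.sub(r"[^a-z0-9]", "", tm_text.lower())


def link_trademarks(applications):
    """applications: iterable of (office, application_id, tm_text) tuples.
    Returns {normalized_text: [(office, application_id), ...]} restricted
    to marks filed in more than one office, i.e. candidate equivalents."""
    groups = defaultdict(list)
    for office, app_id, text in applications:
        groups[normalize(text)].append((office, app_id))
    return {text: apps for text, apps in groups.items()
            if len({office for office, _ in apps}) > 1}
```

    The abstract's validation strategy, checking candidates against pairs already linked by a shared Madrid Protocol registration number, would apply directly to the output of such a function.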

  16. Genebanks: a comparison of eight proposed international genetic databases.

    Science.gov (United States)

    Austin, Melissa A; Harding, Sarah; McElroy, Courtney

    2003-01-01

    To identify and compare population-based genetic databases, or "genebanks", that have been proposed in eight international locations between 1998 and 2002. A genebank can be defined as a stored collection of genetic samples, in the form of blood or tissue, that can be linked with medical and genealogical or lifestyle information from a specific population, gathered using a process of generalized consent. Genebanks were identified by searching Medline and internet search engines with key words such as "genetic database" and "biobank", and by reviewing literature on previously identified databases such as the deCode project. Genebank characteristics were collected through an electronic and literature search, augmented by correspondence with informed individuals. The proposed genebanks are located in Iceland, the United Kingdom, Estonia, Latvia, Sweden, Singapore, the Kingdom of Tonga, and Quebec, Canada. Comparisons of the genebanks were based on the following criteria: genebank location and description of purpose, role of government, commercial involvement, consent and confidentiality procedures, opposition to the genebank, and current progress. All of the groups proposing the genebanks plan to search for susceptibility genes for complex diseases while attempting to improve public health and medical care in the region and, in some cases, to stimulate the local economy through expansion of the biotechnology sector. While all of the identified plans share these purposes, they differ in many aspects, including funding, subject participation, and organization. The balance of government and commercial involvement in the development of each project varies. Genetic samples and health information will be collected from participants and coded in all of the genebanks, but consent procedures range from presumed consent of the entire eligible population to recruitment of volunteers with informed consent. Issues regarding confidentiality and consent have resulted in opposition to

  17. Design of Integrated Database on Mobile Information System: A Study of Yogyakarta Smart City App

    Science.gov (United States)

    Nurnawati, E. K.; Ermawati, E.

    2018-02-01

    An integration database is a database which acts as the data store for multiple applications and thus integrates data across those applications (in contrast to an application database). An integration database needs a schema that takes all its client applications into account. The benefit of such a schema is that sharing data among applications does not require an extra layer of integration services on the applications. Any changes to data made in a single application are made available to all applications at the time of database commit, thus keeping the applications' data use better synchronized. This study aims to design and build an integrated database that can be used by various applications on a mobile-device-based system platform, built around a smart city system. The resulting database can be used by the various applications either together or separately. The design and development of the database emphasize flexibility, security, and completeness of the attributes to be shared by the various applications to be built. The method used in this study is to choose the appropriate logical database structure (patterns of data), to build the relational database models (database design), and then to test the resulting design with some prototype apps and analyze system performance with test data. The integrated database can be utilized by both the admin and the user in an integral and comprehensive platform. This system can help admins, managers, and operators manage the application easily and efficiently. The Android-based app is built on a dynamic client-server model in which data is extracted from an external MySQL database, so if data changes in the database, the data in the Android applications changes as well. The Android app assists users in searching for information related to Yogyakarta (as a smart city), especially on culture, government, hotels, and transportation.
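
    The integration-database idea, one schema serving several client applications so that a commit by one is immediately visible to the others, can be sketched with a small shared relational store. The table and category names below are illustrative guesses based on the abstract, not the actual Yogyakarta app schema, and SQLite stands in for the MySQL server.

```python
import sqlite3

# A single shared schema that all client applications agree on.
SCHEMA = """
CREATE TABLE place (
    id       INTEGER PRIMARY KEY,
    name     TEXT NOT NULL,
    category TEXT NOT NULL CHECK (category IN
             ('culture', 'government', 'hotel', 'transportation'))
);
"""


def open_shared_db():
    """Open the integration database that every client app connects to."""
    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA)
    return conn


def add_place(conn, name, category):
    """Writer app: on commit, the new row is visible to every other app."""
    conn.execute("INSERT INTO place (name, category) VALUES (?, ?)",
                 (name, category))
    conn.commit()


def search_by_category(conn, category):
    """Reader app: query the same store, no integration layer in between."""
    return [row[0] for row in conn.execute(
        "SELECT name FROM place WHERE category = ?", (category,))]
```

    The CHECK constraint is one way the shared schema "takes all its client applications into account": every writer is held to the categories the readers expect.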

  18. BIOPEP database and other programs for processing bioactive peptide sequences.

    Science.gov (United States)

    Minkiewicz, Piotr; Dziuba, Jerzy; Iwaniak, Anna; Dziuba, Marta; Darewicz, Małgorzata

    2008-01-01

    This review presents the potential for applying computational tools in peptide science, using the BIOPEP database and program as an example, along with other programs and databases available via the World Wide Web. The BIOPEP application contains a database of biologically active peptide sequences and a program enabling the construction of profiles of the potential biological activity of protein fragments, the calculation of quantitative descriptors as measures of the value of proteins as potential precursors of bioactive peptides, and the prediction of bonds susceptible to hydrolysis by endopeptidases in a protein chain. Other bioactive and allergenic peptide sequence databases are also presented. Programs enabling the construction of binary and multiple alignments between peptide sequences, the construction of sequence motifs attributed to a given type of bioactivity, the search for potential precursors of bioactive peptides, and the prediction of sites susceptible to proteolytic cleavage in protein chains are available via the Internet, as are other approaches concerning secondary structure prediction and the calculation of physicochemical features from amino acid sequence. Programs for the prediction of allergenic and toxic properties have also been developed. This review explores the possibilities of cooperation between the various programs.
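
    The core of an activity-profile computation as described above is locating every known bioactive peptide that occurs as a fragment of a protein. A minimal sketch, with a two-entry toy database standing in for BIOPEP's thousands of curated sequences (VPP and IPP are well-known ACE-inhibitory peptides, but their inclusion here is illustrative):

```python
def activity_profile(protein, peptide_db):
    """Locate every database peptide occurring as a fragment of the protein.
    Returns (peptide, activity, start_position) tuples ordered by position."""
    hits = []
    for peptide, activity in peptide_db.items():
        start = protein.find(peptide)
        while start != -1:
            hits.append((peptide, activity, start))
            start = protein.find(peptide, start + 1)  # allow repeats
    return sorted(hits, key=lambda h: h[2])


# Toy database only; the real BIOPEP resource is far larger and curated.
DB = {"VPP": "ACE inhibitor", "IPP": "ACE inhibitor"}
```

    Quantitative descriptors such as the frequency of bioactive fragments then follow directly, e.g. `len(hits)` divided by the protein length.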

  19. Tools and Databases of the KOMICS Web Portal for Preprocessing, Mining, and Dissemination of Metabolomics Data

    Directory of Open Access Journals (Sweden)

    Nozomu Sakurai

    2014-01-01

    Full Text Available A metabolome—the collection of comprehensive quantitative data on metabolites in an organism—has been increasingly utilized for applications such as data-intensive systems biology, disease diagnostics, biomarker discovery, and assessment of food quality. A considerable number of tools and databases have been developed to date for the analysis of data generated by various combinations of chromatography and mass spectrometry. We report here a web portal named KOMICS (the Kazusa Metabolomics Portal), where the tools and databases that we developed are available for free to academic users. KOMICS includes the tools and databases for preprocessing, mining, visualization, and publication of metabolomics data. Improvements in the annotation of unknown metabolites and dissemination of comprehensive metabolomic data are the primary aims behind the development of this portal. For this purpose, PowerGet and FragmentAlign include a manual curation function for the results of metabolite feature alignments. A metadata-specific wiki-based database, Metabolonote, functions as a hub of web resources related to the submitters' work. This feature is expected to increase citation of the submitters' work, thereby promoting data publication. As an example of the practical use of KOMICS, a workflow for a study on Jatropha curcas is presented. The tools and databases available at KOMICS should contribute to enhanced production, interpretation, and utilization of metabolomic Big Data.
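
    The metabolite feature alignment that tools like FragmentAlign curate can be sketched in its simplest form: pair features from two runs when both mass-to-charge ratio (m/z) and retention time agree within tolerances. The greedy strategy and tolerance values below are assumptions for illustration, not the actual FragmentAlign algorithm.

```python
def align_features(run_a, run_b, mz_tol=0.01, rt_tol=0.2):
    """Greedily pair (m/z, retention-time) features from two runs when both
    values agree within tolerance. Real aligners correct for retention-time
    drift and score ambiguous matches; this sketch does neither."""
    pairs = []
    used = set()  # indices of run_b features already matched
    for mz_a, rt_a in run_a:
        for j, (mz_b, rt_b) in enumerate(run_b):
            if j in used:
                continue
            if abs(mz_a - mz_b) <= mz_tol and abs(rt_a - rt_b) <= rt_tol:
                pairs.append(((mz_a, rt_a), (mz_b, rt_b)))
                used.add(j)
                break
    return pairs
```

    It is exactly the ambiguous cases such a heuristic gets wrong, near-isobaric features or drifting retention times, that motivate the manual curation step the abstract highlights.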

  20. Tools and databases of the KOMICS web portal for preprocessing, mining, and dissemination of metabolomics data.

    Science.gov (United States)

    Sakurai, Nozomu; Ara, Takeshi; Enomoto, Mitsuo; Motegi, Takeshi; Morishita, Yoshihiko; Kurabayashi, Atsushi; Iijima, Yoko; Ogata, Yoshiyuki; Nakajima, Daisuke; Suzuki, Hideyuki; Shibata, Daisuke

    2014-01-01

    A metabolome--the collection of comprehensive quantitative data on metabolites in an organism--has been increasingly utilized for applications such as data-intensive systems biology, disease diagnostics, biomarker discovery, and assessment of food quality. A considerable number of tools and databases have been developed to date for the analysis of data generated by various combinations of chromatography and mass spectrometry. We report here a web portal named KOMICS (The Kazusa Metabolomics Portal), where the tools and databases that we developed are available for free to academic users. KOMICS includes the tools and databases for preprocessing, mining, visualization, and publication of metabolomics data. Improvements in the annotation of unknown metabolites and dissemination of comprehensive metabolomic data are the primary aims behind the development of this portal. For this purpose, PowerGet and FragmentAlign include a manual curation function for the results of metabolite feature alignments. A metadata-specific wiki-based database, Metabolonote, functions as a hub of web resources related to the submitters' work. This feature is expected to increase citation of the submitters' work, thereby promoting data publication. As an example of the practical use of KOMICS, a workflow for a study on Jatropha curcas is presented. The tools and databases available at KOMICS should contribute to enhanced production, interpretation, and utilization of metabolomic Big Data.