WorldWideScience

Sample records for computer database application

  1. Database computing in HEP

    Science.gov (United States)

    Day, C. T.; Loken, S.; Macfarlane, J. F.; May, E.; Lifka, D.; Lusk, E.; Price, L. E.; Baden, A.; Grossman, R.; Qin, X.

    1992-01-01

    The major SSC experiments are expected to produce up to 1 Petabyte of data per year each. Once the primary reconstruction is completed by farms of inexpensive processors, I/O becomes a major factor in further analysis of the data. We believe that the application of database techniques can significantly reduce the I/O performed in these analyses. We present examples of such I/O reductions in prototypes based on relational and object-oriented databases of CDF data samples.

  2. Internal combustion engines: Computer applications. (Latest citations from the EI Compendex plus database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    1993-10-01

    The bibliography contains citations concerning the application of computers and computerized simulations in the design, analysis, operation, and evaluation of various types of internal combustion engines and associated components and apparatus. Special attention is given to engine control and performance. (Contains a minimum of 67 citations and includes a subject term index and title list.)

  3. Using clinical databases in tertiary nurse education: an innovative application of computer technology.

    Science.gov (United States)

    Cheek, J; Gillham, D; Mills, P

    1998-02-01

    This paper provides an initial report of an educational innovation in nursing that promotes exchange of information and close cooperation between hospitals and a university. Data from a computerized nursing care planning system are used as the basis for the design of the acute care clinical component of the nursing curriculum. The project has been developed with the aim of minimizing the theory-practice gap and making the transition from university to hospital an easier process for students and new graduates. From the very early stages of the project, it was recognized that the introduction of new computer-based innovations or educational technology, in itself, would not necessarily improve teaching and learning. Therefore, strong emphasis was placed on how the database would be used as the basis for sound curriculum development while maintaining the clinical and practical focus required by students. Difficulties associated with the project, ranging from lengthy legal negotiations to the challenge of integrating a curriculum strongly based on critical reflection and problem-solving with a highly prescriptive hospital database, are reported. The project not only provides an example of the efficient exchange and use of hospital-based data for teaching purposes but also provides the groundwork for many potential and exciting developments in national and international nursing data exchange.

  4. Database Application Schema Forensics

    Directory of Open Access Journals (Sweden)

    Hector Quintus Beyers

    2014-12-01

    The application schema layer of a Database Management System (DBMS) can be modified to deliver results that may warrant a forensic investigation. Table structures can be corrupted by changing the metadata of a database, or operators of the database can be altered to deliver incorrect results when used in queries. This paper discusses categories of possibilities that exist to alter the application schema, with some practical examples. Two forensic environments in which a forensic investigation can take place are introduced, and arguments are provided for why these environments are important. Methods for achieving these environments for the application schema layer of a DBMS are presented, and a process is proposed for how forensic evidence should be extracted from the application schema layer. The application schema forensic evidence identification process can be applied to a wide range of forensic settings.
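
    To make the schema-tampering idea above concrete, here is a minimal sketch (not taken from the paper; the table and view names are hypothetical) using Python's sqlite3 module: a view in the application schema is silently redefined to return incorrect results, and the stored DDL is then read back for comparison against a trusted copy, which is the kind of evidence a schema-layer forensic process would extract.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE sales (item TEXT, amount REAL);
            INSERT INTO sales VALUES ('a', 100.0), ('b', 250.0);
            CREATE VIEW v_totals AS SELECT SUM(amount) AS total FROM sales;
        """)
        print(con.execute("SELECT total FROM v_totals").fetchone())  # correct total: 350.0

        # Tampering at the application schema layer: the view is silently
        # redefined so that every query through it under-reports by 10%.
        con.executescript("""
            DROP VIEW v_totals;
            CREATE VIEW v_totals AS SELECT SUM(amount) * 0.9 AS total FROM sales;
        """)
        print(con.execute("SELECT total FROM v_totals").fetchone())  # tampered total

        # Forensic check: read the stored schema definition and compare it
        # against a trusted reference copy of the DDL.
        ddl = con.execute("SELECT sql FROM sqlite_master WHERE name = 'v_totals'")
        print(ddl.fetchone()[0])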

  5. Product Licenses Database Application

    CERN Document Server

    Tonkovikj, Petar

    2016-01-01

    The goal of this project is to organize and centralize the data about software tools available to CERN employees, as well as provide a system that simplifies the license management process by providing information about the available licenses and their expiry dates. The project development process consists of two steps: modeling the products (software tools), product licenses, legal agreements and other data related to these entities in a relational database, and developing the front-end user interface so that the user can interact with the database. The result is an ASP.NET MVC web application with interactive views for displaying and managing the data in the underlying database.

  6. Regionalized life cycle assessment: computational methodology and application to inventory databases.

    Science.gov (United States)

    Mutel, Christopher L; Hellweg, Stefanie

    2009-08-01

    Life cycle assessment (LCA) studies have shown that site-dependent impact assessment for categories like acidification and eutrophication gives more accurate and realistic results than site-generic assessment. To date, existing geography-specific, or regionalized, impact assessment factors have not been applied to LCA databases and software tools. We describe a simple, generic methodology to couple existing regionalized characterization factors with large life cycle inventory databases. This approach allows for detailed geographic life cycle impact assessment results. Case-study results for European country-specific electricity mixes are calculated using the Ecoinvent 2.01 database, the EDIP 2003 and Accumulated Exceedance impact assessment methods, and CASES project external energy cost characterization factors. In most cases, regionalization shows different total scores, different processes of high importance, and varying geographic distributions of environmental impacts. As the methodology requires no additional input other than the geographic information already in existing LCA databases, it can be used routinely. Better and more consistent geographic information in life cycle inventory databases and impact assessment methods, tailored to the specific spatial range of all environmental effects considered, would be beneficial.
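
    A minimal sketch of the coupling step described above, with invented substances, region codes and numbers: each inventory flow is keyed by the region where the emission occurs, and the impact score is computed with region-specific rather than site-generic characterization factors.

        # Each inventory flow carries the region where the emission occurs, and
        # the characterization factor (cf) is looked up per (substance, region)
        # pair instead of being site-generic. All numbers are invented.
        inventory = [            # (substance, region, amount in kg)
            ("SO2", "DE", 1.2),
            ("SO2", "PL", 0.8),
            ("NOx", "DE", 0.5),
        ]
        cf = {                   # regionalized characterization factors
            ("SO2", "DE"): 1.31, ("SO2", "PL"): 1.55, ("NOx", "DE"): 0.74,
        }
        score = sum(amount * cf[(substance, region)]
                    for substance, region, amount in inventory)
        print(f"regionalized impact score: {score:.3f}")

    A site-generic assessment would collapse cf to one factor per substance, which is exactly the information loss the regionalized method avoids.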

  7. Database tomography for commercial application

    Science.gov (United States)

    Kostoff, Ronald N.; Eberhart, Henry J.

    1994-01-01

    Database tomography is a method for extracting themes and their relationships from text. The algorithms employed begin with word frequency and word proximity analysis and build upon these results. When the word 'database' is used, think of medical or police records, patents, journals, or papers (any text information that can be stored on a computer). Database tomography features a full-text, user-interactive technique enabling the user to identify areas of interest, establish relationships, and map trends for a deeper understanding of an area of interest. Database tomography concepts and applications have been reported in journals and presented at conferences. One important feature of the database tomography algorithm is that it can be used on a database of any size and will facilitate the user's ability to understand the volume of content therein. While employing the process to identify research opportunities, it became obvious that this promising technology has potential applications for business, science, engineering, law, and academe. Examples include evaluating marketing trends, strategies, relationships and associations. The database tomography process would also be a powerful component in competitive intelligence, national security intelligence and patent analysis. User interests and involvement cannot be overemphasized.
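
    The two starting steps named above, word frequency and word proximity analysis, can be sketched as follows; the toy corpus and window size are invented for illustration, and this is not the authors' actual algorithm.

        from collections import Counter

        docs = [
            "database tomography extracts themes and relationships from text",
            "word frequency and word proximity analysis reveal themes in text",
        ]

        # Step 1: word frequency analysis.
        freq = Counter(w for d in docs for w in d.split())
        print(freq.most_common(3))

        # Step 2: word proximity analysis -- co-occurrence counts for word
        # pairs that appear within a small sliding window of each other.
        WINDOW = 3
        pairs = Counter()
        for d in docs:
            words = d.split()
            for i, w in enumerate(words):
                for v in words[i + 1 : i + 1 + WINDOW]:
                    pairs[tuple(sorted((w, v)))] += 1
        print(pairs.most_common(3))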

  8. Database and Expert Systems Applications

    DEFF Research Database (Denmark)

    Viborg Andersen, Kim; Debenham, John; Wagner, Roland

    … submissions. The papers are organized in topical sections on workflow automation, database queries, data classification and recommendation systems, information retrieval in multimedia databases, Web applications, implementational aspects of databases, multimedia databases, XML processing, security, XML schemata, query evaluation, semantic processing, information retrieval, temporal and spatial databases, querying XML, organisational aspects of databases, natural language processing, ontologies, Web data extraction, semantic Web, data stream management, data extraction, and distributed database systems…

  9. Building a medical multimedia database system to integrate clinical information: an application of high-performance computing and communications technology.

    Science.gov (United States)

    Lowe, H J; Buchanan, B G; Cooper, G F; Vries, J K

    1995-01-01

    The rapid growth of diagnostic-imaging technologies over the past two decades has dramatically increased the amount of nontextual data generated in clinical medicine. The architecture of traditional, text-oriented, clinical information systems has made the integration of digitized clinical images with the patient record problematic. Systems for the classification, retrieval, and integration of clinical images are in their infancy. Recent advances in high-performance computing, imaging, and networking technology now make it technologically and economically feasible to develop an integrated, multimedia, electronic patient record. As part of The National Library of Medicine's Biomedical Applications of High-Performance Computing and Communications program, we plan to develop Image Engine, a prototype microcomputer-based system for the storage, retrieval, integration, and sharing of a wide range of clinically important digital images. Images stored in the Image Engine database will be indexed and organized using the Unified Medical Language System Metathesaurus and will be dynamically linked to data in a text-based, clinical information system. We will evaluate Image Engine by initially implementing it in three clinical domains (oncology, gastroenterology, and clinical pathology) at the University of Pittsburgh Medical Center.

  10. Computational Intelligence Challenges and Applications on Large-Scale Astronomical Time Series Databases

    CERN Document Server

    Huijse, Pablo; Protopapas, Pavlos; Principe, Jose C; Zegers, Pablo

    2015-01-01

    Time-domain astronomy (TDA) is facing a paradigm shift caused by the exponential growth of the sample size, data complexity and data generation rates of new astronomical sky surveys. For example, the Large Synoptic Survey Telescope (LSST), which will begin operations in northern Chile in 2022, will generate a nearly 150 Petabyte imaging dataset of the southern hemisphere sky. The LSST will stream data at rates of 2 Terabytes per hour, effectively capturing an unprecedented movie of the sky. The LSST is expected not only to improve our understanding of time-varying astrophysical objects, but also to reveal a plethora of yet unknown faint and fast-varying phenomena. To cope with a change of paradigm to data-driven astronomy, the fields of astroinformatics and astrostatistics have been created recently. The new data-oriented paradigms for astronomy combine statistics, data mining, knowledge discovery, machine learning and computational intelligence, in order to provide the automated and robust methods needed for...

  11. Computational 2D Materials Database

    DEFF Research Database (Denmark)

    Rasmussen, Filip Anselm; Thygesen, Kristian Sommer

    2015-01-01

    We present a comprehensive first-principles study of the electronic structure of 51 semiconducting monolayer transition-metal dichalcogenides and -oxides in the 2H and 1T hexagonal phases. The quasiparticle (QP) band structures with spin-orbit coupling are calculated in the G(0)W(0) approximation … and used as input to a 2D hydrogenic model to estimate exciton binding energies. Throughout the paper we focus on trends and correlations in the electronic structure rather than detailed analysis of specific materials. All the computed data is available in an open database.

  12. Databases and their application

    NARCIS (Netherlands)

    E.C. Grimm; R.H.W Bradshaw; S. Brewer; S. Flantua; T. Giesecke; A.M. Lézine; H. Takahara; J.W.,Jr Williams

    2013-01-01

    During the past 20 years, several pollen database cooperatives have been established. These databases are now constituent databases of the Neotoma Paleoecology Database, a public domain, multiproxy, relational database designed for Quaternary-Pliocene fossil data and modern surface samples. The poll

  13. Multimedia database retrieval technology and applications

    CERN Document Server

    Muneesawang, Paisarn; Guan, Ling

    2014-01-01

    This book explores multimedia applications that emerged from computer vision and machine learning technologies. These state-of-the-art applications include MPEG-7, interactive multimedia retrieval, multimodal fusion, annotation, and database re-ranking. The application-oriented approach maximizes reader understanding of this complex field. Established researchers explain the latest developments in multimedia database technology and offer a glimpse of future technologies. The authors emphasize the crucial role of innovation, inspiring users to develop new applications in multimedia technologies

  14. Segmentation of pulmonary nodules in computed tomography using a regression neural network approach and its application to the Lung Image Database Consortium and Image Database Resource Initiative dataset.

    Science.gov (United States)

    Messay, Temesguen; Hardie, Russell C; Tuinstra, Timothy R

    2015-05-01

    We present new pulmonary nodule segmentation algorithms for computed tomography (CT). These include a fully-automated (FA) system, a semi-automated (SA) system, and a hybrid system. Like most traditional systems, the new FA system requires only a single user-supplied cue point. On the other hand, the SA system represents a new algorithm class requiring 8 user-supplied control points. This does increase the burden on the user, but we show that the resulting system is highly robust and can handle a variety of challenging cases. The proposed hybrid system starts with the FA system. If improved segmentation results are needed, the SA system is then deployed. The FA segmentation engine has 2 free parameters, and the SA system has 3. These parameters are adaptively determined for each nodule in a search process guided by a regression neural network (RNN). The RNN uses a number of features computed for each candidate segmentation. We train and test our systems using the new Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) data. To the best of our knowledge, this is one of the first nodule-specific performance benchmarks using the new LIDC-IDRI dataset. We also compare the performance of the proposed methods with several previously reported results on the same data used by those other methods. Our results suggest that the proposed FA system improves upon the state-of-the-art, and the SA system offers a considerable boost over the FA system.
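
    A hedged sketch of the adaptive parameter search described above: candidate segmentations produced under different parameter settings are scored by a regression model that predicts quality from features computed on each candidate, and the best-scoring setting is kept. The segmentation engine, feature set, and quality model below are stand-ins, not the paper's RNN.

        from itertools import product

        def segment(roi, p1, p2):          # stand-in for the FA/SA segmentation engine
            return {"params": (p1, p2)}

        def features(candidate):           # e.g., volume, sphericity, boundary gradient
            p1, p2 = candidate["params"]
            return (p1 + p2, abs(p1 - p2))

        def predicted_quality(feats):      # stand-in for the trained regression network
            return -abs(feats[0] - 1.0) - 0.1 * feats[1]

        def adaptive_segmentation(roi, grid1, grid2):
            # Score every candidate parameter setting and keep the best one.
            candidates = [segment(roi, p1, p2) for p1, p2 in product(grid1, grid2)]
            return max(candidates, key=lambda c: predicted_quality(features(c)))

        best = adaptive_segmentation(None, [0.2, 0.5, 0.8], [0.3, 0.6])
        print("selected parameters:", best["params"])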

  15. A Visual Database System for Image Analysis on Parallel Computers and its Application to the EOS Amazon Project

    Science.gov (United States)

    Shapiro, Linda G.; Tanimoto, Steven L.; Ahrens, James P.

    1996-01-01

    The goal of this task was to create a design and prototype implementation of a database environment that is particularly suited for handling the image, vision and scientific data associated with NASA's EOS Amazon project. The focus was on a data model and query facilities that are designed to execute efficiently on parallel computers. A key feature of the environment is an interface which allows a scientist to specify high-level directives about how query execution should occur.

  16. A computational framework for the statistical analysis of cardiac diffusion tensors: application to a small database of canine hearts.

    Science.gov (United States)

    Peyrat, Jean-Marc; Sermesant, Maxime; Pennec, Xavier; Delingette, Hervé; Xu, Chenyang; McVeigh, Elliot R; Ayache, Nicholas

    2007-11-01

    We propose a unified computational framework to build a statistical atlas of the cardiac fiber architecture from diffusion tensor magnetic resonance images (DT-MRIs). We apply this framework to a small database of nine ex vivo canine hearts. An average cardiac fiber architecture and a measure of its variability are computed using most recent advances in diffusion tensor statistics. This statistical analysis confirms the already established good stability of the fiber orientations and a higher variability of the laminar sheet orientations within a given species. The statistical comparison between the canine atlas and a standard human cardiac DT-MRI shows a better stability of the fiber orientations than their laminar sheet orientations between the two species. The proposed computational framework can be applied to larger databases of cardiac DT-MRIs from various species to better establish intraspecies and interspecies statistics on the anatomical structure of cardiac fibers. This information will be useful to guide the adjustment of average fiber models onto specific patients from in vivo anatomical imaging modalities.

  17. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  18. Research on computer virus database management system

    Science.gov (United States)

    Qi, Guoquan

    2011-12-01

    The growing proliferation of computer viruses has become a lethal threat and a research focus in network information security. New viruses keep emerging, the number of viruses keeps growing, and virus classification is increasingly complex. Virus naming cannot be unified because agencies capture samples at different times. Although each agency has its own virus database, communication between agencies is lacking, virus information is incomplete, or only a small number of samples is documented. This paper introduces the current state of virus database construction at home and abroad, analyzes how to standardize and completely describe virus characteristics, and then gives a design scheme for a computer virus database with information integrity, storage security and manageability.

  19. International Ventilation Cooling Application Database

    DEFF Research Database (Denmark)

    Holzer, Peter; Psomas, Theofanis Ch.; OSullivan, Paul

    2016-01-01

    The currently running International Energy Agency, Energy and Conservation in Buildings, Annex 62 Ventilative Cooling (VC) project is coordinating research towards extended use of VC. Within this Annex 62, the joint research activity of the International VC Application Database has been carried out, systematically investigating the distribution of technologies and strategies within VC. The database is structured as both a ticking-list-like building-spreadsheet and a collection of building-datasheets. The content of both closely follows the Annex 62 State-Of-The-Art Report. The database has been filled, based … European and international Low Energy buildings. Still it's not really widespread. Obstacles are challenges as regards noise, dust, weather and burglary, proving the research efforts of the Annex being necessary. The VC database forms a worthwhile basis for both dissemination and further research targets.

  1. Database Transformations for Biological Applications

    Energy Technology Data Exchange (ETDEWEB)

    Overton, C.; Davidson, S. B.; Buneman, P.; Tannen, V.

    2001-04-11

    The goal of this project was to develop tools to facilitate data transformations between heterogeneous data sources found throughout biomedical applications. Such transformations are necessary when sharing data between different groups working on related problems as well as when querying data spread over different databases, files and software analysis packages.

  2. Database and applications security integrating information security and data management

    CERN Document Server

    Thuraisingham, Bhavani

    2005-01-01

    This is the first book to provide an in-depth coverage of all the developments, issues and challenges in secure databases and applications. It provides directions for data and application security, including securing emerging applications such as bioinformatics, stream information processing and peer-to-peer computing. Divided into eight sections, each of which focuses on a key concept of secure databases and applications, this book deals with all aspects of technology, including secure relational databases, inference problems, secure object databases, secure distributed databases and emerging

  3. Cluster based parallel database management system for data intensive computing

    Institute of Scientific and Technical Information of China (English)

    Jianzhong LI; Wei ZHANG

    2009-01-01

    This paper describes a computer-cluster based parallel database management system (DBMS), InfiniteDB, developed by the authors. InfiniteDB aims at efficiently supporting data-intensive computing in response to the rapid growth in database size and the need for high-performance analysis of massive databases. It can be efficiently executed in computing systems composed of thousands of computers, such as cloud computing systems. It supports the parallelisms of intra-query, inter-query, intra-operation, inter-operation and pipelining. It provides effective strategies for managing massive databases, including multiple data declustering methods, declustering-aware algorithms for relational operations and other database operations, and an adaptive query optimization method. It also provides the functions of parallel data warehousing and data mining, the coordinator-wrapper mechanism to support the integration of heterogeneous information resources on the Internet, and fault-tolerant and resilient infrastructures. It has been used in many applications and has proved quite effective for data-intensive computing.
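
    One of the strategies named above, data declustering with declustering-aware operations, can be sketched in miniature (node count and data are invented; InfiniteDB's actual methods are richer): tuples are spread across nodes by hashing, each node computes a partial result, and the partials are merged, which is intra-operation parallelism in its simplest form.

        # Tuples are spread across cluster nodes by a hash of the key; each
        # node computes a partial aggregate and the partials are merged.
        NODES = 4
        rows = list(enumerate(range(0, 1000, 7)))        # (key, value) tuples

        partitions = [[] for _ in range(NODES)]
        for key, value in rows:
            partitions[hash(key) % NODES].append((key, value))

        # Each "node" computes a local SUM; the coordinator merges them.
        partial_sums = [sum(v for _, v in part) for part in partitions]
        print("parallel SUM:", sum(partial_sums))
        assert sum(partial_sums) == sum(v for _, v in rows)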

  4. Advanced Scientific Computing Environment Team new scientific database management task

    Energy Technology Data Exchange (ETDEWEB)

    Church, J.P.; Roberts, J.C.; Sims, R.N.; Smetana, A.O.; Westmoreland, B.W.

    1991-06-01

    The mission of the ASCENT Team is to continually keep pace with, evaluate, and select emerging computing technologies to define and implement prototypic scientific environments that maximize the ability of scientists and engineers to manage scientific data. These environments are to be implemented in a manner consistent with the site computing architecture and standards and NRTSC/SCS strategic plans for scientific computing. The major trends in computing hardware and software technology clearly indicate that the future "computer" will be a network environment that comprises supercomputers, graphics boxes, mainframes, clusters, workstations, terminals, and microcomputers. This "network computer" will have an architecturally transparent operating system allowing the applications code to run on any box supplying the required computing resources. The environment will include a distributed database and database managing system(s) that permits use of relational, hierarchical, object oriented, GIS, et al, databases. To reach this goal requires a stepwise progression from the present assemblage of monolithic applications codes running on disparate hardware platforms and operating systems. The first steps include converting from the existing JOSHUA system to a new J80 system that complies with modern language standards, development of a new J90 prototype to provide JOSHUA capabilities on Unix platforms, development of portable graphics tools to greatly facilitate preparation of input and interpretation of output; and extension of "Jvv" concepts and capabilities to distributed and/or parallel computing environments.

  5. Building Database-Powered Mobile Applications

    Directory of Open Access Journals (Sweden)

    Paul POCATILU

    2012-01-01

    Almost all mobile applications use persistency for their data. A common way for complex mobile applications is to store data in local relational databases, and almost all major mobile platforms include a relational database engine. These database engines expose specific APIs (Application Programming Interfaces) to be used by mobile application developers for data definition and manipulation. This paper focuses on database-based application models for several mobile platforms (Android, Symbian, Windows CE/Mobile and Windows Phone). For each selected platform, the API and specific database operations are presented.
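
    The data definition and manipulation pattern these platform APIs share can be sketched with Python's sqlite3 module, since SQLite is the engine most mobile platforms embed; the table layout here is hypothetical, and the platform-specific wrappers (for example Android's SQLiteDatabase class) differ in syntax but not in shape.

        import sqlite3

        # Hypothetical local store for a mobile app: define, write, read.
        db = sqlite3.connect("app.db")
        db.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
        db.execute("INSERT INTO notes (body) VALUES (?)", ("first note",))
        db.commit()
        for row in db.execute("SELECT id, body FROM notes"):
            print(row)
        db.close()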

  6. Analysis of the Characteristics and Applications of Computer Mobile Databases

    Institute of Scientific and Technical Information of China (English)

    乐瑞卿

    2011-01-01

    With rapid social and economic development, mobile computing technology has also developed. Mobile databases are gradually moving into application, and in embedded operating systems the mobile database shows its superiority even more clearly.

  7. Application of Integrated Database to the Casting Design

    Institute of Scientific and Technical Information of China (English)

    In-Sung Cho; Seung-Mok Yoo; Chae-Ho Lim; Jeong-Kil Choi

    2008-01-01

    The construction of an integrated database including casting shapes with their casting design, technical knowledge, and thermophysical properties of the casting alloys is introduced in the present study. A recognition technique for casting design by industrial computed tomography was used for the construction of the shape database. Technical knowledge of the casting processes, for ferrous and non-ferrous alloys and their casting manufacturing processes, was accumulated, and a search engine for the knowledge was developed. A database of thermophysical properties of the casting alloys was obtained via experimental study, and the properties were used for the in-house computer simulation of the casting process. The databases were linked with the intelligent casting expert system developed at the Center for e-Design, KITECH. It is expected that the databases can help non-casting experts to devise castings and their processes. Various examples of application using the databases are shown in the present study.

  8. Software for Distributed Computation on Medical Databases: A Demonstration Project

    Directory of Open Access Journals (Sweden)

    Balasubramanian Narasimhan

    2017-05-01

    Bringing together the information latent in distributed medical databases promises to personalize medical care by enabling reliable, stable modeling of outcomes with rich feature sets (including patient characteristics and treatments received. However, there are barriers to aggregation of medical data, due to lack of standardization of ontologies, privacy concerns, proprietary attitudes toward data, and a reluctance to give up control over end use. Aggregation of data is not always necessary for model fitting. In models based on maximizing a likelihood, the computations can be distributed, with aggregation limited to the intermediate results of calculations on local data, rather than raw data. Distributed fitting is also possible for singular value decomposition. There has been work on the technical aspects of shared computation for particular applications, but little has been published on the software needed to support the "social networking" aspect of shared computing, to reduce the barriers to collaboration. We describe a set of software tools that allow the rapid assembly of a collaborative computational project, based on the flexible and extensible R statistical software and other open source packages, that can work across a heterogeneous collection of database environments, with full transparency to allow local officials concerned with privacy protections to validate the safety of the method. We describe the principles, architecture, and successful test results for the site-stratified Cox model and rank-k singular value decomposition.
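
    A hedged sketch of the shared-computation idea described above, using logistic regression in place of the paper's site-stratified Cox model (the arithmetic is analogous but simpler): each site computes only the gradient and Hessian of its local log-likelihood, and the coordinator aggregates these intermediate results, so raw patient rows never leave a site. The data here are synthetic.

        import numpy as np

        rng = np.random.default_rng(0)
        # Three "sites", each holding its own private (features, outcome) data.
        sites = [(rng.normal(size=(50, 3)), rng.integers(0, 2, 50)) for _ in range(3)]

        beta = np.zeros(3)
        for _ in range(25):                 # Newton iterations at the coordinator
            grad = np.zeros(3)
            hess = np.zeros((3, 3))
            for X, y in sites:              # executed locally at each site
                p = 1.0 / (1.0 + np.exp(-X @ beta))
                grad += X.T @ (y - p)       # only these summaries are shared
                hess += X.T @ (X * (p * (1 - p))[:, None])
            beta += np.linalg.solve(hess, grad)
        print("pooled-equivalent coefficients:", beta.round(3))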

  9. Reverse engineering of relational database applications

    NARCIS (Netherlands)

    Vermeer, W.W.M.; Apers, P.M.G.

    1995-01-01

    This paper presents techniques for reverse engineering of relational database applications. The target of such an effort is the definition of a fully equipped object-oriented view of the relational database, including methods and constraints. Such views can be seen as a full specification of the dat

  10. A database application for wilderness character monitoring

    Science.gov (United States)

    Ashley Adams; Peter Landres; Simon Kingston

    2012-01-01

    The National Park Service (NPS) Wilderness Stewardship Division, in collaboration with the Aldo Leopold Wilderness Research Institute and the NPS Inventory and Monitoring Program, developed a database application to facilitate tracking and trend reporting in wilderness character. The Wilderness Character Monitoring Database allows consistent, scientifically based...

  11. Databases

    Data.gov (United States)

    National Aeronautics and Space Administration — The databases of computational and experimental data from the first Aeroelastic Prediction Workshop are located here. The database file names tell their contents by...

  12. Computer database of ambulatory EEG signals.

    Science.gov (United States)

    Jayakar, P B; Brusse, E; Patrick, J P; Shwedyk, E; Seshia, S S

    1987-01-01

    The paper describes an ambulatory EEG database. The database contains segments of AEEGs done on 45 subjects. Each epoch (1/8th second or more) of AEEG data has been annotated into 1 of 40 classes. The classes represent background activity, paroxysmal patterns and artifacts. The majority of classes have over 200 discrete epochs. The structure is flexible enough to allow additional epochs to be readily added. The database is stored on transportable media such as digital magnetic tape or hard disk and is thus available to other researchers in the field. The database can be used to design, evaluate and compare EEG signal processing algorithms and pattern recognition systems. It can also serve as an educational medium in EEG laboratories.

  13. Computer applications in bioprocessing.

    Science.gov (United States)

    Bungay, H R

    2000-01-01

    Biotechnologists have stayed at the forefront of practical applications of computing. As hardware and software for computing have evolved, the latest advances have found eager users in the area of bioprocessing. Accomplishments and their significance can be appreciated by tracing the history of, and the interplay between, the computing tools and the problems that have been solved in bioprocessing.

  14. A Computational Chemistry Database for Semiconductor Processing

    Science.gov (United States)

    Jaffe, R.; Meyyappan, M.; Arnold, J. O. (Technical Monitor)

    1998-01-01

    The concept of a 'virtual reactor' or 'virtual prototyping' has received much attention recently in the semiconductor industry. Commercial codes to simulate thermal CVD and plasma processes have become available to aid in equipment and process design efforts. The virtual prototyping effort would go nowhere if codes do not come with a reliable database of chemical and physical properties of the gases involved in semiconductor processing. Commercial code vendors have no capability to generate such a database, and instead leave the task of finding whatever is needed to the user. While individual investigations of interesting chemical systems continue at universities, there has not been any large-scale effort to create a database. In this presentation, we outline our efforts in this area, which focus on the following five topics: 1. Thermal CVD reaction mechanisms and rate constants. 2. Thermochemical properties. 3. Transport properties. 4. Electron-molecule collision cross sections. 5. Gas-surface interactions.

  15. Applications of membrane computing

    CERN Document Server

    Ciobanu, Gabriel; Păun, Gheorghe

    2006-01-01

    Membrane computing is a branch of natural computing which investigates computing models abstracted from the structure and functioning of living cells and from their interactions in tissues or higher-order biological structures. The models considered, called membrane systems (P systems), are parallel, distributed computing models, processing multisets of symbols in cell-like compartmental architectures. In many applications membrane systems have considerable advantages - among these are their inherently discrete nature, parallelism, transparency, scalability and nondeterminism. In dedicated cha

  16. Maintaining Stored Procedures in Database Application

    Directory of Open Access Journals (Sweden)

    Santosh Kakade

    2012-06-01

    Stored procedures and triggers have an irreplaceable importance in any database application, as they provide a powerful way to code application logic that can be stored on the server and executed according to the needs of the application. Writing stored procedures for a database application involves a set of SQL statements with an assigned name that is stored in the database in compiled form so that it can be shared by a number of programs. The use of stored procedures can be helpful in controlling access to data (end-users may enter or change data but do not write procedures), preserving data integrity, and improving productivity (statements in a stored procedure only need to be written one time).
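
    A minimal sketch of this pattern, assuming a MySQL server and the mysql-connector-python package (the connection details and the orders schema are placeholders, not from the article): the procedure is stored on the server in compiled form, and end users can be granted EXECUTE on it without holding direct privileges on the underlying table.

        import mysql.connector

        cnx = mysql.connector.connect(host="localhost", user="app",
                                      password="secret", database="shop")
        cur = cnx.cursor()
        # Stored once on the server, shared by every client program.
        cur.execute("""
            CREATE PROCEDURE add_order(IN cust_id INT, IN total DECIMAL(10,2))
            BEGIN
                INSERT INTO orders (customer_id, order_total) VALUES (cust_id, total);
            END
        """)
        # Callers need only EXECUTE on add_order, not INSERT on orders --
        # the access-control benefit noted above.
        cur.callproc("add_order", (42, 19.99))
        cnx.commit()
        cur.close()
        cnx.close()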

  17. Professional iOS database application programming

    CERN Document Server

    Alessi, Patrick

    2013-01-01

    Updated and revised coverage that includes the latest versions of iOS and Xcode Whether you're a novice or experienced developer, you will want to dive into this updated resource on database application programming for the iPhone and iPad. Packed with more than 50 percent new and revised material - including completely rebuilt code, screenshots, and full coverage of new features pertaining to database programming and enterprise integration in iOS 6 - this must-have book intends to continue the precedent set by the previous edition by helping thousands of developers master database

  18. GPU computing and applications

    CERN Document Server

    See, Simon

    2015-01-01

    This book presents a collection of state-of-the-art research on GPU computing and applications. The major part of this book is selected from the work presented at the 2013 Symposium on GPU Computing and Applications held at Nanyang Technological University, Singapore (Oct 9, 2013). Three major domains of GPU application are covered in the book: (1) engineering design and simulation; (2) biomedical sciences; and (3) interactive and digital media. The book also addresses fundamental issues in GPU computing, with a focus on big data processing. Researchers and developers in GPU computing and applications will benefit from this book. Training professionals and educators can also use this book to learn the possible applications of GPU technology in various areas.

  19. Databases for Computer Science and Electronics: COMPENDEX, ELCOM, and INSPEC.

    Science.gov (United States)

    Marsden, Tom; Laub, Barbara

    1981-01-01

    Describes the selection policies, subject access, search aids, indexing, coverage, and currency of three online databases in the fields of electronics and computer science: COMPENDEX, ELCOM, and INSPEC. Sample searches are displayed for each database. A bibliography cites five references. (FM)

  20. Database challenges and solutions in neuroscientific applications.

    Science.gov (United States)

    Dashti, A E; Ghandeharizadeh, S; Stone, J; Swanson, L W; Thompson, R H

    1997-02-01

    In the scientific community, the quality and progress of various endeavors depend in part on the ability of researchers to share and exchange large quantities of heterogeneous data with one another efficiently. This requires controlled sharing and exchange of information among autonomous, distributed, and heterogeneous databases. In this paper, we focus on a neuroscience application, Neuroanatomical Rat Brain Viewer (NeuART Viewer) to demonstrate alternative database concepts that allow neuroscientists to manage and exchange data. Requirements for the NeuART application, in combination with an underlying network-aware database, are described at a conceptual level. Emphasis is placed on functionality from the user's perspective and on requirements that the database must fulfill. The most important functionality required by neuroscientists is the ability to construct brain models using information from different repositories. To accomplish such a task, users need to browse remote and local sources and summaries of data and capture relevant information to be used in building and extending the brain models. Other functionalities are also required, including posing queries related to brain models, augmenting and customizing brain models, and sharing brain models in a collaborative environment. An extensible object-oriented data model is presented to capture the many data types expected in this application. After presenting conceptual level design issues, we describe several known database solutions that support these requirements and discuss requirements that demand further research. Data integration for heterogeneous databases is discussed in terms of reducing or eliminating semantic heterogeneity when translations are made from one system to another. Performance enhancement mechanisms such as materialized views and spatial indexing for three-dimensional objects are explained and evaluated in the context of browsing, incorporating, and sharing. Policies for providing

  1. Applications of interval computations

    CERN Document Server

    Kreinovich, Vladik

    1996-01-01

    Primary Audience for the Book • Specialists in numerical computations who are interested in algorithms with automatic result verification. • Engineers, scientists, and practitioners who desire results with automatic verification and who would therefore benefit from the experience of successful applications. • Students in applied mathematics and computer science who want to learn these methods. Goal Of the Book This book contains surveys of applications of interval computations, i.e., applications of numerical methods with automatic result verification, that were presented at an international workshop on the subject in El Paso, Texas, February 23-25, 1995. The purpose of this book is to disseminate detailed and surveyed information about existing and potential applications of this new growing field. Brief Description of the Papers At the most fundamental level, interval arithmetic operations work with sets: The result of a single arithmetic operation is the set of all possible results as the o...

  2. The Art of Handling Databases by Cloud Computing

    Directory of Open Access Journals (Sweden)

    R. Anandhi

    2014-03-01

    It is obvious that there have been tremendous inventions and technological growth in the IT industry. Almost all upcoming technologies are trying to deal with one concept, "DATA": how data can be effectively stored, easily and accurately retrieved, efficiently distributed and queried, etc. One such new computing facility is cloud computing: accessing various computing resources that are geographically apart. It has surely attracted significant attention across the IT industry. This study explains and analyses the capabilities of cloud computing in dealing with databases, such as scalability, availability, consistency and elasticity.

  3. Alternative treatment technology information center computer database system

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, D. [Environmental Protection Agency, Edison, NJ (United States)

    1995-10-01

    The Alternative Treatment Technology Information Center (ATTIC) computer database system was developed pursuant to the 1986 Superfund law amendments. It provides up-to-date information on innovative treatment technologies to clean up hazardous waste sites. ATTIC v2.0 provides access to several independent databases as well as a mechanism for retrieving full-text documents of key literature. It can be accessed with a personal computer and modem 24 hours a day, and there are no user fees. ATTIC provides "one-stop shopping" for information on alternative treatment options by accessing several databases: (1) treatment technology database; this contains abstracts from the literature on all types of treatment technologies, including biological, chemical, physical, and thermal methods. The best literature as viewed by experts is highlighted. (2) treatability study database; this provides performance information on technologies to remove contaminants from wastewaters and soils. It is derived from treatability studies. This database is available through ATTIC or separately as a disk that can be mailed to you. (3) underground storage tank database; this presents information on underground storage tank corrective actions, surface spills, emergency response, and remedial actions. (4) oil/chemical spill database; this provides abstracts on treatment and disposal of spilled oil and chemicals. In addition to these separate databases, ATTIC allows immediate access to other disk-based systems such as the Vendor Information System for Innovative Treatment Technologies (VISITT) and the Bioremediation in the Field Search System (BFSS). The user may download these programs to their own PC via a high-speed modem. Also via modem, users are able to download entire documents through the ATTIC system. Currently, about fifty publications are available, including Superfund Innovative Technology Evaluation (SITE) program documents.

  4. Computer application for database management and networking of a radiophysics service

    Energy Technology Data Exchange (ETDEWEB)

    Ferrando Sanchez, A.; Cabello Murillo, E.; Diaz Fuentes, R.; Castro Novais, J.; Clemente Gutierrez, F.; Casa de Juan, M. A. de la; Adaimi Hernandez, P.

    2011-07-01

    Databases in quality control prove to be a powerful tool for recording, management, and statistical process control. Developed in a Windows environment under Access (Microsoft Office), our service implements this philosophy on the center's computer network. A computer that acts as the server provides the database to the treatment units for daily recording of quality control measurements and incidents. To avoid problems that are common in such setups, such as shortcuts that stop working after data migration, the possible use of duplicates, and erroneous data or data loss caused by errors in network connections, we proceeded to manage the connections and access to the databases centrally, easing maintenance and making use common to all service personnel.

  5. Security in Computer Applications

    CERN Document Server

    CERN. Geneva

    2004-01-01

    Computer security has been an increasing concern for IT professionals for a number of years, yet despite all the efforts, computer systems and networks remain highly vulnerable to attacks of different kinds. Design flaws and security bugs in the underlying software are among the main reasons for this. This lecture addresses the following question: how to create secure software? The lecture starts with a definition of computer security and an explanation of why it is so difficult to achieve. It then introduces the main security principles (like least-privilege, or defense-in-depth) and discusses security in different phases of the software development cycle. The emphasis is put on the implementation part: most common pitfalls and security bugs are listed, followed by advice on best practice for security development. The last part of the lecture covers some miscellaneous issues like the use of cryptography, rules for networking applications, and social engineering threats. This lecture was first given on Thursd...

  6. Software Application for Supporting the Education of Database Systems

    Science.gov (United States)

    Vágner, Anikó

    2015-01-01

    The article introduces an application which supports the education of database systems, particularly the teaching of SQL and PL/SQL in Oracle Database Management System environment. The application has two parts, one is the database schema and its content, and the other is a C# application. The schema is to administrate and store the tasks and the…

  7. The analysis of control trajectories using symbolic and database computing

    Science.gov (United States)

    Grossman, Robert

    1995-01-01

    This final report comprises the formal semi-annual status reports for this grant for the periods June 30-December 31, 1993, January 1-June 30, 1994, and June 1-December 31, 1994. The research supported by this grant is broadly concerned with the symbolic computation, mixed numeric-symbolic computation, and database computation of trajectories of dynamical systems, especially control systems. A review of work during the report period covers: trajectories and approximating series, the Cayley algebra of trees, actions of differential operators, geometrically stable integration algorithms, hybrid systems, trajectory stores, PTool, and other activities. A list of publications written during the report period is attached.

  8. Handbook of video databases design and applications

    CERN Document Server

    Furht, Borko

    2003-01-01

    INTRODUCTION: Introduction to Video Databases (Oge Marques and Borko Furht). VIDEO MODELING AND REPRESENTATION: Modeling Video Using Input/Output Markov Models with Application to Multi-Modal Event Detection (Ashutosh Garg, Milind R. Naphade, and Thomas S. Huang); Statistical Models of Video Structure and Semantics (Nuno Vasconcelos); Flavor: A Language for Media Representation (Alexandros Eleftheriadis and Danny Hong); Integrating Domain Knowledge and Visual Evidence to Support Highlight Detection in Sports Videos (Juergen Assfalg, Marco Bertini, Carlo Colombo, and Alberto Del Bimbo); A Generic Event Model and Sports Vid…

  9. Advanced Scientific Computing Environment Team new scientific database management task. Progress report

    Energy Technology Data Exchange (ETDEWEB)

    Church, J.P.; Roberts, J.C.; Sims, R.N.; Smetana, A.O.; Westmoreland, B.W.

    1991-06-01

    The mission of the ASCENT Team is to continually keep pace with, evaluate, and select emerging computing technologies to define and implement prototypic scientific environments that maximize the ability of scientists and engineers to manage scientific data. These environments are to be implemented in a manner consistent with the site computing architecture and standards and NRTSC/SCS strategic plans for scientific computing. The major trends in computing hardware and software technology clearly indicate that the future "computer" will be a network environment that comprises supercomputers, graphics boxes, mainframes, clusters, workstations, terminals, and microcomputers. This "network computer" will have an architecturally transparent operating system allowing the applications code to run on any box supplying the required computing resources. The environment will include a distributed database and database managing system(s) that permits use of relational, hierarchical, object oriented, GIS, et al, databases. To reach this goal requires a stepwise progression from the present assemblage of monolithic applications codes running on disparate hardware platforms and operating systems. The first steps include converting from the existing JOSHUA system to a new J80 system that complies with modern language standards, development of a new J90 prototype to provide JOSHUA capabilities on Unix platforms, development of portable graphics tools to greatly facilitate preparation of input and interpretation of output; and extension of "Jvv" concepts and capabilities to distributed and/or parallel computing environments.

  10. Computer Application in Daqing Oilfield

    Institute of Scientific and Technical Information of China (English)

    Li Manfu

    1994-01-01

    Daqing is the first oilfield of CNPC and was the earliest site of computer application in China's oil industry. From 1961 to the present, computing there has gone through four generations of evolution and application.

  11. Database application platform for earthquake numerical simulation

    Institute of Scientific and Technical Information of China (English)

    LUO Yan; ZHENG Yue-jun; CHEN Lian-wang; LU Yuan-zhong; HUANG Zhong-xian

    2006-01-01

    Introduction: In recent years, all kinds of seismological observation networks have been established, and they have been continuously producing large amounts of digital information. In addition, there are many research results on 3D velocity structure models and tectonic models of the crust (Huang and Zhao, 2006; Huang et al, 2003; Li and Mooney, 1998), which are valuable for studying the inner structure of the earth and the earthquake preparation process. It is badly needed to combine the observed data, experimental studies and theoretical analysis results by way of numerical simulation, and to develop a database and a corresponding application platform for numerical simulation; this is also a significant way to promote earthquake prediction.

  12. Development of a computational database for probabilistic safety assessment of nuclear research reactors

    Energy Technology Data Exchange (ETDEWEB)

    Macedo, Vagner S.; Oliveira, Patricia S. Pagetti de; Andrade, Delvonei Alves de, E-mail: vagner.macedo@usp.br, E-mail: patricia@ipen.br, E-mail: delvonei@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    The objective of this work is to describe the database being developed at IPEN - CNEN/SP for application in the Probabilistic Safety Assessment of nuclear research reactors. The database can be accessed by means of a computational program installed in the corporate computer network, named IPEN Intranet, and this access will be allowed only to professionals previously registered. Data updating, editing and searching tasks will be controlled by a system administrator according to IPEN Intranet security rules. The logical model and the physical structure of the database can be represented by an Entity Relationship Model, which is based on the operational routines performed by IPEN - CNEN/SP users. The web application designed for the management of the database is named PSADB. It is being developed with the MySQL database software and the PHP programming language. Data stored in this database are divided into modules that refer to technical specifications, operating history, maintenance history and failure events associated with the main components of the nuclear facilities. (author)
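
    Purely as an illustration of the modular layout described above (this is not the PSADB schema; the table and column names are invented, and sqlite3 stands in for the project's MySQL back end), the four data modules can be expressed as tables related to a component entity, which then supports PSA-style queries such as failures per operating hour.

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE component (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE technical_spec (component_id INTEGER REFERENCES component(id),
                                         parameter TEXT, value TEXT);
            CREATE TABLE operating_history (component_id INTEGER REFERENCES component(id),
                                            period TEXT, hours REAL);
            CREATE TABLE maintenance_history (component_id INTEGER REFERENCES component(id),
                                              performed_on TEXT, action TEXT);
            CREATE TABLE failure_event (component_id INTEGER REFERENCES component(id),
                                        occurred_on TEXT, mode TEXT);
        """)
        db.execute("INSERT INTO component (id, name) VALUES (1, 'primary pump')")
        db.execute("INSERT INTO operating_history VALUES (1, '2014', 8000.0)")
        db.execute("INSERT INTO failure_event VALUES (1, '2014-03-02', 'fails to start')")

        # A PSA-style question this layout supports: failures per operating hour.
        rate = db.execute("""
            SELECT COUNT(f.mode) * 1.0 / SUM(o.hours)
            FROM operating_history o JOIN failure_event f USING (component_id)
        """).fetchone()[0]
        print("failures per operating hour:", rate)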

  13. DEVELOPING MULTITHREADED DATABASE APPLICATION USING JAVA TOOLS AND ORACLE DATABASE MANAGEMENT SYSTEM IN INTRANET ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Raied Salman

    2015-11-01

    In many business organizations, database applications are designed and implemented using various DBMSs and programming languages. These applications are used to maintain databases for the organizations. The organization's departments can be located at different sites connected by an intranet environment. In such an environment, maintenance of database records becomes a complex assignment which needs to be resolved. In this paper, an intranet application is designed and implemented using the object-oriented programming language Java and the object-relational database management system Oracle in a multithreaded operating system environment.

  14. Reversible computing fundamentals, quantum computing, and applications

    CERN Document Server

    De Vos, Alexis

    2010-01-01

    Written by one of the few top internationally recognized experts in the field, this book concentrates on those topics that will remain fundamental, such as low power computing, reversible programming languages, and applications in thermodynamics. It describes reversible computing from various points of view: Boolean algebra, group theory, logic circuits, low-power electronics, communication, software, quantum computing. It is this multidisciplinary approach that makes it unique. Backed by numerous examples, this is useful for all levels of the scientific and academic community, from undergr

  15. Applications of Computer Algebra Conference

    CERN Document Server

    Martínez-Moro, Edgar

    2017-01-01

    The Applications of Computer Algebra (ACA) conference covers a wide range of topics from Coding Theory to Differential Algebra to Quantum Computing, focusing on the interactions of these and other areas with the discipline of Computer Algebra. This volume provides the latest developments in the field as well as its applications in various domains, including communications, modelling, and theoretical physics. The book will appeal to researchers and professors of computer algebra, applied mathematics, and computer science, as well as to engineers and computer scientists engaged in research and development.

  16. Addressing Security Challenges in Pervasive Computing Applications

    Science.gov (United States)

    2010-10-10

  17. Applications of computer algebra

    CERN Document Server

    1985-01-01

    Today, certain computer software systems exist which surpass the computational ability of researchers when their mathematical techniques are applied to many areas of science and engineering. These computer systems can perform a large portion of the calculations seen in mathematical analysis. Despite this massive power, thousands of people use these systems as a routine resource for everyday calculations. These software programs are commonly called "Computer Algebra" systems. They have names such as MACSYMA, MAPLE, muMATH, REDUCE and SMP. They are receiving credit as a computational aid with increasing regularity in articles in the scientific and engineering literature. When most people think about computers and scientific research these days, they imagine a machine grinding away, processing numbers arithmetically. It is not generally realized that, for a number of years, computers have been performing non-numeric computations. This means, for example, that one inputs an equation and obtains a closed for...

  18. Molten salts database for energy applications

    CERN Document Server

    Serrano-López, Roberto; Cuesta-López, Santiago

    2013-01-01

    The growing interest in energy applications of molten salts is justified by several of their properties. Their possible usage as a coolant, heat transfer fluid or heat storage substrate requires refined thermo-hydrodynamic calculations. Many researchers are using simulation techniques, such as Computational Fluid Dynamics (CFD), for their projects or conceptual designs. The aim of this work is to provide a review of the basic properties (density, viscosity, thermal conductivity and heat capacity) of the most common and most cited salt mixtures. After checking the data, tabulated and graphical outputs are given in order to offer the most suitable available values to be used as input parameters for other calculations or simulations. The reviewed values show a general scattering in characterization, mainly in thermal properties. This disagreement suggests that, in several cases, new studies must be started (and even new measurement techniques should be developed) to obtain accurate values.

  19. FERN Ethnomedicinal Plant Database: Exploring Fern Ethnomedicinal Plants Knowledge for Computational Drug Discovery.

    Science.gov (United States)

    Thakar, Sambhaji B; Ghorpade, Pradnya N; Kale, Manisha V; Sonawane, Kailas D

    2015-01-01

    Fern plants are known for their ethnomedicinal applications. A huge amount of fern medicinal plant information is scattered across the literature in text form; hence, developing a database is an appropriate way to cope with the situation. Given the importance of medicinally useful fern plants, we developed a web-based database which contains information about several groups of ferns, their medicinal uses, chemical constituents, and protein/enzyme sequences isolated from different fern plants. The Fern ethnomedicinal plant database is a comprehensive, content-managed, web-based database system used to retrieve a collection of factual knowledge related to ethnomedicinal fern species. Most of the protein/enzyme sequences have been extracted from the NCBI protein sequence database. The fern species, family name, identification, NCBI taxonomy ID, geographical occurrence, trial for, plant parts used, ethnomedicinal importance and morphological characteristics were collected from scientific literature and journals available in text form. NCBI's BLAST, InterPro, phylogeny and Clustal W web resources have also been provided for future comparative studies, so users can get information related to fern plants and their medicinal applications in one place. The database includes information on 100 fern medicinal species. It would be advantageous for computational drug discovery, and useful to botanists and botanically interested persons, pharmacologists, researchers, biochemists, plant biotechnologists, ayurvedic practitioners, doctors/pharmacists, traditional medicine users, farmers, agricultural students and teachers from universities and colleges, and fern plant lovers. This effort should provide essential knowledge for users about applications for drug discovery, the conservation of fern species around the world, and finally to create…

  20. Applications of GIS and database technologies to manage a Karst Feature Database

    Science.gov (United States)

    Gao, Y.; Tipping, R.G.; Alexander, E.C.

    2006-01-01

    This paper describes the management of a Karst Feature Database (KFD) in Minnesota. Two sets of applications, in GIS and in a Database Management System (DBMS), have been developed for the KFD of Minnesota. These applications were used to manage and to enhance the usability of the KFD. Structured Query Language (SQL) was used to manipulate transactions of the database and to support the functionality of the user interfaces. The Database Administrator (DBA) authorized users with different access permissions to enhance the security of the database. Database consistency and recovery are accomplished by creating data logs and maintaining backups on a regular basis. The working database provides guidelines and management tools for future studies of karst features in Minnesota. The methodology used in designing this DBMS is applicable to developing GIS-based databases to analyze and manage geomorphic and hydrologic datasets at both regional and local scales. The short-term goal of this research is to develop a regional KFD for the Upper Mississippi Valley Karst, and the long-term goal is to expand this database to manage and study karst features at national and global scales.
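
    The abstract does not reproduce the KFD schema or its SQL; the following is a hypothetical, minimal sketch of the pattern it describes, transactional inserts and queries over a feature table, issued here from Python against SQLite purely for self-containment (not the DBMS actually used for the KFD).

    ```python
    import sqlite3

    # Hypothetical, simplified karst-feature table; the real KFD schema is
    # not given in the abstract.
    con = sqlite3.connect(":memory:")
    con.execute("""
        CREATE TABLE karst_feature (
            id INTEGER PRIMARY KEY,
            feature_type TEXT NOT NULL,   -- e.g. 'sinkhole', 'spring'
            county TEXT,
            lat REAL, lon REAL
        )
    """)
    with con:  # transaction: committed on success, rolled back on error
        con.execute(
            "INSERT INTO karst_feature (feature_type, county, lat, lon) "
            "VALUES (?, ?, ?, ?)",
            ("sinkhole", "Winona", 43.98, -91.64),
        )
    for row in con.execute(
            "SELECT feature_type, county FROM karst_feature WHERE county = ?",
            ("Winona",)):
        print(row)
    ```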

  1. MINEs: open access databases of computationally predicted enzyme promiscuity products for untargeted metabolomics.

    Science.gov (United States)

    Jeffryes, James G; Colastani, Ricardo L; Elbadawi-Sidhu, Mona; Kind, Tobias; Niehaus, Thomas D; Broadbelt, Linda J; Hanson, Andrew D; Fiehn, Oliver; Tyo, Keith E J; Henry, Christopher S

    2015-01-01

    In spite of its great promise, metabolomics has proven difficult to execute in an untargeted and generalizable manner. Liquid chromatography-mass spectrometry (LC-MS) has made it possible to gather data on thousands of cellular metabolites. However, matching metabolites to their spectral features continues to be a bottleneck, meaning that much of the collected information remains uninterpreted and that new metabolites are seldom discovered in untargeted studies. These challenges require new approaches that consider compounds beyond those available in curated biochemistry databases. Here we present Metabolic In silico Network Expansions (MINEs), an extension of known metabolite databases to include molecules that have not been observed but are likely to occur based on known metabolites and common biochemical reactions. We utilize an algorithm called the Biochemical Network Integrated Computational Explorer (BNICE) and expert-curated reaction rules based on the Enzyme Commission classification system to propose the novel chemical structures and reactions that comprise MINE databases. Starting from the Kyoto Encyclopedia of Genes and Genomes (KEGG) COMPOUND database, the MINE contains over 571,000 compounds, of which 93% are not present in the PubChem database. However, these MINE compounds have on average higher structural similarity to natural products than compounds from KEGG or PubChem. MINE databases were able to propose annotations for 98.6% of a set of 667 MassBank spectra, 14% more than KEGG alone and equivalent to PubChem, while returning far fewer candidates per spectrum than PubChem (46 vs. 1715 median candidates). Application of MINEs to LC-MS accurate mass data enabled the identity of an unknown peak to be confidently predicted. MINE databases are freely accessible for non-commercial use via user-friendly web tools at http://minedatabase.mcs.anl.gov and developer-friendly APIs. MINEs improve metabolomics peak identification as compared to general chemical…
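
    The rule-driven expansion at the heart of MINE generation can be pictured with a toy sketch; the "rules" below are trivial string rewrites standing in for BNICE's expert-curated biochemical reaction rules, and the "compounds" are plain labels rather than molecular structures.

    ```python
    # Toy illustration of iterative network expansion: start from known
    # "compounds" and apply transformation rules to propose new ones.
    # Real MINE generation applies curated biochemical reaction rules to
    # molecular structures; strings are used here only as stand-ins.

    def expand(seeds: set[str], rules, generations: int) -> set[str]:
        known = set(seeds)
        frontier = set(seeds)
        for _ in range(generations):
            new = {rule(c) for c in frontier for rule in rules} - known
            known |= new          # accept newly proposed compounds
            frontier = new        # expand only the latest generation next
        return known

    if __name__ == "__main__":
        hydroxylate = lambda c: c + "-OH"    # stand-in for an EC-style rule
        methylate = lambda c: c + "-CH3"
        print(sorted(expand({"A", "B"}, [hydroxylate, methylate], 2)))
    ```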

  2. Large Scale Explorative Oligonucleotide Probe Selection for Thousands of Genetic Groups on a Computing Grid: Application to Phylogenetic Probe Design Using a Curated Small Subunit Ribosomal RNA Gene Database

    Directory of Open Access Journals (Sweden)

    Faouzi Jaziri

    2014-01-01

    Full Text Available Phylogenetic Oligonucleotide Arrays (POAs) were recently adapted for studying huge microbial communities in a flexible and easy-to-use way. POAs coupled with the use of explorative probes to detect the unknown part of a community are now among the most powerful approaches for a better understanding of microbial community functioning. However, the selection of probes remains a very difficult task. The rapid growth of environmental databases has led to an exponential increase in the data to be managed for an efficient design. Consequently, the use of high performance computing facilities is mandatory. In this paper, we present an efficient parallelization method to select known and explorative oligonucleotide probes at large scale using computing grids. We implemented software that generates and monitors thousands of jobs over the European Grid Infrastructure (EGI). We also developed a new algorithm for the construction of a high-quality curated phylogenetic database to avoid erroneous design due to bad sequence affiliation. We present the performance and statistics of our method on real biological datasets, based on a phylogenetic prokaryotic database at the genus level and a complete design of about 20,000 probes for 2,069 genera of prokaryotes.

  3. A computational framework for a database of terrestrial biosphere models

    Science.gov (United States)

    Metzler, Holger; Müller, Markus; Ceballos-Núñez, Verónika; Sierra, Carlos A.

    2016-04-01

    Most terrestrial biosphere models consist of a set of coupled first-order ordinary differential equations. Each equation represents a pool containing carbon with a certain turnover rate. Although such models share some basic mathematical structures, they can have very different properties, such as the number of pools, cycling rates, and internal fluxes. We present a computational framework that helps analyze the structure and behavior of terrestrial biosphere models, using the process of soil organic matter decomposition as an example. The same framework can also be used for other sub-processes such as carbon fixation or allocation. First, the models are fed into a database consisting of simple text files with a common structure. Then they are read in using Python and transformed into an internal 'Model Class' that can be used to automatically create an overview stating the model's structure, state variables, and internal and external fluxes. SymPy, a Python library for symbolic mathematics, also helps calculate the Jacobian matrix at given steady states, if available, and the eigenvalues of this matrix. If complete parameter sets are available, the model can also be run using R to simulate its behavior under certain conditions and to support a deeper stability analysis. In this case, the framework is also able to provide phase-plane plots where appropriate. Furthermore, an overview of all the models in the database can be given to help identify their similarities and differences.
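
    A minimal sketch of the symbolic step described above, applied to a hypothetical two-pool carbon model (not one drawn from the database): SymPy builds the flux equations, derives the Jacobian, and returns its eigenvalues and the steady state.

    ```python
    import sympy as sp

    # Hypothetical two-pool soil-carbon model:
    #   dC1/dt = I - k1*C1
    #   dC2/dt = a21*k1*C1 - k2*C2
    C1, C2, I, k1, k2, a21 = sp.symbols("C1 C2 I k1 k2 a21", positive=True)
    f = sp.Matrix([I - k1 * C1, a21 * k1 * C1 - k2 * C2])

    J = f.jacobian([C1, C2])      # Jacobian of the pool system
    print(J)                      # Matrix([[-k1, 0], [a21*k1, -k2]])
    print(J.eigenvals())          # {-k1: 1, -k2: 1} -> stable steady state

    steady = sp.solve(f, [C1, C2], dict=True)[0]
    print(steady)                 # {C1: I/k1, C2: I*a21/k2}
    ```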

  4. BUSINESS MODELLING AND DATABASE DESIGN IN CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    Mihai-Constantin AVORNICULUI

    2015-04-01

    Full Text Available Electronic commerce has grown constantly from one year to another over the last decade; few areas have registered comparable growth. It covers computerized data exchange, but also electronic messaging, linear data banks and electronic payment transfer. Cloud computing, a relatively new concept and term, is a model for accessing, via the internet, distributed systems of configurable computing resources that can be provisioned on demand, quickly and with minimal management effort or intervention from the client and the provider. Behind an electronic commerce system in the cloud there is a database which contains the information necessary for the transactions in the system. Business modelling brings many benefits, which make the design of the database used by electronic commerce systems in the cloud considerably easier.

  5. Application Design for Wearable Computing

    CERN Document Server

    Siewiorek, Dan; Starner, Thad

    2008-01-01

    This lecture describes application design for wearable computing, providing a blend of experience-based insights, lessons learned in application development, and guidelines on how to frame problems and address a specific design context, followed by more detailed issues and solution approaches at the next level of application development. The lecture takes the viewpoint of a potential designer or researcher in this field and aims to present this integrated material in one place. Designing wearable computer interfaces requires attention to many different factors because of the computer's closene…

  6. Analyzing high energy physics data using database computing: Preliminary report

    Science.gov (United States)

    Baden, Andrew; Day, Chris; Grossman, Robert; Lifka, Dave; Lusk, Ewing; May, Edward; Price, Larry

    1991-01-01

    A proof-of-concept system is described for analyzing high energy physics (HEP) data using database computing. The system is designed to scale up to the size required for HEP experiments at the Superconducting Super Collider (SSC) laboratory. These experiments will require collecting and analyzing approximately 10 to 100 million 'events' per year during proton colliding beam collisions. Each 'event' consists of a set of vectors with a total length of approximately one megabyte. This represents an increase of approximately 2 to 3 orders of magnitude in the amount of data accumulated by present HEP experiments. The system is called the HEPDBC System (High Energy Physics Database Computing System). At present, the Mark 0 HEPDBC System is completed and can produce analyses of HEP experimental data approximately an order of magnitude faster than current production software on data sets of approximately 1 GB. The Mark 1 HEPDBC System is currently undergoing testing and is designed to analyze data sets 10 to 100 times larger.

  7. Evolution and applications of plant pathway resources and databases

    DEFF Research Database (Denmark)

    Sucaet, Yves; Deva, Taru

    2011-01-01

    Plants are important sources of food and plant products are essential for modern human life. Plants are increasingly gaining importance as drug and fuel resources, bioremediation tools and as tools for recombinant technology. Considering these applications, database infrastructure for plant model...... quantitative modeling of plant biosystems. We propose Good Database Practice as a possible model for collaboration and to ease future integration efforts....

  8. Klaim-DB: A Modeling Language for Distributed Database Applications

    DEFF Research Database (Denmark)

    Wu, Xi; Li, Ximeng; Lluch Lafuente, Alberto;

    2015-01-01

    We present the modelling language, Klaim-DB, for distributed database applications. Klaim-DB borrows the distributed nets of the coordination language Klaim but essentially re-incarnates the tuple spaces of Klaim as databases, and provides high-level language abstractions for the access and manip...

  9. Cloud Computing Utility and Applications

    Directory of Open Access Journals (Sweden)

    Pradeep Kumar Tiwari

    2011-12-01

    Full Text Available Cloud architecture provides services on an on-demand basis via the internet (WWW services). Applications designed for the cloud computing environment, or which support the cloud paradigm, are provisioned on demand according to user requirements, drawing on hardware, software and other resources as needed. APIs used in cloud computing offer the major advantage of industrial strength, where the complex reliability and scalability logic of the underlying services remains implemented and hidden in the cloud environment. Cloud computing provides the highest utility in terms of resource utilization, resource sharing and requirement gathering, and serves other resources that need it. In this paper we discuss several utilities and their applications, providing a broad discussion that is useful for cloud computing research.

  10. Applications of Graph Theory in Computer Science

    Directory of Open Access Journals (Sweden)

    U. Sekar

    2013-11-01

    Full Text Available The field of mathematics plays a vital role in various disciplines. One of the important areas in mathematics is graph theory, which is used in structural models. Structural arrangements of various objects or technologies lead to new inventions and modifications of the existing environment for enhancement in those fields. Graph theory started its journey with the problem of the Königsberg bridges in 1735. This paper gives an overview of the applications of graph theory in heterogeneous fields, but mainly focuses on computer science applications that use graph-theoretical concepts. Various papers based on graph theory have been studied, related to scheduling concepts and computer science applications, and an overview is presented here. Graph-theoretical ideas are highly utilized by computer science applications, especially in research areas such as data mining, image segmentation, clustering, image capturing and networking. For example, a data structure can be designed in the form of a tree, which in turn utilizes vertices and edges. Similarly, modeling of network topologies can be done using graph concepts. In the same way, the important concept of graph coloring is utilized in resource allocation and scheduling. Also, paths, walks and circuits in graph theory are used in applications such as the traveling salesman problem, database design concepts and resource networking. This leads to the development of new algorithms and new theorems that can be used in many applications. The first section gives the historical background of graph theory and some applications in scheduling; the second section emphasizes how graph theory is utilized in various computer applications.
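
    As a small, concrete instance of the scheduling use mentioned above, the sketch below applies greedy graph coloring to a toy conflict graph; vertices are tasks, edges are conflicts, and colors can be read as time slots or resources.

    ```python
    # Greedy graph coloring: conflicting tasks (edges) receive different
    # colors, so colors can be read as time slots. Toy conflict graph only.
    conflicts = {
        "A": {"B", "C"},
        "B": {"A", "C"},
        "C": {"A", "B", "D"},
        "D": {"C"},
    }

    colors: dict[str, int] = {}
    for v in conflicts:                      # visit vertices in some order
        used = {colors[u] for u in conflicts[v] if u in colors}
        colors[v] = next(c for c in range(len(conflicts)) if c not in used)

    print(colors)  # e.g. {'A': 0, 'B': 1, 'C': 2, 'D': 0}
    ```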

  11. Teaching Psychology Students Computer Applications.

    Science.gov (United States)

    Atnip, Gilbert W.

    This paper describes an undergraduate-level course designed to teach the applications of computers that are most relevant in the social sciences, especially psychology. After an introduction to the basic concepts and terminology of computing, separate units were devoted to word processing, data analysis, data acquisition, artificial intelligence,…

  12. Computational fluid dynamic applications

    Energy Technology Data Exchange (ETDEWEB)

    Chang, S.-L.; Lottes, S. A.; Zhou, C. Q.

    2000-04-03

    The rapid advancement of computational capability, including speed and memory size, has prompted the wide use of computational fluid dynamics (CFD) codes to simulate complex flow systems. CFD simulations are used to study the operating problems encountered in a system, to evaluate the impacts of operation/design parameters on the performance of a system, and to investigate novel design concepts. CFD codes are generally developed based on the conservation laws of mass, momentum, and energy that govern the characteristics of a flow. The governing equations are simplified and discretized for a selected computational grid system, and numerical methods are selected to calculate approximate flow properties. For turbulent, reacting, and multiphase flow systems, the complex processes relating to these aspects of the flow, i.e., turbulent diffusion, combustion kinetics, interfacial drag, and heat and mass transfer, are described by mathematical models, based on a combination of fundamental physics and empirical data, that are incorporated into the code. CFD simulation has been applied to a large variety of practical and industrial-scale flow systems.
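
    As a minimal, generic illustration of the discretize-and-advance step described above (not a sketch of any particular production code), the following advances the 1-D advection-diffusion equation du/dt + a*du/dx = nu*d2u/dx2 with an explicit upwind/central finite-difference scheme.

    ```python
    import numpy as np

    # Explicit finite-difference sketch (upwind advection, central diffusion).
    # Illustrative only; production CFD codes use far more elaborate grids,
    # schemes, turbulence models and boundary treatments.
    nx, L = 200, 1.0
    dx = L / nx
    a, nu = 1.0, 1e-3
    dt = 0.4 * min(dx / a, dx**2 / (2 * nu))   # respect CFL / diffusion limits

    x = np.linspace(0.0, L, nx)
    u = np.exp(-200 * (x - 0.3) ** 2)          # initial Gaussian pulse

    for _ in range(500):
        adv = -a * (u - np.roll(u, 1)) / dx                     # upwind (a > 0)
        dif = nu * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
        u = u + dt * (adv + dif)               # periodic boundaries via roll

    print(f"mass = {u.sum() * dx:.4f}, peak = {u.max():.4f}")
    ```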

  13. FaceWarehouse: a 3D facial expression database for visual computing.

    Science.gov (United States)

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
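
    The bilinear-model evaluation the abstract describes can be sketched as a pair of tensor contractions; the tensor sizes and weight vectors below are made up for illustration and are not FaceWarehouse's actual dimensions.

    ```python
    import numpy as np

    # Bilinear face model sketch: a core tensor of shape
    # (n_coords, n_identities, n_expressions) is contracted with an identity
    # weight vector and an expression weight vector to produce one mesh.
    # All sizes and weights below are illustrative placeholders.
    rng = np.random.default_rng(0)
    n_coords, n_id, n_expr = 3 * 1000, 50, 20        # 1000 vertices, (x, y, z)
    core = rng.standard_normal((n_coords, n_id, n_expr))

    w_id = rng.dirichlet(np.ones(n_id))              # identity attribute
    w_expr = rng.dirichlet(np.ones(n_expr))          # expression attribute

    face = np.tensordot(core, w_id, axes=(1, 0))     # -> (n_coords, n_expr)
    face = np.tensordot(face, w_expr, axes=(1, 0))   # -> (n_coords,)
    vertices = face.reshape(-1, 3)                   # one vertex per row
    print(vertices.shape)                            # (1000, 3)
    ```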

  14. Database security and encryption technology research and application

    Science.gov (United States)

    Zhu, Li-juan

    2013-03-01

    The main purpose of this paper is to discuss the current problem of database information leakage and the important role played by message encryption techniques in database security, as well as the principle of MD5 technology and its use in websites and applications. The article is divided into an introduction, an overview of MD5 technology, the use of MD5 technology, and a final summary. On the side of requirements and applications, the paper gives readers a detailed and clear understanding of the MD5 principle, its importance in database security, and its use.
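
    For readers unfamiliar with the mechanism, a brief caveat and sketch: strictly speaking, MD5 is a one-way hash rather than reversible encryption, and it is no longer considered collision-resistant; stronger key-derivation functions are preferred today. The snippet below, using only Python's standard library, illustrates the salted-digest pattern such papers discuss.

    ```python
    import hashlib
    import os

    # MD5 produces a one-way digest, not reversible ciphertext, and it is
    # cryptographically broken; modern systems should prefer purpose-built
    # password hashing such as hashlib.pbkdf2_hmac or hashlib.scrypt.
    # Shown only to illustrate the salted-digest storage pattern.

    def md5_with_salt(password: str, salt: bytes | None = None) -> tuple[bytes, str]:
        salt = salt if salt is not None else os.urandom(16)
        digest = hashlib.md5(salt + password.encode("utf-8")).hexdigest()
        return salt, digest

    salt, stored = md5_with_salt("s3cret")
    print(stored)                                    # 32-hex-char digest
    # verification: recompute with the stored salt and compare
    assert md5_with_salt("s3cret", salt)[1] == stored
    ```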

  15. Computer applications in clinical psychology

    CERN Document Server

    Zamoşteanu, Alina Oana

    2012-01-01

    Computer-assisted analysis is no longer a novelty, but a necessity in all areas of psychology. A number of studies examine the limits of computer-assisted and computer-analyzed interpretations, as well as their advantages. A series of studies aims to assess how well computer-assisted programs are able to establish a diagnosis concerning the presence of certain mental disorders. We present the results of one computer application in clinical psychology regarding the assessment of Theory of Mind capacity through animation.

  16. Computer applications in clinical psychology

    Directory of Open Access Journals (Sweden)

    Alina Oana Zamoşteanu

    2009-01-01

    Full Text Available Computer-assisted analysis is no longer a novelty, but a necessity in all areas of psychology. A number of studies examine the limits of computer-assisted and computer-analyzed interpretations, as well as their advantages. A series of studies aims to assess how well computer-assisted programs are able to establish a diagnosis concerning the presence of certain mental disorders. We present the results of one computer application in clinical psychology regarding the assessment of Theory of Mind capacity through animation.

  17. Engineering applications of soft computing

    CERN Document Server

    Díaz-Cortés, Margarita-Arimatea; Rojas, Raúl

    2017-01-01

    This book bridges the gap between Soft Computing techniques and their applications to complex engineering problems. In each chapter we endeavor to explain the basic ideas behind the proposed applications in an accessible format for readers who may not possess a background in some of the fields. Therefore, engineers or practitioners who are not familiar with Soft Computing methods will appreciate that the techniques discussed go beyond simple theoretical tools, since they have been adapted to solve significant problems that commonly arise in such areas. At the same time, the book will show members of the Soft Computing community how engineering problems are now being solved and handled with the help of intelligent approaches. Highlighting new applications and implementations of Soft Computing approaches in various engineering contexts, the book is divided into 12 chapters. Further, it has been structured so that each chapter can be read independently of the others.

  18. VENDOR-INDEPENDENT DATABASE APPLICATIONS – AN ARCHITECTURAL APPROACH

    Directory of Open Access Journals (Sweden)

    Mircea Petrescu

    2004-12-01

    Full Text Available The ability to switch between different Database Management Systems (DBMSs) is a requirement of many database applications, and one in which many researchers have invested effort. The main obstacle is the non-uniformity across vendors of the SQL language, the de-facto standard in the industry. Also, an application that maps between an object-oriented application and a relational database needs to be designed properly in order to achieve the required level of performance and maintainability. This paper presents, extends and further details the Vendor-Independent Database Application (VIDA) framework, initially proposed by us in [9]. The proposed VIDA architecture is described in depth, based on our practice and experience in this field. The design decisions are presented along with supporting arguments. The VIDA architecture presented here aims to fully decouple the application both from the query language and from the database access technology, providing a uniform view of the database. The problems encountered, during both design and implementation, are presented along with their solutions. The available data access technologies and languages are also surveyed, and their conformity with a standard is discussed.
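
    The VIDA framework itself is not reproduced in the abstract; the sketch below only illustrates the general decoupling idea in Python terms, with hypothetical names: application code sees a narrow interface, and vendor-specific SQL is confined to interchangeable adapters.

    ```python
    from abc import ABC, abstractmethod

    # Hypothetical illustration of the decoupling idea: the application sees
    # only this interface; each DBMS gets its own adapter with vendor SQL.
    class CustomerRepository(ABC):
        @abstractmethod
        def find_by_name(self, name: str) -> list[tuple]: ...

    class SQLiteCustomerRepository(CustomerRepository):
        def __init__(self, con):
            self.con = con  # an open DB-API connection

        def find_by_name(self, name):
            # vendor-specific SQL is confined to this adapter
            return self.con.execute(
                "SELECT id, name FROM customer WHERE name = ?", (name,)
            ).fetchall()

    # Switching DBMS vendors means adding another adapter (e.g. for Oracle
    # or PostgreSQL) without touching code that depends on the interface.
    ```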

  19. Registry of EPA Applications, Models, and Databases

    Data.gov (United States)

    U.S. Environmental Protection Agency — READ is EPA's authoritative source for information about Agency information resources, including applications/systems, datasets and models. READ is one component of...

  20. Computational Linguistics Applications

    CERN Document Server

    Piasecki, Maciej; Jassem, Krzysztof; Fuglewicz, Piotr

    2013-01-01

    The ever-growing popularity of Google over the recent decade has required a specific method of man-machine communication: the human query should be short, whereas the machine answer may take the form of a wide range of documents. This type of communication has triggered rapid development in the domain of Information Extraction, aimed at providing the asker with more precise information. The recent success of intelligent personal assistants supporting users in searching or even extracting information and answers from large collections of electronic documents signals the onset of a new era in man-machine communication – we shall soon explain to our small devices what we need to know and expect valuable answers to be delivered quickly and automatically. The progress of man-machine communication is accompanied by growth in the significance of applied Computational Linguistics – we need machines to understand much more of the language we speak naturally than is the case with up-to-date search systems. Moreover, w…

  1. Scalable Database Access Technologies for ATLAS Distributed Computing

    CERN Document Server

    Vaniachine, A

    2009-01-01

    ATLAS event data processing requires access to non-event data (detector conditions, calibrations, etc.) stored in relational databases. The database-resident data are crucial for the event data reconstruction processing steps and often required for user analysis. A main focus of ATLAS database operations is on the worldwide distribution of the Conditions DB data, which are necessary for every ATLAS data processing job. Since Conditions DB access is critical for operations with real data, we have developed a system where a different technology can be used as a redundant backup. The redundant database operations infrastructure fully satisfies the requirements of ATLAS reprocessing, which has been proven on a scale of one billion database queries during two reprocessing campaigns of 0.5 PB of single-beam and cosmics data on the Grid. To collect experience and provide input for the best choice of technologies, several promising options for efficient database access in user analysis were evaluated successfully. We pre…

  2. Key Technologies and Applications of Secure Multiparty Computation

    Directory of Open Access Journals (Sweden)

    Xiaoqiang Guo

    2013-07-01

    Full Text Available With the advent of the information age, network security has become particularly important. Secure multiparty computation is an important branch of cryptography and a hotspot in the field of information security. It expands the scope of traditional distributed computing and information security, providing a new computing model for collaborative network computing. First we introduce several key technologies of secure multiparty computation: secret sharing and verifiable secret sharing, homomorphic public key cryptosystems, mix networks, zero-knowledge proofs, oblivious transfer, and the millionaires' protocol. Second we discuss the applications of secure multiparty computation in electronic voting, electronic auctions, threshold signatures, database queries, data mining, mechanical engineering and other fields.
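
    Of the listed building blocks, secret sharing is the easiest to sketch. Below is a minimal additive scheme over a prime field; its additivity is what enables simple secure-sum protocols of the kind used in the applications above.

    ```python
    import secrets

    # Minimal sketch of additive secret sharing over a prime field: a secret
    # splits into n shares summing to it mod p; any n-1 shares reveal nothing.
    P = 2**61 - 1  # a Mersenne prime used as the field modulus

    def share(secret: int, n: int) -> list[int]:
        shares = [secrets.randbelow(P) for _ in range(n - 1)]
        shares.append((secret - sum(shares)) % P)   # force the sum to match
        return shares

    def reconstruct(shares: list[int]) -> int:
        return sum(shares) % P

    s = 123456789
    assert reconstruct(share(s, 3)) == s
    # Additivity enables secure sums: parties add their shares locally.
    assert reconstruct([a + b for a, b in zip(share(10, 3), share(32, 3))]) == 42
    ```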

  3. Thermodynamic database of multi-component Mg alloys and its application to solidification and heat treatment

    Directory of Open Access Journals (Sweden)

    Guanglong Xu

    2016-12-01

    Full Text Available An overview of a thermodynamic database for multi-component Mg alloys is given in this work. The database includes thermodynamic descriptions for 145 binary systems and 48 ternary systems in the 23-component Mg–Ag–Al–Ca–Ce–Cu–Fe–Gd–K–La–Li–Mn–Na–Nd–Ni–Pr–Si–Sn–Sr–Th–Y–Zn–Zr system. First, the major computational and experimental tools used to establish the thermodynamic database of Mg alloys are briefly described. Subsequently, representative binary and ternary systems are shown to demonstrate the major features of the database. Finally, the application of the thermodynamic database to solidification simulation and the selection of heat treatment schedules is described.

  4. Computer Applications in Metallurgical Research

    Directory of Open Access Journals (Sweden)

    V. Madhu

    1994-04-01

    Full Text Available This paper outlines the current efforts in computer applications in metallurgical research at the Defence Metallurgical Research Laboratory, Hyderabad. Work being done on armour penetration studies, optimization of armour profiles for fighting vehicles, computer control of multifunction 2000 tonne forge press, drawing of processing mechanism maps, process modelling of titanium sponge production and methods of curve fitting to experimental data, is described and briefly discussed.

  5. Computational nanophotonics modeling and applications

    CERN Document Server

    Musa, Sarhan M

    2013-01-01

    This reference offers tools for engineers, scientists, biologists, and others working with the computational techniques of nanophotonics. It introduces the key concepts of computational methods in a manner that is easily digestible for newcomers to the field. The book also examines future applications of nanophotonics in the technical industry and covers new developments and interdisciplinary research in engineering, science, and medicine. It provides an overview of the key computational nanophotonics and describes the technologies with an emphasis on how they work and their key benefits.

  6. Application-driven computational imaging

    Science.gov (United States)

    McCloskey, Scott

    2016-05-01

    This paper addresses how the image processing steps involved in computational imaging can be adapted to specific image-based recognition tasks, and how significant reductions in computational complexity can be achieved by leveraging the recognition algorithm's robustness to defocus, poor exposure, and the like. Unlike aesthetic applications of computational imaging, recognition systems need not produce the best possible image quality, but instead need only satisfy certain quality thresholds that allow for reliable recognition. The paper specifically addresses light field processing for barcode scanning, and presents three optimizations which bring light field processing within the complexity limits of low-powered embedded processors.

  7. Migration of legacy mumps applications to relational database servers.

    Science.gov (United States)

    O'Kane, K C

    2001-07-01

    An extended implementation of the Mumps language is described that facilitates vendor-neutral migration of legacy Mumps applications to SQL-based relational database servers. Implemented as a compiler, this system translates Mumps programs to operating-system-independent, standard C code for subsequent compilation to fully stand-alone, binary executables. Added built-in functions and support modules extend the native hierarchical Mumps database with access to industry-standard, networked relational database management servers (RDBMSs), thus freeing Mumps applications from dependence upon vendor-specific, proprietary, unstandardized database models. Unlike Mumps systems that have added captive, proprietary RDBMS access, the programs generated by this development environment can be used with any RDBMS system that supports common network access protocols. Additional features include a built-in web server interface and the ability to interoperate directly with programs and functions written in other languages.

  8. A distributed computing tool for generating neural simulation databases.

    Science.gov (United States)

    Calin-Jageman, Robert J; Katz, Paul S

    2006-12-01

    After developing a model neuron or network, it is important to systematically explore its behavior across a wide range of parameter values or experimental conditions, or both. However, compiling a very large set of simulation runs is challenging because it typically requires both access to and expertise with high-performance computing facilities. To lower the barrier for large-scale model analysis, we have developed NeuronPM, a client/server application that creates a "screen-saver" cluster for running simulations in NEURON (Hines & Carnevale, 1997). NeuronPM provides a user-friendly way to use existing computing resources to catalog the performance of a neural simulation across a wide range of parameter values and experimental conditions. The NeuronPM client is a Windows-based screen saver, and the NeuronPM server can be hosted on any Apache/PHP/MySQL server. During idle time, the client retrieves model files and work assignments from the server, invokes NEURON to run the simulation, and returns results to the server. Administrative panels make it simple to upload model files, define the parameters and conditions to vary, and then monitor client status and work progress. NeuronPM is open-source freeware and is available for download at http://neuronpm.homeip.net. It is a useful entry-level tool for systematically analyzing complex neuron and network simulations.
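
    NeuronPM's actual client is a Windows screen saver talking to an Apache/PHP/MySQL server; the loop below is only a schematic of the fetch-run-report cycle described, with hypothetical endpoint names and a stubbed NEURON invocation.

    ```python
    import time
    import urllib.request

    # Schematic of a screen-saver cluster client's idle loop. The URLs,
    # payloads and simulator invocation below are hypothetical placeholders,
    # not NeuronPM's real protocol.
    SERVER = "http://example.org/neuronpm"

    def run_neuron(job_spec: bytes) -> bytes:
        # placeholder: would write model files and invoke NEURON here
        return b"simulated-results"

    def idle_loop():
        while True:
            work = urllib.request.urlopen(f"{SERVER}/next-job").read()
            if not work:
                time.sleep(60)          # no work available: back off, retry
                continue
            result = run_neuron(work)   # run the assigned simulation
            urllib.request.urlopen(f"{SERVER}/submit", data=result)  # report
    ```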

  9. Sustainable Transport Data Collection and Application: China Urban Transport Database

    OpenAIRE

    Tian Jiang; Zhongyi Wu; Yu Song; Xianglong Liu; Haode Liu; Haozhi Zhang

    2013-01-01

    Transport policy making by national and local governments should be supported by a comprehensive database to ensure sustainable and healthy development of urban transport. The China Urban Transport Database (CUTD) has been built to play such a role. This paper introduces the CUTD framework, including user management, the data warehouse, and application modules. Considering the urban transport development features of Chinese cities, sustainable urban transport development indic…

  10. Secure and robust cloud computing for high-throughput forensic microsatellite sequence analysis and databasing.

    Science.gov (United States)

    Bailey, Sarah F; Scheible, Melissa K; Williams, Christopher; Silva, Deborah S B S; Hoggan, Marina; Eichman, Christopher; Faith, Seth A

    2017-08-08

    Next-generation Sequencing (NGS) is a rapidly evolving technology with demonstrated benefits for forensic genetic applications, and the strategies to analyze and manage the massive NGS datasets are currently in development. Here, the computing, data storage, connectivity, and security resources of the Cloud were evaluated as a model for forensic laboratory systems that produce NGS data. A complete front-to-end Cloud system was developed to upload, process, and interpret raw NGS data using a web browser dashboard. The system was extensible, demonstrating analysis capabilities of autosomal and Y-STRs from a variety of NGS instrumentation (Illumina MiniSeq and MiSeq, and Oxford Nanopore MinION). NGS data for STRs were concordant with standard reference materials previously characterized with capillary electrophoresis and Sanger sequencing. The computing power of the Cloud was implemented with on-demand auto-scaling to allow multiple file analysis in tandem. The system was designed to store resulting data in a relational database, amenable to downstream sample interpretations and databasing applications following the most recent guidelines in nomenclature for sequenced alleles. Lastly, a multi-layered Cloud security architecture was tested and showed that industry standards for securing data and computing resources were readily applied to the NGS system without disadvantageous effects for bioinformatic analysis, connectivity or data storage/retrieval. The results of this study demonstrate the feasibility of using Cloud-based systems for secured NGS data analysis, storage, databasing, and multi-user distributed connectivity.

  11. A relational database application in support of integrated neuroscience research.

    Science.gov (United States)

    Rudowsky, Ira; Kulyba, Olga; Kunin, Mikhail; Ogarodnikov, Dmitri; Raphan, Theodore

    2004-12-01

    The development of relational databases has significantly improved the performance of storage, search, and retrieval functions and has made it possible for applications that perform real-time data acquisition and analysis to interact with these types of databases. The purpose of this research was to develop a user interface for interaction between a data acquisition and analysis application and a relational database using the Oracle9i system. The overall system was designed to have an indexing capability that threads into the data acquisition and analysis programs. Tables were designed and relations within the database for indexing the files and information contained within the files were established. The system provides retrieval capabilities over a broad range of media, including analog, event, and video data types. The system's ability to interact with a data capturing program at the time of the experiment to create both multimedia files as well as the meta-data entries in the relational database avoids manual entries in the database and ensures data integrity and completeness for further interaction with the data by analysis applications.

  12. JACOB: a dynamic database for computational chemistry benchmarking.

    Science.gov (United States)

    Yang, Jack; Waller, Mark P

    2012-12-21

    JACOB (just a collection of benchmarks) is a database that contains four diverse benchmark studies, which in turn include 72 data sets with a total of 122,356 individual results. The database is built on a dynamic web framework that allows users to retrieve data via predefined categories; additional flexibility is available via user-defined text-based queries. Requested sets of results are then automatically presented as bar graphs, with the parameters of the graphs controllable via the URL. JACOB is currently available at www.wallerlab.org/jacob.

  13. Needs assessment for next generation computer-aided mammography reference image databases and evaluation studies.

    Science.gov (United States)

    Horsch, Alexander; Hapfelmeier, Alexander; Elter, Matthias

    2011-11-01

    Breast cancer is globally a major threat for women's health. Screening and adequate follow-up can significantly reduce the mortality from breast cancer. Human second reading of screening mammograms can increase breast cancer detection rates, whereas this has not been proven for current computer-aided detection systems as "second reader". Critical factors include the detection accuracy of the systems and the screening experience and training of the radiologist with the system. When assessing the performance of systems and system components, the choice of evaluation methods is particularly critical. Core assets herein are reference image databases and statistical methods. We have analyzed characteristics and usage of the currently largest publicly available mammography database, the Digital Database for Screening Mammography (DDSM) from the University of South Florida, in literature indexed in Medline, IEEE Xplore, SpringerLink, and SPIE, with respect to type of computer-aided diagnosis (CAD) (detection, CADe, or diagnostics, CADx), selection of database subsets, choice of evaluation method, and quality of descriptions. 59 publications presenting 106 evaluation studies met our selection criteria. In 54 studies (50.9%), the selection of test items (cases, images, regions of interest) extracted from the DDSM was not reproducible. Only 2 CADx studies, not any CADe studies, used the entire DDSM. The number of test items varies from 100 to 6000. Different statistical evaluation methods are chosen. Most common are train/test (34.9% of the studies), leave-one-out (23.6%), and N-fold cross-validation (18.9%). Database-related terminology tends to be imprecise or ambiguous, especially regarding the term "case". Overall, both the use of the DDSM as data source for evaluation of mammography CAD systems, and the application of statistical evaluation methods were found highly diverse. Results reported from different studies are therefore hardly comparable. Drawbacks of the DDSM
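
    For reference, the most common scheme tallied above, N-fold cross-validation, can be stated in a few lines; leave-one-out is the special case where N equals the number of test items.

    ```python
    import numpy as np

    def n_fold_indices(n_items: int, n_folds: int, seed: int = 0):
        """Yield (train_idx, test_idx) pairs for N-fold cross-validation."""
        idx = np.random.default_rng(seed).permutation(n_items)
        for fold in np.array_split(idx, n_folds):
            train = np.setdiff1d(idx, fold)   # everything not in this fold
            yield train, fold

    # Example: 10 items, 5 folds; leave-one-out is n_folds == n_items.
    for train, test in n_fold_indices(10, 5):
        print(sorted(test))
    ```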

  14. Computational Systems for Multidisciplinary Applications

    Science.gov (United States)

    Soni, Bharat; Haupt, Tomasz; Koomullil, Roy; Luke, Edward; Thompson, David

    2002-01-01

    In this paper, we briefly describe our efforts to develop complex simulation systems. We focus first on four key infrastructure items: enterprise computational services, simulation synthesis, geometry modeling and mesh generation, and a fluid flow solver for arbitrary meshes. We conclude by presenting three diverse applications developed using these technologies.

  15. Evolutionary Computation and its Application

    Institute of Scientific and Technical Information of China (English)

    Licheng Jiao; Lishan Kang; Zhenya He; Tao Xie

    2006-01-01

    On Mar. 23, 2006, a project in the Major Program of NSFC, "Evolutionary computation and its application", managed by Prof. Licheng Jiao, Prof. Lishan Kang, Prof. Zhenya He, and Prof. Tao Xie, passed its Final Qualification Process and was evaluated as Excellent.

  16. Application of a Database in the Monitoring of Workstations in a Local Area Network

    Directory of Open Access Journals (Sweden)

    Eyo O. Ukem

    2009-01-01

    Full Text Available Problem statement: Computer hardware fault management and repairs can be a big challenge, especially if the number of staff available for the job is small. The task becomes more complicated if remote sites are managed and an engineer or technician has to be dispatched. Approach: Availability of relevant information when needed could ease the burden of maintenance by removing uncertainties. Such required information could be accumulated in a database and accessed as needed. Results: This study considered such a database, to assist a third party hardware maintenance firm keep track of its operations, including the machines that it services, together with their owners. A software application was developed in Java programming language, in the form of a database, using Microsoft Access as the database management system. It was designed to run on a local area network and to allow remote workstations to log on to a central computer in a client/server configuration. With this application it was possible to enter fault reports into the database residing on the central computer from any workstation on the network. Conclusion/Recommendations: The information generated from this data can be used by the third party hardware maintenance firm to speed up its service delivery, thus putting the firm in a position to render more responsive and efficient service to the customers.

  17. Application of the Non—Stationary Oil Film Force Database

    Institute of Scientific and Technical Information of China (English)

    WANG Wen; ZHANG Zhi-ming; et al.

    2001-01-01

    The technique of a non-stationary oil film force database for hydrodynamic bearings is introduced, and its potential applications in nonlinear rotor dynamics are demonstrated. Through simulations of the locus of the shaft center aided by the database technique, nonlinear stability analysis can be performed and the natural frequency can be obtained as well. The ease of "assembling" the individual bush forces from the database to form the bearing force makes it very convenient to evaluate the stability of various types of journal bearings. Examples are demonstrated to show how the database technique makes it possible to obtain technically abundant simulation results at the expense of very short calculation time.
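
    The core operation of the database technique, as described, is replacing online force integration with table lookup and interpolation; the sketch below shows the idea with SciPy grid interpolation over a made-up force table.

    ```python
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Made-up stand-in for a precomputed oil-film force table indexed by
    # journal eccentricity ratio and attitude angle; a real database stores
    # forces from prior hydrodynamic solutions on such a grid.
    eps = np.linspace(0.05, 0.95, 19)          # eccentricity ratio
    phi = np.linspace(0.0, 2 * np.pi, 73)      # attitude angle [rad]
    E, PHI = np.meshgrid(eps, phi, indexing="ij")
    force_table = E / (1 - E**2) ** 2 * np.cos(PHI)   # toy force values

    lookup = RegularGridInterpolator((eps, phi), force_table)
    print(lookup([[0.5, 1.0]]))   # interpolated force at eps=0.5, phi=1.0 rad
    ```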

  18. Creating a semantic lesion database for computer-aided MR mammography

    Science.gov (United States)

    Wang, Xiaogang; Martel, Anne

    2012-02-01

    This work presents the creation of a semantic lesion database which will support research into computer-aided lesion detection (CAD) in breast screening MRI. As an adjunct to conventional X-ray mammography, MR-mammography has become a popular screening tool for women with a high risk of breast cancer because of its high sensitivity in detecting malignancy. To address the needs of research and development into CAD for breast MRI an integrated tool has been designed to collect all lesion related information, conduct quantitative analysis, and then present crucial data to clinicians and researchers. A lesion database is an essential component of this system as it provides a link between the DICOM database of MR images and the meta-information contained in the Electronic Patient Record. The patient history, radiology reports from MRI screening visits and pathology reports are all collected, dissected, and stored in a hierarchical structure in the database. Moreover, internal links between pathology specimens and the location of the corresponding lesion in the image are established allowing diagnostic information to be displayed alongside the relevant images. If "ground truth" for an imaging visit can be established either by biopsy or by 2-year follow-up, then the case is labeled as suitable for use in training and testing CAD algorithms. At present a total of 1882 lesions (benign/malignant), 200 pathology specimens over 405 subjects and 1794 screening (455 CAD studies) are included in the database. As well as providing an excellent resource for CAD development this also has potential applications in resident radiologists' training and education.

  19. [A systematic evaluation of application of the web-based cancer database].

    Science.gov (United States)

    Huang, Tingting; Liu, Jialin; Li, Yong; Zhang, Rui

    2013-10-01

    In order to support the theory and practice of web-based cancer database development in China, we conducted a systematic evaluation to assess the state of development of web-based cancer databases at home and abroad. We performed computer-based retrieval of the Ovid-MEDLINE, SpringerLink, EBSCOhost, Wiley Online Library and CNKI databases for papers published between Jan. 1995 and Dec. 2011, and retrieved the references of these papers by hand. We selected qualified papers according to pre-established inclusion and exclusion criteria, and carried out information extraction and analysis. Searching the online databases yielded 1244 papers, and checking the reference lists turned up another 19 articles. Thirty-one articles met the inclusion and exclusion criteria; we extracted and assessed the evidence from them. Analysis showed that the U.S.A. ranked first, accounting for 26% of the databases. Thirty-nine percent of the web-based cancer databases are comprehensive cancer databases. As for single-cancer databases, breast cancer and prostate cancer are at the top, each accounting for 10%. Thirty-two percent of the cancer databases are associated with cancer gene information. As for the technologies applied, MySQL and PHP were the most widely used, at nearly 23% each.

  20. Analysis on Cloud Computing Database in Cloud Environment – Concept and Adoption Paradigm

    Directory of Open Access Journals (Sweden)

    Elena-Geanina ULARU

    2012-08-01

    Full Text Available With the development of the Internet's new technical functionalities, new concepts have started to take shape. These concepts play an important role, especially in the development of corporate IT. One such concept is "the Cloud". Various marketing campaigns have started to focus on the Cloud and to promote it in different but confusing ways. These campaigns do little to explain what cloud computing is or why it is becoming increasingly necessary. The lack of understanding of this new technology generates a lack of knowledge in business cloud adoption regarding databases and applications. Only by focusing on business processes and objectives can an enterprise achieve the full benefits of the cloud and mitigate the potential risks. In this article we formulate our own complete definition of the cloud and analyze the essential aspects of cloud adoption for a banking financial reporting application.

  1. DEVELOPMENT OF AN APPLICATION WITH THE PURPOSE OF MAINTAINING A DATABASE

    Directory of Open Access Journals (Sweden)

    Wilson Fadlo Curi

    2011-04-01

    Full Text Available Nowadays, sustainable development is the great paradigm of human development. Because of that, new methodologies for the planning and management of systems, especially those for water resources, are being developed; these forms of evaluation are no longer restricted to mere economic evaluation, but are also subject to social and environmental sustainability evaluation. The use of databases is essential for the manipulation of the most diverse kinds of data and information, including historical data and other information stored for future use. This article therefore focuses on presenting an application developed to manipulate tables in a database, allowing and facilitating the inclusion, elimination and renaming of tables and table fields in order to substitute for some of the SQL commands available in the various existing database programs. This application will add value to the decision support system being developed by GOTA (Group of Water Total Optimization), which in many cases needs to make changes in its database, such as registers of reservoirs, irrigated perimeters, meteorological stations, gauging stations and institutions, to obtain greater flexibility in data manipulation. The application allows intelligent and fast manipulation of tables in a database; the present version runs on the PostgreSQL database and was developed on the Java platform, which permits its installation on several types of operating systems and many kinds of computers.

  2. An Application to WIN/ISIS Database on Local Network

    Directory of Open Access Journals (Sweden)

    Robert Lechien

    2005-07-01

    Full Text Available A translated article on running a WIN/ISIS database on a local network. It starts with the main definitions, then explains how to install WIN/ISIS on a PC and how to install it on the local network server.

  3. Database Design Methodology and Database Management System for Computer-Aided Structural Design Optimization.

    Science.gov (United States)

    1984-12-01

    1983). Several researchers, including Lillehagen and Dokkar (1982), Grabowski, Eigener and Ranch (1978), and Eberlein and Wedekind (1982), have worked on database… Proceedings of the International Federation of Information Processing, pp. 335-366. Eberlein, W. and Wedekind, H., 1982, "A Methodology for Embedding Design…

  4. Industrial applications of computed tomography

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo; Carmignato, S.; Kruth, J. -P.

    2014-01-01

    The number of industrial applications of Computed Tomography (CT) is large and rapidly increasing. After a brief market overview, the paper gives a survey of state-of-the-art and upcoming CT technologies, covering types of CT systems, scanning capabilities, and technological advances. The paper… contains a survey of application examples from the manufacturing industry as well as from other industries, e.g., electrical and electronic devices, inhomogeneous materials, and the food industry. Challenges as well as major national and international coordinated activities in the field of industrial…

  5. Combinatorial methods with computer applications

    CERN Document Server

    Gross, Jonathan L

    2007-01-01

    Combinatorial Methods with Computer Applications provides in-depth coverage of recurrences, generating functions, partitions, and permutations, along with some of the most interesting graph and network topics, design constructions, and finite geometries. Requiring only a foundation in discrete mathematics, it can serve as the textbook in a combinatorial methods course or in a combined graph theory and combinatorics course.After an introduction to combinatorics, the book explores six systematic approaches within a comprehensive framework: sequences, solving recurrences, evaluating summation exp

  6. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  7. Construction of crystal structure prototype database: methods and applications.

    Science.gov (United States)

    Su, Chuanxun; Lv, Jian; Li, Quan; Wang, Hui; Zhang, Lijun; Wang, Yanchao; Ma, Yanming

    2017-04-26

    Crystal structure prototype data have become a useful source of information for materials discovery in the fields of crystallography, chemistry, physics, and materials science. This work reports the development of a robust and efficient method for assessing the similarity of structures on the basis of their interatomic distances. Using this method, we propose a simple and unambiguous definition of a crystal structure prototype based on hierarchical clustering theory, and construct the crystal structure prototype database (CSPD) by filtering the known crystallographic structures in a database. With a similar method, a program, the Structure Prototype Analysis Package (SPAP), was developed to remove similar structures from CALYPSO prediction results and to extract predicted low-energy structures for a separate theoretical structure database. A series of statistics describing the distribution of crystal structure prototypes in the CSPD was compiled to provide important insight for structure prediction and high-throughput calculations. Illustrative examples of the application of the proposed database are given, including the generation of initial structures for structure prediction and the determination of prototype structures in databases. These examples demonstrate the CSPD to be a generally applicable and useful tool for materials discovery.
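
    A minimal sketch of the hierarchical-clustering step described above, using SciPy; the fingerprint vectors are random stand-ins for real interatomic-distance descriptors, and the distance threshold is arbitrary.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    # Stand-in "structures": each row is a fingerprint vector that would,
    # in practice, encode a structure's sorted interatomic distances.
    rng = np.random.default_rng(1)
    fingerprints = np.vstack([
        rng.normal(0.0, 0.05, (5, 8)),   # five near-duplicates of prototype A
        rng.normal(1.0, 0.05, (4, 8)),   # four near-duplicates of prototype B
    ])

    Z = linkage(fingerprints, method="average")       # hierarchical clustering
    labels = fcluster(Z, t=0.5, criterion="distance") # cut tree at threshold
    print(labels)  # structures sharing a label collapse to one prototype
    ```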

  8. Construction of crystal structure prototype database: methods and applications

    Science.gov (United States)

    Su, Chuanxun; Lv, Jian; Li, Quan; Wang, Hui; Zhang, Lijun; Wang, Yanchao; Ma, Yanming

    2017-04-01

    Crystal structure prototype data have become a useful source of information for materials discovery in the fields of crystallography, chemistry, physics, and materials science. This work reports the development of a robust and efficient method for assessing the similarity of structures on the basis of their interatomic distances. Using this method, we propose a simple and unambiguous definition of a crystal structure prototype based on hierarchical clustering theory, and construct the crystal structure prototype database (CSPD) by filtering the known crystallographic structures in a database. With a similar method, a program, the Structure Prototype Analysis Package (SPAP), was developed to remove similar structures from CALYPSO prediction results and to extract predicted low-energy structures for a separate theoretical structure database. A series of statistics describing the distribution of crystal structure prototypes in the CSPD was compiled to provide important insight for structure prediction and high-throughput calculations. Illustrative examples of the application of the proposed database are given, including the generation of initial structures for structure prediction and the determination of prototype structures in databases. These examples demonstrate the CSPD to be a generally applicable and useful tool for materials discovery.

  9. Proposal for grid computing for nuclear applications

    Energy Technology Data Exchange (ETDEWEB)

    Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.; Sulaiman, Mohamad Safuan B.; Aslan, Mohd Dzul Aiman Bin.; Samsudin, Nursuliza Bt.; Ibrahim, Maizura Bt.; Ahmad, Megat Harun Al Rashid B. Megat; Yazid, Hafizal B.; Jamro, Rafhayudi B.; Azman, Azraf B.; Rahman, Anwar B. Abdul; Ibrahim, Mohd Rizal B. Mamat; Muhamad, Shalina Bt. Sheik; Hassan, Hasni [Malaysian Nuclear Agency, Bangi, 43000 Kajang, Selangor (Malaysia); Abdullah, Wan Ahmad Tajuddin Wan; Ibrahim, Zainol Abidin; Zolkapli, Zukhaimira; Anuar, Afiq Aizuddin; Norjoharuddeen, Nurfikri [Physics Department, University of Malaya, 56003 Kuala Lumpur (Malaysia); and others

    2014-02-12

    The use of computer clusters for computational sciences, including computational physics, is vital as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.

  10. Advances in Computer Science and its Applications

    CERN Document Server

    Yen, Neil; Park, James; CSA 2013

    2014-01-01

    The theme of CSA is the various aspects of computer science and its applications; the conference provides an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of computer science and its applications. Accordingly, this book includes various theories and practical applications in computer science and its applications.

  11. Cyclo-lib: a database of computational molecular dynamics simulations of cyclodextrins.

    Science.gov (United States)

    Mixcoha, Edgar; Rosende, Roberto; Garcia-Fandino, Rebeca; Piñeiro, Ángel

    2016-11-01

    Cyclodextrins (CDs) are amongst the most versatile, multi-functional molecules used in molecular research and chemical applications. They are natural cyclic oligosaccharides typically employed to encapsulate hydrophobic groups in their central cavity. This allows solubilizing, protecting or reducing the toxicity of a large variety of molecules including drugs, dyes and surfactant agents. In spite of their great potential, atomic-level information on these molecules, which is key to their function, is scarce. Computational Molecular Dynamics (MD) simulations have the potential to efficiently fill this gap, providing structural-dynamic information at the atomic level on time scales ranging from ps to μs. Cyclo-lib is a database with a publicly accessible web interface containing structural and dynamic analyses obtained from computational MD simulation trajectories (250 ns long) of native and modified CDs in explicit water molecules. Cyclo-lib currently includes 70 CDs typically employed for fundamental and industrial research. Tools for comparative analysis between different CDs, as well as for restricting the analysis to specific time segments within the trajectories, are also available. Cyclo-lib provides atomic-resolution information aimed to complement experimental results performed with the same molecules. The database is freely available at http://cyclo-lib.mduse.com/. Contact: Angel.Pineiro@usc.es.

  12. Performance Evaluation of a Parallel Cascade Semijoin Algorithm for Computing Path Expressions in Object Database Systems

    Institute of Scientific and Technical Information of China (English)

    王国仁; 于戈

    2002-01-01

    With the emergence of new applications, especially on the Web, such as e-Commerce, Digital Libraries and DNA banks, object database systems show stronger functionality than other kinds of database systems due to their powerful representation of complex semantics and relationships. One distinguishing feature of object database systems is the path expression, and most queries on an object database are based on path expressions because they are the most natural and convenient way to access the object database, for example, to navigate the hyperlinks in a Web-based database. The execution of path expressions is usually extremely expensive on a very large database. Therefore, improving the efficiency of path expression execution is critical for the performance of object databases. As an important approach to realizing high-performance query processing, the parallel processing of path expressions on distributed object databases is explored in this paper. Up to now, some algorithms have been proposed for computing path expressions and optimizing path expression processing in centralized environments, but few approaches have been presented for computing path expressions in parallel. In this paper, a new parallel algorithm for computing path expressions named Parallel Cascade Semijoin (PCSJ) is proposed. Moreover, a new scheduling strategy called the right-deep zigzag tree is designed to further improve the performance of the PCSJ algorithm. The experiments have been implemented in an NOW distributed and parallel environment. The results show that the PCSJ algorithm outperforms the other two parallel algorithms (the parallel version of the forward pointer chasing algorithm (PFPC) and the index splitting parallel algorithm (IndexSplit)) when computing path expressions with restrictive predicates, and that the right-deep zigzag tree scheduling strategy performs better than the right-deep tree scheduling strategy.
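
    The cascade idea itself is compact: a restrictive predicate is evaluated on the tail class of the path expression, and semijoins then propagate the surviving object identifiers backwards, shrinking each intermediate class before the next step. A minimal sketch follows; the dictionary-of-objects layout is an illustrative assumption, not the paper's PCSJ implementation.

        # Illustrative cascade of semijoins for a path expression A.b.c with a
        # predicate on C (data layout is assumed; this is not the PCSJ code).
        def semijoin(objs, keep):
            """Keep only objects whose reference points into the surviving set."""
            return {oid: rec for oid, rec in objs.items() if rec["ref"] in keep}

        A = {1: {"ref": 10}, 2: {"ref": 11}, 3: {"ref": 12}}       # A.b -> B
        B = {10: {"ref": 100}, 11: {"ref": 101}, 12: {"ref": 102}} # B.c -> C
        C = {100: {"val": 5}, 101: {"val": 42}, 102: {"val": 7}}

        c_keep = {oid for oid, rec in C.items() if rec["val"] > 10}  # predicate on C
        b_keep = set(semijoin(B, c_keep))    # cascade backwards: C -> B
        a_keep = set(semijoin(A, b_keep))    # then B -> A
        print(a_keep)                        # {2}: the only qualifying path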

  13. Database application for changing data models in environmental engineering

    Energy Technology Data Exchange (ETDEWEB)

    Hussels, Ulrich; Camarinopoulos, Stephanos; Luedtke, Torsten; Pampoukis, Georgios [RISA Sicherheitsanalysen GmbH, Berlin-Charlottenburg (Germany)

    2013-07-01

    Whenever a technical task is to be solved with the help of a database application and uncertainties regarding the structure, scope or level of detail of the data model exist (either currently or in the future) the use of a generic database application can reduce considerably the cost of implementation and maintenance. Simultaneously the approach described in this contribution permits the operation with different views on the data and even finding and defining new views which had not been considered before. The prerequisite for this is that the preliminary information (structure as well as data) stored into the generic application matches the intended use. In this case, parts of the generic model developed with the generic approach can be reused and according efforts for a major rebuild can be saved. This significantly reduces the development time. At the same time flexibility is achieved concerning the environmental data model, which is not given in the context of conventional developments. (orig.)

  14. The Use of a Relational Database in Qualitative Research on Educational Computing.

    Science.gov (United States)

    Winer, Laura R.; Carriere, Mario

    1990-01-01

    Discusses the use of a relational database as a data management and analysis tool for nonexperimental qualitative research, and describes the use of the Reflex Plus database in the Vitrine 2001 project in Quebec to study computer-based learning environments. Information systems are also discussed, and the use of a conceptual model is explained.…

  15. Current trends and new challenges of databases and web applications for systems driven biological research

    Directory of Open Access Journals (Sweden)

    Pradeep Kumar Sreenivasaiah

    2010-12-01

    Full Text Available The dynamic and rapidly evolving nature of systems-driven research imposes special requirements on the technology, approach, design and architecture of computational infrastructure, including databases and web applications. Several solutions have been proposed to meet these expectations, and novel methods have been developed to address the persisting problems of data integration. It is important for researchers to understand the different technologies and approaches. Having become familiar with the pros and cons of the existing technologies, researchers can exploit their capabilities to the maximum potential for integrating data. In this review we discuss the architecture, design and key technologies underlying some of the prominent databases (DBs) and web applications. We mention their roles in the integration of biological data and investigate some of the emerging design concepts and computational technologies that are likely to have a key role in the future of systems-driven biomedical research.

  16. Mastering cloud computing foundations and applications programming

    CERN Document Server

    Buyya, Rajkumar; Selvi, SThamarai

    2013-01-01

    Mastering Cloud Computing is designed for undergraduate students learning to develop cloud computing applications. Tomorrow's applications won't live on a single computer but will be deployed from and reside on a virtual server, accessible anywhere, any time. Tomorrow's application developers need to understand the requirements of building apps for these virtual systems, including concurrent programming, high-performance computing, and data-intensive systems. The book introduces the principles of distributed and parallel computing underlying cloud architectures and specifical

  17. Cloud Computing and Its Applications in GIS

    Science.gov (United States)

    Kang, Cao

    2011-12-01

    Cloud computing is a novel computing paradigm that offers highly scalable and highly available distributed computing services. The objectives of this research are to: 1. analyze and understand cloud computing and its potential for GIS; 2. discover the feasibilities of migrating truly spatial GIS algorithms to distributed computing infrastructures; 3. explore a solution to host and serve large volumes of raster GIS data efficiently and speedily. These objectives thus form the basis for three professional articles. The first article is entitled "Cloud Computing and Its Applications in GIS". This paper introduces the concept, structure, and features of cloud computing. Features of cloud computing such as scalability, parallelization, and high availability make it a very capable computing paradigm. Unlike High Performance Computing (HPC), cloud computing uses inexpensive commodity computers. The uniform administration systems in cloud computing make it easier to use than GRID computing. Potential advantages of cloud-based GIS systems such as lower barrier to entry are consequently presented. Three cloud-based GIS system architectures are proposed: public cloud-based GIS systems, private cloud-based GIS systems and hybrid cloud-based GIS systems. Public cloud-based GIS systems provide the lowest entry barriers for users among these three architectures, but their advantages are offset by data security and privacy related issues. Private cloud-based GIS systems provide the best data protection, though they have the highest entry barriers. Hybrid cloud-based GIS systems provide a compromise between these extremes. The second article is entitled "A cloud computing algorithm for the calculation of Euclidian distance for raster GIS". Euclidean distance is a truly spatial GIS algorithm. Classical algorithms such as the pushbroom and growth ring techniques require computational propagation through the entire raster image, which makes it incompatible with the distributed nature

  18. Experiences with distributed computing for meteorological applications: grid computing and cloud computing

    OpenAIRE

    Oesterle, F.; Ostermann, S; R. Prodan; G. J. Mayr

    2015-01-01

    Experiences with three practical meteorological applications with different characteristics are used to highlight the core computer science aspects and the applicability of distributed computing to meteorology. By presenting cloud and grid computing, this paper shows use case scenarios fitting a wide range of meteorological applications, from operational to research studies. The paper concludes that distributed computing complements and extends existing high performance comput...

  19. Structure design and establishment of database application system for alien species in Shandong Province, China

    Institute of Scientific and Technical Information of China (English)

    GUO Wei-hua; LIU Heng; DU Ning; ZHANG Xin-shi; WANG Ren-qing

    2007-01-01

    This paper presents a case study on the structure design and establishment of a database application system for alien species in Shandong Province, integrating Geographic Information System, computer network, and database technology in the research of alien species. The modules of the alien species database, including classified data input, statistics and analysis, species pictures and distribution maps, and out date input, were developed with Visual Studio.net 2003 and Microsoft SQL Server 2000. The alien species information contains details of classification, species distinguishing characteristics, biological characteristics, original area, distribution area, the mode and route of entry, invasion time, invasion reason, interaction with endemic species, growth state, danger state and spatial information, i.e. distribution maps. On this basis, several modules, including application, checking, modifying, printing, adding and returning modules, were developed. Furthermore, through the establishment of index tables and index maps, users can also spatially query data such as pictures, text and GIS map data. This research established a technological platform for sharing scientific information about alien species in Shandong Province, offering a basis for the dynamic inquiry of alien species, early-warning technology for prevention and a fast reaction system. The database application system possesses the principles of good practicability, a friendly user interface and convenient usage. It can supply full and accurate information inquiry services on alien species for users and provides functions for dynamically managing the database for the administrator.

  20. Soft computing application in bioinformatics

    Institute of Scientific and Technical Information of China (English)

    MITRA Sushmita

    2008-01-01

    This article provides an outline of a recent application of soft computing to the mining of microarray gene expressions. We describe investigations with an evolutionary-rough feature selection algorithm for feature selection and classification on cancer data. Rough set theory is employed to generate reducts, which represent the minimal sets of non-redundant features capable of discerning between all objects, in a multi-objective framework. The experimental results demonstrate the effectiveness of the methodology on three cancer datasets.

  1. Computer based screening of compound databases: 1. Preselection of benzamidine-based thrombin inhibitors.

    Science.gov (United States)

    Fox, T; Haaksma, E E

    2000-07-01

    We present a computational protocol which uses the known three-dimensional structure of a target enzyme to identify possible ligands from databases of compounds with low molecular weight. This is accomplished by first mapping the essential interactions in the binding site with the program GRID. The resulting regions of favorable interaction between target and ligand are translated into a database query, and with UNITY a flexible 3D database search is performed. The feasibility of this approach is calibrated with thrombin as the target. Our results show that the resulting hit lists are enriched with thrombin inhibitors compared to the total database.
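
    The essence of such a pre-selection is geometric: regions of favourable interaction in the binding site are reduced to distance constraints between ligand features, and candidate structures are kept in the hit list only if they satisfy them. The feature names and tolerances below are invented for illustration; this is not the GRID/UNITY workflow itself.

        # Illustrative geometric pre-screen: keep conformers satisfying
        # feature-feature distance constraints derived from an interaction map.
        import numpy as np

        def satisfies_query(conformer, constraints):
            """conformer: feature name -> xyz; constraints: (f1, f2, dmin, dmax)."""
            for f1, f2, dmin, dmax in constraints:
                if f1 not in conformer or f2 not in conformer:
                    return False
                d = np.linalg.norm(np.asarray(conformer[f1]) - np.asarray(conformer[f2]))
                if not dmin <= d <= dmax:
                    return False
            return True

        query = [("amidine_N", "hydrophobic_centre", 7.0, 9.0)]   # invented constraint
        candidate = {"amidine_N": (0.0, 0.0, 0.0),
                     "hydrophobic_centre": (8.1, 0.0, 0.0)}
        print(satisfies_query(candidate, query))   # True: kept in the hit list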

  2. Assessing Database and Network Threats in Traditional and Cloud Computing

    Directory of Open Access Journals (Sweden)

    Katerina Lourida

    2015-05-01

    Full Text Available Cloud Computing is currently one of the most widely discussed terms in IT. While it offers a range of technological and financial benefits, it has not yet been widely accepted by organizations. Security concerns are a main reason for this, and this paper studies the data and network threats posed in both traditional and cloud paradigms in an effort to establish in which areas cloud computing addresses security issues and where it introduces new ones. The evaluation is based on Microsoft's STRIDE threat model and discusses the stakeholders, the impact and recommendations for tackling each threat.

  3. Trajectory Based Optimal Segment Computation in Road Network Databases

    DEFF Research Database (Denmark)

    Li, Xiaohui; Ceikute, Vaida; Jensen, Christian S.

    Given a road network, a set of existing facilities, and a collection of customer route traversals, an optimal segment query returns the optimal road network segment(s) for a new facility. We propose a practical framework for computing this query and present algorithms, shown empirically to be scalable, that adopt different approaches to computing the query. Algorithm AUG uses graph augmentation, and ITE uses iterative road-network partitioning. Empirical studies with real data sets demonstrate that the algorithms are capable of offering high performance in realistic settings.
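
    Stated concretely, the query scores each candidate segment by the customer routes that traverse it, ignoring routes already served by an existing facility, and returns the best-scoring segment(s). The sketch below makes these assumptions explicit; it is a naive baseline for illustration, not the AUG or ITE algorithm.

        # Naive optimal-segment scoring (illustrative assumptions, not AUG/ITE).
        from collections import defaultdict

        def optimal_segments(routes, facilities):
            """routes: lists of segment ids; facilities: set of segment ids."""
            score = defaultdict(int)
            for route in routes:
                if facilities.isdisjoint(route):      # route not yet served
                    for seg in route:
                        score[seg] += 1               # each traversal adds weight
            best = max(score.values(), default=0)
            return [seg for seg, s in score.items() if s == best]

        routes = [["s1", "s2"], ["s2", "s3"], ["s3", "s4"]]
        print(optimal_segments(routes, facilities={"s4"}))   # ['s2']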

  4. Trajectory Based Optimal Segment Computation in Road Network Databases

    DEFF Research Database (Denmark)

    Li, Xiaohui; Ceikute, Vaida; Jensen, Christian S.

    2013-01-01

    Given a road network, a set of existing facilities, and a collection of customer route traversals, an optimal segment query returns the optimal road network segment(s) for a new facility. We propose a practical framework for computing this query and present algorithms, shown empirically to be scalable, that adopt different approaches to computing the query. Algorithm AUG uses graph augmentation, and ITE uses iterative road-network partitioning. Empirical studies with real data sets demonstrate that the algorithms are capable of offering high performance in realistic settings.

  5. Turning text into research networks: information retrieval and computational ontologies in the creation of scientific databases.

    Directory of Open Access Journals (Sweden)

    Flávio Ceci

    Full Text Available BACKGROUND: Web-based, free-text documents on science and technology have been growing rapidly on the web. However, most of these documents are not immediately processable by computers, slowing down the acquisition of useful information. Computational ontologies might represent a possible solution by enabling semantically machine-readable data sets. But the process of ontology creation, instantiation and maintenance is still based on manual methodologies and is thus time and cost intensive. METHOD: We focused on a large corpus containing information on researchers, research fields, and institutions. We based our strategy on traditional entity recognition, social computing and correlation. We devised a semi-automatic approach for the recognition, correlation and extraction of named entities and relations from textual documents, which are then used to create, instantiate, and maintain an ontology. RESULTS: We present a prototype demonstrating the applicability of the proposed strategy, along with a case study describing how direct and indirect relations can be extracted from academic and professional activities registered in a database of curricula vitae in free-text format. We present evidence that this system can identify entities to assist in the process of knowledge extraction and representation to support ontology maintenance. We also demonstrate the extraction of relationships among ontology classes and their instances. CONCLUSION: We have demonstrated that our system can be used to convert research information in free-text format into a database with a semantic structure. Future studies should test this system using the growing amount of free-text information available at the institutional and national levels.

  6. Lunar Applications in Reconfigurable Computing

    Science.gov (United States)

    Somervill, Kevin

    2008-01-01

    NASA's Constellation Program is developing a lunar surface outpost in which reconfigurable computing will play a significant role. Reconfigurable systems provide a number of benefits over conventional software-based implementations, including performance and power efficiency, while the use of standardized reconfigurable hardware provides opportunities to reduce logistical overhead. The current vision for the lunar surface architecture includes habitation, mobility, and communications systems, each of which greatly benefits from reconfigurable hardware in applications including video processing, natural feature recognition, data formatting, IP offload processing, and embedded control systems. In deploying reprogrammable hardware, considerations similar to those of software systems must be managed. There needs to be a mechanism for discovery enabling applications to locate and utilize the available resources. Also, application interfaces are needed to provide for both configuring the resources and transferring data between the application and the reconfigurable hardware. Each of these topics is explored in the context of deploying reconfigurable resources as an integral aspect of the lunar exploration architecture.

  7. Perspectives on a Big Data Application: What Database Engineers and IT Students Need to Know

    Directory of Open Access Journals (Sweden)

    E. Erturk

    2015-10-01

    Full Text Available Cloud Computing and Big Data are important and related current trends in the world of information technology. They will have significant impact on the curricula of computer engineering and information systems at universities and higher education institutions. Learning about big data is useful for both working database professionals and students, in accordance with the increase in jobs requiring these skills. It is also important to address a broad gamut of database engineering skills, i.e. database design, installation, and operation. Therefore the authors have investigated MongoDB, a popular application, both from the perspective of industry retraining for database specialists and for teaching. This paper demonstrates some practical activities that can be done by students at the Eastern Institute of Technology New Zealand. In addition to testing and preparing new content for future students, this paper contributes to the very recent and emerging academic literature in this area. This paper concludes with general recommendations for IT educators, database engineers, and other IT professionals.

  8. Mining the Galaxy Zoo Database: Machine Learning Applications

    Science.gov (United States)

    Borne, Kirk D.; Wallin, J.; Vedachalam, A.; Baehr, S.; Lintott, C.; Darg, D.; Smith, A.; Fortson, L.

    2010-01-01

    The new Zooniverse initiative is addressing the data flood in the sciences through a transformative partnership between professional scientists, volunteer citizen scientists, and machines. As part of this project, we are exploring the application of machine learning techniques to data mining problems associated with the large and growing database of volunteer science results gathered by the Galaxy Zoo citizen science project. We will describe the basic challenge, some machine learning approaches, and early results. One of the motivators for this study is the acquisition (through the Galaxy Zoo results database) of approximately 100 million classification labels for roughly one million galaxies, yielding a tremendously large and rich set of training examples for improving automated galaxy morphological classification algorithms. In our first case study, the goal is to learn which morphological and photometric features in the Sloan Digital Sky Survey (SDSS) database correlate most strongly with user-selected galaxy morphological class. As a corollary to this study, we are also aiming to identify which galaxy parameters in the SDSS database correspond to galaxies that have been the most difficult to classify (based upon large dispersion in their volunteer-provided classifications). Our second case study will focus on similar data mining analyses and machine learning algorithms applied to the Galaxy Zoo catalog of merging and interacting galaxies. The outcomes of this project will have applications in future large sky surveys, such as the LSST (Large Synoptic Survey Telescope) project, which will generate a catalog of 20 billion galaxies and will produce an additional astronomical alert database of approximately 100 thousand events each night for 10 years -- the capabilities and algorithms that we are exploring will assist in the rapid characterization and classification of such massive data streams. This research has been supported in part through NSF award #0941610.
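
    The first case study amounts to supervised learning of the volunteer-assigned morphology label from photometric features, with feature importances indicating the strongest correlates. The sketch below uses synthetic stand-in data and placeholder feature names rather than the actual SDSS/Galaxy Zoo schema.

        # Synthetic stand-in for the case study: which features predict class?
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 4))                   # placeholder photometric features
        y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)    # stand-in morphology label

        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
        names = ["u-r colour", "concentration", "petro radius", "axis ratio"]
        for name, imp in zip(names, clf.feature_importances_):
            print(f"{name:14s} importance = {imp:.3f}")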

  9. Basis Set Exchange: A Community Database for Computational Sciences

    Energy Technology Data Exchange (ETDEWEB)

    Schuchardt, Karen L.; Didier, Brett T.; Elsethagen, Todd O.; Sun, Lisong; Gurumoorthi, Vidhya; Chase, Jared M.; Li, Jun; Windus, Theresa L.

    2007-05-01

    Basis sets are one of the most important input data for computational models in the chemistry, materials, biology and other science domains that utilize computational quantum mechanics methods. Providing a shared, web accessible environment where researchers can not only download basis sets in their required format, but browse the data, contribute new basis sets, and ultimately curate and manage the data as a community will facilitate growth of this resource and encourage sharing both data and knowledge. We describe the Basis Set Exchange (BSE), a web portal that provides advanced browsing and download capabilities, facilities for contributing basis set data, and an environment that incorporates tools to foster development and interaction of communities. The BSE leverages and enables continued development of the basis set library originally assembled at the Environmental Molecular Sciences Laboratory.

  10. Basis set exchange: a community database for computational sciences.

    Science.gov (United States)

    Schuchardt, Karen L; Didier, Brett T; Elsethagen, Todd; Sun, Lisong; Gurumoorthi, Vidhya; Chase, Jared; Li, Jun; Windus, Theresa L

    2007-01-01

    Basis sets are some of the most important input data for computational models in the chemistry, materials, biology, and other science domains that utilize computational quantum mechanics methods. Providing a shared, Web-accessible environment where researchers can not only download basis sets in their required format but browse the data, contribute new basis sets, and ultimately curate and manage the data as a community will facilitate growth of this resource and encourage sharing both data and knowledge. We describe the Basis Set Exchange (BSE), a Web portal that provides advanced browsing and download capabilities, facilities for contributing basis set data, and an environment that incorporates tools to foster development and interaction of communities. The BSE leverages and enables continued development of the basis set library originally assembled at the Environmental Molecular Sciences Laboratory.

  11. Using AMDD method for Database Design in Mobile Cloud Computing Systems

    Directory of Open Access Journals (Sweden)

    Silviu Claudiu POPA

    2013-01-01

    Full Text Available The development of wireless telecommunications technologies gave birth to new kinds of e-commerce, the so-called Mobile e-Commerce or m-Commerce. Mobile Cloud Computing (MCC) represents a new IT research area that combines mobile computing and cloud computing techniques. Behind a cloud mobile commerce system there is a database containing all the information necessary for transactions. By means of the Agile Model Driven Development (AMDD) method, we are able to achieve many benefits that smooth the progress of designing the databases used for cloud m-Commerce systems.

  12. Advancements in web-database applications for rabies surveillance

    Directory of Open Access Journals (Sweden)

    Bélanger Denise

    2011-08-01

    Full Text Available Abstract Background Protection of public health from rabies is informed by the analysis of surveillance data from human and animal populations. In Canada, public health, agricultural and wildlife agencies at the provincial and federal level are responsible for rabies disease control, and this has led to multiple agency-specific data repositories. Aggregation of agency-specific data into one database application would enable more comprehensive data analyses and effective communication among participating agencies. In Québec, RageDB was developed to house surveillance data for the raccoon rabies variant, representing the next generation in web-based database applications that provide a key resource for the protection of public health. Results RageDB incorporates data from, and grants access to, all agencies responsible for the surveillance of raccoon rabies in Québec. Technological advancements of RageDB over earlier rabies surveillance databases include (1) automatic integration of multi-agency data and diagnostic results on a daily basis; (2) a web-based data editing interface that enables authorized users to add, edit and extract data; and (3) an interactive dashboard to help visualize data simply and efficiently, in table, chart, and cartographic formats. Furthermore, RageDB stores data from citizens who voluntarily report sightings of rabies suspect animals. We also discuss how sightings data can indicate public perception of the risk of raccoon rabies and thus aid in directing the allocation of disease control resources for protecting public health. Conclusions RageDB provides an example in the evolution of spatio-temporal database applications for the storage, analysis and communication of disease surveillance data. The database was fast and inexpensive to develop by using open-source technologies, simple and efficient design strategies, and shared web hosting. The database increases communication among agencies collaborating to protect human health from

  13. Independent College Computer Major Database Principle and Application Teaching Reform

    Institute of Scientific and Technical Information of China (English)

    王兴柱

    2015-01-01

    In view of the problems existing in the teaching of database principles and applications to computer majors at independent colleges, and drawing on the author's teaching practice, this paper starts from an analysis of the characteristics of independent college students and discusses teaching reform in three respects: teaching objectives and syllabus, teaching methods, and the teachers themselves. It puts forward several reform suggestions that have been tested in practice, in the hope of aiding the teaching reform of this course.

  14. Application Program Interface for the Orion Aerodynamics Database

    Science.gov (United States)

    Robinson, Philip E.; Thompson, James

    2013-01-01

    The Application Programming Interface (API) for the Crew Exploration Vehicle (CEV) Aerodynamic Database has been developed to provide the developers of software an easily implemented, fully self-contained method of accessing the CEV Aerodynamic Database for use in their analysis and simulation tools. The API is programmed in C and provides a series of functions to interact with the database, such as initialization, selecting various options, and calculating the aerodynamic data. No special functions (file read/write, table lookup) are required on the host system other than those included with a standard ANSI C installation. It reads one or more files of aero data tables. Previous releases of aerodynamic databases for space vehicles have only included data tables and a document of the algorithm and equations to combine them for the total aerodynamic forces and moments. This process required each software tool to have a unique implementation of the database code. Errors or omissions in the documentation, or errors in the implementation, led to a lengthy and burdensome process of having to debug each instance of the code. Additionally, input file formats differ for each space vehicle simulation tool, requiring the aero database tables to be reformatted to meet the tool's input file structure requirements. Finally, the capabilities for built-in table lookup routines vary for each simulation tool. Implementation of a new database may require an update to and verification of the table lookup routines. This may be required if the number of dimensions of a data table exceeds the capability of the simulation tool's built-in lookup routines. A single software solution was created to provide an aerodynamics software model that could be integrated into other simulation and analysis tools. The highly complex Orion aerodynamics model can then be quickly included in a wide variety of tools. The API code is written in ANSI C for ease of portability to a wide variety of systems. The
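
    The pattern the abstract describes, a self-contained library that owns its table storage and lookup so every host tool computes identical results, can be sketched as follows (in Python rather than the API's ANSI C; the class, table name and data are invented for illustration).

        # Sketch of a self-contained aero-database lookup layer (Python stand-in
        # for the ANSI C API; table name, breakpoints and values are invented).
        import numpy as np

        class AeroDB:
            def __init__(self):
                self.tables = {}                     # name -> (breakpoints, values)

            def load_table(self, name, alpha, coeff):
                self.tables[name] = (np.asarray(alpha, float), np.asarray(coeff, float))

            def lookup(self, name, alpha):
                """Built-in 1-D table lookup with linear interpolation."""
                bp, val = self.tables[name]
                return float(np.interp(alpha, bp, val))

        db = AeroDB()
        db.load_table("CA", alpha=[0.0, 10.0, 20.0], coeff=[0.30, 0.45, 0.80])
        print(db.lookup("CA", 15.0))   # 0.625, identical in every host simulation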

  15. A user's guide to particle physics computer-searchable databases on the SLAC-SPIRES system

    Energy Technology Data Exchange (ETDEWEB)

    Rittenberg, A.; Armstrong, F.E.; Levine, B.S.; Trippe, T.G.; Wohl, C.G.; Yost, G.P.; Whalley, M.R.; Addis, L.

    1986-09-01

    This report discusses five computer-searchable databases located at SLAC which are of interest to particle physicists. These databases assist the user in literature-searching, provide numerical data extracted from papers, and contain information about experiments. We describe the databases briefly, tell how to use the SPIRES database management system to access them interactively, and give several examples of their use.

  16. Computer Applications in the Design Process.

    Science.gov (United States)

    Winchip, Susan

    Computer Assisted Design (CAD) and Computer Assisted Manufacturing (CAM) are emerging technologies now being used in home economics and interior design applications. A microcomputer in a computer network system is capable of executing computer graphic functions such as three-dimensional modeling, as well as utilizing office automation packages to…

  17. Research on Security Management of Computer Database

    Institute of Scientific and Technical Information of China (English)

    石玉芳

    2015-01-01

    In recent years, with the rapid development of science and technology, the wide application of computers has brought enormous changes to people's daily life and work. However, computer systems themselves are still being perfected and have certain defects and problems, especially regarding the security of technology and databases. As the form in which a computer system manipulates and stores data, the computer database is an important component of the computer data system, and its security directly affects the future development of the whole system. Starting from the security management of computer databases, and in view of the current situation and existing problems of computer database security management, this paper puts forward several effective suggestions and measures to improve the level of computer database security management.

  18. Nanoinformatics: developing new computing applications for nanomedicine

    Science.gov (United States)

    Maojo, V.; Fritts, M.; Martin-Sanchez, F.; De la Iglesia, D.; Cachau, R.E.; Garcia-Remesal, M.; Crespo, J.; Mitchell, J.A.; Anguita, A.; Baker, N.; Barreiro, J.M.; Benitez, S. E.; De la Calle, G.; Facelli, J. C.; Ghazal, P.; Geissbuhler, A.; Gonzalez-Nilo, F.; Graf, N.; Grangeat, P.; Hermosilla, I.; Hussein, R.; Kern, J.; Koch, S.; Legre, Y.; Lopez-Alonso, V.; Lopez-Campos, G.; Milanesi, L.; Moustakis, V.; Munteanu, C.; Otero, P.; Pazos, A.; Perez-Rey, D.; Potamias, G.; Sanz, F.; Kulikowski, C.

    2012-01-01

    Nanoinformatics has recently emerged to address the need of computing applications at the nano level. In this regard, the authors have participated in various initiatives to identify its concepts, foundations and challenges. While nanomaterials open up the possibility for developing new devices in many industrial and scientific areas, they also offer breakthrough perspectives for the prevention, diagnosis and treatment of diseases. In this paper, we analyze the different aspects of nanoinformatics and suggest five research topics to help catalyze new research and development in the area, particularly focused on nanomedicine. We also encompass the use of informatics to further the biological and clinical applications of basic research in nanoscience and nanotechnology, and the related concept of an extended “nanotype” to coalesce information related to nanoparticles. We suggest how nanoinformatics could accelerate developments in nanomedicine, similarly to what happened with the Human Genome and other –omics projects, on issues like exchanging modeling and simulation methods and tools, linking toxicity information to clinical and personal databases or developing new approaches for scientific ontologies, among many others. PMID:22942787

  19. New Trend of Database for the Internet Era -- Object database and its application

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    In the Internet era, relational databases in general use cannot be applied to some of the problems that must be solved. In this paper, we describe the necessary capabilities of database management systems and compare them with the limitations of RDBs. We also introduce the object database and its efficiency, which will be the new trend in databases. We use "Jasmine2000" as a concrete example of an object database in business use, and verify its efficiency with its applied cases. At the end, we point to the future direction of databases.

  20. Object relationship notation (ORN) for database applications enhancing the modeling and implementation of associations

    CERN Document Server

    Ehlmann, Bryon K

    2009-01-01

    Conceptually, a database consists of objects and relationships. Object Relationship Notation (ORN) is a simple notation that more precisely defines relationships by combining UML multiplicities with uniquely defined referential actions. "Object Relationship Notation (ORN) for Database Applications: Enhancing the Modeling and Implementation of Associations" shows how ORN can be used in UML class diagrams and database definition languages (DDLs) to better model and implement relationships and thus more productively develop database applications. For the database developer, it presents many exa

  1. Computational Modelling in Cancer: Methods and Applications

    Directory of Open Access Journals (Sweden)

    Konstantina Kourou

    2015-01-01

    Full Text Available Computational modelling of diseases is an emerging field that has proven valuable for the diagnosis, prognosis and treatment of disease. Cancer is one of the diseases where computational modelling provides enormous advancements, allowing medical professionals to perform in silico experiments and gain insights prior to any in vivo procedure. In this paper, we review the most recent computational models that have been proposed for cancer. Well-known databases used for computational modelling experiments, as well as the various markup language representations, are discussed. In addition, recent state-of-the-art research studies related to tumour growth and angiogenesis modelling are presented.

  2. DNAStat, version 2.1--a computer program for processing genetic profile databases and biostatistical calculations.

    Science.gov (United States)

    Berent, Jarosław

    2010-01-01

    This paper presents the new DNAStat version 2.1 for processing genetic profile databases and performing biostatistical calculations. The popularization of DNA studies employed in the judicial system has led to the necessity of developing appropriate computer programs. Such programs must, above all, address two critical problems, i.e. the broadly understood data processing and data storage, and biostatistical calculations. Moreover, in cases of terrorist attacks and mass natural disasters, the ability to identify victims by searching for related individuals is very important. DNAStat version 2.1 is an adequate program for such purposes. DNAStat version 1.0 was launched in 2005. In 2006, the program was updated to versions 1.1 and 1.2. There were, however, only slight differences between those versions and the original one. DNAStat version 2.0 was launched in 2007, and the major program improvement was the introduction of group calculation options with potential application to the personal identification of victims of mass disasters and terrorism. The latest version, 2.1, has the option of language selection--Polish or English--which will enhance the usage and application of the program in other countries.
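
    A representative biostatistical calculation that such a program automates is the random match probability of a profile, computed as the product of per-locus genotype frequencies under Hardy-Weinberg assumptions. The allele frequencies below are made up for the example; this is an illustration of the statistic, not DNAStat itself.

        # Illustrative random-match-probability calculation (made-up frequencies).
        def genotype_freq(p, q=None):
            """Hardy-Weinberg genotype frequency: p^2 (homozygote) or 2pq."""
            return p * p if q is None else 2 * p * q

        # One entry per locus: (p,) for a homozygote, (p, q) for a heterozygote.
        profile = [(0.12,), (0.08, 0.21), (0.05, 0.30)]

        rmp = 1.0
        for alleles in profile:
            rmp *= genotype_freq(*alleles)
        print(f"random match probability: {rmp:.3e}")   # ~1.5e-05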

  3. Construction of a robust, large-scale, collaborative database for raw data in computational chemistry: the Collaborative Chemistry Database Tool (CCDBT).

    Science.gov (United States)

    Chen, Mingyang; Stott, Amanda C; Li, Shenggang; Dixon, David A

    2012-04-01

    A robust metadata database called the Collaborative Chemistry Database Tool (CCDBT) for massive amounts of computational chemistry raw data has been designed and implemented. It performs data synchronization and simultaneously extracts the metadata. Computational chemistry data in various formats from different computing sources, software packages, and users can be parsed into uniform metadata for storage in a MySQL database. Parsing is performed by a parsing pyramid, including parsers written for different levels of data types and sets created by the parser loader after loading parser engines and configurations.
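
    The parsing pyramid can be pictured as a registry of per-format parser engines, each reducing a raw output file to a uniform metadata row destined for the SQL store. The sketch below uses an invented output format and registry; it illustrates the dispatch pattern, not the CCDBT code.

        # Illustrative "parsing pyramid": per-format parsers registered with a
        # loader; each returns one uniform metadata row (not the CCDBT code).
        import re

        PARSERS = {}

        def parser(fmt):
            """Register a parser engine for one raw-output format."""
            def register(fn):
                PARSERS[fmt] = fn
                return fn
            return register

        @parser("demo_qc")
        def parse_demo(text):
            # Extract fields from an invented quantum-chemistry output format.
            m = re.search(r"Total energy\s*=\s*(-?\d+\.\d+)", text)
            return {"program": "demo_qc",
                    "energy_hartree": float(m.group(1)) if m else None}

        raw = "... Total energy = -76.026760 ...\n"
        print(PARSERS["demo_qc"](raw))      # metadata row for the MySQL store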

  4. EXPLORATIONS IN QUANTUM COMPUTING FOR FINANCIAL APPLICATIONS

    OpenAIRE

    Gare, Jesse

    2010-01-01

    Quantum computers have the potential to increase the solution speed for many computational problems. This paper is a first step into possible applications for quantum computing in the context of computational finance. The fundamental ideas of quantum computing are introduced, followed by an exposition of the algorithms of Deutsch and Grover. Improved mean and median estimation are shown as results of Grover's generalized framework. The algorithm for mean estimation is refined to an improved M...

  5. Definitions of database files and fields of the Personal Computer-Based Water Data Sources Directory

    Science.gov (United States)

    Green, J. Wayne

    1991-01-01

    This report describes the data-base files and fields of the personal computer-based Water Data Sources Directory (WDSD). The personal computer-based WDSD was derived from the U.S. Geological Survey (USGS) mainframe computer version. The mainframe version of the WDSD is a hierarchical data-base design. The personal computer-based WDSD is a relational data- base design. This report describes the data-base files and fields of the relational data-base design in dBASE IV (the use of brand names in this abstract is for identification purposes only and does not constitute endorsement by the U.S. Geological Survey) for the personal computer. The WDSD contains information on (1) the type of organization, (2) the major orientation of water-data activities conducted by each organization, (3) the names, addresses, and telephone numbers of offices within each organization from which water data may be obtained, (4) the types of data held by each organization and the geographic locations within which these data have been collected, (5) alternative sources of an organization's data, (6) the designation of liaison personnel in matters related to water-data acquisition and indexing, (7) the volume of water data indexed for the organization, and (8) information about other types of data and services available from the organization that are pertinent to water-resources activities.

  6. Brain-Computer Interfaces : Beyond Medical Applications

    NARCIS (Netherlands)

    Erp, J.B.F. van; Lotte, F.; Tangermann, M.

    2012-01-01

    Brain-computer interaction has already moved from assistive care to applications such as gaming. Improvements in usability, hardware, signal processing, and system integration should yield applications in other nonmedical areas.

  7. Brain-Computer Interfaces : Beyond Medical Applications

    NARCIS (Netherlands)

    Erp, J.B.F. van; Lotte, F.; Tangermann, M.

    2012-01-01

    Brain-computer interaction has already moved from assistive care to applications such as gaming. Improvements in usability, hardware, signal processing, and system integration should yield applications in other nonmedical areas.

  8. Development and application of animation technique by using PIV database

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Y.H. [Korea Maritime University, Pusan (Korea); Choi, J.W. [IIT, Pusan (Korea); Seo, M.S.; Ahn, K.H.; Kim, M.Y. [Graduate School, Korea Maritime University, Pusan (Korea)

    1998-11-01

    An animation technique based on the PIV database is particularly emphasized to give a macroscopic and quantitative description of complex flow fields. As an example, a Karman vortex street (Re = 2x10^4) from a two-dimensional cylinder immersed in a circular water tank is visualized and processed by PIV. A cross-correlation algorithm to estimate the peak coefficients is adopted for the identification, and its performance is compared to that of the FFT routine. All animation jobs are implemented completely on a single personal computer. Compressed digital images are obtained by a Motion-JPEG board, and various AVI files are finally obtained through graphic processes. As results, continuous pictures of the spatial distribution of the instant vectors, turbulent intensity, turbulent kinetic energy, kinetic energy, vorticity and three Reynolds stress components are animated dynamically on a PC monitor. Streak lines, trajectories and streamlines are also displayed in a real-time sense. (author). 10 refs., 18 figs.
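
    The correlation step at the heart of PIV is compact enough to sketch: cross-correlate two interrogation windows and read the displacement vector off the location of the correlation peak. The windows below are synthetic, and the direct correlate2d call merely stands in for the direct and FFT-based variants the abstract compares.

        # Synthetic PIV displacement estimate via direct cross-correlation.
        import numpy as np
        from scipy.signal import correlate2d

        rng = np.random.default_rng(1)
        win_a = rng.random((32, 32))                         # first exposure
        win_b = np.roll(win_a, shift=(3, -2), axis=(0, 1))   # particles moved (3, -2)

        corr = correlate2d(win_b - win_b.mean(), win_a - win_a.mean(), mode="same")
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        dy, dx = peak[0] - corr.shape[0] // 2, peak[1] - corr.shape[1] // 2
        print(dy, dx)   # recovered displacement: 3 -2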

  9. CLiBE: a database of computed ligand binding energy for ligand-receptor complexes.

    Science.gov (United States)

    Chen, X; Ji, Z L; Zhi, D G; Chen, Y Z

    2002-11-01

    Consideration of the binding competitiveness of a drug candidate against natural ligands and other drugs that bind to the same receptor site may facilitate the rational development of a candidate into a potent drug. A strategy that can be applied in computer-aided drug design is to evaluate the ligand-receptor interaction energy or other scoring functions of a designed drug against those of the relevant ligands known to bind to the same binding site. As a tool to facilitate such a strategy, a database of ligand-receptor interaction energies was developed from known ligand-receptor 3D structural entries in the Protein Data Bank (PDB). The energy is computed based on a molecular mechanics force field that has been used in the prediction of therapeutic and toxicity targets of drugs. This database also contains information about ligand function and other properties, and it can be accessed at http://xin.cz3.nus.edu.sg/group/CLiBE.asp. The computed energy components may facilitate the probing of the mode of action and other profiles of binding. A number of computed energies of some PDB ligand-receptor complexes in this database were studied and compared to experimental binding affinities. A certain degree of correlation between the computed energy and experimental binding affinity is found, which suggests that the computed energy may be useful in facilitating a qualitative analysis of drug binding competitiveness.
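
    A toy version of the stored quantity, a molecular-mechanics interaction energy summed over ligand-receptor atom pairs as Lennard-Jones plus Coulomb terms, is sketched below. The parameters and coordinates are arbitrary demonstration values, not the CLiBE force field.

        # Toy ligand-receptor interaction energy: Lennard-Jones + Coulomb sums
        # over atom pairs. Parameters are arbitrary, not the CLiBE force field.
        import numpy as np

        def interaction_energy(lig_xyz, lig_q, rec_xyz, rec_q, eps=0.2, sigma=3.4):
            e_vdw = e_elec = 0.0
            for xi, qi in zip(lig_xyz, lig_q):
                for xj, qj in zip(rec_xyz, rec_q):
                    r = np.linalg.norm(np.asarray(xi) - np.asarray(xj))
                    sr6 = (sigma / r) ** 6
                    e_vdw += 4.0 * eps * (sr6 * sr6 - sr6)   # 12-6 Lennard-Jones
                    e_elec += 332.0 * qi * qj / r            # Coulomb, kcal/mol units
            return e_vdw + e_elec

        lig = [(0.0, 0.0, 0.0)]
        rec = [(3.8, 0.0, 0.0)]
        print(interaction_energy(lig, [0.2], rec, [-0.3]))   # negative: attractive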

  10. Computer systems and methods for the query and visualization of multidimensional databases

    Science.gov (United States)

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2017-04-25

    A method of generating a data visualization is performed at a computer having a display, one or more processors, and memory. The memory stores one or more programs for execution by the one or more processors. The process receives user specification of a plurality of characteristics of a data visualization. The data visualization is based on data from a multidimensional database. The characteristics specify at least x-position and y-position of data marks corresponding to tuples of data retrieved from the database. The process generates a data visualization according to the specified plurality of characteristics. The data visualization has an x-axis defined based on data for one or more first fields from the database that specify x-position of the data marks and the data visualization has a y-axis defined based on data for one or more second fields from the database that specify y-position of the data marks.

  11. Computational Unified Set Theory and Application

    Institute of Scientific and Technical Information of China (English)

    Zhang Jiang; Li Xuewei; He Zhongxiong

    2006-01-01

    The computational unified set model (CUSM), the latest development of unified set theory, is introduced in this paper. The model combines unified set theory, information granules, complex adaptive systems and cognitive science to present a new approach to simulating human cognition, which can be viewed as an evolutionary process of automatic learning from data sets. The information granule, which is the unit of cognition in the CUSM, can be synthesized and created by the basic operators. It can also form a granule network by linking with other granules. By learning from a database, the system can evolve under the pressure of selection. As adaptive results, fuzzy sets, vague sets and rough sets, etc., can emerge spontaneously. The CUSM answers the question of the origin of the uncertainties in the thinking process described by unified set theory: they are due to the emergent properties of a holistic system of multiple cognitive units. The CUSM also creates a dynamic model that can adapt to the environment. As a result, the "closed world" limitation in machine learning may be broken. The paper also discusses applications of the CUSM in rule discovery, problem solving, clustering analysis and data mining, etc. The main features of the model compared with classical approaches to those problems are its adaptability, flexibility and robustness rather than accuracy.

  12. Grid computing infrastructure, service, and applications

    CERN Document Server

    Jie, Wei; Chen, Jinjun

    2009-01-01

    Offering a comprehensive discussion of advances in grid computing, this book summarizes the concepts, methods, technologies, and applications. It covers topics such as philosophy, middleware, architecture, services, and applications. It also includes technical details to demonstrate how grid computing works in the real world

  13. Codesign Analysis of a Computer Graphics Application

    DEFF Research Database (Denmark)

    Madsen, Jan; Brage, Jens P.

    1996-01-01

    This paper describes a codesign case study where a computer graphics application is examined with the intention to speed up its execution. The application is specified as a C program, and is characterized by the lack of a simple compute-intensive kernel. The hardware/software partitioning is based...

  14. Practical applications of soft computing in engineering

    CERN Document Server

    2001-01-01

    Soft computing has been presented not only with the theoretical developments but also with a large variety of realistic applications to consumer products and industrial systems. Application of soft computing has provided the opportunity to integrate human-like vagueness and real-life uncertainty into an otherwise hard computer program. This book highlights some of the recent developments in practical applications of soft computing in engineering problems. All the chapters have been sophisticatedly designed and revised by international experts to achieve wide but in-depth coverage. Contents: Au

  15. Mobile Cloud Computing and Applications

    Institute of Scientific and Technical Information of China (English)

    Chengzhong Xu

    2011-01-01

    In 2010, cloud computing gained momentum. Cloud computing is a model for real-time, on-demand, pay-for-use network access to a shared pool of configurable computing and storage resources. It has matured from a promising business concept to a working reality in both the private and public IT sectors. The U.S. government, for example, has requested all its agencies to evaluate cloud computing alternatives as part of their budget submissions for new IT investment.

  16. Integer programming theory, applications, and computations

    CERN Document Server

    Taha, Hamdy A

    1975-01-01

    Integer Programming: Theory, Applications, and Computations provides information pertinent to the theory, applications, and computations of integer programming. This book presents the computational advantages of the various techniques of integer programming.Organized into eight chapters, this book begins with an overview of the general categorization of integer applications and explains the three fundamental techniques of integer programming. This text then explores the concept of implicit enumeration, which is general in a sense that it is applicable to any well-defined binary program. Other

  17. An Embedded Database Application for the Aggregation of Farming Device Data

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem; Pedersen, Torben Bach

    2010-01-01

    In order to store massive amounts of data produced by farming devices and to keep data that spans long intervals of time for analysis, reporting and maintenance purposes, it is desirable to reduce the size of the data by maintaining the data at different aggregate levels. The older data can be made coarse-grained while keeping the newest data fine-grained. Considering the availability of a limited amount of storage capacity on the farm machinery, an application written in C was developed to collect the data from a CAN-BUS, store it into the embedded database efficiently and perform gradual data aggregation effectively. Furthermore, the aggregation is achieved by using either of two ratio-based aggregation methods or a time-granularity based aggregation method. A detailed description of the embedded database technology on a tractor computer is also presented in this paper.
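
    Time-granularity based gradual aggregation can be sketched as follows: rows older than a cutoff are collapsed to coarser averages and the raw rows deleted, while newer rows stay fine-grained. SQLite stands in for the embedded database here, and the table and column names are invented for the example.

        # Illustrative gradual aggregation (invented schema; SQLite stands in
        # for the embedded database on the tractor computer).
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE readings(ts INTEGER, sensor TEXT, value REAL);
            INSERT INTO readings VALUES (1000, 'rpm', 1500), (1010, 'rpm', 1520),
                                        (9000, 'rpm', 1490);
        """)

        cutoff = 5000                        # readings older than this get coarsened
        con.execute("CREATE TABLE hourly(hour INTEGER, sensor TEXT, value REAL)")
        con.execute("""
            INSERT INTO hourly
            SELECT (ts / 3600) * 3600 AS hour, sensor, AVG(value)
            FROM readings WHERE ts < ? GROUP BY hour, sensor
        """, (cutoff,))
        con.execute("DELETE FROM readings WHERE ts < ?", (cutoff,))
        print(con.execute("SELECT * FROM hourly").fetchall())   # [(0, 'rpm', 1510.0)]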

  18. Multimedia Database Applications: Issues and Concerns for Classroom Teaching

    CERN Document Server

    Yu, Chien

    2011-01-01

    The abundance of multimedia data and information is challenging educators to effectively search, browse, access, use, and store the data for their classroom teaching. However, many educators may still be accustomed to teaching or searching for information using conventional methods, and these methods often do not function well with multimedia data. Educators also need to interact with and manage a variety of digital media files efficiently. The purpose of this study is to review current multimedia database applications in teaching and learning, and to discuss some of the issues or concerns that educators may have while incorporating multimedia data into their classrooms. Some strategies and recommendations are also provided so that educators can use multimedia data more effectively in their teaching environments.

  19. Applications of the Cambridge Structural Database in chemical education.

    Science.gov (United States)

    Battle, Gary M; Ferrence, Gregory M; Allen, Frank H

    2010-10-01

    The Cambridge Structural Database (CSD) is a vast and ever growing compendium of accurate three-dimensional structures that has massive chemical diversity across organic and metal-organic compounds. For these reasons, the CSD is finding significant uses in chemical education, and these applications are reviewed. As part of the teaching initiative of the Cambridge Crystallographic Data Centre (CCDC), a teaching subset of more than 500 CSD structures has been created that illustrate key chemical concepts, and a number of teaching modules have been devised that make use of this subset in a teaching environment. All of this material is freely available from the CCDC website, and the subset can be freely viewed and interrogated using WebCSD, an internet application for searching and displaying CSD information content. In some cases, however, the complete CSD System is required for specific educational applications, and some examples of these more extensive teaching modules are also discussed. The educational value of visualizing real three-dimensional structures, and of handling real experimental results, is stressed throughout.

  20. Applications of the Cambridge Structural Database in chemical education

    Science.gov (United States)

    Battle, Gary M.; Ferrence, Gregory M.; Allen, Frank H.

    2010-01-01

    The Cambridge Structural Database (CSD) is a vast and ever growing compendium of accurate three-dimensional structures that has massive chemical diversity across organic and metal–organic compounds. For these reasons, the CSD is finding significant uses in chemical education, and these applications are reviewed. As part of the teaching initiative of the Cambridge Crystallographic Data Centre (CCDC), a teaching subset of more than 500 CSD structures has been created that illustrate key chemical concepts, and a number of teaching modules have been devised that make use of this subset in a teaching environment. All of this material is freely available from the CCDC website, and the subset can be freely viewed and interrogated using WebCSD, an internet application for searching and displaying CSD information content. In some cases, however, the complete CSD System is required for specific educational applications, and some examples of these more extensive teaching modules are also discussed. The educational value of visualizing real three-dimensional structures, and of handling real experimental results, is stressed throughout. PMID:20877495

  1. Android as a platform for database application development case : Winha mobile

    OpenAIRE

    Muli, Joseph

    2013-01-01

    This thesis aims to help beginner Android developers and anyone interested in database integration on the Android Operating System understand the basic fundamentals of how to design database applications, specifically for mobile devices. A review of Android applications has been made to give an overview of the general properties of the applications in relation to database creation and management. To accomplish this thesis, SQL (Structured Query Language) and Android application development w...

  2. A new on-line electrocardiographic records database and computer routines for data analysis.

    Science.gov (United States)

    Ledezma, Carlos A; Severeyn, Erika; Perpiñán, Gilberto; Altuve, Miguel; Wong, Sara

    2014-01-01

    Gathering experimental data to test computer methods developed during research is hard work. Nowadays, some databases are stored online and can be freely downloaded; however, the range of available databases is not yet wide and not all pathologies are covered. Researchers with low resources need more data they can consult for free. To cope with this, we present an on-line portal containing a compilation of ECG databases recorded over the last two decades for research purposes. The first version of this portal contains four databases of ECG records: ischemic cardiopathy (72 patients, 3-lead ECG each), ischemic preconditioning (20 patients, 3-lead ECG each), diabetes (51 patients, 8-lead ECG each) and metabolic syndrome (25 subjects, 12-lead ECG each). In addition, one computer program and three routines are provided in order to correctly read the signals, and two digital filters along with two ECG wave detectors are provided for further processing. This portal will be constantly growing; other ECG databases and signal processing software will be uploaded. With this project, we give the scientific community a resource to avoid hours of data collection and to develop free software.

  3. Web Application Software for Ground Operations Planning Database (GOPDb) Management

    Science.gov (United States)

    Lanham, Clifton; Kallner, Shawn; Gernand, Jeffrey

    2013-01-01

    A Web application facilitates collaborative development of the ground operations planning document. This will reduce costs and development time for new programs by incorporating the data governance, access control, and revision tracking of the ground operations planning data. Ground Operations Planning requires the creation and maintenance of detailed timelines and documentation. The GOPDb Web application was created using state-of-the-art Web 2.0 technologies, and was deployed as SaaS (Software as a Service), with an emphasis on data governance and security needs. Application access is managed using two-factor authentication, with data write permissions tied to user roles and responsibilities. Multiple instances of the application can be deployed on a Web server to meet the robust needs for multiple, future programs with minimal additional cost. This innovation features high availability and scalability, with no additional software that needs to be bought or installed. For data governance and security (data quality, management, business process management, and risk management for data handling), the software uses NAMS. No local copy/cloning of data is permitted. Data change log/tracking is addressed, as well as collaboration, work flow, and process standardization. The software provides on-line documentation and detailed Web-based help. There are multiple ways that this software can be deployed on a Web server to meet ground operations planning needs for future programs. The software could be used to support commercial crew ground operations planning, as well as commercial payload/satellite ground operations planning. The application source code and database schema are owned by NASA.

  4. Data management and language enhancement for generalized set theory computer language for operation of large relational databases

    Science.gov (United States)

    Finley, Gail T.

    1988-01-01

    This report covers the study of the relational database implementation in the NASCAD computer program system. The existing system is used primarily for computer aided design. Attention is also directed to a hidden-surface algorithm for final drawing output.

  5. Computer programme for control and maintenance and object oriented database: application to the realisation of a particle accelerator, the VIVITRON

    Energy Technology Data Exchange (ETDEWEB)

    Diaz, A

    1996-01-11

    The command and control system for the Vivitron, a new generation electrostatic particle accelerator, has been implemented using workstations and front-end computers using VME standards, the whole within a UNIX/VxWorks environment. This architecture is distributed over an Ethernet network. Measurements and commands of the different sensors and actuators are concentrated in the front-end computers. The development of a second version of the software giving better performance and more functionality is described. X11-based communication is used to transmit all the necessary information to display parameters within the front-end computers onto the graphic screens. All other communication between processes uses the Remote Procedure Call (RPC) method. The design of the system is based largely on the object oriented database O2, which integrates a full description of the equipment and the code necessary to manage it. This code is generated by the database. This innovation permits easy maintenance of the system and removes the need for a specialist when adding new equipment. The new version of the command and control system has been progressively installed since August 1995. (author) 38 refs.

  6. APPLICATION OF GEOGRAPHICAL PARAMETER DATABASE TO ESTABLISHMENT OF UNIT POPULATION DATABASE

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    GIS has become a good tool for handling geographical, economic, and population data, allowing more and more information to be obtained from these data. On the other hand, in some cases, such as a calamity (hurricane, earthquake, flood, drought, etc.) or a decision-making problem (setting up a broadcasting transmitter, building a chemical plant, etc.), the total population in the region influenced by the calamity or project has to be evaluated. In this paper, a method is put forward to evaluate the population in such a special region. By exploring the correlation between geographical parameters and the distribution of people in the same region by means of quantitative and qualitative analysis, a unit population database (1 km × 1 km) is established. In this way, the number of people in a special region can be estimated by adding up the population in every grid cell inside the region boundary, as sketched below. The geographical parameters are obtained from the topographic database and the DEM database at the scale of 1:250 000. The fundamental geographical parameter database covering county administrative boundaries and the 1 km × 1 km grid is set up, and the population database at county level is set up as well. Both the geographical parameter database and the unit population database offer sufficient conditions for quantitative analysis. They will play an important role in the research fields of data mining (DM), decision-making support systems (DSS), and regional sustainable development.
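
    The grid-summing step lends itself to a very small illustration. The sketch below (all data hypothetical) stores the unit population as a NumPy array and masks the cells that fall inside a region boundary:

        import numpy as np

        # Hypothetical 1 km x 1 km unit population grid (people per cell).
        population = np.random.default_rng(0).integers(0, 500, size=(100, 100))

        # Boolean mask for cells inside the affected region; in practice this
        # would be rasterized from a boundary polygon, here a simple circle.
        y, x = np.ogrid[:100, :100]
        region_mask = (y - 50) ** 2 + (x - 50) ** 2 <= 20 ** 2

        # The regional population is the sum over the masked grid cells.
        print(int(population[region_mask].sum()))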

  7. Implementing and developing cloud computing applications

    CERN Document Server

    Sarna, David E Y

    2010-01-01

    From small start-ups to major corporations, companies of all sizes have embraced cloud computing for the scalability, reliability, and cost benefits it can provide. It has even been said that cloud computing may have a greater effect on our lives than the PC and dot-com revolutions combined. Filled with comparative charts and decision trees, Implementing and Developing Cloud Computing Applications explains exactly what it takes to build robust and highly scalable cloud computing applications in any organization. Covering the major commercial offerings available, it provides authoritative guidan

  8. Real-time solution of linear computational problems using databases of parametric reduced-order models with arbitrary underlying meshes

    Science.gov (United States)

    Amsallem, David; Tezaur, Radek; Farhat, Charbel

    2016-12-01

    A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real-time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds and therefore computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real-time.
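
    The online interpolation step can be pictured with a much simpler stand-in. The sketch below is illustrative only (the paper interpolates reduced operators on matrix manifolds after consistency-enforcing transformations, not entry-wise); it blends two precomputed reduced operators at a queried parameter value:

        import numpy as np

        # Offline phase: reduced operators precomputed at sampled parameter points.
        params = np.array([0.0, 1.0])                      # sampled parameter values
        A_db = [np.diag([1.0, 2.0]), np.diag([3.0, 5.0])]  # stored reduced operators

        def interpolated_rom(mu):
            """Entry-wise linear interpolation between the two stored operators."""
            t = (mu - params[0]) / (params[1] - params[0])
            return (1 - t) * A_db[0] + t * A_db[1]

        print(interpolated_rom(0.25))  # online phase: ROM at an unsampled point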

  9. Research on Performance Evaluation of Biological Database based on Layered Queuing Network Model under the Cloud Computing Environment

    OpenAIRE

    Zhengbin Luo; Dongmei Sun

    2013-01-01

    Evaluating the performance of a biological database based on a layered queuing network model under a cloud computing environment is a prerequisite, as well as an important step, for biological database optimization. Building on previous research concerning computer software and hardware performance evaluation under cloud environments, this study has further constructed a model system to evaluate the performance of a biological database based on a layered queuing network model and under cloud environme...

  10. Color in Computer Vision Fundamentals and Applications

    CERN Document Server

    Gevers, Theo; van de Weijer, Joost; Geusebroek, Jan-Mark

    2012-01-01

    While the field of computer vision drives many of today’s digital technologies and communication networks, the topic of color has emerged only recently in most computer vision applications. One of the most extensive works to date on color in computer vision, this book provides a complete set of tools for working with color in the field of image understanding. Based on the authors’ intense collaboration for more than a decade and drawing on the latest thinking in the field of computer science, the book integrates topics from color science and computer vision, clearly linking theor

  11. Computer-aided diagnosis system for bone scintigrams from Japanese patients: importance of training database

    DEFF Research Database (Denmark)

    Horikoshi, Hiroyuki; Kikuchi, Akihiro; Onoguchi, Masahisa

    2012-01-01

    Computer-aided diagnosis (CAD) software for bone scintigrams has recently been introduced as a clinical quality assurance tool. The purpose of this study was to compare the diagnostic accuracy of two CAD systems, one based on a European and one on a Japanese training database, in a group of bone scans from Japanese patients. The two CAD systems are trained to interpret bone scans using training databases consisting of bone scans with the desired interpretation, metastatic disease or not. One system was trained using 795 bone scans from European patients and the other with 904 bone scans from ... a higher specificity and accuracy compared to the European CAD software [81 vs. 57 % (p ... database showed significantly ...

  12. The Effect of Relational Database Technology on Administrative Computing at Carnegie Mellon University.

    Science.gov (United States)

    Golden, Cynthia; Eisenberger, Dorit

    1990-01-01

    Carnegie Mellon University's decision to standardize its administrative system development efforts on relational database technology and structured query language is discussed and its impact is examined in one of its larger, more widely used applications, the university information system. Advantages, new responsibilities, and challenges of the…

  13. Grid computing techniques and applications

    CERN Document Server

    Wilkinson, Barry

    2009-01-01

    "… the most outstanding aspect of this book is its excellent structure: it is as though we have been given a map to help us move around this technology from the base to the summit … I highly recommend this book …" Jose Lloret, Computing Reviews, March 2010

  14. Internet Databases of the Properties, Enzymatic Reactions, and Metabolism of Small Molecules—Search Options and Applications in Food Science

    Science.gov (United States)

    Minkiewicz, Piotr; Darewicz, Małgorzata; Iwaniak, Anna; Bucholska, Justyna; Starowicz, Piotr; Czyrko, Emilia

    2016-01-01

    Internet databases of small molecules, their enzymatic reactions, and metabolism have emerged as useful tools in food science. Database searching is also introduced as part of chemistry or enzymology courses for food technology students. Such resources support the search for information about single compounds and facilitate the introduction of secondary analyses of large datasets. Information can be retrieved from databases by searching for the compound name or structure, annotating with the help of chemical codes or drawn using molecule editing software. Data mining options may be enhanced by navigating through a network of links and cross-links between databases. Exemplary databases reviewed in this article belong to two classes: tools concerning small molecules (including general and specialized databases annotating food components) and tools annotating enzymes and metabolism. Some problems associated with database application are also discussed. Data summarized in computer databases may be used for calculation of daily intake of bioactive compounds, prediction of metabolism of food components, and their biological activity as well as for prediction of interactions between food component and drugs. PMID:27929431

  15. Internet Databases of the Properties, Enzymatic Reactions, and Metabolism of Small Molecules—Search Options and Applications in Food Science

    Directory of Open Access Journals (Sweden)

    Piotr Minkiewicz

    2016-12-01

    Internet databases of small molecules, their enzymatic reactions, and metabolism have emerged as useful tools in food science. Database searching is also introduced as part of chemistry or enzymology courses for food technology students. Such resources support the search for information about single compounds and facilitate the introduction of secondary analyses of large datasets. Information can be retrieved from databases by searching for the compound name or structure, annotating with the help of chemical codes or drawn using molecule editing software. Data mining options may be enhanced by navigating through a network of links and cross-links between databases. Exemplary databases reviewed in this article belong to two classes: tools concerning small molecules (including general and specialized databases annotating food components) and tools annotating enzymes and metabolism. Some problems associated with database application are also discussed. Data summarized in computer databases may be used for calculation of daily intake of bioactive compounds, prediction of metabolism of food components, and their biological activity as well as for prediction of interactions between food component and drugs.

  16. Internet Databases of the Properties, Enzymatic Reactions, and Metabolism of Small Molecules-Search Options and Applications in Food Science.

    Science.gov (United States)

    Minkiewicz, Piotr; Darewicz, Małgorzata; Iwaniak, Anna; Bucholska, Justyna; Starowicz, Piotr; Czyrko, Emilia

    2016-12-06

    Internet databases of small molecules, their enzymatic reactions, and metabolism have emerged as useful tools in food science. Database searching is also introduced as part of chemistry or enzymology courses for food technology students. Such resources support the search for information about single compounds and facilitate the introduction of secondary analyses of large datasets. Information can be retrieved from databases by searching for the compound name or structure, annotating with the help of chemical codes or drawn using molecule editing software. Data mining options may be enhanced by navigating through a network of links and cross-links between databases. Exemplary databases reviewed in this article belong to two classes: tools concerning small molecules (including general and specialized databases annotating food components) and tools annotating enzymes and metabolism. Some problems associated with database application are also discussed. Data summarized in computer databases may be used for calculation of daily intake of bioactive compounds, prediction of metabolism of food components, and their biological activity as well as for prediction of interactions between food component and drugs.

  17. Real-life applications with membrane computing

    CERN Document Server

    Zhang, Gexiang; Gheorghe, Marian

    2017-01-01

    This book thoroughly investigates the underlying theoretical basis of membrane computing models, and reveals their latest applications. In addition, to date there have been no illustrative case studies or complex real-life applications that capitalize on the full potential of the sophisticated membrane systems computational apparatus; gaps that this book remedies. By studying various complex applications – including engineering optimization, power systems fault diagnosis, mobile robot controller design, and complex biological systems involving data modeling and process interactions – the book also extends the capabilities of membrane systems models with features such as formal verification techniques, evolutionary approaches, and fuzzy reasoning methods. As such, the book offers a comprehensive and up-to-date guide for all researchers, PhDs and undergraduate students in the fields of computer science, engineering and the bio-sciences who are interested in the applications of natural computing models.

  18. Photo-z-SQL: integrated, flexible photometric redshift computation in a database

    CERN Document Server

    Beck, Róbert; Budavári, Tamás; Szalay, Alexander S; Csabai, István

    2016-01-01

    We present a flexible template-based photometric redshift estimation framework, implemented in C#, that can be seamlessly integrated into a SQL database (or DB) server and executed on-demand in SQL. The DB integration eliminates the need to move large photometric datasets outside a database for redshift estimation, and utilizes the computational capabilities of DB hardware. The code is able to perform both maximum likelihood and Bayesian estimation, and can handle inputs of variable photometric filter sets and corresponding broad-band magnitudes. It is possible to take into account the full covariance matrix between filters, and filter zero points can be empirically calibrated using measurements with given redshifts. The list of spectral templates and the prior can be specified flexibly, and the expensive synthetic magnitude computations are done via lazy evaluation, coupled with a caching of results. Parallel execution is fully supported. For large upcoming photometric surveys such as the LSST, the ability t...
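
    The maximum-likelihood branch of such an estimator reduces to a chi-square minimization over a redshift grid. The sketch below is a toy Python version under that assumption (synthetic template magnitudes, hypothetical names), not the Photo-z-SQL implementation:

        import numpy as np

        def photo_z_ml(mags, mag_err, template_grid, z_grid):
            """Pick the redshift whose template magnitudes minimize chi-square;
            template_grid has shape (n_z, n_filters)."""
            chi2 = np.sum(((template_grid - mags) / mag_err) ** 2, axis=1)
            return z_grid[np.argmin(chi2)]

        z_grid = np.linspace(0.0, 2.0, 201)
        templates = np.outer(1 + z_grid, np.ones(5)) + 20.0  # toy synthetic magnitudes
        obs = templates[100] + 0.05                          # "observed" object near z = 1
        print(photo_z_ml(obs, 0.1 * np.ones(5), templates, z_grid))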

  19. Large-Scale 1:1 Computing Initiatives: An Open Access Database

    Science.gov (United States)

    Richardson, Jayson W.; McLeod, Scott; Flora, Kevin; Sauers, Nick J.; Kannan, Sathiamoorthy; Sincar, Mehmet

    2013-01-01

    This article details the spread and scope of large-scale 1:1 computing initiatives around the world. What follows is a review of the existing literature around 1:1 programs followed by a description of the large-scale 1:1 database. Main findings include: 1) the XO and the Classmate PC dominate large-scale 1:1 initiatives; 2) if professional…

  20. Computer Applications in Teaching and Learning.

    Science.gov (United States)

    Halley, Fred S.; And Others

    Some examples of the usage of computers in teaching and learning are examination generation, automatic exam grading, student tracking, problem generation, computational examination generators, program packages, simulation, and programing skills for problem solving. These applications are non-trivial and do fulfill the basic assumptions necessary…

  1. DB90: A Fortran Callable Relational Database Routine for Scientific and Engineering Computer Programs

    Science.gov (United States)

    Wrenn, Gregory A.

    2005-01-01

    This report describes a database routine called DB90 which is intended for use with scientific and engineering computer programs. The software is written in the Fortran 90/95 programming language standard with file input and output routines written in the C programming language. These routines should be completely portable to any computing platform and operating system that has Fortran 90/95 and C compilers. DB90 allows a program to supply relation names and up to 5 integer key values to uniquely identify each record of each relation. This permits the user to select records or retrieve data in any desired order.
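
    The addressing scheme (a relation name plus up to 5 integer keys per record) can be mimicked in a few lines. The following is a toy Python analogue for illustration, not DB90's Fortran interface:

        class KeyedStore:
            """Toy analogue of DB90 addressing: each record is identified by a
            relation name plus up to 5 integer key values."""
            def __init__(self):
                self._data = {}

            def put(self, relation, keys, record):
                assert 1 <= len(keys) <= 5 and all(isinstance(k, int) for k in keys)
                self._data[(relation,) + tuple(keys)] = record

            def get(self, relation, keys):
                return self._data[(relation,) + tuple(keys)]

        db = KeyedStore()
        db.put("loads", (1, 2), {"force": 450.0})
        print(db.get("loads", (1, 2)))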

  2. Monomial ideals, computations and applications

    CERN Document Server

    Gimenez, Philippe; Sáenz-de-Cabezón, Eduardo

    2013-01-01

    This work covers three important aspects of monomial ideals in the three chapters "Stanley decompositions" by Jürgen Herzog, "Edge ideals" by Adam Van Tuyl and "Local cohomology" by Josep Álvarez Montaner. The chapters, written by top experts, include computer tutorials that emphasize the computational aspects of the respective areas. Monomial ideals and algebras are, in a sense, among the simplest structures in commutative algebra and the main objects of combinatorial commutative algebra. Also, they are of major importance for at least three reasons. Firstly, Gröbner basis theory allows us to treat certain problems on general polynomial ideals by means of monomial ideals. Secondly, the combinatorial structure of monomial ideals connects them to other combinatorial structures and allows us to solve problems on both sides of this correspondence using the techniques of each of the respective areas. And thirdly, the combinatorial nature of monomial ideals also makes them particularly well suited to the devel...

  3. Tempest: GPU-CPU computing for high-throughput database spectral matching.

    Science.gov (United States)

    Milloy, Jeffrey A; Faherty, Brendan K; Gerber, Scott A

    2012-07-06

    Modern mass spectrometers are now capable of producing hundreds of thousands of tandem (MS/MS) spectra per experiment, making the translation of these fragmentation spectra into peptide matches a common bottleneck in proteomics research. When coupled with experimental designs that enrich for post-translational modifications such as phosphorylation and/or include isotopically labeled amino acids for quantification, additional burdens are placed on this computational infrastructure by shotgun sequencing. To address this issue, we have developed a new database searching program that utilizes the massively parallel compute capabilities of a graphical processing unit (GPU) to produce peptide spectral matches in a very high throughput fashion. Our program, named Tempest, combines efficient database digestion and MS/MS spectral indexing on a CPU with fast similarity scoring on a GPU. In our implementation, the entire similarity score, including the generation of full theoretical peptide candidate fragmentation spectra and its comparison to experimental spectra, is conducted on the GPU. Although Tempest uses the classical SEQUEST XCorr score as a primary metric for evaluating similarity for spectra collected at unit resolution, we have developed a new "Accelerated Score" for MS/MS spectra collected at high resolution that is based on a computationally inexpensive dot product but exhibits scoring accuracy similar to that of the classical XCorr. In our experience, Tempest provides compute-cluster level performance in an affordable desktop computer.
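
    The abstract does not give the exact formula for the Accelerated Score, but its core ingredient, a computationally inexpensive dot product between binned spectra, is easy to illustrate. A minimal sketch with toy intensity vectors:

        import numpy as np

        def dot_product_score(experimental, theoretical):
            """Normalized dot product between two binned MS/MS intensity vectors
            (an illustrative similarity, not Tempest's exact scoring formula)."""
            denom = np.linalg.norm(experimental) * np.linalg.norm(theoretical)
            return float(np.dot(experimental, theoretical) / denom) if denom else 0.0

        exp_spec = np.array([0.0, 3.0, 0.0, 7.0, 1.0])
        theo_spec = np.array([0.0, 2.5, 0.0, 8.0, 0.0])
        print(dot_product_score(exp_spec, theo_spec))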

  4. Non-Boolean computing with nanomagnets for computer vision applications

    Science.gov (United States)

    Bhanja, Sanjukta; Karunaratne, D. K.; Panchumarthy, Ravi; Rajaram, Srinath; Sarkar, Sudeep

    2016-02-01

    The field of nanomagnetism has recently attracted tremendous attention as it can potentially deliver low-power, high-speed and dense non-volatile memories. It is now possible to engineer the size, shape, spacing, orientation and composition of sub-100 nm magnetic structures. This has spurred the exploration of nanomagnets for unconventional computing paradigms. Here, we harness the energy-minimization nature of nanomagnetic systems to solve the quadratic optimization problems that arise in computer vision applications, which are computationally expensive. By exploiting the magnetization states of nanomagnetic disks as state representations of a vortex and single domain, we develop a magnetic Hamiltonian and implement it in a magnetic system that can identify the salient features of a given image with more than 85% true positive rate. These results show the potential of this alternative computing method to develop a magnetic coprocessor that might solve complex problems in fewer clock cycles than traditional processors.

  5. Collectively loading an application in a parallel computer

    Energy Technology Data Exchange (ETDEWEB)

    Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.; Miller, Samuel J.; Mundy, Michael B.

    2016-01-05

    Collectively loading an application in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: identifying, by a parallel computer control system, a subset of compute nodes in the parallel computer to execute a job; selecting, by the parallel computer control system, one of the subset of compute nodes in the parallel computer as a job leader compute node; retrieving, by the job leader compute node from computer memory, an application for executing the job; and broadcasting, by the job leader to the subset of compute nodes in the parallel computer, the application for executing the job.
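
    At application level, the leader/broadcast pattern looks roughly like the mpi4py sketch below (an assumption for illustration; the patent describes system-level mechanisms, and the file name here is hypothetical). Only one rank touches storage, then all ranks receive the same bytes:

        # Run with e.g.: mpiexec -n 4 python load_app.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        payload = None
        if rank == 0:                       # rank 0 plays the job-leader role
            with open("application.bin", "rb") as f:
                payload = f.read()          # only the leader reads the file

        payload = comm.bcast(payload, root=0)  # broadcast to every rank
        print(f"rank {rank} received {len(payload)} bytes")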

  6. Artificial immune system applications in computer security

    CERN Document Server

    Tan, Ying

    2016-01-01

    This book provides state-of-the-art information on the use, design, and development of the Artificial Immune System (AIS) and AIS-based solutions to computer security issues. Artificial Immune System: Applications in Computer Security focuses on the technologies and applications of AIS in malware detection proposed in recent years by the Computational Intelligence Laboratory of Peking University (CIL@PKU). It offers a theoretical perspective as well as practical solutions for readers interested in AIS, machine learning, pattern recognition and computer security. The book begins by introducing the basic concepts, typical algorithms, important features, and some applications of AIS. The second chapter introduces malware and its detection methods, especially for immune-based malware detection approaches. Successive chapters present a variety of advanced detection approaches for malware, including Virus Detection System, K-Nearest Neighbour (KNN), RBF networks, and Support Vector Machines (SVM), Danger theory, ...

  7. PhasePlot: A Software Program for Visualizing Phase Relations Computed Using Thermochemical Models and Databases

    Science.gov (United States)

    Ghiorso, M. S.

    2011-12-01

    A new software program has been developed for Macintosh computers that permits the visualization of phase relations calculated from thermodynamic data-model collections. The data-model collections of MELTS (Ghiorso and Sack, 1995, CMP 119, 197-212), pMELTS (Ghiorso et al., 2002, G-cubed 3, 10.1029/2001GC000217) and the deep mantle database of Stixrude and Lithgow-Bertelloni (2011, GJI 184, 1180-1213) are currently implemented. The software allows users to enter a system bulk composition and a range of reference conditions and then calculate a grid of phase relations. These relations may be visualized in a variety of ways including phase diagrams, phase proportion plots, and contour diagrams of phase compositions and abundances. Results may be exported into Excel or similar spreadsheet applications. Flexibility in stipulating reference conditions permits the construction of temperature-pressure, temperature-volume, entropy-pressure, or entropy-volume display grids. Calculations on the grid are performed for fixed bulk composition or in open systems governed by user-specified constraints on component chemical potentials (e.g., specified oxygen fugacity buffers). The calculation engine for the software is optimized for multi-core compute architectures and is very fast, allowing a typical grid of 64 points to be calculated in under 10 seconds on a dual-core laptop/iMac. The underlying computational thermodynamic algorithms have been optimized for speed and robust behavior. Taken together, both of these advances facilitate classroom demonstrations and permit novice users to work with the program effectively, focusing on problem specification and interpretation of results rather than on manipulation and mechanics of computation - a key feature of an effective instructional tool. The emphasis in this software package is graphical visualization, which aids in better comprehension of complex phase relations in multicomponent systems. Anecdotal experience in using Phase

  8. Conditioning Probabilistic Databases

    CERN Document Server

    Koch, Christoph

    2008-01-01

    Past research on probabilistic databases has studied the problem of answering queries on a static database. Application scenarios of probabilistic databases however often involve the conditioning of a database using additional information in the form of new evidence. The conditioning problem is thus to transform a probabilistic database of priors into a posterior probabilistic database which is materialized for subsequent query processing or further refinement. It turns out that the conditioning problem is closely related to the problem of computing exact tuple confidence values. It is known that exact confidence computation is an NP-hard problem. This has led researchers to consider approximation techniques for confidence computation. However, neither conditioning nor exact confidence computation can be solved using such techniques. In this paper we present efficient techniques for both problems. We study several problem decomposition methods and heuristics that are based on the most successful search techn...
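
    Why exact confidence computation is expensive can be seen from a brute-force sketch: for a tuple-independent database, the confidence of a Boolean query is a sum over all 2^n possible worlds. A toy Python version (hypothetical tuples and query):

        from itertools import product

        # Tuple-independent probabilistic database: tuple name -> probability.
        tuples = {"r1": 0.6, "r2": 0.5, "r3": 0.9}

        def query_true(world):
            """Toy Boolean query: r1 present together with r2 or r3."""
            return "r1" in world and ("r2" in world or "r3" in world)

        def confidence():
            """Exact confidence by enumerating every possible world; the
            exponential blow-up is why exact computation is NP-hard in general."""
            total = 0.0
            names = list(tuples)
            for bits in product([0, 1], repeat=len(names)):
                world = {n for n, b in zip(names, bits) if b}
                p = 1.0
                for n, b in zip(names, bits):
                    p *= tuples[n] if b else 1 - tuples[n]
                if query_true(world):
                    total += p
            return total

        print(confidence())  # 0.6 * (1 - 0.5 * 0.1) = 0.57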

  9. Pre-computed tsunami inundation database and forecast simulation in Pelabuhan Ratu, Indonesia

    Science.gov (United States)

    Setiyono, Urip; Gusman, Aditya Riadi; Satake, Kenji; Fujii, Yushiro

    2017-08-01

    We built a pre-computed tsunami inundation database in Pelabuhan Ratu, one of the tsunami-prone areas on the southern coast of Java, Indonesia, which can be employed for rapid estimation of tsunami inundation during an event. The pre-computed tsunami waveforms and inundations are from a total of 340 scenarios ranging from 7.5 to 9.2 in the moment magnitude scale (Mw), including simple fault models of 208 thrust faults and 44 tsunami earthquakes on the plate interface, as well as 44 normal faults and 44 reverse faults in the outer-rise region. Using our tsunami inundation forecasting algorithm (NearTIF), we could rapidly estimate the tsunami inundation in Pelabuhan Ratu for three different hypothetical earthquakes. The first hypothetical earthquake is a megathrust earthquake (Mw 9.0) offshore Sumatra, about 600 km from Pelabuhan Ratu, representing a worst-case event in the far-field. The second hypothetical earthquake (Mw 8.5) is based on a slip deficit rate estimation from geodetic measurements and represents a most likely large event. The third hypothetical earthquake is a tsunami earthquake (Mw 8.1) of the type that often occurs south of Java. We compared the tsunami inundation maps produced by the NearTIF algorithm with results of direct forward inundation modeling for the hypothetical earthquakes. The tsunami inundation maps produced by both methods are similar for the three cases. However, the tsunami inundation map from the inundation database can be obtained in a much shorter time (1 min) than the one from forward inundation modeling (40 min). These results indicate that the NearTIF algorithm based on the pre-computed inundation database is reliable and useful for tsunami warning purposes. This study also demonstrates that the NearTIF algorithm can work well even though the earthquake source is located outside the area of the fault model database, because it uses a time-shifting procedure for the best-fit scenario search.
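
    The best-fit scenario search with time shifting can be caricatured as a cross-correlation lookup over the pre-computed waveforms. The sketch below is only an illustration of that idea (synthetic waveforms; the actual NearTIF misfit measure may differ):

        import numpy as np

        def best_scenario(observed, database):
            """Pick the pre-computed scenario whose waveform, after the best
            time shift, is closest to the observation."""
            best, best_err = None, np.inf
            for name, synth in database.items():
                lag = np.argmax(np.correlate(observed, synth, mode="full")) - (len(synth) - 1)
                shifted = np.roll(synth, lag)   # crude circular shift for the sketch
                err = float(np.sum((observed - shifted) ** 2))
                if err < best_err:
                    best, best_err = name, err
            return best

        t = np.linspace(0, 10, 200)
        db = {"Mw8.1": np.sin(t), "Mw8.5": 2 * np.sin(t), "Mw9.0": 3 * np.sin(t)}
        obs = 2 * np.sin(t + 0.5)               # shifted version of the Mw8.5 waveform
        print(best_scenario(obs, db))           # expected: Mw8.5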

  10. Computer image processing: Geologic applications

    Science.gov (United States)

    Abrams, M. J.

    1978-01-01

    Computer image processing of digital data was performed to support several geological studies. The specific goals were to: (1) relate the mineral content to the spectral reflectance of certain geologic materials, (2) determine the influence of environmental factors, such as atmosphere and vegetation, and (3) improve image processing techniques. For detection of spectral differences related to mineralogy, the technique of band ratioing was found to be the most useful. The influence of atmospheric scattering and methods to correct for the scattering were also studied. Two techniques were used to correct for atmospheric effects: (1) dark object subtraction, and (2) normalization using ground spectral measurements. Of the two, the first technique proved to be the more successful for removing the effects of atmospheric scattering. A digital mosaic was produced from two side-lapping LANDSAT frames. The advantages were that the same enhancement algorithm can be applied to both frames, and there is no seam where the two images are joined.
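
    Both corrections mentioned above are one-liners on raster arrays. A hedged NumPy sketch with synthetic bands (dark object subtraction assumes the darkest pixel approximates the additive atmospheric path radiance):

        import numpy as np

        rng = np.random.default_rng(1)
        band_red = rng.uniform(40, 200, size=(64, 64))  # toy LANDSAT-like bands
        band_nir = rng.uniform(40, 200, size=(64, 64))

        # Dark object subtraction: treat the darkest pixel value as the
        # additive scattering offset and remove it from the whole band.
        red_corr = band_red - band_red.min()
        nir_corr = band_nir - band_nir.min()

        # Band ratioing suppresses illumination effects and highlights
        # spectral (mineralogical) differences; guard against zero division.
        ratio = nir_corr / np.maximum(red_corr, 1e-6)
        print(float(ratio.mean()))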

  11. Cloud Databases: A Paradigm Shift in Databases

    Directory of Open Access Journals (Sweden)

    Indu Arora

    2012-07-01

    Relational databases ruled the Information Technology (IT) industry for almost 40 years. But the last few years have seen sea changes in the way IT is being used and viewed. Stand-alone applications have been replaced with web-based applications, dedicated servers with multiple distributed servers, and dedicated storage with network storage. Cloud computing has become a reality due to its lower cost, scalability and pay-as-you-go model. It is one of the biggest changes in IT since the rise of the World Wide Web. Cloud databases such as Big Table, Sherpa and SimpleDB are becoming popular. They address the limitations of existing relational databases related to scalability, ease of use and dynamic provisioning. Cloud databases are mainly used for data-intensive applications such as data warehousing, data mining and business intelligence. These applications are read-intensive, scalable and elastic in nature. Transactional data management applications such as banking, airline reservation, online e-commerce and supply chain management applications are write-intensive. Databases supporting such applications require ACID (Atomicity, Consistency, Isolation and Durability) properties, but these databases are difficult to deploy in the cloud. The goal of this paper is to review the state of the art in cloud databases and various architectures. It further assesses the challenges of developing cloud databases that meet user requirements and discusses popularly used cloud databases.

  12. Development and Use of an EFL Reading Practice Application for an Android Tablet Computer

    Science.gov (United States)

    Ishikawa, Yasushige; Smith, Craig; Kondo, Mutsumi; Akano, Ichiro; Maher, Kate; Wada, Norihisa

    2013-01-01

    This paper reports on the use of an English-language reading practice application for an android tablet computer operating system with students who are not native speakers of English. The application materials for vocabulary learning in reading-passage contexts were created to include words from a database of low-frequency and technical noun-verb…

  13. Development and Use of an EFL Reading Practice Application for an Android Tablet Computer

    Science.gov (United States)

    Ishikawa, Yasushige; Smith, Craig; Kondo, Mutsumi; Akano, Ichiro; Maher, Kate; Wada, Norihisa

    2014-01-01

    This paper reports on the use of an English-language reading practice application for an Android tablet computer with students who are not native speakers of English. The application materials for vocabulary learning in reading-passage contexts were created to include words from a database of low-frequency and technical noun-verb collocations…

  15. Soft computing techniques in engineering applications

    CERN Document Server

    Zhong, Baojiang

    2014-01-01

    Soft computing techniques, which are based on the information processing of biological systems, are now massively used in the areas of pattern recognition, prediction and planning, as well as acting on the environment. Strictly speaking, soft computing is not a collection of homogeneous concepts and techniques; rather, it is an amalgamation of distinct methods that conforms to its guiding principle. At present, the main aim of soft computing is to exploit the tolerance for imprecision and uncertainty to achieve tractability, robustness and low solution cost. The principal constituents of soft computing techniques are probabilistic reasoning, fuzzy logic, neuro-computing, genetic algorithms, belief networks, chaotic systems, as well as learning theory. This book covers contributions from various authors to demonstrate the use of soft computing techniques in various applications of engineering.

  16. Statistical methods and computer applications

    CERN Document Server

    Arora, PN

    2009-01-01

    Some of the exclusive features of the book are: every concept has been explained with the help of solved examples; working rules showing the various steps for the application of formulae have also been given; the diagrams and graphs have been neatly and correctly drawn so that students gain a complete understanding of the problem simply by looking at them. Efforts have been made to make the subject thoroughly exhaustive, and nothing important has been omitted. Answers to all the problems have been thoroughly checked. It is a user-friendly book containing many solved problems and

  17. Application of China's National Forest Continuous Inventory Database

    Science.gov (United States)

    Xie, Xiaokui; Wang, Qingli; Dai, Limin; Su, Dongkai; Wang, Xinchuang; Qi, Guang; Ye, Yujing

    2011-12-01

    The maintenance of a timely, reliable and accurate spatial database on current forest ecosystem conditions and changes is essential to characterize and assess forest resources and support sustainable forest management. Information for such a database can be obtained only through a continuous forest inventory. The National Forest Continuous Inventory (NFCI) is the first level of China's three-tiered inventory system. The NFCI is administered by the State Forestry Administration; data are acquired by five inventory institutions around the country. Several important components of the database include land type, forest classification and age-class/age-group. The NFCI database in China is constructed based on 5-year inventory periods, with the result that some of the data are not timely when reports are issued. To address this problem, a forest growth simulation model has been developed to update the database for years between the periodic inventories. To aid in forest plan design and management, a three-dimensional virtual reality system of forest landscapes for selected units in the database (compartment or sub-compartment) has also been developed based on the Virtual Reality Modeling Language. In addition, a transparent internet publishing system for a spatial database based on the open source WebGIS (UMN MapServer) has been designed and utilized to enhance public understanding and encourage free participation of interested parties in the development, implementation, and planning of sustainable forest management.

  18. Computer systems and methods for the query and visualization of multidimensional databases

    Science.gov (United States)

    Stolte, Chris [Palo Alto, CA]; Tang, Diane L. [Palo Alto, CA]; Hanrahan, Patrick [Portola Valley, CA]

    2012-03-20

    In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.
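
    The shelf-to-pane mapping is loosely analogous to building a pivoted table: fields placed on the row and column shelves partition the data into panes. A toy pandas analogy (an illustration only, not the patented implementation):

        import pandas as pd

        # Placing "region" on the rows shelf and "year" on the columns shelf
        # partitions the view into one pane per (region, year) combination.
        df = pd.DataFrame({
            "region": ["east", "east", "west", "west"],
            "year":   [2020, 2021, 2020, 2021],
            "sales":  [10, 12, 7, 9],
        })
        print(df.pivot_table(index="region", columns="year", values="sales"))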

  19. Improving the analysis, storage and sharing of neuroimaging data using relational databases and distributed computing.

    Science.gov (United States)

    Hasson, Uri; Skipper, Jeremy I; Wilde, Michael J; Nusbaum, Howard C; Small, Steven L

    2008-01-15

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data.

  20. Computer systems and methods for the query and visualization of multidimensional databases

    Science.gov (United States)

    Stolte, Chris; Tang, Diane L; Hanrahan, Patrick

    2014-04-29

    In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.

  1. Computer systems and methods for the query and visualization of multidimensional databases

    Science.gov (United States)

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2011-02-01

    In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.

  2. Computer systems and methods for the query and visualization of multidimensional databases

    Energy Technology Data Exchange (ETDEWEB)

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2015-11-10

    A computer displays a graphical user interface on its display. The graphical user interface includes a schema information region and a data visualization region. The schema information region includes a plurality of fields of a multi-dimensional database that includes at least one data hierarchy. The data visualization region includes a columns shelf and a rows shelf. The computer detects user actions to associate one or more first fields with the columns shelf and to associate one or more second fields with the rows shelf. The computer generates a visual table in the data visualization region in accordance with the user actions. The visual table includes one or more panes. Each pane has an x-axis defined based on data for the one or more first fields, and each pane has a y-axis defined based on data for the one or more second fields.

  3. High-performance computational analysis and peptide screening from databases of cyclotides from poaceae.

    Science.gov (United States)

    Porto, William F; Miranda, Vivian J; Pinto, Michelle F S; Dohms, Stephan M; Franco, Octavio L

    2016-01-01

    Cyclotides are a family of head-to-tail cyclized peptides containing three conserved disulfide bonds, in a structural scaffold also known as a cyclic cysteine knot. Due to the high degree of cysteine conservation, novel members of this peptide family can be identified in protein databases through a search by regular expression (REGEX). In this work, six novel cyclotide-like precursors from the Poaceae were identified in NCBI's non-redundant protein database by the use of REGEX. Two out of six sequences (named Zea mays L and M) showed an Asp residue at the C-terminus, which indicated that they could be cyclic. Gene expression in maize tissues was investigated, showing that the previously described cyclotide-like Z. mays J is expressed in the roots. According to molecular dynamics, the structure of Z. mays J seems to be stable, despite the putative absence of cyclization. As regards cyclotide evolution, it was hypothesized that this is an outcome of convergent evolution and/or horizontal gene transfer. The results showed that peptide screening from databases should be performed periodically in order to include novel sequences, which are deposited as the databases grow. Indeed, advances in computational and experimental methods will together help to answer key questions and reach new horizons in defense-related peptide identification. © 2015 Wiley Periodicals, Inc.
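
    The REGEX-based screening step is straightforward to sketch. The cysteine-spacing pattern below is hypothetical (the paper's actual expression is not reproduced here), as are the sequences; the point is only the mechanics of scanning database entries for a cyclic-cysteine-knot-like motif:

        import re

        # Hypothetical spacing of the six conserved cysteines; illustrative only.
        CCK_PATTERN = re.compile(r"C.{3,7}C.{3,6}C.{3,7}C.{1,6}C.{3,7}C")

        sequences = {
            "seq1": "MKTLCAASWTCVGGTCNTPGCSCSWPVCTRNG",
            "seq2": "MKLVVAVAVAVAA",
        }
        for name, seq in sequences.items():
            match = CCK_PATTERN.search(seq)
            if match:
                print(name, "candidate motif at", match.start(), match.group())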

  4. GSAC - Generic Seismic Application Computing

    Science.gov (United States)

    Herrmann, R. B.; Ammon, C. J.; Koper, K. D.

    2004-12-01

    With the success of the IRIS data management center, the use of large data sets in seismological research has become common. Such data sets, and especially the significantly larger data sets expected from EarthScope, present challenges for analysis with existing tools developed over the last 30 years. For much of the community, the primary format for data analysis is the Seismic Analysis Code (SAC) format developed by Lawrence Livermore National Laboratory. Although somewhat restrictive in meta-data storage, the simplicity and stability of the format have established it as an important component of seismological research. Tools for working with SAC files fall into two categories: custom research-quality processing codes, and shared display and processing tools such as SAC2000, MatSeis, etc., which were developed primarily for the needs of individual seismic research groups. While the current graphics display and platform dependence of SAC2000 may be resolved if the source code is released, the code complexity and the lack of large-data-set analysis or even introductory tutorials could preclude code improvements and the development of expertise in its use. We believe that there is a place for new, especially open source, tools. The GSAC effort is an approach that focuses on ease of use, computational speed, transportability, rapid addition of new features and openness, so that new and advanced students, researchers and instructors can quickly browse and process large data sets. We highlight several approaches toward data processing under this model. gsac, part of the Computer Programs in Seismology 3.30 distribution, has much of the functionality of SAC2000 and works on UNIX/LINUX/MacOS-X/Windows (CYGWIN). It is completely programmed in C from scratch, is small, fast, and easy to maintain and extend. It is command line based and is easily included within shell processing scripts. PySAC is a set of Python functions that allow easy access to SAC files and enable efficient

  5. Applications of computational quantum mechanics

    Science.gov (United States)

    Temel, Burcin

    This original research dissertation is composed of a new numerical technique based on Chebyshev polynomials that is applied to scattering problems, a phenomenological kinetics study for CO oxidation on the RuO2 surface, and an experimental study on methanol coupling with doped metal oxide catalysts. The Minimum Error Method (MEM), a least-squares minimization method, provides an efficient and accurate alternative for solving systems of ordinary differential equations. Existing methods usually utilize matrix methods, which are computationally costly. MEM, which uses Chebyshev polynomials as a basis set, exploits the recursion relationships and fast Chebyshev transforms, which scale as O(N). For large basis set calculations this provides enormous computational efficiency. Chebyshev polynomials are also able to represent non-periodic problems very accurately. We applied MEM to elastic and inelastic scattering problems: it is more efficient and accurate than the traditionally used Kohn variational principle, and it also provides the wave function in the interaction region. Phenomenological kinetics (PK) is widely used in industry to predict the optimum conditions for a chemical reaction. PK neglects fluctuations, assumes no lateral interactions, and considers an ideal mix of reactants. The rate equations are tested by fitting the rate constants to the results of the experiments. Unfortunately, there are numerous examples where a fitted mechanism was later shown to be erroneous. We have undertaken a thorough comparison between the phenomenological equations and the results of kinetic Monte Carlo (KMC) simulations performed on the same system. The PK equations are qualitatively consistent with the KMC results but are quantitatively erroneous as a result of interplays between the adsorption and desorption events. The experimental study on methanol coupling with doped metal oxide catalysts demonstrates the doped metal oxides as a new class of catalysts
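
    The Chebyshev machinery that makes MEM cheap rests on the three-term recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x), which evaluates a series in O(N). A small self-contained sketch (not the dissertation's code):

        import numpy as np

        def chebyshev_series(coeffs, x):
            """Evaluate sum_k c_k T_k(x) with the three-term recurrence."""
            x = np.asarray(x, dtype=float)
            t_prev, t_curr = np.ones_like(x), x
            total = coeffs[0] * t_prev + (coeffs[1] * t_curr if len(coeffs) > 1 else 0.0)
            for c in coeffs[2:]:
                t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev  # T_{k+1}
                total += c * t_curr
            return total

        x = np.linspace(-1, 1, 5)
        print(chebyshev_series([0.5, 0.0, -0.25], x))                 # 0.5 - 0.25*T_2(x)
        print(np.polynomial.chebyshev.chebval(x, [0.5, 0.0, -0.25]))  # cross-check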

  6. A Preliminary Study on the Construction of a Curriculum Resource Database Based on the New Line Platform of Vocational Education: A Case Study on the Course of "Fundamentals of Computer Application"

    Institute of Scientific and Technical Information of China (English)

    张拥军

    2014-01-01

    This paper explores and researches the construction of a curriculum resource database for "Fundamentals of Computer Application" based on the new line platform of vocational education, advocating joint construction and sharing, and use during construction, in order to achieve wide application and continuous updating of the teaching resource database.

  7. Design and development of an individualized teaching system support platform for the teaching resource database of Computer Application Technology

    Institute of Scientific and Technical Information of China (English)

    黄力明

    2015-01-01

    This paper introduces the design and realization of an individualized teaching system support platform for the teaching resource database of the Computer Application Technology specialty. The function modules of the system cover all teaching links, including course teaching plans, exercises and quizzes, question answering, and examinations. Using the Internet platform and a web browser, students can follow the learning strategies provided by the system and complete all learning tasks of the course in a free and flexible way, and teachers can easily carry out all teaching activities of the course.

  8. [Knowledge discovery in database and its application in clinical diagnosis].

    Science.gov (United States)

    Lui, Hui; Qiu, Tianshuang

    2004-08-01

    Nowadays the tremendous amount of data has far exceeded our human ability for comprehension, and this has been particularly true for medical databases. However, traditional statistical techniques are no longer adequate for analyzing this vast collection of data. Knowledge discovery in databases and data mining play an important role in analyzing data and uncovering important data patterns. This paper briefly presents the concepts of knowledge discovery in databases and data mining, then describes rough set theory, and gives some examples based on rough sets.
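
    The core rough-set construction, lower and upper approximations of a target concept over indiscernibility classes, fits in a few lines. A toy Python sketch with made-up clinical attributes:

        # Patients described by (symptom, risk) attributes; toy data only.
        patients = {
            "p1": ("fever", "high"), "p2": ("fever", "high"),
            "p3": ("no_fever", "low"), "p4": ("fever", "low"),
        }
        flu = {"p1", "p4"}  # target concept: diagnosed with flu

        # Indiscernibility classes: patients identical on the chosen attributes.
        classes = {}
        for p, attrs in patients.items():
            classes.setdefault(attrs, set()).add(p)

        lower = set().union(*(c for c in classes.values() if c <= flu))
        upper = set().union(*(c for c in classes.values() if c & flu))
        print("certainly flu:", lower)  # {'p4'}
        print("possibly flu:", upper)   # {'p1', 'p2', 'p4'}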

  9. Working with HITRAN Database Using Hapi: HITRAN Application Programming Interface

    Science.gov (United States)

    Kochanov, Roman V.; Hill, Christian; Wcislo, Piotr; Gordon, Iouli E.; Rothman, Laurence S.; Wilzewski, Jonas

    2015-06-01

    A HITRAN Application Programming Interface (HAPI) has been developed to give users much more flexibility and power on their local machines. HAPI is a programming interface for the main data-searching capabilities of the new "HITRANonline" web service (http://www.hitran.org). It provides the possibility to query spectroscopic data from the HITRAN database in a flexible manner using either functions or a query language. Some of the prominent current features of HAPI are:
    a) downloading line-by-line data from the HITRANonline site to a local machine;
    b) filtering and processing the data in an SQL-like fashion;
    c) conventional Python structures (lists, tuples, and dictionaries) for representing spectroscopic data;
    d) the possibility to use a large set of third-party Python libraries to work with the data;
    e) a Python implementation of the HT lineshape, which can be reduced to a number of conventional line profiles;
    f) a Python implementation of total internal partition sums (TIPS-2011) for spectra simulations;
    g) high-resolution spectra calculation accounting for pressure, temperature and optical path length;
    h) instrumental functions to simulate experimental spectra;
    i) the possibility to extend HAPI's functionality with custom line profiles, partition sums and instrumental functions.
    Currently the API is a module written in Python and uses the NumPy library for fast array operations. The API is designed to deal with data in multiple formats such as ASCII, CSV, HDF5 and XSAMS. This work has been supported by NASA Aura Science Team Grant NNX14AI55G and NASA Planetary Atmospheres Grant NNX13AI59G. L.S. Rothman et al., JQSRT, Volume 130, 2013, Pages 4-50; N.H. Ngo et al., JQSRT, Volume 129, November 2013, Pages 89-100; A.L. Laraia et al., Icarus, Volume 215, Issue 1, September 2011, Pages 391-400.
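For orientation, a hedged sketch of a typical HAPI session follows. The function names match the published HAPI documentation, but the exact signatures and the condition syntax are assumptions to be checked against the current release:

```python
# Hedged sketch of a typical HAPI session (signatures are assumptions).
from hapi import db_begin, fetch, select, absorptionCoefficient_Voigt

db_begin('hitran_data')            # local folder acting as the line-list store

# a) download line-by-line data: H2O (molecule 1, isotopologue 1), 3400-4100 cm-1
fetch('H2O', 1, 1, 3400, 4100)

# b) SQL-like filtering: keep only reasonably strong lines
select('H2O', Conditions=('>=', 'sw', 1e-22), DestinationTableName='H2O_strong')

# g) high-resolution absorption coefficient at a chosen temperature/pressure
nu, coef = absorptionCoefficient_Voigt(SourceTables='H2O_strong',
                                       Environment={'T': 296.0, 'p': 1.0})
print(len(nu), 'spectral points computed')
```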

  10. Computer applications in railway operation

    Directory of Open Access Journals (Sweden)

    Mohamed Hafez Fahmy Aly

    2016-06-01

    One of the main goals of railway simulation is the formation of a model that can easily be tested for any desired changes and modifications in infrastructure, control system, or train operations in order to improve the network operation and its productivity. RailSys3.0 is a German railway simulation program built for this goal. In this paper, a railway network operation with different suggested modifications in infrastructure, rolling stock, and control system has been studied, optimized, and evaluated using RailSys3.0. The simulation program (RailSys 3.0) was applied to the ABO-KIR railway line in Alexandria city, as a case study, to assess the impact of changing track configuration and operating and control systems on the performance measures, timetable, track capacity and productivity. Simulation input, such as the track elements and the train and operation components of the ABO-KIR railway line, was entered into the computer program to construct the simulation model. The simulation process was carried out for the existing operation system to construct a graphical model of the case-study track, including line alignment and train movements, and to evaluate the existing operation system. To improve the operation system of the railway line, eight different innovative alternatives were generated, analyzed and evaluated. Finally, different track measures to improve the operation system of the ABO-KIR railway line are introduced.

  11. Analysis of Turbulence Datasets using a Database Cluster: Requirements, Design, and Sample Applications

    Science.gov (United States)

    Meneveau, Charles

    2007-11-01

    The massive datasets now generated by Direct Numerical Simulations (DNS) of turbulent flows create serious new challenges. During a simulation, DNS provides only a few time steps at any instant, owing to storage limitations within the computational cluster. Therefore, traditional numerical experiments done during the simulation examine each time slice only a few times before discarding it. Conversely, if a few large datasets from high-resolution simulations are stored, they are practically inaccessible to most in the turbulence research community, who lack the cyber resources to handle the massive amounts of data. Even those who can compute at that scale must run simulations again forward in time in order to answer new questions about the dynamics, duplicating computational effort. The result is that most turbulence datasets are vastly underutilized and not available as they should be for creative experimentation. In this presentation, we discuss the desired features and requirements of a turbulence database that will enable its widest access to the research community. The guiding principle of large databases is "move the program to the data" (Szalay et al., "Designing and mining multi-terabyte Astronomy archives: the Sloan Digital Sky Survey," in ACM SIGMOD, 2000). However, in the case of turbulence research, the questions and analysis techniques are highly specific to the client and vary widely from one client to another. This poses particularly hard challenges in the design of database analysis tools. We propose a minimal set of such tools that are of general utility across various applications. And, we describe a new approach based on a Web services interface that allows a client to access the data in a user-friendly fashion while allowing maximum flexibility to execute desired analysis tasks. Sample applications will be discussed. This work is performed by the interdisciplinary ITR group, consisting of the author and Yi Li(1), Eric Perlman(2), Minping Wan(1

  12. Tempest: Accelerated MS/MS Database Search Software for Heterogeneous Computing Platforms.

    Science.gov (United States)

    Adamo, Mark E; Gerber, Scott A

    2016-09-07

    MS/MS database search algorithms derive a set of candidate peptide sequences from in silico digest of a protein sequence database, and compute theoretical fragmentation patterns to match these candidates against observed MS/MS spectra. The original Tempest publication described these operations mapped to a CPU-GPU model, in which the CPU (central processing unit) generates peptide candidates that are asynchronously sent to a discrete GPU (graphics processing unit) to be scored against experimental spectra in parallel. The current version of Tempest expands this model, incorporating OpenCL to offer seamless parallelization across multicore CPUs, GPUs, integrated graphics chips, and general-purpose coprocessors. Three protocols describe how to configure and run a Tempest search, including discussion of how to leverage Tempest's unique feature set to produce optimal results. Copyright © 2016 John Wiley & Sons, Inc.
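Tempest itself is GPU/OpenCL software; the toy Python sketch below is not Tempest, but it illustrates the same producer/consumer pattern the abstract describes: one side generates peptide candidates from an in silico digest while a worker pool scores them asynchronously. The sequences and the scoring function are mock stand-ins.

```python
# Toy producer/consumer sketch of the CPU->device pipeline: candidates
# are generated lazily and scored in parallel as workers become free.
from multiprocessing import Pool

def digest(proteins):
    """Yield candidate peptides (stand-in for an in silico digest)."""
    for p in proteins:
        for i in range(0, len(p) - 5):
            yield p[i:i + 6]

def score(peptide):
    """Stand-in for matching a theoretical fragmentation pattern."""
    return peptide, sum(ord(c) for c in peptide) % 100

if __name__ == '__main__':
    proteins = ['MKWVTFISLLFLFSSAYS', 'GVFRRDAHKSEVAHRFKD']
    with Pool() as pool:
        # imap_unordered mirrors the asynchronous hand-off
        best = max(pool.imap_unordered(score, digest(proteins)),
                   key=lambda r: r[1])
    print('best candidate:', best)
```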

  13. Solution based on Case-Based Reasoning for supporting a computer auditing database

    Directory of Open Access Journals (Sweden)

    Yasser Azán-Basallo

    2014-04-01

    In the Computing Security Department of ETECSA, audits of Database Management Systems are performed using diagnostic matrices or checklists. After monitoring a DBMS, experts rate the information security risk level as High, Medium or Low. The use of the artificial intelligence technique Case-Based Reasoning in the risk analysis phase of the information security evaluation is proposed, in order to take advantage of the experience gained in previous audits of this type. ETECSA specialists were consulted to determine the features that make up the case vectors. Incorporating the Case-Based Reasoning technique to support the analysis of database information security audits streamlines the process and helps auditors in the analysis of risks to information security.
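As an illustration of the retrieve-and-reuse step at the heart of Case-Based Reasoning (this is not ETECSA's system; the audit features are hypothetical), a new audit can be assigned the risk label of its nearest stored case:

```python
# Illustrative CBR retrieval over hypothetical audit feature vectors.
import numpy as np

# hypothetical features: (missing patches, weak accounts, open ports, findings)
case_base = np.array([[0, 1, 2, 3],
                      [5, 4, 8, 9],
                      [2, 2, 3, 4]], dtype=float)
risk_labels = ['Low', 'High', 'Medium']

def retrieve(new_case):
    # similarity = negative Euclidean distance; reuse the closest case's label
    d = np.linalg.norm(case_base - new_case, axis=1)
    return risk_labels[int(np.argmin(d))]

print(retrieve(np.array([4.0, 4.0, 7.0, 8.0])))  # -> 'High'
```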

  14. Hybrid soft computing approaches research and applications

    CERN Document Server

    Dutta, Paramartha; Chakraborty, Susanta

    2016-01-01

    The book provides a platform for dealing with the flaws and failings of the soft computing paradigm through different manifestations. The different chapters highlight the necessity of the hybrid soft computing methodology in general with emphasis on several application perspectives in particular. Typical examples include (a) Study of Economic Load Dispatch by Various Hybrid Optimization Techniques, (b) An Application of Color Magnetic Resonance Brain Image Segmentation by ParaOptiMUSIG activation Function, (c) Hybrid Rough-PSO Approach in Remote Sensing Imagery Analysis,  (d) A Study and Analysis of Hybrid Intelligent Techniques for Breast Cancer Detection using Breast Thermograms, and (e) Hybridization of 2D-3D Images for Human Face Recognition. The elaborate findings of the chapters enhance the exhibition of the hybrid soft computing paradigm in the field of intelligent computing.

  15. A Green's function database platform for seismological research and education: applications and examples

    Science.gov (United States)

    Heimann, Sebastian; Kriegerowski, Marius; Dahm, Torsten; Cesca, Simone; Wang, Rongjiang

    2016-04-01

    The study of seismic sources from measured waveforms requires synthetic elementary seismograms (Green's functions, GF) calculated for specific earth models and source-receiver geometries. Since the calculation of GFs is computationally expensive and requires careful parameter testing and quality control, pre-calculated GF databases, which can be re-used for different types of applications, can be of advantage. We developed a GF database web platform for the seismological community (http://kinherd.org/), where a researcher can share Green's function stores and retrieve synthetic seismograms on the fly for various point and extended earthquake source models, for many different earth models, at local, regional and global scales. This web service is part of a rich new toolset for the creation and handling of Green's functions and synthetic seismograms (http://emolch.github.com/pyrocko/gf). It can be used off-line or in client mode. We demonstrate core features of the GF platform with different applications on global, regional and local scales. These include the automatic inversion of kinematic source parameters from teleseismic body waves, the improved depth estimation of shallow induced earthquakes from regional seismological arrays, and the relative moment tensor inversion of local earthquakes from volcanically induced seismicity.

  16. A Comparative Study of Relational and Non-Relational Database Models in a Web-Based Application

    Directory of Open Access Journals (Sweden)

    Cornelia Gyorödi

    2015-11-01

    The purpose of this paper is to present a comparative study between relational and non-relational database models in a web-based application, by executing various operations on both relational and non-relational databases, thus highlighting the results obtained during performance comparison tests. The study was based on the implementation of a web-based application for population records. For the non-relational database we used MongoDB, and for the relational database we used MSSQL 2014. We also present the advantages of using a non-relational database compared to a relational database integrated in a web-based application which needs to manipulate a big amount of data.
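To make the comparison concrete, a hedged sketch of one such operation is shown below, inserting the same person record into MongoDB via pymongo and into MSSQL via pyodbc. Connection strings and the database/table names are placeholders, not the paper's actual setup.

```python
# One operation from the comparison: the same insert against both stores.
import pymongo
import pyodbc

person = {'first_name': 'Ana', 'last_name': 'Pop', 'city': 'Oradea'}

# non-relational: schemaless document insert
mongo = pymongo.MongoClient('mongodb://localhost:27017')
mongo['population']['persons'].insert_one(dict(person))

# relational: fixed schema, parameterized INSERT
sql = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};'
                     'SERVER=localhost;DATABASE=population;Trusted_Connection=yes')
cur = sql.cursor()
cur.execute('INSERT INTO persons (first_name, last_name, city) VALUES (?, ?, ?)',
            person['first_name'], person['last_name'], person['city'])
sql.commit()
```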

  17. Computer-aided diagnosis workstation and database system for chest diagnosis based on multihelical CT images

    Science.gov (United States)

    Sato, Hitoshi; Niki, Noboru; Mori, Kiyoshi; Eguchi, Kenji; Kaneko, Masahiro; Moriyama, Noriyuki; Ohmatsu, Hironobu; Kakinuma, Ryutaro; Masuda, Hideo; Machida, Suguru; Sasagawa, Michizou

    2004-04-01

    Lung cancer is the most common cause of cancer death, accounting for about 20% of all cancer deaths among males in Japan. Myocardial infarction is also known as one of the most feared adult diseases. Recently, the speed at which multi-helical CT scanners acquire chest CT images for screening examinations has advanced remarkably. Such a screening examination requires a considerable number of images to be read, and it is this time-consuming step that limits the use of multi-helical CT for mass screening. To overcome this problem, our group previously developed a computer-aided diagnosis algorithm to automatically detect suspicious regions of lung cancer and coronary calcifications in chest CT images. We have now developed a new computer-aided diagnosis workstation and database system, consisting of three parts. First, an image processing system to automatically detect suspicious bronchial regions, pulmonary artery regions, pulmonary vein regions and myocardial infarction regions at high speed. Second, two 1600 x 1200 matrix black-and-white liquid crystal monitors. Third, an image storage terminal. These are connected to each other over the network. This makes it much easier to read images, since the 3D image of suspicious regions and the shadows of suspicious regions can be displayed simultaneously on the two 1600 x 1200 matrix liquid crystal monitors. The experimental results indicate that the new computer-aided diagnosis workstation and database system can be used effectively in clinical practice to increase the speed and accuracy of routine diagnosis.

  18. Advanced computational electromagnetic methods and applications

    CERN Document Server

    Li, Wenxing; Elsherbeni, Atef; Rahmat-Samii, Yahya

    2015-01-01

    This new resource covers the latest developments in computational electromagnetic methods, with emphasis on cutting-edge applications. This book is designed to extend existing literature to the latest development in computational electromagnetic methods, which are of interest to readers in both academic and industrial areas. The topics include advanced techniques in MoM, FEM and FDTD, spectral domain method, GPU and Phi hardware acceleration, metamaterials, frequency and time domain integral equations, and statistics methods in bio-electromagnetics.

  19. Cloud computing for data-intensive applications

    CERN Document Server

    Li, Xiaolin

    2014-01-01

    This book presents a range of cloud computing platforms for data-intensive scientific applications. It covers systems that deliver infrastructure as a service, including: HPC as a service; virtual networks as a service; scalable and reliable storage; algorithms that manage vast cloud resources and applications runtime; and programming models that enable pragmatic programming and implementation toolkits for eScience applications. Many scientific applications in clouds are also introduced, such as bioinformatics, biology, weather forecasting and social networks. Most chapters include case studies.

  20. New Concepts and Applications in Soft Computing

    CERN Document Server

    Fodor, János; Várkonyi-Kóczy, Annamária

    2013-01-01

    The book provides a sample of research on the innovative theory and applications of soft computing paradigms. The idea of Soft Computing was initiated in 1981 when Professor Zadeh published his first paper on soft data analysis, and it has constantly evolved ever since. Professor Zadeh defined Soft Computing as the fusion of the fields of fuzzy logic (FL), neural network theory (NN) and probabilistic reasoning (PR), with the latter subsuming belief networks, evolutionary computing including DNA computing, chaos theory and parts of learning theory, into one multidisciplinary system. As Zadeh said, the essence of soft computing is that, unlike traditional hard computing, soft computing is aimed at an accommodation with the pervasive imprecision of the real world. Thus, the guiding principle of soft computing is to exploit the tolerance for imprecision, uncertainty and partial truth to achieve tractability, robustness, low solution cost and better rapport with reality. ...

  1. Cloud computing with e-science applications

    CERN Document Server

    Terzo, Olivier

    2015-01-01

    The amount of data in everyday life has been exploding. This data increase has been especially significant in scientific fields, where substantial amounts of data must be captured, communicated, aggregated, stored, and analyzed. Cloud Computing with e-Science Applications explains how cloud computing can improve data management in data-heavy fields such as bioinformatics, earth science, and computer science. The book begins with an overview of cloud models supplied by the National Institute of Standards and Technology (NIST), and then: Discusses the challenges imposed by big data on scientific

  2. Codesign Analysis of a Computer Graphics Application

    DEFF Research Database (Denmark)

    Madsen, Jan; Brage, Jens P.

    1996-01-01

    This paper describes a codesign case study where a computer graphics application is examined with the intention to speed up its execution. The application is specified as a C program, and is characterized by the lack of a simple compute-intensive kernel. The hardware/software partitioning is based on information obtained from software profiling, and the resulting design is validated through cosimulation. The achieved speed-up is estimated based on an analysis of profiling information from different sets of input data and various architectural options.

  3. Computer-aided detection of pulmonary nodules: a comparative study using the public LIDC/IDRI database

    NARCIS (Netherlands)

    Jacobs, C.; Rikxoort, E.M. van; Murphy, K.; Prokop, M.; Schaefer-Prokop, C.M.; Ginneken, B. van

    2016-01-01

    To benchmark the performance of state-of-the-art computer-aided detection (CAD) of pulmonary nodules using the largest publicly available annotated CT database (LIDC/IDRI), and to show that CAD finds lesions not identified by the LIDC's four-fold double reading process. The LIDC/IDRI database

  4. A Comparative Study of Relational and Non-Relational Database Models in a Web-Based Application

    OpenAIRE

    Cornelia Gyorödi; Robert Gyorödi; Roxana Sotoc

    2015-01-01

    The purpose of this paper is to present a comparative study between relational and non-relational database models in a web-based application, by executing various operations on both relational and non-relational databases, thus highlighting the results obtained during performance comparison tests. The study was based on the implementation of a web-based application for population records. For the non-relational database, we used MongoDB and for the relational database, we used MSSQL 2014. W...

  5. A Two-folded Impact Analysis of Schema Changes on Database Applications

    Institute of Scientific and Technical Information of China (English)

    Spyridon K.Gardikiotis; Nicos Malevris

    2009-01-01

    Database applications are becoming increasingly popular, mainly due to the advanced data management facilities that the underlying database management system offers compared against traditional legacy software applications. The interaction, however, of such applications with the database system introduces a number of issues, among which, this paper addresses the impact analysis of the changes performed at the database schema level. Our motivation is to provide the software engineers of database applications with automated methods that facilitate major maintenance tasks, such as source code corrections and regression testing, which should be triggered by the occurrence of such changes. The presented impact analysis is thus two-folded: the impact is analysed in terms of both the affected source code statements and the affected test suites concerning the testing of these applications. To achieve the former objective, a program slicing technique is employed, which is based on an extended version of the program dependency graph. The latter objective requires the analysis of test suites generated for database applications, which is accomplished by employing testing techniques tailored for this type of applications. Utilising both the slicing and the testing techniques enhances program comprehension of database applications, while also supporting the development of a number of practical metrics regarding their maintainability against schema changes. To evaluate the feasibility and effectiveness of the presented techniques and metrics, a software tool, called DATA, has been implemented. The experimental results from its usage on the TPC-C case study are reported and analysed.

  6. Applicability of computational systems biology in toxicology.

    Science.gov (United States)

    Kongsbak, Kristine; Hadrup, Niels; Audouze, Karine; Vinggaard, Anne Marie

    2014-07-01

    Systems biology as a research field has emerged within the last few decades. Systems biology, often defined as the antithesis of the reductionist approach, integrates information about individual components of a biological system. In integrative systems biology, large data sets from various sources and databases are used to model and predict effects of chemicals on, for instance, human health. In toxicology, computational systems biology enables identification of important pathways and molecules from large data sets; tasks that can be extremely laborious when performed by a classical literature search. However, computational systems biology offers more advantages than providing a high-throughput literature search; it may form the basis for establishment of hypotheses on potential links between environmental chemicals and human diseases, which would be very difficult to establish experimentally. This is possible due to the existence of comprehensive databases containing information on networks of human protein-protein interactions and protein-disease associations. Experimentally determined targets of the specific chemical of interest can be fed into these networks to obtain additional information that can be used to establish hypotheses on links between the chemical and human diseases. Such information can also be applied for designing more intelligent animal/cell experiments that can test the established hypotheses. Here, we describe how and why to apply an integrative systems biology method in the hypothesis-generating phase of toxicological research.

  7. Of red planets and indigo computers: Mars database visualization as an example of platform downsizing

    Science.gov (United States)

    Kaiser, M. K.; Montegut, M. J.

    1997-01-01

    The last decade has witnessed tremendous advancements in the computer hardware and software used to perform scientific visualization. In this paper, we consider how the visualization of a particular data set, the digital terrain model derived from the Viking orbiter imagery, has been realized in four distinct projects over this period. These examples serve to demonstrate how the vast improvements in computational performance both decrease the cost of such visualization efforts and permit an increasing level of interactivity. We then consider how even today's graphical systems require the visualization designer to make intelligent choices and tradeoffs in database rendering. Finally, we discuss how insights gleaned from an understanding of human visual perception can guide these design decisions, and suggest new options for visualization hardware and software.

  8. A Medical Image Backup Architecture Based on a NoSQL Database and Cloud Computing Services.

    Science.gov (United States)

    Santos Simões de Almeida, Luan Henrique; Costa Oliveira, Marcelo

    2015-01-01

    The use of digital systems for storing medical images generates a huge volume of data. Digital images are commonly stored and managed on a Picture Archiving and Communication System (PACS), under the DICOM standard. However, PACS is limited because it is strongly dependent on the server's physical space. Alternatively, Cloud Computing arises as an extensive, low-cost, and reconfigurable resource. However, medical images contain patient information that cannot be made available in a public cloud. Therefore, a mechanism to anonymize these images is needed. This poster presents a solution for this issue by taking digital images from PACS, converting the information contained in each image file to a NoSQL database, and using cloud computing to store the digital images.
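A minimal sketch of the described pipeline follows, assuming pydicom for reading the image and MongoDB as the NoSQL store; the tag list and document fields are illustrative, not the poster's actual schema.

```python
# Sketch: read a DICOM file, strip direct identifiers, store the
# remaining metadata as a document (pixel data would go to cloud storage).
import pydicom
import pymongo

IDENTIFYING_TAGS = ['PatientName', 'PatientID', 'PatientBirthDate']

def anonymized_document(path):
    ds = pydicom.dcmread(path)
    for tag in IDENTIFYING_TAGS:          # remove patient identifiers
        if tag in ds:
            delattr(ds, tag)
    return {'Modality': str(ds.get('Modality', '')),
            'StudyDate': str(ds.get('StudyDate', '')),
            'SeriesDescription': str(ds.get('SeriesDescription', ''))}

client = pymongo.MongoClient('mongodb://localhost:27017')
client['pacs_backup']['images'].insert_one(anonymized_document('image.dcm'))
```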

  9. Database-Backed Web Applications in the Wild: How Well Do They Work?

    OpenAIRE

    Yan, Cong; Cheung, Alvin; Lu, Shan

    2016-01-01

    Most modern database-backed web applications are built upon Object Relational Mapping (ORM) frameworks. While ORM frameworks ease application development by abstracting persistent data as objects, such convenience often comes with a performance cost. In this paper, we present CADO, a tool that analyzes the application logic and its interaction with databases using the Ruby on Rails ORM framework. CADO includes a static program analyzer, a profiler and a synthetic data generator to extract and...

  10. Computational social networks tools, perspectives and applications

    CERN Document Server

    Abraham, Ajith

    2012-01-01

    Provides the latest advances in computational social networks, and illustrates how organizations can gain a competitive advantage by applying these ideas in real-world scenarios. Presents a specific focus on practical tools and applications. Provides experience reports, survey articles, and intelligence techniques and theories relating to specific problems in network technology.

  11. Applications of Computer Graphics in Engineering

    Science.gov (United States)

    1975-01-01

    Various applications of interactive computer graphics to the following areas of science and engineering were described: design and analysis of structures, configuration geometry, animation, flutter analysis, design and manufacturing, aircraft design and integration, wind tunnel data analysis, architecture and construction, flight simulation, hydrodynamics, curve and surface fitting, gas turbine engine design, analysis, and manufacturing, packaging of printed circuit boards, spacecraft design.

  12. A personal digital assistant application (MobilDent) for dental fieldwork data collection, information management and database handling.

    Science.gov (United States)

    Forsell, M; Häggström, M; Johansson, O; Sjögren, P

    2008-11-08

    To develop a personal digital assistant (PDA) application for oral health assessment fieldwork, including back-office and database systems (MobilDent). System design, construction and implementation of PDA, back-office and database systems. System requirements for MobilDent were collected, analysed and translated into system functions. User interfaces were implemented and system architecture was outlined. MobilDent was based on a platform with .NET (Microsoft) components, using an SQL Server 2005 (Microsoft) for data storage with the Windows Mobile (Microsoft) operating system. The PDA devices were Dell Axim. System functions and user interfaces were specified for MobilDent. User interfaces for PDA, back-office and database systems were based on .NET programming. The PDA user interface was based on Windows, suited to a PDA display, whereas the back-office interface was designed for a normal-sized computer screen. A synchronisation module (MS ActiveSync, Microsoft) was used to enable download of field data from the PDA to the database. MobilDent is a feasible application for oral health assessment fieldwork, and the oral health assessment database may prove a valuable source for care planning, educational and research purposes. Further development of the MobilDent system will include wireless connectivity with download-on-demand technology.

  13. Database Application for a Youth Market Livestock Production Education Program

    Science.gov (United States)

    Horney, Marc R.

    2013-01-01

    This article offers an example of a database designed to support teaching animal production and husbandry skills in county youth livestock programs. The system was used to manage production goals, animal growth and carcass data, photos and other imagery, and participant records. These were used to produce a variety of customized reports to help…

  14. DockScreen: A Database of In Silico Biomolecular Interactions to Support Computational Toxicology

    Directory of Open Access Journals (Sweden)

    Michael-Rock Goldsmith

    2014-01-01

    We have developed DockScreen, a database of in silico biomolecular interactions designed to enable rational molecular toxicological insight within a computational toxicology framework. This database is composed of chemical/target (receptor and enzyme) binding scores calculated by molecular docking of more than 1000 chemicals into 150 protein targets, and contains nearly 135 thousand unique ligand/target binding scores. This dataset was obtained using eHiTS (Simbiosys Inc.), a fragment-based molecular docking approach with an exhaustive search algorithm, on a heterogeneous distributed high-performance computing framework. The chemical landscape covered in DockScreen comprises selected environmental and therapeutic chemicals. The target landscape covered in DockScreen was selected based on the availability of high-quality crystal structures that covered the assay space of phase I ToxCast in vitro assays. This in silico data provides continuous information that establishes a means for quantitatively comparing, on a structural biophysical basis, a chemical's profile of biomolecular interactions. The combined minimum-score chemical/target matrix is provided.

  15. Wearable computer technology for dismounted applications

    Science.gov (United States)

    Daniels, Reginald

    2010-04-01

    Small computing devices which rival the compact size of traditional personal digital assistants (PDA) have recently established a market niche. These computing devices are small enough to be considered unobtrusive for humans to wear. The computing devices are also powerful enough to run full multi-tasking general purpose operating systems. This paper will explore the wearable computer information system for dismounted applications recently fielded for ground-based US Air Force use. The environments that the information systems are used in will be reviewed, as well as a description of the net-centric, ground-based warrior. The paper will conclude with a discussion regarding the importance of intuitive, usable, and unobtrusive operator interfaces for dismounted operators.

  16. Computational electromagnetics recent advances and engineering applications

    CERN Document Server

    2014-01-01

    Emerging Topics in Computational Electromagnetics presents advances in computational electromagnetics. This book is designed to fill the existing gap in current CEM literature, which covers only the conventional numerical techniques for solving traditional EM problems. The book examines new algorithms, and applications of these algorithms for solving problems of current interest that are not readily amenable to efficient treatment by using the existing techniques. The authors discuss solution techniques for problems arising in nanotechnology, bioEM, metamaterials, as well as multiscale problems. They present techniques that utilize recent advances in computer technology, such as parallel architectures, and the increasing need to solve large and complex problems in a time-efficient manner by using highly scalable algorithms.

  17. In-database processing of a large collection of remote sensing data: applications and implementation

    Science.gov (United States)

    Kikhtenko, Vladimir; Mamash, Elena; Chubarov, Dmitri; Voronina, Polina

    2016-04-01

    Large archives of remote sensing data are now available to scientists, yet the need to work with individual satellite scenes or product files constrains studies that span a wide temporal range or spatial extent. The resources (storage capacity, computing power and network bandwidth) required for such studies are often beyond the capabilities of individual geoscientists. This problem has been tackled before in remote sensing research and has inspired several information systems. Some of them, such as NASA Giovanni [1] and Google Earth Engine, have already proved their utility for science. Analysis tasks involving large volumes of numerical data are not unique to the Earth sciences. Recent advances in data science are enabled by the development of in-database processing engines that bring processing closer to storage, use declarative query languages to facilitate parallel scalability, and provide a high-level abstraction of the whole dataset. We build on the idea of bridging the gap between file archives containing remote sensing data and databases by integrating the files into a relational database as foreign data sources and performing analytical processing inside the database engine. Thereby a higher-level query language can efficiently address problems of arbitrary size: from accessing the data associated with a specific pixel or a grid cell to complex aggregation over spatial or temporal extents covering a large number of individual data files. This approach was implemented using PostgreSQL for a Siberian regional archive of satellite data products holding hundreds of terabytes of measurements from multiple sensors and missions taken over a decade-long span. While preserving the original storage layout, and therefore compatibility with existing applications, the in-database processing engine provides a toolkit for provisioning remote sensing data in scientific workflows and applications. The use of SQL - a widely used higher-level declarative query language - simplifies interoperability
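The payoff of this design is that an analysis spanning years of acquisitions becomes one declarative query executed inside the engine. The sketch below shows the idea with psycopg2 against PostgreSQL; the table and column names are hypothetical, not the Siberian archive's actual schema.

```python
# Hypothetical in-database aggregation: a decade of pixel values is
# summarized inside PostgreSQL instead of looping over product files.
import psycopg2

conn = psycopg2.connect('dbname=rs_archive user=science')
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT date_trunc('month', acquired) AS month,
               avg(surface_temp)             AS mean_temp
        FROM   sensor_grid
        WHERE  cell_id = %s AND acquired >= %s
        GROUP  BY 1 ORDER BY 1
    """, (481516, '2006-01-01'))
    for month, mean_temp in cur.fetchall():
        print(month, mean_temp)
```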

  18. Web application for detailed real-time database transaction monitoring for CMS condition data

    CERN Document Server

    Di Guida, Salvatore; Innocente, Vincenzo; Pierro, Antonio

    2009-01-01

    In the upcoming LHC era, databases have become an essential part for the experiments collecting data from LHC, in order to safely store, and consistently retrieve, a wide amount of data, which are produced by different sources. In the CMS experiment at CERN, all this information is stored in ORACLE databases, located on several servers, both inside and outside the CERN network. The task of monitoring different databases is a crucial database administration issue, since different information may be required depending on different users' tasks that involve data transfer, inspection, planning and security. We present here a web application based on a Python web framework and Python modules for data mining purposes. To customize the GUI we record traces of user interactions that are used to build use-case models. In addition the application detects errors in database transactions, such as user errors, application failures, unexpected network shutdown or Structured Query Language (SQL) statement errors, and provid...

  19. 6th International Workshop Soft Computing Applications

    CERN Document Server

    Jain, Lakhmi; Kovačević, Branko

    2016-01-01

    These volumes constitute the Proceedings of the 6th International Workshop on Soft Computing Applications, or SOFA 2014, held on 24-26 July 2014 in Timisoara, Romania. This edition was organized by the University of Belgrade, Serbia in conjunction with the Romanian Society of Control Engineering and Technical Informatics (SRAIT) - Arad Section, The General Association of Engineers in Romania - Arad Section, the Institute of Computer Science, Iasi Branch of the Romanian Academy, and the IEEE Romanian Section. The Soft Computing concept was introduced by Lotfi Zadeh in 1991 and serves to highlight the emergence of computing methodologies in which the accent is on exploiting the tolerance for imprecision and uncertainty to achieve tractability, robustness and low solution cost. Soft computing facilitates the use of fuzzy logic, neurocomputing, evolutionary computing and probabilistic computing in combination, leading to the concept of hybrid intelligent systems. The combination of ...

  20. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    Science.gov (United States)

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  1. GaussDal: An open source database management system for quantum chemical computations

    Science.gov (United States)

    Alsberg, Bjørn K.; Bjerke, Håvard; Navestad, Gunn M.; Åstrand, Per-Olof

    2005-09-01

    An open source software system called GaussDal for management of results from quantum chemical computations is presented. Chemical data contained in output files from different quantum chemical programs are automatically extracted and incorporated into a relational database (PostgreSQL). The Structured Query Language (SQL) is used to extract combinations of chemical properties (e.g., molecules, orbitals, thermo-chemical properties, basis sets, etc.) into data tables for further data analysis, processing and visualization. This type of data management is particularly suited for projects involving a large number of molecules. In the current version of GaussDal, parsers for Gaussian and Dalton output files are supported; future versions may also include parsers for other quantum chemical programs. For visualization and analysis of generated data tables from GaussDal we have used the locally developed open source software SciCraft.
    Program summary:
    Title of program: GaussDal
    Catalogue identifier: ADVT
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVT
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computers: Any
    Operating system under which the system has been tested: Linux
    Programming language used: Python
    Memory required to execute with typical data: 256 MB
    No. of bits in word: 32 or 64
    No. of processors used: 1
    Has the code been vectorized or parallelized?: No
    No. of lines in distributed program, including test data, etc.: 543 531
    No. of bytes in distribution program, including test data, etc.: 7 718 121
    Distribution format: tar.gzip file
    Nature of physical problem: Handling of large amounts of data from quantum chemistry computations.
    Method of solution: Use of an SQL-based database and quantum chemistry software specific parsers.
    Restriction on the complexity of the problem: Program is currently limited to Gaussian and Dalton output, but expandable to other formats. Generates subsets of multiple data tables from
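A sketch of the GaussDal idea follows (not its actual code): a parser pulls one property out of a Gaussian log and stores it in a relational table, after which SQL can slice properties across many molecules. The log-line format and the schema here are illustrative assumptions; sqlite3 stands in for PostgreSQL to keep the example self-contained.

```python
# Parse a (mock) Gaussian SCF energy line and store it relationally.
import re
import sqlite3

def parse_scf_energy(log_text):
    m = re.search(r'SCF Done:\s+E\([A-Z]+\)\s*=\s*(-?\d+\.\d+)', log_text)
    return float(m.group(1)) if m else None

log = 'SCF Done:  E(RHF) =  -76.0107465  A.U. after   9 cycles'

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE results (molecule TEXT, scf_energy REAL)')
db.execute('INSERT INTO results VALUES (?, ?)', ('water', parse_scf_energy(log)))

# SQL then extracts property combinations into data tables for analysis
for row in db.execute('SELECT molecule, scf_energy FROM results'):
    print(row)
```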

  2. Soft Computing Techniques for Process Control Applications

    Directory of Open Access Journals (Sweden)

    Rahul Malhotra

    2011-09-01

    Technological innovations in soft computing techniques have brought automation capabilities to new levels of application. Process control is an important application in any industry for controlling complex system parameters, and it can greatly benefit from such advancements. Conventional control theory is based on mathematical models that describe the dynamic behaviour of process control systems. Due to their lack of comprehensibility, conventional controllers are often inferior to intelligent controllers. Soft computing techniques provide the ability to make decisions and to learn from reliable data or an expert's experience. Moreover, soft computing techniques can cope with a variety of environmental and stability-related uncertainties. This paper explores the different areas of soft computing techniques, viz. fuzzy logic, genetic algorithms and the hybridization of the two, and abridges the results of different process control case studies. It is inferred from the results that soft computing controllers provide better control of errors than conventional controllers. Further, hybrid fuzzy genetic algorithm controllers have optimized the errors more successfully than standalone soft computing and conventional techniques.

  3. Application of XML database technology to biological pathway datasets.

    Science.gov (United States)

    Jiang, Keyuan; Nash, Christopher

    2006-01-01

    The study of biological systems has accumulated a significant amount of biological pathway data, which is evident from the continued growth in both the number of databases and the amount of data available. The development of the BioPAX standard has led to the increased availability of biological pathway datasets through the use of a special XML format, but the lack of a standard storage mechanism makes the querying and aggregation of BioPAX-compliant data challenging. To address this shortcoming, we have developed a storage mechanism leveraging existing XML technologies: the XML database and XQuery. The goal of our project is to provide a generic and centralized store with efficient queries for the needs of biomedical research. A SOAP-based Web service and direct HTTP request methods have also been developed to facilitate public consumption of the datasets online.
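For illustration, the sketch below runs one XPath query over a BioPAX Level-3 file with lxml; the namespace URIs are the standard ones, while the file name and the elements selected are assumptions about the dataset's content. A native XML database would express the same query in XQuery.

```python
# One XPath query over a BioPAX Level-3 OWL/XML file: list protein names.
from lxml import etree

NS = {'bp': 'http://www.biopax.org/release/biopax-level3.owl#',
      'rdf': 'http://www.w3.org/1999/02/22-rdf-syntax-ns#'}

doc = etree.parse('pathway.owl')                       # hypothetical dataset
names = doc.xpath('//bp:Protein/bp:displayName/text()', namespaces=NS)
print(f'{len(names)} proteins:', names[:5])
```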

  4. The Application of an Anatomical Database for Fetal Congenital Heart Disease

    Institute of Scientific and Technical Information of China (English)

    Li Yang; Qiu-Yan Pei; Yun-Tao Li; Zhen-Juan Yang

    2015-01-01

    Background: Fetal congenital heart anomalies are the most common congenital anomalies in live births. Fetal echocardiography (FECG) is the only prenatal diagnostic approach used to detect fetal congenital heart disease (CHD). FECG is not widely used, and the antenatal diagnosis rate of CHD varies considerably. Thus, mastering the anatomical characteristics of different kinds of CHD is critical for ultrasound physicians to improve FECG technology. The aim of this study is to investigate the applications of a fetal CHD anatomic database in FECG teaching and training programs. Methods: We evaluated 60 transverse section databases covering 27 types of fetal CHD built in the Prenatal Diagnosis Center of Peking University People's Hospital. Each original database contained 400-700 cross-sectional digital images with a resolution of 3744 pixels × 5616 pixels. We imported the databases into Amira 5.3.1 (Visage Imaging, Australia) three-dimensional (3D) software. The database functions use a series of 3D software visual operations. The features of the fetal CHD anatomical database were analyzed to determine its applications in FECG continuing education and training. Results: The databases were rebuilt using the 3D software. The original and rebuilt databases can be displayed dynamically, continuously, and synchronously and can be rotated at arbitrary angles. The sections from the dynamic displays and rotation angles are consistent with the sections in FECG. The databases successfully reproduced the anatomic structures and spatial relationship features of different fetal CHDs. We established a fetal CHD anatomy training database and a standardized training database for FECG. Ultrasound physicians and students can learn the anatomical features of fetal CHD and FECG through either centralized training or distance education. Conclusions: The database of fetal CHD successfully reproduced the anatomic structures and spatial relationships of different kinds of fetal CHD. This database can be

  5. Applicability of Computational Systems Biology in Toxicology

    DEFF Research Database (Denmark)

    Kongsbak, Kristine Grønning; Hadrup, Niels; Audouze, Karine Marie Laure

    2014-01-01

    Systems biology as a research field has emerged within the last few decades. Systems biology, often defined as the antithesis of the reductionist approach, integrates information about individual components of a biological system. In integrative systems biology, large data sets from various sources and databases are used to model and predict effects of chemicals on, for instance, human health. In toxicology, computational systems biology enables identification of important pathways and molecules from large data sets; tasks that can be extremely laborious when performed by a classical literature search. Experimentally determined targets of the specific chemical of interest can be used to establish hypotheses on links between the chemical and human diseases. Such information can also be applied for designing more intelligent animal/cell experiments that can test the established hypotheses. Here, we describe how and why to apply an integrative systems biology method in the hypothesis-generating phase of toxicological research.

  6. Drug-target interaction prediction: databases, web servers and computational models.

    Science.gov (United States)

    Chen, Xing; Yan, Chenggang Clarence; Zhang, Xiaotian; Zhang, Xu; Dai, Feng; Yin, Jian; Zhang, Yongdong

    2016-07-01

    Identification of drug-target interactions is an important process in drug discovery. Although high-throughput screening and other biological assays are becoming available, experimental methods for drug-target interaction identification remain extremely costly, time-consuming and challenging even nowadays. Therefore, various computational models have been developed to predict potential drug-target associations on a large scale. In this review, databases and web servers involved in drug-target identification and drug discovery are summarized. In addition, we introduce some state-of-the-art computational models for drug-target interaction prediction, including network-based methods, machine learning-based methods and so on. Specifically, for the machine learning-based methods, much attention is paid to supervised and semi-supervised models, which differ essentially in their adoption of negative samples. Although significant improvements in drug-target interaction prediction have been obtained by many effective computational models, both network-based and machine learning-based methods have their respective disadvantages. Furthermore, we discuss the future directions of network-based drug discovery and the network approach for personalized drug discovery based on personalized medicine, genome sequencing, tumor clone-based networks and cancer hallmark-based networks. Finally, we discuss a new evaluation validation framework and the formulation of the drug-target interaction prediction problem as a more realistic regression problem based on quantitative bioactivity data.
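A toy version of the supervised formulation is sketched below with purely synthetic data (real models use chemical and genomic features of each drug-target pair). It also hints at why the choice of negative samples matters: here the "non-interacting" class is simply assumed.

```python
# Toy supervised drug-target interaction prediction on synthetic features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_pos = rng.normal(1.0, 1.0, size=(100, 16))   # known interacting pairs
X_neg = rng.normal(0.0, 1.0, size=(100, 16))   # *assumed* non-interacting pairs
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 100 + [0] * 100)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print('AUC:', cross_val_score(clf, X, y, cv=5, scoring='roc_auc').mean())
```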

  7. Computer Applications in Health Science Education.

    Science.gov (United States)

    Juanes, Juan A; Ruisoto, Pablo

    2015-09-01

    In recent years, computer application development has experienced exponential growth, not only in the number of publications but also in the scope or contexts that have benefited from its use. In health science training, and medicine specifically, the gradual incorporation of technological developments has transformed the teaching and learning process, resulting in true "educational technology". The goal of this paper is to review the main features involved in these applications and highlight the main lines of research for the future. The results of peer reviewed literature published recently indicate the following features shared by the key technological developments in the field of health science education: first, development of simulation and visualization systems for a more complete and realistic representation of learning material over traditional paper format; second, portability and versatility of the applications, adapted for an increasing number of devices and operative systems; third, increasing focus on open source applications such as Massive Open Online Course (MOOC).

  8. Applications of the Cambridge Structural Database in organic chemistry and crystal chemistry.

    Science.gov (United States)

    Allen, Frank H; Motherwell, W D Samuel

    2002-06-01

    The Cambridge Structural Database (CSD) and its associated software systems have formed the basis for more than 800 research applications in structural chemistry, crystallography and the life sciences. Relevant references, dating from the mid-1970s, and brief synopses of these papers are collected in a database, DBUse, which is freely available via the CCDC website. This database has been used to review research applications of the CSD in organic chemistry, including supramolecular applications, and in organic crystal chemistry. The review concentrates on applications that have been published since 1990 and covers a wide range of topics, including structure correlation, conformational analysis, hydrogen bonding and other intermolecular interactions, studies of crystal packing, extended structural motifs, crystal engineering and polymorphism, and crystal structure prediction. Applications of CSD information in studies of crystal structure precision, the determination of crystal structures from powder diffraction data, together with applications in chemical informatics, are also discussed.

  9. Computer performance optimization systems, applications, processes

    CERN Document Server

    Osterhage, Wolfgang W

    2013-01-01

    Computing power performance was important at times when hardware was still expensive, because hardware had to be put to the best use. Later on this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, application level and business processes level. On each, optimizations can be achieved and cost-cutting p

  10. An Application Development Platform for Neuromorphic Computing

    Energy Technology Data Exchange (ETDEWEB)

    Dean, Mark [University of Tennessee (UT); Chan, Jason [University of Tennessee (UT); Daffron, Christopher [University of Tennessee (UT); Disney, Adam [University of Tennessee (UT); Reynolds, John [University of Tennessee (UT); Rose, Garrett [University of Tennessee (UT); Plank, James [University of Tennessee (UT); Birdwell, John Douglas [University of Tennessee (UT); Schuman, Catherine D [ORNL

    2016-01-01

    Dynamic Adaptive Neural Network Arrays (DANNAs) are neuromorphic computing systems developed as a hardware-based approach to the implementation of neural networks. They feature highly adaptive and programmable structural elements, which model artificial neural networks with spiking behavior. We design them to solve problems using evolutionary optimization. In this paper, we highlight the current hardware and software implementations of DANNA, including their features, functionalities and performance. We then describe the development of an Application Development Platform (ADP) to support efficient application implementation and testing of DANNA-based solutions. We conclude with future directions.

  11. A Web-Based Spectral Database for Environmental Application

    Directory of Open Access Journals (Sweden)

    Jing Fang

    2007-12-01

    The administration and storage of environmental characteristic spectral data are highly relevant in many fields of environmental study, such as measurement of trace gases in the atmosphere and air quality estimation. For this reason, a web-accessible database has been developed, offering ready access to the main parameters of molecular absorption spectral data. Web-based and friendly interfaces allow for interactive queries as well as previews of plots and downloads of files of the resulting spectral data for thorough comparative analyses.

  12. Computer system SANC: its development and applications

    Science.gov (United States)

    Arbuzov, A.; Bardin, D.; Bondarenko, S.; Christova, P.; Kalinovskaya, L.; Sadykov, R.; Sapronov, A.; Riemann, T.

    2016-10-01

    The SANC system is used for systematic calculations of various processes within the Standard Model in the one-loop approximation. QED, electroweak, and QCD corrections are computed to a number of processes being of interest for modern and future high-energy experiments. Several applications for the LHC physics program are presented. Development of the system and the general problems and perspectives for future improvement of the theoretical precision are discussed.

  13. Cavity QED: applications to quantum computation

    Science.gov (United States)

    Xiong, Han; Zubairy, M. Suhail

    2004-10-01

    Possible schemes to implement the basic quantum gates for quantum computation have been presented, based on cavity quantum electrodynamics (QED) systems. We then discuss schemes to implement several important quantum algorithms, such as the discrete quantum Fourier transform (QFT) algorithm and Grover's quantum search algorithm, based on these quantum gates. Some other applications of cavity QED based systems, including the implementation of a quantum disentanglement eraser and an entanglement amplifier, are also discussed.
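Grover's algorithm itself is easy to verify numerically. The sketch below is plain linear algebra, not a cavity QED implementation: for two qubits (N = 4), a single oracle-plus-diffusion iteration concentrates all amplitude on the marked state.

```python
# Numeric sketch of Grover's search on two qubits (N = 4).
import numpy as np

marked = 2                               # index of the sought basis state
N = 4
psi = np.full(N, 1 / np.sqrt(N))         # uniform superposition via Hadamards

oracle = np.eye(N)
oracle[marked, marked] = -1              # phase-flip the marked state

diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)   # inversion about the mean

psi = diffusion @ (oracle @ psi)         # one Grover iteration suffices here
print('measurement probabilities:', np.round(psi ** 2, 3))  # peak at index 2
```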

  14. Application of parallel computing to robot dynamics

    OpenAIRE

    Schäfer, Peter; Schiehlen, Werner

    1993-01-01

    In this paper an approach for the application of parallel processing to the dynamic analysis of robots based on the multibody system method is presented. The inherent structure of the symbolic equations of motion is used for partitioning those into independent modules for concurrent evaluation. The applied strategies for parallelization include the parallel evaluation of subsystem equations and the parallel computation of the inertia matrix along with its factorization, and of the force vecto...

  15. Accomplish the Application Area in Cloud Computing

    CERN Document Server

    Bansal, Nidhi

    2012-01-01

    In examining what cloud computing has accomplished across its application areas, we find that the breadth of areas it covers is its main asset. At a top level, it is an approach to IT where many users, some even from different companies, get access to shared IT resources such as servers, routers and various file extensions, instead of each having their own dedicated servers. This offers many advantages like lower costs and higher efficiency. Unfortunately there have been some high-profile incidents where some of the largest cloud providers have had outages and even lost data, and this underscores that it is important to have backup, security and disaster recovery capabilities. In the education field, it gives better choice and flexibility to IT departments than others. The platform and applications you use can be on-premises, off-premises, or a combination of both, depending on your academic organization's needs. With cloud computing in education, you get powerful software and massive computing resources where and when you...

  16. DEVELOPING FLEXIBLE APPLICATIONS WITH XML AND DATABASE INTEGRATION

    Directory of Open Access Journals (Sweden)

    Hale AS

    2004-04-01

    In recent years the most popular subject in the Information Systems area has been Enterprise Application Integration (EAI). It can be defined as the process of forming a standard connection between the different systems of an organization's information system environment. Mergers and acquisitions of corporations are a major reason for the popularity of Enterprise Application Integration. The main purpose is to solve the application integration problems that arise while the similar systems of such corporations continue working together for some time. With the help of XML technology, it is possible to find solutions to the problems of application integration either within a corporation or between corporations.

  17. Enterprise Android programming Android database applications for the enterprise

    CERN Document Server

    Mednieks, Zigurd; Dornin, Laird; Pan, Zane

    2013-01-01

    The definitive guide to building data-driven Android applications for enterprise systems Android devices represent a rapidly growing share of the mobile device market. With the release of Android 4, they are moving beyond consumer applications into corporate/enterprise use. Developers who want to start building data-driven Android applications that integrate with enterprise systems will learn how with this book. In the tradition of Wrox Professional guides, it thoroughly covers sharing and displaying data, transmitting data to enterprise applications, and much more. Shows Android developers w

  18. Interactive Computing Framework for Engineering Applications

    Directory of Open Access Journals (Sweden)

    J. Knezevic

    2011-01-01

    Full Text Available Problem statement: Even though state-of-the-art computational steering environments allow users to embed their simulation codes as modules for interactive steering without requiring expertise of their own in high-performance computing and visualisation, these environments are limited in their possible applications and mostly entail heavy code changes in order to integrate the existing code. Approach: In this study, we introduce an integration framework for engineering applications that supports distributed computations as well as visualization on-the-fly in order to reduce latency and enable a high degree of interactivity with only minor code alterations involved. Moreover, we tackle the problem of long communication delays in the case of huge data advent, which occur due to rigid coupling of simulation back-ends with visualization front-ends and handicap a user in intuitively exploring the relation of cause and effect. Results: The results for the first test cases are encouraging, showing both that we obtain excellent speedup in parallel scenarios and that the overhead introduced by the framework itself is negligible. Conclusion/Recommendations: Testing the case involving massively parallel simulation, as well as the integration of the framework into several parallel engineering applications, are part of our imminent research.

  19. Explorative Study of SQL Injection Attacks and Mechanisms to Secure Web Application Database- A Review

    Directory of Open Access Journals (Sweden)

    Chandershekhar Sharma

    2016-03-01

    Full Text Available The increasing innovations in web development technologies direct the augmentation of user-friendly web applications. With activities like online banking, shopping, booking and trading, these applications have become an integral part of everyone's daily routine. The profit-driven online business industry has also acknowledged this growth, because a thriving application provides a global platform to an organization. The database of a web application is its most valuable asset, storing sensitive information of individuals and organizations. SQL injection attack (SQLIA) is the topmost threat, as it targets the database of a web application. It allows the attacker to gain control over the application, resulting in financial fraud, leaks of confidential data and even deletion of the database. The exhaustive survey of SQL injection attacks presented in this paper is based on empirical analysis. This comprises the deployment of the injection mechanism for each attack, with respective types, on various websites, dummy databases and web applications. The paramount security mechanisms for web application databases are also discussed to mitigate SQL injection attacks.
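
    As a minimal illustration of the attack class surveyed above (my sketch, not taken from the paper; the table and data are hypothetical), the following contrasts an injectable query built by string concatenation with a parameterized query, the baseline mitigation such surveys recommend:

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
      conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

      user_input = "' OR '1'='1"  # classic tautology-based injection payload

      # Vulnerable: attacker text is spliced into the SQL string, so the
      # WHERE clause collapses to a tautology and matches every row.
      rows = conn.execute(
          "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
      print("concatenated query returned:", rows)   # leaks all users

      # Safe: a placeholder makes the driver treat the input purely as data.
      rows = conn.execute(
          "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
      print("parameterized query returned:", rows)  # returns no rows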

  20. Development of Integrated PSA Database and Application Technology

    Energy Technology Data Exchange (ETDEWEB)

    Han, Sang Hoon; Park, Jin Hee; Kim, Seung Hwan; Choi, Sun Yeong; Jung, Woo Sik; Jeong, Kwang Sub; Ha Jae Joo; Yang, Joon Eon; Min Kyung Ran; Kim, Tae Woon

    2005-04-15

    The purpose of this project is to develop 1) a reliability database framework, 2) a methodology for reactor trip and abnormal event analysis, and 3) a prototype PSA information DB system. We already have a part of the reactor trip and component reliability data. In this study, we extend the collection of data up to 2002. We construct a pilot reliability database for common cause failure and piping failure data. A reactor trip or a component failure may have an impact on the safety of a nuclear power plant. We perform a precursor analysis for such events that occurred in the KSNP, and develop a procedure for the precursor analysis. A risk monitor provides a means to trace the changes in risk following changes in plant configurations. We develop a methodology incorporating the model of the secondary system related to the reactor trip into the risk monitor model. We develop a prototype PSA information system for the UCN 3 and 4 PSA models, into which information for the PSA is entered, such as PSA reports, analysis reports, thermal-hydraulic analysis results, system notebooks, and so on. We develop a unique coherent BDD method to quantify a fault tree and the fastest fault tree quantification engine, FTREX. We develop quantification software for a full PSA model and a one-top model.

  1. A bioinformatics knowledge discovery in text application for grid computing.

    Science.gov (United States)

    Castellano, Marcello; Mastronardi, Giuseppe; Bellotti, Roberto; Tarricone, Gianfranco

    2009-06-16

    A fundamental activity in biomedical research is Knowledge Discovery, which has the ability to search through large amounts of biomedical information such as documents and data. High-performance computational infrastructures, such as Grid technologies, are emerging as a possible infrastructure to tackle the intensive use of Information and Communication resources in life science. The goal of this work was to develop a software middleware solution in order to exploit the many knowledge discovery applications on scalable and distributed computing systems and achieve intensive use of ICT resources. The development of a grid application for Knowledge Discovery in Text using a middleware-solution-based methodology is presented. The system must be able to process a user application model and turn its work into many parallel jobs distributed over the computational nodes. Finally, the system must be aware of the computational resources available and their status, and must be able to monitor the execution of the parallel jobs. These operative requirements led us to design a middleware to be specialized using user application modules. It includes a graphical user interface in order to access a node search system, a load balancing system and a transfer optimizer to reduce communication costs. A prototype of the middleware solution and its performance evaluation in terms of the speed-up factor are shown. It was written in Java on Globus Toolkit 4 to build the grid infrastructure based on GNU/Linux computer grid nodes. A test was carried out and the results are shown for the named entity recognition search of symptoms and pathologies. The search was applied to a collection of 5,000 scientific documents taken from PubMed. In this paper we discuss the development of a grid application based on a middleware solution. It has been tested on a knowledge discovery in text process to extract new and useful information about symptoms and pathologies from a large collection of
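
    A framework-agnostic sketch of the job-splitting idea described above (the actual middleware is written in Java on Globus Toolkit 4; this Python fragment only illustrates partitioning a document collection into parallel named-entity searches, with find_entities standing in as a hypothetical NER module):

      from multiprocessing import Pool

      SYMPTOM_TERMS = {"fever", "cough", "fatigue"}   # toy term dictionary

      def find_entities(doc):
          """Hypothetical stand-in for a real NER module: dictionary lookup."""
          return [w for w in doc.lower().split() if w.strip(".,") in SYMPTOM_TERMS]

      def chunk(docs, n_jobs):
          """Split the collection into n_jobs roughly equal parallel jobs."""
          return [docs[i::n_jobs] for i in range(n_jobs)]

      def run_job(docs):
          return [find_entities(d) for d in docs]

      if __name__ == "__main__":
          collection = ["Patient reports fever and cough.",
                        "No fatigue observed."] * 1000
          with Pool(4) as pool:                       # four worker "nodes"
              results = pool.map(run_job, chunk(collection, 4))
          hits = sum(len(h) for job in results for h in job)
          print(hits, "entity mentions found")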

  2. Research on Performance Evaluation of Biological Database based on Layered Queuing Network Model under the Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Zhengbin Luo

    2013-06-01

    Full Text Available Evaluating the performance of a biological database based on a layered queuing network model under a cloud computing environment is a premise of, as well as an important step in, biological database optimization. Based on predecessors' research concerning computer software and hardware performance evaluation under cloud environments, this study constructs a model system to evaluate the performance of biological databases based on a layered queuing network model under a cloud environment. Moreover, the traditional layered queuing network model is also optimized and upgraded in this process. After constructing the performance evaluation system, the study applies a laboratory experiment method to test the validity of the constructed performance model. The test results show that this model is effective in evaluating the performance of biological systems under a cloud environment and that the predicted results are quite close to the tested results. This demonstrates the validity of the model in evaluating the performance of biological databases.

  3. Bacterial computing: a form of natural computing and its applications.

    Science.gov (United States)

    Lahoz-Beltra, Rafael; Navarro, Jorge; Marijuán, Pedro C

    2014-01-01

    The capability to establish adaptive relationships with the environment is an essential characteristic of living cells. Both bacterial computing and bacterial intelligence are general traits manifested in adaptive behaviors that respond to surrounding environmental conditions. These two traits have generated a variety of theoretical and applied approaches. As the different systems of bacterial signaling and the different ways of genetic change become better known and more carefully explored, the whole range of adaptive possibilities of bacteria may be studied from new angles. For instance, instances of molecular "learning" appear along the mechanisms of evolution. More concretely, and looking specifically at the time dimension, the bacterial mechanisms of learning and evolution appear as two different and related mechanisms for adaptation to the environment: the former in somatic time and the latter in evolutionary time. The present chapter reviews the possible application of both kinds of mechanisms to prokaryotic molecular computing schemes as well as to the solution of real-world problems.

  4. Applications of computational intelligence in biomedical technology

    CERN Document Server

    Majernik, Jaroslav; Pancerz, Krzysztof; Zaitseva, Elena

    2016-01-01

    This book presents the latest results and selected applications of Computational Intelligence in Biomedical Technologies. Most of the contributions deal with problems of Biomedical and Medical Informatics, ranging from theoretical considerations to practical applications. Various aspects of development methods and algorithms in Biomedical and Medical Informatics, as well as algorithms for medical image processing and modeling methods, are discussed. Individual contributions also cover medical decision-making support, estimation of treatment risks, reliability of medical systems, problems of practical clinical applications and many other topics. This book is intended for scientists interested in problems of Biomedical Technologies, for researchers and academic staff, and for all dealing with Biomedical and Medical Informatics, as well as PhD students. Useful information is offered also to IT companies and developers of equipment and/or software for medicine, and to medical professionals.

  5. The new Cloud Dynamics and Radiation Database algorithms for AMSR2 and GMI: exploitation of the GPM observational database for operational applications

    Science.gov (United States)

    Cinzia Marra, Anna; Casella, Daniele; Martins Costa do Amaral, Lia; Sanò, Paolo; Dietrich, Stefano; Panegrossi, Giulia

    2017-04-01

    Two new precipitation retrieval algorithms, for the Advanced Microwave Scanning Radiometer 2 (AMSR2) and for the GPM Microwave Imager (GMI), are presented. The algorithms are based on the Cloud Dynamics and Radiation Database (CDRD) Bayesian approach and represent an evolution of the previous version applied to Special Sensor Microwave Imager/Sounder (SSMIS) observations and used operationally within the EUMETSAT Satellite Application Facility on Support to Operational Hydrology and Water Management (H-SAF). The main innovation of these new products is the use of an extended, entirely empirical database derived from coincident radar and radiometer observations from the NASA/JAXA Global Precipitation Measurement Core Observatory (GPM-CO) (Dual-frequency Precipitation Radar (DPR) and GMI). The other new aspects are: 1) a new rain/no-rain screening approach; 2) the use of Empirical Orthogonal Functions (EOF) and Canonical Correlation Analysis (CCA) both in the screening approach and in the Bayesian algorithm; 3) the use of new meteorological and environmental ancillary variables to categorize the database and mitigate the problem of non-uniqueness of the retrieval solution; and 4) the development and implementation of specific modules for minimizing computational time. The CDRD algorithms for AMSR2 and GMI are able to handle the extremely large observational database available from GPM-CO and provide rainfall estimates with minimum latency, making them suitable for near-real-time hydrological and operational applications. As for CDRD for AMSR2, a verification study over Italy using ground-based radar data, and over the MSG full-disk area using coincident GPM-CO/AMSR2 observations, has been carried out. Results show remarkable AMSR2 capabilities for rainfall rate (RR) retrieval over ocean (for RR > 0.25 mm/h), good capabilities over vegetated land (for RR > 1 mm/h), while for coastal areas the results are less certain. Comparisons with NASA GPM products, and with
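
    Schematically, the Bayesian database retrieval at the core of CDRD-type algorithms reduces to a likelihood-weighted average over database entries (a generic textbook formulation in NumPy, not the operational code; variable names are illustrative):

      import numpy as np

      def bayesian_retrieval(tb_obs, tb_db, rr_db, obs_cov):
          """Rain rate as a likelihood-weighted average over the database.

          tb_obs  -- observed brightness-temperature vector, shape (c,)
          tb_db   -- simulated brightness temperatures, shape (n, c)
          rr_db   -- rain rates attached to database entries, shape (n,)
          obs_cov -- observation/model error covariance, shape (c, c)
          """
          diff = tb_db - tb_obs                                  # (n, c)
          inv_cov = np.linalg.inv(obs_cov)
          d2 = np.einsum("nc,cd,nd->n", diff, inv_cov, diff)     # Mahalanobis
          w = np.exp(-0.5 * d2)                                  # Gaussian weights
          return np.sum(w * rr_db) / np.sum(w)

      rng = np.random.default_rng(0)                  # toy 5-entry database
      tb_db = 250.0 + 30.0 * rng.random((5, 3))
      rr_db = 10.0 * rng.random(5)
      print(bayesian_retrieval(tb_db[0], tb_db, rr_db, 4.0 * np.eye(3)))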

  6. Navigating through the Jungle of Allergens: Features and Applications of Allergen Databases.

    Science.gov (United States)

    Radauer, Christian

    2017-01-01

    The increasing amount of available data on allergenic proteins has demanded the establishment of structured, freely accessible allergen databases. In this review article, the features and applications of 6 of the most widely used allergen databases are discussed. The WHO/IUIS Allergen Nomenclature Database is the official resource of allergen designations. Allergome is the most comprehensive collection of data on allergens and allergen sources. AllergenOnline is aimed at providing a peer-reviewed database of allergen sequences for prediction of the allergenicity of proteins, such as those planned to be inserted into genetically modified crops. The Structural Database of Allergenic Proteins (SDAP) provides a database of allergen sequences, structures, and epitopes linked to bioinformatics tools for sequence analysis and comparison. The Immune Epitope Database (IEDB) is the largest repository of T-cell, B-cell, and major histocompatibility complex protein epitopes, including epitopes of allergens. AllFam classifies allergens into families of evolutionarily related proteins using definitions from the Pfam protein family database. These databases contain mostly overlapping data, but also show differences in terms of their targeted users, the criteria for including allergens, the data shown for each allergen, and the availability of bioinformatics tools. © 2017 S. Karger AG, Basel.

  7. Cloud database development and management

    CERN Document Server

    Chao, Lee

    2013-01-01

    Nowadays, cloud computing is almost everywhere. However, one can hardly find a textbook that utilizes cloud computing for teaching database and application development. This cloud-based database development book teaches both theory and practice with step-by-step instructions and examples. The book helps readers to set up a cloud computing environment for teaching and learning database systems, and covers adequate conceptual content for students and IT professionals to gain the necessary knowledge and hands-on skills to set up cloud-based database systems.

  8. 6th International Conference on Computer Science and its Applications

    CERN Document Server

    Stojmenovic, Ivan; Jeong, Hwa; Yi, Gangman

    2015-01-01

    The 6th FTRA International Conference on Computer Science and its Applications (CSA-14) will be held in Guam, USA, Dec. 17-19, 2014. CSA-14 presents a comprehensive conference focused on the various aspects of advances in engineering systems in computer science and applications, including ubiquitous computing, U-Health care systems, Big Data, UI/UX for human-centric computing, computing services, and bioinformatics and bio-inspired computing, and will show recent advances in various aspects of computing technology, ubiquitous computing services and their applications.

  9. Distributed Computing and its Scope in Defence Applications

    Directory of Open Access Journals (Sweden)

    B.V. George

    2005-10-01

    Full Text Available Distributed computing is one of the paradigms in the world of information technology. Middleware is the essential tool for implementing distributed computing, overcoming the heterogeneity of platforms and languages. DRDO's intranet, DRONA, has the potential to host distributed applications across the network. This paper deals with the essentials of distributed computing, the architecture of the DRONA network, and the scope of distributed computing in Defence applications. It also suggests a few possible applications of distributed computing.

  10. Fast Computation of Global Sensitivity Kernel Database Based on Spectral-Element Simulations

    Science.gov (United States)

    Sales de Andrade, Elliott; Liu, Qinya

    2017-07-01

    Finite-frequency sensitivity kernels, a theoretical improvement over simple infinitely thin ray paths, have been used extensively in recent global and regional tomographic inversions. These sensitivity kernels provide more consistent and accurate interpretation of a growing number of broadband measurements, and are critical in mapping 3D heterogeneous structures of the mantle. Based on the Born approximation, the calculation of sensitivity kernels requires the interaction of the forward wavefield and an adjoint wavefield generated by placing adjoint sources at stations. Both fields can be obtained accurately through numerical simulations of seismic wave propagation, which is particularly important for kernels of phases that cannot be sufficiently described by ray theory (such as core-diffracted waves). However, the total number of forward and adjoint numerical simulations required to build kernels for individual source-receiver pairs and to form the design matrix for classical tomography is computationally unaffordable. In this paper, we take advantage of the symmetry of 1D reference models, perform moment-tensor forward and point-force adjoint spectral-element simulations, and save six-component strain fields only on the equatorial plane, based on the open-source spectral-element simulation package SPECFEM3D_GLOBE. Sensitivity kernels for seismic phases at any epicentral distance can be efficiently computed by combining forward and adjoint strain wavefields from the saved strain field database, which significantly reduces both the number of simulations and the amount of storage required for global tomographic problems. Based on this technique, we compute traveltime, amplitude and/or boundary kernels of isotropic and radially anisotropic elastic parameters for various phases (P, S, Pdiff, Sdiff, depth phases, surface-reflected phases, surface waves, the S660S boundary, etc.) for the 1D ak135 model, in preparation for future global tomographic inversions.
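
    Schematically, once both wavefields are stored, the kernel construction reduces to a time integral of the interaction of the forward strain field with the time-reversed adjoint strain field at each grid point (a generic Born-kernel form shown as a NumPy sketch; the real SPECFEM3D_GLOBE machinery additionally handles rotation of the saved equatorial-plane fields, interpolation and the model parameterization):

      import numpy as np

      def interaction_kernel(strain_fwd, strain_adj, dt):
          """Contract forward and time-reversed adjoint strain fields.

          strain_fwd, strain_adj -- arrays of shape (nt, npts, 6): the six
          independent strain components at npts grid points over nt steps.
          Returns one kernel value per grid point, shape (npts,).
          """
          adj_rev = strain_adj[::-1]        # reverse the adjoint field in time
          # contract the strain tensors and integrate over time
          return np.einsum("tps,tps->p", strain_fwd, adj_rev) * dt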

  11. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  12. Distributed Computing Framework for Synthetic Radar Application

    Science.gov (United States)

    Gurrola, Eric M.; Rosen, Paul A.; Aivazis, Michael

    2006-01-01

    We are developing an extensible software framework in response to Air Force and NASA needs for distributed computing facilities for a variety of radar applications. The objective of this work is to develop a Python-based software framework, that is, the framework elements of the middleware that allow developers to control processing flow on a grid in a distributed computing environment. Framework architectures to date allow developers to connect processing functions together as interchangeable objects, thereby allowing a data flow graph to be devised for a specific problem to be solved. The Pyre framework, developed at the California Institute of Technology (Caltech) and now being used as the basis for next-generation radar processing at JPL, is a Python-based software framework. We have extended the Pyre framework to include new facilities to deploy processing components as services, including components that monitor and assess the state of the distributed network for eventual real-time control of grid resources.

  13. Discovering Knowledge from AIS Database for Application in VTS

    Science.gov (United States)

    Tsou, Ming-Cheng

    The widespread use of the Automatic Identification System (AIS) has had a significant impact on maritime technology. AIS enables the Vessel Traffic Service (VTS) not only to offer commonly known functions such as identification, tracking and monitoring of vessels, but also to provide rich real-time information that is useful for marine traffic investigation, statistical analysis and theoretical research. However, due to the rapid accumulation of AIS observation data, the VTS platform is often unable to absorb and analyze it quickly and effectively. Traditional observation and analysis methods are becoming less suitable for the modern AIS generation of VTS. In view of this, we applied the same data mining techniques used for business intelligence discovery (in Customer Relationship Management (CRM) business marketing) to the analysis of AIS observation data. This recasts the marine traffic problem as a business-marketing problem and integrates technologies such as Geographic Information Systems (GIS), database management systems, data warehousing and data mining to facilitate the discovery of hidden and valuable information in a huge amount of observation data. Consequently, this provides marine traffic managers with a useful strategic planning resource.

  14. Annual review of computer science

    Energy Technology Data Exchange (ETDEWEB)

    Traub, J.F. (Columbia Univ., New York, NY (USA)); Grosz, B.J. (Harvard Univ., Cambridge, MA (USA)); Lampson, B.W. (Digital Equipment Corp. (US)); Nilsson, N.J. (Stanford Univ., CA (USA))

    1988-01-01

    This book contains the annual review of computer science. Topics covered include: Database security, parallel algorithmic techniques for combinatorial computation, algebraic complexity theory, computer applications in manufacturing, and computational geometry.

  15. Decomposability queueing and computer system applications

    CERN Document Server

    Courtois, P J

    1977-01-01

    Decomposability: Queueing and Computer System Applications presents a set of powerful methods for systems analysis. This 10-chapter text covers the theory of nearly completely decomposable systems upon which specific analytic methods are based.The first chapters deal with some of the basic elements of a theory of nearly completely decomposable stochastic matrices, including the Simon-Ando theorems and the perturbation theory. The succeeding chapters are devoted to the analysis of stochastic queuing networks that appear as a type of key model. These chapters also discuss congestion problems in

  16. Web application for genetic modification flux with database to estimate metabolic fluxes of genetic mutants.

    Science.gov (United States)

    Mohd Ali, Noorlin; Tsuboi, Ryo; Matsumoto, Yuta; Koishi, Daisuke; Inoue, Kentaro; Maeda, Kazuhiro; Kurata, Hiroyuki

    2016-07-01

    Computational analysis of metabolic fluxes is essential in understanding the structure and function of a metabolic network and in rationally designing genetically modified mutants for engineering purposes. We previously presented the genetic modification flux (GMF) approach, which predicts the flux distribution of a broad range of genetically modified mutants. To enhance the feasibility and usability of GMF, we have developed a web application with a metabolic network database to predict the flux distributions of genetically modified mutants. One hundred and twelve data sets of Escherichia coli, Corynebacterium glutamicum, Saccharomyces cerevisiae, and Chinese hamster ovary cells were registered as standard models.

  17. Medical applications: a database and characterization of apps in Apple iOS and Android platforms.

    Science.gov (United States)

    Seabrook, Heather J; Stromer, Julie N; Shevkenek, Cole; Bharwani, Aleem; de Grood, Jill; Ghali, William A

    2014-08-27

    Medical applications (apps) for smart phones and tablet computers are growing in number and are commonly used in healthcare. In this context, there is a need for a diverse community of app users, medical researchers, and app developers to better understand the app landscape. In mid-2012, we undertook an environmental scan and classification of the medical app landscape in the two dominant platforms by searching the medical category of the Apple iTunes and Google Play app download sites. We identified target audiences, functions, costs and content themes using app descriptions and captured these data in a database. We only included apps released or updated between October 1, 2011 and May 31, 2012, with a primary "medical" app store categorization, in English, that contained health or medical content. Our sample of Android apps was limited to the most popular apps in the medical category. Our final sample of Apple iOS (n = 4561) and Android (n = 293) apps illustrates a diverse medical app landscape. The proportion of Apple iOS apps for the public (35%) and for physicians (36%) is similar. Few Apple iOS apps specifically target nurses (3%). Among the Android apps, those targeting the public dominated our sample (51%). The distribution of app functions is similar on both platforms, with reference being the most common function. Most app functions and content themes vary considerably by target audience. Social media apps are more common for patients and the public, while conference apps target physicians. We characterized existing medical apps and illustrated their diversity in terms of target audience, main functions, cost and healthcare topic. The resulting app database is a resource for app users, app developers and health informatics researchers.

  18. The definitive guide to MongoDB the noSQL database for cloud and desktop computing

    CERN Document Server

    Plugge, Eelco; Hawkins, Tim

    2010-01-01

    MongoDB, a cross-platform NoSQL database, is the fastest-growing new database in the world. MongoDB provides a rich document-oriented structure with dynamic queries that you'll recognize from RDBMS offerings such as MySQL. In other words, this is a book about a NoSQL database that does not require the SQL crowd to re-learn how the database world works! MongoDB has reached 1.0 and already boasts 50,000+ users. The community is strong and vibrant and MongoDB is improving at a fast rate. With scalable and fast databases becoming critical for today's applications, this book shows you how to inst
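
    As a minimal illustration of the document-oriented, dynamically queried model the book is built around (my sketch rather than an excerpt; assumes a local MongoDB server and the pymongo driver):

      from pymongo import MongoClient

      client = MongoClient("mongodb://localhost:27017/")   # assumed local server
      db = client["bookstore"]                             # hypothetical database

      # Documents need no fixed schema: fields may vary per document.
      db.books.insert_one({"title": "Moby-Dick", "year": 1851,
                           "tags": ["whale", "classic"]})
      db.books.insert_one({"title": "Dracula", "year": 1897})

      # Dynamic queries resemble the WHERE clauses familiar from an RDBMS.
      for doc in db.books.find({"year": {"$lt": 1900}}).sort("year"):
          print(doc["title"], doc["year"])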

  19. Ethics across the computer science curriculum: privacy modules in an introductory database course.

    Science.gov (United States)

    Appel, Florence

    2005-10-01

    This paper describes the author's experience of infusing an introductory database course with privacy content, and the ongoing project entitled Integrating Ethics Into the Database Curriculum that evolved from that experience. The project, which has received funding from the National Science Foundation, involves the creation of a set of privacy modules that can be implemented systematically by database educators throughout the database design thread of an undergraduate course.

  20. The establishment and initial application of emotional disorder database in brain tumor patients

    Directory of Open Access Journals (Sweden)

    Hong-bo ZHANG

    2015-09-01

    Full Text Available Objective: To establish a database for brain tumor patients with mood disorders and to explore the status and epidemiological characteristics of emotional function. Methods: Using computer software, a database of brain tumor patients with affective disorders was established based on clinical requirements. The data of 140 cases of brain tumors undergoing operative treatment were recorded, so as to establish a comprehensive public data platform and realize resource sharing. Results: The clinical data of the 140 brain tumor patients were successfully entered into the registration query system. The database provides simple and complex mood data queries for users to browse. Conclusions: The mood disorder database for patients with brain tumors can provide related data samples and resources for basic and clinical research, and it can effectively share clinical research data and reduce research costs. DOI: 10.3969/j.issn.1672-6731.2015.09.010

  1. SWEETLEAD: an in silico database of approved drugs, regulated chemicals, and herbal isolates for computer-aided drug discovery.

    Directory of Open Access Journals (Sweden)

    Paul A Novick

    Full Text Available In the face of drastically rising drug discovery costs, strategies promising to reduce development timelines and expenditures are being pursued. Computer-aided virtual screening and repurposing approved drugs are two such strategies that have shown recent success. Herein, we report the creation of a highly curated in silico database of chemical structures representing approved drugs, chemical isolates from traditional medicinal herbs, and regulated chemicals, termed the SWEETLEAD database. The motivation for SWEETLEAD stems from the observation of conflicting information in publicly available chemical databases and the lack of a highly curated database of chemical structures for the globally approved drugs. A consensus-building scheme surveying information from several publicly accessible databases was employed to identify the correct structure for each chemical. The resulting structures are filtered for the active pharmaceutical ingredient and standardized, and differing formulations of the same drug are combined in the final database. The publicly available release of SWEETLEAD (https://simtk.org/home/sweetlead) provides an important tool to enable the successful completion of computer-aided repurposing and drug discovery campaigns.
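
    A toy version of the consensus-building idea (my illustration, not the published pipeline; assumes the open-source RDKit toolkit): canonicalize each source's reported structure and keep the one on which a majority of databases agree.

      from collections import Counter
      from rdkit import Chem

      def consensus_structure(smiles_from_sources):
          """Pick the majority canonical structure among database reports."""
          canonical = []
          for smi in smiles_from_sources:
              mol = Chem.MolFromSmiles(smi)
              if mol is not None:                       # skip unparseable entries
                  canonical.append(Chem.MolToSmiles(mol))   # canonical SMILES
          if not canonical:
              return None
          structure, votes = Counter(canonical).most_common(1)[0]
          # require agreement from more than half of the parseable reports
          return structure if votes > len(canonical) / 2 else None

      # two of three sources report aspirin, written differently; they agree
      # after canonicalization, so the aspirin structure wins the vote
      reports = ["CC(=O)Oc1ccccc1C(=O)O", "OC(=O)c1ccccc1OC(C)=O", "CCO"]
      print(consensus_structure(reports))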

  2. Soft Computing Applications : Proceedings of the 5th International Workshop Soft Computing Applications

    CERN Document Server

    Fodor, János; Várkonyi-Kóczy, Annamária; Dombi, Joszef; Jain, Lakhmi

    2013-01-01

    This volume contains the Proceedings of the 5th International Workshop on Soft Computing Applications (SOFA 2012). The book covers a broad spectrum of soft computing techniques and theoretical and practical applications employing knowledge and intelligence to find solutions for industrial, economic and medical problems worldwide. The combination of such intelligent systems tools and a large number of applications introduces a need for a synergy of scientific and technological disciplines in order to show the great potential of Soft Computing in all domains. The conference papers included in these proceedings, published post-conference, were grouped into the following areas of research: Soft Computing and Fusion Algorithms in Biometrics; Fuzzy Theory, Control and Applications; Modelling and Control Applications; Steps towa...

  3. International Conference on Frontiers of Intelligent Computing : Theory and Applications

    CERN Document Server

    Bhateja, Vikrant; Udgata, Siba; Pattnaik, Prasant

    2017-01-01

    The book is a collection of high-quality peer-reviewed research papers presented at International Conference on Frontiers of Intelligent Computing: Theory and applications (FICTA 2016) held at School of Computer Engineering, KIIT University, Bhubaneswar, India during 16 – 17 September 2016. The book presents theories, methodologies, new ideas, experiences and applications in all areas of intelligent computing and its applications to various engineering disciplines like computer science, electronics, electrical and mechanical engineering.

  4. A computational platform to maintain and migrate manual functional annotations for BioCyc databases.

    Science.gov (United States)

    Walsh, Jesse R; Sen, Taner Z; Dickerson, Julie A

    2014-10-12

    BioCyc databases are an important resource for information on biological pathways and genomic data. Such databases represent the accumulation of biological data, some of which has been manually curated from the literature. An essential feature of these databases is the continuing integration of data as new knowledge is discovered. As functional annotations are improved, scalable methods are needed for curators to manage annotations without detailed knowledge of the specific design of the BioCyc database. We have developed CycTools, a software tool which allows curators to maintain functional annotations in a model organism database. This tool builds on existing software to improve and simplify the import of user-provided annotation data into BioCyc databases. Additionally, CycTools automatically resolves synonyms and alternate identifiers contained within the database into the appropriate internal identifiers. Automating steps in the manual data entry process can improve curation efforts for major biological databases. The functionality of CycTools is demonstrated by transferring GO term annotations from MaizeCyc to matching proteins in CornCyc, both maize metabolic pathway databases available at MaizeGDB, and by creating strain-specific databases for metabolic engineering.

  5. Application of computed tomography in paleoanthropological research

    Institute of Scientific and Technical Information of China (English)

    Xiujie Wu; Lynne A.Schepartz

    2009-01-01

    Hominin fossils are the most important materials for exploring questions about human origins and evolution. Because human fossils are very rare, it is impossible to use highly destructive techniques in order to study their morphology. Traditional analyses could only rely on the information gained from the study of the external morphology of specimens, and these approaches limited the study of human evolution. The application of computed tomography (CT) has facilitated major developments in paleoanthropology. To date, few studies on Chinese hominin fossils have used CT scanning methodology, but this is rapidly changing. In order to better understand the application of CT methodology in paleoanthropology, we review the applications of CT scanning on hominin fossils throughout the world. Studies examined include virtual fossil reconstruction, the use of endocasts to elucidate brain morphology, biomechanical analyses of bone distribution, imaging of mummies and research on early human health, and skeletal and dental microanatomical research. © 2009 National Natural Science Foundation of China and Chinese Academy of Sciences. Published by Elsevier Limited and Science in China Press. All rights reserved.

  6. SQL Application Optimization of ORACLE Database%ORACLE数据库的SQL应用优化

    Institute of Scientific and Technical Information of China (English)

    杨其鸣

    2011-01-01

    SQL Server 2003 is a relatively complex database that relies mainly on internal mapping relationships. Its services generally integrate replication, integration, analysis, notification, reporting and other related services, together with the effective integration of third-party development tools such as Visual Studio .NET. In the SQL Server 2003 database, the application optimization of SQL statements is very important for database development. Starting from SQL application optimization, this paper analyzes SQL statements for the database.

  7. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  9. Database of atomistic reaction mechanisms with application to kinetic Monte Carlo.

    Science.gov (United States)

    Terrell, Rye; Welborn, Matthew; Chill, Samuel T; Henkelman, Graeme

    2012-07-07

    Kinetic Monte Carlo is a method used to model the state-to-state kinetics of atomic systems when all reaction mechanisms and rates are known a priori. Adaptive versions of this algorithm use saddle searches from each visited state so that unexpected and complex reaction mechanisms can also be included. Here, we describe how calculated reaction mechanisms can be stored concisely in a kinetic database and subsequently reused to reduce the computational cost of such simulations. As all accessible reaction mechanisms available in a system are contained in the database, the cost of the adaptive algorithm is reduced towards that of standard kinetic Monte Carlo.
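
    A minimal sketch of the reuse idea (generic kinetic Monte Carlo with a mechanism cache, written for illustration rather than taken from the paper): each visited state's reaction list is stored in a database keyed by state, so revisits skip the expensive saddle searches.

      import math, random

      mechanism_db = {}   # state -> list of (rate, next_state): the kinetic database

      def find_mechanisms(state):
          """Stand-in for expensive saddle searches; here a toy 1D random walk."""
          return [(1.0, state - 1), (2.0, state + 1)]

      def kmc_step(state, t):
          if state not in mechanism_db:            # search only unseen states
              mechanism_db[state] = find_mechanisms(state)
          mechs = mechanism_db[state]
          total = sum(rate for rate, _ in mechs)
          # pick a mechanism with probability proportional to its rate
          r, acc = random.random() * total, 0.0
          for rate, next_state in mechs:
              acc += rate
              if r <= acc:
                  break
          t += -math.log(1.0 - random.random()) / total   # advance residence time
          return next_state, t

      state, t = 0, 0.0
      for _ in range(1000):
          state, t = kmc_step(state, t)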

  10. An algorithm of discovering signatures from DNA databases on a computer cluster

    OpenAIRE

    Lee, Hsiao Ping; Sheu, Tzu-Fang

    2014-01-01

    Background Signatures are short sequences that are unique, not similar to any other sequence in a database, and can be used as the basis to identify different species. Even though several signature discovery algorithms have been proposed in the past, these algorithms require entire databases to be loaded into memory, restricting the amount of data they can process and making them unable to handle databases with large amounts of data. Also, those algorithms ...
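
    A naive in-memory rendering of the signature concept (illustrative only; the paper's contribution is precisely to avoid loading the whole database like this): a k-mer is a signature of a sequence if it occurs in that sequence and nowhere else in the database.

      def kmers(seq, k):
          return {seq[i:i + k] for i in range(len(seq) - k + 1)}

      def signatures(database, k):
          """Map each sequence ID to the k-mers unique to it in the database."""
          counts = {}
          for seq in database.values():
              for mer in kmers(seq, k):          # sets: count once per sequence
                  counts[mer] = counts.get(mer, 0) + 1
          return {sid: {m for m in kmers(seq, k) if counts[m] == 1}
                  for sid, seq in database.items()}

      db = {"speciesA": "ACGTACGGA", "speciesB": "ACGTTTGGA"}
      print(signatures(db, 4))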

  11. Computational fluid dynamics: Transition to design applications

    Science.gov (United States)

    Bradley, R. G.; Bhateley, I. C.; Howell, G. A.

    1987-01-01

    The development of aerospace vehicles, over the years, has been an evolutionary process in which engineering progress in the aerospace community was based, generally, on prior experience and databases obtained through wind tunnel and flight testing. Advances in the fundamental understanding of flow physics, wind tunnel and flight test capability, and mathematical insights into the governing flow equations were translated into improved air vehicle design. The modern-day field of Computational Fluid Dynamics (CFD) is a continuation of the growth in analytical capability and the digital mathematics needed to solve the more rigorous form of the flow equations. Some of the technical and managerial challenges that result from rapidly developing CFD capabilities, some of the steps being taken by the Fort Worth Division of General Dynamics to meet these challenges, and some of the specific areas of application for high-performance air vehicles are presented.

  12. Sticker DNA computer model--PartⅡ:Application

    Institute of Scientific and Technical Information of China (English)

    XU Jin; LI Sanping; DONG Yafei; WEI Xiaopeng

    2004-01-01

    The sticker model is one of the basic DNA computer models. This model is coded with single-double stranded DNA molecules. It has the advantages that the operations require no strand extension and use no enzymes; what's more, the materials are reusable. Therefore, it has aroused the attention and interest of scientists in many fields. In this paper, we extend and improve the sticker model, which will definitely be beneficial to the construction of DNA computers. This paper is the second part of our series paper, and mainly focuses on the application of the sticker model. It consists of the following three sections: first, the matrix representation of the sticker model is presented; then a brief review of past research on graph and combinatorial optimization problems, such as the minimal set covering problem, the vertex covering problem, the Hamiltonian path or cycle problem, the maximal clique problem, the maximal independent set problem and the Steiner spanning tree problem, is described; finally, a DNA algorithm for the graph isomorphism problem based on the sticker model is given.

  13. Cloud-Based Computational Tools for Earth Science Applications

    Science.gov (United States)

    Arendt, A. A.; Fatland, R.; Howe, B.

    2015-12-01

    Earth scientists are increasingly required to think across disciplines and utilize a wide range of datasets in order to solve complex environmental challenges. Although significant progress has been made in distributing data, researchers must still invest heavily in developing computational tools to accommodate their specific domain. Here we document our development of lightweight computational data systems aimed at enabling rapid data distribution, analytics and problem solving tools for Earth science applications. Our goal is for these systems to be easily deployable, scalable and flexible to accommodate new research directions. As an example we describe "Ice2Ocean", a software system aimed at predicting runoff from snow and ice in the Gulf of Alaska region. Our backend components include relational database software to handle tabular and vector datasets, Python tools (NumPy, pandas and xray) for rapid querying of gridded climate data, and an energy and mass balance hydrological simulation model (SnowModel). These components are hosted in a cloud environment for direct access across research teams, and can also be accessed via API web services using a REST interface. This API is a vital component of our system architecture, as it enables quick integration of our analytical tools across disciplines, and can be accessed by any existing data distribution centers. We will showcase several data integration and visualization examples to illustrate how our system has expanded our ability to conduct cross-disciplinary research.
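
    A schematic of the REST access pattern described above (the endpoint, parameters and field names are hypothetical; assumes the requests and pandas libraries):

      import requests
      import pandas as pd

      # hypothetical Ice2Ocean-style endpoint returning JSON records of daily runoff
      BASE_URL = "https://example.org/api/v1/runoff"

      resp = requests.get(BASE_URL,
                          params={"basin": "gulf_of_alaska",
                                  "start": "2015-06-01", "end": "2015-06-30"},
                          timeout=30)
      resp.raise_for_status()

      # load the JSON payload into a DataFrame for cross-disciplinary analysis
      df = pd.DataFrame(resp.json())
      daily_total = df.groupby("date")["runoff_m3s"].sum()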

  14. Database Manager

    Science.gov (United States)

    Martin, Andrew

    2010-01-01

    It is normal practice today for organizations to store large quantities of records of related information as computer-based files or databases. Purposeful information is retrieved by performing queries on the data sets. The purpose of DATABASE MANAGER is to communicate to students the method by which the computer performs these queries. This…

  15. 12th International Conference on Computer Graphics Theory and Applications

    CERN Document Server

    2017-01-01

    The International Conference on Computer Graphics Theory and Applications aims at becoming a major point of contact between researchers, engineers and practitioners in Computer Graphics. The conference will be structured along five main tracks, covering different aspects related to Computer Graphics, from Modelling to Rendering, including Animation, Interactive Environments and Social Agents In Computer Graphics.

  16. Dialog's Knowledge Index and BRS/After Dark: Database Searching on Personal Computers.

    Science.gov (United States)

    Tenopir, Carol

    1983-01-01

    Describes two new bibliographic information services being marketed to microcomputer owners by DIALOG, Inc. and Bibliographic Retrieval Services to allow access to databases at low rates during evening hours. Subject focus, selection of a database, search strategies employed on each system are discussed, and the two services are compared. (EJS)

  17. Advances in Parallel Computing and Databases for Digital Pathology in Cancer Research

    Science.gov (United States)

    2016-11-13

    be used in conjunction for wider analysis. • In-database analytics: using basic linear algebra approaches, several common processing steps such as ... and in-database linear algebra operations that are well suited for the storage and analysis of biomedical imaging data. SciDB is a full ACID (atomicity

  18. Undergraduate Use of CD-ROM Databases: Observations of Human-Computer Interaction and Relevance Judgments.

    Science.gov (United States)

    Shaw, Debora

    1996-01-01

    Describes a study that observed undergraduates as they searched bibliographic databases on a CD-ROM local area network. Topics include related research, information needs, evolution of search topics, database selection, search strategies, relevance judgments, CD-ROM interfaces, and library instruction. (Author/LRW)

  19. Computational intelligence and neuromorphic computing potential for cybersecurity applications

    Science.gov (United States)

    Pino, Robinson E.; Shevenell, Michael J.; Cam, Hasan; Mouallem, Pierre; Shumaker, Justin L.; Edwards, Arthur H.

    2013-05-01

    In today's highly mobile, networked, and interconnected internet world, the flow and volume of information is overwhelming and continuously increasing. Therefore, it is believed that the next frontier in technological evolution and development will rely on our ability to develop intelligent systems that can help us process, analyze, and make sense of information autonomously, just as a well-trained and educated human expert would. In computational intelligence, neuromorphic computing promises to allow for the development of computing systems able to imitate natural neurobiological processes and form the foundation for intelligent system architectures.

  20. Monet: a next-generation database kernel for query-intensive applications

    NARCIS (Netherlands)

    P.A. Boncz (Peter)

    2002-01-01

    Monet is a database kernel targeted at query-intensive, heavy analysis applications (the opposite of transaction processing), which include OLAP and data mining, but also go beyond the business domain in GIS processing, multi-media retrieval and XML. The clean sheet approach of Monet

  1. Monet: a next-generation database kernel for query-intensive applications

    NARCIS (Netherlands)

    Boncz, P.A.

    2002-01-01

    Monet is a database kernel targeted at query-intensive, heavy analysis applications (the opposite of transaction processing), which include OLAP and data mining, but also go beyond the business domain in GIS processing, multi-media retrieval and XML. The clean sheet approach of Monet tries to depart

  2. Examining the Factors That Contribute to Successful Database Application Implementation Using the Technology Acceptance Model

    Science.gov (United States)

    Nworji, Alexander O.

    2013-01-01

    Most organizations spend millions of dollars due to the impact of improperly implemented database application systems as evidenced by poor data quality problems. The purpose of this quantitative study was to use, and extend, the technology acceptance model (TAM) to assess the impact of information quality and technical quality factors on database…

  3. Implementation of Secondary Index on Cloud Computing NoSQL Database in Big Data Environment

    Directory of Open Access Journals (Sweden)

    Bao Rong Chang

    2015-01-01

    Full Text Available This paper introduces the combination of the NoSQL database HBase and the enterprise search platform Solr to tackle the problem of providing a secondary index function with fast queries. In order to verify the effectiveness and efficiency of the proposed approach, an assessment using the cost-performance ratio has been done for several competitive benchmark databases and the proposed one. As a result, our proposed approach outperforms the other databases and fulfills the secondary index function with fast queries in a NoSQL database. Moreover, according to the cross-sectional analysis, the proposed combination of HBase and Solr is capable of excellent query/response performance in a big data environment.
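
    A sketch of the query path such a combination enables (my illustration with hypothetical core, table and field names; assumes the pysolr and happybase client libraries): Solr answers the secondary-index query over a non-key column and returns HBase row keys, which are then fetched as fast primary-key lookups.

      import pysolr
      import happybase

      solr = pysolr.Solr("http://localhost:8983/solr/users")   # assumed Solr core
      hbase = happybase.Connection("localhost")                # assumed Thrift gateway
      table = hbase.table("users")

      # secondary-index query: find row keys by a non-key column via Solr...
      for hit in solr.search("city:Taipei", rows=10):
          # ...then a fast primary-key lookup in HBase for the full record
          row = table.row(hit["id"].encode())
          print(hit["id"], row.get(b"info:name"))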

  4. Cloud Database Management System (CDBMS

    Directory of Open Access Journals (Sweden)

    Snehal B. Shende

    2015-10-01

    Full Text Available A cloud database management system is a distributed database that delivers computing as a service. It involves the sharing of web infrastructure for resources, software and information over a network. The cloud is used as a storage location, and the database can be accessed and computed from anywhere. The large number of web applications makes use of distributed storage solutions in order to scale up. This enables users to outsource resources and services to third-party servers. This paper covers the recent trend of cloud services based on database management systems, offered as one of the services in the cloud. The advantages and disadvantages of database as a service will let you decide whether or not to use database as a service. This paper will also highlight the architecture of cloud-based database management systems.

  5. Professional iPhone and iPad Database Application Programming

    CERN Document Server

    Alessi, Patrick

    2010-01-01

    A much-needed resource on database development and enterprise integration for the iPhone. An enormous demand exists for getting iPhone applications into the enterprise and this book guides you through all the necessary steps for integrating an iPhone app within an existing enterprise. Experienced iPhone developers will learn how to take advantage of the built-in capabilities of the iPhone to confidently implement a data-driven application for the iPhone.: Shows you how to integrate iPhone applications into enterprise class systems; Introduces development of data-driven applications on the iPho

  6. The Application of Beowulf-Class Computing to Computational Electromagnetics

    Science.gov (United States)

    Katz, D. S.; Cwik, T.

    1998-01-01

    Current computational developments at the Jet Propulsion Laboratory (JPL) are motivated by the NASA/JPL goal of reducing payload in future space missions while increasing mission capability through miniaturization of active and passive sensors, analytical instruments and communication systems.

  7. Customizable neuroinformatics database system: XooNIps and its application to the pupil platform.

    Science.gov (United States)

    Yamaji, Kazutsuna; Sakai, Hiroyuki; Okumura, Yoshihiro; Usui, Shiro

    2007-07-01

    The developing field of neuroinformatics includes technologies for the collection and sharing of neuro-related digital resources. These resources will be of increasing value for understanding the brain. Developing a database system to integrate these disparate resources is necessary to make full use of them. This study proposes a base database system, termed XooNIps, that utilizes the content management system XOOPS. XooNIps is designed for developing databases in different research fields through customization of the option menu. In a XooNIps-based database, digital resources are stored according to their respective categories, e.g., research articles, experimental data, mathematical models, and stimuli, each associated with its related metadata. Several types of user authorization are supported for secure operations. In addition to directory and keyword searches within a given database, XooNIps searches simultaneously across other XooNIps-based databases on the Internet. Reviewing systems for user registration and data submission are incorporated to impose quality control. Furthermore, XOOPS modules containing news, forums, schedules, blogs and other information can be combined to enhance XooNIps functionality. These features provide better scalability, extensibility, and customizability to the general neuroinformatics community.

  8. COMPUTER APPLICATION SYSTEM FOR OPERATIONAL EFFICIENCY OF DIESEL RAILBUSES

    Directory of Open Access Journals (Sweden)

    Łukasz WOJCIECHOWSKI

    2016-09-01

    Full Text Available The article presents a computer algorithm to calculate an estimated operating cost analysis for diesel rail buses. This computer application system compares the cost of employing a locomotive and wagon, the cost of using locomotives, and the cost of using a rail bus. The intensive growth of passenger railway traffic has increased demand for modern computer systems to manage means of transportation. The described computer application operates on the basis of selected operating parameters of rail buses.

  9. On the development trend of computer application technology

    Institute of Scientific and Technical Information of China (English)

    Zhang Zhi

    2014-01-01

    The development of computer application technology has very important significance for people's lives and work, and dependence on computers is increasingly high. This places higher requirements on the development of computer application technology: only constant reform and renewal can meet the needs of social development. It is the only way to meet the demands of the concept of sustainable development and to provide an ongoing technological driving force for China's development. On this basis, the development trend of computer application technology is discussed.

  10. The National Landslide Database and GIS for Great Britain: construction, development, data acquisition, application and communication

    Science.gov (United States)

    Pennington, Catherine; Dashwood, Claire; Freeborough, Katy

    2014-05-01

    The National Landslide Database has been developed by the British Geological Survey (BGS) and is the focus for national geohazard research for landslides in Great Britain. The history and structure of the geospatial database and associated Geographical Information System (GIS) are explained, along with the future developments of the database and its applications. The database is the most extensive source of information on landslides in Great Britain with over 16,500 records of landslide events, each documented as fully as possible. Data are gathered through a range of procedures, including: incorporation of other databases; automated trawling of current and historical scientific literature and media reports; new field- and desk-based mapping technologies with digital data capture, and crowd-sourcing information through social media and other online resources. This information is invaluable for the investigation, prevention and mitigation of areas of unstable ground in accordance with Government planning policy guidelines. The national landslide susceptibility map (GeoSure) and a national landslide domain map currently under development rely heavily on the information contained within the landslide database. Assessing susceptibility to landsliding requires knowledge of the distribution of failures and an understanding of causative factors and their spatial distribution, whilst understanding the frequency and types of landsliding present is integral to modelling how rainfall will influence the stability of a region. Communication of landslide data through the Natural Hazard Partnership (NHP) contributes to national hazard mitigation and disaster risk reduction with respect to weather and climate. Daily reports of landslide potential are published by BGS through the NHP and data collected for the National Landslide Database is used widely for the creation of these assessments. The National Landslide Database is freely available via an online GIS and is used by a

  11. Computational intelligence for big data analysis frontier advances and applications

    CERN Document Server

    Dehuri, Satchidananda; Sanyal, Sugata

    2015-01-01

    The work presented in this book is a combination of theoretical advancements of big data analysis, cloud computing, and their potential applications in scientific computing. The theoretical advancements are supported with illustrative examples and its applications in handling real life problems. The applications are mostly undertaken from real life situations. The book discusses major issues pertaining to big data analysis using computational intelligence techniques and some issues of cloud computing. An elaborate bibliography is provided at the end of each chapter. The material in this book includes concepts, figures, graphs, and tables to guide researchers in the area of big data analysis and cloud computing.

  12. Architecture Design & Network Application of Cloud Computing

    Directory of Open Access Journals (Sweden)

    Mehzabul Hoque Nahid

    2015-08-01

    Full Text Available “Cloud” computing, a comparatively recent term, stands on decades of research and analysis in virtualization, distributed computing, utility computing and, more recently, computer networking, web technology and software services. Cloud computing represents a shift away from computing as a product that is purchased toward computing as a service that is delivered to consumers over the internet from large-scale data centers, or “clouds”. Whilst cloud computing is gaining popularity in the IT industry, academia appears to be lagging behind the developments in this field. Cloud computing also implies a service-oriented architecture, reduced information technology overhead for the end user, greater flexibility, reduced total cost of ownership, on-demand services and many other benefits. This paper discusses the concept of “cloud” computing, some of the issues it tries to address, related research topics, and a “cloud” implementation available today.

  13. Design of Student Information Management Database Application System for Office and Departmental Target Responsibility System

    Science.gov (United States)

    Zhou, Hui

    Implementing the office and departmental target responsibility system is a natural outcome of higher education reform, and the statistical processing of student information is an important part of student performance review under that system. On the basis of an analysis of student evaluation, a student information management database application system is designed in this paper using relational database management system software. To implement the student information management functions, the functional requirements, overall structure, data sheets and fields, data sheet associations and software code are designed in detail.
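
    As a minimal sketch of the kind of relational design described above (the paper does not name its DBMS; sqlite3 and all table and field names here are illustrative assumptions only):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE student (
            student_id INTEGER PRIMARY KEY,
            name       TEXT NOT NULL,
            department TEXT NOT NULL)""")
        conn.execute("""CREATE TABLE evaluation (
            student_id INTEGER REFERENCES student(student_id),
            item       TEXT,   -- e.g. coursework, conduct
            score      REAL)""")
        # Statistical processing of student information per department,
        # as used in a target responsibility performance review.
        rows = conn.execute("""SELECT s.department, AVG(e.score)
                               FROM student s
                               JOIN evaluation e ON s.student_id = e.student_id
                               GROUP BY s.department""").fetchall()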

  14. Migrating to the Cloud IT Application, Database, and Infrastructure Innovation and Consolidation

    CERN Document Server

    Laszewski, Tom

    2011-01-01

    Whether your company is planning on database migration, desktop application migration, or has IT infrastructure consolidation projects, this book gives you all the resources you'll need. It gives you recommendations on tools, strategy and best practices and serves as a guide as you plan, determine effort and budget, design, execute and roll your modern Oracle system out to production. Focusing on Oracle grid relational database technology and Oracle Fusion Middleware as the target cloud-based architecture, your company can gain organizational efficiency, agility, increase innovation and reduce

  15. Elements of quantum computing history, theories and engineering applications

    CERN Document Server

    Akama, Seiki

    2015-01-01

    A quantum computer is a computer based on a computational model that uses quantum mechanics, the subfield of physics that studies phenomena at the micro level. Interest in quantum computing has grown since the 1990s, and some experimental quantum computers have recently been implemented. Quantum computers enable super-fast computation and can solve some important problems whose solutions were regarded as impossible or intractable on traditional computers. This book provides a quick introduction to quantum computing for readers who have no background in either the theory of computation or quantum mechanics. “Elements of Quantum Computing” presents the history, theories, and engineering applications of quantum computing. The book is suitable for computer scientists, physicists, and software engineers.

  16. Wireless access to a pharmaceutical database: A demonstrator for data driven Wireless Application Prorocol (WAP) applications in medical information processing

    DEFF Research Database (Denmark)

    Hansen, Michael Schacht; Dørup, Jens

    2001-01-01

    catalogue to Wireless Application Protocol using open source freeware at all steps. METHODS: We used Apache 1.3 web software on a Linux server. Data containing the Danish pharmaceutical catalogue were imported from an ASCII file into a MySQL 3.22.32 database using a Practical Extraction and Report Language...

  17. Research in mathematical theory of computation. [computer programming applications

    Science.gov (United States)

    Mccarthy, J.

    1973-01-01

    Research progress in the following areas is reviewed: (1) a new version of the computer program LCF (logic for computable functions), including a facility to search for proofs automatically; (2) the description of the language PASCAL in terms of both LCF and first-order logic; (3) discussion of LISP semantics in LCF and an attempt to prove the correctness of the London compilers in a formal way; (4) design of both special-purpose and domain-independent proving procedures, with program correctness specifically in mind; (5) design of languages for describing such proof procedures; and (6) the embedding of ideas in the first-order checker.

  18. Applicability of Rydberg atoms to quantum computers

    Science.gov (United States)

    Ryabtsev, Igor I.; Tretyakov, Denis B.; Beterov, Ilya I.

    2005-01-01

    The applicability of Rydberg atoms to quantum computers is examined from an experimental point of view. In many recent theoretical proposals, the excitation of atoms into highly excited Rydberg states was considered as a way to achieve quantum entanglement in cold atomic ensembles via dipole-dipole interactions that could be strong for Rydberg atoms. Appropriate conditions to realize a conditional quantum phase gate have been analysed. We also present the results of modelling experiments on microwave spectroscopy of single- and multi-atom excitations at the one-photon 37S1/2 → 37P1/2 and two-photon 37S1/2 → 38S1/2 transitions in an ensemble of a few sodium Rydberg atoms. The microwave spectra were investigated for various final states of the ensemble initially prepared in its ground state. The results may be applied to the studies on collective laser excitation of ground-state atoms aiming to realize quantum gates.

  19. Computational materials design for energy applications

    Science.gov (United States)

    Ozolins, Vidvuds

    2013-03-01

    General adoption of sustainable energy technologies depends on the discovery and development of new high-performance materials. For instance, waste heat recovery and electricity generation via the solar thermal route require bulk thermoelectrics with a high figure of merit (ZT) and thermal stability at high temperatures. Energy recovery applications (e.g., regenerative braking) call for the development of rapidly chargeable systems for electrical energy storage, such as electrochemical supercapacitors. Similarly, use of hydrogen as vehicular fuel depends on the ability to store hydrogen at high volumetric and gravimetric densities, as well as on the ability to extract it at ambient temperatures at sufficiently rapid rates. We will discuss how first-principles computational methods based on quantum mechanics and statistical physics can drive the understanding, improvement and prediction of new energy materials. We will cover prediction and experimental verification of new earth-abundant thermoelectrics, transition metal oxides for electrochemical supercapacitors, and kinetics of mass transport in complex metal hydrides. Research has been supported by the US Department of Energy under grant Nos. DE-SC0001342, DE-SC0001054, DE-FG02-07ER46433, and DE-FC36-08GO18136.

  20. Development and application of basis database for materials life cycle assessment in china

    Science.gov (United States)

    Li, Xiaoqing; Gong, Xianzheng; Liu, Yu

    2017-03-01

    Because materials life cycle assessment (MLCA) is a data-intensive method, high quality environmental burden data are an important premise for carrying it out, and the reliability of the data directly influences the reliability of the assessment results and their usefulness in applications. Building a Chinese MLCA database therefore provides the basic data and technical support needed to carry out and improve LCA practice. Firstly, recent progress on databases related to materials life cycle assessment research and development is introduced. Secondly, following the requirements of the ISO 14040 series of standards, the database framework and main datasets of the materials life cycle assessment are studied. Thirdly, an MLCA data platform based on big data is developed. Finally, future research directions are proposed and discussed.

  1. A fusion algorithm for joins based on collections in Odra (Object Database for Rapid Application development)

    CERN Document Server

    Satish, Laika

    2011-01-01

    In this paper we present the functionality of ODRA (Object Database for Rapid Application development), a database programming methodology currently under development that is built fully on object-oriented principles. Its database programming language is called SBQL (Stack Based Query Language). We discuss several concepts in ODRA, e.g. how ODRA works, how the ODRA runtime environment operates, the interoperability of ODRA with .NET and Java, and a view of ODRA's interaction with web services and XML. Query optimization is one of the stages of ODRA currently under development, so we present the prior work done in ODRA related to query optimization, and we also present a new fusion algorithm describing how ODRA can handle joins based on collections such as sets, lists, and arrays for query optimization.
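
    The abstract does not give the fusion algorithm itself; the following Python sketch only illustrates the general idea of a hash-based join over in-memory collections (function and field names are invented for the example):

        def hash_join(left, right, key_left, key_right):
            """Join two collections on computed keys: build, then probe."""
            buckets = {}
            for l in left:                       # build phase
                buckets.setdefault(key_left(l), []).append(l)
            for r in right:                      # probe phase
                for l in buckets.get(key_right(r), []):
                    yield (l, r)

        employees = [{"name": "Ann", "dept": 1}, {"name": "Bo", "dept": 2}]
        depts = [{"id": 1, "label": "R&D"}, {"id": 2, "label": "Sales"}]
        pairs = list(hash_join(employees, depts,
                               lambda e: e["dept"], lambda d: d["id"]))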

  2. Application of computational intelligence to biology

    CERN Document Server

    Sekhar, Akula

    2016-01-01

    This book is a contribution of translational and allied research to the proceedings of the International Conference on Computational Intelligence and Soft Computing. It explains how various computational intelligence techniques can be applied to investigate various biological problems. It is a good read for Research Scholars, Engineers, Medical Doctors and Bioinformatics researchers.

  3. Application of computational systems biology to explore environmental toxicity hazards

    DEFF Research Database (Denmark)

    Audouze, Karine Marie Laure; Grandjean, Philippe

    2011-01-01

    Background: Computer-based modeling is part of a new approach to predictive toxicology.Objectives: We investigated the usefulness of an integrated computational systems biology approach in a case study involving the isomers and metabolites of the pesticide dichlorodiphenyltrichloroethane (DDT......) to ascertain their possible links to relevant adverse effects.Methods: We extracted chemical-protein association networks for each DDT isomer and its metabolites using ChemProt, a disease chemical biology database that includes both binding and gene expression data, and we explored protein-protein interactions...... using a human interactome network. To identify associated dysfunctions and diseases, we integrated protein-disease annotations into the protein complexes using the Online Mendelian Inheritance in Man database and the Comparative Toxicogenomics Database.Results: We found 175 human proteins linked to p,p´-DDT...

  4. Exploring Natural Products from the Biodiversity of Pakistan for Computational Drug Discovery Studies: Collection, Optimization, Design and Development of A Chemical Database (ChemDP).

    Science.gov (United States)

    Mirza, Shaher Bano; Bokhari, Habib; Fatmi, Muhammad Qaiser

    2015-01-01

    Pakistan possesses a rich and vast source of natural products (NPs). Some of these secondary metabolites have been identified as potent therapeutic agents. However, the medicinal usage of most of these compounds has not yet been fully explored. Discoveries of new scaffolds of NPs as inhibitors of certain enzymes or receptors using advanced computational drug discovery approaches are also limited due to the unavailability of accurate 3D structures of NPs. An organized database incorporating all relevant information can therefore facilitate exploration of the medicinal importance of metabolites from Pakistani biodiversity. The Chemical Database of Pakistan (ChemDP; release 01) is a fully-referenced, evolving, web-based, virtual database which has been designed and developed to introduce natural products (NPs) and their derivatives from the biodiversity of Pakistan to global scientific communities. The prime aim is to provide quality structures of compounds with relevant information for computer-aided drug discovery studies. For this purpose, over 1000 NPs have been identified from more than 400 published articles, for which 2D and 3D molecular structures have been generated with a special focus on their stereochemistry, where applicable. The PM7 semiempirical quantum chemistry method has been used to energy-optimize the 3D structures of NPs. The 2D and 3D structures can be downloaded as .sdf, .mol, .sybyl, .mol2, and .pdb files - formats readable by many chemoinformatics/bioinformatics software packages. Each entry in ChemDP contains over 100 data fields representing various molecular, biological, physico-chemical and pharmacological properties, which have been properly documented in the database for end users. These pieces of information have been either manually extracted from the literature or computationally calculated using various computational tools. Cross-referencing to a major data repository, i.e. ChemSpider, has been made available for overlapping

  5. GSTARS computer models and their applications, Part II: Applications

    Institute of Scientific and Technical Information of China (English)

    Francisco J. M. SIMÕES; Chih Ted YANG

    2008-01-01

    In part 1 of this two-paper series, a brief summary of the basic concepts and theories used in developing the Generalized Stream Tube model for Alluvial River Simulation (GSTARS) computer models was presented. Part 2 provides examples that illustrate some of the capabilities of the GSTARS models and how they can be applied to solve a wide range of river and reservoir sedimentation problems. Laboratory and field case studies are used, and the examples show representative applications of the earlier and of the more recent versions of GSTARS. Some of the more recent capabilities implemented in GSTARS3, one of the latest versions of the series, are also discussed here in more detail.

  6. A Method to Ease the Deployment of Web Applications that Involve Database Systems A Method to Ease the Deployment of Web Applications that Involve Database Systems

    Directory of Open Access Journals (Sweden)

    Antonio Vega Corona

    2012-02-01

    Full Text Available The continuous growth of the Internet has driven people all around the globe to perform transactions on-line, search for information or navigate using a browser. As more people feel comfortable using a Web browser, more software companies are trying to offer Web interfaces as an alternative way to provide access to their applications. The nature of the Web connection and the restrictions imposed by the available bandwidth make the successful integration of Web applications and database systems critical. Because popular database applications provide a user interface to edit and maintain the information in the database, and because each column in a database table maps to a graphic user interface control, the deployment of these applications can be time consuming; appropriate field validation and referential integrity rules must be observed. An object-oriented design is proposed to ease the development of applications that use database systems.
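
    The core observation, that each database column maps to a user interface control, can be sketched as follows in Python (the widget names are placeholders rather than a real GUI toolkit, and the table is invented):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, born DATE)")

        WIDGETS = {"INTEGER": "SpinBox", "TEXT": "LineEdit", "DATE": "DatePicker"}

        def controls_for(table):
            # PRAGMA table_info yields (cid, name, type, notnull, default, pk)
            for _, name, sqltype, *_ in conn.execute(f"PRAGMA table_info({table})"):
                yield name, WIDGETS.get(sqltype, "LineEdit")

        print(list(controls_for("customer")))
        # [('id', 'SpinBox'), ('name', 'LineEdit'), ('born', 'DatePicker')]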

  7. Power-aware applications for scientific cluster and distributed computing

    CERN Document Server

    Abdurachmanov, David; Eulisse, Giulio; Grosso, Paola; Hillegas, Curtis; Holzman, Burt; Klous, Sander; Knight, Robert; Muzaffar, Shahzad

    2014-01-01

    The aggregate power use of computing hardware is an important cost factor in scientific cluster and distributed computing systems. The Worldwide LHC Computing Grid (WLCG) is a major example of such a distributed computing system, used primarily for high throughput computing (HTC) applications. It has a computing capacity and power consumption rivaling that of the largest supercomputers. The computing capacity required from this system is also expected to grow over the next decade. Optimizing the power utilization and cost of such systems is thus of great interest. A number of trends currently underway will provide new opportunities for power-aware optimizations. We discuss how power-aware software applications and scheduling might be used to reduce power consumption, both as autonomous entities and as part of a (globally) distributed system. As concrete examples of computing centers we provide information on the large HEP-focused Tier-1 at FNAL, and the Tigress High Performance Computing Center at Princeton U...

  8. A study on the application of database into engineering geology : Landslide analysis case

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sang-Kyun; Park, Hyeong-Dong [Seoul National University, Seoul (Korea)

    1999-04-30

    Application of a database is required to prevent the damage caused by landslides, which result from various factors. It is therefore highly important to select relevant factors as fields for the database. After a literature review of previous studies, it was found that stratigraphic information and slake durability have rarely been used as fields for landslide databases in Korea. In this paper the influence of stratigraphy on slope instability was studied. A field survey at Cheju Island confirmed that there were some unstable slopes caused by the movement of lower weak rock. The slake durability test was used to indicate whether slopes of certain rock types were potentially unstable. Five tests were conducted using rock samples from several slopes in Cheju Island and the Kyongju area. The results showed that some rock types were highly susceptible to weathering caused by heavy rainfall. Thus, stratigraphic information and the slake durability of rock materials should be considered as fields for a landslide database in Korea. Such a database can be used to construct landslide hazard maps. (author). 27 refs., 2 tabs., 5 figs.

  9. Preference of computer technology for analytical support of large databases of medical information systems

    Directory of Open Access Journals (Sweden)

    Biryukov А.P.

    2013-12-01

    Full Text Available Aim: to study the use of intelligent technologies for the analytical support of large databases of medical information systems. Material and methods. We used the techniques of object-oriented software design and database design. Results. Based on an expert review of models and algorithms for the analysis of clinical and epidemiological data, and of principles of knowledge representation in large-scale health information systems, data mining schemas were implemented in the software package of the register of the Research Center n.a. A. I. Burnazyan of Russia. Areas were identified for the effective implementation of the abstract entity-attribute-value (EAV) data model and data mining procedures in the design of databases of biomedical registers. Conclusions. Using an intelligent software platform that supports different sets of APIs and object models for different operations in different software environments allows an information system to be built and maintained through biomedical data processing procedures.
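
    A minimal sketch of the entity-attribute-value (EAV) layout mentioned above, with sqlite3 standing in for the register's actual DBMS and all attribute names invented:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE eav (
            entity    INTEGER,   -- e.g. a patient record id
            attribute TEXT,      -- e.g. 'diagnosis', 'dose_mSv'
            value     TEXT)""")
        conn.executemany("INSERT INTO eav VALUES (?, ?, ?)",
                         [(1, "diagnosis", "J45"), (1, "dose_mSv", "1.2")])
        # Pivot one entity back into a record-like dict:
        record = dict(conn.execute(
            "SELECT attribute, value FROM eav WHERE entity = 1"))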

  10. Constructing Bio-molecular Databases on a DNA-based Computer

    CERN Document Server

    Chang, Weng-Long; Ho,; Guo, Minyi

    2007-01-01

    Codd [Codd 1970] wrote the first paper in which the model of a relational database was proposed. Adleman [Adleman 1994] wrote the first paper in which DNA strands in a test tube were used to solve an instance of the Hamiltonian path problem. From [Adleman 1994], it is evident that storing information in molecules of DNA allows for an information density of approximately 1 bit per cubic nm (nanometer), a dramatic improvement over existing storage media such as video tape, which store information at a density of approximately 1 bit per 10^12 cubic nanometers. This paper demonstrates that biological operations can be applied to construct bio-molecular databases where data records in relational tables are encoded as DNA strands. In order to achieve this goal, DNA algorithms are proposed to perform eight operations of relational algebra (calculus) on bio-molecular relational databases: Cartesian product, union, set difference, selection, projection, intersection, join and division. Fu...
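
    For reference, the eight relational-algebra operations that the paper implements with DNA strands have the following conventional semantics, sketched here over Python sets of tuples with toy data:

        A = {(1, "x"), (2, "y")}
        B = {(2, "y"), (3, "z")}

        union        = A | B
        difference   = A - B
        intersection = A & B
        cartesian    = {(a, b) for a in A for b in B}
        selection    = {t for t in A if t[0] > 1}    # sigma: filter rows
        projection   = {(t[1],) for t in A}          # pi: keep chosen columns
        # join and division can be composed from the primitives above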

  11. Intelligent computing systems emerging application areas

    CERN Document Server

    Virvou, Maria; Jain, Lakhmi

    2016-01-01

    This book at hand explores emerging scientific and technological areas in which Intelligent Computing Systems provide efficient solutions and, thus, may play a role in the years to come. It demonstrates how Intelligent Computing Systems make use of computational methodologies that mimic nature-inspired processes to address real world problems of high complexity for which exact mathematical solutions, based on physical and statistical modelling, are intractable. Common intelligent computational methodologies are presented including artificial neural networks, evolutionary computation, genetic algorithms, artificial immune systems, fuzzy logic, swarm intelligence, artificial life, virtual worlds and hybrid methodologies based on combinations of the previous. The book will be useful to researchers, practitioners and graduate students dealing with mathematically-intractable problems. It is intended for both the expert/researcher in the field of Intelligent Computing Systems, as well as for the general reader in t...

  12. APPLICATIONS OF CLOUD COMPUTING SERVICES IN EDUCATION – CASE STUDY

    Directory of Open Access Journals (Sweden)

    Tomasz Cieplak

    2014-11-01

    Full Text Available Applications of Cloud Computing in enterprises are very wide-ranging. By contrast, educational applications of Cloud Computing in Poland are somewhat limited. On the other hand, young people use Cloud Computing services frequently; utilization of Facebook, Google and other services by young people in Poland is almost the same as in Western Europe or the USA. Taking these considerations into account, a few years ago the authors started a process of popularizing and using educational Cloud Computing services in their professional work. This article briefly summarizes the authors’ experience with selected and most popular Cloud Computing services.

  13. Mobile Cloud Computing: A Comparison of Application Models

    CERN Document Server

    Kovachev, Dejan; Klamma, Ralf

    2011-01-01

    Cloud computing is an emerging concept combining many fields of computing. The foundation of cloud computing is the delivery of services, software and processing capacity over the Internet, reducing cost, increasing storage, automating systems, decoupling service delivery from the underlying technology, and providing flexibility and mobility of information. However, the actual realization of these benefits is far from being achieved for mobile applications and opens many new research questions. In order to better understand how to facilitate the building of mobile cloud-based applications, we have surveyed existing work in mobile computing through the prism of cloud computing principles. We give a definition of mobile cloud computing and provide an overview of the results from this review, in particular, models of mobile cloud applications. We also highlight research challenges in the area of mobile cloud computing. We conclude with recommendations for how this better understanding of mobile cloud computing can ...

  14. A study on retrieval of article and making database in radio technology with personal computer

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sung Hwan [College of Medicine, Hallym Univ., Anyang (Korea, Republic of)

    1997-02-01

    Although many useful articles appear in journals published in Korea, they are not always cited by researchers, mainly due to the absence of an efficient searching system. The author wrote a program with four predefined filtering forms to retrieve published articles rapidly and accurately. The program was coded using the database management system CA-Clipper VER 5.2, on a 486DX-II (8 Mbyte RAM, VGA, 560 Mbyte hard disk) with a desk-jet printer (HP-560k) and MS-DOS VER 5.0. Twenty articles from the journal of Korean Society Radio technological Technology were entered, and the program was tested for article retrieval and database construction.

  15. Wireless access to a pharmaceutical database: A demonstrator for data driven Wireless Application Prorocol (WAP) applications in medical information processing

    DEFF Research Database (Denmark)

    Hansen, Michael Schacht; Dørup, Jens

    2001-01-01

    catalogue to Wireless Application Protocol using open source freeware at all steps. METHODS: We used Apache 1.3 web software on a Linux server. Data containing the Danish pharmaceutical catalogue were imported from an ASCII file into a MySQL 3.22.32 database using a Practical Extraction and Report Language...... number catalogue and Dictaphone function via answering machine technology. An independent Wireless Application Protocol gateway may be placed within hospital firewalls, which may be an advantage with respect to security. However, if Wireless Application Protocol phones are to become effective tools...
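
    The import step described (an ASCII catalogue loaded into a relational database) might look like the following Python sketch; the original work used Perl and MySQL 3.22.32, and the file name, delimiter and columns below are assumptions:

        import csv
        import sqlite3

        conn = sqlite3.connect("catalogue.db")
        conn.execute("CREATE TABLE IF NOT EXISTS drug (name TEXT, form TEXT, price REAL)")
        with open("catalogue.txt", encoding="latin-1") as fh:
            for name, form, price in csv.reader(fh, delimiter=";"):
                conn.execute("INSERT INTO drug VALUES (?, ?, ?)",
                             (name, form, float(price)))
        conn.commit()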

  16. A parallel model for SQL astronomical databases based on solid state storage. Application to the Gaia Archive PostgreSQL database

    Science.gov (United States)

    González-Núñez, J.; Gutiérrez-Sánchez, R.; Salgado, J.; Segovia, J. C.; Merín, B.; Aguado-Agelet, F.

    2017-07-01

    Query planning and optimisation algorithms in most popular relational databases were developed at a time when hard disk drives were the only storage technology available. The advent of devices with higher parallel random access capacity, such as solid state disks, opens up the way for intra-machine parallel computing over large datasets. We describe a two-phase parallel model for the implementation of heavy analytical processes in single-instance PostgreSQL astronomical databases. This model is particularised to address two frequent astronomical problems, density maps and crossmatch computation with Quad Tree Cube (Q3C) indexes. They are implemented as part of the relational database infrastructure for the Gaia Archive, and their performance is assessed. An improvement of a factor of 28.40 over sequential execution is observed in the reference implementation for a histogram computation. Speedup ratios of 3.7 and 4.0 are attained for the reference positional crossmatches considered. We observe large performance enhancements over sequential execution for both CPU- and disk-access-intensive computations, suggesting these methods might be useful with the growing data volumes in Astronomy.
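
    The two-phase model can be pictured as a map/merge pattern: partial results are computed per data partition in parallel and then combined. A toy Python sketch follows (partition bounds and bin width are invented; the paper's workers would instead issue SQL against PostgreSQL):

        from collections import Counter
        from multiprocessing import Pool

        def partial_histogram(bounds):
            lo, hi = bounds
            data = (x * 0.001 for x in range(lo, hi))   # synthetic column
            return Counter(int(v * 10) for v in data)   # bin width 0.1

        if __name__ == "__main__":
            partitions = [(0, 25000), (25000, 50000),
                          (50000, 75000), (75000, 100000)]
            with Pool(4) as pool:                       # phase 1: parallel map
                parts = pool.map(partial_histogram, partitions)
            total = sum(parts, Counter())               # phase 2: merge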

  17. Computer Science and Technology: Measurement of Interative Computing: Methodology and Application.

    Science.gov (United States)

    Cotton, Ira W.

    This dissertation reports the development and application of a new methodology for the measurement and evaluation of interactive computing, applied to either the users of an interactive computing system or to the system itself, including the service computer and any communications network through which the service is delivered. The focus is on the…

  18. Applications of parallel supercomputers: Scientific results and computer science lessons

    Energy Technology Data Exchange (ETDEWEB)

    Fox, G.C.

    1989-07-12

    Parallel computing has come of age with several commercial and in-house systems that deliver supercomputer performance. We illustrate this with several major computations completed or underway at Caltech on hypercubes, transputer arrays and the SIMD Connection Machine CM-2 and AMT DAP. Applications covered are lattice gauge theory, computational fluid dynamics, subatomic string dynamics, statistical and condensed matter physics, theoretical and experimental astronomy, quantum chemistry, plasma physics, grain dynamics, computer chess, graphics ray tracing, and Kalman filters. We use these applications to compare the performance of several advanced-architecture computers, including the conventional CRAY and ETA-10 supercomputers. We describe which problems are suitable for which computers in terms of a matching between problem and computer architecture. This is part of a set of lessons we draw for hardware, software, and performance. We speculate on the emergence of new academic disciplines motivated by the growing importance of computers. 138 refs., 23 figs., 10 tabs.

  19. International Conference on Frontiers of Intelligent Computing : Theory and Applications

    CERN Document Server

    Udgata, Siba; Biswal, Bhabendra

    2014-01-01

    This volume contains the papers presented at the Second International Conference on Frontiers in Intelligent Computing: Theory and Applications (FICTA-2013), held during 14-16 November 2013 and organized by Bhubaneswar Engineering College (BEC), Bhubaneswar, Odisha, India. It contains 63 papers focusing on the application of intelligent techniques, including evolutionary computation techniques such as genetic algorithms, particle swarm optimization and teaching-learning based optimization, to various engineering applications such as data mining, fuzzy systems, machine intelligence and ANN, web technologies, multimedia applications, and intelligent computing and networking.

  20. TuBaFrost 5: multifunctional central database application for a European tumor bank.

    Science.gov (United States)

    Isabelle, M; Teodorovic, I; Morente, M M; Jaminé, D; Passioukov, A; Lejeune, S; Therasse, P; Dinjens, W N M; Oosterhuis, J W; Lam, K H; Oomen, M H A; Spatz, A; Ratcliffe, C; Knox, K; Mager, R; Kerr, D; Pezzella, F; van de Vijver, M; van Boven, H; Alonso, S; Kerjaschki, D; Pammer, J; Lopez-Guerrero, J A; Llombart Bosch, A; Carbone, A; Gloghini, A; van Veen, E-B; van Damme, B; Riegman, P H J

    2006-12-01

    Developing a tissue bank database has become more than just logically arranging data in tables combined with a search engine. Current demand for high quality samples and data, and the ever-changing legal and ethical regulations mean that the application must reflect TuBaFrost rules and protocols for the collection, exchange and use of tissue. To ensure continuation and extension of the TuBaFrost European tissue bank, the custodianship of the samples, and hence the decision over whether to issue samples to requestors, remains with the local collecting centre. The database application described in this article has been developed to facilitate this open structure virtual tissue bank model serving a large group. It encompasses many key tasks, without the requirement for personnel, hence minimising operational costs. The Internet-accessible database application enables search, selection and request submission for requestors, whereas collectors can upload and edit their collection. Communication between requestor and involved collectors is started with automatically generated e-mails.

  1. Pivotal role of computers and software in mass spectrometry - SEQUEST and 20 years of tandem MS database searching.

    Science.gov (United States)

    Yates, John R

    2015-11-01

    Advances in computer technology and software have driven developments in mass spectrometry over the last 50 years. Computers and software have been impactful in three areas: the automation of difficult calculations to aid interpretation, the collection of data and control of instruments, and data interpretation. As the power of computers has grown, so too has their utility and impact on mass spectrometers and their capabilities. This has been particularly evident in the use of tandem mass spectrometry data to search protein and nucleotide sequence databases to identify peptide and protein sequences. This capability has driven the development of many new approaches to study biological systems, including the use of "bottom-up shotgun proteomics" to directly analyze protein mixtures.

  2. The Emdros Text Database Engine as a Platform for Persuasive Computing

    DEFF Research Database (Denmark)

    Sandborg-Petersen, Ulrik

    2013-01-01

    This paper describes the nature and scope of Emdros, a text database engine for annotated text. Three case-studies of persuasive learning systems using Emdros as an important architectural component are described, and their status as to participation in the three legs of BJ Fogg's Functional Tria...

  3. Generalised Computability and Applications to Hybrid Systems

    DEFF Research Database (Denmark)

    Korovina, Margarita V.; Kudinov, Oleg V.

    2001-01-01

    We investigate the concept of generalised computability of operators and functionals defined on the set of continuous functions, firstly introduced in [9]. By working in the reals, with equality and without equality, we study properties of generalised computable operators and functionals. Also we...

  4. Database design and SQL for DB2

    CERN Document Server

    Cooper, James

    2013-01-01

    Thorough and updated coverage of database design and SQL for DB2 are the focus of this guide for the relational database-management system used on IBM i computer systems. Suitable for classroom instruction or self-study, this book explains the most widely used database language and the way that language is implemented on a variety of computer platforms. Topics covered include database concepts, SQL inquiries, web applications, and database security, and the material is reinforced by numerous illustrations, examples, and exercises.

  5. A Performance/Cost Evaluation for a GPU-Based Drug Discovery Application on Volunteer Computing

    Directory of Open Access Journals (Sweden)

    Ginés D. Guerrero

    2014-01-01

    Full Text Available Bioinformatics is an interdisciplinary research field that develops tools for the analysis of large biological databases, and thus the use of high performance computing (HPC) platforms is mandatory for the generation of useful biological knowledge. The latest generation of graphics processing units (GPUs) has democratized the use of HPC as they push desktop computers to cluster-level performance. Many applications within this field have been developed to leverage these powerful and low-cost architectures. However, these applications still need to scale to larger GPU-based systems to enable remarkable advances in the fields of healthcare, drug discovery, genome research, etc. The inclusion of GPUs in HPC systems exacerbates power and temperature issues, increasing the total cost of ownership (TCO). This paper explores the benefits of volunteer computing to scale bioinformatics applications as an alternative to owning large GPU-based local infrastructures. As a benchmark we use a GPU-based drug discovery application called BINDSURF whose computational requirements go beyond a single desktop machine. Volunteer computing is presented as a cheap and valid HPC system for those bioinformatics applications that need to process huge amounts of data and where the response time is not a critical factor.

  6. Social Studies: Application Units. Course II, Teachers. Computer-Oriented Curriculum. REACT (Relevant Educational Applications of Computer Technology).

    Science.gov (United States)

    Tecnica Education Corp., San Carlos, CA.

    This book is one of a series in Course II of the Relevant Educational Applications of Computer Technology (REACT) Project. It is designed to point out to teachers two of the major applications of computers in the social sciences: simulation and data analysis. The first section contains a variety of simulation units organized under the following…

  7. Computer Applications in Production and Engineering

    DEFF Research Database (Denmark)

    Sørensen, Torben

    1997-01-01

    into an integrated manufacturing unit. Such units are known as Computer Integrated Manufacturing and Engineering (CIME) systems. The basic concept in CIME is to share and reuse information between the different computer-based subsystems. Consequently, for integration purposes, the CIME systems are highly......This paper addresses how neutral product model interfaces can be identified, specified, and implemented to provide intelligent and flexible means for information management in the manufacturing of discrete mechanical products. The use of advanced computer-based systems, such as CAD, CAE, CNC, and robotics

  8. Hetero-DB:Next Generation High-Performance Database Systems by Best Utilizing Heterogeneous Computing and Storage Resources

    Institute of Scientific and Technical Information of China (English)

    张凯; 陈峰; 丁晓宁; 槐寅; 李如豹; 罗天; 王凯博; 袁源; 张晓东

    2015-01-01

    With recent advancements in hardware technologies, new general-purpose high-performance devices have been widely adopted, such as the graphics processing unit (GPU) and solid state drive (SSD). The GPU may offer an order of magnitude higher throughput for applications with massive data parallelism, compared with the multicore CPU. Moreover, the new storage device SSD is also capable of offering a much higher I/O throughput and lower latency than a traditional hard disk device (HDD). These new hardware devices can significantly boost the performance of many applications; thus the database community has been actively engaged in adopting them into database systems. However, the performance benefit cannot be easily reaped if the new hardware is used improperly. In this paper, we propose Hetero-DB, a high-performance database system that exploits both the characteristics of the database system and the special properties of the new hardware devices in the system's design and implementation. Hetero-DB develops a GPU-aware query execution engine with GPU device memory management and a query scheduling mechanism to support concurrent query execution. Furthermore, with the SSD-HDD hybrid storage system, we redesign the storage engine by organizing HDD and SSD into a two-level caching hierarchy in Hetero-DB. To best utilize the hybrid hardware devices, the semantic information that is critical for storage I/O is identified and passed to the storage manager, which has great potential to improve efficiency and performance. Hetero-DB aims to maximize the performance benefits of GPU and SSD, and demonstrates its effectiveness for designing next generation database systems.
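
    The SSD-over-HDD caching hierarchy can be approximated by a bounded fast tier with LRU eviction in front of a slow tier; the class below is a toy illustration of that idea, not Hetero-DB's actual storage manager:

        from collections import OrderedDict

        class TwoLevelStore:
            def __init__(self, fast_capacity, slow):
                self.fast = OrderedDict()   # stands in for the SSD tier
                self.cap = fast_capacity
                self.slow = slow            # stands in for the HDD tier

            def read(self, key):
                if key in self.fast:                # fast-tier hit
                    self.fast.move_to_end(key)
                    return self.fast[key]
                value = self.slow[key]              # slow-tier miss path
                self.fast[key] = value              # promote the page
                if len(self.fast) > self.cap:       # evict the LRU page
                    self.fast.popitem(last=False)
                return value

        store = TwoLevelStore(2, {"p1": b"a", "p2": b"b", "p3": b"c"})
        store.read("p1")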

  9. Application of the Access Database in Enrollment Management System

    Institute of Scientific and Technical Information of China (English)

    殷洪杰

    2012-01-01

    With the rapid development of computer technology and the progress of the Access database, colleges and universities should fully apply the Access database in their enrollment management systems. This article briefly analyzes and researches the application of the Access database in enrollment management systems.

  10. High-Performance Cloud Computing: A View of Scientific Applications

    CERN Document Server

    Vecchiola, Christian; Buyya, Rajkumar

    2009-01-01

    Scientific computing often requires the availability of a massive number of computers for performing large scale experiments. Traditionally, these needs have been addressed by using high-performance computing solutions and installed facilities such as clusters and supercomputers, which are difficult to set up, maintain, and operate. Cloud computing provides scientists with a completely new model of utilizing the computing infrastructure. Compute resources, storage resources, as well as applications, can be dynamically provisioned (and integrated within the existing infrastructure) on a pay-per-use basis. These resources can be released when they are no longer needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensures the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers to users the desired QoS. Its flexible and service based infrastructure...

  11. 2nd International Conference on Intelligent Computing and Applications

    CERN Document Server

    Dash, Subhransu; Das, Swagatam; Panigrahi, Bijaya

    2017-01-01

    The Second International Conference on Intelligent Computing and Applications was an annual research conference that aimed to bring together researchers from around the world to exchange research results and address open issues in all aspects of intelligent computing and applications. The main objective of the second edition of the conference, for scientists, scholars, engineers and students from academia and industry, was to present ongoing research activities and hence to foster research relations between universities and industry. The theme of the conference unified the picture of contemporary intelligent computing techniques as an integral concept that highlights the trends in computational intelligence and bridges theoretical research concepts with applications. The conference covered vital issues ranging from intelligent computing, soft computing, and communication to machine learning, industrial automation, process technology and robotics. This conference also provided a variety of opportunities for ...

  12. Mathematical modeling and computational intelligence in engineering applications

    CERN Document Server

    Silva Neto, Antônio José da; Silva, Geraldo Nunes

    2016-01-01

    This book brings together a rich selection of studies in mathematical modeling and computational intelligence, with application in several fields of engineering, like automation, biomedical, chemical, civil, electrical, electronic, geophysical and mechanical engineering, on a multidisciplinary approach. Authors from five countries and 16 different research centers contribute with their expertise in both the fundamentals and real problems applications based upon their strong background on modeling and computational intelligence. The reader will find a wide variety of applications, mathematical and computational tools and original results, all presented with rigorous mathematical procedures. This work is intended for use in graduate courses of engineering, applied mathematics and applied computation where tools as mathematical and computational modeling, numerical methods and computational intelligence are applied to the solution of real problems.

  13. Application of Bioinformatics Related Databases

    Institute of Scientific and Technical Information of China (English)

    赵苏苏; 赖仁胜

    2011-01-01

    The paper introduces the classification of bioinformatics databases and elaborates the contents and applications of the following database resources: integrated databases, protein sequence databases, RNA sequence databases, human and other vertebrate genome databases, human genetic disease databases, structure databases, etc.

  14. Managing Associated Risks in Cloud Computer Applications ...

    African Journals Online (AJOL)

    West African Journal of Industrial and Academic Research ... This paper focuses on an overview of cloud computing as a technology and has chosen ... Java programming language and Google App engine were the tools used to develop and ...

  15. Study on the application of mobile internet cloud computing platform

    Science.gov (United States)

    Gong, Songchun; Fu, Songyin; Chen, Zheng

    2012-04-01

    The innovative development of computer technology promotes the application of the cloud computing platform, which is in essence a substitution and exchange of resource service models that meets users' needs for different resources after adjustments in multiple aspects. Cloud computing offers advantages in many respects: it reduces the difficulty of operating the system and makes it easy for users to search, acquire and process resources. Accordingly, the author takes the management of digital libraries as the research focus of this paper and analyzes the key technologies of the mobile internet cloud computing platform in the operation process. The popularization of computer technology has driven people to create digital library models, whose core idea is to strengthen the management of library resource information through computers and to construct a high-performance inquiry and search platform that allows users to access the necessary information resources at any time. Cloud computing distributes computations across a large number of distributed computers and hence implements a connection service among multiple computers. Digital libraries, as a typical representative of cloud computing applications, can thus be used to analyze the key technologies of the cloud computing platform.

  16. Automatic detection of lung nodules in computed tomography images: training and validation of algorithms using public research databases

    Science.gov (United States)

    Camarlinghi, Niccolò

    2013-09-01

    Lung cancer is one of the main public health issues in developed countries. Lung cancer typically manifests itself as non-calcified pulmonary nodules that can be detected by reading lung Computed Tomography (CT) images. To assist radiologists in reading images, researchers started, a decade ago, the development of Computer Aided Detection (CAD) methods capable of detecting lung nodules. In this work, a CAD composed of two subprocedures is presented: one devoted to the identification of parenchymal nodules, and one devoted to the identification of nodules attached to the pleura surface. Both are upgrades of two methods previously presented as the Voxel Based Neural Approach (VBNA) CAD. The novelty of this paper consists in the massive training using the public research Lung Image Database Consortium (LIDC) database and in the implementation of new features for classification with respect to the original VBNA method. Finally, the proposed CAD is blindly validated on the ANODE09 dataset. The result of the validation is a score of 0.393, which corresponds to the average sensitivity of the CAD computed at seven predefined false positive rates: 1/8, 1/4, 1/2, 1, 2, 4, and 8 FP/CT.
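
    How a score such as 0.393 is obtained can be shown in a few lines: average the CAD sensitivity measured at the seven predefined false-positive rates (the sensitivities below are invented numbers, not the paper's operating points):

        fp_rates = [0.125, 0.25, 0.5, 1, 2, 4, 8]        # FP per CT scan
        sensitivities = [0.20, 0.28, 0.35, 0.41, 0.47, 0.52, 0.56]

        score = sum(sensitivities) / len(fp_rates)
        print(f"average sensitivity at {len(fp_rates)} operating points: {score:.3f}")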

  17. Statistical and thermal physics with computer applications

    CERN Document Server

    Gould, Harvey

    2010-01-01

    This textbook carefully develops the main ideas and techniques of statistical and thermal physics and is intended for upper-level undergraduate courses. The authors each have more than thirty years' experience in teaching, curriculum development, and research in statistical and computational physics. Statistical and Thermal Physics begins with a qualitative discussion of the relation between the macroscopic and microscopic worlds and incorporates computer simulations throughout the book to provide concrete examples of important conceptual ideas. Unlike many contemporary texts on the

  18. PPARgene: A Database of Experimentally Verified and Computationally Predicted PPAR Target Genes.

    Science.gov (United States)

    Fang, Li; Zhang, Man; Li, Yanhui; Liu, Yan; Cui, Qinghua; Wang, Nanping

    2016-01-01

    The peroxisome proliferator-activated receptors (PPARs) are ligand-activated transcription factors of the nuclear receptor superfamily. Upon ligand binding, PPARs activate target gene transcription and regulate a variety of important physiological processes such as lipid metabolism, inflammation, and wound healing. Here, we describe the first database of PPAR target genes, PPARgene. Among the 225 experimentally verified PPAR target genes, 83 are for PPARα, 83 are for PPARβ/δ, and 104 are for PPARγ. Detailed information including tissue types, species, and reference PubMed IDs was also provided. In addition, we developed a machine learning method to predict novel PPAR target genes by integrating in silico PPAR-responsive element (PPRE) analysis with high throughput gene expression data. Fivefold cross validation showed that the performance of this prediction method was significantly improved compared to the in silico PPRE analysis method. The prediction tool is also implemented in the PPARgene database.
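
    A generic sketch of the fivefold cross-validation used to evaluate such a predictor (features and labels below are synthetic stand-ins for PPRE scores and expression data, and the classifier choice is an assumption):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10))      # e.g. PPRE + expression features
        y = rng.integers(0, 2, size=200)    # 1 = PPAR target gene, 0 = not

        scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
        print(scores.mean())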

  19. 6th International Conference on Practical Applications of Computational Biology & Bioinformatics

    CERN Document Server

    Luscombe, Nicholas; Fdez-Riverola, Florentino; Rodríguez, Juan; Practical Applications of Computational Biology & Bioinformatics

    2012-01-01

    The growth in the Bioinformatics and Computational Biology fields over the last few years has been remarkable. The analysis of the datasets produced by Next Generation Sequencing needs new algorithms and approaches from fields such as Databases, Statistics, Data Mining, Machine Learning, Optimization, Computer Science and Artificial Intelligence. Systems Biology has also been emerging as an alternative to the reductionist view that dominated biological research in recent decades. This book presents the results of the 6th International Conference on Practical Applications of Computational Biology & Bioinformatics, held at the University of Salamanca, Spain, 28-30th March, 2012, which brought together interdisciplinary scientists with a strong background in the biological and computational sciences.

  20. High-performance computing for airborne applications

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Heather M [Los Alamos National Laboratory; Manuzzato, Andrea [Los Alamos National Laboratory; Fairbanks, Tom [Los Alamos National Laboratory; Dallmann, Nicholas [Los Alamos National Laboratory; Desgeorges, Rose [Los Alamos National Laboratory

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  1. An integrated computational pipeline and database to support whole-genome sequence annotation.

    Science.gov (United States)

    Mungall, C J; Misra, S; Berman, B P; Carlson, J; Frise, E; Harris, N; Marshall, B; Shu, S; Kaminker, J S; Prochnik, S E; Smith, C D; Smith, E; Tupy, J L; Wiel, C; Rubin, G M; Lewis, S E

    2002-01-01

    We describe here our experience in annotating the Drosophila melanogaster genome sequence, in the course of which we developed several new open-source software tools and a database schema to support large-scale genome annotation. We have developed these into an integrated and reusable software system for whole-genome annotation. The key contributions to overall annotation quality are the marshalling of high-quality sequences for alignments and the design of a system with an adaptable and expandable flexible architecture.

  2. Web Application and its Marketing as Cloud Computing

    OpenAIRE

    Shiv Kumar

    2012-01-01

    A web application combines two words, “web” and “application”, where web means web browser and application means computer software. A web browser is used to search for information on the World Wide Web (www), i.e. on the Internet, whereas an application is used to solve single or multiple tasks, depending on the type of application. In this way, we can say that a web application is computer software that performs single or multiple tasks on a computer network using a web browser. Now, the questions arise for the...

  3. Platforms for Building and Deploying Applications for Cloud Computing

    CERN Document Server

    Buyya, Rajkumar

    2011-01-01

    Cloud computing is rapidly emerging as a new paradigm for delivering IT services as utility-oriented services on a subscription basis. The rapid development of applications and their deployment in Cloud computing environments in an efficient manner is a complex task. In this article, we give a brief introduction to Cloud computing technology and Platform as a Service, examine the offerings in this category, and provide the basis for helping readers to understand basic application platform opportunities in the Cloud offered by technologies such as Microsoft Azure, Salesforce, Google App Engine, and Aneka. We demonstrate that Manjrasoft Aneka is a Cloud Application Platform (CAP) leveraging these concepts and allowing easy development of Cloud-ready applications on a Private/Public/Hybrid Cloud. Aneka CAP offers facilities for quickly developing Cloud applications and a modular platform where additional services can be easily integrated to extend the system capabilities, thus being at pace with the rapidly ev...

  4. Constructing a Database of Similar Exposure Groups: The Application of the Exporisq-HAP Database from 1995 to 2015.

    Science.gov (United States)

    Petit, Pascal; Bicout, Dominique J; Persoons, Renaud; Bonneterre, Vincent; Barbeau, Damien; Maître, Anne

    2017-05-01

    Similar exposure groups (SEGs) are needed to reliably assess occupational exposures and health risks. However, the construction of SEGs can turn out to be rather challenging because of the multifactorial variability of exposures. The objective of this study is to put forward a semi-empirical approach developed to construct and implement a SEG database for exposure assessments. An occupational database of airborne levels of polycyclic aromatic hydrocarbons (PAHs) was used as an illustrative and working example. The approach that was developed consisted of four steps. The first three steps addressed the construction and implementation of the occupational Exporisq-HAP database (E-HAP). E-HAP was structured into three hierarchical levels of exposure groups, each of which was based on exposure determinants, along 16 dimensions that represented the sampled PAHs. A fourth step was implemented to identify and generate SEGs using the geometric standard deviation (GSD) of PAH concentrations. E-HAP was restructured into 16 (for 16 sampled PAHs) 3 × 3 matrices: three hierarchical levels of description versus three degrees of dispersion, which included low (the SEG database: GSD ≤ 3), medium (3 < GSD ≤ 6), and high (GSD > 6). Benzo[a]pyrene (BaP) was the least dispersed particulate PAH with 41.5% of groups that could be considered as SEGs, 48.5% of groups of medium dispersion, and only 8% with high dispersion. These results were comparable for BaP, BaP toxic equivalent, or the sum of all carcinogenic PAHs, but were different when individual gaseous PAHs or ∑PAHG were chosen. Within the framework of risk assessment, such an approach, based on groundwork studies, allows for both the construction of an SEG database and the identification of exposure groups that require improvements in either the description level or the homogeneity degree toward SEG.
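
    The grouping criterion is the geometric standard deviation of the measured concentrations, which can be computed as follows (toy data; a group qualifies as an SEG here when GSD ≤ 3):

        import math

        def gsd(concentrations):
            logs = [math.log(c) for c in concentrations]
            mean = sum(logs) / len(logs)
            var = sum((x - mean) ** 2 for x in logs) / (len(logs) - 1)
            return math.exp(math.sqrt(var))

        print(gsd([0.8, 1.1, 1.6, 2.3]) <= 3)   # True -> similar exposure group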

  5. Proceedings of the 2011 2nd International Congress on Computer Applications and Computational Science

    CERN Document Server

    Nguyen, Quang

    2012-01-01

    The latest inventions in computer technology influence most human daily activities. In the near future, there is a tendency that every aspect of human life will become dependent on computer applications. In manufacturing, robotics and automation have become vital for high-quality products. In education, the model of teaching and learning is focusing more on electronic media than traditional ones. Issues related to energy savings and the environment are becoming critical. Computational Science should enhance the quality of human life, not only solve its problems. Computational Science should help humans to make wise decisions by presenting choices and their possible consequences. Computational Science should help us make sense of observations, understand natural language, plan and reason with extensive background knowledge. Intelligence with wisdom is perhaps an ultimate goal for human-oriented science. This book is a compilation of some recent research findings in computer application and computational sci...

  7. International Conference on Frontiers of Intelligent Computing : Theory and Applications

    CERN Document Server

    Udgata, Siba; Biswal, Bhabendra

    2013-01-01

    The volume contains the papers presented at FICTA 2012: International Conference on Frontiers in Intelligent Computing: Theory and Applications, held on December 22-23, 2012 at Bhubaneswar Engineering College, Bhubaneswar, Odisha, India. It contains 86 papers contributed by authors from around the globe. These research papers mainly focus on applications of intelligent techniques, including evolutionary computation techniques such as genetic algorithms, particle swarm optimization, and teaching-learning based optimization, to various engineering applications such as data mining, image processing, cloud computing, and networking.

  8. A public turbulence database cluster and applications to study Lagrangian evolution of velocity increments in turbulence

    CERN Document Server

    Li, Yi; Wan, Minping; Yang, Yunke; Meneveau, Charles; Burns, Randal; Chen, Shiyi; Szalay, Alexander; Eyink, Gregory

    2008-01-01

    A public database system archiving a direct numerical simulation (DNS) data set of isotropic, forced turbulence is described in this paper. The data set consists of the DNS output on $1024^3$ spatial points and 1024 time-samples spanning about one large-scale turn-over timescale. This complete $1024^4$ space-time history of turbulence is accessible to users remotely through an interface that is based on the Web-services model. Users may write and execute analysis programs on their host computers, while the programs make subroutine-like calls that request desired parts of the data over the network. The users are thus able to perform numerical experiments by accessing the 27 Terabytes of DNS data using regular platforms such as laptops. The architecture of the database is explained, as are some of the locally defined functions, such as differentiation and interpolation. Test calculations are performed to illustrate the usage of the system and to verify the accuracy of the methods. The database is then used to a...
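
    As a rough illustration of the subroutine-like, Web-services access pattern described above, the sketch below shows a client requesting interpolated velocities at a set of space-time points. The endpoint URL, JSON payload, and field names are placeholders, not the actual service interface:

    ```python
    import numpy as np
    import requests  # third-party HTTP client

    # Placeholder endpoint; the real service exposes Web-services calls
    # (e.g., velocity lookups) rather than this exact REST interface.
    SERVICE_URL = "https://turbulence.example.org/getvelocity"

    def get_velocity(points, time):
        """Request interpolated velocity vectors at given points and time."""
        payload = {"time": time, "points": points.tolist()}
        response = requests.post(SERVICE_URL, json=payload, timeout=60)
        response.raise_for_status()
        return np.asarray(response.json()["velocity"])

    # Query 10 random points in the periodic [0, 2*pi)^3 box at t = 0.1.
    pts = np.random.uniform(0.0, 2.0 * np.pi, size=(10, 3))
    u = get_velocity(pts, time=0.1)
    print(u.shape)  # (10, 3): one velocity vector per queried point
    ```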

  9. An integrated medical image database and retrieval system using a web application server.

    Science.gov (United States)

    Cao, Pengyu; Hashiba, Masao; Akazawa, Kouhei; Yamakawa, Tomoko; Matsuto, Takayuki

    2003-08-01

    We developed an Integrated Medical Image Database and Retrieval System (INIS) for easy access by medical staff. The INIS mainly consisted of four parts: specific servers to save medical images from multi-vendor modalities of CT, MRI, CR, ECG and endoscopy; an integrated image database (DB) server to save various kinds of images in a DICOM format; a Web application server to connect clients to the integrated image DB; and the Web browser terminals connected to an HIS system. The INIS provided a common screen design to retrieve CT, MRI, CR, endoscopic and ECG images, and radiological reports, which would allow doctors to retrieve radiological images and corresponding reports, or ECG images of a patient, simultaneously on a screen. Doctors working in internal medicine on average accessed information 492 times a month. Doctors working in cardiology and gastroenterology accessed information 308 times a month. Using the INIS, medical staff could browse all or parts of a patient's medical images and reports.

  10. Map-Based Querying for Multimedia Database

    Science.gov (United States)

    2014-09-01

    ...development. This IDE has Android support for development of Android applications. The Android software development kit (SDK) and the associated tools are... for Multimedia Database. Somiya Metu, Computational and Information Sciences Directorate, ARL.

  11. AiiDA: Automated Interactive Infrastructure and Database for Computational Science

    CERN Document Server

    Pizzi, Giovanni; Sabatini, Riccardo; Marzari, Nicola; Kozinsky, Boris

    2016-01-01

    Computational science has seen in the last decades a spectacular rise in the scope, breadth, and depth of its efforts. Notwithstanding this prevalence and impact, it is often still performed using the renaissance model of individual artisans gathered in a workshop, under the guidance of an established practitioner. Great benefits could follow instead from adopting concepts and tools coming from computer science to manage, preserve, and share these computational efforts. We illustrate here our paradigm sustaining such vision, based around the four pillars of Automation, Data, Environment, and Sharing, and discuss its implementation in the open-source AiiDA platform (http://www.aiida.net). The platform is tuned first to the demands of computational materials science: coupling remote management with automatic data generation; ensuring provenance, preservation, and searchability of heterogeneous data through a design based on directed acyclic graphs; encoding complex sequences of low-level codes into scientific w...
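
    The provenance design described above can be illustrated with a toy directed-acyclic-graph structure: every data node records the nodes it was derived from, so any result can be traced back to its inputs. This is a schematic illustration, not the AiiDA API:

    ```python
    from collections import defaultdict

    class ProvenanceGraph:
        """Toy provenance store: a directed acyclic graph of derivations."""

        def __init__(self):
            self.parents = defaultdict(list)  # node -> nodes it derives from

        def add_edge(self, input_node, output_node):
            self.parents[output_node].append(input_node)

        def lineage(self, node):
            """Every ancestor of `node`, i.e. everything it was derived from."""
            seen, stack = set(), [node]
            while stack:
                for parent in self.parents[stack.pop()]:
                    if parent not in seen:
                        seen.add(parent)
                        stack.append(parent)
            return seen

    g = ProvenanceGraph()
    g.add_edge("structure.cif", "dft_calculation")
    g.add_edge("pseudopotentials", "dft_calculation")
    g.add_edge("dft_calculation", "band_structure")
    print(g.lineage("band_structure"))  # full input history of the result
    ```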

  12. The power of an ontology-driven developmental toxicity database for data mining and computational modeling

    Science.gov (United States)

    Modeling of developmental toxicology presents a significant challenge to computational toxicology due to endpoint complexity and lack of data coverage. These challenges largely account for the relatively few modeling successes using the structure–activity relationship (SAR) parad...

  14. Guide to cloud computing for business and technology managers from distributed computing to cloudware applications

    CERN Document Server

    Kale, Vivek

    2014-01-01

    Guide to Cloud Computing for Business and Technology Managers: From Distributed Computing to Cloudware Applications unravels the mystery of cloud computing and explains how it can transform the operating contexts of business enterprises. It provides a clear understanding of what cloud computing really means, what it can do, and when it is practical to use. Addressing the primary management and operation concerns of cloudware, including performance, measurement, monitoring, and security, this pragmatic book: Introduces the enterprise applications integration (EAI) solutions that were a first ste...

  15. An Overview of the Principles of Database Protection and Its Applicable Laws

    Institute of Scientific and Technical Information of China (English)

    相丽玲; 黄富国

    2003-01-01

    At present, the principles of database protection are varied in developed countries. So are the applicable laws. This article summarizes these principles and applicable laws in an attempt to provide reference material for China in her legislation for databases.

  16. Computational intelligence applications in modeling and control

    CERN Document Server

    Vaidyanathan, Sundarapandian

    2015-01-01

    The development of computational intelligence (CI) systems was inspired by observable and imitable aspects of the intelligent activity of human beings and nature. The essence of systems based on computational intelligence is to process and interpret data of various natures, so that CI is strictly connected with the increase of available data as well as the capabilities of their processing, mutually supportive factors. Developed theories of computational intelligence were quickly applied in many fields of engineering, data analysis, forecasting, biomedicine and others. They are used in image and sound processing and identification, signal processing, multidimensional data visualization, steering of objects, analysis of lexicographic data, requesting systems in banking, diagnostic systems, expert systems and many other practical implementations. This book consists of 16 contributed chapters by subject experts who specialize in the various topics addressed in this book. The special chapters have been brought ...

  17. Computer Applications in Production and Engineering

    DEFF Research Database (Denmark)

    Sørensen, Torben

    1997-01-01

    This paper addresses how neutral product model interfaces can be identified, specified, and implemented to provide intelligent and flexible means for information management in the manufacturing of discrete mechanical products. The use of advanced computer-based systems, such as CAD, CAE, CNC, and robotics, offers a potential for significant cost savings and quality improvements in the manufacturing of discrete mechanical products. However, these systems are introduced into production as 'islands of automation' or 'islands of information', and to benefit from the said potential, the systems must be integrated into an integrated manufacturing unit. Such units are known as Computer Integrated Manufacturing and Engineering (CIME) systems. The basic concept in CIME is to share and reuse information between the different computer-based subsystems. Consequently, for integration purposes, the CIME systems are highly...

  18. Computational Phase Imaging for Biomedical Applications

    Science.gov (United States)

    Nguyen, Tan Huu

    laser comes at the expense of speckles, which degrades image quality. Therefore, solutions purely based on physical modeling and computations to remove these artifacts, using white-light illumination, are highly desirable. Here, using physical optics, we develop a theoretical model that accurately explains the effects of partial coherence on image information and phase information. The model is further combined with numerical processing to suppress the artifacts, and recover the correct phase information. The third topic is devoted to applying QPI to clinical applications. Traditionally, stained tissues are used in prostate cancer diagnosis instead. The reason is that tissue samples used in diagnosis are nearly transparent under bright field inspection if unstained. Contrast-enhanced microscopy techniques, e.g., phase contrast microscopy (PC) and differential interference contrast microscopy (DIC), can render visibility of the untagged samples with high throughput. However, since these methods are intensity-based, the contrast of acquired images varies significantly from one imaging facility to another, preventing them from being used in diagnosis. Inheriting the merits of PC, SLIM produces phase maps, which measure the refractive index of label-free samples. However, the maps measured by SLIM are not affected by variation in imaging conditions, e.g., illumination, magnification, etc., allowing consistent imaging results when using SLIM across different clinical institutions. Here, we combine SLIM images with machine learning for automatic diagnosis results for prostate cancer. We focus on two diagnosis problems of automatic Gleason grading and cancer vs. non-cancer diagnosis. Finally, we introduce a new imaging modality, named Gradient Light Interference Microscopy (GLIM), which is able to image through optically thick samples using low spatial coherence illumination. The key benefit of GLIM comes from a large numerical aperture of the condenser, which is 0.55 NA

  19. Soft Computing Models in Industrial and Environmental Applications

    CERN Document Server

    Abraham, Ajith; Corchado, Emilio; 7th International Conference, SOCO’12

    2013-01-01

    This volume of Advances in Intelligent and Soft Computing contains accepted papers presented at SOCO 2012, held in the beautiful and historic city of Ostrava (Czech Republic), in September 2012. Soft computing represents a collection or set of computational techniques in machine learning, computer science and some engineering disciplines, which investigate, simulate, and analyze very complex issues and phenomena. After a thorough peer-review process, the SOCO 2012 International Program Committee selected 75 papers for publication in these conference proceedings, representing an acceptance rate of 38%. In this edition a special emphasis was put on the organization of special sessions. Three special sessions were organized on relevant topics: Soft computing models for Control Theory & Applications in Electrical Engineering, Soft computing models for biomedical signals and data processing, and Advanced Soft Computing Methods in Computer Vision and Data Processing. The selecti...

  20. The Emdros Text Database Engine as a Platform for Persuasive Computing

    DEFF Research Database (Denmark)

    Sandborg-Petersen, Ulrik

    2013-01-01

    This paper describes the nature and scope of Emdros, a text database engine for annotated text. Three case studies of persuasive learning systems using Emdros as an important architectural component are described, and their status as to participation in the three legs of BJ Fogg's Functional Triad of Persuasive Design is assessed. Various properties of Emdros are discussed, both with respect to competing systems and with respect to the three case studies. It is argued that these properties together enable Emdros to form part of the foundation for a large class of systems whose primary function involves...

  1. Engineering applications of computational fluid dynamics

    CERN Document Server

    Awang, Mokhtar

    2015-01-01

    This volume presents the results of Computational Fluid Dynamics (CFD) analysis that can be used for conceptual studies of product design, detailed product development, and process troubleshooting. It demonstrates the benefit of CFD modeling as a cost-saving, timely, safe, and easy-to-scale-up methodology.

  2. Intelligent computational systems for space applications

    Science.gov (United States)

    Lum, Henry, Jr.; Lau, Sonie

    1989-01-01

    The evolution of intelligent computation systems is discussed starting with the Spaceborne VHSIC Multiprocessor System (SVMS). The SVMS is a six-processor system designed to provide at least a 100-fold increase in both numeric and symbolic processing over the i386 uniprocessor. The significant system performance parameters necessary to achieve the performance increase are discussed.

  3. Managing Associated Risks in Cloud Computer Applications Abstract

    African Journals Online (AJOL)

    2012-12-01

    Dec 1, 2012 ... cloud computing as a technology and has chosen the party agent reports of an electoral process as a case ... Business applications are moving to the cloud. ... activity going on. ... Retrieved from IEEE Xplore Digital Library.

  4. Recent Applications of Hidden Markov Models in Computational Biology

    Institute of Scientific and Technical Information of China (English)

    Khar Heng Choo; Joo Chuan Tong; Louxin Zhang

    2004-01-01

    This paper examines recent developments and applications of Hidden Markov Models (HMMs) to various problems in computational biology, including multiple sequence alignment, homology detection, protein sequence classification, and genomic annotation.
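
    As a reminder of the machinery behind such applications, the following is a minimal Viterbi decoder for a toy two-state HMM (e.g., GC-rich vs. AT-rich regions emitting a binary alphabet); all probabilities here are illustrative:

    ```python
    import numpy as np

    def viterbi(obs, start_p, trans_p, emit_p):
        """Most likely hidden-state path for an observation sequence (log space)."""
        n_states, T = len(start_p), len(obs)
        log_v = np.full((T, n_states), -np.inf)   # best log-prob per state
        back = np.zeros((T, n_states), dtype=int)  # backpointers
        log_v[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
        for t in range(1, T):
            for s in range(n_states):
                scores = log_v[t - 1] + np.log(trans_p[:, s])
                back[t, s] = np.argmax(scores)
                log_v[t, s] = scores[back[t, s]] + np.log(emit_p[s, obs[t]])
        path = [int(np.argmax(log_v[-1]))]
        for t in range(T - 1, 0, -1):              # trace back the best path
            path.append(int(back[t, path[-1]]))
        return path[::-1]

    # Toy 2-state HMM over a 0/1 alphabet.
    start = np.array([0.5, 0.5])
    trans = np.array([[0.9, 0.1], [0.1, 0.9]])
    emit = np.array([[0.8, 0.2], [0.3, 0.7]])
    print(viterbi([0, 0, 1, 1, 1], start, trans, emit))
    ```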

  5. Advanced computational aeroelasticity and multidisciplinary application for composite curved wing

    OpenAIRE

    Kim, Dong-Hyun; Kim, Yu-Sung

    2008-01-01

    This article preferentially describes advanced computational aeroelasticity and its multidisciplinary applications based on the coupled CFD (Computational Fluid Dynamics) and CSD (Computational Structural Dynamics) method. A modal-based coupled nonlinear aeroelastic analysis system incorporated with unsteady Euler aerodynamics has been developed based on the high-speed parallel processing technique. It is clearly expected to give accurate and practical engineering data in the design fields of...

  6. Scalable Computational Chemistry: New Developments and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Alexeev, Yuri [Iowa State Univ., Ames, IA (United States)

    2002-01-01

    The computational part of the thesis is the investigation of titanium chloride (II) as a potential catalyst for the bis-silylation reaction of ethylene with hexachlorodisilane at different levels of theory. Bis-silylation is an important reaction for producing bis(silyl) compounds and new C-Si bonds, which can serve as monomers for silicon-containing polymers and silicon carbides. Ab initio calculations on the steps involved in a proposed mechanism are presented. This choice of reactants allows them to study this reaction at reliable levels of theory without compromising accuracy. The calculations indicate that this is a highly exothermic barrierless reaction. The TiCl2 catalyst removes a 50 kcal/mol activation energy barrier required for the reaction without the catalyst. The first step is interaction of TiCl2 with ethylene to form an intermediate that is 60 kcal/mol below the energy of the reactants. This is the driving force for the entire reaction. Dynamic correlation plays a significant role because RHF calculations indicate that the net barrier for the catalyzed reaction is 50 kcal/mol. They conclude that divalent Ti has the potential to become an important industrial catalyst for silylation reactions. In the programming part of the thesis, parallelization of different quantum chemistry methods is presented. The parallelization of code is becoming an important aspect of quantum chemistry code development. Two trends contribute to it: the overall desire to study large chemical systems and the desire to employ highly correlated methods, which are usually computationally and memory expensive. In the presented distributed-data algorithms, computation is parallelized and the largest arrays are evenly distributed among CPUs. First, the parallelization of the Hartree-Fock self-consistent field (SCF) method is considered. The SCF method is the most common starting point for more accurate calculations. The Fock build (a sub-step of SCF) from AO integrals is

  7. Scalable Computational Chemistry: New Developments and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Yuri Alexeev

    2002-12-31

    The computational part of the thesis is the investigation of titanium chloride (II) as a potential catalyst for the bis-silylation reaction of ethylene with hexachlorodisilane at different levels of theory. Bis-silylation is an important reaction for producing bis(silyl) compounds and new C-Si bonds, which can serve as monomers for silicon-containing polymers and silicon carbides. Ab initio calculations on the steps involved in a proposed mechanism are presented. This choice of reactants allows them to study this reaction at reliable levels of theory without compromising accuracy. The calculations indicate that this is a highly exothermic barrierless reaction. The TiCl2 catalyst removes a 50 kcal/mol activation energy barrier required for the reaction without the catalyst. The first step is interaction of TiCl2 with ethylene to form an intermediate that is 60 kcal/mol below the energy of the reactants. This is the driving force for the entire reaction. Dynamic correlation plays a significant role because RHF calculations indicate that the net barrier for the catalyzed reaction is 50 kcal/mol. They conclude that divalent Ti has the potential to become an important industrial catalyst for silylation reactions. In the programming part of the thesis, parallelization of different quantum chemistry methods is presented. The parallelization of code is becoming an important aspect of quantum chemistry code development. Two trends contribute to it: the overall desire to study large chemical systems and the desire to employ highly correlated methods, which are usually computationally and memory expensive. In the presented distributed-data algorithms, computation is parallelized and the largest arrays are evenly distributed among CPUs. First, the parallelization of the Hartree-Fock self-consistent field (SCF) method is considered. The SCF method is the most common starting point for more accurate calculations. The Fock build (a sub-step of SCF) from AO integrals is also
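
    The distributed-data idea summarized above can be sketched as follows: the largest array is split evenly across MPI ranks, each rank computes a partial result from its own block, and only small reduced quantities are communicated. This is a schematic example (requiring mpi4py), not the thesis code:

    ```python
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n = 1024                     # global dimension of an (n x n) matrix
    rows_per_rank = n // size    # assume size divides n for simplicity
    local_block = np.random.rand(rows_per_rank, n)  # this rank's rows only

    # Each rank computes a partial quantity from its block; a reduction
    # combines the partials (a stand-in for a Fock-build-like accumulation).
    local_energy = float(np.sum(local_block ** 2))
    total_energy = comm.allreduce(local_energy, op=MPI.SUM)

    if rank == 0:
        print(f"global quantity from {size} distributed blocks: {total_energy:.3f}")
    ```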

  8. Computer technology -- 1996: Applications and methodology. PVP-Volume 326

    Energy Technology Data Exchange (ETDEWEB)

    Hulbert, G.M. [ed.] [Univ. of Michigan, Ann Arbor, MI (United States); Hsu, K.H. [ed.] [Babcock and Wilcox, Barberton, OH (United States); Lee, T.W. [ed.] [FMC Corp., Santa Clara, CA (United States); Nicholas, T. [ed.] [USAF Wright Laboratory, Wright-Patterson AFB, OH (United States)

    1996-12-01

    The primary objective of the Computer Technology Committee of the ASME Pressure Vessels and Piping Division is to promote interest and technical exchange in the field of computer technology, related to the design and analysis of pressure vessels and piping. The topics included in this volume are: analysis of bolted joints; nonlinear analysis, applications and methodology; finite element analysis and applications; and behavior of materials. Separate abstracts were prepared for 23 of the papers in this volume.

  9. Restricted access processor - An application of computer security technology

    Science.gov (United States)

    Mcmahon, E. M.

    1985-01-01

    This paper describes a security guard device that is currently being developed by Computer Sciences Corporation (CSC). The methods used to provide assurance that the system meets its security requirements include the system architecture, a system security evaluation, and the application of formal and informal verification techniques. The combination of state-of-the-art technology and the incorporation of new verification procedures results in a demonstration of the feasibility of computer security technology for operational applications.

  10. Computational Fluid Dynamics Methods and Their Applications in Medical Science

    OpenAIRE

    Kowalewski Wojciech; Roszak Magdalena; Kołodziejczak Barbara; Ren-Kurc Anna; Bręborowicz Andrzej

    2016-01-01

    As defined by the National Institutes of Health: “Biomedical engineering integrates physical, chemical, mathematical, and computational sciences and engineering principles to study biology, medicine, behavior, and health”. Many issues in this area are closely related to fluid dynamics. This paper provides an overview of the basic concepts concerning Computational Fluid Dynamics and its applications in medicine.

  11. Computer and control applications in a vegetable processing plant

    Science.gov (United States)

    There are many advantages to the use of computers and control in the food industry. Software in the food industry takes two forms: general-purpose commercial computer software and software for specialized applications, such as drying and thermal processing of foods. Many applied simulation models for d...

  12. The fourteenth international symposium on mine planning and equipment selection (MPES) 2005 and the fifth international conference on computer applications in the minerals industry (CAMI) 2005

    Energy Technology Data Exchange (ETDEWEB)

    Singhal, R.J.; Fytas, K.; Chiwetelu, C. (eds.)

    2005-07-01

    The proceedings contain 122 papers on mine planning, equipment selection, and computer applications in the mining and minerals industry. Presentations cover surface and underground mining, development, coal mining, oil sands mining, risk analysis, productivity, computer modelling, and waste treatment. Selected papers have been abstracted for the Coal Abstracts database.

  13. A database of neutron spectra, instrument response functions, and dosimetric conversion factors for radiation protection applications

    Energy Technology Data Exchange (ETDEWEB)

    Naismith, O.F. [National Physical Lab., Teddington (United Kingdom); Siebert, B.R.L. [Physikalisch-Technische Bundesanstalt, Braunschweig (Germany)

    1997-09-01

    One of the major problems encountered in dose assessment for neutron radiation protection derives from the imperfect dose equivalent response of the devices used for monitoring. To investigate the performance of such devices in realistic neutron fields and to optimise calibration procedures, knowledge of both the prevalent spectral fluences and the energy response of the dosemeters is required. To facilitate this and similar studies, a database has been developed comprising a catalogue of neutron spectra and energy-dependent response functions together with a software package to manipulate the data in the catalogue. The range of data, features of the programs, and examples for radiation protection applications are described. (author).

  14. The GEISA Spectroscopic Database as a Tool for Hyperspectral Earth's Tropospheric Remote Sensing Applications

    Science.gov (United States)

    Jacquinet-Husson, Nicole; Crépeau, Laurent; Capelle, Virginie; Scott, Noëlle; Armante, Raymond; Chédin, Alain

    2010-05-01

    Remote sensing of the terrestrial atmosphere has advanced significantly in recent years, and this has placed greater demands on the compilations in terms of accuracy, additional species, and spectral coverage. The successful performances of the new generation of hyperspectral Earth atmospheric sounders like AIRS (Atmospheric Infrared Sounder - http://www-airs.jpl.nasa.gov/), in the USA, and IASI (Infrared Atmospheric Sounding Interferometer - http://earth-sciences.cnes.fr/IASI/) in Europe, which have a better vertical resolution and accuracy compared to the previous satellite infrared vertical sounders, depend ultimately on the accuracy to which the spectroscopic parameters of the optically active gases are known, since they constitute an essential input to the forward radiative transfer models that are used to interpret their observations. In this context, the GEISA (1) (Gestion et Etude des Informations Spectroscopiques Atmosphériques: Management and Study of Atmospheric Spectroscopic Information) computer-accessible database, initiated in 1976, is continuously developed and maintained at LMD (Laboratoire de Météorologie Dynamique, France). The updated 2009 edition of GEISA (GEISA-09) is a system comprising three independent sub-databases devoted respectively to: line transition parameters; infrared and ultraviolet/visible absorption cross-sections; and microphysical and optical properties of atmospheric aerosols. In this edition, the contents of which will be summarized, 50 molecules are involved in the line transition parameters sub-database, including 111 isotopes, for a total of 3,807,997 entries, in the spectral range from 10^-6 to 35,877.031 cm^-1. Currently, GEISA is involved in activities related to the assessment of the capabilities of IASI through the GEISA/IASI database derived from GEISA (2). Since the Metop (http://www.eumetsat.int) launch (October 19th 2006), GEISA/IASI is the reference spectroscopic database for the validation of the level-1 IASI data
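
    The kind of selection a forward radiative transfer model performs on such a line-transition sub-database can be sketched as below. GEISA itself is distributed as formatted records rather than through this interface; the record fields and values are assumptions for illustration:

    ```python
    from dataclasses import dataclass

    @dataclass
    class LineTransition:
        molecule: str
        wavenumber_cm1: float   # line position (cm^-1)
        intensity: float        # line intensity

    def select_lines(lines, molecule, nu_min, nu_max):
        """Pull one molecule's transitions inside a wavenumber window."""
        return [l for l in lines
                if l.molecule == molecule and nu_min <= l.wavenumber_cm1 <= nu_max]

    # Illustrative catalogue entries, not actual GEISA records.
    catalogue = [
        LineTransition("H2O", 1554.35, 2.1e-22),
        LineTransition("CO2", 667.66, 8.7e-20),
        LineTransition("H2O", 3839.95, 1.3e-21),
    ]
    print(select_lines(catalogue, "H2O", 1000.0, 2000.0))
    ```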

  15. Computational hemodynamics theory, modelling and applications

    CERN Document Server

    Tu, Jiyuan; Wong, Kelvin Kian Loong

    2015-01-01

    This book discusses geometric and mathematical models that can be used to study fluid and structural mechanics in the cardiovascular system.  Where traditional research methodologies in the human cardiovascular system are challenging due to its invasive nature, several recent advances in medical imaging and computational fluid and solid mechanics modelling now provide new and exciting research opportunities. This emerging field of study is multi-disciplinary, involving numerical methods, computational science, fluid and structural mechanics, and biomedical engineering. Certainly any new student or researcher in this field may feel overwhelmed by the wide range of disciplines that need to be understood. This unique book is one of the first to bring together knowledge from multiple disciplines, providing a starting point to each of the individual disciplines involved, attempting to ease the steep learning curve. This book presents elementary knowledge on the physiology of the cardiovascular system; basic knowl...

  16. Overview of Nuclear Physics Data: Databases, Web Applications and Teaching Tools

    Science.gov (United States)

    McCutchan, Elizabeth

    2017-01-01

    The mission of the United States Nuclear Data Program (USNDP) is to provide current, accurate, and authoritative data for use in pure and applied areas of nuclear science and engineering. This is accomplished by compiling, evaluating, and disseminating extensive datasets. Our main products include the Evaluated Nuclear Structure File (ENSDF) containing information on nuclear structure and decay properties and the Evaluated Nuclear Data File (ENDF) containing information on neutron-induced reactions. The National Nuclear Data Center (NNDC), through the website www.nndc.bnl.gov, provides web-based retrieval systems for these and many other databases. In addition, the NNDC hosts several on-line physics tools, useful for calculating various quantities relating to basic nuclear physics. In this talk, I will first introduce the quantities which are evaluated and recommended in our databases. I will then outline the searching capabilities which allow one to quickly and efficiently retrieve data. Finally, I will demonstrate how the database searches and web applications can provide effective teaching tools concerning the structure of nuclei and how they interact. Work supported by the Office of Nuclear Physics, Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886.

  17. Exploring Cloud Computing for Large-scale Scientific Applications

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Guang; Han, Binh; Yin, Jian; Gorton, Ian

    2013-06-27

    This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often just require a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high-performance hardware with low-latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high-performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.

  18. The NASA Ames Polycyclic Aromatic Hydrocarbon Infrared Spectroscopic Database : The Computed Spectra

    NARCIS (Netherlands)

    Bauschlicher, C. W.; Boersma, C.; Ricca, A.; Mattioda, A. L.; Cami, J.; Peeters, E.; de Armas, F. Sanchez; Saborido, G. Puerta; Hudgins, D. M.; Allamandola, L. J.

    2010-01-01

    The astronomical emission features, formerly known as the unidentified infrared bands, are now commonly ascribed to polycyclic aromatic hydrocarbons (PAHs). The laboratory experiments and computational modeling done at the NASA Ames Research Center to create a collection of PAH IR spectra relevant t

  2. Intelligent decision support systems for sustainable computing paradigms and applications

    CERN Document Server

    Abraham, Ajith; Siarry, Patrick; Sheng, Michael

    2017-01-01

    This unique book discusses the latest research, innovative ideas, challenges and computational intelligence (CI) solutions in sustainable computing. It presents novel, in-depth fundamental research on achieving a sustainable lifestyle for society, either from a methodological or from an application perspective. Sustainable computing has expanded to become a significant research area covering the fields of computer science and engineering, electrical engineering and other engineering disciplines, and there has been an increase in the amount of literature on aspects of sustainable computing, such as energy efficiency and natural resource conservation, that emphasizes the role of ICT (information and communications technology) in achieving system design and operation objectives. The energy impact/design of more efficient IT infrastructures is a key challenge in realizing new computing paradigms. The book explores the uses of computational intelligence (CI) techniques for intelligent decision support that can be explo...

  3. Advanced Methods and Applications in Computational Intelligence

    CERN Document Server

    Nikodem, Jan; Jacak, Witold; Chaczko, Zenon; ACASE 2012

    2014-01-01

    This book offers an excellent presentation of intelligent engineering and informatics foundations for researchers in this field as well as many examples with industrial application. It contains extended versions of selected papers presented at the inaugural ACASE 2012 Conference dedicated to the Applications of Systems Engineering. This conference was held from the 6th to the 8th of February 2012, at the University of Technology, Sydney, Australia, organized by the University of Technology, Sydney (Australia), Wroclaw University of Technology (Poland) and the University of Applied Sciences in Hagenberg (Austria). The  book is organized into three main parts. Part I contains papers devoted to the heuristic approaches that are applicable in situations where the problem cannot be solved by exact methods, due to various characteristics or  dimensionality problems. Part II covers essential issues of the network management, presents intelligent models of the next generation of networks and distributed systems ...

  4. Mobile computing acceptance grows as applications evolve.

    Science.gov (United States)

    Porn, Louis M; Patrick, Kelly

    2002-01-01

    Handheld devices are becoming more cost-effective to own, and their use in healthcare environments is increasing. Handheld devices currently are being used for e-prescribing, charge capture, and accessing daily schedules and reference tools. Future applications may include education on medications, dictation, order entry, and test-results reporting. Selecting the right handheld device requires careful analysis of current and future applications, as well as vendor expertise. It is important to recognize that the technology will continue to evolve over the next three years.

  5. Advances in Computer Science, Engineering & Applications : Proceedings of the Second International Conference on Computer Science, Engineering & Applications

    CERN Document Server

    Zizka, Jan; Nagamalai, Dhinaharan

    2012-01-01

    The International conference series on Computer Science, Engineering & Applications (ICCSEA) aims to bring together researchers and practitioners from academia and industry to focus on understanding computer science, engineering and applications and to establish new collaborations in these areas. The Second International Conference on Computer Science, Engineering & Applications (ICCSEA-2012), held in Delhi, India, during May 25-27, 2012, attracted many local and international delegates, presenting a balanced mixture of intellect and research both from the East and from the West. After a rigorous peer-review process, the best submissions were selected, leading to an exciting, rich and high-quality technical conference program, which featured high-impact presentations on the latest developments in various areas of computer science, engineering and applications research.

  7. Cloud computing and digital media fundamentals, techniques, and applications

    CERN Document Server

    Li, Kuan-Ching; Shih, Timothy K

    2014-01-01

    Cloud Computing and Digital Media: Fundamentals, Techniques, and Applications presents the fundamentals of cloud and media infrastructure, novel technologies that integrate digital media with cloud computing, and real-world applications that exemplify the potential of cloud computing for next-generation digital media. It brings together technologies for media/data communication, elastic media/data storage, security, authentication, cross-network media/data fusion, interdevice media interaction/reaction, data centers, PaaS, SaaS, and more.The book covers resource optimization for multimedia clo

  8. Cloud computing principles, systems and applications

    CERN Document Server

    Antonopoulos, Nick

    2017-01-01

    This essential reference is a thorough and timely examination of the services, interfaces and types of applications that can be executed on cloud-based systems. Among other things, it identifies and highlights state-of-the-art techniques and methodologies.

  9. Computational applications of DNA structural scales

    DEFF Research Database (Denmark)

    Baldi, P.; Chauvin, Y.; Brunak, Søren

    1998-01-01

    Studies several different physical scales associated with the structural features of DNA sequences from a computational standpoint, including dinucleotide scales, such as base stacking energy and propeller twist, and trinucleotide scales, such as bendability and nucleosome positioning. We show that these scales provide an alternative or complementary compact representation of DNA sequences. As an example, we construct a strand-invariant representation of DNA sequences. The scales can also be used to analyze and discover new DNA structural patterns, especially in combination with hidden Markov models...
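
    A minimal sketch of turning a DNA sequence into a dinucleotide-scale profile, as described above, might look as follows; the stacking-energy values are illustrative placeholders rather than the published scale:

    ```python
    import numpy as np

    STACKING_ENERGY = {  # illustrative values per dinucleotide step
        "AA": -1.00, "AT": -0.88, "TA": -0.58, "CG": -2.17,
        "GC": -2.24, "GG": -1.84, "CC": -1.84, "AC": -1.44,
    }

    def scale_profile(seq, scale, default=0.0):
        """Map each dinucleotide step of `seq` to its scale value."""
        return np.array([scale.get(seq[i:i + 2], default)
                         for i in range(len(seq) - 1)])

    profile = scale_profile("AATCGGCCAT", STACKING_ENERGY)
    print(profile)         # one value per dinucleotide step
    print(profile.mean())  # compact summary of the whole sequence
    ```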

  10. Computational applications of DNA physical scales

    DEFF Research Database (Denmark)

    Baldi, Pierre; Chauvin, Yves; Brunak, Søren

    1998-01-01

    The authors study from a computational standpoint several different physical scales associated with structural features of DNA sequences, including dinucleotide scales such as base stacking energy and propeller twist, and trinucleotide scales such as bendability and nucleosome positioning. We show that these scales provide an alternative or complementary compact representation of DNA sequences. As an example we construct a strand-invariant representation of DNA sequences. The scales can also be used to analyze and discover new DNA structural patterns, especially in combination with hidden Markov models...

  11. High performance computing for beam physics applications

    Science.gov (United States)

    Ryne, R. D.; Habib, S.

    Several countries are now involved in efforts aimed at utilizing accelerator-driven technologies to solve problems of national and international importance. These technologies have both economic and environmental implications. The technologies include waste transmutation, plutonium conversion, neutron production for materials science and biological science research, neutron production for fusion materials testing, fission energy production systems, and tritium production. All of these projects require a high-intensity linear accelerator that operates with extremely low beam loss. This presents a formidable computational challenge: One must design and optimize over a kilometer of complex accelerating structures while taking into account beam loss to an accuracy of 10 parts per billion per meter. Such modeling is essential if one is to have confidence that the accelerator will meet its beam loss requirement, which ultimately affects system reliability, safety and cost. At Los Alamos, the authors are developing a capability to model ultra-low loss accelerators using the CM-5 at the Advanced Computing Laboratory. They are developing PIC, Vlasov/Poisson, and Langevin/Fokker-Planck codes for this purpose. With slight modification, they have also applied their codes to modeling mesoscopic systems and astrophysical systems. In this paper, they will first describe HPC activities in the accelerator community. Then they will discuss the tools they have developed to model classical and quantum evolution equations. Lastly they will describe how these tools have been used to study beam halo in high current, mismatched charged particle beams.
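
    At the core of the PIC codes mentioned above sits a time-stepping particle push. The toy one-dimensional leapfrog push below conveys the idea under a linear focusing field; real halo studies additionally deposit charge on a grid and solve the Poisson equation, and all parameters here are illustrative:

    ```python
    import numpy as np

    def push(x, v, dt, steps, k=1.0):
        """Leapfrog integration in a linear focusing field E = -k * x."""
        v = v - 0.5 * dt * k * x           # half-step kick staggers v
        for _ in range(steps):
            x = x + dt * v                 # drift
            v = v - dt * k * x             # kick
        return x, v + 0.5 * dt * k * x     # re-synchronize v with x

    rng = np.random.default_rng(0)
    x0 = rng.normal(0.0, 1.0, 10_000)      # a mismatched beam slice
    v0 = rng.normal(0.0, 1.2, 10_000)
    x, v = push(x0, v0, dt=0.01, steps=1_000)
    print(f"rms size: {x.std():.3f}")
    ```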

  12. Computational Fluid Dynamics - Applications in Manufacturing Processes

    Science.gov (United States)

    Beninati, Maria Laura; Kathol, Austin; Ziemian, Constance

    2012-11-01

    A new Computational Fluid Dynamics (CFD) exercise has been developed for the undergraduate introductory fluid mechanics course at Bucknell University. The goal is to develop a computational exercise that students complete which links the manufacturing processes course and the concurrent fluid mechanics course in a way that reinforces the concepts in both. In general, CFD is used as a tool to increase student understanding of the fundamentals in a virtual world. A "learning factory," which is currently in development at Bucknell, seeks to use the laboratory as a means to link courses that previously seemed to have little correlation at first glance. A large part of the manufacturing processes course is a project using an injection molding machine. The flow of pressurized molten polyurethane into the mold cavity can also be an example of fluid motion (a jet of liquid hitting a plate) that is applied in manufacturing. The students will run a CFD process that captures this flow using their virtual mold created with a graphics package, such as SolidWorks. The laboratory structure is currently being implemented and analyzed as a part of the "learning factory". Lastly, a survey taken before and after the CFD exercise demonstrates a better understanding of both the CFD and manufacturing process.

  13. Near threshold computing technology, methods and applications

    CERN Document Server

    Silvano, Cristina

    2016-01-01

    This book explores near-threshold computing (NTC), a design space using techniques to run digital chips (processors) near the lowest possible voltage. Readers will be equipped with specific techniques to design chips that are extremely robust, tolerating variability and resilient against errors. Variability-aware voltage and frequency allocation schemes will be presented that provide performance guarantees when moving toward near-threshold manycore chips. The book: provides an introduction to near-threshold computing, enabling the reader with a variety of tools to face the challenges of the power/utilization wall; demonstrates how to design efficient voltage regulation, so that each region of the chip can operate at the most efficient voltage and frequency point; and investigates how performance guarantees can be ensured when moving towards NTC manycores through variability-aware voltage and frequency allocation schemes.

  14. Runtime optimization of an application executing on a parallel computer

    Science.gov (United States)

    Faraj, Daniel A.; Smith, Brian E.

    2013-01-29

    Identifying a collective operation within an application executing on a parallel computer; identifying a call site of the collective operation; determining whether the collective operation is root-based; if the collective operation is not root-based: establishing a tuning session and executing the collective operation in the tuning session; if the collective operation is root-based, determining whether all compute nodes executing the application identified the collective operation at the same call site; if all compute nodes identified the collective operation at the same call site, establishing a tuning session and executing the collective operation in the tuning session; and if all compute nodes executing the application did not identify the collective operation at the same call site, executing the collective operation without establishing a tuning session.
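
    The decision logic described above can be paraphrased in a short sketch. The type and hook names are illustrative; the actual mechanism operates inside the parallel runtime itself:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Collective:
        name: str
        is_root_based: bool

    def handle_collective(op, call_site, all_call_sites, execute, execute_tuned):
        """Decide whether a collective operation may be tuned at runtime."""
        if not op.is_root_based:
            return execute_tuned(op)      # non-rooted: always tunable
        if all(site == call_site for site in all_call_sites):
            return execute_tuned(op)      # same call site on every node
        return execute(op)                # call sites diverge: run untuned

    bcast = Collective("broadcast", is_root_based=True)
    handle_collective(bcast, 0x42, [0x42, 0x42, 0x42],
                      execute=lambda op: print(f"{op.name}: untuned"),
                      execute_tuned=lambda op: print(f"{op.name}: tuning session"))
    ```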

  15. Reliable Provisioning of Spot Instances for Compute-intensive Applications

    CERN Document Server

    Voorsluys, William

    2011-01-01

    Cloud computing providers are now offering their unused resources for leasing in the spot market, which has been considered the first step towards a full-fledged market economy for computational resources. Spot instances are virtual machines (VMs) available at lower prices than their standard on-demand counterparts. These VMs will run for as long as the current price is lower than the maximum bid price users are willing to pay per hour. Spot instances have been increasingly used for executing compute-intensive applications. In spite of an apparent economic advantage, due to the intermittent nature of biddable resources, application execution times may be prolonged or the applications may not finish at all. This paper proposes a resource allocation strategy that addresses the problem of running compute-intensive jobs on a pool of intermittent virtual machines, while also aiming to run applications in a fast and economical way. To mitigate potential unavailability periods, a multifaceted fault-aware resource provisioning ...
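
    A toy version of a fault-aware provisioning rule in the spirit of the paper might look like the sketch below; prices, thresholds, and the checkpointing policy are illustrative assumptions, not the strategy proposed by the authors:

    ```python
    def choose_instance(spot_price, bid, progress, checkpointed):
        """Pick where the next chunk of a compute-intensive job should run."""
        if spot_price <= bid:
            return "spot"            # cheapest option while the bid holds
        if progress > 0.9 and not checkpointed:
            return "on-demand"       # too close to completion to risk losing work
        return "wait-or-rebid"       # tolerate a gap, resume from a checkpoint

    print(choose_instance(spot_price=0.052, bid=0.040,
                          progress=0.95, checkpointed=False))  # -> on-demand
    ```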

  16. Computed Tomography Technology: Development and Applications for Defence

    Science.gov (United States)

    Baheti, G. L.; Saxena, Nisheet; Tripathi, D. K.; Songara, K. C.; Meghwal, L. R.; Meena, V. L.

    2008-09-01

    Computed Tomography (CT) has revolutionized the field of Non-Destructive Testing and Evaluation (NDT&E). Tomography for industrial applications warrants the design and development of customized solutions catering to specific visualization requirements. The present paper highlights Tomography Technology Solutions implemented at the Defence Laboratory, Jodhpur (DLJ). Details on the technological developments carried out and their utilization for various Defence applications have been covered.

  17. Applications of Deontic Logic in Computer Science: A Concise Overview

    NARCIS (Netherlands)

    Meyer, J.-J.Ch.; Meyer, John-Jules Ch.; Wieringa, Roelf J.

    1993-01-01

    Deontic logic is the logic that deals with actual as well as ideal behavior of systems. In this paper, we survey a number of applications of deontic logic in computer science that have arisen in the eighties, and give a systematic framework in which these applications can be classified. Many

  18. Applications of Parsing Theory to Computer-Assisted Instruction.

    Science.gov (United States)

    Markosian, Lawrence Z.; Ager, Tryg A.

    1983-01-01

    Applications of an LR-1 parsing algorithm to intelligent programs for computer assisted instruction in symbolic logic and foreign languages are discussed. The system has been adequately used for diverse instructional applications, including analysis of student input, generation of pattern drills, and modeling the student's understanding of the…

  19. Applications of computational tools in biosciences and medical engineering

    CERN Document Server

    Altenbach, Holm

    2015-01-01

     This book presents the latest developments and applications of computational tools related to the biosciences and medical engineering. It also reports the findings of different multi-disciplinary research projects, for example, from the areas of scaffolds and synthetic bones, implants and medical devices, and medical materials. It is also shown that the application of computational tools often requires mathematical and experimental methods. Computational tools such as the finite element methods, computer-aided design and optimization as well as visualization techniques such as computed axial tomography open up completely new research fields that combine the fields of engineering and bio/medical. Nevertheless, there are still hurdles since both directions are based on quite different ways of education. Often even the “language” can vary from discipline to discipline.

  20. Computational intelligence in digital forensics forensic investigation and applications

    CERN Document Server

    Choo, Yun-Huoy; Abraham, Ajith; Srihari, Sargur

    2014-01-01

    Computational Intelligence techniques have been widely explored in various domains, including forensics. Forensic analysis encompasses the study of pattern analysis that answers questions of interest in security, medical, legal, genetic and other studies. However, forensic analysis is usually performed through experiments in the lab, which is expensive both in cost and time. Therefore, this book seeks to explore the progress and advancement of computational intelligence techniques in different focus areas of forensic studies. This aims to build a stronger connection between computer scientists and forensic field experts. This book, Computational Intelligence in Digital Forensics: Forensic Investigation and Applications, is the first volume in the Intelligent Systems Reference Library series. The book presents original research results and innovative applications of computational intelligence in digital forensics. This edited volume contains seventeen chapters and presents the latest state-of-the-art advancement ...

  1. Towards Process Support for Migrating Applications to Cloud Computing

    DEFF Research Database (Denmark)

    Chauhan, Muhammad Aufeef; Babar, Muhammad Ali

    2012-01-01

    Cloud computing is an active area of research for industry and academia. There are a large number of organizations providing cloud computing infrastructure and services. In order to utilize these infrastructure resources and services, existing applications need to be migrated to clouds. However ... for supporting migration to cloud computing based on our experiences from migrating an Open Source System (OSS), Hackystat, to two different cloud computing platforms. We explain the process by performing a comparative analysis of our efforts to migrate Hackystat to Amazon Web Services and Google App Engine. We also report the potential challenges, suitable solutions, and lessons learned to support the presented process framework. We expect that the reported experiences can serve as guidelines for those who intend to migrate software applications to cloud computing...

  2. Reflections on Opening a Database Application Course in Secondary Vocational Schools

    Institute of Scientific and Technical Information of China (English)

    陈宗林

    2013-01-01

    This paper mainly expounds reflections on opening a database application course for computer and related majors in secondary vocational schools. According to the characteristics of computer and related majors, and combined with practical teaching experience, the paper takes an actual database application course as an example to elaborate on the choice of database language, the choice of teaching materials, network-assisted teaching, and commonly used teaching modes.

  3. Medical imaging technology reviews and computational applications

    CERN Document Server

    Dewi, Dyah

    2015-01-01

    This book presents the latest research findings and reviews in the field of medical imaging technology, covering ultrasound diagnostics approaches for detecting osteoarthritis, breast carcinoma and cardiovascular conditions, image guided biopsy and segmentation techniques for detecting lung cancer, image fusion, and simulating fluid flows for cardiovascular applications. It offers a useful guide for students, lecturers and professional researchers in the fields of biomedical engineering and image processing.

  4. Guidelines for Security of Computer Applications

    Science.gov (United States)

    2007-11-02

    [RUTHZ 77], [RUTHZ 78], [EDPAF 77], [IIASA 77], [SGCCA 75B], and [MAIRW 76] are effective in meeting all three security objectives; however, some are... and [IIASA 77]. Fields can be checked for: legitimate characters (format checks), and proper sequences with respect to corresponding fields in... [MAIRW 76], [JANCE 74], and [IIASA 77] referenced above. Integrated Test Facility (ITF): the ITF allows the performance of the application system...

  5. Outlook for digital computing and its applications, 1976--1995

    Energy Technology Data Exchange (ETDEWEB)

    Chin, H. W.; Lau, H.; McWilliams, T.; Weisberg, A.; Widdoes, L. C.; Wood, L.

    1976-08-20

    The current status of digital computing technology and its applications are briefly reviewed. A surprise-free (invention-precluding) scenario for predicting the advance of this technology and its applications for the next two decades is developed, and employed to set lower bounds on the performance of digital computing systems and their potential applications. Progress-pacing features are identified, and are seen to be primarily of hardware origin in the near term, and due to software limitations in the longer term. 5 figures, 2 tables.

  6. The computation of fixed points and applications

    CERN Document Server

    Todd, Michael J

    1976-01-01

    Fixed-point algorithms have diverse applications in economics, optimization, game theory and the numerical solution of boundary-value problems. Since Scarf's pioneering work [56,57] on obtaining approximate fixed points of continuous mappings, a great deal of research has been done in extending the applicability and improving the efficiency of fixed-point methods. Much of this work is available only in research papers, although Scarf's book [58] gives a remarkably clear exposition of the power of fixed-point methods. However, the algorithms described by Scarf have been superseded by the more sophisticated restart and homotopy techniques of Merrill [~8,~9] and Eaves and Saigal [1~,16]. To understand the more efficient algorithms one must become familiar with the notions of triangulation and simplicial approximation, whereas Scarf stresses the concept of primitive set. These notes are intended to introduce the most recent fixed-point methods and their applications to a wider audience. Our approach is therefore ...

  7. Computer-aided identification of potential TYK2 inhibitors from drug database

    Science.gov (United States)

    Zhang, Wei; Li, Jianzong; Huang, Zhixin; Wang, Haiyang; Luo, Hao; Wang, Xin; Zhou, Nan; Wu, Chuanfang; Bao, Jinku

    2016-10-01

    TYK2 is a member of the JAK family of protein tyrosine kinases, activated in response to various cytokines. It plays a crucial role in transducing signals downstream of various cytokine receptors, which are involved in proinflammatory responses associated with immunological diseases. Thus, the study of selective TYK2 inhibitors is one of the most popular fields in anti-inflammation drug development. Herein, we adopted molecular docking, molecular dynamics simulation and MM-PBSA binding free energy calculation to screen potential TYK2-selective inhibitors from the ZINC Drug Database. Finally, three small molecule drugs, ZINC12503271 (Gemifloxacin), ZINC05844792 (Nebivolol) and ZINC00537805 (Glyburide), were selected as potential TYK2-selective inhibitors. Compared to the known inhibitor 2,6-dichloro-N-{2-[(cyclopropylcarbonyl)amino]pyridin-4-yl}benzamide, these three candidates had better Grid and Amber scores from molecular docking, and preferable results from the binding free energy calculation as well. Moreover, the ATP-binding site and A-loop motif have been identified to play key roles in TYK2-targeted inhibitor discovery. It is expected that our study will pave the way for the design of potent TYK2 inhibitors as new drugs to treat a wide variety of immunological diseases, such as inflammatory diseases, multiple sclerosis, psoriasis, and inflammatory bowel disease (IBD).

  8. Concurrency Control Algorithms with a Balanced Blink-tree Database Index

    Institute of Scientific and Technical Information of China (English)

    包斌; 李亚岗

    2016-01-01

    Concerning the concurrency control mechanism for a multiversion database index based on the Blink-tree, a new concurrency control algorithm for Blink-tree structure modifications is proposed. The algorithm divides a Blink-tree structure modification into several smaller atomic modifications, which can run concurrently and without deadlock. Experimental results show that the new algorithm improves concurrency and transaction throughput while retaining the consistency and balance of the Blink-tree structure.
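
    The classic device that makes such concurrent structure modifications safe is the Lehman-Yao right-link: each node carries a high key and a pointer to its right sibling, so a reader that arrives at a node mid-split can recover by following the link instead of holding locks across levels. Below is a minimal sketch of that traversal rule; the node layout and helper names are illustrative assumptions, not taken from the cited algorithm.

```python
# Illustrative sketch of the Lehman-Yao right-link rule used by Blink-trees.
# Node layout and names are hypothetical, not from the cited paper.

class Node:
    def __init__(self, keys, children=None, high_key=None, right=None):
        self.keys = keys            # sorted separator or leaf keys
        self.children = children    # None for leaf nodes
        self.high_key = high_key    # upper bound of keys covered by this node
        self.right = right          # right sibling created by a split

def search(node, key):
    """Descend to the leaf for `key`, chasing right-links past
    concurrent splits instead of lock-coupling across levels."""
    while True:
        # If a split moved `key` to a sibling, follow the right-link.
        while node.high_key is not None and key >= node.high_key:
            node = node.right
        if node.children is None:       # reached a leaf
            return key in node.keys
        # Pick the child whose key range contains `key`.
        i = 0
        while i < len(node.keys) and key >= node.keys[i]:
            i += 1
        node = node.children[i]

leaf1 = Node(keys=[1, 3], high_key=5)
leaf2 = Node(keys=[5, 8])
leaf1.right = leaf2
root = Node(keys=[5], children=[leaf1, leaf2])
print(search(root, 8))  # True
```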

  9. BindingDB in 2015: A public database for medicinal chemistry, computational chemistry and systems pharmacology.

    Science.gov (United States)

    Gilson, Michael K; Liu, Tiqing; Baitaluk, Michael; Nicola, George; Hwang, Linda; Chong, Jenny

    2016-01-04

    BindingDB, www.bindingdb.org, is a publicly accessible database of experimental protein-small molecule interaction data. Its collection of over a million data entries derives primarily from scientific articles and, increasingly, US patents. BindingDB provides many ways to browse and search for data of interest, including an advanced search tool, which can cross searches of multiple query types, including text, chemical structure, protein sequence and numerical affinities. The PDB and PubMed provide links to data in BindingDB, and vice versa; and BindingDB provides links to pathway information, the ZINC catalog of available compounds, and other resources. The BindingDB website offers specialized tools that take advantage of its large data collection, including ones to generate hypotheses for the protein targets bound by a bioactive compound, and for the compounds bound by a new protein of known sequence; and virtual compound screening by maximal chemical similarity, binary kernel discrimination, and support vector machine methods. Specialized data sets are also available, such as binding data for hundreds of congeneric series of ligands, drawn from BindingDB and organized for use in validating drug design methods. BindingDB offers several forms of programmatic access, and comes with extensive background material and documentation. Here, we provide the first update of BindingDB since 2007, focusing on new and unique features and highlighting directions of importance to the field as a whole.

  10. Theory, computation, and application of exponential splines

    Science.gov (United States)

    Mccartin, B. J.

    1981-01-01

    A generalization of the semiclassical cubic spline known in the literature as the exponential spline is discussed. In actuality, the exponential spline represents a continuum of interpolants ranging from the cubic spline to the linear spline. A particular member of this family is uniquely specified by the choice of certain tension parameters. The theoretical underpinnings of the exponential spline are outlined. This development roughly parallels the existing theory for cubic splines. The primary extension lies in the ability of the exponential spline to preserve convexity and monotonicity present in the data. Next, the numerical computation of the exponential spline is discussed. A variety of numerical devices are employed to produce a stable and robust algorithm. An algorithm for the selection of tension parameters that will produce a shape-preserving approximant is developed. A sequence of selected curve-fitting examples is presented which clearly demonstrates the advantages of exponential splines over cubic splines.
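
    For reference, the tension-spline construction underlying this family can be written explicitly. The form below is a standard presentation consistent with the abstract (exact normalizations vary by author): on each interval the spline solves a fourth-order equation whose solutions blend polynomial and hyperbolic terms.

```latex
% Exponential (tension) spline on [x_i, x_{i+1}] with tension parameter p_i >= 0:
% s satisfies  s'''' - p_i^2 s'' = 0  on the interval, so
s(x) = a_i + b_i\,(x - x_i) + c_i \sinh\!\bigl(p_i (x - x_i)\bigr)
       + d_i \cosh\!\bigl(p_i (x - x_i)\bigr), \qquad x \in [x_i, x_{i+1}]
% p_i -> 0 recovers the cubic spline; p_i -> infinity tends to the linear interpolant.
```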

  11. Computer, Informatics, Cybernetics and Applications : Proceedings of the CICA 2011

    CERN Document Server

    Hua, Ertian; Lin, Yun; Liu, Xiaozhu

    2012-01-01

    Computer Informatics Cybernetics and Applications offers 91 papers chosen for publication from among 184 papers accepted for presentation to the International Conference on Computer, Informatics, Cybernetics and Applications 2011 (CICA 2011), held in Hangzhou, China, September 13-16, 2011. The CICA 2011 conference provided a forum for engineers and scientists in academia, industry, and government to address the most innovative research and development including technical challenges and social, legal, political, and economic issues, and to present and discuss their ideas, results, work in progress and experience on all aspects of Computer, Informatics, Cybernetics and Applications. Reflecting the broad scope of the conference, the contents are organized in these topical categories: Communication Technologies and Applications Intelligence and Biometrics Technologies Networks Systems and Web Technologies Data Modeling and Programming Languages Digital Image Processing Optimization and Scheduling Education and In...

  12. Soft Computing Applications in Optimization, Control, and Recognition

    CERN Document Server

    Castillo, Oscar

    2013-01-01

    Soft computing includes several intelligent computing paradigms, like fuzzy logic, neural networks, and bio-inspired optimization algorithms. This book describes the application of soft computing techniques to intelligent control, pattern recognition, and optimization problems. The book is organized in four main parts. The first part deals with nature-inspired optimization methods and their applications. Papers included in this part propose new models for achieving intelligent optimization in different application areas. The second part discusses hybrid intelligent systems for achieving control. Papers included in this part make use of nature-inspired techniques, like evolutionary algorithms, fuzzy logic and neural networks, for the optimal design of intelligent controllers for different kind of applications. Papers in the third part focus on intelligent techniques for pattern recognition and propose new methods to solve complex pattern recognition problems. The fourth part discusses new theoretical concepts ...

  13. Database as a service system for business database application hosting and its resource optimization technique

    Institute of Scientific and Technical Information of China (English)

    王卓昊; 王希诚

    2011-01-01

    Database as a Service (DBaaS) is becoming a research hotspot in cloud computing. As a main application domain, business database application hosting puts forward requirements for data and performance isolation and reliability guarantees. To satisfy these requirements, this paper proposes a virtual machine based database hosting method and a corresponding DBaaS system. Furthermore, aiming at the key problem of optimally allocating resources (such as CPU and memory) to the virtual machines that host the database applications of different tenants, the paper formalizes the constraint programming problem and solves it with a greedy algorithm based on a performance model and a utility function. An application example and experiments show that the algorithm can optimize resource cost while meeting each tenant's database performance demands.

  14. Introduction to lattice theory with computer science applications

    CERN Document Server

    Garg, Vijay K

    2015-01-01

    A computational perspective on partial order and lattice theory, focusing on algorithms and their applications This book provides a uniform treatment of the theory and applications of lattice theory. The applications covered include tracking dependency in distributed systems, combinatorics, detecting global predicates in distributed systems, set families, and integer partitions. The book presents algorithmic proofs of theorems whenever possible. These proofs are written in the calculational style advocated by Dijkstra, with arguments explicitly spelled out step by step. The author's intent

  15. Development and application of indices using large volcanic databases for a global hazard and risk assessment

    Science.gov (United States)

    Brown, Sarah; Auker, Melanie; Cottrell, Elizabeth; Delgado Granados, Hugo; Loughlin, Sue; Ortiz Guerrero, Natalie; Sparks, Steve; Vye-Brown, Charlotte; Taskforce, Indices

    2015-04-01

    monitoring levels around the world; this is designed to be complementary to WOVOdat (the World Organisation of Volcano Observatories: Database of Volcanic Unrest). An index developed from this has been adapted and applied to a global dataset, showing that approximately one third of historically active volcanoes have levels of ground-based monitoring that may permit analysis of magma movements and activity forecasts. Some unmonitored volcanoes score highly for both hazard and population risk. The development and application of such indices depends on the availability and accessibility of large, systematic, sustainable and compatible databases. These indices help to harmonise approaches and allow first-order assessments, highlighting gaps in knowledge and areas where research and investment are recommended.

  16. Mobile devices and computing cloud resources allocation for interactive applications

    Directory of Open Access Journals (Sweden)

    Krawczyk Henryk

    2017-06-01

    Full Text Available Using mobile devices such as smartphones or iPads for various interactive applications is currently very common. In the case of complex applications, e.g. chess games, the capabilities of these devices are insufficient to run the application in real time. One solution is to use cloud computing. However, this raises an optimization problem of allocating mobile device and cloud resources. An iterative heuristic algorithm for application distribution is proposed. The algorithm minimizes the energy cost of application execution under a constrained execution time.
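
    The general pattern the abstract describes, iteratively moving work to the cloud to cut energy while a deadline constraint holds, can be sketched greedily. The task model, field names, and numbers below are invented for illustration and are not the authors' actual formulation.

```python
# Hypothetical offloading heuristic: start with every task on the device,
# then repeatedly offload whichever task most reduces total energy while
# the summed execution time (device + cloud incl. transfer) meets the deadline.

def plan(tasks, deadline):
    placement = {t["name"]: "device" for t in tasks}

    def totals(p):
        time = sum(t[p[t["name"]] + "_time"] for t in tasks)
        energy = sum(t[p[t["name"]] + "_energy"] for t in tasks)
        return time, energy

    while True:
        _, cur_energy = totals(placement)
        best = None
        for t in tasks:
            if placement[t["name"]] == "device":
                trial = dict(placement, **{t["name"]: "cloud"})
                time, energy = totals(trial)
                if time <= deadline and energy < cur_energy:
                    if best is None or energy < best[1]:
                        best = (trial, energy)
        if best is None:
            return placement
        placement = best[0]

tasks = [
    {"name": "move_gen", "device_time": 4.0, "device_energy": 8.0,
     "cloud_time": 1.5, "cloud_energy": 2.0},   # heavy search: offloading pays
    {"name": "render",   "device_time": 0.5, "device_energy": 0.6,
     "cloud_time": 2.0, "cloud_energy": 1.0},   # cheap locally: keep on device
]
print(plan(tasks, deadline=3.0))  # {'move_gen': 'cloud', 'render': 'device'}
```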

  17. Application of bilateral filtration with weight coefficients for similarity metric calculation in optical flow computation algorithm

    Science.gov (United States)

    Panin, S. V.; Titkov, V. V.; Lyubutin, P. S.; Chemezov, V. O.; Eremin, A. V.

    2016-11-01

    The use of bilateral-filter weight coefficients to compute a weighted similarity metric between image regions in an optical flow computation algorithm employing 3-dimensional recursive search (3DRS) was investigated. Testing the algorithm on images from the public Middlebury benchmark database demonstrated the effectiveness of this weighted similarity metric for the image processing problem. It was shown that, to reach higher noise resistance during vector field construction, the parameter values used in calculating the weight coefficients must be matched to the texture features of the image. An adaptation technique that eliminates the manual selection of these parameter values was proposed and its efficiency demonstrated.
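
    A bilateral weight combines a spatial falloff with a photometric (range) falloff, so pixels that are far from the block center or very different in intensity contribute less to the matching cost. The sketch below shows one common way to fold such weights into a sum-of-absolute-differences metric; the sigma values and the exact weighting scheme are illustrative assumptions, not the paper's parameterization.

```python
import numpy as np

def bilateral_weighted_sad(ref, cand, sigma_s=2.0, sigma_r=25.0):
    """Weighted SAD between two equally sized blocks.

    Weights follow the bilateral-filter form: a Gaussian on the distance
    from the block center times a Gaussian on the intensity difference
    from the center pixel of the reference block.
    """
    h, w = ref.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    spatial = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma_s ** 2))
    rng = np.exp(-((ref - ref[h // 2, w // 2]) ** 2) / (2 * sigma_r ** 2))
    weights = spatial * rng
    return np.sum(weights * np.abs(ref - cand)) / np.sum(weights)

# Toy usage: identical blocks score 0; a uniformly shifted block scores higher.
block = np.arange(25, dtype=float).reshape(5, 5)
print(bilateral_weighted_sad(block, block))          # 0.0
print(bilateral_weighted_sad(block, block + 10.0))   # 10.0
```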

  18. [Mobile phone-computer wireless interactive graphics transmission technology and its medical application].

    Science.gov (United States)

    Huang, Shuo; Liu, Jing

    2010-05-01

    The application of clinical digital medical imaging has raised many tough issues to tackle, such as data storage, management, and information sharing. Here we investigated a mobile phone based medical image management system capable of personal medical imaging information storage, management, and comprehensive health information analysis. The technologies underlying the system, spanning wireless transmission, the capabilities of the phone in mobile health care, and the management of a mobile medical database, are discussed. Taking the transmission of medical infrared images between phone and computer as an example, the working principle of the present system is demonstrated.

  19. LHC Databases on the Grid: Achievements and Open Issues

    CERN Document Server

    Vaniachine, A V

    2010-01-01

    To extract physics results from the recorded data, the LHC experiments are using Grid computing infrastructure. The event data processing on the Grid requires scalable access to non-event data (detector conditions, calibrations, etc.) stored in relational databases. The database-resident data are critical for the event data reconstruction processing steps and often required for physics analysis. This paper reviews LHC experience with database technologies for the Grid computing. List of topics includes: database integration with Grid computing models of the LHC experiments; choice of database technologies; examples of database interfaces; distributed database applications (data complexity, update frequency, data volumes and access patterns); scalability of database access in the Grid computing environment of the LHC experiments. The review describes areas in which substantial progress was made and remaining open issues.

  20. CERN Computing Colloquium | Scientific Databases at Scale and SciDB | 27 May

    CERN Multimedia

    2013-01-01

    by Dr. Michael Stonebraker (MIT - Massachusetts Institute of Technology - Cambridge MA, USA) Monday 27 May 2013 from 2 p.m. to 4 p.m. at CERN ( 222-R-001 - Filtration Plant ) Abstract: As a general rule, scientists have shunned relational data management systems (RDBMS), choosing instead to “roll their own” on top of file system technology.  We first discuss why file systems are a poor choice for science data storage, especially as data volumes become large and scalability becomes important. Then, we continue with the reasons why RDBMSs work poorly on most science applications.  These include a data model “impedance mismatch” and missing features. We discuss array DBMSs, and why they are a much better choice for science applications, and use SciDB as an exemplar of this new class of DBMSs. Most science applications require a mix of data management and complex analytics.  In most cases, the analytics entail a sequence of linear a...

  1. First Database Course--Keeping It All Organized

    Science.gov (United States)

    Baugh, Jeanne M.

    2015-01-01

    All Computer Information Systems programs require a database course for their majors. This paper describes an approach to such a course in which real world examples, both design projects and actual database application projects are incorporated throughout the semester. Students are expected to apply the traditional database concepts to actual…

  2. Application of computed tomography to spinal disorders

    Energy Technology Data Exchange (ETDEWEB)

    Sha, N.; Kurihara, A.; Kataoka, O. (Kobe Univ. (Japan). School of Medicine)

    1980-08-01

    The axial tomographic examination of the spine and its soft tissues is now readily available for orthopaedic surgery. If the appropriate conditions are maintained, computed tomography (CT) can provide useful information concerning the diagnosis and the treatment of spinal diseases. There are, however, some pitfalls in interpreting CT scans: 1) the existence of the lesion may be masked, and 2) its extent and configuration may be over- or under-evaluated depending on such technical factors as the slicing level, the slicing angle, the window width, and the window level. Experimental studies were carried out on a plaster of Paris model of the spine and a cadaver spine to determine the appropriate technical factors by which the CT (EMI whole body CT scanner 5005) can be applied accurately to a diseased spine. The factors obtained were then applied in examining ninety-nine patients with various spinal disorders. Window levels ranging between 100 and 150 were found to be most appropriate. The slicing angle should be 90°, or perpendicular to the long axis of the object under study. However, deviations of +10 or -10 degrees are acceptable. The CT view of the spine may be divided into two patterns at the cervical and thoracic levels and into three patterns at the lumbar level. In addition, the usefulness and the diagnostic value of CT for various spinal problems are discussed based on our clinical material.

  3. Application of the British Food Standards Agency nutrient profiling system in a French food composition database.

    Science.gov (United States)

    Julia, Chantal; Kesse-Guyot, Emmanuelle; Touvier, Mathilde; Méjean, Caroline; Fezeu, Léopold; Hercberg, Serge

    2014-11-28

    Nutrient profiling systems are powerful tools for public health initiatives, as they aim at categorising foods according to their nutritional quality. The British Food Standards Agency (FSA) nutrient profiling system (FSA score) has been validated in a British food database, but the application of the model in other contexts has not yet been evaluated. The objective of the present study was to assess the application of the British FSA score in a French food composition database. Foods from the French NutriNet-Santé study food composition table were categorised according to their FSA score using the Office of Communication (OfCom) cut-off value ('healthier' ≤ 4 for foods and ≤ 1 for beverages; 'less healthy' >4 for foods and >1 for beverages) and distribution cut-offs (quintiles for foods, quartiles for beverages). Foods were also categorised according to the food groups used for the French Programme National Nutrition Santé (PNNS) recommendations. Foods were weighted according to their relative consumption in a sample drawn from the NutriNet-Santé study (n 4225), representative of the French population. Classification of foods according to the OfCom cut-offs was consistent with food groups described in the PNNS: 97·8 % of fruit and vegetables, 90·4 % of cereals and potatoes and only 3·8 % of sugary snacks were considered as 'healthier'. Moreover, variability in the FSA score allowed for a discrimination between subcategories in the same food group, confirming the possibility of using the FSA score as a multiple category system, for example as a basis for front-of-pack nutrition labelling. Application of the FSA score in the French context would adequately complement current public health recommendations.
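
    The OfCom cut-offs quoted above amount to a one-line classification rule. A minimal sketch using only the thresholds stated in the abstract ('healthier' ≤ 4 for foods, ≤ 1 for beverages):

```python
def ofcom_class(fsa_score, is_beverage=False):
    """Binary OfCom categorisation of a food's FSA nutrient profile score."""
    cutoff = 1 if is_beverage else 4
    return "healthier" if fsa_score <= cutoff else "less healthy"

print(ofcom_class(3))                    # healthier
print(ofcom_class(2, is_beverage=True))  # less healthy
```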

  4. Exploring the Ligand-Protein Networks in Traditional Chinese Medicine: Current Databases, Methods, and Applications

    Directory of Open Access Journals (Sweden)

    Mingzhu Zhao

    2013-01-01

    Full Text Available Traditional Chinese medicine (TCM), with thousands of years of clinical application in China and other Asian countries, is the pioneer of "multicomponent-multitarget" and network pharmacology. Although its efficacy is not in doubt, it is difficult to elucidate a convincing underlying mechanism for TCM due to its complex composition and unclear pharmacology. The use of ligand-protein networks has gained significant value in drug discovery, while its application to TCM is still at an early stage. This paper first surveys TCM databases for virtual screening, which have greatly expanded in size and data diversity in recent years. On that basis, different screening methods and strategies for identifying active ingredients and targets of TCM are outlined according to the amount of network information available, on the sides of both ligand bioactivity and protein structure. Furthermore, applications of successful in silico target identification attempts are discussed in detail, along with experiments in exploring the ligand-protein networks of TCM. Finally, it is concluded that ligand-protein networks can be used not only to predict the protein targets of a small molecule, but also to explore the mode of action of TCM.

  5. Human-computer interface incorporating personal and application domains

    Science.gov (United States)

    Anderson, Thomas G.

    2011-03-29

    The present invention provides a human-computer interface. The interface includes provision of an application domain, for example corresponding to a three-dimensional application. The user is allowed to navigate and interact with the application domain. The interface also includes a personal domain, offering the user controls and interaction distinct from the application domain. The separation into two domains allows the most suitable interface methods in each: for example, three-dimensional navigation in the application domain, and two- or three-dimensional controls in the personal domain. Transitions between the application domain and the personal domain are under control of the user, and the transition method is substantially independent of the navigation in the application domain. For example, the user can fly through a three-dimensional application domain, and always move to the personal domain by moving a cursor near one extreme of the display.

  6. Deep Learning and Applications in Computational Biology

    KAUST Repository

    Zeng, Jianyang

    2016-01-26

    RNA-binding proteins (RBPs) play important roles in the post-transcriptional control of RNAs. Identifying RBP binding sites and characterizing RBP binding preferences are key steps toward understanding the basic mechanisms of post-transcriptional gene regulation. Though numerous computational methods have been developed for modeling RBP binding preferences, discovering a complete structural representation of the RBP targets by integrating their available structural features in all three dimensions is still a challenging task. In this work, we develop a general and flexible deep learning framework for modeling structural binding preferences and predicting binding sites of RBPs, which takes (predicted) RNA tertiary structural information into account for the first time. Our framework constructs a unified representation that characterizes the structural specificities of RBP targets in all three dimensions, which can be further used to predict novel candidate binding sites and discover potential binding motifs. Through testing on real CLIP-seq datasets, we have demonstrated that our deep learning framework can automatically extract effective hidden structural features from the encoded raw sequence and structural profiles, and predict accurate RBP binding sites. In addition, we have conducted the first study to show that integrating additional RNA tertiary structural features can improve the model performance in predicting RBP binding sites, especially for the polypyrimidine tract-binding protein (PTB), which also provides new evidence to support the view that RBPs may have specific tertiary structural binding preferences. In particular, the tests on internal ribosome entry site (IRES) segments yield satisfactory results with experimental support from the literature and further demonstrate the necessity of incorporating RNA tertiary structural information into the prediction model. The source code of our approach can be found in https://github.com/thucombio/deepnet-rbp.
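
    As a rough illustration of the kind of model the abstract describes, convolutional feature extraction over an encoded sequence plus per-base structure profiles, ending in a binding-site classifier, here is a minimal PyTorch sketch. The channel counts, layer sizes, and input encoding are invented for illustration and do not reproduce the authors' architecture (their code is at the GitHub link above).

```python
import torch
import torch.nn as nn

class RBPBindingNet(nn.Module):
    """Toy CNN over a one-hot RNA sequence concatenated with per-base
    structure profiles; all sizes are illustrative only."""

    def __init__(self, seq_channels=4, struct_channels=6):
        super().__init__()
        in_ch = seq_channels + struct_channels
        self.features = nn.Sequential(
            nn.Conv1d(in_ch, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),   # pool over the whole window
        )
        self.classifier = nn.Linear(64, 1)   # binding vs. non-binding logit

    def forward(self, x):                    # x: (batch, channels, length)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)

model = RBPBindingNet()
x = torch.randn(8, 10, 101)                  # batch of 8 encoded windows
print(model(x).shape)                        # torch.Size([8, 1])
```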

  7. A PC/workstation cluster computing environment for reservoir engineering simulation applications

    Energy Technology Data Exchange (ETDEWEB)

    Hermes, C.E.; Koo, J. [Texaco Inc., Houston, TX (United States). Exploration and Production Technology Dept.

    1995-06-01

    Like the rest of the petroleum industry, Texaco has been transferring its applications and databases from mainframes to PCs and workstations. This transition has been very positive because it provides an environment for integrating applications, increases end-user productivity, and in general reduces overall computing costs. On the down side, the transition typically results in a dramatic increase in workstation purchases and raises concerns regarding the cost and effective management of computing resources in this new environment. The workstation transition also places the user in a Unix computing environment which, to say the least, can be quite frustrating to learn and to use. This paper describes the approach, philosophy, architecture, and current status of the new reservoir engineering/simulation computing environment developed at Texaco's E and P Technology Dept. (EPTD) in Houston. The environment is representative of those under development at several other large oil companies and is based on a cluster of IBM and Silicon Graphics Intl. (SGI) workstations connected by a fiber-optics communications network and engineering PCs connected to local area networks, or Ethernets. Because computing resources and software licenses are shared among a group of users, the new environment enables the company to get more out of its investments in workstation hardware and software.

  8. Autonomic Management of Application Workflows on Hybrid Computing Infrastructure

    Directory of Open Access Journals (Sweden)

    Hyunjoo Kim

    2011-01-01

    Full Text Available In this paper, we present a programming and runtime framework that enables the autonomic management of complex application workflows on hybrid computing infrastructures. The framework is designed to address system and application heterogeneity and dynamics to ensure that application objectives and constraints are satisfied. The need for such autonomic system and application management is becoming critical as computing infrastructures become increasingly heterogeneous, integrating different classes of resources from high-end HPC systems to commodity clusters and clouds. For example, the framework presented in this paper can be used to provision the appropriate mix of resources based on application requirements and constraints. The framework also monitors the system/application state and adapts the application and/or resources to respond to changing requirements or environment. To demonstrate the operation of the framework and to evaluate its ability, we employ a workflow used to characterize an oil reservoir executing on a hybrid infrastructure composed of TeraGrid nodes and Amazon EC2 instances of various types. Specifically, we show how different application objectives such as acceleration, conservation and resilience can be effectively achieved while satisfying deadline and budget constraints, using an appropriate mix of dynamically provisioned resources. Our evaluations also demonstrate that public clouds can be used to complement and reinforce the scheduling and usage of traditional high performance computing infrastructure.

  9. First International Conference on Intelligent Computing and Applications

    CERN Document Server

    Kar, Rajib; Das, Swagatam; Panigrahi, Bijaya

    2015-01-01

    The idea of the 1st International Conference on Intelligent Computing and Applications (ICICA 2014) is to bring research engineers, scientists, industrialists, scholars and students together from around the globe to present ongoing research activities and hence to encourage research interactions between universities and industries. The conference provides opportunities for the delegates to exchange new ideas, applications and experiences, to establish research relations and to find global partners for future collaboration. The proceedings cover the latest progress in cutting-edge research on various research areas of Image, Language Processing, Computer Vision and Pattern Recognition, Machine Learning, Data Mining and Computational Life Sciences, Management of Data including Big Data and Analytics, Distributed and Mobile Systems including Grid and Cloud infrastructure, Information Security and Privacy, VLSI, Electronic Circuits, Power Systems, Antenna, Computational fluid dynamics & Hea...

  10. Applications of TsunAWI: Operational scenario database in Indonesia, case studies in Chile

    Science.gov (United States)

    Rakowsky, Natalja; Harig, Sven; Immerz, Antonia; Androsov, Alexey; Hiller, Wolfgang; Schröter, Jens

    2016-04-01

    The numerical simulation code TsunAWI was developed in the framework of the German-Indonesian Tsunami Early Warning System (GITEWS). The numerical simulation of prototypic tsunami scenarios plays a decisive role in a priori risk assessment for coastal regions and in the early warning process itself. TsunAWI is suited for both tasks. It is based on a finite element discretisation, employs unstructured grids with high resolution along the coast, and includes inundation. This contribution presents two fields of application. In the Indonesian tsunami early warning system, the existing TsunAWI scenario database covers the Sunda subduction zone from Sumatra to the Lesser Sunda Islands with 715 epicenters and 4500 scenarios. In a collaboration with Geoscience Australia, we support the scientific staff at the Indonesian warning centre in extending the database to the remaining tectonic zones in the Indonesian Archipelago. The extension has started for North Sulawesi and the West and East Maluku Islands. For the Hydrographic and Oceanographic Service of the Chilean Navy (SHOA), we calculated a small scenario database of 100 scenarios (sources by Universidad de Chile) for a lightweight decision support system prototype (built by DLR). The earthquake and tsunami events on 1 April 2014 and 16 November 2016 showed the practical use of this approach in comparison to hindcasts of these events.

  11. Intelligent Computational Systems. Opening Remarks: CFD Application Process Workshop

    Science.gov (United States)

    VanDalsem, William R.

    1994-01-01

    This discussion will include a short review of the challenges that must be overcome if computational physics technology is to have a larger impact on the design cycles of U.S. aerospace companies. Some of the potential solutions to these challenges may come from the information sciences fields. A few examples of potential computational physics/information sciences synergy will be presented, as motivation and inspiration for the Improving The CFD Applications Process Workshop.

  12. Computational Science Applications in Manufacturing (CSAM) workshop evaluation report

    Energy Technology Data Exchange (ETDEWEB)

    Bradford, J.; Dixon, L.; Rutherford, W.

    1994-09-01

    The Computational Science Applications in Manufacturing (CSAM) workshop is a program designed to expose and train high school students in the techniques used in computational science as they pertain to manufacturing. This effort was sponsored by the AlliedSignal Inc., Kansas City Division (KCD) in cooperation with the Department of Energy (DOE) and their initiative to support education with respect to the advances in technology.

  13. Computational biomechanics for medicine from algorithms to models and applications

    CERN Document Server

    Joldes, Grand; Nielsen, Poul; Doyle, Barry; Miller, Karol

    2017-01-01

    This volume comprises the latest developments in both fundamental science and patient-specific applications, discussing topics such as: cellular mechanics; injury biomechanics; biomechanics of heart and vascular system; medical image analysis; and both patient-specific fluid dynamics and solid mechanics simulations. With contributions from researchers world-wide, the Computational Biomechanics for Medicine series of titles provides an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements.

  14. Cloud-Based NoSQL Open Database of Pulmonary Nodules for Computer-Aided Lung Cancer Diagnosis and Reproducible Research.

    Science.gov (United States)

    Ferreira Junior, José Raniery; Oliveira, Marcelo Costa; de Azevedo-Marques, Paulo Mazzoncini

    2016-12-01

    Lung cancer is the leading cause of cancer-related deaths in the world, and its main manifestation is pulmonary nodules. Detection and classification of pulmonary nodules are challenging tasks that must be done by qualified specialists, but image interpretation errors make those tasks difficult. In order to aid radiologists in those hard tasks, it is important to integrate computer-based tools with the lesion detection, pathology diagnosis, and image interpretation processes. However, computer-aided diagnosis research faces the problem of not having enough shared medical reference data for the development, testing, and evaluation of computational methods for diagnosis. In order to minimize this problem, this paper presents a public non-relational, document-oriented, cloud-based database of pulmonary nodules characterized by 3D texture attributes, identified by experienced radiologists and classified in nine different subjective characteristics by the same specialists. Our goal with the development of this database is to improve computer-aided lung cancer diagnosis and pulmonary nodule detection and classification research through the deployment of this database in a cloud Database as a Service framework. Pulmonary nodule data was provided by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), image descriptors were acquired by a volumetric texture analysis, and the database schema was developed using a document-oriented Not only Structured Query Language (NoSQL) approach. The proposed database now contains 379 exams, 838 nodules, and 8237 images, of which 4029 are CT scans and 4208 are manually segmented nodules, and it is allocated in a MongoDB instance on a cloud infrastructure.
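
    A document-oriented layout like the one described stores each nodule as a self-contained document carrying its texture attributes and radiologist ratings. The sketch below shows what such a document and a query might look like with pymongo; the field names and values are invented for illustration and are not the project's actual schema.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumed local instance
db = client["nodule_db"]

# Hypothetical nodule document: 3D texture attributes plus subjective ratings.
db.nodules.insert_one({
    "exam_id": "LIDC-0001",
    "nodule_id": 1,
    "texture_3d": {"energy": 0.12, "entropy": 4.7, "contrast": 310.5},
    "ratings": {"malignancy": 4, "spiculation": 2, "texture": 5},
    "segmented": True,
})

# Find candidate malignant nodules by a rating threshold.
for doc in db.nodules.find({"ratings.malignancy": {"$gte": 4}}):
    print(doc["exam_id"], doc["nodule_id"])
```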

  15. A new Volcanic managEment Risk Database desIgn (VERDI): Application to El Hierro Island (Canary Islands)

    Science.gov (United States)

    Bartolini, S.; Becerril, L.; Martí, J.

    2014-11-01

    One of the most important issues in modern volcanology is the assessment of volcanic risk, which will depend - among other factors - on both the quantity and quality of the available data and an optimum storage mechanism. This will require the design of purpose-built databases that take into account data format and availability and afford easy data storage and sharing, and will provide for a more complete risk assessment that combines different analyses but avoids any duplication of information. Data contained in any such database should facilitate spatial and temporal analysis that will (1) produce probabilistic hazard models for future vent opening, (2) simulate volcanic hazards and (3) assess their socio-economic impact. We describe the design of a new spatial database structure, VERDI (Volcanic managEment Risk Database desIgn), which allows different types of data, including geological, volcanological, meteorological, monitoring and socio-economic information, to be manipulated, organized and managed. At the root of the design is the need to ensure that VERDI serves as a tool for connecting different kinds of data sources, GIS platforms and modeling applications. We present an overview of the database design, its components and the attributes that play an important role in the database model. The potential of the VERDI structure and the possibilities it offers with regard to data organization are shown through its application on El Hierro (Canary Islands). The VERDI database will provide scientists and decision makers with a useful tool that will assist in conducting volcanic risk assessment and management.

  16. Geometric computations with interval and new robust methods applications in computer graphics, GIS and computational geometry

    CERN Document Server

    Ratschek, H

    2003-01-01

    This undergraduate and postgraduate text will familiarise readers with interval arithmetic and related tools to gain reliable and validated results and logically correct decisions for a variety of geometric computations, plus the means for alleviating the effects of the errors. It also considers computations on geometric point-sets, which are neither robust nor reliable when processed with standard methods. The authors provide two effective tools for obtaining correct results: (a) interval arithmetic, and (b) ESSA, the new powerful algorithm which improves many geometric computations and makes th
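
    Interval arithmetic replaces each real value with an enclosing interval and propagates the bounds, so a geometric predicate can be trusted whenever its result interval excludes zero. A minimal sketch of the idea follows (this illustrates the general technique, not the book's ESSA algorithm; directed-rounding control is omitted, so it shows the logic rather than a fully rigorous enclosure).

```python
class Interval:
    """Closed interval [lo, hi] with just the arithmetic needed for a
    2D orientation predicate. Rounding control is omitted for brevity."""

    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, hi if hi is not None else lo

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        products = [self.lo * o.lo, self.lo * o.hi,
                    self.hi * o.lo, self.hi * o.hi]
        return Interval(min(products), max(products))

def orientation(ax, ay, bx, by, cx, cy):
    """Sign of the triangle-area determinant, or None if undecidable."""
    ax, ay, bx, by, cx, cy = (Interval(v) for v in (ax, ay, bx, by, cx, cy))
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    if det.lo > 0:
        return "left turn"
    if det.hi < 0:
        return "right turn"
    return None  # interval straddles zero: fall back to exact arithmetic

print(orientation(0, 0, 1, 0, 0.5, 1e-12))  # left turn (barely)
```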

  17. New orthopaedic implant management tool for computer-assisted planning, navigation, and simulation: from implant CAD files to a standardized XML-based implant database.

    Science.gov (United States)

    Sagbo, S; Blochaou, F; Langlotz, F; Vangenot, C; Nolte, L-P; Zheng, G

    2005-01-01

    Computer-Assisted Orthopaedic Surgery (CAOS) has made much progress over the last 10 years. Navigation systems have been recognized as important tools that help surgeons, and various such systems have been developed. A disadvantage of these systems is that they use non-standard formalisms and techniques. As a result, there are no standard concepts for implant and tool management or data formats to store information for use in 3D planning and navigation. We addressed these limitations and developed a practical and generic solution that offers benefits for surgeons, implant manufacturers, and CAS application developers. We developed a virtual implant database containing geometrical as well as calibration information for orthopedic implants and instruments, with a focus on trauma. This database has been successfully tested for various applications in the client/server mode. The implant information is not static, however, because manufacturers periodically revise their implants, resulting in the deletion of some implants and the introduction of new ones. Tracking these continuous changes and keeping CAS systems up to date is a tedious task if done manually. This leads to additional costs for system development, and some errors are inevitably generated due to the huge amount of information that has to be processed. To ease management with respect to implant life cycle, we developed a tool to assist end-users (surgeons, hospitals, CAS system providers, and implant manufacturers) in managing their implants. Our system can be used for pre-operative planning and intra-operative navigation, and also for any surgical simulation involving orthopedic implants. Currently, this tool allows addition of new implants, modification of existing ones, deletion of obsolete implants, export of a given implant, and also creation of backups. Our implant management system has been successfully tested in the laboratory with very promising results. It makes it possible to fill the current gap
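
    The record describes a standardized XML representation of implant geometry and calibration data. As a purely hypothetical illustration of what consuming such a file could look like, the element and attribute names below are invented and are not the project's actual schema:

```python
import xml.etree.ElementTree as ET

# Invented example of an implant description; not the actual XML schema.
IMPLANT_XML = """
<implant id="plate-042" manufacturer="ExampleMed" type="trauma-plate">
  <geometry mesh="plate-042.stl" length-mm="120" holes="8"/>
  <calibration>
    <landmark name="hole-1" x="4.0" y="0.0" z="1.5"/>
    <landmark name="hole-2" x="19.0" y="0.0" z="1.5"/>
  </calibration>
</implant>
"""

root = ET.fromstring(IMPLANT_XML)
print(root.get("id"), root.get("manufacturer"))
for lm in root.iter("landmark"):
    point = tuple(float(lm.get(axis)) for axis in ("x", "y", "z"))
    print(lm.get("name"), point)
```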

  18. PREPARING FOR EXASCALE: ORNL Leadership Computing Application Requirements and Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Joubert, Wayne [ORNL; Kothe, Douglas B [ORNL; Nam, Hai Ah [ORNL

    2009-12-01

    In 2009 the Oak Ridge Leadership Computing Facility (OLCF), a U.S. Department of Energy (DOE) facility at the Oak Ridge National Laboratory (ORNL) National Center for Computational Sciences (NCCS), elicited petascale computational science requirements from leading computational scientists in the international science community. This effort targeted science teams whose projects received large computer allocation awards on OLCF systems. A clear finding of this process was that in order to reach their science goals over the next several years, multiple projects will require computational resources in excess of an order of magnitude more powerful than those currently available. Additionally, for the longer term, next-generation science will require computing platforms of exascale capability in order to reach DOE science objectives over the next decade. It is generally recognized that achieving exascale in the proposed time frame will require disruptive changes in computer hardware and software. Processor hardware will become necessarily heterogeneous and will include accelerator technologies. Software must undergo the concomitant changes needed to extract the available performance from this heterogeneous hardware. This disruption promises to be substantial, not unlike the change to the message passing paradigm in the computational science community over 20 years ago. Since technological disruptions take time to assimilate, we must aggressively embark on this course of change now, to ensure that science applications and their underlying programming models are mature and ready when exascale computing arrives. This includes initiation of application readiness efforts to adapt existing codes to heterogeneous architectures, support of relevant software tools, and procurement of next-generation hardware testbeds for porting and testing codes. The 2009 OLCF requirements process identified numerous actions necessary to meet this challenge: (1) Hardware capabilities must be

  19. Providing Assistive Technology Applications as a Service Through Cloud Computing.

    Science.gov (United States)

    Mulfari, Davide; Celesti, Antonio; Villari, Massimo; Puliafito, Antonio

    2015-01-01

    Users with disabilities interact with Personal Computers (PCs) using Assistive Technology (AT) software solutions. Such applications run on a PC that a person with a disability commonly uses. However, the configuration of AT applications is not trivial at all, especially whenever the user needs to work on a PC that does not allow him/her to rely on his/her AT tools (e.g., at work, at university, in an Internet point). In this paper, we discuss how cloud computing provides a valid technological solution to enhance such a scenario. With the emergence of cloud computing, many applications are executed on top of virtual machines (VMs). Virtualization allows us to achieve a software implementation of a real computer able to execute a standard operating system and any kind of application. In this paper we propose to build personalized VMs running AT programs and settings. By using remote desktop technology, our solution enables users to control their customized virtual desktop environment by means of an HTML5-based web interface running on any computer equipped with a browser, wherever they are.

  20. Hydropower Computation Using Visual Basic for Application Programming

    Science.gov (United States)

    Yan, Wang; Hongliang, Hu

    Hydropower computation is essential to determine the operating conditions of a hydroelectric station. Among the existing methods for hydropower computation, equal monthly hydropower output and dynamic programming are the most commonly used, but both are computationally complex and hard to carry out manually. Taking advantage of the data processing ability of Microsoft Excel and its embedded Visual Basic for Applications (VBA) programming, the complex hydropower computation can be easily achieved. An instance was analyzed with both methods, each implemented in VBA. VBA demonstrates its power in solving problems involving complex computation, visualization, and secondary data processing. The results show that the dynamic programming method was more suitable than the other one.
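
    The core relation behind any such computation is the hydropower output formula P = ρ g Q H η, often condensed in practice to P = K·Q·H with an overall output coefficient K = 9.81·η (commonly taken around 8.0-8.5). A minimal Python sketch of a monthly output series, independent of the Excel/VBA setting of the record and using made-up flow and head data:

```python
# Monthly hydropower output from flow and head: P[kW] = K * Q[m^3/s] * H[m],
# where K = 9.81 * overall efficiency. Flows and heads are illustration data.

def monthly_output(flows, heads, k=8.3):
    return [k * q * h for q, h in zip(flows, heads)]

flows = [120.0, 95.0, 60.0, 45.0]   # mean monthly flow in m^3/s
heads = [52.0, 53.5, 55.0, 56.0]    # net head in m
for month, p in enumerate(monthly_output(flows, heads), start=1):
    print(f"month {month}: {p / 1000:.2f} MW")
```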

  1. Introduction to computational mass transfer with applications to chemical engineering

    CERN Document Server

    Yu, Kuo-Tsung

    2017-01-01

    This book offers an easy-to-understand introduction to the computational mass transfer (CMT) method. On the basis of the contents of the first edition, this new edition is characterized by the following additional materials. It describes the successful application of this method to the simulation of the mass transfer process in a fluidized bed, as well as recent investigations and computing methods for predictions for the multi-component mass transfer process. It also demonstrates the general issues concerning computational methods for simulating the mass transfer of the rising bubble process. This new edition has been reorganized by moving the preparatory materials for Computational Fluid Dynamics (CFD) and Computational Heat Transfer into appendices, additions of new chapters, and including three new appendices on, respectively, generalized representation of the two-equation model for the CMT, derivation of the equilibrium distribution function in the lattice-Boltzmann method, and derivation of the Navier-S...

  2. Reversible logic synthesis methodologies with application to quantum computing

    CERN Document Server

    Taha, Saleem Mohammed Ridha

    2016-01-01

    This book opens the door to a new, interesting and ambitious world of reversible and quantum computing research. It presents the state of the art required to travel around that world safely. Top world universities, companies and government institutions are in a race to develop new methodologies, algorithms and circuits for reversible logic, quantum logic, reversible and quantum computing and nano-technologies. In this book, twelve reversible logic synthesis methodologies are presented for the first time in a single volume, together with some new proposals. Sequential reversible logic circuitry is also discussed for the first time in a book. Reversible logic plays an important role in quantum computing. Any progress in the domain of reversible logic can be directly applied to quantum logic. One of the goals of this book is to show the application of reversible logic in quantum computing. A new implementation of wavelet and multiwavelet transforms using quantum computing is performed for this purpose. Rese...

  3. Computational biomechanics for medicine new approaches and new applications

    CERN Document Server

    Miller, Karol; Wittek, Adam; Nielsen, Poul

    2015-01-01

    The Computational Biomechanics for Medicine titles provide an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements. This volume comprises twelve of the newest approaches and applications of computational biomechanics, from researchers in Australia, New Zealand, USA, France, Spain and Switzerland. Some of the interesting topics discussed are: real-time simulations; growth and remodelling of soft tissues; inverse and meshless solutions; medical image analysis; and patient-specific solid mechanics simulations. One of the greatest challenges facing the computational engineering community is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. We hope the research presented within this book series will contribute to overcoming this grand challenge.

  4. Application of CAPEC Lipid Property Databases in the Synthesis and Design of Biorefinery Networks

    DEFF Research Database (Denmark)

    Bertran, Maria-Ona; Cunico, Larissa; Gani, Rafiqul

    The wide variety and complex nature of components in biorefineries poses a challenge with respect to the synthesis and design of these types of processes. Whereas physical and thermodynamic property data or models for petroleum-based processes are widely available, most data and models for biobased processes are not. Lipids are present in biorefinery processes: they represent feedstock (vegetable oil, waste cooking oil, microalgal oil), intermediate products (fatty acids, glycerol) and final products in biorefineries; thus the prediction of their properties is of relevance for the synthesis and design of biorefinery networks. The objective of this work is to show the application of databases of physical and thermodynamic properties of lipid components to the synthesis and design of biorefinery networks.

  5. Application of chaos and fractals to computer vision

    CERN Document Server

    Farmer, Michael E

    2014-01-01

    This book provides a thorough investigation of the application of chaos theory and fractal analysis to computer vision. The field of chaos theory has been studied in dynamical physical systems, and has been very successful in providing computational models for very complex problems ranging from weather systems to neural pathway signal propagation. Computer vision researchers have derived motivation for their algorithms from biology and physics for many years as witnessed by the optical flow algorithm, the oscillator model underlying graphical cuts and of course neural networks. These algorithm

  6. Field-programmable custom computing technology architectures, tools, and applications

    CERN Document Server

    Luk, Wayne; Pocek, Ken

    2000-01-01

    Field-Programmable Custom Computing Technology: Architectures, Tools, and Applications brings together in one place important contributions and up-to-date research results in this fast-moving area. In seven selected chapters, the book describes the latest advances in architectures, design methods, and applications of field-programmable devices for high-performance reconfigurable systems. The contributors to this work were selected from the leading researchers and practitioners in the field. It will be valuable to anyone working or researching in the field of custom computing technology. It serves as an excellent reference, providing insight into some of the most challenging issues being examined today.

  7. Computers and Organizational Change Factors That Influence Useful Adoption of Computer Applications

    Science.gov (United States)

    Davis, Howard R.; Salasin, Susan E.

    1982-01-01

    The spread of computer applications and their usefulness in improving patient care and organizational change are impressive. However, an unevenness in the impact has occurred - as can be expected in the diffusion of innovation process, in general. Later-adopting persons and organizations need the attention of researchers and innovation experts in this field. Factors that can be addressed in fostering appropriate change through computer applications are: (1) feeling of the need for the application; (2) clarity on how to employ the technology; (3) degree of fit with the style and values of the host facilities; (4) special local circumstances and timing; (5) resistances, both rational and subtle; (6) anticipation, or direct experience, of benefits from computer application; and (7) abilities and resources that are required for successful system operation.

  8. Computer-aided diagnosis workstation and database system for chest diagnosis based on multi-helical CT images

    Science.gov (United States)

    Satoh, Hitoshi; Niki, Noboru; Mori, Kiyoshi; Eguchi, Kenji; Kaneko, Masahiro; Kakinuma, Ryutarou; Moriyama, Noriyuki; Ohmatsu, Hironobu; Masuda, Hideo; Machida, Suguru; Sasagawa, Michizou

    2006-03-01

    The multi-helical CT scanner has remarkably increased the speed at which chest CT images can be acquired for mass screening. Mass screening based on multi-helical CT images requires a considerable number of images to be read, and it is this time-consuming step that makes the use of helical CT for mass screening impractical at present. To overcome this problem, we have provided diagnostic assistance methods to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images and a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification. We have also developed an electronic medical recording system and a prototype internet system for community health in two or more regions, using a Virtual Private Network router together with biometric fingerprint and face authentication systems for the safety of medical information. Based on these diagnostic assistance methods, we have now developed a new computer-aided workstation and database that can display suspected lesions three-dimensionally in a short time. This paper describes basic studies that have been conducted to evaluate this new system. The results of this study indicate that our computer-aided diagnosis workstation and network system can increase diagnostic speed, diagnostic accuracy and the safety of medical information.

  9. Analysis of Computer Database Intrusion Detection Technology

    Institute of Scientific and Technical Information of China (English)

    连志强

    2013-01-01

    The computer database system directly concerns corporate and personal privacy protection as well as social stability and national security, so security work on computer database systems must be done properly. Although firewalls are now widely deployed with computer database systems, their inherent flaws leave them unable to completely block intruders; deepening research on computer database intrusion detection technology therefore brooks no delay. This paper discusses computer database intrusion detection technology, with the aim of improving the security of computer database systems.

  10. Application-adaptive resource scheduling in a computational grid

    Institute of Scientific and Technical Information of China (English)

    LUAN Cui-ju; SONG Guang-hua; ZHENG Yao

    2006-01-01

    Selecting appropriate resources for running a job efficiently is one of the common objectives in a computational grid. Resource scheduling should consider the specific characteristics of the application, and decide the metrics to be used accordingly. This paper presents a distributed resource scheduling framework mainly consisting of a job scheduler and a local scheduler. In order to meet the requirements of different applications, we adopt HGSA, a Heuristic-based Greedy Scheduling Algorithm, to schedule jobs in the grid, where the heuristic knowledge is the metric weights of the computing resources and the metric workload impact factors. The metric weight is used to control the effect of the metric on the application. For different applications, only the metric weights and the metric workload impact factors need to be changed, while the scheduling algorithm remains the same. Experimental results are presented to demonstrate the adaptability of HGSA.
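
    To make the weight-driven idea concrete, below is a minimal sketch of greedy, metric-weighted resource selection in Python. It illustrates the general approach only; the metric names, weights, workload impact factors, and load bookkeeping are invented, not values from the paper.

    ```python
    # Greedy, weight-driven resource selection (illustrative sketch).
    def score(resource, weights, load_impact):
        # Weighted sum of the resource's metrics, each discounted by how
        # strongly the current workload degrades that metric.
        return sum(
            weights[m] * resource["metrics"][m]
            * (1.0 - load_impact[m] * resource["load"])
            for m in weights
        )

    def greedy_schedule(jobs, resources, weights, load_impact):
        """Assign each job to the currently best-scoring resource."""
        assignment = {}
        for job in jobs:
            best = max(resources, key=lambda r: score(r, weights, load_impact))
            assignment[job] = best["name"]
            best["load"] += 0.1  # crude bookkeeping: each placed job adds load
        return assignment

    resources = [
        {"name": "nodeA", "metrics": {"cpu": 0.9, "net": 0.6}, "load": 0.2},
        {"name": "nodeB", "metrics": {"cpu": 0.5, "net": 0.9}, "load": 0.1},
    ]
    # A compute-bound application weights CPU heavily; a data-bound one would
    # shift weight to the network metric while the algorithm stays unchanged.
    print(greedy_schedule(["j1", "j2"], resources,
                          weights={"cpu": 0.8, "net": 0.2},
                          load_impact={"cpu": 0.5, "net": 0.3}))
    ```

    Adapting the scheduler to a different application means changing only the two dictionaries, which mirrors the paper's claim that the algorithm itself need not change.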

  11. International Conference on Soft Computing Techniques and Engineering Application

    CERN Document Server

    Li, Xiaolong

    2014-01-01

    The main objective of ICSCTEA 2013 is to provide a platform for researchers, engineers and academicians from all over the world to present their research results and development activities in soft computing techniques and engineering application. This conference provides opportunities for them to exchange new ideas and application experiences face to face, to establish business or research relations and to find global partners for future collaboration.

  12. Discover knowledge in databases: Mining of data and applications; Descubrir conocimiento en bases de datos: Mineria de datos y aplicaciones

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez Martinez, Andres F [Instituto de Investigaciones Electricas, Temixco, Morelos (Mexico); Morales Manzanares, Eduardo [Instituto Tecnologico de Estudios Superiores de Monterrey (ITESM), Campus Cuernavaca, Morelos (Mexico)

    2000-07-01

    In recent years there has been enormous growth in the capacity to generate and store information, owing to the increasing automation of processes in general and to advances in storage technology. Unfortunately, information analysis techniques have not shown equivalent development, so a new generation of computational techniques and tools is needed to assist decision makers in the automatic and intelligent analysis of large volumes of information. Finding useful knowledge among great amounts of data is the main objective of the field of knowledge discovery in databases. This article aims to explain the process of knowledge discovery in databases in general and the concept of data mining in particular; to establish the relation between knowledge discovery in databases and data mining; and to set out the characteristics and complexities of searching for useful patterns in data. The main data mining methods are also described, together with the application areas where these algorithms have had the greatest success.

  13. Message passing interface and multithreading hybrid for parallel molecular docking of large databases on petascale high performance computing machines.

    Science.gov (United States)

    Zhang, Xiaohua; Wong, Sergio E; Lightstone, Felice C

    2013-04-30

    A mixed parallel scheme that combines the message passing interface (MPI) and multithreading was implemented in the AutoDock Vina molecular docking program. The resulting program, named VinaLC, was tested on the petascale high performance computing (HPC) machines at Lawrence Livermore National Laboratory. To exploit typical cluster-type supercomputers, thousands of docking calculations are dispatched by the master process to run simultaneously on thousands of slave processes; each docking calculation takes one slave process on one node, and within the node it runs via multithreading on multiple CPU cores and shared memory. Input and output of the program and the data handling within the program were carefully designed to deal with large databases and ultimately achieve HPC on a large number of CPU cores. Parallel performance analysis of the VinaLC program shows that the code scales up to more than 15K CPUs with a very low overhead cost of 3.94%. One million flexible compound docking calculations took only 1.4 h to finish on about 15K CPUs. The docking accuracy of VinaLC has been validated against the DUD data set by the re-docking of X-ray ligands and an enrichment study; 64.4% of the top-scoring poses have RMSD values under 2.0 Å. The program has been demonstrated to have good enrichment performance on 70% of the targets in the DUD data set. An analysis of the enrichment factors calculated at various percentages of the screening database indicates that VinaLC has very good early recovery of actives.
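
    The dispatch layer described above is a classic MPI master/worker "task farm". The sketch below shows that pattern with mpi4py; it is illustrative only (hypothetical ligand names, dummy scoring), not VinaLC's actual code, and it assumes at least two MPI ranks and more work items than workers.

    ```python
    # MPI master/worker task farm (illustrative sketch, not VinaLC's code).
    # Run with at least two ranks, e.g.: mpiexec -n 4 python farm.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    TAG_WORK, TAG_STOP = 1, 2

    def dock(ligand):
        # Placeholder for one docking calculation; a real worker would also
        # fan out over its CPU cores with threads, as the paper describes.
        return (ligand, len(ligand))  # dummy "score"

    if rank == 0:
        # Master: hand out ligands one at a time, collect results as they arrive.
        ligands = [f"ligand_{i:04d}" for i in range(100)]  # hypothetical inputs
        results, next_item = [], 0
        for worker in range(1, min(size, len(ligands) + 1)):  # prime the workers
            comm.send(ligands[next_item], dest=worker, tag=TAG_WORK)
            next_item += 1
        while len(results) < len(ligands):
            status = MPI.Status()
            results.append(comm.recv(source=MPI.ANY_SOURCE, status=status))
            src = status.Get_source()
            if next_item < len(ligands):
                comm.send(ligands[next_item], dest=src, tag=TAG_WORK)
                next_item += 1
            else:
                comm.send(None, dest=src, tag=TAG_STOP)
        print(f"collected {len(results)} docking results")
    else:
        # Worker: dock items until the master says stop.
        while True:
            status = MPI.Status()
            item = comm.recv(source=0, status=status)
            if status.Get_tag() == TAG_STOP:
                break
            comm.send(dock(item), dest=0)
    ```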

  14. COMPUTER ASSISTED INSTRUCTION AND ITS APPLICATION IN ENGLISH LEARNING

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    This paper briefly describes the development of computer assisted instruction(CAI) abroad and in China, lists the advantages of CAI and deals with its application in English learning. Some suggestions about how to make better use of CAI in ELT are also given.

  15. Development of Multimedia Computer Applications for Clinical Pharmacy Training.

    Science.gov (United States)

    Schlict, John R.; Livengood, Bruce; Shepherd, John

    1997-01-01

    Computer simulations in clinical pharmacy education help expose students to clinical patient management earlier and enable training of large numbers of students outside conventional clinical practice sites. Multimedia instruction and its application to pharmacy training are described, the general process for developing multimedia presentations is…

  16. The Poor Man's Guide to Computer Networks and their Applications

    DEFF Research Database (Denmark)

    Sharp, Robin

    2003-01-01

    These notes for DTU course 02220, Concurrent Programming, give an introduction to computer networks, with focus on the modern Internet. Basic Internet protocols such as IP, TCP and UDP are presented, and two Internet application protocols, SMTP and HTTP, are described in some detail. Techniques...

  17. Application and prospect of computer technology in welding materials field

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper summarizes the application status of computer technology in the welding materials field from three aspects: CAD of welding materials, database systems for welding materials, and expert systems for welding materials. It also discusses existing problems and future development trends.

  18. Conference Report: Wyoming Invitational Conference on Instructional Applications of Computers.

    Science.gov (United States)

    Kansky, Bob

    This report: (1) describes the organization of an invitational conference aimed at gathering direction from classroom teachers regarding instructional applications of computers; (2) provides copies of all materials used in organizing such a conference; and (3) reports the results of the conference in terms of conference products (resolutions,…

  19. Recent applications of digital computers in analytical chemistry.

    Science.gov (United States)

    Perrin, D D

    1977-06-01

    Minicomputers are finding increasing use for the control and operation of analytical instruments. This role is likely to be shared in the near future with dedicated microcomputers. Applications of computers to electroanalytical chemistry, Fourier transform techniques, spectroscopy, rapid-reaction kinetics, equilibrium constants, studies of analytical methods and to literature searching are also discussed.

  20. Multimedia Instructional Tools and Student Learning in Computer Applications Courses

    Science.gov (United States)

    Chapman, Debra Laier

    2013-01-01

    Advances in technology and changes in educational strategies have resulted in the integration of technology into the classroom. Multimedia instructional tools (MMIT) have been identified as a way to provide student-centered active-learning instructional material to students. MMITs are common in introductory computer applications courses based on…

  1. Application of Google Maps API service for creating web map of information retrieved from CORINE land cover databases

    Directory of Open Access Journals (Sweden)

    Kilibarda Milan

    2010-01-01

    Full Text Available Today, the Google Maps API, an Ajax-based standard web service, lets users publish interactive web maps, opening new possibilities compared with classical analogue maps. CORINE land cover databases are recognized as fundamental reference data sets for numerous spatial analyses. The theoretical and practical aspects of the Google Maps API cartographic service are considered through the case of creating a web map of changes in urban areas in Belgrade and its surroundings between 2000 and 2006, obtained from CORINE databases.

  2. Applications of evolutionary computation in image processing and pattern recognition

    CERN Document Server

    Cuevas, Erik; Perez-Cisneros, Marco

    2016-01-01

    This book presents the use of efficient Evolutionary Computation (EC) algorithms for solving diverse real-world image processing and pattern recognition problems. It provides an overview of the different aspects of evolutionary methods so as to enable the reader to reach a global understanding of the field and to conduct studies on specific evolutionary techniques related to applications in image processing and pattern recognition. It explains the basic ideas of the proposed applications in a way that can also be understood by readers outside the field. Image processing and pattern recognition practitioners who are not evolutionary computation researchers will appreciate the discussed techniques beyond simple theoretical tools, since they have been adapted to solve significant problems that commonly arise in these areas. On the other hand, members of the evolutionary computation community can learn the way in which image processing and pattern recognition problems can be translated into an...

  3. Computational Intelligence and Decision Making Trends and Applications

    CERN Document Server

    Madureira, Ana; Marques, Viriato

    2013-01-01

    This book provides a general overview and original analysis of new developments and applications in several areas of Computational Intelligence and Information Systems. Computational Intelligence has become an important tool for engineers to develop and analyze novel techniques to solve problems in basic sciences such as physics, chemistry, biology, engineering, environment and social sciences.   The material contained in this book addresses the foundations and applications of Artificial Intelligence and Decision Support Systems, Complex and Biological Inspired Systems, Simulation and Evolution of Real and Artificial Life Forms, Intelligent Models and Control Systems, Knowledge and Learning Technologies, Web Semantics and Ontologies, Intelligent Tutoring Systems, Intelligent Power Systems, Self-Organized and Distributed Systems, Intelligent Manufacturing Systems and Affective Computing. The contributions have all been written by international experts, who provide current views on the topics discussed and pr...

  4. Probabilistic Databases

    CERN Document Server

    Suciu, Dan; Koch, Christop

    2011-01-01

    Probabilistic databases are databases where the value of some attributes or the presence of some records are uncertain and known only with some probability. Applications in many areas such as information extraction, RFID and scientific data management, data cleaning, data integration, and financial risk assessment produce large volumes of uncertain data, which are best modeled and processed by a probabilistic database. This book presents the state of the art in representation formalisms and query processing techniques for probabilistic data. It starts by discussing the basic principles for rep
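
    As a minimal illustration of what such systems compute, the sketch below evaluates an existential query over a tuple-independent probabilistic table in Python; the table, probabilities, and query are invented:

    ```python
    # Query evaluation over a tuple-independent probabilistic table: each
    # tuple is present with its own probability, independently of the others.
    from math import prod

    # readings(sensor, room) with marginal probabilities of being present
    readings = [
        ("s1", "lab", 0.9),
        ("s2", "lab", 0.4),
        ("s3", "office", 0.7),
    ]

    def prob_exists(room):
        """P(at least one reading in `room`) = 1 - prod of (1 - p) over matches."""
        return 1.0 - prod(1.0 - p for (_, r, p) in readings if r == room)

    print(prob_exists("lab"))  # 1 - 0.1 * 0.6 = 0.94
    ```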

  5. Aggregating job exit statuses of a plurality of compute nodes executing a parallel application

    Energy Technology Data Exchange (ETDEWEB)

    Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.; Mundy, Michael B.

    2015-07-21

    Aggregating job exit statuses of a plurality of compute nodes executing a parallel application, including: identifying a subset of compute nodes in the parallel computer to execute the parallel application; selecting one compute node in the subset of compute nodes in the parallel computer as a job leader compute node; initiating execution of the parallel application on the subset of compute nodes; receiving an exit status from each compute node in the subset of compute nodes, where the exit status for each compute node includes information describing execution of some portion of the parallel application by the compute node; aggregating each exit status from each compute node in the subset of compute nodes; and sending an aggregated exit status for the subset of compute nodes in the parallel computer.
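
    A minimal sketch of this kind of aggregation with mpi4py is shown below; the status encoding and the MAX-based roll-up at a leader rank are invented illustrations of the idea, not the patented mechanism:

    ```python
    # Aggregating per-rank exit statuses at a "job leader" rank (sketch).
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    my_status = 0 if rank % 2 == 0 else 3  # pretend odd ranks failed with code 3
    # Reduce with MAX so any nonzero (failing) status dominates at the leader.
    aggregated = comm.reduce(my_status, op=MPI.MAX, root=0)
    # Gather the full detail too, so the leader can report which ranks failed.
    detail = comm.gather((rank, my_status), root=0)
    if rank == 0:
        print("aggregated exit status:", aggregated)
        print("per-rank detail:", detail)
    ```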

  6. Computer graphics application in the engineering design integration system

    Science.gov (United States)

    Glatt, C. R.; Abel, R. W.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Stewart, W. A.

    1975-01-01

    The computer graphics aspect of the Engineering Design Integration (EDIN) system and its application to design problems are discussed. Three basic types of computer graphics may be used with the EDIN system for the evaluation of preliminary aerospace vehicle designs: offline graphics systems using vellum-inking or photographic processes; online graphics systems characterized by directly coupled, low-cost storage-tube terminals with limited interactive capabilities; and a minicomputer-based refresh terminal offering highly interactive capabilities. The offline systems are characterized by high quality (resolution better than 0.254 mm) and slow turnaround (one to four days). The online systems are characterized by low cost, instant visualization of computed results, slow line speed (300 baud), poor hard copy, and early limitations on vector graphic input capabilities. The recent acquisition of the Adage 330 Graphic Display system has greatly enhanced the potential for interactive computer-aided design.

  7. The Application of Computer Technology in Art Design

    Directory of Open Access Journals (Sweden)

    Tao Xin

    2016-01-01

    Full Text Available Since entering the twenty-first century, computer technology has developed greatly, gradually found application in various fields, and achieved remarkable results. It is a project of great importance not only in China but throughout the world; it has attracted the attention of numerous people and has become closely integrated with daily life. How to apply it to more fields has become a main national focus and a topic of continuing interest. Applying computer technology to art design in order to raise design quality is likewise an important undertaking in China. The purpose of this paper is to introduce the problems in the application of computer technology in art design, and to find ways to promote their cooperation and common development.

  8. Research on application of computer technologies in jewelry process

    Directory of Open Access Journals (Sweden)

    Junbo Xia

    2017-02-01

    Full Text Available Jewelry production works with precious raw materials and demands low losses in processing. The traditional manual mode cannot meet the real needs of enterprises, and the involvement of computer technology can solve this practical problem. At present, the main obstacle to applying computers in jewelry production is the lack of a production model that serves the whole industry chain with the computer at its core. This paper designs a “synchronous and diversified” production model with “computer aided design technology” and “rapid prototyping technology” at its core, tests it on actual production cases, and achieves promising, forward-looking results.

  9. Introduction to computers

    OpenAIRE

    Rajaraman, A

    1995-01-01

    An article on computer applications for knowledge processing, intended to generate awareness among librarians of the possibilities offered by ICT to improve services. It compares computers and the human brain, provides a historical perspective on the development of computer technology, explains the components of the computer and computer languages, and identifies the areas where computers can be applied and their benefits. It explains available storage systems and the database management process. Points out ...

  10. Identifying Social Impacts in Product Supply Chains:Overview and Application of the Social Hotspot Database

    Directory of Open Access Journals (Sweden)

    Gregory Norris

    2012-08-01

    Full Text Available One emerging tool to measure the social impacts in supply chains is Social Life Cycle Assessment (S-LCA), a derivative of the well-established environmental LCA technique. LCA has recently started to gain popularity among large corporations and initiatives such as The Sustainability Consortium or the Sustainable Apparel Coalition; both have made the technique a cornerstone of their applied-research programs. The Social Hotspots Database (SHDB) is an overarching, global database that eases the data collection burden in S-LCA studies. Proposed “hotspots” are production activities or unit processes (also defined as country-specific sectors) in the supply chain that may be at risk for social issues to be present. The SHDB enables efficient application of S-LCA by allowing users to prioritize production activities for which site-specific data collection is most desirable. Data for three criteria are used to inform prioritization: (1) labor intensity in worker hours per unit process; (2) risk for, or opportunity to affect, relevant social themes or sub-categories related to Human Rights, Labor Rights and Decent Work, Governance and Access to Community Services; and (3) gravity of the social issue. The Worker Hours Model was developed using a global input/output economic model and wage rate data. Nearly 200 reputable sources of statistical data have been used to develop 20 Social Theme Tables by country and sector. This paper presents an overview of the SHDB development and features, as well as results from a pilot study conducted on strawberry yogurt. This study, one of seven Social Scoping Assessments mandated by The Sustainability Consortium, identifies the potential social hotspots existing in the supply chain of strawberry yogurt. With this knowledge, companies that manufacture or sell yogurt can refine their data collection efforts in order to put their social responsibility performance in perspective and effectively set up programs and

  11. SISSY: An example of a multi-threaded, networked, object-oriented database application

    Energy Technology Data Exchange (ETDEWEB)

    Scipioni, B.; Liu, D.; Song, T.

    1993-05-01

    The Systems Integration Support SYstem (SISSY) is presented and its capabilities and techniques are discussed. It is a fully automated data collection and analysis system supporting the SSCL's systems analysis activities as they relate to the Physics Detector and Simulation Facility (PDSF). SISSY itself is a paradigm of effective computing on the PDSF. It uses home-grown code (C++), network programming (RPC, SNMP), relational (SYBASE) and object-oriented (ObjectStore) DBMSs, UNIX operating system services (IRIX threads, cron, system utilities, shell scripts, etc.), and third-party software applications (NetCentral Station, Wingz, DataLink), all of which act together as a single application to monitor and analyze the PDSF.

  12. Hybrid cloud and cluster computing paradigms for life science applications.

    Science.gov (United States)

    Qiu, Judy; Ekanayake, Jaliya; Gunarathne, Thilina; Choi, Jong Youl; Bae, Seung-Hee; Li, Hui; Zhang, Bingjing; Wu, Tak-Lon; Ruan, Yang; Ekanayake, Saliya; Hughes, Adam; Fox, Geoffrey

    2010-12-21

    Clouds and MapReduce have shown themselves to be a broadly useful approach to scientific computing, especially for parallel data-intensive applications. However, they have limited applicability to some areas, such as data mining, because MapReduce performs poorly on problems with the iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI, leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open source iterative MapReduce system, Twister. Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability in several important non-iterative cases; these are linked to MPI applications for the final stages of data analysis. Further, we have released the open source Twister iterative MapReduce system and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment, while Twister promises a uniform programming environment for many life sciences applications. We used the commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data-intensive computing. Several applications were developed in MPI, MapReduce, and Twister in these different environments.
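
    The iterative structure Twister targets can be pictured as a driver loop around map and reduce phases. Below is a minimal, plain-Python sketch of that pattern using a k-means-style computation; the data and function names are invented, and Twister's actual API differs:

    ```python
    # Iterative MapReduce driver loop (plain-Python sketch of the pattern).
    from collections import defaultdict

    points = [(1.0, 1.0), (1.2, 0.8), (8.0, 8.0), (7.8, 8.2)]
    centroids = [(0.0, 0.0), (10.0, 10.0)]  # "static" data reused every iteration

    def map_phase(point, centroids):
        # Emit (nearest-centroid-id, point): the per-record map step.
        dists = [sum((a - b) ** 2 for a, b in zip(point, c)) for c in centroids]
        return dists.index(min(dists)), point

    def reduce_phase(groups):
        # Average each group's points: the reduce step yielding new centroids.
        return [tuple(sum(xs) / len(xs) for xs in zip(*pts))
                for _, pts in sorted(groups.items())]

    for iteration in range(10):      # the long-lived driver loop Twister keeps
        groups = defaultdict(list)   # alive, avoiding a job restart per iteration
        for p in points:
            key, value = map_phase(p, centroids)
            groups[key].append(value)
        new_centroids = reduce_phase(groups)
        if new_centroids == centroids:   # converged
            break
        centroids = new_centroids

    print(centroids)
    ```

    The point of an iterative MapReduce runtime is that the static data and the worker processes persist across iterations of this loop, instead of being reloaded and restarted each time as in basic MapReduce.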

  13. Nature-inspired computing and optimization theory and applications

    CERN Document Server

    Yang, Xin-She; Nakamatsu, Kazumi

    2017-01-01

    The book provides readers with a snapshot of the state of the art in the field of nature-inspired computing and its application in optimization. The approach is mainly practice-oriented: each bio-inspired technique or algorithm is introduced together with one of its possible applications. Applications cover a wide range of real-world optimization problems: from feature selection and image enhancement to scheduling and dynamic resource management, from wireless sensor networks and wiring network diagnosis to sports training planning and gene expression, from topology control and morphological filters to nutritional meal design and antenna array design. There are a few theoretical chapters comparing different existing techniques, exploring the advantages of nature-inspired computing over other methods, and investigating the mixing time of genetic algorithms. The book also introduces a wide range of algorithms, including the ant colony optimization, the bat algorithm, genetic algorithms, the collision-based opti...

  14. BoreholeAR: A mobile tablet application for effective borehole database visualization using an augmented reality technology

    Science.gov (United States)

    Lee, Sangho; Suh, Jangwon; Park, Hyeong-Dong

    2015-03-01

    Boring logs are widely used in geological field studies since the data describe various attributes of underground and surface environments. However, it is difficult to manage multiple boring logs in the field, as conventional management and visualization methods are not suitable for integrating and combining large data sets. We developed an iPad application that enables its user to search boring logs rapidly and visualize them using augmented reality (AR). For the development of the application, a standard borehole database appropriate for a mobile borehole database management system was designed. The application consists of three modules: an AR module, a map module, and a database module. The AR module superimposes borehole data on camera imagery as viewed by the user and provides intuitive visualization of borehole locations. The map module shows the locations of the corresponding borehole data on a 2D map with additional map layers. The database module provides data management functions over large borehole databases for the other modules. A field survey was also carried out using more than 100,000 borehole records.
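
    The core query behind such an AR module is "which boreholes are near the device right now". Below is a minimal Python sketch of that proximity search using a haversine distance; the schema and records are invented for illustration:

    ```python
    # Proximity search over a borehole table for an AR overlay (sketch).
    from math import radians, sin, cos, asin, sqrt

    boreholes = [  # (id, lat, lon, depth_m) -- invented records
        ("BH-001", 37.4512, 126.9532, 30.0),
        ("BH-002", 37.4530, 126.9581, 45.5),
        ("BH-003", 37.4991, 127.0301, 22.0),
    ]

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres between two WGS84 points."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371000 * asin(sqrt(a))

    def nearby(lat, lon, radius_m):
        # Distance to every borehole, filtered and sorted nearest-first.
        hits = [(haversine_m(lat, lon, b[1], b[2]), b) for b in boreholes]
        return sorted((d, b) for d, b in hits if d <= radius_m)

    for dist, bh in nearby(37.4515, 126.9540, 1000):
        print(f"{bh[0]}: {dist:.0f} m away, depth {bh[3]} m")
    ```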

  15. Application Scheduling in Mobile Cloud Computing with Load Balancing

    Directory of Open Access Journals (Sweden)

    Xianglin Wei

    2013-01-01

    Full Text Available Mobile cloud computing (MCC) enables mobile devices to offload their applications to the cloud, which greatly enriches the types of applications available on mobile devices and enhances their quality of service. Under various circumstances, researchers have put forward several MCC architectures. However, reducing response latency while efficiently utilizing the idle service capacities of mobile devices remains a challenge. In this paper, we first give a definition of MCC and divide the recently proposed architectures into four categories. Second, we present a Hybrid Local Mobile Cloud Model (HLMCM) by extending the Cloudlet architecture. Then, after formulating the application scheduling problem in HLMCM and proposing the Hybrid Ant Colony algorithm based Application Scheduling (HACAS) algorithm, we validate the efficiency of the HACAS algorithm by simulation experiments.
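
    As a rough picture of how ant-colony scheduling of applications onto devices works, here is a small generic sketch in Python. It is not the authors' HACAS algorithm: the pheromone update, heuristic, and fitness function below are invented stand-ins for the idea of ants building assignments guided by pheromone trails.

    ```python
    # Generic ant-colony application-to-device scheduling (illustrative sketch).
    import random

    apps = {"a1": 2.0, "a2": 1.0, "a3": 3.0}   # app -> demanded capacity
    devices = {"d1": 4.0, "d2": 3.0}           # device -> total capacity
    pheromone = {(a, d): 1.0 for a in apps for d in devices}

    def build_solution():
        free = dict(devices)
        plan = {}
        for a, need in apps.items():
            options = [d for d in devices if free[d] >= need]
            if not options:
                return None, 0.0               # infeasible assignment
            weights = [pheromone[(a, d)] * free[d] for d in options]  # tau * eta
            d = random.choices(options, weights)[0]
            plan[a] = d
            free[d] -= need
        # Fitness favours balanced residual load across devices.
        balance = 1.0 - (max(free.values()) - min(free.values())) / max(devices.values())
        return plan, balance

    best, best_fit = None, -1.0
    for _ in range(200):                       # each pass is one ant
        plan, fit = build_solution()
        if plan is None:
            continue
        for key in pheromone:                  # evaporate old trails
            pheromone[key] *= 0.98
        for a, d in plan.items():              # deposit proportional to fitness
            pheromone[(a, d)] += fit
        if fit > best_fit:
            best, best_fit = plan, fit
    print(best, round(best_fit, 3))
    ```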

  16. Studying an Eulerian Computer Model on Different High-performance Computer Platforms and Some Applications

    Science.gov (United States)

    Georgiev, K.; Zlatev, Z.

    2010-11-01

    The Danish Eulerian Model (DEM) is an Eulerian model for studying the transport of air pollutants on a large scale. Originally, the model was developed at the National Environmental Research Institute of Denmark. Its computational domain covers Europe and neighbouring parts of the Atlantic Ocean, Asia, and Africa. If the DEM is applied on fine grids, its discretization leads to a huge computational problem, which implies that such a model must be run only on high-performance computer architectures. The implementation and tuning of such a complex large-scale model on each different computer is a non-trivial task. Here, comparative results of running this model on different kinds of vector computers (CRAY C92A, Fujitsu, etc.), parallel computers with distributed memory (IBM SP, CRAY T3E, Beowulf clusters, Macintosh G4 clusters, etc.), parallel computers with shared memory (SGI Origin, SUN, etc.), and parallel computers with two levels of parallelism (IBM SMP, IBM BlueGene/P, clusters of multiprocessor nodes, etc.) are presented. The main idea in the parallel version of DEM is a domain partitioning approach. The effective use of the cache and hierarchical memories of modern computers is discussed, together with the performance, speed-ups, and efficiency achieved. The parallel code of DEM, created using the MPI standard library, appears to be highly portable and shows good efficiency and scalability on different kinds of vector and parallel computers. Some important applications of the model output are presented briefly.
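
    The domain partitioning idea can be sketched in a few lines of mpi4py: each rank owns a slab of grid rows and exchanges one-row halos with its neighbours before each update. The stencil below is a placeholder, not the DEM's transport and chemistry operators, and the example assumes the rank count divides the row count:

    ```python
    # Row-slab domain partitioning with halo exchange (illustrative sketch).
    # Assumes mpi4py + numpy and that the rank count divides the row count.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    nrows, ncols = 16, 8                                      # global grid
    local = np.full((nrows // size + 2, ncols), float(rank))  # +2 ghost rows

    up = rank - 1 if rank > 0 else MPI.PROC_NULL
    down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    for step in range(5):
        # Swap boundary rows with neighbours; PROC_NULL makes the edges no-ops.
        comm.Sendrecv(local[1], dest=up, recvbuf=local[-1], source=down)
        comm.Sendrecv(local[-2], dest=down, recvbuf=local[0], source=up)
        # Placeholder smoothing update that uses the freshly received halos;
        # the real model applies its transport and chemistry operators here.
        local[1:-1] = 0.25 * (local[:-2] + 2.0 * local[1:-1] + local[2:])

    print(rank, float(local[1:-1].mean()))
    ```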

  17. Engineering Applications of Computational Fluid Dynamics: Volume 2

    Directory of Open Access Journals (Sweden)

    Maher A.R. Sadiq Al-Baghdadi

    2013-01-01

    Full Text Available
    Chapter 1: CFD Modeling of Methane Reforming in Compact Reformers. Meng Ni
    Chapter 2: FEM Based Solution of Thermo Fluid Dynamic Phenomena in Solid Oxide Fuel Cells (SOFCs). F. Arpino, A. Carotenuto, N. Massarotti, A. Mauro
    Chapter 3: Computational Fluid Dynamics in the Development of a 3D Simulator for Testing Pollution Monitoring Robotic Fishes. John Oyekan, Bowen Lu, Huosheng Hu
    Chapter 4: CFD Applications in Electronic Packaging. C.Y. Khor, Chun-Sean Lau, M.Z. Abdullah
    Chapter 5: CFD Simulation of Savonius Wind Turbine Operation. João Vicente Akwa, Adriane Prisco Petry
    Chapter 6: Intermittency Modelling of Transitional Boundary Layer Flows on Steam and Gas Turbine Blades. Erik Dick, Slawomir Kubacki, Koen Lodefier, Witold Elsner
    Chapter 7: Numerical Analysis of the Flow through Fitting in Air Conditioning Systems. N.C. Uzárraga-Rodriguez, A. Gallegos-Muñoz, J.M. Belman-Flores, J.C. Rubio-Arana
    Chapter 8: Design and Optimization of Food Processing Equipments using Computational Fluid Dynamics Modeling. N. Chhanwal and C. Anandharamakrishnan
    Chapter 9: Fuel and Intake Systems Optimization of a Converted LPG Engine: Steady and Unsteady in-Cylinder Flow CFD Investigations and Experiments Validation. M.A. Jemni, G. Kantchev, Z. Driss, M.S. Abid
    Chapter 10: Computational Fluid Dynamics Application for Thermal Management in Underground Mines. Agus P. Sasmito, Jundika C. Kurnia, Guan Mengzhao, Erik Birgersson, Arun S. Mujumdar
    Chapter 11: Computational Fluid Dynamics and its Applications. R. Parthiban, C. Muthuraj, A. Rajakumar

  18. Failure rates in Barsebaeck-1 reactor coolant pressure boundary piping. An application of a piping failure database

    Energy Technology Data Exchange (ETDEWEB)

    Lydell, B. [RSA Technologies, Vista, CA (United States)

    1999-05-01

    This report documents an application of a piping failure database to estimate the frequency of leaks and ruptures in reactor coolant pressure boundary piping, using Barsebaeck-1 as the reference plant. The study tried two different approaches to piping failure rate estimation: (1) PSA-style simple estimation using Bayesian statistics, and (2) fitting of statistical distributions to failure data. A large, validated database on piping failures (such as the SKI-PIPE database) supports both approaches. In addition to documenting leak and rupture frequencies, the report describes the use of piping failure data to estimate the frequency of medium and large loss of coolant accidents (LOCAs). This application study was co-sponsored by Barsebaeck Kraft AB and SKI Research. 41 refs, figs, tabs
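
    The PSA-style Bayesian estimate in approach (1) is typically a conjugate gamma-Poisson update: with a gamma prior on the failure rate and a Poisson count of failures over an exposure time, the posterior is again gamma. A minimal sketch with invented numbers rather than Barsebaeck-1 data:

    ```python
    # Conjugate gamma-Poisson failure-rate update (textbook sketch,
    # invented numbers rather than the report's actual data).
    alpha0, beta0 = 0.5, 0.0   # Jeffreys-style gamma prior (shape, rate)
    k = 3                      # observed pipe failures in the pooled data
    T = 2500.0                 # aggregated exposure, e.g. weld-years

    alpha, beta = alpha0 + k, beta0 + T   # gamma posterior parameters
    posterior_mean = alpha / beta         # failures per weld-year
    print(f"posterior mean failure rate: {posterior_mean:.2e} per weld-year")
    ```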

  19. Probability, statistics and queueing theory, with computer science applications

    CERN Document Server

    Allen, Arnold O

    1978-01-01

    Probability, Statistics, and Queueing Theory: With Computer Science Applications focuses on the use of statistics and queueing theory for the design and analysis of data communication systems, emphasizing how the theorems and theory can be used to solve practical computer science problems. The book is divided into three parts. The first part discusses the basic concepts of probability, probability distributions commonly used in applied probability, and the important concept of a stochastic process. Part II covers the discipline of queueing theory, while Part III deals with statistical inference. T

  20. Computer, Network, Software, and Hardware Engineering with Applications

    CERN Document Server

    Schneidewind, Norman F

    2012-01-01

    There are many books on computers, networks, and software engineering, but none that integrate the three with applications. Integration is important because, increasingly, software dominates the performance, reliability, maintainability, and availability of complex computer systems. Books on software engineering typically portray software as if it exists in a vacuum, with no relationship to the wider system. This is wrong, because a system is more than software. It is comprised of people, organizations, processes, hardware, and software. All of these components must be considered in an integr