WorldWideScience

Sample records for automating spreadsheet discovery

  1. Automated discovery systems and the inductivist controversy

    Science.gov (United States)

    Giza, Piotr

    2017-09-01

    The paper explores possible influences that developments in the branches of AI known as automated discovery and machine learning might have upon some aspects of the old debate between Francis Bacon's inductivism and Karl Popper's falsificationism. Donald Gillies facetiously calls this controversy 'the duel of two English knights', and claims, after some analysis of historical cases of discovery, that Baconian induction had been used in science very rarely, or not at all, although he argues that the situation has changed with the advent of machine learning systems. (Some clarification of the terms 'machine learning' and 'automated discovery' is required here. The key idea of machine learning is that, given data with associated outcomes, software can be trained to make those associations in future cases, which typically amounts to inducing rules from individual cases classified by experts. Automated discovery (also called machine discovery) deals with uncovering new knowledge that is valuable for human beings; its key idea is that discovery is like other intellectual tasks and that the general idea of heuristic search in problem spaces applies to discovery tasks as well. However, since machine learning systems discover (very low-level) regularities in data, throughout this paper I use the generic term automated discovery for both kinds of systems. I will elaborate on this later on.) Gillies's line of argument can be generalised: thanks to automated discovery systems, philosophers of science have at their disposal a new tool for empirically testing their philosophical hypotheses. Accordingly, in the paper, I address the question of which of the two philosophical conceptions of scientific method is better vindicated in view of the successes and failures of systems developed within three major research programmes in the field: machine learning systems in the Turing tradition, the normative theory of scientific discovery formulated by Herbert Simon

  2. Perspectives on bioanalytical mass spectrometry and automation in drug discovery.

    Science.gov (United States)

    Janiszewski, John S; Liston, Theodore E; Cole, Mark J

    2008-11-01

    The use of high speed synthesis technologies has resulted in a steady increase in the number of new chemical entities active in the drug discovery research stream. Large organizations can have thousands of chemical entities in various stages of testing and evaluation across numerous projects on a weekly basis. Qualitative and quantitative measurements made using LC/MS are integrated throughout this process from early stage lead generation through candidate nomination. Nearly all analytical processes and procedures in modern research organizations are automated to some degree. This includes both hardware and software automation. In this review we discuss bioanalytical mass spectrometry and automation as components of the analytical chemistry infrastructure in pharma. Analytical chemists are presented as members of distinct groups with similar skillsets that build automated systems, manage test compounds, assays and reagents, and deliver data to project teams. The ADME-screening process in drug discovery is used as a model to highlight the relationships between analytical tasks in drug discovery. Emerging software and process automation tools are described that can potentially address gaps and link analytical chemistry related tasks. The role of analytical chemists and groups in modern 'industrialized' drug discovery is also discussed.

  3. Spreadsheet analysis of gamma spectra for nuclear material measurements

    International Nuclear Information System (INIS)

    Mosby, W.R.; Pace, D.M.

    1990-01-01

    A widely available commercial spreadsheet package for personal computers is used to calculate gamma-spectrum peak areas using both region-of-interest and peak-fitting methods. The gamma peak areas obtained are used for uranium enrichment assays and for isotopic analyses of mixtures of transuranics. The use of spreadsheet software with an internal processing language allows automation of routine analysis procedures, increasing ease of use and reducing processing errors while providing great flexibility in addressing unusual measurement problems. 4 refs., 9 figs
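
    The region-of-interest calculation described above is easy to reproduce outside a spreadsheet. Below is a minimal Python sketch (not the authors' worksheet; the channel ranges, background model and synthetic spectrum are illustrative assumptions):

    ```python
    import numpy as np

    def roi_net_area(counts, lo, hi, bg_width=3):
        """Net peak area in channels [lo, hi] with a linear background
        estimated from `bg_width` channels on each side of the ROI."""
        gross = counts[lo:hi + 1].sum()
        left = counts[lo - bg_width:lo].mean()            # mean background, left flank
        right = counts[hi + 1:hi + 1 + bg_width].mean()   # mean background, right flank
        n_chan = hi - lo + 1
        background = 0.5 * (left + right) * n_chan        # trapezoidal background under the peak
        net = gross - background
        # Poisson-based uncertainty: gross-count variance plus propagated background variance
        sigma = np.sqrt(gross + background * n_chan / (2 * bg_width))
        return net, sigma

    # Example: a synthetic spectrum with one Gaussian peak on a flat background
    chans = np.arange(1024)
    spectrum = np.random.poisson(50 + 400 * np.exp(-((chans - 512) / 4.0) ** 2))
    area, unc = roi_net_area(spectrum, 500, 524)
    print(f"net area = {area:.0f} +/- {unc:.0f} counts")
    ```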

  4. Flexible End2End Workflow Automation of Hit-Discovery Research.

    Science.gov (United States)

    Holzmüller-Laue, Silke; Göde, Bernd; Thurow, Kerstin

    2014-08-01

    The article considers a new approach to more complex laboratory automation at the workflow layer. The authors propose the automation of end2end workflows. Combining all relevant subprocesses, whether automated or performed manually, and regardless of the organizational unit in which they take place, results in end2end processes that include all result dependencies. The end2end approach focuses not only on the classical experiments in synthesis or screening, but also on auxiliary processes such as the production and storage of chemicals, cell culturing, and maintenance, as well as preparatory activities and analyses of experiments. Furthermore, connecting control flow and data flow in the same process model reduces the effort of data transfer between the involved systems, including the necessary data transformations. This end2end laboratory automation can be realized effectively with modern methods of business process management (BPM). The approach is based on the recently standardized process-modeling notation Business Process Model and Notation 2.0. In drug discovery, several scientific disciplines work together, with manifold modern methods, technologies, and a wide range of automated instruments, on the discovery and design of target-based drugs. The article discusses the novel BPM-based automation concept with an implemented example of a high-throughput screen of previously synthesized compound libraries. © 2014 Society for Laboratory Automation and Screening.

  5. Automated Supernova Discovery (Abstract)

    Science.gov (United States)

    Post, R. S.

    2015-12-01

    (Abstract only) We are developing a system of robotic telescopes for automatic recognition of supernovae as well as other transient events, in collaboration with the Puckett Supernova Search Team. At the SAS2014 meeting, the discovery program, SNARE, was first described. Since then, it has been continuously improved to handle searches under a wide variety of atmospheric conditions. Currently, two telescopes are used to build a reference library while searching for PSN with a partial library. Since data are taken on every cloudless night, we must deal with varying atmospheric conditions and high background illumination from the moon. Software is configured to identify a PSN and reshoot for verification, with options to change the run plan to acquire photometric or spectrographic data. The telescopes are 24-inch CDK24s with Alta U230 cameras, one in CA and one in NM. Images and run plans are sent between sites so the CA telescope can search while photometry is done in NM. Our goal is to find bright PSNs of magnitude 17.5 or brighter, which is the limit of our planned spectroscopy. We present results from our first automated PSN discoveries and plans for PSN data acquisition.

  6. A Fully Automated High-Throughput Flow Cytometry Screening System Enabling Phenotypic Drug Discovery.

    Science.gov (United States)

    Joslin, John; Gilligan, James; Anderson, Paul; Garcia, Catherine; Sharif, Orzala; Hampton, Janice; Cohen, Steven; King, Miranda; Zhou, Bin; Jiang, Shumei; Trussell, Christopher; Dunn, Robert; Fathman, John W; Snead, Jennifer L; Boitano, Anthony E; Nguyen, Tommy; Conner, Michael; Cooke, Mike; Harris, Jennifer; Ainscow, Ed; Zhou, Yingyao; Shaw, Chris; Sipes, Dan; Mainquist, James; Lesley, Scott

    2018-05-01

    The goal of high-throughput screening is to enable screening of compound libraries in an automated manner to identify quality starting points for optimization. This often involves screening a large diversity of compounds in an assay that preserves a connection to the disease pathology. Phenotypic screening is a powerful tool for drug identification, in that assays can be run without prior understanding of the target and with primary cells that closely mimic the therapeutic setting. Advanced automation and high-content imaging have enabled many complex assays, but these are still relatively slow and low throughput. To address this limitation, we have developed an automated workflow that is dedicated to processing complex phenotypic assays for flow cytometry. The system can achieve a throughput of 50,000 wells per day, resulting in a fully automated platform that enables robust phenotypic drug discovery. Over the past 5 years, this screening system has been used for a variety of drug discovery programs, across many disease areas, with many molecules advancing quickly into preclinical development and into the clinic. This report will highlight a diversity of approaches that automated flow cytometry has enabled for phenotypic drug discovery.

  7. Automated cell type discovery and classification through knowledge transfer

    Science.gov (United States)

    Lee, Hao-Chih; Kosoy, Roman; Becker, Christine E.

    2017-01-01

    Motivation: Recent advances in mass cytometry allow simultaneous measurements of up to 50 markers at single-cell resolution. However, the high dimensionality of mass cytometry data introduces computational challenges for automated data analysis and hinders translation of new biological understanding into clinical applications. Previous studies have applied machine learning to facilitate processing of mass cytometry data. However, manual inspection is still inevitable and is becoming the barrier to reliable large-scale analysis. Results: We present a new algorithm called Automated Cell-type Discovery and Classification (ACDC) that fully automates the classification of canonical cell populations and highlights novel cell types in mass cytometry data. Evaluations on real-world data show ACDC provides accurate and reliable estimations compared to manual gating results. Additionally, ACDC automatically classifies previously ambiguous cell types to facilitate discovery. Our findings suggest that ACDC substantially improves both reliability and interpretability of results obtained from high-dimensional mass cytometry profiling data. Availability and Implementation: A Python package (Python 3) and analysis scripts for reproducing the results are available at https://bitbucket.org/dudleylab/acdc. Contact: brian.kidd@mssm.edu or joel.dudley@mssm.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28158442

  8. Spreadsheet Patents

    DEFF Research Database (Denmark)

    Borum, Holger Stadel; Kirkbro, Malthe Ettrup; Sestoft, Peter

    2018-01-01

    This technical report gives a list of US patents and patent applications related to spreadsheet implementation technology. It is intended as a companion to the monograph Spreadsheet Implementation Technology (Peter Sestoft, MIT Press 2014), and substantially extends and updates an appendix from...

  9. Computerized cost estimation spreadsheet and cost data base for fusion devices

    International Nuclear Information System (INIS)

    Hamilton, W.R.; Rothe, K.E.

    1985-01-01

    An automated approach to performing and cataloging cost estimates has been developed at the Fusion Engineering Design Center (FEDC), wherein the cost estimate record is stored in a LOTUS 1-2-3 spreadsheet on an IBM personal computer. The cost estimation spreadsheet is based on the cost coefficient/cost algorithm approach and incorporates a detailed generic code of cost accounts for both tokamak and tandem mirror devices. Component design parameters (weight, surface area, etc.) and cost factors are input, and direct and indirect costs are calculated. The cost data base file, derived from actual cost experience within the fusion community and refined to be compatible with the spreadsheet costing approach, is a catalog of cost coefficients, algorithms, and component costs arranged into data modules corresponding to specific components and/or subsystems. Each data module contains engineering, equipment, and installation labor cost data for different configurations and types of the specific component or subsystem. This paper describes the assumptions, definitions, methodology, and architecture incorporated in the development of the cost estimation spreadsheet and cost data base, along with the type of input required and the output format.
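
    The cost coefficient/cost algorithm approach lends itself to a compact illustration. The sketch below only shows the shape of the calculation the spreadsheet automates; every coefficient, design parameter and the indirect-cost rate is a made-up placeholder, not FEDC data:

    ```python
    # Hypothetical illustration of a cost-coefficient estimate of the kind the
    # FEDC spreadsheet automates; all coefficients and rates are invented.
    components = {
        # name: (design parameter value, unit cost coefficient [k$/unit], install labor fraction)
        "TF coils (t)":        (850.0, 95.0, 0.30),
        "Vacuum vessel (m^2)": (600.0, 12.0, 0.45),
        "Shield (t)":          (2200.0, 8.5, 0.25),
    }

    INDIRECT_RATE = 0.35   # indirect costs as a fraction of total direct cost (assumed)

    direct = 0.0
    for name, (param, coeff, labor_frac) in components.items():
        equipment = coeff * param          # equipment cost from coefficient * design parameter
        install = labor_frac * equipment   # installation labor scaled from equipment cost
        line_total = equipment + install
        direct += line_total
        print(f"{name:22s} equipment {equipment:10.0f}  install {install:9.0f}  total {line_total:10.0f} k$")

    indirect = INDIRECT_RATE * direct
    print(f"{'Direct cost':22s} {direct:10.0f} k$")
    print(f"{'Indirect cost':22s} {indirect:10.0f} k$")
    print(f"{'Total estimated cost':22s} {direct + indirect:10.0f} k$")
    ```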

  10. Measuring Spreadsheet Formula Understandability

    NARCIS (Netherlands)

    Hermans, F.F.J.; Pinzger, M.; Van Deursen, A.

    2012-01-01

    Spreadsheets are widely used in industry, because they are flexible and easy to use. Often they are used for business-critical applications. It is, however, difficult for spreadsheet users to correctly assess the quality of spreadsheets, especially with respect to their understandability.

  11. Recent development in software and automation tools for high-throughput discovery bioanalysis.

    Science.gov (United States)

    Shou, Wilson Z; Zhang, Jun

    2012-05-01

    Bioanalysis with LC-MS/MS has been established as the method of choice for quantitative determination of drug candidates in biological matrices in drug discovery and development. The LC-MS/MS bioanalytical support for drug discovery, especially for early discovery, often requires high-throughput (HT) analysis of large numbers of samples (hundreds to thousands per day) generated from many structurally diverse compounds (tens to hundreds per day) with a very quick turnaround time, in order to provide important activity and liability data to move discovery projects forward. Another important consideration for discovery bioanalysis is its fit-for-purpose quality requirement depending on the particular experiments being conducted at this stage, and it is usually not as stringent as those required in bioanalysis supporting drug development. These aforementioned attributes of HT discovery bioanalysis made it an ideal candidate for using software and automation tools to eliminate manual steps, remove bottlenecks, improve efficiency and reduce turnaround time while maintaining adequate quality. In this article we will review various recent developments that facilitate automation of individual bioanalytical procedures, such as sample preparation, MS/MS method development, sample analysis and data review, as well as fully integrated software tools that manage the entire bioanalytical workflow in HT discovery bioanalysis. In addition, software tools supporting the emerging high-resolution accurate MS bioanalytical approach are also discussed.

  12. Automated Discovery of Speech Act Categories in Educational Games

    Science.gov (United States)

    Rus, Vasile; Moldovan, Cristian; Niraula, Nobal; Graesser, Arthur C.

    2012-01-01

    In this paper we address the important task of automated discovery of speech act categories in dialogue-based, multi-party educational games. Speech acts are important in dialogue-based educational systems because they help infer the student speaker's intentions (the task of speech act classification) which in turn is crucial to providing adequate…

  13. AutoDrug: fully automated macromolecular crystallography workflows for fragment-based drug discovery

    International Nuclear Information System (INIS)

    Tsai, Yingssu; McPhillips, Scott E.; González, Ana; McPhillips, Timothy M.; Zinn, Daniel; Cohen, Aina E.; Feese, Michael D.; Bushnell, David; Tiefenbrunn, Theresa; Stout, C. David; Ludaescher, Bertram; Hedman, Britt; Hodgson, Keith O.; Soltis, S. Michael

    2013-01-01

    New software has been developed for automating the experimental and data-processing stages of fragment-based drug discovery at a macromolecular crystallography beamline. A new workflow-automation framework orchestrates beamline-control and data-analysis software while organizing results from multiple samples. AutoDrug is software based upon the scientific workflow paradigm that integrates the Stanford Synchrotron Radiation Lightsource macromolecular crystallography beamlines and third-party processing software to automate the crystallography steps of the fragment-based drug-discovery process. AutoDrug screens a cassette of fragment-soaked crystals, selects crystals for data collection based on screening results and user-specified criteria and determines optimal data-collection strategies. It then collects and processes diffraction data, performs molecular replacement using provided models and detects electron density that is likely to arise from bound fragments. All processes are fully automated, i.e. are performed without user interaction or supervision. Samples can be screened in groups corresponding to particular proteins, crystal forms and/or soaking conditions. A single AutoDrug run is only limited by the capacity of the sample-storage dewar at the beamline: currently 288 samples. AutoDrug was developed in conjunction with RestFlow, a new scientific workflow-automation framework. RestFlow simplifies the design of AutoDrug by managing the flow of data and the organization of results and by orchestrating the execution of computational pipeline steps. It also simplifies the execution and interaction of third-party programs and the beamline-control system. Modeling AutoDrug as a scientific workflow enables multiple variants that meet the requirements of different user groups to be developed and supported. A workflow tailored to mimic the crystallography stages comprising the drug-discovery pipeline of CoCrystal Discovery Inc. has been deployed and successfully

  14. Spreadsheet

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    The spreadsheet shown in tables is intended to show how environmental costs can be calculated, displayed, and modified. It is not intended to show the environmental costs of any real resource or its effects, although it could show such costs if actual data were used. It is based on a hypothetical coal plant emitting various quantities of pollutants to which people are exposed. The environmental cost of the plant consists of the economic value of the ensuing health risks. The values used in the table are intended to be illustrative only, although they are based on modified versions of actual data from a study for the Bonneville Power Administration. The formulas used to calculate the values are also displayed. Although only one environmental effect (health risks) is calculated and valued in this spreadsheet, the same or similar procedure could be used for a variety of other environmental effects. This spreadsheet is intended to be a model; a complete accounting for all environmental costs associated with a coal plant is beyond the scope of this project
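
    The calculation chain the spreadsheet implements (emissions to exposure, exposure to health risk, risk to economic value) can be sketched in a few lines. In the same spirit as the original tables, all numbers below are placeholders for illustration only:

    ```python
    # Illustrative only: emissions -> population exposure -> health risk -> monetized cost.
    # Every constant here is a placeholder, not data from the BPA study.
    emissions_tonnes = {"SO2": 1200.0, "NOx": 900.0, "PM10": 150.0}

    exposure_per_tonne = {"SO2": 2.0e-4, "NOx": 1.5e-4, "PM10": 6.0e-4}  # person-dose per tonne (assumed)
    risk_per_exposure = 5.0e-3          # health effects per unit person-dose (assumed)
    value_per_effect = 40_000.0         # economic value per health effect, $ (assumed)

    total_cost = 0.0
    for pollutant, tonnes in emissions_tonnes.items():
        exposure = tonnes * exposure_per_tonne[pollutant]
        effects = exposure * risk_per_exposure
        cost = effects * value_per_effect
        total_cost += cost
        print(f"{pollutant:5s} exposure={exposure:8.3f}  effects={effects:8.5f}  cost=${cost:10.2f}")

    print(f"Total environmental (health) cost: ${total_cost:,.2f} per year")
    ```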

  15. Some Spreadsheet Poka-Yoke

    OpenAIRE

    Bekenn, Bill; Hooper, Ray

    2009-01-01

    Whilst not all spreadsheet defects are structural in nature, poor layout choices can compromise spreadsheet quality. These defects may be avoided at the development stage by some simple mistake prevention and detection devices. Poka-Yoke (Japanese for Mistake Proofing), which owes its genesis to the Toyota Production System (the standard for manufacturing excellence throughout the world) offers some principles that may be applied to reducing spreadsheet defects. In this paper we examine sprea...

  16. Optimization modeling with spreadsheets

    CERN Document Server

    Baker, Kenneth R

    2015-01-01

    An accessible introduction to optimization analysis using spreadsheets Updated and revised, Optimization Modeling with Spreadsheets, Third Edition emphasizes model building skills in optimization analysis. By emphasizing both spreadsheet modeling and optimization tools in the freely available Microsoft® Office Excel® Solver, the book illustrates how to find solutions to real-world optimization problems without needing additional specialized software. The Third Edition includes many practical applications of optimization models as well as a systematic framework that il

  17. Accelerating the discovery of materials for clean energy in the era of smart automation

    Science.gov (United States)

    Tabor, Daniel P.; Roch, Loïc M.; Saikin, Semion K.; Kreisbeck, Christoph; Sheberla, Dennis; Montoya, Joseph H.; Dwaraknath, Shyam; Aykol, Muratahan; Ortiz, Carlos; Tribukait, Hermann; Amador-Bedolla, Carlos; Brabec, Christoph J.; Maruyama, Benji; Persson, Kristin A.; Aspuru-Guzik, Alán

    2018-05-01

    The discovery and development of novel materials in the field of energy are essential to accelerate the transition to a low-carbon economy. Bringing recent technological innovations in automation, robotics and computer science together with current approaches in chemistry, materials synthesis and characterization will act as a catalyst for revolutionizing traditional research and development in both industry and academia. This Perspective provides a vision for an integrated artificial intelligence approach towards autonomous materials discovery, which, in our opinion, will emerge within the next 5 to 10 years. The approach we discuss requires the integration of the following tools, which have already seen substantial development to date: high-throughput virtual screening, automated synthesis planning, automated laboratories and machine learning algorithms. In addition to reducing the time to deployment of new materials by an order of magnitude, this integrated approach is expected to lower the cost associated with the initial discovery. Thus, the price of the final products (for example, solar panels, batteries and electric vehicles) will also decrease. This in turn will enable industries and governments to meet more ambitious targets in terms of reducing greenhouse gas emissions at a faster pace.

  18. Declarative Parallel Programming in Spreadsheet End-User Development

    DEFF Research Database (Denmark)

    Biermann, Florian

    2016-01-01

    Spreadsheets are first-order functional languages and are widely used in research and industry as a tool to conveniently perform all kinds of computations. Because cells on a spreadsheet are immutable, there are possibilities for implicit parallelization of spreadsheet computations. In this literature study, we provide an overview of the publications on spreadsheet end-user programming and declarative array programming to inform further research on parallel programming in spreadsheets. Our results show that there is a clear overlap between spreadsheet programming and array programming and we can directly apply results from functional array programming to a spreadsheet model of computations.

  19. DataSpread: Unifying Databases and Spreadsheets.

    Science.gov (United States)

    Bendre, Mangesh; Sun, Bofan; Zhang, Ding; Zhou, Xinyan; Chang, Kevin ChenChuan; Parameswaran, Aditya

    2015-08-01

    Spreadsheet software is often the tool of choice for ad-hoc tabular data management, processing, and visualization, especially on tiny data sets. On the other hand, relational database systems offer significant power, expressivity, and efficiency over spreadsheet software for data management, while lacking in the ease of use and ad-hoc analysis capabilities. We demonstrate DataSpread, a data exploration tool that holistically unifies databases and spreadsheets. It continues to offer a Microsoft Excel-based spreadsheet front-end, while in parallel managing all the data in a back-end database, specifically, PostgreSQL. DataSpread retains all the advantages of spreadsheets, including ease of use, ad-hoc analysis and visualization capabilities, and a schema-free nature, while also adding the advantages of traditional relational databases, such as scalability and the ability to use arbitrary SQL to import, filter, or join external or internal tables and have the results appear in the spreadsheet. DataSpread needs to reason about and reconcile differences in the notions of schema, addressing of cells and tuples, and the current "pane" (which exists in spreadsheets but not in traditional databases), and support data modifications at both the front-end and the back-end. Our demonstration will center on our first and early prototype of the DataSpread, and will give the attendees a sense for the enormous data exploration capabilities offered by unifying spreadsheets and databases.
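
    One way to picture how cell addressing can be reconciled with relational storage is to keep each cell as a (sheet, row, column, value) tuple in the back-end. The toy sketch below uses SQLite purely for illustration; DataSpread itself uses PostgreSQL and a far more elaborate schema, so the table layout and helper names here are assumptions:

    ```python
    import sqlite3

    # Toy cell store: each spreadsheet cell becomes one row in a relational table.
    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE cells (
                       sheet TEXT, row INTEGER, col INTEGER, value TEXT,
                       PRIMARY KEY (sheet, row, col))""")

    def set_cell(sheet, row, col, value):
        conn.execute("INSERT OR REPLACE INTO cells VALUES (?, ?, ?, ?)",
                     (sheet, row, col, str(value)))

    def get_pane(sheet, row_lo, row_hi, col_lo, col_hi):
        """Fetch only the visible 'pane', mirroring how a spreadsheet front-end
        scrolls over data that lives entirely in the database."""
        cur = conn.execute(
            "SELECT row, col, value FROM cells "
            "WHERE sheet=? AND row BETWEEN ? AND ? AND col BETWEEN ? AND ?",
            (sheet, row_lo, row_hi, col_lo, col_hi))
        return cur.fetchall()

    for r in range(1, 1001):
        set_cell("Sheet1", r, 1, r)          # column A: ids
        set_cell("Sheet1", r, 2, r * 1.5)    # column B: derived values

    print(get_pane("Sheet1", 1, 5, 1, 2))    # only the cells currently on screen
    ```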

  20. A Literature Review of Spreadsheet Technology

    DEFF Research Database (Denmark)

    Bock, Alexander

    2016-01-01

    It was estimated that there would be over 55 million end-user programmers in 2012 in many different fields such as engineering, insurance and banking, and the numbers are not expected to have dwindled since. Consequently, technological advancement of spreadsheets is of great interest to a wide range of people from different backgrounds. This literature review presents an overview of research on spreadsheet technology, its challenges and its solutions. We also attempt to identify why software developers generally frown upon spreadsheets and how spreadsheet research can help alter this view.

  1. Spreadsheets and Bulgarian Goats

    Science.gov (United States)

    Sugden, Steve

    2012-01-01

    We consider a problem appearing in an Australian Mathematics Challenge in 2003. This article considers whether a spreadsheet might be used to model this problem, thus allowing students to explore its structure within the spreadsheet environment. It then goes on to reflect on some general principles of problem decomposition when the final goal is a…

  2. Spreadsheet algorithm for stagewise solvent extraction

    International Nuclear Information System (INIS)

    Leonard, R.A.; Regalbuto, M.C.

    1994-01-01

    The material balance and equilibrium equations for solvent extraction processes have been combined with computer spreadsheets in a new way so that models for very complex multicomponent, multistage operations can be set up and used easily. A part of the novelty is the way in which the problem is organized in the spreadsheet. In addition, to facilitate spreadsheet setup, a new calculational procedure has been developed. The resulting Spreadsheet Algorithm for Stagewise Solvent Extraction (SASSE) can be used with either IBM or Macintosh personal computers as a simple yet powerful tool for analyzing solvent extraction flowsheets. 22 refs., 4 figs., 2 tabs
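
    The stagewise calculation combines a per-stage equilibrium relation with counter-current material balances. A minimal sketch of that iteration is shown below, assuming a single solute, constant phase flows and a constant distribution coefficient D; the actual SASSE algorithm is organized for spreadsheet recalculation and handles multiple components:

    ```python
    def countercurrent_extraction(x_feed, n_stages, D, A, O, tol=1e-10, max_iter=10_000):
        """Steady-state aqueous concentrations x[i] for a counter-current cascade.

        Assumes a single solute, constant aqueous flow A, constant organic flow O,
        and a constant distribution coefficient D (organic = D * aqueous at equilibrium).
        """
        x = [x_feed] * n_stages
        for _ in range(max_iter):
            x_old = x[:]
            for i in range(n_stages):
                x_in = x_feed if i == 0 else x[i - 1]              # aqueous entering from upstream
                y_in = 0.0 if i == n_stages - 1 else D * x[i + 1]  # organic from downstream (fresh solvent at last stage)
                x[i] = (A * x_in + O * y_in) / (A + O * D)         # stage mass balance + equilibrium
            if max(abs(a - b) for a, b in zip(x, x_old)) < tol:
                break
        return x

    profile = countercurrent_extraction(x_feed=1.0, n_stages=8, D=3.0, A=1.0, O=1.0)
    print("raffinate concentration:", profile[-1])   # solute leaving in the aqueous phase
    ```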

  3. AUTOMATED INADVERTENT INTRUDER APPLICATION

    International Nuclear Information System (INIS)

    Koffman, L.; Lee, P.; Cook, J.; Wilhite, E.

    2007-01-01

    The Environmental Analysis and Performance Modeling group of Savannah River National Laboratory (SRNL) conducts performance assessments of the Savannah River Site (SRS) low-level waste facilities to meet the requirements of DOE Order 435.1. These performance assessments, which result in limits on the amounts of radiological substances that can be placed in the waste disposal facilities, consider numerous potential exposure pathways that could occur in the future. One set of exposure scenarios, known as inadvertent intruder analysis, considers the impact on hypothetical individuals who are assumed to inadvertently intrude onto the waste disposal site. Inadvertent intruder analysis considers three distinct scenarios for exposure referred to as the agriculture scenario, the resident scenario, and the post-drilling scenario. Each of these scenarios has specific exposure pathways that contribute to the overall dose for the scenario. For the inadvertent intruder analysis, the calculation of dose for the exposure pathways is a relatively straightforward algebraic calculation that utilizes dose conversion factors. Prior to 2004, these calculations were performed using an Excel spreadsheet. However, design checks of the spreadsheet calculations revealed that errors could be introduced inadvertently when copying spreadsheet formulas cell by cell and finding these errors was tedious and time consuming. This weakness led to the specification of functional requirements to create a software application that would automate the calculations for inadvertent intruder analysis using a controlled source of input parameters. This software application, named the Automated Inadvertent Intruder Application, has undergone rigorous testing of the internal calculations and meets software QA requirements. The Automated Inadvertent Intruder Application was intended to replace the previous spreadsheet analyses with an automated application that was verified to produce the same calculations and

  4. Automated Inadvertent Intruder Application

    International Nuclear Information System (INIS)

    Koffman, Larry D.; Lee, Patricia L.; Cook, James R.; Wilhite, Elmer L.

    2008-01-01

    The Environmental Analysis and Performance Modeling group of Savannah River National Laboratory (SRNL) conducts performance assessments of the Savannah River Site (SRS) low-level waste facilities to meet the requirements of DOE Order 435.1. These performance assessments, which result in limits on the amounts of radiological substances that can be placed in the waste disposal facilities, consider numerous potential exposure pathways that could occur in the future. One set of exposure scenarios, known as inadvertent intruder analysis, considers the impact on hypothetical individuals who are assumed to inadvertently intrude onto the waste disposal site. Inadvertent intruder analysis considers three distinct scenarios for exposure referred to as the agriculture scenario, the resident scenario, and the post-drilling scenario. Each of these scenarios has specific exposure pathways that contribute to the overall dose for the scenario. For the inadvertent intruder analysis, the calculation of dose for the exposure pathways is a relatively straightforward algebraic calculation that utilizes dose conversion factors. Prior to 2004, these calculations were performed using an Excel spreadsheet. However, design checks of the spreadsheet calculations revealed that errors could be introduced inadvertently when copying spreadsheet formulas cell by cell and finding these errors was tedious and time consuming. This weakness led to the specification of functional requirements to create a software application that would automate the calculations for inadvertent intruder analysis using a controlled source of input parameters. This software application, named the Automated Inadvertent Intruder Application, has undergone rigorous testing of the internal calculations and meets software QA requirements. The Automated Inadvertent Intruder Application was intended to replace the previous spreadsheet analyses with an automated application that was verified to produce the same calculations and
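
    The "relatively straightforward algebraic calculation" is essentially a sum over exposure pathways of concentrations multiplied by dose conversion factors. The sketch below shows only that shape; the radionuclides, pathway factors and dose conversion values are placeholders, not parameters from the SRNL analysis:

    ```python
    # Pathway dose sum of the kind automated by the application.  All numbers are
    # placeholders for illustration, not SRNL performance-assessment parameters.
    inventory_conc = {"Tc-99": 2.0e3, "I-129": 5.0e1, "Cs-137": 8.0e2}  # pCi per m^3 of waste

    # pathway -> {nuclide: combined transfer factor * dose conversion factor}
    # (mrem/yr per pCi/m^3), hypothetical values
    pathway_factors = {
        "soil ingestion":    {"Tc-99": 1.0e-6, "I-129": 4.0e-5, "Cs-137": 2.0e-5},
        "crop consumption":  {"Tc-99": 8.0e-6, "I-129": 9.0e-5, "Cs-137": 1.0e-5},
        "external exposure": {"Tc-99": 1.0e-8, "I-129": 2.0e-7, "Cs-137": 6.0e-5},
    }

    total = 0.0
    for pathway, factors in pathway_factors.items():
        dose = sum(inventory_conc[n] * f for n, f in factors.items())
        total += dose
        print(f"{pathway:18s} {dose:8.3f} mrem/yr")
    print(f"{'scenario total':18s} {total:8.3f} mrem/yr")
    ```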

  5. Implementing function spreadsheets

    DEFF Research Database (Denmark)

    Sestoft, Peter

    2008-01-01

    ... that of turning an expression into a named function. Hence they proposed a way to define a function in terms of a worksheet with designated input and output cells; we shall call it a function sheet. The goal of our work is to develop implementations of function sheets and study their application to realistic examples. Therefore, we are also developing a simple yet comprehensive spreadsheet core implementation for experimentation with this technology. Here we report briefly on our experiments with function sheets as well as other uses of our spreadsheet core implementation.

  6. The deductive spreadsheet

    CERN Document Server

    Cervesato, Iliano

    2013-01-01

    This book describes recent multidisciplinary research at the confluence of the fields of logic programming, database theory and human-computer interaction. The goal of this effort was to develop the basis of a deductive spreadsheet, a user productivity application that allows users without formal training in computer science to make decisions about generic data in the same simple way they currently use spreadsheets to make decisions about numerical data. The result is an elegant design supported by the most recent developments in the above disciplines.The first half of the book focuses on the

  7. Process mining : spreadsheet-like technology for processes

    NARCIS (Netherlands)

    Van Der Aalst, W.M.P.

    2016-01-01

    Spreadsheets can be viewed as a success story. Since the late seventies spreadsheet programs have been installed on the majority of computers and play a role comparable to text editors and databases management systems. Spreadsheets can be used to do anything with numbers, but are unable to handle

  8. Automated vocabulary discovery for geo-parsing online epidemic intelligence.

    Science.gov (United States)

    Keller, Mikaela; Freifeld, Clark C; Brownstein, John S

    2009-11-24

    Automated surveillance of the Internet provides a timely and sensitive method for alerting on global emerging infectious disease threats. HealthMap is part of a new generation of online systems designed to monitor and visualize, on a real-time basis, disease outbreak alerts as reported by online news media and public health sources. HealthMap is of specific interest for national and international public health organizations and international travelers. A particular task that makes such a surveillance useful is the automated discovery of the geographic references contained in the retrieved outbreak alerts. This task is sometimes referred to as "geo-parsing". A typical approach to geo-parsing would demand an expensive training corpus of alerts manually tagged by a human. Given that human readers perform this kind of task by using both their lexical and contextual knowledge, we developed an approach which relies on a relatively small expert-built gazetteer, thus limiting the need of human input, but focuses on learning the context in which geographic references appear. We show in a set of experiments, that this approach exhibits a substantial capacity to discover geographic locations outside of its initial lexicon. The results of this analysis provide a framework for future automated global surveillance efforts that reduce manual input and improve timeliness of reporting.

  9. Enron’s Spreadsheets and Related Emails : A Dataset and Analysis

    NARCIS (Netherlands)

    Hermans, F.; Murphy-Hill, E.

    2014-01-01

    Spreadsheets are used extensively in business processes around the world and as such, a topic of research interest. Over the past few years, many spreadsheet studies have been performed on the EUSES spreadsheet corpus. While this corpus has served the spreadsheet community well, the spreadsheets it

  10. Spreadsheet tool for estimating noise reduction costs

    International Nuclear Information System (INIS)

    Frank, L.; Senden, V.; Leszczynski, Y.

    2009-01-01

    The Northeast Capital Industrial Association (NCIA) represents industry in Alberta's industrial heartland. The organization is in the process of developing a regional noise management plan (RNMP) for their member companies. The RNMP includes the development of a noise reduction cost spreadsheet tool to conduct reviews of the practical noise control treatments available for individual plant equipment, including the ranges of noise attenuation achievable, and to produce a budgetary prediction of the installed cost of practical noise control treatments. This paper discussed the noise reduction cost spreadsheet tool, with particular reference to noise control best-practice approaches and to the tool's development: prerequisites, assembly of the required data, approach, and the unit pricing database. Use and optimization of the noise reduction cost spreadsheet tool was also discussed. It was concluded that the noise reduction cost spreadsheet tool is an easy, interactive tool for estimating the implementation costs of different noise control mitigation strategies and options, and that it was very helpful in gaining insight for noise control planning purposes. 2 tabs.

  11. Predicting Causal Relationships from Biological Data: Applying Automated Causal Discovery on Mass Cytometry Data of Human Immune Cells

    KAUST Repository

    Triantafillou, Sofia; Lagani, Vincenzo; Heinze-Deml, Christina; Schmidt, Angelika; Tegner, Jesper; Tsamardinos, Ioannis

    2017-01-01

    Learning the causal relationships that define a molecular system allows us to predict how the system will respond to different interventions. Distinguishing causality from mere association typically requires randomized experiments. Methods for automated causal discovery from limited experiments exist, but have so far rarely been tested in systems biology applications. In this work, we apply state-of-the art causal discovery methods on a large collection of public mass cytometry data sets, measuring intra-cellular signaling proteins of the human immune system and their response to several perturbations. We show how different experimental conditions can be used to facilitate causal discovery, and apply two fundamental methods that produce context-specific causal predictions. Causal predictions were reproducible across independent data sets from two different studies, but often disagree with the KEGG pathway databases. Within this context, we discuss the caveats we need to overcome for automated causal discovery to become a part of the routine data analysis in systems biology.

  12. Predicting Causal Relationships from Biological Data: Applying Automated Causal Discovery on Mass Cytometry Data of Human Immune Cells

    KAUST Repository

    Triantafillou, Sofia

    2017-03-31

    Learning the causal relationships that define a molecular system allows us to predict how the system will respond to different interventions. Distinguishing causality from mere association typically requires randomized experiments. Methods for automated causal discovery from limited experiments exist, but have so far rarely been tested in systems biology applications. In this work, we apply state-of-the art causal discovery methods on a large collection of public mass cytometry data sets, measuring intra-cellular signaling proteins of the human immune system and their response to several perturbations. We show how different experimental conditions can be used to facilitate causal discovery, and apply two fundamental methods that produce context-specific causal predictions. Causal predictions were reproducible across independent data sets from two different studies, but often disagree with the KEGG pathway databases. Within this context, we discuss the caveats we need to overcome for automated causal discovery to become a part of the routine data analysis in systems biology.

  13. Predicting Causal Relationships from Biological Data: Applying Automated Causal Discovery on Mass Cytometry Data of Human Immune Cells

    KAUST Repository

    Triantafillou, Sofia; Lagani, Vincenzo; Heinze-Deml, Christina; Schmidt, Angelika; Tegner, Jesper; Tsamardinos, Ioannis

    2017-01-01

    Learning the causal relationships that define a molecular system allows us to predict how the system will respond to different interventions. Distinguishing causality from mere association typically requires randomized experiments. Methods for automated  causal discovery from limited experiments exist, but have so far rarely been tested in systems biology applications. In this work, we apply state-of-the art causal discovery methods on a large collection of public mass cytometry data sets, measuring intra-cellular signaling proteins of the human immune system and their response to several perturbations. We show how different experimental conditions can be used to facilitate causal discovery, and apply two fundamental methods that produce context-specific causal predictions. Causal predictions were reproducible across independent data sets from two different studies, but often disagree with the KEGG pathway databases. Within this context, we discuss the caveats we need to overcome for automated causal discovery to become a part of the routine data analysis in systems biology.

  14. Predicting Causal Relationships from Biological Data: Applying Automated Causal Discovery on Mass Cytometry Data of Human Immune Cells

    KAUST Repository

    Triantafillou, Sofia

    2017-09-29

    Learning the causal relationships that define a molecular system allows us to predict how the system will respond to different interventions. Distinguishing causality from mere association typically requires randomized experiments. Methods for automated  causal discovery from limited experiments exist, but have so far rarely been tested in systems biology applications. In this work, we apply state-of-the art causal discovery methods on a large collection of public mass cytometry data sets, measuring intra-cellular signaling proteins of the human immune system and their response to several perturbations. We show how different experimental conditions can be used to facilitate causal discovery, and apply two fundamental methods that produce context-specific causal predictions. Causal predictions were reproducible across independent data sets from two different studies, but often disagree with the KEGG pathway databases. Within this context, we discuss the caveats we need to overcome for automated causal discovery to become a part of the routine data analysis in systems biology.

  15. Electronic spreadsheet to acquire the reflectance from the TM and ETM+ Landsat images

    Directory of Open Access Journals (Sweden)

    Antonio R. Formaggio

    2005-08-01

    Full Text Available The reflectance of agricultural crops and other terrestrial surface "targets" is an intrinsic parameter of those targets, so in many situations it must be used instead of the "gray level" values found in satellite images. In order to obtain reflectance values, it is necessary to eliminate the atmospheric interference and to make a set of calculations that uses sensor parameters and information about the original image. Automating this procedure has the advantage of speeding up the process and reducing the possibility of errors during the calculations. The objective of this paper is to present an electronic spreadsheet that simplifies and automates the transformation of the digital numbers of TM/Landsat-5 and ETM+/Landsat-7 images into reflectance. The method employed for atmospheric correction was dark object subtraction (DOS). The electronic spreadsheet described here is freely available to users and can be downloaded at the following website: http://www.dsr.inpe.br/Calculo_Reflectancia.xls.
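
    For readers who prefer script form, the per-band conversion such a spreadsheet performs follows the usual DN-to-radiance-to-reflectance chain with a dark-object subtraction. The sketch below uses the conventional formulas; the calibration gain, offset, ESUN and geometry values are placeholders that would normally come from the image metadata:

    ```python
    import math

    def dn_to_reflectance(dn, gain, offset, esun, sun_elev_deg, d_au, dn_dark):
        """TM/ETM+ digital number -> reflectance with a simple dark-object subtraction (DOS).
        Calibration gain/offset and ESUN normally come from the image metadata; the
        values used in the example below are placeholders."""
        theta = math.radians(90.0 - sun_elev_deg)              # solar zenith angle
        radiance = gain * dn + offset                          # at-sensor spectral radiance
        # Path radiance estimated from the darkest pixel, assuming it should have ~1% reflectance
        l_dark = gain * dn_dark + offset
        l_haze = l_dark - 0.01 * esun * math.cos(theta) / (math.pi * d_au ** 2)
        corrected = max(radiance - l_haze, 0.0)
        return math.pi * corrected * d_au ** 2 / (esun * math.cos(theta))

    # Placeholder calibration for one band (not actual header values)
    print(dn_to_reflectance(dn=87, gain=0.7757, offset=-6.2, esun=1554.0,
                            sun_elev_deg=52.0, d_au=1.0079, dn_dark=15))
    ```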

  16. Automated vocabulary discovery for geo-parsing online epidemic intelligence

    Directory of Open Access Journals (Sweden)

    Freifeld Clark C

    2009-11-01

    Full Text Available Background: Automated surveillance of the Internet provides a timely and sensitive method for alerting on global emerging infectious disease threats. HealthMap is part of a new generation of online systems designed to monitor and visualize, on a real-time basis, disease outbreak alerts as reported by online news media and public health sources. HealthMap is of specific interest for national and international public health organizations and international travelers. A particular task that makes such surveillance useful is the automated discovery of the geographic references contained in the retrieved outbreak alerts. This task is sometimes referred to as "geo-parsing". A typical approach to geo-parsing would demand an expensive training corpus of alerts manually tagged by a human. Results: Given that human readers perform this kind of task by using both their lexical and contextual knowledge, we developed an approach which relies on a relatively small expert-built gazetteer, thus limiting the need of human input, but focuses on learning the context in which geographic references appear. We show, in a set of experiments, that this approach exhibits a substantial capacity to discover geographic locations outside of its initial lexicon. Conclusion: The results of this analysis provide a framework for future automated global surveillance efforts that reduce manual input and improve timeliness of reporting.

  17. Integrated Spreadsheets as Learning Environments for Young Children

    Directory of Open Access Journals (Sweden)

    Sergei Abramovich

    2014-01-01

    Full Text Available This classroom note shares experience of using spreadsheets with a group of 2nd grade students. The main feature of the learning environments that made the integration of technology and grade-appropriate mathematics effective is the use of images of modern tools such as the Nintendo DS, the PlayStation Portable, and the iPhone. The idea is illustrated by presenting a number of worksheets from spreadsheets modified in this way, called integrated spreadsheets. The authors suggest that using spreadsheets in this way offers an attractive interface for young students and significantly enhances their on-task behavior.

  18. Supporting professional spreadsheet users by generating leveled dataflow diagrams

    NARCIS (Netherlands)

    Hermans, F.; Pinzger, M.; Van Deursen, A.

    2010-01-01

    Thanks to their flexibility and intuitive programming model, spreadsheets are widely used in industry, often for business-critical applications. Similar to software developers, professional spreadsheet users demand support for maintaining and transferring their spreadsheets. In this paper, we first

  19. Automated in vivo platform for the discovery of functional food treatments of hypercholesterolemia.

    Directory of Open Access Journals (Sweden)

    Robert M Littleton

    Full Text Available The zebrafish is becoming an increasingly popular model system for both automated drug discovery and investigating hypercholesterolemia. Here we combine these aspects and for the first time develop an automated high-content confocal assay for treatments of hypercholesterolemia. We also create two algorithms for automated analysis of cardiodynamic data acquired by high-speed confocal microscopy. The first algorithm computes cardiac parameters solely from the frequency-domain representation of cardiodynamic data, while the second uses both frequency- and time-domain data. The combined approach resulted in smaller differences relative to manual measurements. The methods are implemented to test the ability of a methanolic extract of the hawthorn plant (Crataegus laevigata) to treat hypercholesterolemia and its peripheral cardiovascular effects. Results demonstrate the utility of these methods and suggest the extract has both antihypercholesterolemic and positive inotropic properties.

  20. Enron versus EUSES : A comparison of two spreadsheet corpora

    NARCIS (Netherlands)

    Jansen, B.

    2015-01-01

    Spreadsheets are widely used within companies and often form the basis for business decisions. Numerous cases are known where incorrect information in spreadsheets lead to incorrect decisions. Such cases underline the relevance of research on the professional use of spreadsheets. Recently a new

  1. The Spiral Discovery Network as an Automated General-Purpose Optimization Tool

    Directory of Open Access Journals (Sweden)

    Adam B. Csapo

    2018-01-01

    Full Text Available The Spiral Discovery Method (SDM was originally proposed as a cognitive artifact for dealing with black-box models that are dependent on multiple inputs with nonlinear and/or multiplicative interaction effects. Besides directly helping to identify functional patterns in such systems, SDM also simplifies their control through its characteristic spiral structure. In this paper, a neural network-based formulation of SDM is proposed together with a set of automatic update rules that makes it suitable for both semiautomated and automated forms of optimization. The behavior of the generalized SDM model, referred to as the Spiral Discovery Network (SDN, and its applicability to nondifferentiable nonconvex optimization problems are elucidated through simulation. Based on the simulation, the case is made that its applicability would be worth investigating in all areas where the default approach of gradient-based backpropagation is used today.

  2. Lens Ray Diagrams with a Spreadsheet

    Science.gov (United States)

    González, Manuel I.

    2018-01-01

    Physicists create spreadsheets customarily to carry out numerical calculations and to display their results in a meaningful, nice-looking way. Spreadsheets can also be used to display a vivid geometrical model of a physical system. This statement is illustrated with an example taken from geometrical optics: images formed by a thin lens. A careful…
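
    The geometry behind such a worksheet is the thin-lens equation plus the magnification relation; the spreadsheet essentially tabulates the resulting ray endpoints. A short script equivalent, with an assumed focal length and object distance:

    ```python
    def thin_lens_image(f, d_o, h_o=1.0):
        """Image distance and height from the thin-lens and magnification equations.
        A negative image distance indicates a virtual image."""
        d_i = 1.0 / (1.0 / f - 1.0 / d_o)   # from 1/f = 1/d_o + 1/d_i
        m = -d_i / d_o                      # lateral magnification
        return d_i, m * h_o

    # Example: converging lens, f = 10 cm, object at 15 cm
    d_i, h_i = thin_lens_image(f=10.0, d_o=15.0)
    print(f"image at {d_i:.1f} cm, height {h_i:.2f} (negative means inverted)")
    ```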

  3. On the Numerical Accuracy of Spreadsheets

    Directory of Open Access Journals (Sweden)

    Alejandro C. Frery

    2010-10-01

    Full Text Available This paper discusses the numerical precision of five spreadsheets (Calc, Excel, Gnumeric, NeoOffice and Oleo) running on two hardware platforms (i386 and amd64) and on three operating systems (Windows Vista, Ubuntu Intrepid and Mac OS Leopard). The methodology consists of checking the number of correct significant digits returned by each spreadsheet when computing the sample mean, standard deviation, first-order autocorrelation, F statistic in ANOVA tests, linear and nonlinear regression and distribution functions. A discussion about the algorithms for pseudorandom number generation provided by these platforms is also conducted. We conclude that there is no safe choice among the spreadsheets here assessed: they all fail in nonlinear regression and they are not suited for Monte Carlo experiments.
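
    The "number of correct significant digits" used in such comparisons is typically the log relative error (LRE) against a certified value. A minimal sketch of that check follows; the data and the certified mean are illustrative, not one of the paper's actual test cases:

    ```python
    import math

    def lre(computed, certified):
        """Log relative error: roughly the number of correct significant digits."""
        if computed == certified:
            return 15.0                       # cap at double precision
        if certified == 0.0:
            return -math.log10(abs(computed))
        return min(15.0, -math.log10(abs(computed - certified) / abs(certified)))

    # Illustrative check of a sample-mean implementation against a "certified" value
    data = [10000000.2, 10000000.1, 10000000.3, 10000000.2, 10000000.2]
    certified_mean = 10000000.2
    naive_mean = sum(data) / len(data)
    print(f"computed mean = {naive_mean!r}, LRE = {lre(naive_mean, certified_mean):.1f} digits")
    ```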

  4. Toolkits for nuclear science. Data and spreadsheets

    International Nuclear Information System (INIS)

    Lindstrom, R.M.

    2006-01-01

    In the past decade, the combination of readily accessible, reliable data in electronic form with well-tested spreadsheet programs has changed the approach to experiment planning and computation of results. This has led to a flowering of software applications based on spreadsheets, mostly written by scientists, not by professional programmers trained in numerical methods. Formal quality systems increasingly call for verified computational methods and reference data as part of the analytical process, a demand that is difficult to meet with most spreadsheets. Examples are given of utilities used in our laboratory, with suggestions for verification and quality maintenance. (author)

  5. GWATCH: a web platform for automated gene association discovery analysis

    Science.gov (United States)

    2014-01-01

    Background: As genome-wide sequence analyses for complex human disease determinants are expanding, it is increasingly necessary to develop strategies to promote discovery and validation of potential disease-gene associations. Findings: Here we present a dynamic web-based platform – GWATCH – that automates and facilitates four steps in genetic epidemiological discovery: 1) Rapid gene association search and discovery analysis of large genome-wide datasets; 2) Expanded visual display of gene associations for genome-wide variants (SNPs, indels, CNVs), including Manhattan plots, 2D and 3D snapshots of any gene region, and a dynamic genome browser illustrating gene association chromosomal regions; 3) Real-time validation/replication of candidate or putative genes suggested from other sources, limiting Bonferroni genome-wide association study (GWAS) penalties; 4) Open data release and sharing by eliminating privacy constraints (The National Human Genome Research Institute (NHGRI) Institutional Review Board (IRB), informed consent, The Health Insurance Portability and Accountability Act (HIPAA) of 1996 etc.) on unabridged results, which allows for open access comparative and meta-analysis. Conclusions: GWATCH is suitable for both GWAS and whole genome sequence association datasets. We illustrate the utility of GWATCH with three large genome-wide association studies for HIV-AIDS resistance genes screened in large multicenter cohorts; however, association datasets from any study can be uploaded and analyzed by GWATCH. PMID:25374661

  6. Early identification of hERG liability in drug discovery programs by automated patch clamp

    Directory of Open Access Journals (Sweden)

    Timm eDanker

    2014-09-01

    Full Text Available Blockade of the cardiac ion channel coded by hERG can lead to cardiac arrhythmia, which has become a major concern in drug discovery and development. Automated electrophysiological patch clamp allows assessment of hERG channel effects early in drug development to aid medicinal chemistry programs and has become routine in pharmaceutical companies. However, a number of potential sources of errors in setting up hERG channel assays by automated patch clamp can lead to misinterpretation of data or false effects being reported. This article describes protocols for automated electrophysiology screening of compound effects on the hERG channel current. Protocol details and the translation of criteria known from manual patch clamp experiments to automated patch clamp experiments to achieve good quality data are emphasized. Typical pitfalls and artifacts that may lead to misinterpretation of data are discussed. While this article focuses on hERG channel recordings using the QPatch (Sophion A/S, Copenhagen, Denmark) technology, many of the assay and protocol details given in this article can be transferred for setting up different ion channel assays by automated patch clamp and are similar on other planar patch clamp platforms.

  7. Spreadsheet Design: An Optimal Checklist for Accountants

    Science.gov (United States)

    Barnes, Jeffrey N.; Tufte, David; Christensen, David

    2009-01-01

    Just as good grammar, punctuation, style, and content organization are important to well-written documents, basic fundamentals of spreadsheet design are essential to clear communication. In fact, the very principles of good writing should be integrated into spreadsheet workpaper design and organization. The unique contributions of this paper are…

  8. Rewriting High-Level Spreadsheet Structures into Higher-Order Functional Programs

    DEFF Research Database (Denmark)

    Biermann, Florian; Dou, Wensheng; Sestoft, Peter

    2017-01-01

    Spreadsheets are used heavily in industry and academia. Often, spreadsheet models are developed for years and their complexity grows vastly beyond what the paradigm was originally conceived for. Such complexity often comes at the cost of recalculation performance. However, spreadsheet models...

  9. Spreadsheets in the Cloud – Not Ready Yet

    Directory of Open Access Journals (Sweden)

    Bruce D. McCullough

    2013-01-01

    Full Text Available Cloud computing is a relatively new technology that facilitates collaborative creation and modification of documents over the internet in real time. Here we provide an introductory assessment of the available statistical functions in three leading cloud spreadsheets, namely Google Spreadsheet, Microsoft Excel Web App, and Zoho Sheet. Our results show that the developers of cloud-based spreadsheets are not performing basic quality control, resulting in statistical computations that are misleading and erroneous. Moreover, the developers do not provide sufficient information regarding the software and the hardware, which can change at any time without notice. Indeed, rerunning the tests after several months, we obtained different and sometimes worse results.

  10. A system for automated quantification of cutaneous electrogastrograms

    DEFF Research Database (Denmark)

    Paskaranandavadivel, Niranchan; Bull, Simon Henry; Parsell, Doug

    2015-01-01

    Clinical evaluation of cutaneous electrogastrograms (EGG) is important for understanding the role of slow waves in functional motility disorders and may be a useful diagnostic aid. An automated software package has been developed which computes metrics of interest from EGG and from slow wave ... and amplitude were compared to automated estimates. The methods were packaged into a software executable which processes the data and presents the results in intuitive graphical and spreadsheet formats. Automated EGG analysis allows for clinical translation of bio-electrical analysis for potential ...
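
    Typical EGG metrics such as the dominant frequency and amplitude of the slow wave can be estimated from the signal's power spectrum. The sketch below uses synthetic data with an assumed 3 cycle-per-minute slow wave and does not reproduce the package's actual algorithms:

    ```python
    import numpy as np

    fs = 4.0                         # sampling rate, Hz (typical for cutaneous EGG)
    t = np.arange(0, 600, 1 / fs)    # 10 minutes of signal
    # Synthetic EGG: 3 cpm (0.05 Hz) slow wave plus noise
    egg = 0.2 * np.sin(2 * np.pi * 0.05 * t) + 0.05 * np.random.randn(t.size)

    spectrum = np.abs(np.fft.rfft(egg - egg.mean())) ** 2
    freqs = np.fft.rfftfreq(egg.size, d=1 / fs)

    # Restrict to the physiological slow-wave band (roughly 1-10 cycles per minute)
    band = (freqs >= 1 / 60) & (freqs <= 10 / 60)
    peak = np.argmax(spectrum[band])
    dominant_hz = freqs[band][peak]
    amplitude = 2 * np.sqrt(spectrum[band][peak]) / egg.size  # amplitude of the dominant sinusoid

    print(f"dominant frequency: {dominant_hz * 60:.2f} cycles/min")
    print(f"estimated slow-wave amplitude: {amplitude:.3f} (arbitrary units)")
    ```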

  11. Designing Spreadsheet-Based Tasks for Purposeful Algebra

    Science.gov (United States)

    Ainley, Janet; Bills, Liz; Wilson, Kirsty

    2005-01-01

    We describe the design of a sequence of spreadsheet-based pedagogic tasks for the introduction of algebra in the early years of secondary schooling within the Purposeful Algebraic Activity project. This design combines two relatively novel features to bring a different perspective to research in the use of spreadsheets for the learning and…

  12. LICSS - a chemical spreadsheet in microsoft excel.

    Science.gov (United States)

    Lawson, Kevin R; Lawson, Jonty

    2012-02-02

    Representations of chemical datasets in spreadsheet format are important for ready data assimilation and manipulation. In addition to the normal spreadsheet facilities, chemical spreadsheets need to have visualisable chemical structures and data searchable by chemical as well as textual queries. Many such chemical spreadsheet tools are available, some operating in the familiar Microsoft Excel environment. However, within this group, the performance of Excel is often compromised, particularly in terms of the number of compounds which can usefully be stored on a sheet. LICSS is a lightweight chemical spreadsheet within Microsoft Excel for Windows. LICSS stores structures solely as Smiles strings. Chemical operations are carried out by calling Java code modules which use the CDK, JChemPaint and OPSIN libraries to provide cheminformatics functionality. Compounds in sheets or charts may be visualised (individually or en masse), and sheets may be searched by substructure or similarity. All the molecular descriptors available in CDK may be calculated for compounds (in batch or on-the-fly), and various cheminformatic operations such as fingerprint calculation, Sammon mapping, clustering and R group table creation may be carried out. We detail here the features of LICSS and how they are implemented. We also explain the design criteria, particularly in terms of potential corporate use, which led to this particular implementation. LICSS is an Excel-based chemical spreadsheet with a difference:
    • It can usefully be used on sheets containing hundreds of thousands of compounds; it doesn't compromise the normal performance of Microsoft Excel
    • It is designed to be installed and run in environments in which users do not have admin privileges; installation involves merely file copying, and sharing of LICSS sheets invokes automatic installation
    • It is free and extensible
    LICSS is open source software and we hope sufficient detail is provided here to enable developers to add their

  13. Spreadsheet Modeling of Electron Distributions in Solids

    Science.gov (United States)

    Glassy, Wingfield V.

    2006-01-01

    A series of spreadsheet modeling exercises constructed as part of a new upper-level elective course on solid state materials and surface chemistry is described. The spreadsheet exercises are developed to provide students with the opportunity to interact with the conceptual framework where the role of the density of states and the Fermi-Dirac…
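
    As a rough illustration of the physics such exercises revolve around (an assumption about their content, not code from the article), the Fermi-Dirac occupation probability can be evaluated directly:

    import math

    def fermi_dirac(energy_ev, fermi_ev, temperature_k):
        """Occupation probability f(E) = 1 / (exp((E - E_F) / kT) + 1)."""
        k_b = 8.617333262e-5  # Boltzmann constant in eV/K
        if temperature_k == 0:
            return 1.0 if energy_ev <= fermi_ev else 0.0
        x = (energy_ev - fermi_ev) / (k_b * temperature_k)
        return 1.0 / (math.exp(x) + 1.0)

    # Occupation near a 5 eV Fermi level at room temperature and at 1000 K
    for t in (300.0, 1000.0):
        print(t, [round(fermi_dirac(e, 5.0, t), 3) for e in (4.9, 5.0, 5.1)])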

  14. Use of spreadsheets for interactive control of MFTF-B plasma diagnostic instruments

    International Nuclear Information System (INIS)

    Preckshot, G.G.; Goldner, A.L.; Kobayashi, A.

    1986-01-01

    The Mirror Fusion Test Facility (MFTF-B) at Lawrence Livermore National Laboratory has a variety of highly individualized plasma diagnostic instruments attached to the experiment. These instruments are controlled through graphics workstations networked to a central computer system. A distributed spreadsheet-like program runs in both the graphics workstations and in the central computer system. An interface very similar to a commercial spreadsheet program is presented to the user at a workstation. In a commercial spreadsheet program, the user may attach mathematical calculation functions to spreadsheet cells. At MFTF-B, hardware control functions, hardware monitoring functions, and communications functions, as well as mathematical functions, may be attached to cells. Both the user and feedback from instrument hardware may make entries in spreadsheet cells; any entry in a spreadsheet cell may cause reevaluation of the cell's associated functions. The spreadsheet approach makes the addition of a new instrument a matter of designing one or more spreadsheet tables with associated meta-language-defined control and communication function strings. This paper describes the details of the spreadsheets and the implementation experience

  15. Use of spreadsheets for interactive control of MFTF-B plasma diagnostic instruments

    International Nuclear Information System (INIS)

    Preckshot, G.G.; Goldner, A.; Kobayashi, A.

    1985-01-01

    The Mirror Fusion Test Facility (MFTF-B) at Lawrence Livermore National Laboratory has a variety of highly individualized plasma diagnostic instruments attached to the experiment. These instruments are controlled through graphics workstations networked to a central computer system. A distributed spreadsheet-like program runs in both the graphics workstations and in the central computer system. An interface very similar to a commercial spreadsheet program is presented to the user at a workstation. In a commercial spreadsheet program, the user may attach mathematical calculation functions to spreadsheet cells. At MFTF-B, hardware control functions, hardware monitoring functions, and communications functions, as well as mathematical functions, may be attached to cells. Both the user and feedback from instrument hardware may make entries in spreadsheet cells; any entry in a spreadsheet cell may cause reevaluation of the cell's associated functions. The spreadsheet approach makes the addition of a new instrument a matter of designing one or more spreadsheet tables with associated meta-language-defined control and communication function strings. We report here details of our spreadsheets and our implementation experience

  16. Automated cost modeling for coal combustion systems

    International Nuclear Information System (INIS)

    Rowe, R.M.; Anast, K.R.

    1991-01-01

    This paper reports on cost information developed at AMAX R and D Center for coal-water slurry production implemented in an automated spreadsheet (Lotus 123) for personal computer use. The spreadsheet format allows the user to evaluate the impacts of various process options, coal feedstock characteristics, fuel characteristics, plant location sites, and plant sizes on fuel cost. Model flexibility reduces time and labor required to determine fuel costs and provides a basis to compare fuels manufactured by different processes. The model input includes coal characteristics, plant flowsheet definition, plant size, and market location. Based on these inputs, selected unit operations are chosen for coal processing

  17. Using Spreadsheets to Produce Acid-Base Titration Curves.

    Science.gov (United States)

    Cawley, Martin James; Parkinson, John

    1995-01-01

    Describes two spreadsheets for producing acid-base titration curves: one uses relatively simple cell formulae that can be written into the spreadsheet by inexperienced students, and the second uses more complex formulae that are best written by the teacher. (JRH)
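
    As a rough sketch of the kind of calculation such cell formulae perform (an assumption, not the authors' spreadsheets), the pH of a strong acid titrated with a strong base follows from a simple mole balance:

    import math

    def strong_acid_base_ph(c_acid, v_acid_ml, c_base, v_base_ml):
        """pH during titration of a strong acid with a strong base (water autoionization ignored away from equivalence)."""
        moles_acid = c_acid * v_acid_ml / 1000.0
        moles_base = c_base * v_base_ml / 1000.0
        excess = (moles_acid - moles_base) / ((v_acid_ml + v_base_ml) / 1000.0)
        if abs(excess) < 1e-12:
            return 7.0                      # equivalence point at 25 C
        if excess > 0:
            return -math.log10(excess)      # excess H+
        return 14.0 + math.log10(-excess)   # excess OH-

    # 0.1 M HCl (25 mL) titrated with 0.1 M NaOH
    for v in (0.0, 12.5, 24.9, 25.0, 25.1, 40.0):
        print(v, round(strong_acid_base_ph(0.1, 25.0, 0.1, v), 2))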

  18. Mitigating Spreadsheet Model Risk with Python Open Source Infrastructure

    OpenAIRE

    Beavers, Oliver

    2018-01-01

    Across an aggregation of EuSpRIG presentation papers, two maxims hold true: spreadsheet models are akin to software, yet spreadsheet developers are not software engineers. As such, the lack of traditional software engineering tools and protocols invites a higher rate of error in the end result. This paper lays the groundwork for spreadsheet modelling professionals to develop reproducible audit tools using freely available, open source packages built with the Python programming language, enablin...

  19. 3D Graphics with Spreadsheets

    Directory of Open Access Journals (Sweden)

    Jan Benacka

    2009-06-01

    Full Text Available In the article, the formulas for orthographic parallel projection of 3D bodies on a computer screen are derived using secondary school vector algebra. The spreadsheet implementation is demonstrated in six applications that project bodies with increasing intricacy – a convex body (cube) with non-solved visibility, convex bodies (cube, chapel) with solved visibility, a coloured convex body (chapel) with solved visibility, and a coloured non-convex body (church) with solved visibility. The projections are revolvable in the horizontal and vertical planes, and they are changeable in size. The examples show an unusual way of using spreadsheets as a 3D computer graphics tool. The applications can serve as a simple introduction to the general principles of computer graphics, to graphics with spreadsheets, and as a tool for exercising stereoscopic vision. The presented approach is usable for visualising 3D scenes within some topics of secondary school curricula, such as solid geometry (angles and distances of lines and planes within simple bodies) or analytic geometry in space (angles and distances of lines and planes in E3), and even at university level within calculus for visualising graphs of z = f(x,y) functions. Examples are pictured.
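
    The projection formulas the article derives can be approximated with standard rotation matrices; the sketch below is a hypothetical illustration, not the article's spreadsheet formulas:

    import math

    def project_point(x, y, z, azimuth_deg, elevation_deg):
        """Orthographic parallel projection: rotate about the vertical axis, then the horizontal axis, then drop depth."""
        a, e = math.radians(azimuth_deg), math.radians(elevation_deg)
        # rotation about the vertical (y) axis
        x1 = x * math.cos(a) + z * math.sin(a)
        z1 = -x * math.sin(a) + z * math.cos(a)
        # rotation about the horizontal (x) axis; the depth coordinate after this step is discarded
        y2 = y * math.cos(e) - z1 * math.sin(e)
        return x1, y2

    # Screen coordinates of the 8 vertices of a unit cube viewed from azimuth 30, elevation 20
    cube = [(i, j, k) for i in (0, 1) for j in (0, 1) for k in (0, 1)]
    print([tuple(round(c, 3) for c in project_point(*v, 30, 20)) for v in cube])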

  20. The governance of risk arising from the use of spreadsheets in organisations

    Directory of Open Access Journals (Sweden)

    Tessa Minter

    2014-06-01

    Full Text Available The key to maximising the effectiveness of spreadsheet models for critical decision making is appropriate risk governance. Those responsible for governance need, at a macro level, to identify the specific spreadsheet risks, determine the reasons for such exposures and establish where and when risk exposures occur from point of initiation to usage and storage. It is essential to identify which parties could create the exposure, taking cognisance of the entire supply chain of the organisation. If management's risk strategy is to control the risks, then the question becomes how these risks can be prevented and/or detected and corrected. This paper attempts to address each of these critical issues and to offer guidance in the governance of spreadsheet risk. The paper identifies the risk exposures and sets out the responsibilities of directors in relation to spreadsheets and the spreadsheet cycle. Spreadsheet risk exposure can be managed in terms of setting the control environment, undertaking risk assessment, providing the requisite information and communicating with internal and external parties, as well as implementing spreadsheet lifecycle application controls and monitoring activities.

  1. Affordances of Spreadsheets In Mathematical Investigation: Potentialities For Learning

    Directory of Open Access Journals (Sweden)

    Nigel Calder

    2009-10-01

    Full Text Available This article is concerned with the ways learning is shaped when mathematics problems are investigated in spreadsheet environments. It considers how the opportunities and constraints afforded by the digital media influenced the decisions the students made, and the direction of their enquiry pathway. How might the learning trajectory unfold, and the learning process and mathematical understanding emerge? Will the spreadsheet, as the pedagogical medium, evoke learning in a distinctive manner? The article reports on an aspect of an ongoing study involving students as they engage with mathematical investigative tasks through digital media, the spreadsheet in particular. It considers the affordances of this learning environment for primary-aged students.

  2. Spread-sheet application to classify radioactive material for shipment

    International Nuclear Information System (INIS)

    Brown, A.N.

    1998-01-01

    A spread-sheet application has been developed at the Idaho National Engineering and Environmental Laboratory to aid the shipper when classifying nuclide mixtures of normal form, radioactive materials. The results generated by this spread-sheet are used to confirm the proper US DOT classification when offering radioactive material packages for transport. The user must input to the spread-sheet the mass of the material being classified, the physical form (liquid or not) and the activity of each regulated nuclide. The spread-sheet uses these inputs to calculate two general values: 1) the specific activity of the material, and 2) a summation calculation of the nuclide content. The specific activity is used to determine if the material exceeds the DOT minimal threshold for a radioactive material. If the material is calculated to be radioactive, the specific activity is also used to determine if the material meets the activity requirement for one of the three low specific activity designations (LSA-I, LSA-II, LSA-III, or not LSA). Again, if the material is calculated to be radioactive, the summation calculation is then used to determine which activity category the material will meet (Limited Quantity, Type A, Type B, or Highway Route Controlled Quantity). This spread-sheet has proven to be an invaluable aid for shippers of radioactive materials at the Idaho National Engineering and Environmental Laboratory. (authors)

  3. Spreadsheet software to assess locomotor disability to quantify permanent physical impairment

    Directory of Open Access Journals (Sweden)

    Sunderraj Ellur

    2012-01-01

    Full Text Available Context: Assessment of physical disability is an important duty of a plastic surgeon especially for those of us who are in an institutional practice. Aim: The Gazette of India notification gives a guideline regarding the assessment of the disability. However, the calculations as per the guidelines are time consuming. In this article, a spreadsheet program which is based on the notification is presented. The aim of this article is to design a spreadsheet program which is simple, reproducible, user friendly, less time consuming and accurate. Materials and Methods: This spreadsheet program was designed using the Microsoft Excel. The spreadsheet program was designed on the basis of the guidelines in the Gazette of India Notification regarding the assessment of Locomotor Disability to Quantify Permanent Physical Impairment. Two representative examples are presented to help understand the application of this program. Results: Two spreadsheet programs, one for upper limb and another for the lower limb are presented. The representative examples show the accuracy of the program to match the results of the traditional method of calculation. Conclusion: A simple spreadsheet program can be designed to assess disability as per the Gazette of India Notification. This program is easy to use and is accurate.

  4. Spreadsheets as tools for statistical computing and statistics education

    OpenAIRE

    Neuwirth, Erich

    2000-01-01

    Spreadsheets are a ubiquitous program category, and we will discuss their use in statistics and statistics education on various levels, ranging from very basic examples to extremely powerful methods. Since the spreadsheet paradigm is very familiar to many potential users, using it as the interface to statistical methods can make statistics more easily accessible.

  5. Using the Talbot_Lau_interferometer_parameters Spreadsheet

    Energy Technology Data Exchange (ETDEWEB)

    Kallman, Jeffrey S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-06-04

    Talbot-Lau interferometers allow incoherent X-ray sources to be used for phase contrast imaging. A spreadsheet for exploring the parameter space of Talbot and Talbot-Lau interferometers has been assembled. This spreadsheet allows the user to examine the consequences of choosing phase grating pitch, source energy, and source location on the overall geometry of a Talbot or Talbot-Lau X-ray interferometer. For the X-ray energies required to penetrate scanned luggage, the spacing between gratings is large enough that the mechanical tolerances for amplitude grating positioning are unlikely to be met.

  6. A generic template for automated bioanalytical ligand-binding assays using modular robotic scripts in support of discovery biotherapeutic programs.

    Science.gov (United States)

    Duo, Jia; Dong, Huijin; DeSilva, Binodh; Zhang, Yan J

    2013-07-01

    Sample dilution and reagent pipetting are time-consuming steps in ligand-binding assays (LBAs). Traditional automation-assisted LBAs use assay-specific scripts that require labor-intensive script writing and user training. Five major script modules were developed on Tecan Freedom EVO liquid handling software to facilitate the automated sample preparation and LBA procedure: sample dilution, sample minimum required dilution, standard/QC minimum required dilution, standard/QC/sample addition, and reagent addition. The modular design of automation scripts allowed the users to assemble an automated assay with minimal script modification. The application of the template was demonstrated in three LBAs to support discovery biotherapeutic programs. The results demonstrated that the modular scripts provided the flexibility in adapting to various LBA formats and the significant time saving in script writing and scientist training. Data generated by the automated process were comparable to those by manual process while the bioanalytical productivity was significantly improved using the modular robotic scripts.

  7. Spreadsheet application to classify radioactive material for shipment

    International Nuclear Information System (INIS)

    Brown, A.N.

    1997-12-01

    A spreadsheet application has been developed at the Idaho National Engineering and Environmental Laboratory to aid the shipper when classifying nuclide mixtures of normal form, radioactive materials. The results generated by this spreadsheet are used to confirm the proper US Department of Transportation (DOT) classification when offering radioactive material packages for transport. The user must input to the spreadsheet the mass of the material being classified, the physical form (liquid or not), and the activity of each regulated nuclide. The spreadsheet uses these inputs to calculate two general values: (1) the specific activity of the material, and (2) a summation calculation of the nuclide content. The specific activity is used to determine if the material exceeds the DOT minimal threshold for a radioactive material (Yes or No). If the material is calculated to be radioactive, the specific activity is also used to determine if the material meets the activity requirement for one of the three Low Specific Activity designations (LSA-I, LSA-II, LSA-III, or Not LSA). Again, if the material is calculated to be radioactive, the summation calculation is then used to determine which activity category the material will meet (Limited Quantity, Type A, Type B, or Highway Route Controlled Quantity)
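
    A stripped-down sketch of the two calculations described (the threshold and per-nuclide limits below are placeholders for illustration, not actual 49 CFR values):

    # Placeholder values for illustration only; real classifications must use 49 CFR data.
    EXEMPT_SPECIFIC_ACTIVITY_BQ_PER_G = 70.0
    A2_LIMIT_TBQ = {"Cs-137": 0.6, "Co-60": 0.4}

    def classify(mass_g, activities_bq):
        """activities_bq maps nuclide name -> activity in Bq."""
        specific_activity = sum(activities_bq.values()) / mass_g
        exceeds_threshold = specific_activity > EXEMPT_SPECIFIC_ACTIVITY_BQ_PER_G
        # summation calculation: each nuclide's activity as a fraction of its A2 limit
        a2_fraction_sum = sum(
            (act / 1.0e12) / A2_LIMIT_TBQ[nuc] for nuc, act in activities_bq.items()
        )
        return specific_activity, exceeds_threshold, a2_fraction_sum

    print(classify(500.0, {"Cs-137": 2.0e7, "Co-60": 5.0e6}))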

  8. Forming conjectures within a spreadsheet environment

    Science.gov (United States)

    Calder, Nigel; Brown, Tony; Hanley, Una; Darby, Susan

    2006-12-01

    This paper is concerned with the use of spreadsheets within mathematical investigational tasks. Considering the learning of both children and pre-service teaching students, it examines how mathematical phenomena can be seen as a function of the pedagogical media through which they are encountered. In particular, it shows how pedagogical apparatus influence patterns of social interaction, and how this interaction shapes the mathematical ideas that are engaged with. Notions of conjecture, along with the particular faculty of the spreadsheet setting, are considered with regard to the facilitation of mathematical thinking. Employing an interpretive perspective, a key focus is on how alternative pedagogical media and associated discursive networks influence the way that students form and test informal conjectures.

  9. Using spreadsheet modelling to teach about feedback in physics

    Science.gov (United States)

    Lingard, Michael

    2003-09-01

    This article looks generally at spreadsheet modelling of feedback situations. It has several benefits as a teaching tool. Additionally, a consideration of the limitations of calculating at many discrete points can lead, at A-level, to an appreciation of the need for the calculus. Feedback situations can be used to introduce the idea of differential equations. Microsoft Excel™ is the spreadsheet used.
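
    A minimal sketch of the row-by-row feedback calculation described (illustrative only; the article's own examples are not reproduced here), using decay of the form dN/dt = -kN stepped forward at discrete points:

    import math

    def simulate_decay(n0, k, dt, steps):
        """Each step feeds back into the next: N(t + dt) = N(t) - k * N(t) * dt."""
        values = [n0]
        for _ in range(steps):
            values.append(values[-1] - k * values[-1] * dt)
        return values

    # Coarse vs fine step sizes show why the discrete approximation motivates the calculus
    print(simulate_decay(1000.0, 0.5, 1.0, 4)[-1])      # dt = 1.0 s
    print(simulate_decay(1000.0, 0.5, 0.01, 400)[-1])   # dt = 0.01 s
    print(1000.0 * math.exp(-0.5 * 4.0))                # exact value N0 * exp(-k * t)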

  10. Use of Wingz spreadsheet as an interface to total-system performance assessment

    International Nuclear Information System (INIS)

    Chambers, W.F.; Treadway, A.H.

    1992-01-01

    A commercial spreadsheet has been used as an interface to a set of simple models to simulate possible nominal flow and failure scenarios at the potential high-level nuclear waste repository at Yucca Mountain, Nevada. Individual models coded in FORTRAN are linked to the spreadsheet. Complementary cumulative probability distribution functions resulting from the models are plotted through scripts associated with the spreadsheet. All codes are maintained under a source code control system for quality assurance. The spreadsheet and the simple models can be run on workstations, PCs, and Macintoshes. The software system is designed so that the FORTRAN codes can be run on several machines if a network environment is available

  11. Excel Spreadsheets for Algebra: Improving Mental Modeling for Problem Solving

    Science.gov (United States)

    Engerman, Jason; Rusek, Matthew; Clariana, Roy

    2014-01-01

    This experiment investigates the effectiveness of Excel spreadsheets in a high school algebra class. Students in the experiment group convincingly outperformed the control group on a post-lesson assessment. The student responses and teacher observations involving the Excel spreadsheet revealed that it operated as a mindtool, which formed the users'…

  12. Spreadsheet Error Detection: an Empirical Examination in the Context of Greece

    Directory of Open Access Journals (Sweden)

    Dimitrios Maditinos

    2012-06-01

    Full Text Available The personal computer era made advanced programming tasks available to end users. Spreadsheet models are one of the most widely used applications that can produce valuable results with minimal training and effort. However, errors contained in most spreadsheets may be catastrophic and difficult to detect. This study attempts to investigate the influence of experience and spreadsheet presentation on the error-finding performance of end users. To reach the target of the study, 216 business and finance students participated in a task of finding errors in a simple free cash flow model. The findings of the study reveal that presentation of the spreadsheet is of major importance as far as error-finding performance is concerned, while experience does not seem to affect students on their performance. Further research proposals and limitations of the study are, moreover, discussed.

  13. Simple Functions Spreadsheet tool presentation

    International Nuclear Information System (INIS)

    Grive, Mireia; Domenech, Cristina; Montoya, Vanessa; Garcia, David; Duro, Lara

    2010-09-01

    This document is a guide for users of the Simple Functions Spreadsheet tool. The Simple Functions Spreadsheet tool has been developed by Amphos 21 to determine the solubility limits of some radionuclides, and it has been especially designed for Performance Assessment exercises. The development of this tool was prompted by the need expressed by SKB for a reliable, easy-to-handle tool to calculate solubility limits in an agile and relatively fast manner. Its development started in 2005 and since then it has been improved until the current version. This document describes the preliminary study, based on expert criteria, that was used to select the simplified aqueous speciation and solid phase system included in the tool. This report also gives the basic instructions to use this tool and to interpret its results. Finally, this document also reports the different validation tests and sensitivity analyses that have been done during the verification process.

  14. Automated extraction of radiation dose information from CT dose report images.

    Science.gov (United States)

    Li, Xinhua; Zhang, Da; Liu, Bob

    2011-06-01

    The purpose of this article is to describe the development of an automated tool for retrieving texts from CT dose report images. Optical character recognition was adopted to perform text recognition of CT dose report images. The developed tool is able to automate the process of analyzing multiple CT examinations, including text recognition, parsing, error correction, and exporting data to spreadsheets. The results were precise for total dose-length product (DLP) and were about 95% accurate for CT dose index and DLP of scanned series.
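
    A minimal sketch of the described pipeline, assuming the Pillow and pytesseract libraries and a hypothetical report layout; the original tool's parsing rules and error correction are not reproduced here:

    import re
    from PIL import Image      # assumes Pillow is installed
    import pytesseract         # assumes the Tesseract OCR engine is installed

    def extract_dlp(image_path):
        """OCR a CT dose report image and pull out DLP values (mGy*cm)."""
        text = pytesseract.image_to_string(Image.open(image_path))
        # Hypothetical pattern: lines such as "Total DLP 123.45"
        return [float(m) for m in re.findall(r"DLP\s+([0-9]+(?:\.[0-9]+)?)", text)]

    # Example usage (the path is illustrative only):
    # print(extract_dlp("ct_dose_report.png"))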

  15. Service discovery at home

    NARCIS (Netherlands)

    Sundramoorthy, V.; Scholten, Johan; Jansen, P.G.; Hartel, Pieter H.

    2003-01-01

    Service discovery is a fairly new field that kicked off since the advent of ubiquitous computing and has been found essential in the making of intelligent networks by implementing automated discovery and remote control between devices. This paper provides an overview and comparison of several

  16. Development of Spreadsheet-Based Integrated Transaction Processing Systems and Financial Reporting Systems

    Science.gov (United States)

    Ariana, I. M.; Bagiada, I. M.

    2018-01-01

    Development of spreadsheet-based integrated transaction processing systems and financial reporting systems is intended to optimize the capabilities of spreadsheets in accounting data processing. The purposes of this study are: 1) to describe the spreadsheet-based integrated transaction processing systems and financial reporting systems; and 2) to test their technical and operational feasibility. This study type is research and development. The main steps of the study are: 1) needs analysis (need assessment); 2) developing spreadsheet-based integrated transaction processing systems and financial reporting systems; and 3) testing the feasibility of spreadsheet-based integrated transaction processing systems and financial reporting systems. The technical feasibility includes the ability of hardware and operating systems to respond to the accounting application, simplicity, and ease of use. Operational feasibility includes the ability of users to use the accounting applications, the ability of the accounting applications to produce information, and the controls of the accounting applications. The instrument used to assess the technical and operational feasibility of the systems is the expert perception questionnaire. The instrument uses a 4-point Likert scale, from 1 (strongly disagree) to 4 (strongly agree). Data were analyzed using percentage analysis by comparing the number of answers within one (1) item with the number of ideal answers within one (1) item. Spreadsheet-based integrated transaction processing systems and financial reporting systems integrate sales, purchases, and cash transaction processing systems to produce financial reports (statement of profit or loss and other comprehensive income, statement of changes in equity, statement of financial position, and statement of cash flows) and other reports. Spreadsheet-based integrated transaction processing systems and financial reporting systems are feasible from the technical aspect (87.50%) and the operational aspect (84.17%).

  17. Service Discovery At Home

    NARCIS (Netherlands)

    Sundramoorthy, V.; Scholten, Johan; Jansen, P.G.; Hartel, Pieter H.

    Service discovery is a fairly new field that kicked off since the advent of ubiquitous computing and has been found essential in the making of intelligent networks by implementing automated discovery and remote control between devices. This paper provides an overview and comparison of several prominent

  18. Applying the CobiT Control Framework to Spreadsheet Developments

    OpenAIRE

    Butler, Raymond J.

    2008-01-01

    One of the problems reported by researchers and auditors in the field of spreadsheet risks is that of getting and keeping management's attention to the problem. Since 1996, the Information Systems Audit & Control Foundation and the IT Governance Institute have published CobiT, which brings mainstream IT control issues into the corporate governance arena. This paper illustrates how spreadsheet risk and control issues can be mapped onto the CobiT framework and thus brought to managers' attention i...

  19. SV-AUTOPILOT: optimized, automated construction of structural variation discovery and benchmarking pipelines.

    Science.gov (United States)

    Leung, Wai Yi; Marschall, Tobias; Paudel, Yogesh; Falquet, Laurent; Mei, Hailiang; Schönhuth, Alexander; Maoz Moss, Tiffanie Yael

    2015-03-25

    Many tools exist to predict structural variants (SVs), utilizing a variety of algorithms. However, they have largely been developed and tested on human germline or somatic (e.g. cancer) variation. It seems appropriate to exploit this wealth of technology, developed for humans, for other species as well. Objectives of this work included: a) Creating an automated, standardized pipeline for SV prediction. b) Identifying the best tool(s) for SV prediction through benchmarking. c) Providing a statistically sound method for merging SV calls. The SV-AUTOPILOT meta-tool platform is an automated pipeline for standardization of SV prediction and SV tool development in paired-end next-generation sequencing (NGS) analysis. SV-AUTOPILOT comes in the form of a virtual machine, which includes all datasets, tools and algorithms presented here. The virtual machine easily allows one to add, replace and update genomes, SV callers and post-processing routines and therefore provides an easy, out-of-the-box environment for complex SV discovery tasks. SV-AUTOPILOT was used to make a direct comparison between 7 popular SV tools on the Arabidopsis thaliana genome using the Landsberg (Ler) ecotype as a standardized dataset. Recall and precision measurements suggest that Pindel and Clever were the most adaptable to this dataset across all size ranges, while Delly performed well for SVs larger than 250 nucleotides. A novel, statistically-sound merging process, which can control the false discovery rate, reduced the false positive rate on the Arabidopsis benchmark dataset used here by >60%. SV-AUTOPILOT provides a meta-tool platform for future SV tool development and the benchmarking of tools on other genomes using a standardized pipeline. It optimizes detection of SVs in non-human genomes using statistically robust merging. The benchmarking in this study has demonstrated the power of 7 different SV tools for analyzing different size classes and types of structural variants. The optional merge

  20. Spreadsheet based analysis of Mössbauer spectra

    Energy Technology Data Exchange (ETDEWEB)

    Gunnlaugsson, H. P., E-mail: haraldur.p.gunnlaugsson@cern.ch [CERN, PH Div (Switzerland)

    2016-12-15

    Using spreadsheet programs to analyse spectral data opens up new possibilities in data analysis. The spreadsheet program contains all the functionality needed for graphical support, fitting and post processing of the results. Unconventional restrictions between fitting parameters can be set up freely, and simultaneous analysis, i.e. analysis of many spectra simultaneously in terms of model parameters, is straightforward. The free program package Vinda – used for analysing Mössbauer spectra – is described. The package contains support for reading data, calibration, and common functions of particular importance for Mössbauer spectroscopy (f-factors, second order Doppler shift etc.). Methods to create spectral series and support for error analysis are included. Different types of fitting models are included, ranging from simple Lorentzian models to complex distribution models.
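
    The simplest model mentioned, a single Lorentzian line, can be illustrated outside the spreadsheet as well; the sketch below is an assumption using scipy, not the Vinda package itself:

    import numpy as np
    from scipy.optimize import curve_fit

    def lorentzian(v, baseline, amplitude, centre, fwhm):
        """Single absorption line: a flat baseline minus a Lorentzian dip."""
        return baseline - amplitude * (fwhm / 2) ** 2 / ((v - centre) ** 2 + (fwhm / 2) ** 2)

    # Synthetic spectrum: velocity axis in mm/s with a dip at 0.3 mm/s plus counting noise
    velocity = np.linspace(-4, 4, 256)
    counts = lorentzian(velocity, 1.0e5, 5.0e3, 0.3, 0.25)
    counts = counts + np.random.default_rng(0).normal(0, 50, velocity.size)

    popt, _ = curve_fit(lorentzian, velocity, counts, p0=[1.0e5, 4.0e3, 0.0, 0.3])
    print("centre = %.3f mm/s, fwhm = %.3f mm/s" % (popt[2], popt[3]))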

  1. Automated DBS microsampling, microscale automation and microflow LC-MS for therapeutic protein PK.

    Science.gov (United States)

    Zhang, Qian; Tomazela, Daniela; Vasicek, Lisa A; Spellman, Daniel S; Beaumont, Maribel; Shyong, BaoJen; Kenny, Jacqueline; Fauty, Scott; Fillgrove, Kerry; Harrelson, Jane; Bateman, Kevin P

    2016-04-01

    Reduce animal usage for discovery-stage PK studies for biologics programs using microsampling-based approaches and microscale LC-MS. We report the development of an automated DBS-based serial microsampling approach for studying the PK of therapeutic proteins in mice. Automated sample preparation and microflow LC-MS were used to enable assay miniaturization and improve overall assay throughput. Serial sampling of mice was possible over the full 21-day study period with the first six time points over 24 h being collected using automated DBS sample collection. Overall, this approach demonstrated comparable data to a previous study using single mice per time point liquid samples while reducing animal and compound requirements by 14-fold. Reduction in animals and drug material is enabled by the use of automated serial DBS microsampling for mice studies in discovery-stage studies of protein therapeutics.

  2. The FAO/IAEA interactive spreadsheet for design and operation of insect mass rearing facilities

    International Nuclear Information System (INIS)

    Caceres, Carlos; Rendon, Pedro

    2006-01-01

    An electronic spreadsheet is described which helps users to design, equip and operate facilities for the mass rearing of insects for use in insect pest control programmes integrating the sterile insect technique. The spreadsheet was designed based on experience accumulated in the mass rearing of the Mediterranean fruit fly, Ceratitis capitata (Wiedemann), using genetic sexing strains based on a temperature sensitive lethal (tsl) mutation. The spreadsheet takes into account the biological, production, and quality control parameters of the species to be mass reared, as well as the diets and equipment required. All this information is incorporated into the spreadsheet for user-friendly calculation of the main components involved in facility design and operation. Outputs of the spreadsheet include size of the different rearing areas, rearing equipment, volumes of diet ingredients, other consumables, as well as personnel requirements. By adding cost factors to these components, the spreadsheet can estimate the costs of facility construction, equipment, and operation. All the output parameters can be easily generated by simply entering the target number of sterile insects required per week. For other insect species, the biological and production characteristics need to be defined and inputted accordingly to obtain outputs relevant to these species. This spreadsheet, available under http://www-naweb.iaea.org/nafa/ipc/index.html, is a powerful tool for project and facility managers as it can be used to estimate facility cost, production cost, and production projections under different rearing efficiency scenarios. (author)

  3. The FAO/IAEA interactive spreadsheet for design and operation of insect mass rearing facilities

    Energy Technology Data Exchange (ETDEWEB)

    Caceres, Carlos, E-mail: carlos.e.caceres@aphis.usda.co [International Atomic Energy Agency (IAEA), Seibersdorf (Austria). Agency's Labs. Programme of Nuclear Techniques in Food and Agriculture; Rendon, Pedro [U.S. Department of Agriculture (USDA/APHIS/CPHST), Guatemala City (Guatemala). Animal and Plant Health Inspection. Center for Plant Health Science and Technology

    2006-07-01

    An electronic spreadsheet is described which helps users to design, equip and operate facilities for the mass rearing of insects for use in insect pest control programmes integrating the sterile insect technique. The spreadsheet was designed based on experience accumulated in the mass rearing of the Mediterranean fruit fly, Ceratitis capitata (Wiedemann), using genetic sexing strains based on a temperature sensitive lethal (tsl) mutation. The spreadsheet takes into account the biological, production, and quality control parameters of the species to be mass reared, as well as the diets and equipment required. All this information is incorporated into the spreadsheet for user-friendly calculation of the main components involved in facility design and operation. Outputs of the spreadsheet include size of the different rearing areas, rearing equipment, volumes of diet ingredients, other consumables, as well as personnel requirements. By adding cost factors to these components, the spreadsheet can estimate the costs of facility construction, equipment, and operation. All the output parameters can be easily generated by simply entering the target number of sterile insects required per week. For other insect species, the biological and production characteristics need to be defined and inputted accordingly to obtain outputs relevant to these species. This spreadsheet, available under http://www-naweb.iaea.org/nafa/ipc/index.html, is a powerful tool for project and facility managers as it can be used to estimate facility cost, production cost, and production projections under different rearing efficiency scenarios. (author)

  4. Electronic spreadsheet vs. manual payroll.

    Science.gov (United States)

    Kiley, M M

    1991-01-01

    Medical groups with direct employees must employ someone or contract with a company to compute payroll, writes Michael Kiley, Ph.D., M.P.H. However, many medical groups, including small ones, own a personal or minicomputer to handle accounts receivable. Kiley explains, in detail, how this same computer and a spreadsheet program also can be used to perform payroll functions.

  5. Information Spreadsheet for Engines and Vehicles Compliance Information System (EV-CIS) User Registration

    Science.gov (United States)

    In this spreadsheet, user(s) provide their company’s manufacturer code, user contact information for EV-CIS, and user roles. This spreadsheet is used for the Company Authorizing Official (CAO), CROMERR Signer, and EV-CIS Submitters.

  6. 18 excel spreadsheets by species and year giving reproduction and growth data. One excel spreadsheet of herbicide treatment chemistry.

    Data.gov (United States)

    U.S. Environmental Protection Agency — Excel spreadsheets by species (4 letter code is abbreviation for genus and species used in study, year 2010 or 2011 is year data collected, SH indicates data for...

  7. A spreadsheet-coupled SOLGAS: A computerized thermodynamic equilibrium calculation tool. Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    Trowbridge, L.D.; Leitnaker, J.M. [Oak Ridge K-25 Site, TN (United States). Technical Analysis and Operations Div.

    1995-07-01

    SOLGAS, an early computer program for calculating equilibrium in a chemical system, has been made more user-friendly, and several "bells and whistles" have been added. The necessity to include elemental species has been eliminated. The input of large numbers of starting conditions has been automated. A revised spreadsheet-based format for entering data, including non-ideal binary and ternary mixtures, simplifies and reduces chances for error. Calculational errors by SOLGAS are flagged, and several programming errors are corrected. Auxiliary programs are available to assemble and partially automate plotting of large amounts of data. Thermodynamic input data can be changed on line. The program can be operated with or without a co-processor. Copies of the program, suitable for the IBM-PC or compatibles with at least 384 bytes of low RAM, are available from the authors. This user manual contains appendices with examples of the use of SOLGAS. These range from elementary examples, such as the relationships among water, ice, and water vapor, to more complex systems: phase diagram calculation of the UF4 and UF6 system; burning UF4 in fluorine; thermodynamic calculation of the Cl-F-O-H system; equilibria calculations in the CCl4-CH3OH system; and limitations applicable to aqueous solutions. An appendix also contains the source code.

  8. When Spreadsheets Become Software - Quality Control Challenges and Approaches - 13360

    International Nuclear Information System (INIS)

    Fountain, Stefanie A.; Chen, Emmie G.; Beech, John F.; Wyatt, Elizabeth E.; Quinn, Tanya B.; Seifert, Robert W.; Bonczek, Richard R.

    2013-01-01

    As part of a preliminary waste acceptance criteria (PWAC) development, several commercial models were employed, including the Hydrologic Evaluation of Landfill Performance model (HELP) [1], the Disposal Unit Source Term - Multiple Species model (DUSTMS) [2], and the Analytical Transient One, Two, and Three-Dimensional model (AT123D) [3]. The results of these models were post-processed in MS Excel spreadsheets to convert the model results to alternate units, compare the groundwater concentrations to the groundwater concentration thresholds, and then to adjust the waste contaminant masses (based on average concentration over the waste volume) as needed in an attempt to achieve groundwater concentrations at the limiting point of assessment that would meet the compliance concentrations while maximizing the potential use of the landfill (i.e., maximizing the volume of projected waste being generated that could be placed in the landfill). During the course of the PWAC calculation development, one of the Microsoft (MS) Excel spreadsheets used to post-process the results of the commercial model packages grew to include more than 575,000 formulas across 18 worksheets. This spreadsheet was used to assess six base scenarios as well as nine uncertainty/sensitivity scenarios. The complexity of the spreadsheet resulted in the need for a rigorous quality control (QC) procedure to verify data entry and confirm the accuracy of formulas. (authors)

  9. When Spreadsheets Become Software - Quality Control Challenges and Approaches - 13360

    Energy Technology Data Exchange (ETDEWEB)

    Fountain, Stefanie A.; Chen, Emmie G.; Beech, John F. [Geosyntec Consultants, Inc., 1255 Roberts Boulevard NW, Suite 200, Kennesaw, GA 30144 (United States); Wyatt, Elizabeth E. [LATA Environmental Services of Kentucky, LLC, 761 Veterans Ave, Kevil, KY 42053 (United States); Quinn, Tanya B. [Geosyntec Consultants, Inc., 2002 Summit Boulevard NE, Suite 885, Atlanta, GA 30319 (United States); Seifert, Robert W. [Portsmouth/Paducah Project Office, United States Department of Energy, 5600 Hobbs Rd, Kevil, KY 42053 (United States); Bonczek, Richard R. [Portsmouth/Paducah Project Office, United States Department of Energy, 1017 Majestic Drive, Lexington, KY 40513 (United States)

    2013-07-01

    As part of a preliminary waste acceptance criteria (PWAC) development, several commercial models were employed, including the Hydrologic Evaluation of Landfill Performance model (HELP) [1], the Disposal Unit Source Term - Multiple Species model (DUSTMS) [2], and the Analytical Transient One, Two, and Three-Dimensional model (AT123D) [3]. The results of these models were post-processed in MS Excel spreadsheets to convert the model results to alternate units, compare the groundwater concentrations to the groundwater concentration thresholds, and then to adjust the waste contaminant masses (based on average concentration over the waste volume) as needed in an attempt to achieve groundwater concentrations at the limiting point of assessment that would meet the compliance concentrations while maximizing the potential use of the landfill (i.e., maximizing the volume of projected waste being generated that could be placed in the landfill). During the course of the PWAC calculation development, one of the Microsoft (MS) Excel spreadsheets used to post-process the results of the commercial model packages grew to include more than 575,000 formulas across 18 worksheets. This spreadsheet was used to assess six base scenarios as well as nine uncertainty/sensitivity scenarios. The complexity of the spreadsheet resulted in the need for a rigorous quality control (QC) procedure to verify data entry and confirm the accuracy of formulas. (authors)

  10. A fully automated primary screening system for the discovery of therapeutic antibodies directly from B cells.

    Science.gov (United States)

    Tickle, Simon; Howells, Louise; O'Dowd, Victoria; Starkie, Dale; Whale, Kevin; Saunders, Mark; Lee, David; Lightwood, Daniel

    2015-04-01

    For a therapeutic antibody to succeed, it must meet a range of potency, stability, and specificity criteria. Many of these characteristics are conferred by the amino acid sequence of the heavy and light chain variable regions and, for this reason, can be screened for during antibody selection. However, it is important to consider that antibodies satisfying all these criteria may be of low frequency in an immunized animal; for this reason, it is essential to have a mechanism that allows for efficient sampling of the immune repertoire. UCB's core antibody discovery platform combines high-throughput B cell culture screening and the identification and isolation of single, antigen-specific IgG-secreting B cells through a proprietary technique called the "fluorescent foci" method. Using state-of-the-art automation to facilitate primary screening, extremely efficient interrogation of the natural antibody repertoire is made possible; more than 1 billion immune B cells can now be screened to provide a useful starting point from which to identify the rare therapeutic antibody. This article will describe the design, construction, and commissioning of a bespoke automated screening platform and two examples of how it was used to screen for antibodies against two targets. © 2014 Society for Laboratory Automation and Screening.

  11. Spreadsheets, Graphing Calculators and the Line of Best Fit

    Directory of Open Access Journals (Sweden)

    Bernie O'Sullivan

    2003-07-01

    One technique that can now be done, almost mindlessly, is the line of best fit. Both the graphing calculator and the Excel spreadsheet produce models for collected data that appear to be very good fits, but upon closer scrutiny, are revealed to be quite poor. This article will examine one such case. I will couch the paper within the framework of a very good classroom investigation that will help generate students’ understanding of the basic principles of curve fitting and will enable them to produce a very accurate model of collected data by combining the technology of the graphing calculator and the spreadsheet.
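
    The least-squares calculation that both tools carry out behind the scenes can be reproduced directly; the data below are illustrative, not the article's classroom data:

    import numpy as np

    # Collected (x, y) data; values are illustrative only
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

    slope, intercept = np.polyfit(x, y, 1)        # least-squares line of best fit
    residuals = y - (slope * x + intercept)
    r_squared = 1.0 - residuals.var() / y.var()

    print(f"y = {slope:.3f}x + {intercept:.3f}, R^2 = {r_squared:.4f}")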

  12. Towards tool support for spreadsheet-based domain-specific languages

    DEFF Research Database (Denmark)

    Adam, Marian Sorin; Schultz, Ulrik Pagh

    2015-01-01

    Spreadsheets are commonly used by non-programmers to store data in a structured form; this data can in some cases be considered to be a program in a domain-specific language (DSL). Unlike ordinary text-based domain-specific languages, there is, however, currently no formalism for expressing the syntax of such spreadsheet-based DSLs (SDSLs), and there is no tool support for automatically generating language infrastructure such as parsers and IDE support. In this paper we define a simple notion of two-dimensional grammars for SDSLs, and show how such grammars can be used for automatically...
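
    A toy illustration of the idea (entirely hypothetical; it is not the authors' two-dimensional grammar formalism): a spreadsheet region treated as a program whose "grammar" is a fixed header row followed by typed data rows.

    # Hypothetical sketch: the expected column layout acts as a tiny two-dimensional grammar.
    GRAMMAR = {"name": str, "count": int, "rate": float}   # column -> expected type

    def parse_region(rows):
        """rows: list of lists of cell strings; the first row is the header."""
        header = rows[0]
        if header != list(GRAMMAR):
            raise ValueError(f"unexpected header {header}")
        records = []
        for row in rows[1:]:
            # a failed conversion plays the role of a syntax error in the SDSL
            records.append({col: GRAMMAR[col](cell) for col, cell in zip(header, row)})
        return records

    print(parse_region([["name", "count", "rate"], ["pump", "3", "0.75"], ["valve", "7", "1.5"]]))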

  13. A Comparative Study of Spreadsheet Applications on Mobile Devices

    Directory of Open Access Journals (Sweden)

    Veera V. S. M. Chintapalli

    2016-01-01

    Full Text Available Advances in mobile screen sizes and feature enhancement for mobile applications have increased the number of users accessing spreadsheets on mobile devices. This paper reports a comparative usability study on four popular mobile spreadsheet applications: OfficeSuite Viewer 6, Documents To Go, ThinkFree Online, and Google Drive. We compare them against three categories of usability criteria: visibility; navigation, scrolling, and feedback; and interaction, satisfaction, simplicity, and convenience. Measures for each criterion were derived in a survey. Questionnaires were designed to address the measures based on the comparative criteria provided in the analysis.

  14. Discovery informatics in biological and biomedical sciences: research challenges and opportunities.

    Science.gov (United States)

    Honavar, Vasant

    2015-01-01

    New discoveries in biological, biomedical and health sciences are increasingly being driven by our ability to acquire, share, integrate and analyze, and construct and simulate predictive models of biological systems. While much attention has focused on automating routine aspects of management and analysis of "big data", realizing the full potential of "big data" to accelerate discovery calls for automating many other aspects of the scientific process that have so far largely resisted automation: identifying gaps in the current state of knowledge; generating and prioritizing questions; designing studies; designing, prioritizing, planning, and executing experiments; interpreting results; forming hypotheses; drawing conclusions; replicating studies; validating claims; documenting studies; communicating results; reviewing results; and integrating results into the larger body of knowledge in a discipline. Against this background, the PSB workshop on Discovery Informatics in Biological and Biomedical Sciences explores the opportunities and challenges of automating discovery or assisting humans in discovery through advances in (i) understanding, formalizing, and developing information-processing accounts of the entire scientific process; (ii) the design, development, and evaluation of computational artifacts (representations, processes) that embody such understanding; and (iii) the application of the resulting artifacts and systems to advance science (by augmenting individual or collective human efforts, or by fully automating science).

  15. Application of magnetic sensors in automation control

    Energy Technology Data Exchange (ETDEWEB)

    Hou Chunhong [AMETEK Inc., Paoli, PA 19301 (United States); Qian Zhenghong, E-mail: zqian@hdu.edu.cn [Center For Integrated Spintronic Devices (CISD), Hangzhou Dianzi University, Hangzhou, ZJ 310018 (China)

    2011-01-01

    Controls in automation need speed and position feedback. The feedback device is often referred to as an encoder. Feedback technology includes mechanical, optical, and magnetic approaches, among others. All advance with new inventions and discoveries. Magnetic sensing as a feedback technology offers certain advantages over other technologies such as optical ones. With new discoveries like GMR (Giant Magneto-Resistance) and TMR (Tunneling Magneto-Resistance) becoming feasible for commercialization, more and more applications will be using advanced magnetic sensors in automation. This paper offers a general review of encoders and of applications of magnetic sensors in automation control.

  16. A user friendly spreadsheet program for calibration using weighted regression. User's Guide

    NARCIS (Netherlands)

    Gort SM; Hoogerbrugge R; LOC

    1995-01-01

    A user-friendly computer spreadsheet for calibration purposes is described. This spreadsheet (developed using Microsoft Excel) enables non-statisticians, such as analytical chemists, to apply weighted linear regression. Different calibration functions and
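
    A minimal sketch of weighted linear regression as such a spreadsheet might perform it (illustrative only, not the tool described above): weights proportional to 1/s down-weight the noisier calibration points.

    import numpy as np

    # Calibration points: concentration x, instrument response y, per-point standard deviation s
    x = np.array([0.0, 1.0, 2.0, 5.0, 10.0])
    y = np.array([0.02, 1.05, 1.98, 5.10, 9.90])
    s = np.array([0.02, 0.05, 0.05, 0.10, 0.20])

    # np.polyfit minimises sum(w**2 * (y - fit)**2), so pass w = 1/s for 1/s**2 weighting
    slope, intercept = np.polyfit(x, y, 1, w=1.0 / s)
    print(f"calibration line: y = {slope:.4f} x + {intercept:.4f}")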

  17. Development of an Excel spreadsheet for mean glandular dose in mammography

    International Nuclear Information System (INIS)

    Nagoshi, Kazuyo; Fujisaki, Tatsuya

    2008-01-01

    The purpose of this study was to develop an Excel spreadsheet to calculate mean glandular dose (Dg) in mammography using clinical exposure data. Dg can be calculated as the product of incident air kerma (Ka) and DgN (i.e., Dg = Ka x DgN). According to the method of Klein et al (Phys Med Biol 1997; 42: 651-671), Ka was measured at the entrance surface with an ionization dosimeter. Normalized glandular dose (DgN) coefficients, taking into account breast glandularity, were computed using Boone's method (Med Phys 2002; 29: 869-875). DgN coefficients can be calculated for any arbitrary X-ray spectrum. These calculation procedures were input into a Microsoft Excel spreadsheet. The resulting Excel spreadsheet is easy to use and is always applicable in the field of mammography. The exposure conditions concerning Dg in clinical practice were also investigated in 22 women. Four exposure conditions (target/filter combination and tube voltage) were automatically selected in this study. This investigation found that the average Dg for each exposure was 1.9 mGy. Because it is recommended that quality control of radiation dose management in mammography is done using an American College of Radiology (ACR) phantom, information about patient dose is not obtained in many facilities. The present Excel spreadsheet was accordingly considered useful for optimization of exposure conditions and explanation of mammography to patients. (author)
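
    The product described, Dg = Ka x DgN, is itself a one-line calculation; the numbers below are illustrative only and are not taken from Boone's tables:

    def mean_glandular_dose(incident_air_kerma_mgy, dgn_coefficient):
        """Dg = Ka * DgN, where DgN depends on the X-ray spectrum and breast glandularity."""
        return incident_air_kerma_mgy * dgn_coefficient

    # Illustrative values only: 8 mGy incident air kerma and DgN = 0.22 mGy/mGy
    print(mean_glandular_dose(8.0, 0.22))   # 1.76 mGy, close to the 1.9 mGy average reported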

  18. SU-F-BRB-16: A Spreadsheet Based Automatic Trajectory GEnerator (SAGE): An Open Source Tool for Automatic Creation of TrueBeam Developer Mode Robotic Trajectories

    International Nuclear Information System (INIS)

    Etmektzoglou, A; Mishra, P; Svatos, M

    2015-01-01

    Purpose: To automate creation and delivery of robotic linac trajectories with TrueBeam Developer Mode, an open source spreadsheet-based trajectory generation tool has been developed, tested and made freely available. The computing power inherent in a spreadsheet environment plus additional functions programmed into the tool insulate users from the underlying schema tedium and allow easy calculation, parameterization, graphical visualization, validation and finally automatic generation of Developer Mode XML scripts which are directly loadable on a TrueBeam linac. Methods: The robotic control system platform that allows total coordination of potentially all linac moving axes with beam (continuous, step-and-shoot, or combination thereof) becomes available in TrueBeam Developer Mode. Many complex trajectories are either geometric or can be described in analytical form, making the computational power, graphing and programmability available in a spreadsheet environment an easy and ideal vehicle for automatic trajectory generation. The spreadsheet environment allows also for parameterization of trajectories thus enabling the creation of entire families of trajectories using only a few variables. Standard spreadsheet functionality has been extended for powerful movie-like dynamic graphic visualization of the gantry, table, MLC, room, lasers, 3D observer placement and beam centerline all as a function of MU or time, for analysis of the motions before requiring actual linac time. Results: We used the tool to generate and deliver extended SAD “virtual isocenter” trajectories of various shapes such as parameterized circles and ellipses. We also demonstrated use of the tool in generating linac couch motions that simulate respiratory motion using analytical parameterized functions. Conclusion: The SAGE tool is a valuable resource to experiment with families of complex geometric trajectories for a TrueBeam Linac. It makes Developer Mode more accessible as a vehicle to quickly

  19. SU-F-BRB-16: A Spreadsheet Based Automatic Trajectory GEnerator (SAGE): An Open Source Tool for Automatic Creation of TrueBeam Developer Mode Robotic Trajectories

    Energy Technology Data Exchange (ETDEWEB)

    Etmektzoglou, A; Mishra, P; Svatos, M [Varian Medical Systems, Palo Alto, CA (United States)

    2015-06-15

    Purpose: To automate creation and delivery of robotic linac trajectories with TrueBeam Developer Mode, an open source spreadsheet-based trajectory generation tool has been developed, tested and made freely available. The computing power inherent in a spreadsheet environment plus additional functions programmed into the tool insulate users from the underlying schema tedium and allow easy calculation, parameterization, graphical visualization, validation and finally automatic generation of Developer Mode XML scripts which are directly loadable on a TrueBeam linac. Methods: The robotic control system platform that allows total coordination of potentially all linac moving axes with beam (continuous, step-and-shoot, or combination thereof) becomes available in TrueBeam Developer Mode. Many complex trajectories are either geometric or can be described in analytical form, making the computational power, graphing and programmability available in a spreadsheet environment an easy and ideal vehicle for automatic trajectory generation. The spreadsheet environment allows also for parameterization of trajectories thus enabling the creation of entire families of trajectories using only a few variables. Standard spreadsheet functionality has been extended for powerful movie-like dynamic graphic visualization of the gantry, table, MLC, room, lasers, 3D observer placement and beam centerline all as a function of MU or time, for analysis of the motions before requiring actual linac time. Results: We used the tool to generate and deliver extended SAD “virtual isocenter” trajectories of various shapes such as parameterized circles and ellipses. We also demonstrated use of the tool in generating linac couch motions that simulate respiratory motion using analytical parameterized functions. Conclusion: The SAGE tool is a valuable resource to experiment with families of complex geometric trajectories for a TrueBeam Linac. It makes Developer Mode more accessible as a vehicle to quickly

  20. Description of the Material Balance Model and Spreadsheet for Salt Dissolution

    International Nuclear Information System (INIS)

    Wiersma, B.J.

    1994-01-01

    The model employed to estimate the amount of inhibitors necessary for bearing water and dissolution water during the salt dissolution process is described. This model was implemented in a spreadsheet, which allowed many different case studies to be performed. This memo describes the assumptions and equations used in the model, and documents the input and output cells of the spreadsheet. Two case studies are shown as examples of how the model may be employed.

  1. The meaning of diagnostic test results: A spreadsheet for swift data analysis

    International Nuclear Information System (INIS)

    MacEneaney, Peter M.; Malone, Dermot E.

    2000-01-01

    AIMS: To design a spreadsheet program to: (a) analyse rapidly diagnostic test result data produced in local research or reported in the literature; (b) correct reported predictive values for disease prevalence in any population; (c) estimate the post-test probability of disease in individual patients. MATERIALS AND METHODS: Microsoft Excel TM was used. Section A: a contingency (2 x 2) table was incorporated into the spreadsheet. Formulae for standard calculations [sample size, disease prevalence, sensitivity and specificity with 95% confidence intervals, predictive values and likelihood ratios (LRs)] were linked to this table. The results change automatically when the data in the true or false negative and positive cells are changed. Section B: this estimates predictive values in any population, compensating for altered disease prevalence. Sections C-F: Bayes' theorem was incorporated to generate individual post-test probabilities. The spreadsheet generates 95% confidence intervals, LRs and a table and graph of conditional probabilities once the sensitivity and specificity of the test are entered. The latter shows the expected post-test probability of disease for any pre-test probability when a test of known sensitivity and specificity is positive or negative. RESULTS: This spreadsheet can be used on desktop and palmtop computers. The MS Excel TM version can be downloaded via the Internet from the URL ftp://radiography.com/pub/Rad-data99.xls CONCLUSION: A spreadsheet is useful for contingency table data analysis and assessment of the clinical meaning of diagnostic test results. MacEneaney, P.M., Malone, D.E. (2000)
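
    As a rough illustration of the calculation such a spreadsheet automates (a sketch, not the published workbook), the following Python converts a pre-test probability and a test's sensitivity/specificity into post-test probabilities via likelihood ratios; the example figures are arbitrary.

        # Illustrative sketch of the Bayes' theorem step: pre-test probability plus
        # sensitivity/specificity -> likelihood ratios -> post-test probability.

        def likelihood_ratios(sensitivity, specificity):
            lr_pos = sensitivity / (1.0 - specificity)    # LR for a positive result
            lr_neg = (1.0 - sensitivity) / specificity    # LR for a negative result
            return lr_pos, lr_neg

        def post_test_probability(pre_test_prob, lr):
            pre_odds = pre_test_prob / (1.0 - pre_test_prob)
            post_odds = pre_odds * lr
            return post_odds / (1.0 + post_odds)

        lr_pos, lr_neg = likelihood_ratios(sensitivity=0.90, specificity=0.85)
        print(round(post_test_probability(0.20, lr_pos), 3))   # if the test is positive
        print(round(post_test_probability(0.20, lr_neg), 3))   # if the test is negative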

  2. Automated discovery of safety and efficacy concerns for joint & muscle pain relief treatments from online reviews.

    Science.gov (United States)

    Adams, David Z; Gruss, Richard; Abrahams, Alan S

    2017-04-01

    Product issues can cost companies millions in lawsuits and have devastating effects on a firm's sales, image and goodwill, especially in the era of social media. The ability for a system to detect the presence of safety and efficacy (S&E) concerns early on could not only protect consumers from injuries due to safety hazards, but could also mitigate financial damage to the manufacturer. Prior studies in the field of automated defect discovery have found industry-specific techniques appropriate to the automotive, consumer electronics, home appliance, and toy industries, but have not investigated pain relief medicines and medical devices. In this study, we focus specifically on automated discovery of S&E concerns in over-the-counter (OTC) joint and muscle pain relief remedies and devices. We select a dataset of over 32,000 records for three categories of Joint & Muscle Pain Relief treatments from Amazon's online product reviews, and train "smoke word" dictionaries which we use to score holdout reviews, for the presence of safety and efficacy issues. We also score using conventional sentiment analysis techniques. Compared to traditional sentiment analysis techniques, we found that smoke term dictionaries were better suited to detect product concerns from online consumer reviews, and significantly outperformed the sentiment analysis techniques in uncovering both efficacy and safety concerns, across all product subcategories. Our research can be applied to the healthcare and pharmaceutical industry in order to detect safety and efficacy concerns, reducing risks that consumers face using these products. These findings can be highly beneficial to improving quality assurance and management in joint and muscle pain relief. Copyright © 2017 Elsevier B.V. All rights reserved.
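
    The scoring step lends itself to a very small sketch. The Python below is a hypothetical illustration of smoke-term scoring, not the dictionaries trained in the study; the terms, weights and reviews are invented.

        # Hypothetical smoke-term dictionary: term -> weight (illustrative only).
        SMOKE_TERMS = {"rash": 2.0, "burn": 2.5, "swelling": 1.5, "useless": 1.0, "worse": 1.0}

        def smoke_score(review_text):
            tokens = review_text.lower().split()
            return sum(SMOKE_TERMS.get(tok.strip(".,!?"), 0.0) for tok in tokens)

        reviews = ["Gave me a rash and made the pain worse!", "Works fine for my knee."]
        # Rank reviews so the highest-scoring (most concerning) come first.
        for text in sorted(reviews, key=smoke_score, reverse=True):
            print(round(smoke_score(text), 1), text)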

  3. Comparison of two spreadsheets for calculation of radiation exposure following hyperthyroidism treatment with iodine-131

    Energy Technology Data Exchange (ETDEWEB)

    Vrigneaud, J.M. [CHU Bichat, nuclear medicine department, 75 - Paris (France); Carlier, T. [CHU Hotel Dieu, nuclear medicine department, 44 - Nantes (France)

    2006-07-01

    Comparison of the two spreadsheets did not show any significant differences provided that proper biological models were used to follow iodine-131 clearance. This means that even simple assumptions can be used to give reasonable radiation safety recommendations. Nevertheless, a complete understanding of the formalism is required to use these spreadsheets correctly. Initial parameters must be chosen carefully and validation of the computed results must be done. Published guidelines are found to be in accordance with those derived from these spreadsheets. Furthermore, both programs make it possible to collect biological data from each patient and use it as input to calculate individually tailored radiation safety advice. Also, the measured exposure rate may be entered into the spreadsheets to calculate patient-specific close contact delays required to reduce the dose to specified limits. These spreadsheets may be used to compute restriction times for any given radiopharmaceutical, provided that input parameters are chosen correctly. They can be of great help to physicians to provide patients with guidance on how to maintain doses to other individuals as low as reasonably achievable. (authors)
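
    For orientation only, the sketch below shows the kind of close-contact restriction calculation such spreadsheets perform, assuming a single-exponential clearance with an effective half-life; it is a simplified stand-in for the published formalism and every input value is arbitrary.

        # Simplified model: dose rate decays as exp(-lambda*t); find the delay after
        # which the remaining integrated close-contact dose stays below a limit.
        import math

        def contact_delay_days(dose_rate_uSv_h, effective_half_life_days,
                               dose_limit_mSv, contact_hours_per_day=24.0):
            lam = math.log(2) / effective_half_life_days             # per day
            daily_dose_mSv = dose_rate_uSv_h * contact_hours_per_day / 1000.0
            remaining_mSv = daily_dose_mSv / lam                     # integral from day 0 to infinity
            if remaining_mSv <= dose_limit_mSv:
                return 0.0
            return math.log(remaining_mSv / dose_limit_mSv) / lam

        # e.g. 25 uSv/h at contact distance, 5.5-day effective half-life,
        # 1 mSv limit, 6 h of close contact per day
        print(round(contact_delay_days(25.0, 5.5, 1.0, contact_hours_per_day=6.0), 1))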

  4. Comparison of two spreadsheets for calculation of radiation exposure following hyperthyroidism treatment with iodine-131

    International Nuclear Information System (INIS)

    Vrigneaud, J.M.; Carlier, T.

    2006-01-01

    Comparison of the two spreadsheets did not show any significant differences provided that proper biological models were used to follow iodine-131 clearance. This means that even simple assumptions can be used to give reasonable radiation safety recommendations. Nevertheless, a complete understanding of the formalism is required to use these spreadsheets correctly. Initial parameters must be chosen carefully and validation of the computed results must be done. Published guidelines are found to be in accordance with those derived from these spreadsheets. Furthermore, both programs make it possible to collect biological data from each patient and use it as input to calculate individually tailored radiation safety advice. Also, the measured exposure rate may be entered into the spreadsheets to calculate patient-specific close contact delays required to reduce the dose to specified limits. These spreadsheets may be used to compute restriction times for any given radiopharmaceutical, provided that input parameters are chosen correctly. They can be of great help to physicians to provide patients with guidance on how to maintain doses to other individuals as low as reasonably achievable. (authors)

  5. Mass spectrometry for protein quantification in biomarker discovery.

    Science.gov (United States)

    Wang, Mu; You, Jinsam

    2012-01-01

    Major technological advances have made proteomics an extremely active field for biomarker discovery in recent years due primarily to the development of newer mass spectrometric technologies and the explosion in genomic and protein bioinformatics. This leads to an increased emphasis on larger scale, faster, and more efficient methods for detecting protein biomarkers in human tissues, cells, and biofluids. Most current proteomic methodologies for biomarker discovery, however, are not highly automated and are generally labor-intensive and expensive. More automation and improved software programs capable of handling a large amount of data are essential to reduce the cost of discovery and to increase throughput. In this chapter, we discuss and describe mass spectrometry-based proteomic methods for quantitative protein analysis.

  6. Introduction to supercritical fluids a spreadsheet-based approach

    CERN Document Server

    Smith, Richard; Peters, Cor

    2013-01-01

    This text provides an introduction to supercritical fluids with easy-to-use Excel spreadsheets suitable for both specialized-discipline (chemistry or chemical engineering student) and mixed-discipline (engineering/economic student) classes. Each chapter contains worked examples, tip boxes and end-of-the-chapter problems and projects. Part I covers web-based chemical information resources, applications and simplified theory presented in a way that allows students of all disciplines to delve into the properties of supercritical fluids and to design energy, extraction and materials formation systems for real-world processes that use supercritical water or supercritical carbon dioxide. Part II takes a practical approach and addresses the thermodynamic framework, equations of state, fluid phase equilibria, heat and mass transfer, chemical equilibria and reaction kinetics of supercritical fluids. Spreadsheets are arranged as Visual Basic for Applications (VBA) functions and macros that are completely (source code) ...

  7. Spreadsheet-Enhanced Problem Solving in Context as Modeling

    Directory of Open Access Journals (Sweden)

    Sergei Abramovich

    2003-07-01

    development through situated mathematical problem solving. Modeling activities described in this paper support the epistemological position regarding the interplay that exists between the development of mathematical concepts and available methods of calculation. The spreadsheet used is Microsoft Excel 2001

  8. A review of simple multiple criteria decision making analytic procedures which are implementable on spreadsheet packages

    Directory of Open Access Journals (Sweden)

    T.J. Stewart

    2003-12-01

    Full Text Available A number of modern multi-criteria decision making aids for the discrete choice problem, are reviewed, with particular emphasis on those which can be implemented on standard commercial spreadsheet packages. Three broad classes of procedures are discussed, namely the analytic hierarchy process, reference point methods, and outranking methods. The broad principles are summarised in a consistent framework, and on a spreadsheet. LOTUS spreadsheets implementing these are available from the author.
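
    As a flavour of how one of the reviewed procedures translates outside a spreadsheet, the Python sketch below derives analytic hierarchy process weights as the principal eigenvector of a pairwise comparison matrix; the 3 x 3 matrix is an arbitrary illustration, not an example from the paper.

        import numpy as np

        # Pairwise comparison matrix for three criteria (Saaty-style judgements).
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3., 1.0, 2.0],
                      [1/5., 1/2., 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)                       # principal eigenvalue
        w = np.abs(eigvecs[:, k].real)
        w = w / w.sum()                                   # normalised criterion weights
        ci = (eigvals[k].real - len(A)) / (len(A) - 1)    # consistency index
        print(np.round(w, 3), round(ci, 3))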

  9. Spreadsheet as a tool of engineering analysis

    International Nuclear Information System (INIS)

    Becker, M.

    1985-01-01

    In engineering analysis, problems tend to be categorized into those that can be done by hand and those that require the computer for solution. The advent of personal computers, and in particular, the advent of spreadsheet software, blurs this distinction, creating an intermediate category of problems appropriate for use with interactive personal computing

  10. Development of a spreadsheet for SNPs typing using Microsoft EXCEL.

    Science.gov (United States)

    Hashiyada, Masaki; Itakura, Yukio; Takahashi, Shirushi; Sakai, Jun; Funayama, Masato

    2009-04-01

    Single-nucleotide polymorphisms (SNPs) have some characteristics that make them very appropriate for forensic studies and applications. In our institute, SNP typing was performed with TaqMan SNP Genotyping Assays using the ABI PRISM 7500 FAST Real-Time PCR System (Applied Biosystems) and Sequence Detection Software ver. 1.4 (Applied Biosystems). The TaqMan method required two positive controls (Allele 1 and 2) and one negative control to analyze each SNP locus. Therefore, up to 24 loci of one person can be analyzed on a 96-well plate at the same time. If SNP analysis is to be applied to biometric authentication, 48 or more loci are required to identify a person. In this study, we designed a spreadsheet package using Microsoft EXCEL, and population data were used from our 120-SNP population studies. On the spreadsheet, we defined SNP types using 'template files' instead of positive and negative controls. "Template files" consisted of the results of 94 unknown samples and two negative controls for each of the 120 SNP loci we had previously studied. By using these files, the spreadsheet could analyze 96 SNPs on a 96-well plate simultaneously.

  11. Integrated Spreadsheets as a Paradigm of Type II Technology Applications in Mathematics Teacher Education

    Science.gov (United States)

    Abramovich, Sergei

    2016-01-01

    The paper presents the use of spreadsheets integrated with digital tools capable of symbolic computations and graphic constructions in a master's level capstone course for secondary mathematics teachers. Such use of spreadsheets is congruent with the Type II technology applications framework aimed at the development of conceptual knowledge in the…

  12. A Spreadsheet-Based, Matrix Formulation Linear Programming Lesson

    DEFF Research Database (Denmark)

    Harrod, Steven

    2009-01-01

    The article focuses on the spreadsheet-based, matrix formulation linear programming lesson. According to the article, it makes a higher level of theoretical mathematics approachable by a wide spectrum of students wherein many may not be decision sciences or quantitative methods majors. Moreover...

  13. A Spreadsheet Tool for Learning the Multiple Regression F-Test, T-Tests, and Multicollinearity

    Science.gov (United States)

    Martin, David

    2008-01-01

    This note presents a spreadsheet tool that allows teachers the opportunity to guide students towards answering on their own questions related to the multiple regression F-test, the t-tests, and multicollinearity. The note demonstrates approaches for using the spreadsheet that might be appropriate for three different levels of statistics classes,…

  14. Automated security management

    CERN Document Server

    Al-Shaer, Ehab; Xie, Geoffrey

    2013-01-01

    In this contributed volume, leading international researchers explore configuration modeling and checking, vulnerability and risk assessment, configuration analysis, and diagnostics and discovery. The authors equip readers to understand automated security management systems and techniques that increase overall network assurability and usability. These constantly changing networks defend against cyber attacks by integrating hundreds of security devices such as firewalls, IPSec gateways, IDS/IPS, authentication servers, authorization/RBAC servers, and crypto systems. Automated Security Managemen

  15. Automated methods of corrosion measurement

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov; Bech-Nielsen, Gregers; Reeve, John Ch

    1997-01-01

    to revise assumptions regarding the basis of the method, which sometimes leads to the discovery of as-yet unnoticed phenomena. The present selection of automated methods for corrosion measurements is not motivated simply by the fact that a certain measurement can be performed automatically. Automation...... is applied to nearly all types of measurements today....

  16. The Architecture of a Complex GIS & Spreadsheet Based DSS

    Directory of Open Access Journals (Sweden)

    Dinu Airinei

    2010-01-01

    Full Text Available The decision support applications available on today's market typically combine the decision analysis of historical data based on On-Line Analytical Processing (OLAP) products or spreadsheet pivot tables with new reporting facilities such as alerts or key performance indicators available in portal dashboards or in complex spreadsheet-like reports, both corresponding to a newer approach to the field called Business Intelligence. Moreover, the geographical features of GIS added to DSS applications are more and more required by many kinds of businesses. In fact they are more useful this way than as distinctive parts. The paper tries to present a certain DSS architecture based on the association between such approaches and technologies. The particular examples are meant to support all the theoretical arguments and to complete the understanding of the interaction schemas available.

  17. An automated data handling process integrating spreadsheets and word processors with analytical programs

    International Nuclear Information System (INIS)

    Fisher, G.F.; Bennett, L.G.I.

    1994-01-01

    A data handling process utilizing software programs that are commercially available for use on MS-DOS microcomputers was developed to reduce the time, energy and labour required to tabulate the final results of trace analyses. The elimination of hand computations reduced the possibility of transcription errors since, once the γ-ray spectrum analysis results are obtained and saved to a hard disk of a microcomputer, they can be manipulated very easily with little possibility of distortion. The 8-step process permitted the selection of the best concentration value for each element of interest based upon its associated peak area. Calculated concentration values were automatically compared against the sample's determination limit. Unsatisfactory values were flagged for later review and adjustment by the user. In the final step, a file was created which identified the samples with their appropriate particulars (i.e. source, sample, date, etc.), and the trace element concentrations were displayed. This final file contained a fully formatted summary table that listed all of the sample's results and particulars such that it could be printed or imported into a word processor for inclusion in a report. In the illustrated application of analyzing wear debris in oil-lubricated systems, over 13,000 individual numbers were processed to arrive at final concentration estimates of 19 trace elements in 80 samples. The system works very well for the elements that were analyzed in this investigation. The usefulness of commercially available spreadsheets and word processors for this task was demonstrated. (author) 5 refs.; 2 figs.; 5 tabs

  18. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections.

    Science.gov (United States)

    Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D

    2018-01-01

    Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Standard error-checking techniques may not
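
    The comparison logic is easy to sketch outside a spreadsheet. The Python below flags projections that differ from a reference implementation by more than the ±5% material-error threshold used in the study; the model outputs are invented numbers, purely for illustration.

        # Compare parallel implementations of the same model and flag material differences.
        def pct_diff(a, b):
            return 100.0 * (a - b) / b

        versions = {
            "named_single_cells": {"in_care": 10450, "on_treatment": 8020},
            "column_row_refs":    {"in_care": 11890, "on_treatment": 8020},
            "named_matrices":     {"in_care": 10450, "on_treatment": 8020},
        }
        reference = versions["named_matrices"]            # treat one version as the reference

        for name, outputs in versions.items():
            for metric, value in outputs.items():
                d = pct_diff(value, reference[metric])
                if abs(d) > 5.0:                          # material unintentional error
                    print(f"{name}/{metric}: {d:+.1f}% vs reference")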

  19. Computer Corner: Spreadsheets, Power Series, Generating Functions, and Integers.

    Science.gov (United States)

    Snow, Donald R.

    1989-01-01

    Implements a table algorithm on a spreadsheet program and obtains functions for several number sequences such as the Fibonacci and Catalan numbers. Considers other applications of the table algorithm to integers represented in various number bases. (YP)

  20. A Typical Model Audit Approach: Spreadsheet Audit Methodologies in the City of London

    OpenAIRE

    Croll, Grenville J.

    2007-01-01

    Spreadsheet audit and review procedures are an essential part of almost all City of London financial transactions. Structured processes are used to discover errors in large financial spreadsheets underpinning major transactions of all types. Serious errors are routinely found and are fed back to model development teams generally under conditions of extreme time urgency. Corrected models form the essence of the completed transaction and firms undertaking model audit and review expose themselve...

  1. Spreadsheet Decision Support Model for Training Exercise Material Requirements Planning

    National Research Council Canada - National Science Library

    Tringali, Arthur

    1997-01-01

    This thesis focuses on developing a spreadsheet decision support model that can be used by combat engineer platoon and company commanders in determining the material requirements and estimated costs...

  2. Introducing Artificial Neural Networks through a Spreadsheet Model

    Science.gov (United States)

    Rienzo, Thomas F.; Athappilly, Kuriakose K.

    2012-01-01

    Business students taking data mining classes are often introduced to artificial neural networks (ANN) through point and click navigation exercises in application software. Even if correct outcomes are obtained, students frequently do not obtain a thorough understanding of ANN processes. This spreadsheet model was created to illuminate the roles of…

  3. Strontium-90 Error Discovered in Subcontract Laboratory Spreadsheet. Topical Report

    International Nuclear Information System (INIS)

    Brown, D.D.; Nagel, A.S.

    1999-07-01

    West Valley Demonstration Project health physicists and environment scientists discovered a series of errors in a subcontractor's spreadsheet being used to reduce data as part of their strontium-90 analytical process

  4. Calculating germination measurements and organizing spreadsheets

    OpenAIRE

    Ranal, Marli A.; Santana, Denise Garcia de; Ferreira, Wanessa Resende; Mendes-Rodrigues, Clesnan

    2009-01-01

    With the objective of minimizing difficulties for beginners, we propose the use of a conventional spreadsheet for the calculation of the main germination (or emergence) measurements, the organization of the final data for statistical analysis, and some electronic commands involved in these steps.

  5. A Spreadsheet-based GIS tool for planning aerial photography

    Science.gov (United States)

    The U.S. EPA's Pacific Coastal Ecology Branch has developed a tool that facilitates planning aerial photography missions. This tool is an Excel spreadsheet which accepts various input parameters, such as desired photo-scale and boundary coordinates of the study area, and compiles ...

  6. Spreadsheet Decision Support Model for Training Exercise Material Requirements Planning

    National Research Council Canada - National Science Library

    Tringali, Arthur

    1997-01-01

    ... associated with military training exercises. The model combines the business practice of Material Requirements Planning and the commercial spreadsheet software capabilities of Lotus 1-2-3 to calculate the requirements for food, consumable...

  7. Bell automation system on STM32F4 Discovery board

    OpenAIRE

    Božović, Denis

    2017-01-01

    A bell automation system is a device, the aim of which is to maximize the automation of bell ringing and thus release from duty the person in charge of it. The modern way of life and forms of employment generally make it difficult for human bell-ringers to carry out the task as they did for centuries. In this thesis it is explained what can be expected of the bell automation system in the regions of Slovenia, and why it is desirable that it supports certain functionalities. Using as an exampl...

  8. A procedure to compute equilibrium concentrations in multicomponent systems by Gibbs energy minimization on spreadsheets

    International Nuclear Information System (INIS)

    Lima da Silva, Aline; Heck, Nestor Cesar

    2003-01-01

    Equilibrium concentrations are traditionally calculated with the help of equilibrium constant equations from selected reactions. This procedure, however, is only useful for simpler problems. Analysis of the equilibrium state in a multicomponent and multiphase system necessarily involves solution of several simultaneous equations, and, as the number of system components grows, the required computation becomes more complex and tedious. A more direct and general method for solving the problem is the direct minimization of the Gibbs energy function. The solution for the nonlinear problem consists in minimizing the objective function (Gibbs energy of the system) subject to the constraints of the elemental mass-balance. To solve it, usually a computer code is developed, which requires considerable testing and debugging efforts. In this work, a simple method to predict equilibrium composition in multicomponent systems is presented, which makes use of an electronic spreadsheet. The ability to carry out these calculations within a spreadsheet environment shows several advantages. First, spreadsheets are available 'universally' on nearly all personal computers. Second, the input and output capabilities of spreadsheets can be effectively used to monitor calculated results. Third, no additional systems or programs need to be learned. In this way, spreadsheets are suitable both for computing equilibrium concentrations and for use as teaching and learning aids. This work describes, therefore, the use of the Solver tool, contained in the Microsoft Excel spreadsheet package, for computing equilibrium concentrations in a multicomponent system by the method of direct Gibbs energy minimization. The four-phase Fe-Cr-O-C-Ni system is used as an example to illustrate the proposed method. The pure stoichiometric phases considered in the equilibrium calculations are Cr2O3(s) and FeO·Cr2O3(s). The atmosphere consists of O2, CO and CO2 constituents. The liquid iron
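
    A minimal sketch of the same idea in Python, with SciPy's SLSQP solver playing the role of Excel's Solver, is given below: it minimises the total Gibbs energy of an ideal-gas C-O system subject to elemental mass balances. The species data are standard 298 K values, and the system is a simplified stand-in rather than the Fe-Cr-O-C-Ni case treated in the paper.

        import numpy as np
        from scipy.optimize import minimize

        R, T = 8.314, 298.15                         # J/(mol K), K
        species = ["CO", "CO2", "O2"]
        g0 = np.array([-137200.0, -394400.0, 0.0])   # standard Gibbs energies of formation, J/mol
        # Element matrix (rows: C, O) and element totals for 1 mol CO + 0.5 mol O2.
        A = np.array([[1, 1, 0],
                      [1, 2, 2]])
        b = np.array([1.0, 2.0])

        def gibbs(n):
            n = np.maximum(n, 1e-12)                 # guard the logarithm
            return float(np.sum(n * (g0 + R * T * np.log(n / n.sum()))))

        res = minimize(gibbs, x0=np.array([0.4, 0.4, 0.3]), method="SLSQP",
                       bounds=[(1e-10, None)] * 3,
                       constraints=[{"type": "eq", "fun": lambda n: A @ n - b}])
        print(dict(zip(species, np.round(res.x, 6))))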

  9. Computerized cost estimation spreadsheet and cost data base for fusion devices

    International Nuclear Information System (INIS)

    Hamilton, W.R.; Rothe, K.E.

    1985-01-01

    Component design parameters (weight, surface area, etc.) and cost factors are input and direct and indirect costs are calculated. The cost data base file derived from actual cost experience within the fusion community and refined to be compatible with the spreadsheet costing approach is a catalog of cost coefficients, algorithms, and component costs arranged into data modules corresponding to specific components and/or subsystems. Each data module contains engineering, equipment, and installation labor cost data for different configurations and types of the specific component or subsystem. This paper describes the assumptions, definitions, methodology, and architecture incorporated in the development of the cost estimation spreadsheet and cost data base, along with the type of input required and the output format

  10. Solving L-L Extraction Problems with Excel Spreadsheet

    Science.gov (United States)

    Teppaitoon, Wittaya

    2016-01-01

    This work aims to demonstrate the use of Excel spreadsheets for solving L-L extraction problems. The key to solving the problems successfully is to be able to determine a tie line on the ternary diagram where the calculation must be carried out. This enables the reader to analyze the extraction process starting with a simple operation, the…

  11. Software validation applied to spreadsheets used in laboratories working under ISO/IEC 17025

    Science.gov (United States)

    Banegas, J. M.; Orué, M. W.

    2016-07-01

    Several documents deal with software validation. Nevertheless, most are too complex to be applied to validate spreadsheets - surely the most used software in laboratories working under ISO/IEC 17025. The method proposed in this work is intended to be directly applied to validate spreadsheets. It includes a systematic way to document requirements, operational aspects regarding validation, and a simple method to keep records of validation results and of the modification history. This method is actually being used in an accredited calibration laboratory, where it has proved to be practical and efficient.

  12. Automation of High-Throughput Crystal Screening and Data Collection at SSRL

    International Nuclear Information System (INIS)

    Miller, Mitchell D.; Brinen, Linda S.; Deacon, Ashley M.; Bedem, Henry van den; Wolf, Guenter; Xu Qingping; Zhang Zepu; Cohen, Aina; Ellis, Paul; McPhillips, Scott E.; McPhillips, Timothy M.; Phizackerley, R. Paul; Soltis, S. Michael

    2004-01-01

    A robotic system for auto-mounting crystals from liquid nitrogen is now operational on SSRL beamlines (Cohen et al., J. Appl. Cryst. (2002). 35, 720-726). The system uses a small industrial 4-axis robot with a custom built actuator. Once mounted, automated alignment of the sample loop to the X-ray beam readies the crystal for data collection. After data collection, samples are returned to the cassette. The beamline Dewar accommodates three compact sample cassettes (holding up to 96 samples each). During the past 4 months, the system on beamline 11-1 has been used to screen over 1000 crystals. The system has reduced both screening time and manpower. Integration of the hardware components is accomplished in the Distributed Control System architecture developed at SSRL (McPhillips et al., J. Synchrotron Rad. (2002) 9, 401-406). A crystal-screening interface has been implemented in Blu-Ice. Sample details can be uploaded from an Excel spreadsheet. The JCSG generates these spreadsheets automatically from their tracking database using standard database tools (http://www.jcsg.org). New diffraction image analysis tools are being employed to aid in extracting results. Automation also permits tele-presence. For example, samples have been changed during the night without leaving home and scientists have screened crystals 1600 miles from the beamline. The system developed on beamline 11-1 has been replicated onto 1-5, 9-1, 9-2, and 11-3 and is used by both general users and the JCSG

  13. AXAOTHER XL -- A spreadsheet for determining doses for incidents caused by tornadoes or high-velocity straight winds

    International Nuclear Information System (INIS)

    Simpkins, A.A.

    1996-09-01

    AXAOTHER XL is an Excel Spreadsheet used to determine dose to the maximally exposed offsite individual during high-velocity straight winds or tornado conditions. Both individual and population doses may be considered. Potential exposure pathways are inhalation and plume shine. For high-velocity straight winds the spreadsheet has the capability to determine the downwind relative air concentration, however for the tornado conditions, the user must enter the relative air concentration. Theoretical models are discussed and hand calculations are performed to ensure proper application of methodologies. A section has also been included that contains user instructions for the spreadsheet

  14. [Development of an Excel spreadsheet for meta-analysis of indirect and mixed treatment comparisons].

    Science.gov (United States)

    Tobías, Aurelio; Catalá-López, Ferrán; Roqué, Marta

    2014-01-01

    Meta-analyses in clinical research usually aim to evaluate treatment efficacy and safety in direct comparison with a single comparator. Indirect comparisons, using Bucher's method, can summarize primary data when information from direct comparisons is limited or nonexistent. Mixed comparisons allow combining estimates from direct and indirect comparisons, increasing statistical power. There is a need for simple applications for meta-analysis of indirect and mixed comparisons. These can easily be conducted using a Microsoft Office Excel spreadsheet. We developed a user-friendly spreadsheet for indirect and mixed comparisons, aimed at clinical researchers who are interested in systematic reviews but are not familiar with more advanced statistical packages. The use of the proposed Excel spreadsheet for indirect and mixed comparisons can be of great use in clinical epidemiology to extend the knowledge provided by traditional meta-analysis when evidence from direct comparisons is limited or nonexistent.
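
    A minimal sketch of Bucher's adjusted indirect comparison on the log odds-ratio scale is shown below (assumed notation, not the published spreadsheet): the A-versus-B estimate is obtained from A-versus-C and B-versus-C trials, with their variances added; all input values are invented.

        import math

        def bucher_indirect(log_or_ac, se_ac, log_or_bc, se_bc, z=1.96):
            log_or_ab = log_or_ac - log_or_bc           # adjusted indirect estimate
            se_ab = math.sqrt(se_ac**2 + se_bc**2)      # variances add
            lo, hi = log_or_ab - z * se_ab, log_or_ab + z * se_ab
            return math.exp(log_or_ab), (math.exp(lo), math.exp(hi))

        # Example with made-up inputs: OR(A vs C) = 0.70, OR(B vs C) = 0.90
        or_ab, ci = bucher_indirect(math.log(0.70), 0.15, math.log(0.90), 0.20)
        print(round(or_ab, 2), tuple(round(x, 2) for x in ci))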

  15. A spreadsheet-based microcomputer application for determining cost-effectiveness of commercial lighting retrofit opportunities

    International Nuclear Information System (INIS)

    Spain, T.K.

    1992-01-01

    Lighting accounts for 20-25% of electricity use in the United States. With estimates of 50-70% potential reductions being made by energy engineers, lighting is a promising area for cost-effective energy conservation projects in commercial buildings. With an extensive array of alternatives available to replace or modify existing lighting systems, simple but effective calculation tools are needed to help energy auditors evaluate lighting retrofits. This paper describes a spreadsheet-based microcomputer application for determining the cost-effectiveness of commercial lighting retrofits. Developed to support walk-through energy audits conducted by the Industrial Energy Advisory Service (IdEAS), the spreadsheet provides essential comparative data for evaluating the payback of alternatives. The impact of alternatives on environmental emissions is calculated to help communicate external costs and sell the project, if appropriate. The methodology and calculations are fully documented to allow the user to duplicate the spreadsheet and modify it as needed

  16. A High Throughput, 384-Well, Semi-Automated, Hepatocyte Intrinsic Clearance Assay for Screening New Molecular Entities in Drug Discovery.

    Science.gov (United States)

    Heinle, Lance; Peterkin, Vincent; de Morais, Sonia M; Jenkins, Gary J; Badagnani, Ilaria

    2015-01-01

    A high throughput, semi-automated clearance screening assay in hepatocytes was developed, allowing a scientist to generate data for 96 compounds in one week. The 384-well format assay utilizes a Thermo Multidrop Combi and an optimized LC-MS/MS method. The previously reported LC-MS/MS method reduced the analytical run time by 3-fold, down to 1.2 min injection-to-injection. The Multidrop was able to deliver hepatocytes to 384-well plates with minimal viability loss. Comparison of results from the new 384-well and historical 24-well assays yielded a correlation of 0.95. In addition, results obtained for 25 marketed drugs with various metabolism pathways had a correlation of 0.75 when compared with literature values. Precision was maintained in the new format as 8 compounds tested in ≥39 independent experiments had coefficients of variation ≤21%. The ability to predict in vivo clearances using the new stability assay format was also investigated using 22 marketed drugs and 26 AbbVie compounds. Correction of intrinsic clearance values with binding to hepatocytes (in vitro data) and plasma (in vivo data) resulted in a higher in vitro to in vivo correlation when comparing 22 marketed compounds in human (0.80 vs 0.35) and 26 AbbVie Discovery compounds in rat (0.56 vs 0.17), demonstrating the importance of correcting for binding in clearance studies. This newly developed high throughput, semi-automated clearance assay allows for rapid screening of Discovery compounds to enable Structure Activity Relationship (SAR) analysis based on high quality hepatocyte stability data in sufficient quantity and quality to drive the next round of compound synthesis.

  17. Early detection of pharmacovigilance signals with automated methods based on false discovery rates: a comparative study.

    Science.gov (United States)

    Ahmed, Ismaïl; Thiessard, Frantz; Miremont-Salamé, Ghada; Haramburu, Françoise; Kreft-Jais, Carmen; Bégaud, Bernard; Tubert-Bitter, Pascale

    2012-06-01

    Improving the detection of drug safety signals has led several pharmacovigilance regulatory agencies to incorporate automated quantitative methods into their spontaneous reporting management systems. The three largest worldwide pharmacovigilance databases are routinely screened by the lower bound of the 95% confidence interval of proportional reporting ratio (PRR₀₂.₅), the 2.5% quantile of the Information Component (IC₀₂.₅) or the 5% quantile of the Gamma Poisson Shrinker (GPS₀₅). More recently, Bayesian and non-Bayesian False Discovery Rate (FDR)-based methods were proposed that address the arbitrariness of thresholds and allow for a built-in estimate of the FDR. These methods were also shown through simulation studies to be interesting alternatives to the currently used methods. The objective of this work was twofold. Based on an extensive retrospective study, we compared PRR₀₂.₅, GPS₀₅ and IC₀₂.₅ with two FDR-based methods derived from the Fisher's exact test and the GPS model (GPS(pH0) [posterior probability of the null hypothesis H₀ calculated from the Gamma Poisson Shrinker model]). Secondly, restricting the analysis to GPS(pH0), we aimed to evaluate the added value of using automated signal detection tools compared with 'traditional' methods, i.e. non-automated surveillance operated by pharmacovigilance experts. The analysis was performed sequentially, i.e. every month, and retrospectively on the whole French pharmacovigilance database over the period 1 January 1996-1 July 2002. Evaluation was based on a list of 243 reference signals (RSs) corresponding to investigations launched by the French Pharmacovigilance Technical Committee (PhVTC) during the same period. The comparison of detection methods was made on the basis of the number of RSs detected as well as the time to detection. Results comparing the five automated quantitative methods were in favour of GPS(pH0) in terms of both number of detections of true signals and
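
    As an illustration of one of the routine statistics named above, the sketch below computes the proportional reporting ratio and the lower bound of its 95% confidence interval (PRR₀₂.₅) from a 2 x 2 table of spontaneous reports, using the standard log-normal approximation; the counts are invented and the code is not an agency implementation.

        import math

        # a: drug & event, b: drug & other events,
        # c: other drugs & event, d: other drugs & other events
        def prr_lower_bound(a, b, c, d, z=1.96):
            prr = (a / (a + b)) / (c / (c + d))
            se_log = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
            return prr, prr * math.exp(-z * se_log)

        prr, prr025 = prr_lower_bound(a=12, b=988, c=240, d=98760)
        print(round(prr, 2), round(prr025, 2))   # flag a signal if the lower bound exceeds a threshold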

  18. Spreadsheet design and validation for characteristic limits determination in gross alpha and beta measurement

    International Nuclear Information System (INIS)

    Prado, Rodrigo G.P. do; Dalmazio, Ilza

    2013-01-01

    The identification and detection of ionizing radiation are essential requisites of radiation protection. Gross alpha and beta measurements are widely applied as a screening method in radiological characterization, environmental monitoring and industrial applications. As in any other analytical technique, test performance depends on the quality of the instrumental measurements and the reliability of the calculations. Characteristic limits refer to three specific statistics, namely the decision threshold, the detection limit and the confidence interval, which are fundamental to ensuring the quality of determinations. This work describes a way to calculate characteristic limits for measurements of gross alpha and beta activity using spreadsheets. The approach used to determine the decision threshold, the detection limit and the limits of the confidence interval, as well as the mathematical expressions for the measurands and their uncertainty, followed standard guidelines. A succinct overview of this approach and examples are presented, and the spreadsheets were validated using specific software. Furthermore, these spreadsheets can be used as a tool to instruct beginners in methods for ionizing radiation measurements. (author)
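
    As a hedged illustration of the kind of characteristic-limit calculation such a spreadsheet can implement, the Python below uses simplified ISO 11929-style counting statistics and ignores the uncertainty of the calibration factor; the counting times, counts and calibration factor are arbitrary.

        import math

        def characteristic_limits(n_b, t_b, t_s, w, k=1.645):
            # n_b background counts in t_b seconds, sample counted for t_s seconds,
            # w = calibration factor (Bq per count rate); k = 1.645 for alpha = beta = 0.05.
            r_b = n_b / t_b                                   # background count rate
            u0 = w * math.sqrt(r_b / t_s + r_b / t_b)         # uncertainty of the result at zero activity
            decision_threshold = k * u0
            detection_limit = k * k * w / t_s + 2 * decision_threshold
            return decision_threshold, detection_limit

        # 600 background counts in 3600 s, 3600 s sample count, w = 0.05 Bq/(count/s)
        print(tuple(round(x, 5) for x in characteristic_limits(600, 3600, 3600, 0.05)))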

  19. Integrating Computer Spreadsheet Modeling into a Microeconomics Curriculum: Principles to Managerial.

    Science.gov (United States)

    Clark, Joy L.; Hegji, Charles E.

    1997-01-01

    Notes that using spreadsheets to teach microeconomics principles enables learning by doing in the exploration of basic concepts. Introduction of increasingly complex topics leads to exploration of theory and managerial decision making. (SK)

  20. Using a Spreadsheet Scroll Bar to Solve Equilibrium Concentrations

    Science.gov (United States)

    Raviolo, Andres

    2012-01-01

    A simple, conceptual method is described for using the spreadsheet scroll bar to find the composition of a system at chemical equilibrium. Simulation of any kind of chemical equilibrium can be carried out using this method, and the effects of different disturbances can be predicted. This simulation, which can be used in general chemistry…

  1. Spinning the Big Wheel on “The Price is Right”: A Spreadsheet Simulation Exercise

    Directory of Open Access Journals (Sweden)

    Keith A Willoughby

    2010-04-01

    Full Text Available A popular game played in each broadcast of the United States television game show “The Price is Right” has contestants spinning a large wheel comprised of twenty different monetary values (in 5-cent increments from $0.05 to $1.00. A player wins by scoring closest to, without exceeding, $1.00. Players may accomplish this in one or a total of two spins. We develop a spreadsheet modeling exercise, useful in an introductory undergraduate Spreadsheet Analytics course, to simulate the spinning of the wheel and to determine optimal spinning strategies.
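
    A spreadsheet-style Monte Carlo of the wheel is easy to reproduce in Python. The sketch below simulates one simple decision rule for the first contestant, "spin again only if the first spin is at or below a cutoff", and is purely illustrative rather than the strategy analysis developed in the exercise.

        import random

        WHEEL = [i / 20 for i in range(1, 21)]        # $0.05 ... $1.00 in 5-cent steps

        def play(cutoff, trials=100_000):
            totals, busts = [], 0
            for _ in range(trials):
                total = random.choice(WHEEL)
                if total <= cutoff:                   # take a second spin
                    total += random.choice(WHEEL)
                    if total > 1.0:                   # went over $1.00
                        busts += 1
                        total = 0.0
                totals.append(total)
            return sum(totals) / trials, busts / trials

        for cutoff in (0.50, 0.65, 0.70):
            avg, bust = play(cutoff)
            print(f"cutoff {cutoff:.2f}: mean score {avg:.3f}, bust rate {bust:.3f}")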

  2. A Cognitive Adopted Framework for IoT Big-Data Management and Knowledge Discovery Prospective

    OpenAIRE

    Mishra, Nilamadhab; Lin, Chung-Chih; Chang, Hsien-Tsung

    2015-01-01

    In future IoT big-data management and knowledge discovery for large scale industrial automation application, the importance of industrial internet is increasing day by day. Several diversified technologies such as IoT (Internet of Things), computational intelligence, machine type communication, big-data, and sensor technology can be incorporated together to improve the data management and knowledge discovery efficiency of large scale automation applications. So in this work, we need to propos...

  3. A methodology for constructing the calculation model of scientific spreadsheets

    NARCIS (Netherlands)

    Vos, de M.; Wielemaker, J.; Schreiber, G.; Wielinga, B.; Top, J.L.

    2015-01-01

    Spreadsheet models are frequently used by scientists to analyze research data. These models are typically described in a paper or a report, which serves as the single source of information on the underlying research project. As the calculation workflow in these models is not made explicit, readers are

  4. A Novel Approach to Formulae Production and Overconfidence Measurement to Reduce Risk in Spreadsheet Modelling

    OpenAIRE

    Thorne, Simon; Ball, David; Lawson, Zoe

    2008-01-01

    Research on formulae production in spreadsheets has established the practice as high risk yet unrecognised as such by industry. There are numerous software applications that are designed to audit formulae and find errors. However, these all operate post-creation, designed to catch errors before the spreadsheet is deployed. As a general conclusion of the EuSpRIG 2003 conference it was decided that the time has come to attempt novel solutions based on an understanding of human factors. Hence in this p...

  5. NET PRESENT VALUE SIMULATING WITH A SPREADSHEET

    Directory of Open Access Journals (Sweden)

    Maria CONSTANTINESCU

    2010-01-01

    Full Text Available Decision making has always been a difficult process, based on various combinations of objectivity (when scientific tools are used) and subjectivity (considering that decisions are ultimately made by people, with their strengths and weaknesses). The IT revolution has also reached the areas of management and decision making, helping managers make better and more informed decisions by providing them with a variety of tools, from personal computers to specialized software. Most simulations are performed in a spreadsheet, because the number of calculations required soon overwhelms human capability.
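
    As a small illustration of the kind of simulation the article describes (not its actual model), the Python sketch below draws uncertain annual cash flows, discounts them, and summarises the resulting NPV distribution; all figures and distributions are invented.

        import random
        import statistics

        def npv(rate, cash_flows):
            # cash_flows[0] is the investment at t = 0 (negative), the rest are annual inflows
            return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

        def simulate(n=10_000, rate=0.10, initial=-100_000):
            results = []
            for _ in range(n):
                flows = [initial] + [random.gauss(30_000, 8_000) for _ in range(5)]
                results.append(npv(rate, flows))
            return results

        runs = simulate()
        print(round(statistics.mean(runs)), round(statistics.stdev(runs)),
              round(sum(r < 0 for r in runs) / len(runs), 3))   # mean, sd, P(NPV < 0)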

  6. INVARIANT PRACTICAL TASKS FOR WORK WITH ELECTRONIC SPREADSHEETS AT THE SECONDARY SCHOOL

    Directory of Open Access Journals (Sweden)

    Л И Карташова

    2016-12-01

    Full Text Available The article gives examples of practical tasks on creating and editing spreadsheets for pupils of the secondary school. To consolidate their knowledge and skills in formatting cells, pupils are asked, for example, to create a table in the spreadsheet processor and format it according to a sample, which is shown on the computer monitor, printed on a colour printer, and shared on the local area network as an image. While learning about data types, pupils are given tasks to determine and explain the data types to which different strings belong. To master the way formulas are recorded, pupils are asked to write various mathematical expressions in a form suitable for use in spreadsheets. The tasks reflect a fundamentally invariant approach to working with spreadsheets, as they do not depend on specific versions of computer programs. The provided tasks can therefore be used when studying any spreadsheet processor. Learning based on invariant tasks helps pupils master generalized methods of working with numerical information, which allows them to form a systematic view of the use of information technologies and to apply them consciously to solving problems.

  7. Automated and comprehensive link engineering supporting branched, ring, and mesh network topologies

    Science.gov (United States)

    Farina, J.; Khomchenko, D.; Yevseyenko, D.; Meester, J.; Richter, A.

    2016-02-01

    Link design, while relatively easy in the past, can become quite cumbersome with complex channel plans and equipment configurations. The task of designing optical transport systems and selecting equipment is often performed by an applications or sales engineer using simple tools, such as custom Excel spreadsheets. Eventually, every individual has their own version of the spreadsheet as well as their own methodology for building the network. This approach becomes unmanageable very quickly and leads to mistakes, bending of the engineering rules and installations that do not perform as expected. We demonstrate a comprehensive planning environment, which offers an efficient approach to unify, control and expedite the design process by controlling libraries of equipment and engineering methodologies, automating the process and providing the analysis tools necessary to predict system performance throughout the system and for all channels. In addition to the placement of EDFAs and DCEs, performance analysis metrics are provided at every step of the way. Metrics that can be tracked include power, CD and OSNR, SPM, XPM, FWM and SBS. Automated routine steps assist in design aspects such as equalization, padding and gain setting for EDFAs, the placement of ROADMs and transceivers, and creating regeneration points. DWDM networks consisting of a large number of nodes and repeater huts, interconnected in linear, branched, mesh and ring network topologies, can be designed much faster when compared with conventional design methods. Using flexible templates for all major optical components, our technology-agnostic planning approach supports the constant advances in optical communications.

  8. SRTC Spreadsheet to Determine Relative Percent Difference (RPD) for Duplicate Waste Assay Results and to Perform the RPD Acceptance Test

    International Nuclear Information System (INIS)

    Casella, V.R.

    2002-01-01

    This report documents the calculations and logic used for the Microsoft(R) Excel spreadsheet that is used at the 773-A Solid Waste Assay Facility for evaluating duplicate analyses, and validates that the spreadsheet is performing these functions correctly
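
    For orientation, a minimal sketch of the two calculations the spreadsheet documents is given below: the relative percent difference of duplicate assay results and a pass/fail acceptance test against a limit. The 25% acceptance limit is an assumed example, not necessarily the facility's criterion.

        def rpd(result1, result2):
            # relative percent difference of a duplicate pair
            mean = (result1 + result2) / 2.0
            return abs(result1 - result2) / mean * 100.0 if mean else 0.0

        def rpd_acceptance(result1, result2, limit_percent=25.0):
            value = rpd(result1, result2)
            return value, value <= limit_percent        # (RPD, passes acceptance test)

        print(rpd_acceptance(14.2, 16.1))               # e.g. duplicate assay results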

  9. Automated Sample Preparation Platform for Mass Spectrometry-Based Plasma Proteomics and Biomarker Discovery

    Directory of Open Access Journals (Sweden)

    Vilém Guryča

    2014-03-01

    Full Text Available The identification of novel biomarkers from human plasma remains a critical need in order to develop and monitor drug therapies for nearly all disease areas. The discovery of novel plasma biomarkers is, however, significantly hampered by the complexity and dynamic range of proteins within plasma, as well as the inherent variability in composition from patient to patient. In addition, it is widely accepted that most soluble plasma biomarkers for diseases such as cancer will be represented by tissue leakage products, circulating in plasma at low levels. It is therefore necessary to find approaches with the prerequisite level of sensitivity in such a complex biological matrix. Strategies for fractionating the plasma proteome have been suggested, but improvements in sensitivity are often negated by the resultant process variability. Here we describe an approach using multidimensional chromatography and on-line protein derivatization, which allows for higher sensitivity, whilst minimizing the process variability. In order to evaluate this automated process fully, we demonstrate three levels of processing and compare sensitivity, throughput and reproducibility. We demonstrate that high sensitivity analysis of the human plasma proteome is possible down to the low ng/mL or even high pg/mL level with a high degree of technical reproducibility.

  10. Negative Effects of Learning Spreadsheet Management on Learning Database Management

    Science.gov (United States)

    Vágner, Anikó; Zsakó, László

    2015-01-01

    A lot of students learn spreadsheet management before database management. Their similarities can cause a lot of negative effects when learning database management. In this article, we consider these similarities and explain what can cause problems. First, we analyse the basic concepts such as table, database, row, cell, reference, etc. Then, we…

  11. Use of combinatorial chemistry to speed drug discovery.

    Science.gov (United States)

    Rádl, S

    1998-10-01

    IBC's International Conference on Integrating Combinatorial Chemistry into the Discovery Pipeline was held September 14-15, 1998. The program started with a pre-conference workshop on High-Throughput Compound Characterization and Purification. The agenda of the main conference was divided into sessions of Synthesis, Automation and Unique Chemistries; Integrating Combinatorial Chemistry, Medicinal Chemistry and Screening; Combinatorial Chemistry Applications for Drug Discovery; and Information and Data Management. This meeting was an excellent opportunity to see how big pharma, biotech and service companies are addressing the current bottlenecks in combinatorial chemistry to speed drug discovery. (c) 1998 Prous Science. All rights reserved.

  12. Semi-automated software service integration in virtual organisations

    Science.gov (United States)

    Afsarmanesh, Hamideh; Sargolzaei, Mahdi; Shadi, Mahdieh

    2015-08-01

    To enhance their business opportunities, organisations involved in many service industries are increasingly active in pursuit of both online provision of their business services (BSs) and collaborating with others. Collaborative Networks (CNs) in service industry sector, however, face many challenges related to sharing and integration of their collection of provided BSs and their corresponding software services. Therefore, the topic of service interoperability for which this article introduces a framework is gaining momentum in research for supporting CNs. It contributes to generation of formal machine readable specification for business processes, aimed at providing their unambiguous definitions, as needed for developing their equivalent software services. The framework provides a model and implementation architecture for discovery and composition of shared services, to support the semi-automated development of integrated value-added services. In support of service discovery, a main contribution of this research is the formal representation of services' behaviour and applying desired service behaviour specified by users for automated matchmaking with other existing services. Furthermore, to support service integration, mechanisms are developed for automated selection of the most suitable service(s) according to a number of service quality aspects. Two scenario cases are presented, which exemplify several specific features related to service discovery and service integration aspects.

  13. Potential errors when fitting experience curves by means of spreadsheet software

    NARCIS (Netherlands)

    van Sark, W.G.J.H.M.|info:eu-repo/dai/nl/074628526; Alsema, E.A.|info:eu-repo/dai/nl/073416258

    2010-01-01

    Progress ratios (PRs) are widely used in forecasting development of many technologies; they are derived from historical data represented in experience curves. Fitting the double logarithmic graphs is easily done with spreadsheet software like Microsoft Excel, by adding a trend line to the graph.
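
    A minimal sketch of the underlying calculation, independent of any spreadsheet, is shown below: fit the experience curve C = C0·Q^b on log-log axes by least squares and convert the slope to a progress ratio PR = 2^b; the data points are invented.

        import numpy as np

        cum_production = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)   # cumulative output
        unit_cost      = np.array([10.0, 8.3, 6.5, 5.5, 4.6, 3.6, 3.0])      # cost per unit

        # Ordinary least squares on the double-logarithmic representation.
        b, intercept = np.polyfit(np.log10(cum_production), np.log10(unit_cost), 1)
        progress_ratio = 2 ** b
        print(round(progress_ratio, 3))   # a PR of 0.8 means a 20% cost drop per doubling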

  14. Googling your hand hygiene data: Using Google Forms, Google Sheets, and R to collect and automate analysis of hand hygiene compliance monitoring.

    Science.gov (United States)

    Wiemken, Timothy L; Furmanek, Stephen P; Mattingly, William A; Haas, Janet; Ramirez, Julio A; Carrico, Ruth M

    2018-06-01

    Hand hygiene is one of the most important interventions in the quest to eliminate healthcare-associated infections, and rates in healthcare facilities are markedly low. Since hand hygiene observation and feedback are critical to improve adherence, we created an easy-to-use, platform-independent hand hygiene data collection process and an automated, on-demand reporting engine. A 3-step approach was used for this project: 1) creation of a data collection form using Google Forms, 2) transfer of data from the form to a spreadsheet using Google Spreadsheets, and 3) creation of an automated, cloud-based analytics platform for report generation using R and RStudio Shiny software. A video tutorial of all steps in the creation and use of this free tool can be found on our YouTube channel: https://www.youtube.com/watch?v=uFatMR1rXqU&t. The on-demand reporting tool can be accessed at: https://crsp.louisville.edu/shiny/handhygiene. This data collection and automated analytics engine provides an easy-to-use environment for evaluating hand hygiene data; it also provides rapid feedback to healthcare workers. By reducing some of the data management workload required of the infection preventionist, more focused interventions may be instituted to increase global hand hygiene rates and reduce infection. Copyright © 2018 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  15. Learning Binomial Probability Concepts with Simulation, Random Numbers and a Spreadsheet

    Science.gov (United States)

    Rochowicz, John A., Jr.

    2005-01-01

    This paper introduces the reader to the concepts of binomial probability and simulation. A spreadsheet is used to illustrate these concepts. Random number generators are great technological tools for demonstrating the concepts of probability. Ideas of approximation, estimation, and mathematical usefulness provide numerous ways of learning…
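
    The same idea can be sketched outside a spreadsheet. The Python below estimates a binomial probability by repeated random experiments and compares it with the exact value from the binomial formula; the parameters are arbitrary.

        import random
        from math import comb

        n, p, k = 10, 0.5, 7                 # P(exactly 7 successes in 10 trials)
        exact = comb(n, k) * p**k * (1 - p)**(n - k)

        trials = 100_000
        hits = sum(1 for _ in range(trials)
                   if sum(random.random() < p for _ in range(n)) == k)
        print(round(exact, 4), round(hits / trials, 4))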

  16. Using spreadsheets to develop applied skills in a business math course: Student feedback and perceived learning

    Directory of Open Access Journals (Sweden)

    Thomas Mays

    2015-10-01

    Full Text Available This paper describes the redesign of a business math course and its delivery in both face-to-face and online formats. Central to the redesigned course was the addition of applied spreadsheet exercises that served as both learning and summative assessment tools. Several other learning activities and assignments were integrated into the course to address diverse student learning styles and levels of math anxiety. Students were invited to complete a survey that asked them to rank course activities and assignments based on how well they helped them learn course material. Open-ended items were also included in the survey. In the online course sections, students reported higher perceived learning from the use of the spreadsheet-based application assignments, while face-to-face students preferred demonstrations. Qualitative remarks from the online students included numerous comments about the positive learning impact of the business application spreadsheet-based assignments, as well as the link between these assignments and what students considered the "real world."

  17. Sparse Mbplsr for Metabolomics Data and Biomarker Discovery

    DEFF Research Database (Denmark)

    Karaman, İbrahim

    2014-01-01

    the link between high throughput metabolomics data generated on different analytical platforms, discover important metabolites deriving from the digestion processes in the gut, and automate metabolic pathway discovery from mass spectrometry. PLS (partial least squares) based chemometric methods were...

  18. PLAN: a web platform for automating high-throughput BLAST searches and for managing and mining results.

    Science.gov (United States)

    He, Ji; Dai, Xinbin; Zhao, Xuechun

    2007-02-09

    BLAST searches are widely used for sequence alignment. The search results are commonly adopted for various functional and comparative genomics tasks such as annotating unknown sequences, investigating gene models and comparing two sequence sets. Advances in sequencing technologies pose challenges for high-throughput analysis of large-scale sequence data. A number of programs and hardware solutions exist for efficient BLAST searching, but there is a lack of generic software solutions for mining and personalized management of the results. Systematically reviewing the results and identifying information of interest remains tedious and time-consuming. Personal BLAST Navigator (PLAN) is a versatile web platform that helps users to carry out various personalized pre- and post-BLAST tasks, including: (1) query and target sequence database management, (2) automated high-throughput BLAST searching, (3) indexing and searching of results, (4) filtering results online, (5) managing results of personal interest in favorite categories, (6) automated sequence annotation (such as NCBI NR and ontology-based annotation). PLAN integrates, by default, the Decypher hardware-based BLAST solution provided by Active Motif Inc. with a greatly improved efficiency over conventional BLAST software. BLAST results are visualized by spreadsheets and graphs and are full-text searchable. BLAST results and sequence annotations can be exported, in part or in full, in various formats including Microsoft Excel and FASTA. Sequences and BLAST results are organized in projects, the data publication levels of which are controlled by the registered project owners. In addition, all analytical functions are provided to public users without registration. PLAN has proved a valuable addition to the community for automated high-throughput BLAST searches, and, more importantly, for knowledge discovery, management and sharing based on sequence alignment results. The PLAN web interface is platform

  19. PLAN: a web platform for automating high-throughput BLAST searches and for managing and mining results

    Directory of Open Access Journals (Sweden)

    Zhao Xuechun

    2007-02-01

    Full Text Available Abstract Background BLAST searches are widely used for sequence alignment. The search results are commonly adopted for various functional and comparative genomics tasks such as annotating unknown sequences, investigating gene models and comparing two sequence sets. Advances in sequencing technologies pose challenges for high-throughput analysis of large-scale sequence data. A number of programs and hardware solutions exist for efficient BLAST searching, but there is a lack of generic software solutions for mining and personalized management of the results. Systematically reviewing the results and identifying information of interest remains tedious and time-consuming. Results Personal BLAST Navigator (PLAN) is a versatile web platform that helps users to carry out various personalized pre- and post-BLAST tasks, including: (1) query and target sequence database management, (2) automated high-throughput BLAST searching, (3) indexing and searching of results, (4) filtering results online, (5) managing results of personal interest in favorite categories, (6) automated sequence annotation (such as NCBI NR and ontology-based annotation). PLAN integrates, by default, the Decypher hardware-based BLAST solution provided by Active Motif Inc. with a greatly improved efficiency over conventional BLAST software. BLAST results are visualized by spreadsheets and graphs and are full-text searchable. BLAST results and sequence annotations can be exported, in part or in full, in various formats including Microsoft Excel and FASTA. Sequences and BLAST results are organized in projects, the data publication levels of which are controlled by the registered project owners. In addition, all analytical functions are provided to public users without registration. Conclusion PLAN has proved a valuable addition to the community for automated high-throughput BLAST searches, and, more importantly, for knowledge discovery, management and sharing based on sequence alignment results

  20. Novel automated biomarker discovery work flow for urinary peptidomics

    DEFF Research Database (Denmark)

    Balog, Crina I.; Hensbergen, Paul J.; Derks, Rico

    2009-01-01

    samples from Schistosoma haematobium-infected individuals to evaluate clinical applicability. RESULTS: The automated RP-SCX sample cleanup and fractionation system exhibits a high qualitative and quantitative reproducibility, with both BSA standards and urine samples. Because of the relatively high...

  1. Spreadsheets as a Transparent Resource for Learning the Mathematics of Annuities

    Science.gov (United States)

    Pournara, Craig

    2009-01-01

    The ability of mathematics teachers to decompress mathematics and to move between representations are two key features of mathematical knowledge that is usable for teaching. This article reports on four pre-service secondary mathematics teachers learning the mathematics of annuities. In working with spreadsheets students began to make sense of…

  2. Automated applications of sandwich-cultured hepatocytes in the evaluation of hepatic drug transport.

    Science.gov (United States)

    Perry, Cassandra H; Smith, William R; St Claire, Robert L; Brouwer, Kenneth R

    2011-04-01

    Predictions of the absorption, distribution, metabolism, excretion, and toxicity of compounds in pharmaceutical development are essential aspects of the drug discovery process. B-CLEAR is an in vitro system that uses sandwich-cultured hepatocytes to evaluate and predict in vivo hepatobiliary disposition (hepatic uptake, biliary excretion, and biliary clearance), transporter-based hepatic drug-drug interactions, and potential drug-induced hepatotoxicity. Automation of predictive technologies is an advantageous and preferred format in drug discovery. In this study, manual and automated studies are investigated and equivalence is demonstrated. In addition, automated applications using model probe substrates and inhibitors to assess the cholestatic potential of drugs and evaluate hepatic drug transport are examined. The successful automation of this technology provides a more reproducible and less labor-intensive approach, reducing potential operator error in complex studies and facilitating technology transfer.

  3. Spreadsheet eases heat balance, payback calculations

    International Nuclear Information System (INIS)

    Conner, K.P.

    1992-01-01

    This paper reports that a generalized Lotus-type spreadsheet program has been developed to perform the heat balance and simple payback calculations for various turbine-generator (TG) inlet steam pressures. It can be used for potential plant expansions or new cogeneration installations. The program performs the basic heat balance calculations associated with turbine-generator operation, feedwater heating, process steam requirements and desuperheating. The printout shows the basic data and formulation used in the calculations. The turbine efficiency data used are applicable for automatic extraction turbine-generators in the 30-80 MW range. Simple payback calculations are for chemical recovery boilers and power boilers used in the pulp and paper industry. However, the program will also accommodate boilers common to other industries.
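
    The simple-payback part of such a spreadsheet reduces to dividing installed capital cost by annual net savings. A minimal sketch with hypothetical figures (not values from the paper):

```python
# Simple payback period = installed capital cost / annual net savings.
# The figures below are hypothetical placeholders, not values from the paper.
capital_cost = 12_000_000.0   # $, installed cost of the cogeneration project
annual_savings = 3_500_000.0  # $/year, net energy-cost savings

payback_years = capital_cost / annual_savings
print(f"Simple payback: {payback_years:.1f} years")
```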

  4. Integrating numerical computation into the undergraduate education physics curriculum using spreadsheet excel

    Science.gov (United States)

    Fauzi, Ahmad

    2017-11-01

    Numerical computation has many pedagogical advantages: it develops analytical skills and problem-solving skills, helps to learn through visualization, and enhances physics education. Unfortunately, numerical computation is not taught to undergraduate physics education students in Indonesia. Incorporating numerical computation into the undergraduate physics education curriculum presents many challenges. The main challenges are a dense curriculum that makes it difficult to add a new numerical computation course, and the fact that most students have no programming experience. In this research, we used a case study to review how to integrate numerical computation into the undergraduate physics education curriculum. The participants of this research were 54 fourth-semester students of the physics education department. We concluded that numerical computation can be integrated into the undergraduate physics education curriculum using Excel spreadsheets combined with another course. The results of this research complement studies on how to integrate numerical computation into physics learning using Excel spreadsheets.

  5. A novel approach to formulae production and overconfidence measurement to reduce risk in spreadsheet modelling

    OpenAIRE

    Thorne, Simon; Ball, David; Lawson, Zoe Frances

    2004-01-01

    Research on formulae production in spreadsheets has established the practice as high risk yet unrecognised as such by industry. There are numerous software applications that are designed to audit formulae and find errors. However these are all post creation, designed to catch errors before the spreadsheet is deployed. As a general conclusion from the EuSpRIG 2003 conference it was decided that the time has come to attempt novel solutions based on an understanding of human factors. ...

  6. The Euler’s Graphical User Interface Spreadsheet Calculator for Solving Ordinary Differential Equations by Visual Basic for Application Programming

    Science.gov (United States)

    Gaik Tay, Kim; Cheong, Tau Han; Foong Lee, Ming; Kek, Sie Long; Abdul-Kahar, Rosmila

    2017-08-01

    In previous work on an Euler's spreadsheet calculator for solving an ordinary differential equation, Visual Basic for Applications (VBA) programming was used; however, a graphical user interface was not developed to capture users' input. This weakness may confuse users, since the input and output are displayed in the same worksheet. Besides, the existing Euler's spreadsheet calculator is not interactive, as there is no prompt message if there is a mistake in inputting the parameters. On top of that, there are no instructions to guide users in entering the derivative function. Hence, in this paper, we address these limitations by developing a user-friendly and interactive graphical user interface. This improvement aims to capture users' input with instructions and interactive error prompts by using VBA programming. This Euler's graphical user interface spreadsheet calculator does not act as a black box, as users can click on any cell in the worksheet to see the formula used to implement the numerical scheme. In this way, it can enhance self-learning and life-long learning in implementing the numerical scheme in a spreadsheet and later in any programming language.
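
    The numerical scheme the calculator implements is the standard Euler step y_{n+1} = y_n + h f(x_n, y_n). A minimal Python sketch of that scheme is given below for illustration; it is not the VBA implementation described in the paper.

```python
# Euler's method for dy/dx = f(x, y): the numerical scheme the spreadsheet
# calculator implements, shown in Python rather than VBA for illustration.
def euler(f, x0, y0, h, n):
    """Return the (x, y) points produced by n Euler steps of size h."""
    points = [(x0, y0)]
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
        points.append((x, y))
    return points

# Example: dy/dx = x + y with y(0) = 1 and step h = 0.1 (demonstration values).
for x, y in euler(lambda x, y: x + y, 0.0, 1.0, 0.1, 10):
    print(f"x = {x:.1f}, y = {y:.6f}")
```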

  7. Upscaling and automation of electrophysiology: toward high throughput screening in ion channel drug discovery

    DEFF Research Database (Denmark)

    Asmild, Margit; Oswald, Nicholas; Krzywkowski, Karen M

    2003-01-01

    by developing two lines of automated patch clamp products, a traditional pipette-based system called Apatchi-1, and a silicon chip-based system QPatch. The degree of automation spans from semi-automation (Apatchi-1), where a trained technician interacts with the system in a limited way, to complete automation (QPatch 96), where the system works continuously and unattended until screening of a full compound library is completed. The performance of the systems ranges from medium to high throughputs.

  8. Automation of P-3 Simulations to Improve Operator Workload

    Science.gov (United States)

    2012-09-01

    ...this thesis and because they each have a unique approach to solving the problem of entity behavior automation. A. DISCOVERY MACHINE: The United States...from the operators and can be automated in JSAF using the mental simulation approach. Two trips were conducted to visit the Naval Warfare...

  9. Automated tool for virtual screening and pharmacology-based pathway prediction and analysis

    Directory of Open Access Journals (Sweden)

    Sugandh Kumar

    2017-10-01

    Full Text Available Virtual screening is an effective tool for lead identification in drug discovery. However, only a limited number of crystal structures are available compared with the number of biological sequences, which makes structure-based drug discovery (SBDD) a difficult choice. The current tool is an attempt to automate protein structure modelling and virtual screening followed by pharmacology-based prediction and analysis. Starting from sequence(s), this tool automates protein structure modelling, binding site identification, automated docking, ligand preparation, post-docking analysis and identification of hits in the biological pathways that can be modulated by a group of ligands. This automation helps in the characterization of ligand selectivity and the action of ligands on a complex biological molecular network as well as on individual receptors. The judicious combination of ligands binding different receptors can be used to inhibit selected biological pathways in a disease. This tool also allows the user to systematically investigate network-dependent effects of a drug or drug candidate.

  10. Context-sensitive service discovery experimental prototype and evaluation

    DEFF Research Database (Denmark)

    Balken, Robin; Haukrogh, Jesper; L. Jensen, Jens

    2007-01-01

    The number of different networks and services available to users today is increasing. This introduces the need for a way to locate and sort out irrelevant services in the process of discovering available services for a user. This paper describes and evaluates a prototype of an automated discovery and selection system, which locates services relevant to a user based on his/her context and the context of the available services. The prototype includes a multi-level, hierarchical system approach and the introduction of entities called User-nodes, Super-nodes and Root-nodes. These entities separate the network into domains that handle the complex distributed service discovery, which is based on dynamically changing context information. In the prototype, a method for performing context-sensitive service discovery has been realised. The service discovery part utilizes UPnP, which has been expanded in order...

  11. Simple Spreadsheet Thermal Models for Cryogenic Applications

    Science.gov (United States)

    Nash, Alfred

    1995-01-01

    Self consistent circuit analog thermal models that can be run in commercial spreadsheet programs on personal computers have been created to calculate the cooldown and steady state performance of cryogen cooled Dewars. The models include temperature dependent conduction and radiation effects. The outputs of the models provide temperature distribution and Dewar performance information. These models have been used to analyze the SIRTF Telescope Test Facility (STTF). The facility has been brought on line for its first user, the Infrared Telescope Technology Testbed (ITTT), for the Space Infrared Telescope Facility (SIRTF) at JPL. The model algorithm as well as a comparison between the models' predictions and actual performance of this facility will be presented.
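
    The circuit-analog idea can be illustrated with a single lumped thermal mass cooled through a fixed conductance. The sketch below uses illustrative constants and omits the temperature-dependent conduction and radiation terms of the actual Dewar models.

```python
# Minimal lumped "circuit analog" cooldown sketch: one thermal mass C connected
# to a cryogen bath through a constant conductance G, integrated explicitly.
# Constants are illustrative; the actual Dewar models add temperature-dependent
# conduction and radiation terms not shown here.
C = 5.0e4      # J/K, lumped heat capacity of the cooled stage
G = 2.0        # W/K, conductance to the cryogen bath
T_sink = 77.0  # K, cryogen temperature
T = 295.0      # K, initial (room) temperature
dt = 60.0      # s, time step

t = 0.0
while T - T_sink > 0.5:
    T += dt * (-G * (T - T_sink)) / C   # dT/dt = -G * (T - T_sink) / C
    t += dt
print(f"Within 0.5 K of the bath after about {t / 3600:.1f} hours")
```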

  12. Head First Excel A learner's guide to spreadsheets

    CERN Document Server

    Milton, Michael

    2010-01-01

    Do you use Excel for simple lists, but get confused and frustrated when it comes to actually doing something useful with all that data? Stop tearing your hair out: Head First Excel helps you painlessly move from spreadsheet dabbler to savvy user. Whether you're completely new to Excel or an experienced user looking to make the program work better for you, this book will help you incorporate Excel into every aspect of your workflow, from a scratch pad for data-based brainstorming to exploratory analysis with PivotTables, optimizing outcomes with Goal Seek, and presenting your conclusions wit

  13. A spreadsheet to determine the volume ratio for target and breast in partial breast irradiation

    International Nuclear Information System (INIS)

    Kron, T.; Willis, D.; Miller, J.; Hubbard, P.; Oliver, M.; Chua, B.

    2009-01-01

    Full text: The technical feasibility of Partial Breast Irradiation (PBI) using external beam radiotherapy depends on the ratio between the evaluation planning target volume (PTV_eval) and the whole breast volume (PBI volume ratio = PVR). We aimed to develop a simple method to determine PVR using measurements performed at the time of the planning CT scan. A PVR calculation tool was developed using a Microsoft Excel spreadsheet to determine the PTV from three orthogonal dimensions of the seroma cavity and a given margin on the CT scans. The breast volume is estimated from the separation and breast height in five equally spaced CT slices. The PTV_eval and whole breast volume were determined for 29 patients from two centres using the spreadsheet calculation tool and compared to volumes delineated on computerised treatment planning systems. Both the PTV_eval and whole breast volumes were underestimated by approximately 25% using the spreadsheet. The resulting PVRs were 1.05 +/- 0.35 (mean +/- 1 SD) times larger than the ones determined from planning. Estimations of the PVR using the calculation tool were achievable in around 5 minutes at the time of CT scanning and allow a prompt decision on the suitability of the patients for PBI.

  14. IFDOTMETER: A New Software Application for Automated Immunofluorescence Analysis.

    Science.gov (United States)

    Rodríguez-Arribas, Mario; Pizarro-Estrella, Elisa; Gómez-Sánchez, Rubén; Yakhine-Diop, S M S; Gragera-Hidalgo, Antonio; Cristo, Alejandro; Bravo-San Pedro, Jose M; González-Polo, Rosa A; Fuentes, José M

    2016-04-01

    Most laboratories interested in autophagy use different imaging software for managing and analyzing heterogeneous parameters in immunofluorescence experiments (e.g., LC3-puncta quantification and determination of the number and size of lysosomes). One solution would be software that works on a user's laptop or workstation that can access all image settings and provide quick and easy-to-use analysis of data. Thus, we have designed and implemented an application called IFDOTMETER, which can run on all major operating systems because it has been programmed using JAVA (Sun Microsystems). Briefly, IFDOTMETER software has been created to quantify a variety of biological hallmarks, including mitochondrial morphology and nuclear condensation. The program interface is intuitive and user-friendly, making it useful for users not familiar with computer handling. By setting previously defined parameters, the software can automatically analyze a large number of images without the supervision of the researcher. Once analysis is complete, the results are stored in a spreadsheet. Using software for high-throughput cell image analysis offers researchers the possibility of performing comprehensive and precise analysis of a high number of images in an automated manner, making this routine task easier. © 2015 Society for Laboratory Automation and Screening.

  15. Station Program Note Pull Automation

    Science.gov (United States)

    Delgado, Ivan

    2016-01-01

    Upon commencement of my internship, I was in charge of maintaining the CoFR (Certificate of Flight Readiness) Tool. The tool acquires data from existing Excel workbooks on NASA's and Boeing's databases to create a new spreadsheet listing out all the potential safety concerns for upcoming flights and software transitions. Since the application was written in Visual Basic, I had to learn a new programming language and prepare to handle any malfunctions within the program. Shortly afterwards, I was given the assignment to automate the Station Program Note (SPN) Pull process. I developed an application, in Python, that generated a GUI (Graphical User Interface) that will be used by the International Space Station Safety & Mission Assurance team here at Johnson Space Center. The application will allow its users to download online files with the click of a button, import SPNs based on three different pulls, instantly manipulate and filter spreadsheets, and compare the three sources to determine which active SPNs (Station Program Notes) must be reviewed for any upcoming flights, missions, and/or software transitions. Initially, to perform the NASA SPN pull (one of three), I had created the program to allow the user to log in to a secure webpage that stores data, input specific parameters, and retrieve the desired SPNs based on their inputs. However, to avoid any conflicts with sustainment, I altered it so that the user may log in and download the NASA file independently. After the user has downloaded the file with the click of a button, I defined the program to check for any outdated or pre-existing files, for successful downloads, to acquire the spreadsheet, convert it from a text file to a comma-separated file and finally into an Excel spreadsheet to be filtered and later scrutinized for specific SPN numbers. Once this file has been automatically manipulated to provide only the SPN numbers that are desired, they are stored in a global variable, shown on the GUI, and
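
    The filtering step described above can be sketched independently of the actual NASA/Boeing file formats. In the Python fragment below, the report layout and the column names ("SPN", "Status") are hypothetical placeholders.

```python
# Sketch of the filtering step only: read a downloaded report, write it out as
# CSV, and keep the SPN numbers still marked active. The file layout and the
# column names ("SPN", "Status") are hypothetical, not the actual formats.
import csv

def text_report_to_rows(path, delimiter="\t"):
    with open(path, newline="") as f:
        return list(csv.DictReader(f, delimiter=delimiter))

def active_spns(rows):
    return sorted({row["SPN"] for row in rows if row.get("Status", "").lower() == "active"})

rows = text_report_to_rows("nasa_spn_pull.txt")
if rows:
    with open("nasa_spn_pull.csv", "w", newline="") as out:
        writer = csv.DictWriter(out, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
    print("Active SPNs to review:", active_spns(rows))
```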

  16. A Computer Simulation Using Spreadsheets for Learning Concept of Steady-State Equilibrium

    Science.gov (United States)

    Sharda, Vandana; Sastri, O. S. K. S.; Bhardwaj, Jyoti; Jha, Arbind K.

    2016-01-01

    In this paper, we present a simple spreadsheet based simulation activity that can be performed by students at the undergraduate level. This simulation is implemented in free open source software (FOSS) LibreOffice Calc, which is available for both Windows and Linux platform. This activity aims at building the probability distribution for the…

  17. Calculational Tool for Skin Contamination Dose Assessment

    CERN Document Server

    Hill, R L

    2002-01-01

    A spreadsheet calculational tool was developed to automate the calculations performed for dose assessment of skin contamination. This document reports on the design and testing of the spreadsheet calculational tool.

  18. Making the business case for telemedicine: an interactive spreadsheet.

    Science.gov (United States)

    McCue, Michael J; Palsbo, Susan E

    2006-04-01

    The objective of this study was to demonstrate the business case for telemedicine in nonrural areas. We developed an interactive spreadsheet to conduct multiple financial analyses under different capital investment, revenue, and expense scenarios. We applied the spreadsheet to the specific case of poststroke rehabilitation in urban settings. The setting involved outpatient clinics associated with a freestanding rehabilitation hospital in Oklahoma. Our baseline scenario used historical financial data from face-to-face encounters as the baseline for payer and volume mix. We assumed a cost of capital of 10% to finance the project. The outcome measures were financial breakeven points and internal rate of return. A total of 340 telemedicine visits will generate a positive net cash flow each year. The project is expected to recoup the initial investment by the fourth year, produce a positive present value dollar return of more than $2,000, and earn a rate of return of 20%, which exceeds the hospital's cost of capital. The business case is demonstrated for this scenario. Urban telemedicine programs can be financially self-sustaining without accounting for reductions in travel time by providers or patients. Urban telemedicine programs can be a sound business investment and not depend on grants or subsidies for start-up funding. There are several key decision points that affect breakeven points and return on investment. The best business strategy is to approach the decision as whether or not to build a new clinic.
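
    The two headline figures of such an analysis, net present value and internal rate of return, are straightforward to compute outside the spreadsheet. The cash flows in the sketch below are hypothetical and are not taken from the study.

```python
# Net present value and a bisection-based internal rate of return, the two
# figures such a spreadsheet reports. The cash flows below are hypothetical.
def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Year-0 investment followed by annual net cash flows from telemedicine visits.
flows = [-50_000, 12_000, 15_000, 18_000, 20_000, 22_000]
print(f"NPV at a 10% cost of capital: ${npv(0.10, flows):,.0f}")
print(f"IRR: {irr(flows):.1%}")
```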

  19. The Semantic Automated Discovery and Integration (SADI) Web service Design-Pattern, API and Reference Implementation

    Directory of Open Access Journals (Sweden)

    Wilkinson Mark D

    2011-10-01

    Full Text Available Abstract Background The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. Description SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers. Conclusions SADI Services are fully compliant with, and utilize only foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. Finally, we show that, using SADI, data dynamically generated from Web services

  20. The Semantic Automated Discovery and Integration (SADI) Web service Design-Pattern, API and Reference Implementation

    Science.gov (United States)

    2011-01-01

    Background The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. Description SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers. Conclusions SADI Services are fully compliant with, and utilize only foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner

  1. The Semantic Automated Discovery and Integration (SADI) Web service Design-Pattern, API and Reference Implementation.

    Science.gov (United States)

    Wilkinson, Mark D; Vandervalk, Benjamin; McCarthy, Luke

    2011-10-24

    The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers. SADI Services are fully compliant with, and utilize only foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner very similar to data housed in

  2. Applying 'Evidence-Based Medicine' Theory to Interventional Radiology.Part 2: A Spreadsheet for Swift Assessment of Procedural Benefit and Harm

    International Nuclear Information System (INIS)

    MacEneaney, Peter M.; Malone, Dermot E.

    2000-01-01

    AIM: To design a spreadsheet program to analyse interventional radiology (IR) data rapidly produced in local research or reported in the literature using 'evidence-based medicine' (EBM) parameters of treatment benefit and harm. MATERIALS AND METHODS: Microsoft Excel™ was used. The spreadsheet consists of three worksheets. The first shows the 'Levels of Evidence and Grades of Recommendations' that can be assigned to therapeutic studies as defined by the Oxford Centre for EBM. The second and third worksheets facilitate the EBM assessment of therapeutic benefit and harm. Validity criteria are described. These include the assessment of the adequacy of sample size in the detection of possible procedural complications. A contingency (2 x 2) table for raw data on comparative outcomes in treated patients and controls has been incorporated. Formulae for EBM calculations are related to these numerators and denominators in the spreadsheet. The parameters calculated are: benefit -- relative risk reduction, absolute risk reduction, and number needed to treat (NNT); harm -- relative risk, relative odds, and number needed to harm (NNH). Ninety-five per cent confidence intervals are calculated for all these indices. The results change automatically when the data in the therapeutic outcome cells are changed. A final section allows the user to correct the NNT or NNH in their application to individual patients. RESULTS: This spreadsheet can be used on desktop and palmtop computers. The MS Excel™ version can be downloaded via the Internet from the URL ftp://radiography.com/pub/TxHarm00.xls. CONCLUSION: A spreadsheet is useful for the rapid analysis of the clinical benefit and harm from IR procedures.
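
    The benefit and harm indices named above follow directly from the 2 x 2 table. A minimal Python sketch of that arithmetic (confidence intervals omitted; the counts are hypothetical, not data from the paper):

```python
# The benefit/harm arithmetic behind such a 2 x 2 spreadsheet (confidence
# intervals omitted). The counts are hypothetical, not data from the paper.
def ebm_indices(a, b, c, d):
    """a/b: events/non-events in treated patients; c/d: events/non-events in controls."""
    eer = a / (a + b)                 # experimental event rate
    cer = c / (c + d)                 # control event rate
    arr = cer - eer                   # absolute risk reduction
    return {
        "relative risk reduction": (cer - eer) / cer,
        "absolute risk reduction": arr,
        "number needed to treat": 1 / arr if arr else float("inf"),
        "relative risk": eer / cer,
        "relative odds": (a * d) / (b * c),
    }

for name, value in ebm_indices(a=8, b=92, c=20, d=80).items():
    print(f"{name}: {value:.3f}")
```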

  3. Simple Functions Spreadsheet tool presentation; for determination of solubility limits of some radionuclides

    Energy Technology Data Exchange (ETDEWEB)

    Grive, Mireia; Domenech, Cristina; Montoya, Vanessa; Garcia, David; Duro, Lara (Amphos 21, Barcelona (Spain))

    2010-09-15

    This document is a guide for users of the Simple Functions Spreadsheet tool. The Simple Functions Spreadsheet tool has been developed by Amphos 21 to determine the solubility limits of some radionuclides, and it has been especially designed for Performance Assessment exercises. The development of this tool was motivated by the need expressed by SKB for a reliable and easy-to-handle tool to calculate solubility limits in an agile and relatively fast manner. Its development started in 2005 and it has been improved since then, up to the current version. This document describes the careful preliminary study, following expert criteria, that was used to select the simplified aqueous speciation and solid-phase system included in the tool. This report also gives the basic instructions for using the tool and interpreting its results. Finally, this document reports the different validation tests and sensitivity analyses that have been performed during the verification process.

  4. Data analysis and graphing in an introductory physics laboratory: spreadsheet versus statistics suite

    International Nuclear Information System (INIS)

    Peterlin, Primoz

    2010-01-01

    Two methods of data analysis are compared: spreadsheet software and a statistics software suite. Their use is compared analysing data collected in three selected experiments taken from an introductory physics laboratory, which include a linear dependence, a nonlinear dependence and a histogram. The merits of each method are compared.

  5. Excel spreadsheet in teaching numerical methods

    Science.gov (United States)

    Djamila, Harimi

    2017-09-01

    One of the important objectives in teaching numerical methods to undergraduate students is to convey an understanding of numerical method algorithms. Although manual calculation is important for understanding a procedure, it is time consuming and prone to error, especially for the iteration procedures used in many numerical methods. Many commercial programs, such as Matlab, Maple, and Mathematica, are useful for teaching numerical methods, but they are usually not user-friendly for the uninitiated. An Excel spreadsheet offers an initial level of programming that can be used either on or off campus, and students are not distracted by writing code. It must be emphasized that general commercial software still needs to be introduced later for more elaborate problems. This article reports on a strategy for teaching numerical methods in undergraduate engineering programs. It is directed to students, lecturers and researchers in the engineering field.
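
    The kind of iteration table students would otherwise build cell by cell can be written in a few lines. A minimal Python sketch using Newton's method for f(x) = x^2 - 2 (an illustrative example, not one taken from the article):

```python
# The kind of iteration table students would otherwise build cell by cell:
# Newton's method for f(x) = x^2 - 2 (an illustrative choice, not from the paper).
def newton_table(f, df, x0, steps=6):
    rows, x = [], x0
    for i in range(steps):
        x_next = x - f(x) / df(x)
        rows.append((i, x, f(x), x_next))
        x = x_next
    return rows

for i, x, fx, x_next in newton_table(lambda x: x * x - 2, lambda x: 2 * x, 1.0):
    print(f"iter {i}: x = {x:.8f}, f(x) = {fx:+.2e}, next x = {x_next:.8f}")
```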

  6. Automated detection of structural alerts (chemical fragments) in (eco)toxicology

    Directory of Open Access Journals (Sweden)

    Ronan Bureau

    2013-02-01

    Full Text Available This mini-review describes the evolution of different algorithms dedicated to the automated discovery of chemical fragments associated with (eco)toxicological endpoints. These structural alerts correspond to one of the most interesting approaches of in silico toxicology due to their direct link with specific toxicological mechanisms. A number of expert systems are already available but, since the first work in this field, which considered a binomial distribution of chemical fragments between two datasets, new data miners have been developed and applied with success in chemoinformatics. The frequency of a chemical fragment in a dataset is often at the core of the process for defining its toxicological relevance. However, recent progress in data mining provides new insights into the automated discovery of new rules. In particular, this review highlights the notion of Emerging Patterns, which can capture contrasts between classes of data.

  7. AUTOMATED DETECTION OF STRUCTURAL ALERTS (CHEMICAL FRAGMENTS) IN (ECO)TOXICOLOGY

    Directory of Open Access Journals (Sweden)

    Alban Lepailleur

    2013-02-01

    Full Text Available This mini-review describes the evolution of different algorithms dedicated to the automated discovery of chemical fragments associated with (eco)toxicological endpoints. These structural alerts correspond to one of the most interesting approaches of in silico toxicology due to their direct link with specific toxicological mechanisms. A number of expert systems are already available but, since the first work in this field, which considered a binomial distribution of chemical fragments between two datasets, new data miners have been developed and applied with success in chemoinformatics. The frequency of a chemical fragment in a dataset is often at the core of the process for defining its toxicological relevance. However, recent progress in data mining provides new insights into the automated discovery of new rules. In particular, this review highlights the notion of Emerging Patterns, which can capture contrasts between classes of data.

  8. Maxine: A spreadsheet for estimating dose from chronic atmospheric radioactive releases

    Energy Technology Data Exchange (ETDEWEB)

    Jannik, Tim [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Bell, Evaleigh [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Dixon, Kenneth [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-07-24

    MAXINE is an EXCEL© spreadsheet, which is used to estimate dose to individuals for routine and accidental atmospheric releases of radioactive materials. MAXINE does not contain an atmospheric dispersion model, but rather doses are estimated using air and ground concentrations as input. Minimal input is required to run the program and site specific parameters are used when possible. Complete code description, verification of models, and user’s manual have been included.

  9. Automation of data processing and calculation of retention parameters and thermodynamic data for gas chromatography

    Science.gov (United States)

    Makarycheva, A. I.; Faerman, V. A.

    2017-02-01

    An analysis of automation approaches is performed, and a programming solution for automating the processing of chromatographic data and their subsequent storage is developed with the help of a software package combining Mathcad and MS Excel spreadsheets. The proposed approach allows the data-processing algorithm to be modified and does not require the participation of programming experts. The approach provides calculation of retention times and retention volumes, specific retention volumes, differential molar free energies of adsorption, partial molar solution enthalpies and isosteric heats of adsorption. The developed solution is intended for use in a small research group and has been tested on a series of new gas chromatography sorbents. Retention parameters and thermodynamic sorption quantities were calculated for more than 20 analytes. The resulting data are provided in a form suitable for comparative analysis and make it possible to identify sorbents with the most favourable properties for specific analytical problems.

  10. Automating Groundwater Sampling At Hanford, The Next Step

    International Nuclear Information System (INIS)

    Connell, C.W.; Conley, S.F.; Hildebrand, R.D.; Cunningham, D.E.

    2010-01-01

    Historically, the groundwater monitoring activities at the Department of Energy's Hanford Site in southeastern Washington State have been very 'people intensive.' Approximately 1500 wells are sampled each year by field personnel or 'samplers.' These individuals have been issued pre-printed forms showing information about the well(s) for a particular sampling evolution. This information is taken from 2 official electronic databases: the Hanford Well Information System (HWIS) and the Hanford Environmental Information System (HEIS). The samplers used these hardcopy forms to document the groundwater samples and well water-levels. After recording the entries in the field, the samplers turned the forms in at the end of the day and other personnel posted the collected information onto a spreadsheet that was then printed and included in a log book. The log book was then used to make manual entries of the new information into the software application(s) for the HEIS and HWIS databases. A pilot project for automating this extremely tedious process was launched in 2008. Initially, the automation was focused on water-level measurements. Now, the effort is being extended to automate the meta-data associated with collecting groundwater samples. The project allowed electronic forms produced in the field by samplers to be used in a workflow process where the data are transferred to the database and the electronic form is filed in managed records - thus eliminating manually completed forms. Eliminating the manual forms and streamlining the data entry not only improved the accuracy of the information recorded, but also enhanced the efficiency and sampling capacity of field office personnel.

  11. Universal Verification Methodology Based Register Test Automation Flow.

    Science.gov (United States)

    Woo, Jae Hun; Cho, Yong Kwan; Park, Sun Kyu

    2016-05-01

    In today's SoC designs, the number of registers has increased along with the complexity of hardware blocks. Register validation is a time-consuming and error-prone task, so an efficient way to perform verification with less effort in a shorter time is needed. In this work, we propose a register test automation flow based on UVM (Universal Verification Methodology). UVM provides a standard mechanism, called a register model, to facilitate stimulus generation and functional checking of registers. However, it is not easy for designers to create register models for their functional blocks or to integrate the models into a test-bench environment, because doing so requires knowledge of SystemVerilog and the UVM libraries. Many commercial tools can generate register models from a register specification described in IP-XACT, but writing the specification in IP-XACT format is itself time-consuming. For easy creation of register models, we propose a spreadsheet-based register template that is translated into an IP-XACT description, from which register models can be generated with commercial tools. We also automate all the steps involved in integrating the test-bench and generating test-cases, so that designers may use the register model without detailed knowledge of UVM or SystemVerilog. This automation flow involves generating and connecting test-bench components (e.g., driver, checker, bus adaptor) and writing a test sequence for each type of register test-case. With the proposed flow, designers can save a considerable amount of time when verifying register functionality.
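
    The spreadsheet-to-IP-XACT translation step can be sketched as a simple row-to-XML conversion. The fragment below is only loosely modelled on IP-XACT; the column names and the XML layout are illustrative and do not constitute a valid IP-XACT schema instance or any vendor tool's input format.

```python
# Sketch of the "spreadsheet template to register description" step: rows of a
# CSV register sheet emitted as a simplified XML fragment only loosely modelled
# on IP-XACT. Column names and XML layout are illustrative; this is not a valid
# IP-XACT schema instance or any vendor tool's input format.
import csv
import xml.etree.ElementTree as ET

def registers_from_csv(path):
    block = ET.Element("addressBlock")
    with open(path, newline="") as f:
        for row in csv.DictReader(f):   # expects columns: name, offset, size, access
            reg = ET.SubElement(block, "register")
            ET.SubElement(reg, "name").text = row["name"]
            ET.SubElement(reg, "addressOffset").text = row["offset"]
            ET.SubElement(reg, "size").text = row["size"]
            ET.SubElement(reg, "access").text = row["access"]
    return ET.tostring(block, encoding="unicode")

print(registers_from_csv("register_sheet.csv"))
```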

  12. USE OF ELECTRONIC EDUCATIONAL RESOURCES WHEN TRAINING IN WORK WITH SPREADSHEETS

    Directory of Open Access Journals (Sweden)

    Х А Гербеков

    2017-12-01

    Full Text Available Tools for supporting training courses with information and communication technologies are now well developed: electronic textbooks and self-study manuals have been created for practically every field of study and subject area. Nevertheless, the industry of computer-based educational and methodological materials continues to grow and to reach new areas of development and adoption. This makes the development of electronic educational resources that meet modern educational requirements an increasingly urgent problem. Creating and organizing training courses that use electronic educational resources, particularly those based on Internet technologies, remains a difficult methodological task. The article considers questions connected with the development of electronic educational resources for studying the content line "Information technologies" of the school informatics course, in particular for studying spreadsheets. It also analyses the content of the school course and of the unified state examination from the point of view of how their tasks represent the "Information technologies" content line: mastering the technology of information processing in spreadsheets and the methods of visualizing data by means of charts and graphs.

  13. Data Analysis and Graphing in an Introductory Physics Laboratory: Spreadsheet versus Statistics Suite

    Science.gov (United States)

    Peterlin, Primoz

    2010-01-01

    Two methods of data analysis are compared: spreadsheet software and a statistics software suite. Their use is compared analysing data collected in three selected experiments taken from an introductory physics laboratory, which include a linear dependence, a nonlinear dependence and a histogram. The merits of each method are compared.

  14. Automated sampling and data processing derived from biomimetic membranes

    DEFF Research Database (Denmark)

    Perry, Mark; Vissing, Thomas; Boesen, P.

    2009-01-01

    Recent advances in biomimetic membrane systems have resulted in an increase in membrane lifetimes from hours to days and months. Long-lived membrane systems demand the development of both new automated monitoring equipment capable of measuring electrophysiological membrane characteristics and new data processing software to analyze and organize the large amounts of data generated. In this work, we developed an automated instrumental voltage clamp solution based on a custom-designed software controller application (the WaveManager), which enables automated on-line voltage clamp data acquisition applicable to long-time series experiments. We designed another software program for off-line data processing. The automation of the on-line voltage clamp data acquisition and off-line processing was furthermore integrated with a searchable database (DiscoverySheet (TM)) for efficient data management...

  15. Verification of Monitor unit calculations for eclipse Treatment Planning System by in- house developed spreadsheet

    Directory of Open Access Journals (Sweden)

    Hemalatha Athiyaman

    2018-04-01

    Conclusion: The spreadsheet was tested for most of the routine treatment sites and geometries. It has good agreement with the Eclipse TPS version 13.8 for homogeneous treatment sites such as head and neck and carcinoma cervix.

  16. (abstract) Simple Spreadsheet Thermal Models for Cryogenic Applications

    Science.gov (United States)

    Nash, A. E.

    1994-01-01

    Self consistent circuit analog thermal models, that can be run in commercial spreadsheet programs on personal computers, have been created to calculate the cooldown and steady state performance of cryogen cooled Dewars. The models include temperature dependent conduction and radiation effects. The outputs of the models provide temperature distribution and Dewar performance information. These models have been used to analyze the Cryogenic Telescope Test Facility (CTTF). The facility will be on line in early 1995 for its first user, the Infrared Telescope Technology Testbed (ITTT), for the Space Infrared Telescope Facility (SIRTF) at JPL. The model algorithm as well as a comparison of the model predictions and actual performance of this facility will be presented.

  17. Understanding Gauss’s law using spreadsheets

    Science.gov (United States)

    Baird, William H.

    2013-09-01

    Some of the results from the electrostatics portion of introductory physics are particularly difficult for students to understand and/or believe. For students who have yet to take vector calculus, Gauss’s law is far from obvious and may seem more difficult than Coulomb’s. When these same students are told that the minimum potential energy for charges added to a conductor is realized when all charges are on the surface, they may have a hard time believing that the energy would not be lowered if just one of those charges were moved from the surface to the interior of a conductor. Investigating these ideas using Coulomb’s law and/or the formula for the potential energy of a system of discrete charges might be tempting, but as the number of charges climbs past a few the calculations become tedious. A spreadsheet enables students to perform these for a hundred or more charges and confirm the familiar results.
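
    A minimal sketch of such a check: place N like charges quasi-uniformly on a unit sphere, compute the total electrostatic energy, then move one charge to the centre and compare. The quasi-uniform placement and the unit system (k*q^2 = 1, R = 1) are assumptions made for illustration.

```python
# Energy check with discrete charges: N like charges spread quasi-uniformly over
# a unit sphere versus the same set with one charge moved to the centre.
# Units are chosen so that k*q^2 = 1 and R = 1 (an assumption for illustration).
import math

def fibonacci_sphere(n, radius=1.0):
    """Roughly uniform points on a sphere via a golden-angle spiral."""
    points, golden = [], math.pi * (3.0 - math.sqrt(5.0))
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n
        r = math.sqrt(1.0 - z * z)
        theta = golden * i
        points.append((radius * r * math.cos(theta), radius * r * math.sin(theta), radius * z))
    return points

def energy(points):
    u = 0.0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            u += 1.0 / math.dist(points[i], points[j])
    return u

charges = fibonacci_sphere(100)
u_surface = energy(charges)
u_moved = energy([(0.0, 0.0, 0.0)] + charges[1:])   # first charge moved to the centre
print(f"All charges on the surface: U = {u_surface:.2f}")
print(f"One charge at the centre:   U = {u_moved:.2f} (higher, as expected)")
```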

  18. The parameter spreadsheets and their applications

    International Nuclear Information System (INIS)

    Schwitters, R.; Chao, A.; Chou, W.; Peterson, J.

    1993-01-01

    This paper is to announce that a set of parameter spreadsheets, using the Microsoft EXCEL software, has been developed for the SSC (and also for the LHC). In this program, the input (or control) parameters and the derived parameters are linked by equations that express the accelerator physics involved. A subgroup of parameters that are considered critical, or possible bottlenecks, has been highlighted under the category of "Flags". Given certain performance goals, one can use this program to "tune" the input parameters in such a way that the flagged parameters do not exceed their acceptable range. During the past years, this program has been employed for the following purposes: (a) To guide the machine designs for various operation scenarios, (b) To generate a parameter list that is self-consistent, and (c) To study the impact of some proposed parameter changes (e.g., different choices of the rf frequency and bunch spacing)

  19. Use of a commercial spreadsheet for quality control in radiotherapy

    International Nuclear Information System (INIS)

    Sales, D.A.G.; Batista, D.V.S.

    2001-01-01

    This work presents the results obtained from the development of a spreadsheet for quality control of the physical and clinical dosimetry of a radiotherapy service. It was developed using the resources of commercial software so as to provide an independent verification of the manual calculations and treatment planning system calculations used in the routine procedures of the radiotherapy service of the Instituto Nacional de Cancer. It was validated against the current manual calculation methods proposed in the literature and against the results of the treatment planning system for test cases. (author)

  20. Scientific workflows as productivity tools for drug discovery.

    Science.gov (United States)

    Shon, John; Ohkawa, Hitomi; Hammer, Juergen

    2008-05-01

    Large pharmaceutical companies annually invest tens to hundreds of millions of US dollars in research informatics to support their early drug discovery processes. Traditionally, most of these investments are designed to increase the efficiency of drug discovery. The introduction of do-it-yourself scientific workflow platforms has enabled research informatics organizations to shift their efforts toward scientific innovation, ultimately resulting in a possible increase in return on their investments. Unlike the handling of most scientific data and application integration approaches, researchers apply scientific workflows to in silico experimentation and exploration, leading to scientific discoveries that lie beyond automation and integration. This review highlights some key requirements for scientific workflow environments in the pharmaceutical industry that are necessary for increasing research productivity. Examples of the application of scientific workflows in research and a summary of recent platform advances are also provided.

  1. Proteomic biomarker discovery in 1000 human plasma samples with mass spectrometry

    DEFF Research Database (Denmark)

    Cominetti, Ornella; Núñez Galindo, Antonio; Corthésy, John

    2016-01-01

    automated proteomic biomarker discovery workflow. Herein, we have applied this approach to analyze 1000 plasma samples from the multicentered human dietary intervention study "DiOGenes". Study design, sample randomization, tracking, and logistics were the foundations of our large-scale study. We checked...

  2. Can automation in radiotherapy reduce costs?

    Science.gov (United States)

    Massaccesi, Mariangela; Corti, Michele; Azario, Luigi; Balducci, Mario; Ferro, Milena; Mantini, Giovanna; Mattiucci, Gian Carlo; Valentini, Vincenzo

    2015-01-01

    Computerized automation is likely to play an increasingly important role in radiotherapy. The objective of this study was to report the results of the first part of a program to implement a model for economic evaluation based on the micro-costing method. To test the efficacy of the model, the financial impact of the introduction of an automation tool was estimated. A single- and multi-center validation of the model by a prospective collection of data is planned as the second step of the program. The model was implemented by using an interactive spreadsheet (Microsoft Excel, 2010). The variables to be included were identified across three components: productivity, staff, and equipment. To calculate staff requirements, the workflow of the Gemelli ART center was mapped out and relevant workload measures were defined. Profit and loss, productivity and staffing were identified as significant outcomes. Results were presented in terms of earnings before interest and taxes (EBIT). Three different scenarios were hypothesized: baseline situation at Gemelli ART (scenario 1); reduction by 2 minutes of the average duration of treatment fractions (scenario 2); and increased incidence of advanced treatment modalities (scenario 3). By using the model, predicted EBIT values for each scenario were calculated across a period of eight years (from 2015 to 2022). For both scenarios 2 and 3, costs are expected to increase slightly compared with the baseline situation, mainly owing to a small increase in clinical personnel costs. However, in both cases EBIT values are more favorable than in the baseline situation (EBIT values: scenario 1, 27%; scenario 2, 30%; scenario 3, 28% of revenues). A model based on a micro-costing method was able to estimate the financial consequences of the introduction of an automation tool in our radiotherapy department. A prospective collection of data at Gemelli ART and in a consortium of centers is currently under way to prospectively validate the model.

  3. Random Numbers Demonstrate the Frequency of Type I Errors: Three Spreadsheets for Class Instruction

    Science.gov (United States)

    Duffy, Sean

    2010-01-01

    This paper describes three spreadsheet exercises demonstrating the nature and frequency of type I errors using random number generation. The exercises are designed specifically to address issues related to testing multiple relations using correlation (Demonstration I), t tests varying in sample size (Demonstration II) and multiple comparisons…
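
    The underlying demonstration can be reproduced outside a spreadsheet. The Python sketch below (requiring NumPy and SciPy) draws both samples from the same population, so every significant t test is a Type I error; the sample size and number of trials are illustrative choices.

```python
# Simulating the frequency of Type I errors: both samples come from the same
# normal population, so every "significant" t test is a false positive. With
# alpha = 0.05, roughly 5% of the comparisons should be flagged. Requires
# NumPy and SciPy; sample size and number of trials are illustrative choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, trials, n = 0.05, 10_000, 30
false_positives = 0
for _ in range(trials):
    a = rng.normal(size=n)
    b = rng.normal(size=n)
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_positives += 1
print(f"Observed Type I error rate: {false_positives / trials:.3f} (expected ~{alpha})")
```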

  4. CROSS-CORRELATION MODELLING OF SURFACE WATER – GROUNDWATER INTERACTION USING THE EXCEL SPREADSHEET APPLICATION

    Directory of Open Access Journals (Sweden)

    Kristijan Posavec

    2017-01-01

    Full Text Available Modelling the responses of groundwater levels in aquifer systems, which occur as a reaction to changes in aquifer system boundary conditions such as river or stream stages, is commonly carried out using statistical methods, namely correlation, cross-correlation and regression methods. Although correlation and regression analysis tools are readily available in Microsoft Excel, a widely applied spreadsheet industry standard, a cross-correlation analysis tool is missing. As part of research into groundwater pressure propagation into the alluvial aquifer systems of the Sava and Drava/Danube River catchments following river stage rises, focused on estimating groundwater pressure travel times in aquifers, an Excel spreadsheet data analysis application for cross-correlation modelling has been designed and used in modelling surface water – groundwater interaction. Examples of field data from the Zagreb aquifer system and the Kopački rit Nature Park aquifer system are used to illustrate the usefulness of the cross-correlation application.
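
    For readers who want to reproduce the cross-correlation idea outside Excel, a minimal Python sketch is given below; the synthetic river-stage and groundwater-level series and the hypothetical 7-day lag are illustrative assumptions, not the Zagreb or Kopački rit data:

      # Estimate pressure travel time as the lag of the cross-correlation peak.
      import numpy as np

      rng = np.random.default_rng(0)
      days = np.arange(200)
      river = np.sin(2 * np.pi * days / 50) + 0.1 * rng.normal(size=days.size)
      true_lag = 7                                   # hypothetical 7-day delay
      gw = np.roll(river, true_lag) + 0.1 * rng.normal(size=days.size)

      def cross_corr(x, y, max_lag):
          """Normalized cross-correlation of x against y for lags 0..max_lag."""
          x = (x - x.mean()) / x.std()
          y = (y - y.mean()) / y.std()
          lags = np.arange(0, max_lag + 1)
          ccf = [np.mean(x[: x.size - k] * y[k:]) for k in lags]
          return lags, np.array(ccf)

      lags, ccf = cross_corr(river, gw, max_lag=30)
      print("Estimated travel time (days):", lags[np.argmax(ccf)])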

  5. Bead-based screening in chemical biology and drug discovery

    DEFF Research Database (Denmark)

    Komnatnyy, Vitaly V.; Nielsen, Thomas Eiland; Qvortrup, Katrine

    2018-01-01

    libraries for early drug discovery. Among the various library forms, the one-bead-one-compound (OBOC) library, where each bead carries many copies of a single compound, holds the greatest potential for the rapid identification of novel hits against emerging drug targets. However, this potential has not yet...... been fully realized due to a number of technical obstacles. In this feature article, we review the progress that has been made towards bead-based library screening and applications to the discovery of bioactive compounds. We identify the key challenges of this approach and highlight key steps needed......High-throughput screening is an important component of the drug discovery process. The screening of libraries containing hundreds of thousands of compounds requires assays amenable to miniaturisation and automation. Combinatorial chemistry holds a unique promise to deliver structurally diverse

  6. Teaching Graphical Simulations of Fourier Series Expansion of Some Periodic Waves Using Spreadsheets

    Science.gov (United States)

    Singh, Iqbal; Kaur, Bikramjeet

    2018-01-01

    The present article demonstrates a way of programming using an Excel spreadsheet to teach Fourier series expansion in school/colleges without the knowledge of any typical programming language. By using this, a student learns to approximate partial sum of the n terms of Fourier series for some periodic signals such as square wave, saw tooth wave,…
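
    As a rough illustration of what such a spreadsheet computes, the partial sum of the Fourier series of a unit square wave can be written in a few lines; the period and number of terms below are arbitrary choices, not values from the article:

      # Partial sum S_N(t) = (4/pi) * sum over odd k of sin(k*w*t)/k for a square wave.
      import numpy as np

      def square_wave_partial_sum(t, n_terms, period=2 * np.pi):
          w = 2 * np.pi / period
          s = np.zeros_like(t)
          for m in range(n_terms):
              k = 2 * m + 1                  # odd harmonics only
              s += (4 / np.pi) * np.sin(k * w * t) / k
          return s

      t = np.linspace(0, 4 * np.pi, 9)
      print(np.round(square_wave_partial_sum(t, n_terms=25), 3))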

  7. Automation Hooks Architecture for Flexible Test Orchestration - Concept Development and Validation

    Science.gov (United States)

    Lansdowne, C. A.; Maclean, John R.; Winton, Chris; McCartney, Pat

    2011-01-01

    The Automation Hooks Architecture Trade Study for Flexible Test Orchestration sought a standardized data-driven alternative to conventional automated test programming interfaces. The study recommended composing the interface using multicast DNS (mDNS/SD) service discovery, Representational State Transfer (RESTful) Web Services, and Automatic Test Markup Language (ATML). We describe additional efforts to rapidly mature the Automation Hooks Architecture candidate interface definition by validating it in a broad spectrum of applications. These activities have allowed us to further refine our concepts and provide observations directed toward objectives of economy, scalability, versatility, performance, severability, maintainability, scriptability and others.

  8. The Adam and Eve Robot Scientists for the Automated Discovery of Scientific Knowledge

    Science.gov (United States)

    King, Ross

    A Robot Scientist is a physically implemented robotic system that applies techniques from artificial intelligence to execute cycles of automated scientific experimentation. A Robot Scientist can automatically execute cycles of hypothesis formation, selection of efficient experiments to discriminate between hypotheses, execution of experiments using laboratory automation equipment, and analysis of results. The motivation for developing Robot Scientists is to better understand science, and to make scientific research more efficient. The Robot Scientist `Adam' was the first machine to autonomously discover scientific knowledge: that is, to both form and experimentally confirm novel hypotheses. Adam worked in the domain of yeast functional genomics. The Robot Scientist `Eve' was originally developed to automate early-stage drug development, with specific application to neglected tropical diseases such as malaria and African sleeping sickness. We are now adapting Eve to work on cancer. We are also teaching Eve to autonomously extract information from the scientific literature.

  9. Automation of PCXMC and ImPACT for NASA Astronaut Medical Imaging Dose and Risk Tracking

    Science.gov (United States)

    Bahadori, Amir; Picco, Charles; Flores-McLaughlin, John; Shavers, Mark; Semones, Edward

    2011-01-01

    Purpose: To automate astronaut organ and effective dose calculations from occupational X-ray and computed tomography (CT) examinations incorporating PCXMC and ImPACT tools, and to estimate the associated lifetime cancer risk per the National Council on Radiation Protection and Measurements (NCRP) using MATLAB(R). Methods: NASA follows guidance from the NCRP on its operational radiation safety program for astronauts. NCRP Report 142 recommends that astronauts be informed of the cancer risks from reported exposures to ionizing radiation from medical imaging. MATLAB(R) code was written to retrieve exam parameters for medical imaging procedures from a NASA database, calculate the associated dose and risk, and return results to the database, using the Microsoft .NET Framework. This code interfaces with the PCXMC executable and emulates the ImPACT Excel spreadsheet to calculate organ doses from X-rays and CTs, respectively, eliminating the need to use the PCXMC graphical user interface (except for a few special cases) and the ImPACT spreadsheet. Results: Using MATLAB(R) code to interface with PCXMC and replicate the ImPACT dose calculation allowed for rapid evaluation of multiple medical imaging exams. The user inputs the exam parameter data into the database and runs the code. Based on the imaging modality and input parameters, the organ doses are calculated. Output files are created for record, and organ doses, effective dose, and cancer risks associated with each exam are written to the database. Annual and post-flight exposure reports, which are used by the flight surgeon to brief the astronaut, are generated from the database. Conclusions: Automating PCXMC and ImPACT for evaluation of NASA astronaut medical imaging radiation procedures allowed for a traceable and rapid method for tracking projected cancer risks associated with over 12,000 exposures. This code will be used to evaluate future medical radiation exposures, and can easily be modified to accommodate changes to the risk

  10. Number Theory, Dialogue, and the Use of Spreadsheets in Teacher Education

    Directory of Open Access Journals (Sweden)

    Sergei Abramovich

    2011-04-01

    Full Text Available This paper demonstrates the use of a spreadsheet in teaching topics in elementary number theory. It emphasizes both the power and deficiency of inductive reasoning using a number of historically significant examples. The notion of computational experiment as a modern approach to the teaching of mathematics is discussed. The paper, grounded in a teacher-student dialogue as an instructional method, is a reflection on the author’s work over the years with prospective teachers of secondary mathematics.

  11. Automated Diatom Analysis Applied to Traditional Light Microscopy: A Proof-of-Concept Study

    Science.gov (United States)

    Little, Z. H. L.; Bishop, I.; Spaulding, S. A.; Nelson, H.; Mahoney, C.

    2017-12-01

    Diatom identification and enumeration by high resolution light microscopy is required for many areas of research and water quality assessment. Such analyses, however, are both expertise- and labor-intensive. These challenges motivate the need for an automated process to efficiently and accurately identify and enumerate diatoms. Improvements in particle analysis software have increased the likelihood that diatom enumeration can be automated. VisualSpreadsheet software provides a possible solution for automated particle analysis of high-resolution light microscope diatom images. We applied the software, independent of its complementary FlowCam hardware, to automated analysis of light microscope images containing diatoms. Through numerous trials, we arrived at threshold settings that correctly segmented 67% of the total possible diatom valves and fragments from broad fields of view (183 light microscope images containing 255 diatom particles were examined; of these, 216 diatom valves and valve fragments were processed, with 170 properly analyzed and focused upon by the software). Manual analysis of the images yielded 255 particles in 400 seconds, whereas the software yielded a total of 216 particles in 68 seconds, highlighting an approximately five-fold advantage in particle analysis time for the software. As in past efforts, incomplete or incorrect recognition was found for images with multiple valves in contact or valves with little contrast. The software has the potential to be an effective tool in assisting taxonomists with diatom enumeration by completing a large portion of analyses. Benefits and limitations of the approach are presented to allow for development of future work in image analysis and automated enumeration of traditional light microscope images containing diatoms.

  12. Natural Products for Drug Discovery in the 21st Century: Innovations for Novel Drug Discovery

    Directory of Open Access Journals (Sweden)

    Nicholas Ekow Thomford

    2018-05-01

    Full Text Available The therapeutic properties of plants have been recognised since time immemorial. Many pathological conditions have been treated using plant-derived medicines. These medicines are used as concoctions or concentrated plant extracts without isolation of active compounds. Modern medicine, however, requires the isolation and purification of one or two active compounds. There are, however, many global health challenges with diseases such as cancer, degenerative diseases, HIV/AIDS and diabetes, for which modern medicine is struggling to provide cures. Many times the isolation of the “active compound” has made the compound ineffective. Drug discovery is a multidimensional problem requiring several parameters of both natural and synthetic compounds, such as safety, pharmacokinetics and efficacy, to be evaluated during drug candidate selection. The advent of the latest technologies that enhance drug design hypotheses, such as Artificial Intelligence and the use of ‘organ-on chip’ and microfluidics technologies, means that automation has become part of drug discovery. This has resulted in increased speed in drug discovery and evaluation of the safety, pharmacokinetics and efficacy of candidate compounds whilst allowing novel ways of drug design and synthesis based on natural compounds. Recent advances in analytical and computational techniques have opened new avenues to process complex natural products and to use their structures to derive new and innovative drugs. Indeed, we are in the era of computational molecular design, as applied to natural products. Predictive computational software has contributed to the discovery of molecular targets of natural products and their derivatives. In the future, the use of quantum computing, computational software and databases in modelling molecular interactions and predicting features and parameters needed for drug development, such as pharmacokinetics and pharmacodynamics, will result in fewer false positive leads in drug

  13. Natural Products for Drug Discovery in the 21st Century: Innovations for Novel Drug Discovery.

    Science.gov (United States)

    Thomford, Nicholas Ekow; Senthebane, Dimakatso Alice; Rowe, Arielle; Munro, Daniella; Seele, Palesa; Maroyi, Alfred; Dzobo, Kevin

    2018-05-25

    The therapeutic properties of plants have been recognised since time immemorial. Many pathological conditions have been treated using plant-derived medicines. These medicines are used as concoctions or concentrated plant extracts without isolation of active compounds. Modern medicine, however, requires the isolation and purification of one or two active compounds. There are, however, many global health challenges with diseases such as cancer, degenerative diseases, HIV/AIDS and diabetes, for which modern medicine is struggling to provide cures. Many times the isolation of the "active compound" has made the compound ineffective. Drug discovery is a multidimensional problem requiring several parameters of both natural and synthetic compounds, such as safety, pharmacokinetics and efficacy, to be evaluated during drug candidate selection. The advent of the latest technologies that enhance drug design hypotheses, such as Artificial Intelligence and the use of 'organ-on chip' and microfluidics technologies, means that automation has become part of drug discovery. This has resulted in increased speed in drug discovery and evaluation of the safety, pharmacokinetics and efficacy of candidate compounds whilst allowing novel ways of drug design and synthesis based on natural compounds. Recent advances in analytical and computational techniques have opened new avenues to process complex natural products and to use their structures to derive new and innovative drugs. Indeed, we are in the era of computational molecular design, as applied to natural products. Predictive computational software has contributed to the discovery of molecular targets of natural products and their derivatives. In the future, the use of quantum computing, computational software and databases in modelling molecular interactions and predicting features and parameters needed for drug development, such as pharmacokinetics and pharmacodynamics, will result in fewer false positive leads in drug development. This review

  14. A Framework for Automatic Web Service Discovery Based on Semantics and NLP Techniques

    Directory of Open Access Journals (Sweden)

    Asma Adala

    2011-01-01

    Full Text Available As a greater number of Web Services are made available today, automatic discovery is recognized as an important task. To promote the automation of service discovery, different semantic languages have been created that allow describing the functionality of services in a machine interpretable form using Semantic Web technologies. The problem is that users do not have intimate knowledge about semantic Web service languages and related toolkits. In this paper, we propose a discovery framework that enables semantic Web service discovery based on keywords written in natural language. We describe a novel approach for automatic discovery of semantic Web services which employs Natural Language Processing techniques to match a user request, expressed in natural language, with a semantic Web service description. Additionally, we present an efficient semantic matching technique to compute the semantic distance between ontological concepts.

  15. Current status and future prospects for enabling chemistry technology in the drug discovery process.

    Science.gov (United States)

    Djuric, Stevan W; Hutchins, Charles W; Talaty, Nari N

    2016-01-01

    This review covers recent advances in the implementation of enabling chemistry technologies into the drug discovery process. Areas covered include parallel synthesis chemistry, high-throughput experimentation, automated synthesis and purification methods, flow chemistry methodology including photochemistry, electrochemistry, and the handling of "dangerous" reagents. Also featured are advances in the "computer-assisted drug design" area and the expanding application of novel mass spectrometry-based techniques to a wide range of drug discovery activities.

  16. Teaching graphical simulations of Fourier series expansion of some periodic waves using spreadsheets

    Science.gov (United States)

    Singh, Iqbal; Kaur, Bikramjeet

    2018-05-01

    The present article demonstrates a way of programming using an Excel spreadsheet to teach Fourier series expansion in school/colleges without the knowledge of any typical programming language. By using this, a student learns to approximate partial sum of the n terms of Fourier series for some periodic signals such as square wave, saw tooth wave, half wave rectifier and full wave rectifier signals.

  17. Numerical Modelling with Spreadsheets as a Means to Promote STEM to High School Students

    Science.gov (United States)

    Benacka, Jan

    2016-01-01

    The article gives an account of an experiment in which sixty-eight high school students of age 16 - 19 developed spreadsheet applications that simulated fall and projectile motion in the air. The students applied the Euler method to solve the governing differential equations. The aim was to promote STEM to the students and motivate them to study…
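
    A minimal sketch of the Euler scheme for projectile motion with air resistance is shown below; the drag model (quadratic) and the parameter values are illustrative assumptions, not those used by the students:

      # Explicit Euler integration of projectile motion with quadratic air drag.
      import numpy as np

      g, k, dt = 9.81, 0.02, 0.01          # gravity, drag coefficient per unit mass, time step
      x, y = 0.0, 0.0
      vx, vy = 30.0 * np.cos(np.pi / 4), 30.0 * np.sin(np.pi / 4)

      while y >= 0.0:
          v = np.hypot(vx, vy)
          ax, ay = -k * v * vx, -g - k * v * vy    # drag opposes the velocity vector
          x, y = x + vx * dt, y + vy * dt          # explicit Euler position update
          vx, vy = vx + ax * dt, vy + ay * dt      # explicit Euler velocity update

      print(f"Range with drag: {x:.1f} m")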

  18. Pre-service teachers’ TPACK competencies for spreadsheet integration: insights from a mathematics-specific instructional technology course

    NARCIS (Netherlands)

    Agyei, D.D.; Voogt, J.M.

    2015-01-01

    This article explored the impact of strategies applied in a mathematics instructional technology course for developing technology integration competencies, in particular in the use of spreadsheets, in pre-service teachers. In this respect, 104 pre-service mathematics teachers from a teacher training

  19. Methods for Automated and Continuous Commissioning of Building Systems

    Energy Technology Data Exchange (ETDEWEB)

    Larry Luskay; Michael Brambley; Srinivas Katipamula

    2003-04-30

    Avoidance of poorly installed HVAC systems is best accomplished at the close of construction by having a building and its systems put ''through their paces'' with a well-conducted commissioning process. This research project focused on developing key components to enable the development of tools that will automatically detect and correct equipment operating problems, thus providing continuous and automatic commissioning of the HVAC systems throughout the life of a facility. A study of pervasive operating problems revealed that the following would most benefit from an automated and continuous commissioning process: (1) faulty economizer operation; (2) malfunctioning sensors; (3) malfunctioning valves and dampers; and (4) access to project design data. Methodologies for detecting system operation faults in these areas were developed and validated in ''bare-bones'' forms within standard software such as spreadsheets, databases, and statistical or mathematical packages. Demonstrations included flow diagrams and simplified mock-up applications. Techniques to manage data were demonstrated by illustrating how test forms could be populated with original design information and the recommended sequence of operation for equipment systems. The proposed tools would use measured data, design data, and equipment operating parameters to diagnose system problems. Steps for future research are suggested to help move toward practical application of automated commissioning and its high potential to improve equipment availability, increase occupant comfort, and extend the life of system equipment.

  20. Current status and future prospects for enabling chemistry technology in the drug discovery process

    Science.gov (United States)

    Djuric, Stevan W.; Hutchins, Charles W.; Talaty, Nari N.

    2016-01-01

    This review covers recent advances in the implementation of enabling chemistry technologies into the drug discovery process. Areas covered include parallel synthesis chemistry, high-throughput experimentation, automated synthesis and purification methods, flow chemistry methodology including photochemistry, electrochemistry, and the handling of “dangerous” reagents. Also featured are advances in the “computer-assisted drug design” area and the expanding application of novel mass spectrometry-based techniques to a wide range of drug discovery activities. PMID:27781094

  1. Automated sampling and data processing derived from biomimetic membranes

    International Nuclear Information System (INIS)

    Perry, M; Vissing, T; Hansen, J S; Nielsen, C H; Boesen, T P; Emneus, J

    2009-01-01

    Recent advances in biomimetic membrane systems have resulted in an increase in membrane lifetimes from hours to days and months. Long-lived membrane systems demand the development of both new automated monitoring equipment capable of measuring electrophysiological membrane characteristics and new data processing software to analyze and organize the large amounts of data generated. In this work, we developed an automated instrumental voltage clamp solution based on a custom-designed software controller application (the WaveManager), which enables automated on-line voltage clamp data acquisition applicable to long-time series experiments. We designed another software program for off-line data processing. The automation of the on-line voltage clamp data acquisition and off-line processing was furthermore integrated with a searchable database (DiscoverySheet(TM)) for efficient data management. The combined solution provides a cost efficient and fast way to acquire, process and administrate large amounts of voltage clamp data that may be too laborious and time consuming to handle manually. (communication)

  2. Automated sampling and data processing derived from biomimetic membranes

    Energy Technology Data Exchange (ETDEWEB)

    Perry, M; Vissing, T; Hansen, J S; Nielsen, C H [Aquaporin A/S, Diplomvej 377, DK-2800 Kgs. Lyngby (Denmark); Boesen, T P [Xefion ApS, Kildegaardsvej 8C, DK-2900 Hellerup (Denmark); Emneus, J, E-mail: Claus.Nielsen@fysik.dtu.d [DTU Nanotech, Technical University of Denmark, DK-2800 Kgs. Lyngby (Denmark)

    2009-12-15

    Recent advances in biomimetic membrane systems have resulted in an increase in membrane lifetimes from hours to days and months. Long-lived membrane systems demand the development of both new automated monitoring equipment capable of measuring electrophysiological membrane characteristics and new data processing software to analyze and organize the large amounts of data generated. In this work, we developed an automated instrumental voltage clamp solution based on a custom-designed software controller application (the WaveManager), which enables automated on-line voltage clamp data acquisition applicable to long-time series experiments. We designed another software program for off-line data processing. The automation of the on-line voltage clamp data acquisition and off-line processing was furthermore integrated with a searchable database (DiscoverySheet(TM)) for efficient data management. The combined solution provides a cost efficient and fast way to acquire, process and administrate large amounts of voltage clamp data that may be too laborious and time consuming to handle manually. (communication)

  3. Engine Icing Data - An Analytics Approach

    Science.gov (United States)

    Fitzgerald, Brooke A.; Flegel, Ashlie B.

    2017-01-01

    Engine icing researchers at the NASA Glenn Research Center use the Escort data acquisition system in the Propulsion Systems Laboratory (PSL) to generate and collect a tremendous amount of data every day. Currently these researchers spend countless hours processing and formatting their data, selecting important variables, and plotting relationships between variables, all by hand, generally analyzing data in a spreadsheet-style program (such as Microsoft Excel). Though spreadsheet-style analysis is familiar and intuitive to many, processing data in spreadsheets is often unreproducible and small mistakes are easily overlooked. Spreadsheet-style analysis is also time inefficient. The same formatting, processing, and plotting procedure has to be repeated for every dataset, which leads to researchers performing the same tedious data munging process over and over instead of making discoveries within their data. This paper documents a data analysis tool written in Python hosted in a Jupyter notebook that vastly simplifies the analysis process. From the file path of any folder containing time series datasets, this tool batch loads every dataset in the folder, processes the datasets in parallel, and ingests them into a widget where users can search for and interactively plot subsets of columns in a number of ways with a click of a button, easily and intuitively comparing their data and discovering interesting dynamics. Furthermore, comparing variables across data sets and integrating video data (while extremely difficult with spreadsheet-style programs) is quite simplified in this tool. This tool has also gathered interest outside the engine icing branch, and will be used by researchers across NASA Glenn Research Center. This project exemplifies the enormous benefit of automating data processing, analysis, and visualization, and will help researchers move from raw data to insight in a much smaller time frame.
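
    A minimal sketch of the batch-loading step described above, assuming a hypothetical folder of CSV time-series exports (this is not the NASA tool itself):

      # Load every CSV in a folder into a dict of DataFrames keyed by file name.
      from pathlib import Path
      import pandas as pd

      def load_folder(folder):
          datasets = {}
          for path in sorted(Path(folder).glob("*.csv")):
              datasets[path.stem] = pd.read_csv(path)
          return datasets

      # Example usage (assumes ./runs/ contains comma-separated time-series exports):
      # data = load_folder("runs")
      # for name, df in data.items():
      #     print(name, df.shape, list(df.columns)[:5])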

  4. A Spreadsheet-Based Visualized Mindtool for Improving Students' Learning Performance in Identifying Relationships between Numerical Variables

    Science.gov (United States)

    Lai, Chiu-Lin; Hwang, Gwo-Jen

    2015-01-01

    In this study, a spreadsheet-based visualized Mindtool was developed for improving students' learning performance when finding relationships between numerical variables by engaging them in reasoning and decision-making activities. To evaluate the effectiveness of the proposed approach, an experiment was conducted on the "phenomena of climate…

  5. Recent advances in inkjet dispensing technologies: applications in drug discovery.

    Science.gov (United States)

    Zhu, Xiangcheng; Zheng, Qiang; Yang, Hu; Cai, Jin; Huang, Lei; Duan, Yanwen; Xu, Zhinan; Cen, Peilin

    2012-09-01

    Inkjet dispensing technology is a promising fabrication methodology widely applied in drug discovery. Its automated, programmable characteristics and high-throughput efficiency make this approach potentially very useful in miniaturizing the design patterns for assays and drug screening. Various custom-made inkjet dispensing systems as well as specialized bio-inks and substrates have been developed and applied to fulfill the increasing demands of basic drug discovery studies. The incorporation of other modern technologies has further exploited the potential of inkjet dispensing technology in drug discovery and development. This paper reviews and discusses the recent developments and practical applications of inkjet dispensing technology in several areas of drug discovery and development, including fundamental assays of cells and proteins, microarrays, biosensors, tissue engineering, and basic biological and pharmaceutical studies. Progress in a number of areas of research, including biomaterials, inkjet mechanical systems and modern analytical techniques, as well as the exploration and accumulation of profound biological knowledge, has enabled different inkjet dispensing technologies to be developed and adapted for high-throughput pattern fabrication and miniaturization. This in turn presents a great opportunity to propel inkjet dispensing technology into drug discovery.

  6. Peningkatan Hasil Belajar Operasional Spreadsheet Jenis dan Fungsi dengan Rumus Statistik Akuntansi melalui Demonstrasi dan Presentasi

    Directory of Open Access Journals (Sweden)

    NURBAITI SALPIDA GINAYANTI

    2016-08-01

    Full Text Available This research aimed to determine whether demonstration and presentation models can improve students' learning outcomes in operating spreadsheet types and functions with statistical accounting formulas in class X Accounting 1 at SMKN 48 in the 2014/2015 academic year. The research was conducted from August to November 2014. The method was classroom action research (PTK), carried out in two cycles with demonstration and presentation used as the learning model. Each cycle consisted of three meetings, with a post-test administered at the third meeting. The research indicators were achieved: as expected, in the second cycle the proportion of students reaching the highest scores was 97.45% and the average value was 80.79. In conclusion, when implemented appropriately, demonstration and presentation can improve students' learning outcomes in operating spreadsheet types and functions with statistical formulas in class X Accounting 1 at SMKN 48 Jakarta.

  7. Advances in Computer, Communication, Control and Automation

    CERN Document Server

    2011 International Conference on Computer, Communication, Control and Automation

    2012-01-01

    This volume includes a set of selected papers extended and revised from the 2011 International Conference on Computer, Communication, Control and Automation (3CA 2011), which was held in Zhuhai, China, November 19-20, 2011. Topics covered include signal and image processing, speech and audio processing, video processing and analysis, artificial intelligence, computing and intelligent systems, machine learning, sensor and neural networks, knowledge discovery and data mining, fuzzy mathematics and applications, knowledge-based systems, hybrid systems modeling and design, risk analysis and management, and system modeling and simulation. We hope that researchers, graduate students and other interested readers benefit scientifically from the proceedings and also find it stimulating in the process.

  8. Heat as a groundwater tracer in shallow and deep heterogeneous media: Analytical solution, spreadsheet tool, and field applications

    Science.gov (United States)

    Kurylyk, Barret L.; Irvine, Dylan J.; Carey, Sean K.; Briggs, Martin A.; Werkema, Dale D.; Bonham, Mariah

    2017-01-01

    Groundwater flow advects heat, and thus, the deviation of subsurface temperatures from an expected conduction‐dominated regime can be analysed to estimate vertical water fluxes. A number of analytical approaches have been proposed for using heat as a groundwater tracer, and these have typically assumed a homogeneous medium. However, heterogeneous thermal properties are ubiquitous in subsurface environments, both at the scale of geologic strata and at finer scales in streambeds. Herein, we apply the analytical solution of Shan and Bodvarsson (2004), developed for estimating vertical water fluxes in layered systems, in two new environments distinct from previous vadose zone applications. The utility of the solution for studying groundwater‐surface water exchange is demonstrated using temperature data collected from an upwelling streambed with sediment layers, and a simple sensitivity analysis using these data indicates the solution is relatively robust. Also, a deeper temperature profile recorded in a borehole in South Australia is analysed to estimate deeper water fluxes. The analytical solution is able to match observed thermal gradients, including the change in slope at sediment interfaces. Results indicate that not accounting for layering can yield errors in the magnitude and even direction of the inferred Darcy fluxes. A simple automated spreadsheet tool (Flux‐LM) is presented to allow users to input temperature and layer data and solve the inverse problem to estimate groundwater flux rates from shallow and deep temperature regimes.

  9. Pre-Service Teachers' TPACK Competencies for Spreadsheet Integration: Insights from a Mathematics-Specific Instructional Technology Course

    Science.gov (United States)

    Agyei, Douglas D.; Voogt, Joke M.

    2015-01-01

    This article explored the impact of strategies applied in a mathematics instructional technology course for developing technology integration competencies, in particular in the use of spreadsheets, in pre-service teachers. In this respect, 104 pre-service mathematics teachers from a teacher training programme in Ghana enrolled in the mathematics…

  10. An automated graphics tool for comparative genomics: the Coulson plot generator.

    Science.gov (United States)

    Field, Helen I; Coulson, Richard M R; Field, Mark C

    2013-04-27

    Comparative analysis is an essential component of biology. When applied to genomics, for example, analysis may require comparisons between the predicted presence and absence of genes in a group of genomes under consideration. Frequently, genes can be grouped into small categories based on functional criteria, for example membership of a multimeric complex, participation in a metabolic or signaling pathway, or shared sequence features and/or paralogy. These patterns of retention and loss are highly informative for the prediction of function, and hence possible biological context, and can provide great insights into the evolutionary history of cellular functions. However, representation of such information in a standard spreadsheet is a poor visual means from which to extract patterns within a dataset. We devised the Coulson Plot, a new graphical representation that exploits a matrix of pie charts to display comparative genomics data. Each pie is used to describe a complex or process from a separate taxon, and is divided into sectors corresponding to the number of proteins (subunits) in a complex/process. The predicted presence or absence of proteins in each complex is delineated by occupancy of a given sector; this format is visually highly accessible and makes pattern recognition rapid and reliable. A key to the identity of each subunit, plus hierarchical naming of taxa and coloring, are included. A Java-based application, the Coulson plot generator (CPG), automates graphic production, with a tab- or comma-delimited text file as input, generating an editable portable document format or SVG file. CPG software may be used to rapidly convert spreadsheet data to a graphical matrix pie chart format. The representation essentially retains all of the information from the spreadsheet but presents a graphically rich format making comparisons and identification of patterns significantly clearer. While the Coulson plot format is highly useful in comparative genomics, its
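
    A minimal sketch of the matrix-of-pies idea, using toy presence/absence data and matplotlib rather than the Java-based CPG application:

      # Draw a small matrix of pie charts: rows = taxa, columns = complexes,
      # sectors filled when the corresponding subunit is predicted present.
      import matplotlib.pyplot as plt

      taxa = ["Taxon A", "Taxon B"]
      complexes = {"Complex X": 4, "Complex Y": 3}        # name -> number of subunits
      presence = {                                        # 1 = subunit predicted present
          ("Taxon A", "Complex X"): [1, 1, 1, 0],
          ("Taxon A", "Complex Y"): [1, 0, 1],
          ("Taxon B", "Complex X"): [1, 1, 1, 1],
          ("Taxon B", "Complex Y"): [0, 0, 1],
      }

      fig, axes = plt.subplots(len(taxa), len(complexes), figsize=(6, 6))
      for i, taxon in enumerate(taxa):
          for j, (name, n_sub) in enumerate(complexes.items()):
              ax = axes[i, j]
              colors = ["tab:blue" if p else "white" for p in presence[(taxon, name)]]
              ax.pie([1] * n_sub, colors=colors, wedgeprops={"edgecolor": "black"})
              if i == 0:
                  ax.set_title(name)
              if j == 0:
                  ax.set_ylabel(taxon)
      plt.savefig("coulson_sketch.png")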

  11. Spreadsheets for business process management : Using process mining to deal with “events” rather than “numbers”?

    NARCIS (Netherlands)

    van der Aalst, Wil

    2018-01-01

    Purpose: Process mining provides a generic collection of techniques to turn event data into valuable insights, improvement ideas, predictions, and recommendations. This paper uses spreadsheets as a metaphor to introduce process mining as an essential tool for data scientists and business analysts.

  12. Perovskite classification: An Excel spreadsheet to determine and depict end-member proportions for the perovskite- and vapnikite-subgroups of the perovskite supergroup

    Science.gov (United States)

    Locock, Andrew J.; Mitchell, Roger H.

    2018-04-01

    Perovskite mineral oxides commonly exhibit extensive solid-solution, and are therefore classified on the basis of the proportions of their ideal end-members. A uniform sequence of calculation of the end-members is required if comparisons are to be made between different sets of analytical data. A Microsoft Excel spreadsheet has been programmed to assist with the classification and depiction of the minerals of the perovskite- and vapnikite-subgroups following the 2017 nomenclature of the perovskite supergroup recommended by the International Mineralogical Association (IMA). Compositional data for up to 36 elements are input into the spreadsheet as oxides in weight percent. For each analysis, the output includes the formula, the normalized proportions of 15 end-members, and the percentage of cations which cannot be assigned to those end-members. The data are automatically plotted onto the ternary and quaternary diagrams recommended by the IMA for depiction of perovskite compositions. Up to 200 analyses can be entered into the spreadsheet, which is accompanied by data calculated for 140 perovskite compositions compiled from the literature.

  13. An Excel Spreadsheet Model for States and Districts to Assess the Cost-Benefit of School Nursing Services.

    Science.gov (United States)

    Wang, Li Yan; O'Brien, Mary Jane; Maughan, Erin D

    2016-11-01

    This paper describes a user-friendly, Excel spreadsheet model and two data collection instruments constructed by the authors to help states and districts perform cost-benefit analyses of school nursing services delivered by full-time school nurses. Prior to applying the model, states or districts need to collect data using two forms: "Daily Nurse Data Collection Form" and the "Teacher Survey." The former is used to record daily nursing activities, including number of student health encounters, number of medications administered, number of student early dismissals, and number of medical procedures performed. The latter is used to obtain estimates for the time teachers spend addressing student health issues. Once inputs are entered in the model, outputs are automatically calculated, including program costs, total benefits, net benefits, and benefit-cost ratio. The spreadsheet model, data collection tools, and instructions are available at the NASN website ( http://www.nasn.org/The/CostBenefitAnalysis ).
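
    The output arithmetic described above reduces to a few lines; the figures below are purely illustrative, not values from the NASN model:

      # Compute the cost-benefit outputs from assumed (hypothetical) inputs.
      program_costs = 75_000.0        # annual cost of a full-time school nurse
      total_benefits = 130_000.0      # e.g. teacher time saved plus avoided early dismissals

      net_benefits = total_benefits - program_costs
      benefit_cost_ratio = total_benefits / program_costs

      print(f"Net benefits: ${net_benefits:,.0f}")
      print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")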

  14. How Helpful Are Error Management and Counterfactual Thinking Instructions to Inexperienced Spreadsheet Users' Training Task Performance?

    Science.gov (United States)

    Caputi, Peter; Chan, Amy; Jayasuriya, Rohan

    2011-01-01

    This paper examined the impact of training strategies on the types of errors that novice users make when learning a commonly used spreadsheet application. Fifty participants were assigned to a counterfactual thinking training (CFT) strategy, an error management training strategy, or a combination of both strategies, and completed an easy task…

  15. Using a Spreadsheet to Solve the Schrödinger Equations for the Energies of the Ground Electronic State and the Two Lowest Excited States of H[subscript 2]

    Science.gov (United States)

    Ge, Yingbin; Rittenhouse, Robert C.; Buchanan, Jacob C.; Livingston, Benjamin

    2014-01-01

    We have designed an exercise suitable for a lab or project in an undergraduate physical chemistry course that creates a Microsoft Excel spreadsheet to calculate the energy of the S[subscript 0] ground electronic state and the S[subscript 1] and T[subscript 1] excited states of H[subscript 2]. The spreadsheet calculations circumvent the…

  16. Using the iPlant collaborative discovery environment.

    Science.gov (United States)

    Oliver, Shannon L; Lenards, Andrew J; Barthelson, Roger A; Merchant, Nirav; McKay, Sheldon J

    2013-06-01

    The iPlant Collaborative is an academic consortium whose mission is to develop an informatics and social infrastructure to address the "grand challenges" in plant biology. Its cyberinfrastructure supports the computational needs of the research community and facilitates solving major challenges in plant science. The Discovery Environment provides a powerful and rich graphical interface to the iPlant Collaborative cyberinfrastructure by creating an accessible virtual workbench that enables all levels of expertise, ranging from students to traditional biology researchers and computational experts, to explore, analyze, and share their data. By providing access to iPlant's robust data-management system and high-performance computing resources, the Discovery Environment also creates a unified space in which researchers can access scalable tools. Researchers can use available Applications (Apps) to execute analyses on their data, as well as customize or integrate their own tools to better meet the specific needs of their research. These Apps can also be used in workflows that automate more complicated analyses. This module describes how to use the main features of the Discovery Environment, using bioinformatics workflows for high-throughput sequence data as examples. © 2013 by John Wiley & Sons, Inc.

  17. Spreadsheet-based program for alignment of overlapping DNA sequences.

    Science.gov (United States)

    Anbazhagan, R; Gabrielson, E

    1999-06-01

    Molecular biology laboratories frequently face the challenge of aligning small overlapping DNA sequences derived from a long DNA segment. Here, we present a short program that can be used to adapt Excel spreadsheets as a tool for aligning DNA sequences, regardless of their orientation. The program runs on any Windows or Macintosh operating system computer with Excel 97 or Excel 98. The program is available for use as an Excel file, which can be downloaded from the BioTechniques Web site. Upon execution, the program opens a specially designed customized workbook and is capable of identifying overlapping regions between two sequence fragments and displaying the sequence alignment. It also performs a number of specialized functions such as recognition of restriction enzyme cutting sites and CpG island mapping without costly specialized software.
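
    A minimal sketch of the underlying overlap-detection idea, checking both orientations of the second fragment (plain Python, not the Excel/VBA program described above):

      # Find the longest suffix of one read matching a prefix of another,
      # trying the second read in forward and reverse-complement orientation.
      def revcomp(seq):
          return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

      def best_overlap(a, b, min_len=5):
          best = ("", None)
          for cand, label in ((b, "forward"), (revcomp(b), "reverse-complement")):
              for k in range(min(len(a), len(cand)), min_len - 1, -1):
                  if a.endswith(cand[:k]):
                      if k > len(best[0]):
                          best = (cand[:k], label)
                      break
          return best

      print(best_overlap("GGATCCTTAGCAAT", "TTAGCAATCCGGA"))   # ('TTAGCAAT', 'forward')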

  18. SU-G-BRB-04: Automated Output Factor Measurements Using Continuous Data Logging for Linac Commissioning

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, X; Li, S; Zheng, D; Wang, S; Lei, Y; Zhang, M; Ma, R; Fan, Q; Wang, X; Li, X; Verma, V; Enke, C; Zhou, S [University of Nebraska Medical Center, Omaha, NE (United States)

    2016-06-15

    Purpose: Linac commissioning is a time-consuming and labor-intensive process, the streamlining of which is highly desirable. In particular, manual measurement of output factors for a variety of field sizes and energies greatly hinders the commissioning efficiency. In this study, automated measurement of output factors was demonstrated as ‘one-click’ using data logging of an electrometer. Methods: Beams to be measured were created in the recording and verifying (R&V) system and configured for continuous delivery. An electrometer with an automatic data logging feature enabled continuous data collection for all fields without human intervention. The electrometer saved data into a spreadsheet every 0.5 seconds. A Matlab program was developed to analyze the Excel data to monitor and check the data quality. Results: For each photon energy, output factors were measured for five configurations, including an open field and four wedges. Each configuration includes 72 field sizes, ranging from 4×4 to 20×30 cm². Using automation, it took 50 minutes to complete the measurement of 72 field sizes, in contrast to 80 minutes when using the manual approach. The automation avoided the necessity of redundant Linac status checks between fields as in the manual approach. In fact, the only limiting factor in such automation is Linac overheating. The data collection beams in the R&V system are reusable, and the simplified process is less error-prone. In addition, our Matlab program extracted the output factors faithfully from data logging, and the discrepancy between the automatic and manual measurements is within ±0.3%. For two separate automated measurements 30 days apart, a consistency check shows a discrepancy within ±1% for 6MV photons with a 60 degree wedge. Conclusion: Automated output factor measurements can save time by 40% when compared with the conventional manual approach. This work laid the groundwork for further improvement of the automation of Linac commissioning.
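
    A minimal sketch of the off-line analysis idea (hypothetical logged readings and threshold; not the authors' MATLAB program) is to split the continuously logged signal into per-field integrated readings and normalize them to a reference field:

      # Segment a continuously logged electrometer signal into per-field readings.
      import numpy as np

      def field_readings(log, threshold=0.05):
          beam_on = log > threshold
          readings, current = [], 0.0
          for on, value in zip(beam_on, log):
              if on:
                  current += value
              elif current > 0.0:
                  readings.append(current)
                  current = 0.0
          if current > 0.0:
              readings.append(current)
          return np.array(readings)

      # Toy log: three fields separated by beam-off gaps, sampled every 0.5 s.
      log = np.array([0, 1.0, 1.0, 1.0, 0, 0, 1.2, 1.2, 1.2, 0, 0, 0.9, 0.9, 0.9, 0])
      readings = field_readings(log)
      reference = readings[0]                    # e.g. the reference field reading
      print("Output factors:", np.round(readings / reference, 3))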

  19. SU-G-BRB-04: Automated Output Factor Measurements Using Continuous Data Logging for Linac Commissioning

    International Nuclear Information System (INIS)

    Zhu, X; Li, S; Zheng, D; Wang, S; Lei, Y; Zhang, M; Ma, R; Fan, Q; Wang, X; Li, X; Verma, V; Enke, C; Zhou, S

    2016-01-01

    Purpose: Linac commissioning is a time-consuming and labor-intensive process, the streamlining of which is highly desirable. In particular, manual measurement of output factors for a variety of field sizes and energies greatly hinders the commissioning efficiency. In this study, automated measurement of output factors was demonstrated as ‘one-click’ using data logging of an electrometer. Methods: Beams to be measured were created in the recording and verifying (R&V) system and configured for continuous delivery. An electrometer with an automatic data logging feature enabled continuous data collection for all fields without human intervention. The electrometer saved data into a spreadsheet every 0.5 seconds. A Matlab program was developed to analyze the Excel data to monitor and check the data quality. Results: For each photon energy, output factors were measured for five configurations, including an open field and four wedges. Each configuration includes 72 field sizes, ranging from 4×4 to 20×30 cm². Using automation, it took 50 minutes to complete the measurement of 72 field sizes, in contrast to 80 minutes when using the manual approach. The automation avoided the necessity of redundant Linac status checks between fields as in the manual approach. In fact, the only limiting factor in such automation is Linac overheating. The data collection beams in the R&V system are reusable, and the simplified process is less error-prone. In addition, our Matlab program extracted the output factors faithfully from data logging, and the discrepancy between the automatic and manual measurements is within ±0.3%. For two separate automated measurements 30 days apart, a consistency check shows a discrepancy within ±1% for 6MV photons with a 60 degree wedge. Conclusion: Automated output factor measurements can save time by 40% when compared with the conventional manual approach. This work laid the groundwork for further improvement of the automation of Linac commissioning.

  20. A Simple Spreadsheet Program to Simulate and Analyze the Far-UV Circular Dichroism Spectra of Proteins

    Science.gov (United States)

    Abriata, Luciano A.

    2011-01-01

    A simple algorithm was implemented in a spreadsheet program to simulate the circular dichroism spectra of proteins from their secondary structure content and to fit [alpha]-helix, [beta]-sheet, and random coil contents from experimental far-UV circular dichroism spectra. The physical basis of the method is briefly reviewed within the context of…

  1. Fuels planning: science synthesis and integration; environmental consequences fact sheet 11: Smoke Impact Spreadsheet (SIS) model

    Science.gov (United States)

    Trent Wickman; Ann Acheson

    2005-01-01

    The Smoke Impact Spreadsheet (SIS) is a simple-to-use planning model for calculating particulate matter (PM) emissions and concentrations downwind of wildland fires. This fact sheet identifies the intended users and uses, required inputs, what the model does and does not do, and tells the user how to obtain the model.

  2. SEMANTIC WEB SERVICES – DISCOVERY, SELECTION AND COMPOSITION TECHNIQUES

    OpenAIRE

    Sowmya Kamath S; Ananthanarayana V.S

    2013-01-01

    Web services are already one of the most important resources on the Internet. As an integrated solution for realizing the vision of the Next Generation Web, semantic web services combine semantic web technology with web service technology, envisioning automated life cycle management of web services. This paper discusses the significance and importance of service discovery & selection to business logic, and the requisite current research in the various phases of the semantic web...

  3. Ideas Tried, Lessons Learned, and Improvements to Make: A Journey in Moving a Spreadsheet-Intensive Course Online

    Science.gov (United States)

    Berardi, Victor L.

    2012-01-01

    Using information systems to solve business problems is increasingly required of everyone in an organization, not just technical specialists. In the operations management class, spreadsheet usage has intensified with the focus on building decision models to solve operations management concerns such as forecasting, process capability, and inventory…

  4. Simulation of 2D Waves in Circular Membrane Using Excel Spreadsheet with Visual Basic for Teaching Activity

    Science.gov (United States)

    Eso, R.; Safiuddin, L. O.; Agusu, L.; Arfa, L. M. R. F.

    2018-04-01

    We propose a teaching instrument demonstrating circular membrane waves using interactive Excel spreadsheets with Visual Basic for Applications (VBA) programming. It is based on the analytic solution of circular membrane waves involving Bessel functions. The vibration modes and frequencies are determined using the Bessel approximation and the initial conditions. The 3D perspective based on spreadsheet functions and facilities has been explored to show 3D moving objects in translational or rotational processes. This instrument is very useful both in teaching activity and in the learning process of wave physics. The visualization of waves in the circular membrane, which shows the m and n vibration modes of the wave at a given frequency very clearly, has been compared with and matched to the experimental result obtained using the resonance method. The peak deflection varies in time when the initial condition is applied and shows the same pattern as a MATLAB simulation with zero initial velocity.
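
    A minimal sketch of the underlying mode-frequency calculation for an ideal circular membrane, using zeros of the Bessel function J_m (the radius, tension, and areal density below are illustrative assumptions, not the article's values):

      # Normal-mode frequencies f_mn = j_mn * c / (2*pi*a) with c = sqrt(T/sigma).
      import numpy as np
      from scipy.special import jn_zeros

      a, T, sigma = 0.10, 50.0, 0.02     # radius [m], tension [N/m], areal density [kg/m^2]
      c = np.sqrt(T / sigma)             # transverse wave speed on the membrane

      for m in range(3):                 # angular mode number m
          zeros = jn_zeros(m, 3)         # first three radial modes n = 1, 2, 3
          for n, j_mn in enumerate(zeros, start=1):
              f = j_mn * c / (2 * np.pi * a)
              print(f"mode (m={m}, n={n}): {f:7.1f} Hz")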

  5. FURTHER CONSIDERATIONS ON SPREADSHEET-BASED AUTOMATIC TREND LINES

    Directory of Open Access Journals (Sweden)

    DANIEL HOMOCIANU

    2015-12-01

    Full Text Available Most of today's business applications working with data sets allow exports to the spreadsheet format. This fact is related to the familiarity of common business users with such products and to the possibility of coupling what they have with something containing many models, functions and ways to process and represent data, thereby obtaining something dynamic and much more useful than a simple static report. The purpose of Business Intelligence is to identify clusters, profiles, association rules, decision trees and many other patterns or even behaviours, but also to generate alerts for exceptions, determine trends and make predictions about the future based on historical data. In this context, the paper shows some practical results obtained after testing both the automatic creation of scatter charts and trend lines corresponding to the user’s preferences and the automatic suggestion of the most appropriate trend for the tested data, based mostly on a statistical measure of how close the data points are to the regression function.
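
    A minimal sketch of the "suggest the most appropriate trend" idea: fit the usual spreadsheet trend families on suitably transformed axes and keep the one with the highest R² on the original scale (synthetic data; not the paper's implementation):

      # Fit linear, logarithmic, exponential and power trends; pick the best R^2.
      import numpy as np

      def r2(y, y_hat):
          ss_res = np.sum((y - y_hat) ** 2)
          ss_tot = np.sum((y - np.mean(y)) ** 2)
          return 1.0 - ss_res / ss_tot

      def suggest_trend(x, y):
          fits = {}
          b, a = np.polyfit(x, y, 1)
          fits["linear"] = r2(y, a + b * x)
          b, a = np.polyfit(np.log(x), y, 1)
          fits["logarithmic"] = r2(y, a + b * np.log(x))
          b, a = np.polyfit(x, np.log(y), 1)
          fits["exponential"] = r2(y, np.exp(a) * np.exp(b * x))
          b, a = np.polyfit(np.log(x), np.log(y), 1)
          fits["power"] = r2(y, np.exp(a) * x ** b)
          return max(fits, key=fits.get), fits

      x = np.arange(1, 21, dtype=float)
      y = 3.0 * x ** 1.5 * (1 + 0.02 * np.random.default_rng(1).normal(size=x.size))
      print(suggest_trend(x, y))           # expected to favour the power trend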

  6. Teaching Simulation and Computer-Aided Separation Optimization in Liquid Chromatography by Means of Illustrative Microsoft Excel Spreadsheets

    Science.gov (United States)

    Fasoula, S.; Nikitas, P.; Pappa-Louisi, A.

    2017-01-01

    A series of Microsoft Excel spreadsheets were developed to simulate the process of separation optimization under isocratic and simple gradient conditions. The optimization procedure is performed in a stepwise fashion using simple macros for an automatic application of this approach. The proposed optimization approach involves modeling of the peak…

  7. Potential errors when fitting experience curves by means of spreadsheet software

    International Nuclear Information System (INIS)

    Sark, W.G.J.H.M. van; Alsema, E.A.

    2010-01-01

    Progress ratios (PRs) are widely used in forecasting development of many technologies; they are derived from historical data represented in experience curves. Fitting the double logarithmic graphs is easily done with spreadsheet software like Microsoft Excel, by adding a trend line to the graph. However, it is unknown to many that these data are transformed to linear data before a fit is performed. This leads to erroneous results or a transformation bias in the PR, as we demonstrate using the experience curve for photovoltaic technology: logarithmic transformation leads to overestimates of progress ratios and underestimates of goodness of fit. Therefore, other graphing and analysis software is recommended.
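
    A minimal sketch of the comparison discussed above, contrasting a log-transformed (spreadsheet-style) fit with a direct nonlinear fit on synthetic experience-curve data (the true progress ratio of 0.80 and the noise level are assumptions):

      # Estimate the progress ratio two ways and compare the results.
      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(3)
      cum = np.logspace(0, 4, 30)                          # cumulative production
      cost = 10.0 * cum ** np.log2(0.80)                   # true progress ratio = 0.80
      cost_obs = cost * (1 + 0.05 * rng.normal(size=cum.size))

      # (1) log-log linear regression, as a spreadsheet trend line does
      b, a = np.polyfit(np.log(cum), np.log(cost_obs), 1)
      pr_log = 2.0 ** b

      # (2) nonlinear least squares on the untransformed data
      popt, _ = curve_fit(lambda q, c0, b: c0 * q ** b, cum, cost_obs, p0=(10.0, -0.3))
      pr_nl = 2.0 ** popt[1]

      print(f"progress ratio, log-transformed fit: {pr_log:.3f}")
      print(f"progress ratio, nonlinear fit:       {pr_nl:.3f}")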

  8. Coupling Visualization and Data Analysis for Knowledge Discovery from Multi-dimensional Scientific Data

    International Nuclear Information System (INIS)

    Rubel, Oliver; Ahern, Sean; Bethel, E. Wes; Biggin, Mark D.; Childs, Hank; Cormier-Michel, Estelle; DePace, Angela; Eisen, Michael B.; Fowlkes, Charless C.; Geddes, Cameron G.R.; Hagen, Hans; Hamann, Bernd; Huang, Min-Yu; Keranen, Soile V.E.; Knowles, David W.; Hendriks, Chris L. Luengo; Malik, Jitendra; Meredith, Jeremy; Messmer, Peter; Prabhat; Ushizima, Daniela; Weber, Gunther H.; Wu, Kesheng

    2010-01-01

    Knowledge discovery from large and complex scientific data is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for effective data analysis and data exploration methods and tools. The combination and close integration of methods from scientific visualization, information visualization, automated data analysis, and other enabling technologies (such as efficient data management) support knowledge discovery from multi-dimensional scientific data. This paper surveys two distinct applications in developmental biology and accelerator physics, illustrating the effectiveness of the described approach.

  9. Semi-Automated Discovery of Application Session Structure

    Energy Technology Data Exchange (ETDEWEB)

    Kannan, J.; Jung, J.; Paxson, V.; Koksal, C.

    2006-09-07

    While the problem of analyzing network traffic at the granularity of individual connections has seen considerable previous work and tool development, understanding traffic at a higher level---the structure of user-initiated sessions comprised of groups of related connections---remains much less explored. Some types of session structure, such as the coupling between an FTP control connection and the data connections it spawns, have prespecified forms, though the specifications do not guarantee how the forms appear in practice. Other types of sessions, such as a user reading email with a browser, only manifest empirically. Still other sessions might exist without us even knowing of their presence, such as a botnet zombie receiving instructions from its master and proceeding in turn to carry them out. We present algorithms rooted in the statistics of Poisson processes that can mine a large corpus of network connection logs to extract the apparent structure of application sessions embedded in the connections. Our methods are semi-automated in that we aim to present an analyst with high-quality information (expressed as regular expressions) reflecting different possible abstractions of an application's session structure. We develop and test our methods using traces from a large Internet site, finding diversity in the number of applications that manifest, their different session structures, and the presence of abnormal behavior. Our work has applications to traffic characterization and monitoring, source models for synthesizing network traffic, and anomaly detection.

  10. New Generation Discovery: A Systematic View for Its Development, Issues and Future

    KAUST Repository

    Yu, Yi

    2012-11-01

    Collecting, storing, discovering, and locating are integral parts of the composition of the library. To fully utilize the library and achieve its ultimate value, the construction and production of discovery has always been a central part of the library’s practice and identity. That is why new generation discovery (also called next-generation discovery) has had such a striking effect since it entered the library automation arena. However, when we talk about new generation discovery in the library domain, we should see it within the entirety of the library, as one of its organic parts, and consider its progress along with the evolution of the whole library world. We should have a deeper understanding of its relationship and interaction with the internet, the rapidly changing digital environment, and the elements and chain of library services. To address the above issues, this paper gives an overview of different definitions of new generation discovery, combined with our own understanding. The paper also gives our own description of its properties and characteristics. The paper points out the challenges faced by discovery applications, which extend beyond the technology domain to commercial interests and business strategy, and how libraries and library professionals deal with those challenges. Finally, the paper elaborates on the promise brought by new discovery development and what the next exploration might be for its future.

  11. Ladtap XL Version 2017: A Spreadsheet For Estimating Dose Resulting From Aqueous Releases

    Energy Technology Data Exchange (ETDEWEB)

    Minter, K. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Jannik, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-06-15

    LADTAP XL© is an EXCEL© spreadsheet used to estimate dose to offsite individuals and populations resulting from routine and accidental releases of radioactive materials to the Savannah River. LADTAP XL© contains two worksheets: LADTAP and IRRIDOSE. The LADTAP worksheet estimates dose for environmental pathways including external exposure resulting from recreational activities on the Savannah River and internal exposure resulting from ingestion of water, fish, and invertebrates originating from the Savannah River. IRRIDOSE estimates offsite dose to individuals and populations from irrigation of foodstuffs with contaminated water from the Savannah River. In 2004, a complete description of the LADTAP XL© code and an associated user’s manual was documented in LADTAP XL©: A Spreadsheet for Estimating Dose Resulting from Aqueous Release (WSRC-TR-2004-00059), and revised input parameters, dose coefficients, and radionuclide decay constants were incorporated into LADTAP XL© Version 2013 (SRNL-STI-2011-00238). LADTAP XL© Version 2017 is a slight modification to Version 2013 with minor changes made for more user-friendly parameter inputs and organization, updates to the time conversion factors used within the dose calculations, and a fix for an issue with the expected time build-up parameter referenced within the population shoreline dose calculations. This manual has been produced to update the code description, document verification of the models, and provide an updated user’s manual. LADTAP XL© Version 2017 has been verified by Minter (2017) and is ready for use at the Savannah River Site (SRS).

  12. Validation and configuration management plan for the KE basins KE-PU spreadsheet code

    International Nuclear Information System (INIS)

    Harris, R.A.

    1996-01-01

    This report provides documentation of the spreadsheet KE-PU software that is used to verify compliance with the Operational Safety Requirement and Process Standard limit on the amount of plutonium in the KE-Basin sandfilter backwash pit. Included are: a summary of the verification of the method and technique used in KE-PU that were documented elsewhere; the requirements, plans, and results of validation tests that confirm the proper functioning of the software; the procedures and approvals required to make changes to the software; and the method used to maintain configuration control over the software.

  13. Development of the automated circulating tumor cell recovery system with microcavity array.

    Science.gov (United States)

    Negishi, Ryo; Hosokawa, Masahito; Nakamura, Seita; Kanbara, Hisashige; Kanetomo, Masafumi; Kikuhara, Yoshihito; Tanaka, Tsuyoshi; Matsunaga, Tadashi; Yoshino, Tomoko

    2015-05-15

    Circulating tumor cells (CTCs) are well recognized as a useful biomarker for cancer diagnosis and a potential target of drug discovery for metastatic cancer. Efficient and precise recovery of extremely low concentrations of CTCs from blood is required to increase detection sensitivity. Here, an automated system equipped with a microcavity array (MCA) was demonstrated for highly efficient and reproducible CTC recovery. The use of the MCA allows selective recovery of cancer cells from whole blood on the basis of differences in size between tumor and blood cells. Intra- and inter-assays revealed that the automated system achieved high efficiency and reproducibility, equal to an assay performed manually by a well-trained operator. Under the optimized assay workflow, the automated system allows efficient and precise recovery of non-small cell lung cancer cells spiked into whole blood. The automated CTC recovery system will contribute to high-throughput analysis in further clinical studies on large cohorts of cancer patients. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Mapping the Stacks: Sustainability and User Experience of Animated Maps in Library Discovery Interfaces

    Science.gov (United States)

    McMillin, Bill; Gibson, Sally; MacDonald, Jean

    2016-01-01

    Animated maps of the library stacks were integrated into the catalog interface at Pratt Institute and into the EBSCO Discovery Service interface at Illinois State University. The mapping feature was developed for optimal automation of the update process to enable a range of library personnel to update maps and call-number ranges. The development…

  15. Teaching Students to Model Neural Circuits and Neural Networks Using an Electronic Spreadsheet Simulator. Microcomputing Working Paper Series.

    Science.gov (United States)

    Hewett, Thomas T.

    There are a number of areas in psychology where an electronic spreadsheet simulator can be used to study and explore functional relationships among a number of parameters. For example, when dealing with sensation, perception, and pattern recognition, it is sometimes desirable for students to understand both the basic neurophysiology and the…

  16. Incorporation of rapid thermodynamic data in fragment-based drug discovery.

    Science.gov (United States)

    Kobe, Akihiro; Caaveiro, Jose M M; Tashiro, Shinya; Kajihara, Daisuke; Kikkawa, Masato; Mitani, Tomoya; Tsumoto, Kouhei

    2013-03-14

    Fragment-based drug discovery (FBDD) has enjoyed increasing popularity in recent years. We introduce SITE (single-injection thermal extinction), a novel thermodynamic methodology that selects high-quality hits early in FBDD. SITE is a fast calorimetric competitive assay suitable for automation that captures the essence of isothermal titration calorimetry but using significantly fewer resources. We describe the principles of SITE and identify a novel family of fragment inhibitors of the enzyme ketosteroid isomerase displaying high values of enthalpic efficiency.

  17. GENPLAT: an automated platform for biomass enzyme discovery and cocktail optimization.

    Science.gov (United States)

    Walton, Jonathan; Banerjee, Goutami; Car, Suzana

    2011-10-24

    The high cost of enzymes for biomass deconstruction is a major impediment to the economic conversion of lignocellulosic feedstocks to liquid transportation fuels such as ethanol. We have developed an integrated high throughput platform, called GENPLAT, for the discovery and development of novel enzymes and enzyme cocktails for the release of sugars from diverse pretreatment/biomass combinations. GENPLAT comprises four elements: individual pure enzymes, statistical design of experiments, robotic pipetting of biomass slurries and enzymes, and automated colorimetric determination of released Glc and Xyl. Individual enzymes are produced by expression in Pichia pastoris or Trichoderma reesei, or by chromatographic purification from commercial cocktails or from extracts of novel microorganisms. Simplex lattice (fractional factorial) mixture models are designed using commercial Design of Experiment statistical software. Enzyme mixtures of high complexity are constructed using robotic pipetting into a 96-well format. The measurement of released Glc and Xyl is automated using enzyme-linked colorimetric assays. Optimized enzyme mixtures containing as many as 16 components have been tested on a variety of feedstock and pretreatment combinations. GENPLAT is adaptable to mixtures of pure enzymes, mixtures of commercial products (e.g., Accellerase 1000 and Novozyme 188), extracts of novel microbes, or combinations thereof. To make and test mixtures of ˜10 pure enzymes requires less than 100 μg of each protein and fewer than 100 total reactions, when operated at a final total loading of 15 mg protein/g glucan. We use enzymes from several sources. Enzymes can be purified from natural sources such as fungal cultures (e.g., Aspergillus niger, Cochliobolus carbonum, and Galerina marginata), or they can be made by expression of the encoding genes (obtained from the increasing number of microbial genome sequences) in hosts such as E. coli, Pichia pastoris, or a filamentous fungus such
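
    As a hedged illustration of the statistical design step (not the commercial Design of Experiment software actually used by GENPLAT), the Python sketch below enumerates a {q, m} simplex-lattice design, i.e. all mixtures of q enzyme components whose proportions are multiples of 1/m and sum to 1. The enzyme names and the lattice degree are placeholders chosen for the example.

        from itertools import combinations_with_replacement

        def simplex_lattice(components, m):
            """All mixtures whose proportions are i/m (i = 0..m) and sum to 1."""
            q = len(components)
            designs = []
            for counts in combinations_with_replacement(range(q), m):
                # 'counts' assigns each of the m lattice units to one component
                proportions = [counts.count(i) / m for i in range(q)]
                designs.append(dict(zip(components, proportions)))
            return designs

        # Hypothetical 4-component cocktail at lattice degree m = 3
        enzymes = ["CBH1", "CBH2", "EG1", "BG"]
        for mix in simplex_lattice(enzymes, m=3):
            print(mix)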

  18. An experimental evaluation of the generalizing capabilities of process discovery techniques and black-box sequence models

    NARCIS (Netherlands)

    Tax, N.; van Zelst, S.J.; Teinemaa, I.; Gulden, Jens; Reinhartz-Berger, Iris; Schmidt, Rainer; Guerreiro, Sérgio; Guédria, Wided; Bera, Palash

    2018-01-01

    A plethora of automated process discovery techniques have been developed which aim to discover a process model based on event data originating from the execution of business processes. The aim of the discovered process models is to describe the control-flow of the underlying business process. At the

  19. Designing Optical Spreadsheets-Technological Pedagogical Content Knowledge Simulation (S-TPACK): A Case Study of Pre-Service Teachers Course

    Science.gov (United States)

    Thohir, M. Anas

    2018-01-01

    In the 21st century, the competence of instructional technological design is important for pre-service physics teachers. This case study described the pre-service physics teachers' design of optical spreadsheet simulation and evaluated teaching and learning the task in the classroom. The case study chose three of thirty pre-service teacher's…

  20. Semi-automated knowledge discovery: identifying and profiling human trafficking

    Science.gov (United States)

    Poelmans, Jonas; Elzinga, Paul; Ignatov, Dmitry I.; Kuznetsov, Sergei O.

    2012-11-01

    We propose an iterative and human-centred knowledge discovery methodology based on formal concept analysis. The proposed approach recognizes the important role of the domain expert in mining real-world enterprise applications and makes use of specific domain knowledge, including human intelligence and domain-specific constraints. Our approach was empirically validated at the Amsterdam-Amstelland police to identify suspects and victims of human trafficking in 266,157 suspicious activity reports. Based on guidelines of the Attorney Generals of the Netherlands, we first defined multiple early warning indicators that were used to index the police reports. Using concept lattices, we revealed numerous unknown human trafficking and loverboy suspects. In-depth investigation by the police confirmed their involvement in illegal activities, resulting in actual arrests. Our human-centred approach was embedded into operational policing practice and is now successfully used on a daily basis to cope with the vastly growing amount of unstructured information.
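
    A minimal sketch of the formal concept analysis step, assuming a toy binary context of reports versus early-warning indicators (the report identifiers and indicator names are invented for illustration): real police data and lattice software are far richer, but the closure operation that yields formal concepts is the same.

        from itertools import combinations

        # Toy formal context: police report -> set of observed early-warning indicators
        context = {
            "report1": {"young_woman", "no_documents"},
            "report2": {"young_woman", "no_documents", "expensive_car"},
            "report3": {"expensive_car"},
        }
        all_attributes = set().union(*context.values())

        def extent(attrs):
            """Reports that exhibit every indicator in 'attrs'."""
            return {r for r, a in context.items() if attrs <= a}

        def intent(reports):
            """Indicators common to every report in 'reports'."""
            common = set(all_attributes)
            for r in reports:
                common &= context[r]
            return common

        # Every formal concept arises as (extent(B), B) where B is the intent of
        # some subset of reports, so enumerate subsets and deduplicate.
        concepts = set()
        for size in range(len(context) + 1):
            for subset in combinations(context, size):
                b = intent(set(subset))
                concepts.add((frozenset(extent(b)), frozenset(b)))

        for ext, itt in sorted(concepts, key=lambda c: len(c[0])):
            print(sorted(ext), "<->", sorted(itt))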

  1. Automated Motivic Analysis

    DEFF Research Database (Denmark)

    Lartillot, Olivier

    2016-01-01

    Motivic analysis provides very detailed understanding of musical compositions, but is also particularly difficult to formalize and systematize. A computational automation of the discovery of motivic patterns cannot be reduced to a mere extraction of all possible sequences of descriptions...... for lossless compression. The structural complexity resulting from successive repetitions of patterns can be controlled through a simple modelling of cycles. Generally, motivic patterns cannot always be defined solely as sequences of descriptions in a fixed set of dimensions: throughout the descriptions...... of the successive notes and intervals, various sets of musical parameters may be invoked. In this chapter, a method is presented that allows for these heterogeneous patterns to be discovered. Motivic repetition with local ornamentation is detected by reconstructing, on top of “surface-level” monodic voices, longer...

  2. Current status and future prospects for enabling chemistry technology in the drug discovery process [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Stevan W. Djuric

    2016-09-01

    This review covers recent advances in the implementation of enabling chemistry technologies into the drug discovery process. Areas covered include parallel synthesis chemistry, high-throughput experimentation, automated synthesis and purification methods, flow chemistry methodology including photochemistry, electrochemistry, and the handling of “dangerous” reagents. Also featured are advances in the “computer-assisted drug design” area and the expanding application of novel mass spectrometry-based techniques to a wide range of drug discovery activities.

  3. Cutting solid figures by plane - analytical solution and spreadsheet implementation

    Science.gov (United States)

    Benacka, Jan

    2012-07-01

    In some secondary mathematics curricula, there is a topic called Stereometry that deals with investigating the position and finding the intersection, angle, and distance of lines and planes defined within a prism or pyramid. A coordinate system is not used. The metric tasks are solved using Pythagoras' theorem, trigonometric functions, and sine and cosine rules. The basic problem is to find the section of the figure by a plane that is defined by three points related to the figure. In this article, a formula is derived that gives the positions of the intersection points of such a plane and the figure edges, that is, the vertices of the section polygon. Spreadsheet implementations of the formula for cuboid and right rectangular pyramids are presented. The user can check his/her graphical solution, or proceed if he/she is not able to complete the section.
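
    The core computation can be sketched in a few lines (a hedged illustration, not the article's spreadsheet formula): a plane through points P1, P2, P3 has normal n = (P2 - P1) × (P3 - P1), and it cuts the edge from A to B at parameter t = n·(P1 - A) / n·(B - A), which gives a section-polygon vertex whenever 0 ≤ t ≤ 1.

        import numpy as np

        def section_vertices(p1, p2, p3, edges):
            """Intersections of the plane through p1, p2, p3 with a list of edges (A, B)."""
            p1, p2, p3 = map(np.asarray, (p1, p2, p3))
            n = np.cross(p2 - p1, p3 - p1)           # plane normal
            vertices = []
            for a, b in edges:
                a, b = np.asarray(a, float), np.asarray(b, float)
                denom = n.dot(b - a)
                if abs(denom) < 1e-12:               # edge parallel to the plane
                    continue
                t = n.dot(p1 - a) / denom
                if 0.0 <= t <= 1.0:
                    vertices.append(a + t * (b - a))
            return vertices

        # Unit cube cut by the plane through three edge midpoints
        cube_edges = [((0,0,0),(1,0,0)), ((0,0,0),(0,1,0)), ((0,0,0),(0,0,1)),
                      ((1,1,1),(0,1,1)), ((1,1,1),(1,0,1)), ((1,1,1),(1,1,0)),
                      ((1,0,0),(1,1,0)), ((1,0,0),(1,0,1)), ((0,1,0),(1,1,0)),
                      ((0,1,0),(0,1,1)), ((0,0,1),(1,0,1)), ((0,0,1),(0,1,1))]
        for v in section_vertices((0.5,0,0), (0,0.5,0), (0,0,0.5), cube_edges):
            print(v)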

  4. On the use of a standard spreadsheet to model physical systems in school teaching*

    Science.gov (United States)

    Quale, Andreas

    2012-05-01

    In the teaching of physics at upper secondary school level (K10-K12), the students are generally taught to solve problems analytically, i.e. using the dynamics describing a system (typically in the form of differential equations) to compute its evolution in time, e.g. the motion of a body along a straight line or in a plane. This reduces the scope of problems, i.e. the kind of problems that are within students' capabilities. To make the tasks mathematically solvable, one is restricted to very idealized situations; more realistic problems are too difficult (or even impossible) to handle analytically with the mathematical abilities that may be expected from students at this level. For instance, ordinary ballistic trajectories under the action of gravity, when air resistance is included, have been 'out of reach'; in school textbooks such trajectories are generally assumed to take place in a vacuum. Another example is that according to Newton's law of universal gravitation satellites will in general move around a large central body in elliptical orbits, but the students can only deal with the special case where the orbit is circular, thus precluding (for example) a verification and discussion of Kepler's laws. It is shown that standard spreadsheet software offers a tool that can handle many such realistic situations in a uniform way, and display the results both numerically and graphically on a computer screen, quite independently of whether the formal description of the physical system itself is 'mathematically tractable'. The method employed, which is readily accessible to high school students, is to perform a numerical integration of the equations of motion, exploiting the spreadsheet's capability of successive iterations. The software is used to model and study motion of bodies in external force fields; specifically, ballistic trajectories in a homogeneous gravity field with air resistance and satellite motion in a centrally symmetric gravitational field. The
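
    A minimal Python sketch of the same iterative idea (what the spreadsheet does row by row): an explicit Euler step for a projectile with quadratic air resistance. The drag constant, launch speed, and time step are illustrative values, not taken from the article.

        import math

        g = 9.81        # m/s^2
        k = 0.02        # drag constant per unit mass, 1/m (illustrative)
        dt = 0.01       # time step, s

        # Launch at 40 m/s, 45 degrees
        vx, vy = 40 * math.cos(math.radians(45)), 40 * math.sin(math.radians(45))
        x, y, t = 0.0, 0.0, 0.0

        while y >= 0.0:
            speed = math.hypot(vx, vy)
            ax = -k * speed * vx            # quadratic drag opposes the velocity
            ay = -g - k * speed * vy
            x, y = x + vx * dt, y + vy * dt
            vx, vy = vx + ax * dt, vy + ay * dt
            t += dt

        print(f"Range with drag: {x:.1f} m after {t:.2f} s "
              f"(vacuum range would be {40**2 * math.sin(math.radians(90)) / g:.1f} m)")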

  5. Machine-assisted discovery of relationships in astronomy

    Science.gov (United States)

    Graham, Matthew J.; Djorgovski, S. G.; Mahabal, Ashish A.; Donalek, Ciro; Drake, Andrew J.

    2013-05-01

    High-volume feature-rich data sets are becoming the bread-and-butter of 21st century astronomy but present significant challenges to scientific discovery. In particular, identifying scientifically significant relationships between sets of parameters is non-trivial. Similar problems in biological and geosciences have led to the development of systems which can explore large parameter spaces and identify potentially interesting sets of associations. In this paper, we describe the application of automated discovery systems of relationships to astronomical data sets, focusing on an evolutionary programming technique and an information-theory technique. We demonstrate their use with classical astronomical relationships - the Hertzsprung-Russell diagram and the Fundamental Plane of elliptical galaxies. We also show how they work with the issue of binary classification which is relevant to the next generation of large synoptic sky surveys, such as the Large Synoptic Survey Telescope (LSST). We find that comparable results to more familiar techniques, such as decision trees, are achievable. Finally, we consider the reality of the relationships discovered and how this can be used for feature selection and extraction.

  6. Automating expert role to determine design concept in Kansei Engineering

    Science.gov (United States)

    Lokman, Anitawati Mohd; Haron, Mohammad Bakri Che; Abidin, Siti Zaleha Zainal; Khalid, Noor Elaiza Abd

    2016-02-01

    Affect has become imperative in product quality. In the affective design field, Kansei Engineering (KE) has been recognized as a technology that enables discovery of consumers' emotions and formulation of guidelines for designing products that win consumers in the competitive market. Albeit a powerful technology, there is no rule of thumb in its analysis and interpretation process. KE expertise is required to determine sets of related Kansei and the significant concept of emotion. Many research endeavors are handicapped by the limited number of available and accessible KE experts. This work simulates the role of experts with the use of the Natphoric algorithm, thus providing a sound solution to the complexity and flexibility in KE. The algorithm is designed to learn the process by implementing training datasets taken from previous KE research works. A framework for automated KE is then designed to realize the development of an automated KE system. A comparative analysis is performed to determine the feasibility of the developed prototype to automate the process. The results show that the significant Kansei determined by manual KE implementation and by the automated process are highly similar. KE research advocates will benefit from this system to automatically determine significant design concepts.

  7. Simulations of the cardiac action potential based on the Hodgkin-Huxley kinetics with the use of Microsoft Excel spreadsheets.

    Science.gov (United States)

    Wu, Sheng-Nan

    2004-03-31

    The purpose of this study was to develop a method to simulate the cardiac action potential using a Microsoft Excel spreadsheet. The mathematical model contained voltage-gated ionic currents that were modeled using either Beeler-Reuter (B-R) or Luo-Rudy (L-R) phase 1 kinetics. The simulation protocol involves the use of in-cell formulas directly typed into a spreadsheet. The capability of spreadsheet iteration was used in these simulations. It does not require any prior knowledge of computer programming, although the use of the macro language can speed up the calculation. The normal configuration of the cardiac ventricular action potential can be well simulated in the B-R model that is defined by four individual ionic currents, each representing the diffusion of ions through channels in the membrane. The contribution of Na+ inward current to the rate of depolarization is reproduced in this model. After removal of Na+ current from the model, a constant current stimulus elicits an oscillatory change in membrane potential. In the L-R phase 1 model where six types of ionic currents were defined, the effect of extracellular K+ concentration on changes both in the time course of repolarization and in the time-independent K+ current can be demonstrated, when the solutions are implemented in Excel. Using the simulation protocols described here, the users can readily study and graphically display the underlying properties of ionic currents to see how changes in these properties determine the behavior of the heart cell. The method employed in these simulation protocols may also be extended or modified to other biological simulation programs.

  8. EpHLA: an innovative and user-friendly software automating the HLAMatchmaker algorithm for antibody analysis.

    Science.gov (United States)

    Sousa, Luiz Cláudio Demes da Mata; Filho, Herton Luiz Alves Sales; Von Glehn, Cristina de Queiroz Carrascosa; da Silva, Adalberto Socorro; Neto, Pedro de Alcântara dos Santos; de Castro, José Adail Fonseca; do Monte, Semíramis Jamil Hadad

    2011-12-01

    The global challenge for solid organ transplantation programs is to distribute organs to the highly sensitized recipients. The purpose of this work is to describe and test the functionality of the EpHLA software, a program that automates the analysis of acceptable and unacceptable HLA epitopes on the basis of the HLAMatchmaker algorithm. HLAMatchmaker considers small configurations of polymorphic residues referred to as eplets as essential components of HLA-epitopes. Currently, the analyses require the creation of temporary files and the manual cut and paste of laboratory tests results between electronic spreadsheets, which is time-consuming and prone to administrative errors. The EpHLA software was developed in Object Pascal programming language and uses the HLAMatchmaker algorithm to generate histocompatibility reports. The automated generation of reports requires the integration of files containing the results of laboratory tests (HLA typing, anti-HLA antibody signature) and public data banks (NMDP, IMGT). The integration and the access to this data were accomplished by means of the framework called eDAFramework. The eDAFramework was developed in Object Pascal and PHP and it provides data access functionalities for software developed in these languages. The tool functionality was successfully tested in comparison to actual, manually derived reports of patients from a renal transplantation program with related donors. We successfully developed software, which enables the automated definition of the epitope specificities of HLA antibodies. This new tool will benefit the management of recipient/donor pairs selection for highly sensitized patients. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. Simulation of axonal excitability using a Spreadsheet template created in Microsoft Excel.

    Science.gov (United States)

    Brown, A M

    2000-08-01

    The objective of this present study was to implement an established simulation protocol (A.M. Brown, A methodology for simulating biological systems using Microsoft Excel, Comp. Methods Prog. Biomed. 58 (1999) 181-90) to model axonal excitability. The simulation protocol involves the use of in-cell formulas directly typed into a spreadsheet and does not require any programming skills or use of the macro language. Once the initial spreadsheet template has been set up the simulations described in this paper can be executed with a few simple keystrokes. The model axon contained voltage-gated ion channels that were modeled using Hodgkin Huxley style kinetics. The basic properties of axonal excitability modeled were: (1) threshold of action potential firing, demonstrating that not only are the stimulus amplitude and duration critical in the generation of an action potential, but also the resting membrane potential; (2) refractoriness, the phenomenon of reduced excitability immediately following an action potential. The difference between the absolute refractory period, when no amount of stimulus will elicit an action potential, and relative refractory period, when an action potential may be generated by applying increased stimulus, was demonstrated with regard to the underlying state of the Na(+) and K(+) channels; (3) temporal summation, a process by which two sub-threshold stimuli can unite to elicit an action potential was shown to be due to conductance changes outlasting the first stimulus and summing with the second stimulus-induced conductance changes to drive the membrane potential past threshold; (4) anode break excitation, where membrane hyperpolarization was shown to produce an action potential by removing Na(+) channel inactivation that is present at resting membrane potential. The simulations described in this paper provide insights into mechanisms of axonal excitation that can be carried out by following an easily understood protocol.
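
    For readers who prefer a scripted version of the same iteration, the sketch below uses a standard Hodgkin-Huxley squid-axon parameter set with a simple Euler loop; it mirrors the spreadsheet approach described above rather than reproducing the author's exact template, and the stimulus values are illustrative.

        import math

        # Standard HH squid-axon constants (modern sign convention; mV, ms, mS/cm^2, uA/cm^2)
        C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
        E_Na, E_K, E_L = 50.0, -77.0, -54.4

        def a_m(v): return 0.1 * (v + 40) / (1 - math.exp(-(v + 40) / 10))
        def b_m(v): return 4.0 * math.exp(-(v + 65) / 18)
        def a_h(v): return 0.07 * math.exp(-(v + 65) / 20)
        def b_h(v): return 1.0 / (1 + math.exp(-(v + 35) / 10))
        def a_n(v): return 0.01 * (v + 55) / (1 - math.exp(-(v + 55) / 10))
        def b_n(v): return 0.125 * math.exp(-(v + 65) / 80)

        dt, v = 0.01, -65.0
        m = a_m(v) / (a_m(v) + b_m(v))   # start gates at their resting steady states
        h = a_h(v) / (a_h(v) + b_h(v))
        n = a_n(v) / (a_n(v) + b_n(v))

        for step in range(int(20 / dt)):                 # 20 ms of simulated time
            t = step * dt
            i_stim = 20.0 if 1.0 <= t < 2.0 else 0.0     # brief supra-threshold pulse
            i_ion = (g_Na * m**3 * h * (v - E_Na)
                     + g_K * n**4 * (v - E_K)
                     + g_L * (v - E_L))
            v += dt * (i_stim - i_ion) / C_m
            m += dt * (a_m(v) * (1 - m) - b_m(v) * m)
            h += dt * (a_h(v) * (1 - h) - b_h(v) * h)
            n += dt * (a_n(v) * (1 - n) - b_n(v) * n)
            if step % 100 == 0:
                print(f"t = {t:5.2f} ms   V = {v:7.2f} mV")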

  10. A Novel Real-Time Data Acquisition Using an Excel Spreadsheet in Pendulum Experiment Tool with Light-Based Timer

    Science.gov (United States)

    Adhitama, Egy; Fauzi, Ahmad

    2018-01-01

    In this study, a pendulum experimental tool with a light-based timer has been developed to measure the period of a simple pendulum. The obtained data was automatically recorded in an Excel spreadsheet. The intensity of monochromatic light, sensed by a 3DU5C phototransistor, dynamically changes as the pendulum swings. The changed intensity varies…

  11. Quantitative DMS mapping for automated RNA secondary structure inference

    OpenAIRE

    Cordero, Pablo; Kladwang, Wipapat; VanLang, Christopher C.; Das, Rhiju

    2012-01-01

    For decades, dimethyl sulfate (DMS) mapping has informed manual modeling of RNA structure in vitro and in vivo. Here, we incorporate DMS data into automated secondary structure inference using a pseudo-energy framework developed for 2'-OH acylation (SHAPE) mapping. On six non-coding RNAs with crystallographic models, DMS- guided modeling achieves overall false negative and false discovery rates of 9.5% and 11.6%, comparable or better than SHAPE-guided modeling; and non-parametric bootstrappin...

  12. Analysis of chromium-51 release assay data using personal computer spreadsheet software

    International Nuclear Information System (INIS)

    Lefor, A.T.; Steinberg, S.M.; Wiebke, E.A.

    1988-01-01

    The Chromium-51 release assay is a widely used technique to assess the lysis of labeled target cells in vitro. We have developed a simple technique to analyze data from Chromium-51 release assays using the widely available LOTUS 1-2-3 spreadsheet software. This package calculates percentage specific cytotoxicity and lytic units by linear regression. It uses all data points to compute the linear regression and can determine if there is a statistically significant difference between two lysis curves. The system is simple to use and easily modified, since its implementation requires neither knowledge of computer programming nor custom designed software. This package can help save considerable time when analyzing data from Chromium-51 release assays.
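
    The underlying arithmetic is standard and easy to reproduce outside LOTUS 1-2-3; the sketch below (a hedged example with made-up counts, not the authors' spreadsheet) computes percentage specific cytotoxicity from experimental, spontaneous, and maximum release, and estimates the number of effector cells needed for a reference lysis level by linear regression of lysis against the logarithm of effector number (one common lytic-unit convention).

        import numpy as np

        def percent_specific_lysis(experimental, spontaneous, maximum):
            return 100.0 * (experimental - spontaneous) / (maximum - spontaneous)

        # Made-up counts (cpm) at four effector numbers per well
        effectors = np.array([1e4, 3e4, 1e5, 3e5])
        experimental = np.array([900.0, 1600.0, 2600.0, 3400.0])
        spontaneous, maximum = 500.0, 4500.0

        lysis = percent_specific_lysis(experimental, spontaneous, maximum)

        # Linear regression of % lysis on log10(effector number)
        slope, intercept = np.polyfit(np.log10(effectors), lysis, 1)

        # Effector cells needed for 20% specific lysis
        effectors_for_20pct = 10 ** ((20.0 - intercept) / slope)
        print("Percent specific lysis:", np.round(lysis, 1))
        print(f"Effector cells for 20% lysis: {effectors_for_20pct:.0f}")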

  13. Illustrating Probability through Roulette: A Spreadsheet Simulation Model

    Directory of Open Access Journals (Sweden)

    Kala Chand Seal

    2005-11-01

    Teaching probability can be challenging because the mathematical formulas often are too abstract and complex for the students to fully grasp the underlying meaning and effect of the concepts. Games can provide a way to address this issue. For example, the game of roulette can be an exciting application for teaching probability concepts. In this paper, we implement a model of roulette in a spreadsheet that can simulate outcomes of various betting strategies. The simulations can be analyzed to gain better insights into the corresponding probability structures. We use the model to simulate a particular betting strategy known as the bet-doubling, or Martingale, strategy. This strategy is quite popular and is often erroneously perceived as a winning strategy even though the probability analysis shows that such a perception is incorrect. The simulation allows us to present the true implications of such a strategy for a player with a limited betting budget and relate the results to the underlying theoretical probability structure. The overall validation of the model, its use for teaching, and its application to analyzing other types of betting strategies are discussed.
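
    To make the probabilistic point concrete, here is a short Python version of a bet-doubling simulation (a sketch with assumed parameters such as budget, base bet, and session length; it is not the paper's spreadsheet model): doubling after each loss wins often in small amounts but occasionally exhausts the budget, so the expected value stays negative on an American wheel, where an even-money bet wins with probability 18/38.

        import random

        def martingale_session(budget=1000, base_bet=10, spins=100, p_win=18/38):
            """Bet on red with bet-doubling after each loss; return final bankroll."""
            bankroll, bet = budget, base_bet
            for _ in range(spins):
                if bet > bankroll:          # cannot cover the doubled bet: ruin
                    break
                if random.random() < p_win:
                    bankroll += bet
                    bet = base_bet          # win: pocket the gain, reset the bet
                else:
                    bankroll -= bet
                    bet *= 2                # loss: double to chase the loss
            return bankroll

        random.seed(1)
        results = [martingale_session() for _ in range(10000)]
        print("Mean final bankroll:", sum(results) / len(results))
        print("Sessions ending in profit:", sum(r > 1000 for r in results))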

  14. Use of a Spreadsheet to Calculate the Net Charge of Peptides and Proteins as a Function of pH: An Alternative to Using "Canned" Programs to Estimate the Isoelectric Point of These Important Biomolecules

    Science.gov (United States)

    Sims, Paul A.

    2010-01-01

    An approach is presented that utilizes a spreadsheet to allow students to explore different means of calculating and visualizing how the charge on peptides and proteins varies as a function of pH. In particular, the concept of isoelectric point is developed to allow students to compare the results of their spreadsheet calculations with those of…
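
    A compact scripted equivalent of the spreadsheet calculation, assuming one common set of pKa values (published tables differ, so the exact numbers and the resulting isoelectric point are illustrative): each basic group contributes +1/(1 + 10^(pH - pKa)) and each acidic group contributes -1/(1 + 10^(pKa - pH)), and the isoelectric point is the pH at which the sum crosses zero.

        # Approximate pKa values; published tables (EMBOSS, Lehninger, etc.) differ slightly.
        PKA_BASIC = {"Nterm": 9.0, "K": 10.5, "R": 12.5, "H": 6.0}
        PKA_ACID = {"Cterm": 3.1, "D": 3.9, "E": 4.1, "C": 8.3, "Y": 10.1}

        def net_charge(sequence, ph):
            groups = ["Nterm", "Cterm"] + list(sequence)
            charge = 0.0
            for g in groups:
                if g in PKA_BASIC:
                    charge += 1.0 / (1.0 + 10 ** (ph - PKA_BASIC[g]))
                elif g in PKA_ACID:
                    charge -= 1.0 / (1.0 + 10 ** (PKA_ACID[g] - ph))
            return charge

        def isoelectric_point(sequence, lo=0.0, hi=14.0, tol=1e-4):
            """Bisection on pH: net charge decreases monotonically with pH."""
            while hi - lo > tol:
                mid = (lo + hi) / 2
                if net_charge(sequence, mid) > 0:
                    lo = mid
                else:
                    hi = mid
            return (lo + hi) / 2

        peptide = "ACDKRH"   # hypothetical peptide
        for ph in (3, 5, 7, 9, 11):
            print(f"pH {ph:2d}: net charge = {net_charge(peptide, ph):+.2f}")
        print(f"Estimated pI = {isoelectric_point(peptide):.2f}")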

  15. SimpleTreat: a spreadsheet-based box model to predict the fate of xenobiotics in a municipal waste water treatment plant

    NARCIS (Netherlands)

    Struijs J; van de Meent D; Stoltenkamp J

    1991-01-01

    A non-equilibrium steady state box model is reported, that predicts the fate of new chemicals in a conventional sewage treatment plant from a minimal input data set. The model, written in an electronic spreadsheet (Lotus TM 123), requires a minimum input: some basic properties of the chemical, its

  16. Meta-analyses and Forest plots using a microsoft excel spreadsheet: step-by-step guide focusing on descriptive data analysis.

    Science.gov (United States)

    Neyeloff, Jeruza L; Fuchs, Sandra C; Moreira, Leila B

    2012-01-20

    Meta-analyses are necessary to synthesize data obtained from primary research, and in many situations reviews of observational studies are the only available alternative. General purpose statistical packages can meta-analyze data, but usually require external macros or coding. Commercial specialist software is available, but may be expensive and focused on a particular type of primary data. Most available software packages have limitations in dealing with descriptive data, and the graphical display of summary statistics such as incidence and prevalence is unsatisfactory. Analyses can be conducted using Microsoft Excel, but there was no previous guide available. We constructed a step-by-step guide to perform a meta-analysis in a Microsoft Excel spreadsheet, using either fixed-effect or random-effects models. We have also developed a second spreadsheet capable of producing customized forest plots. It is possible to conduct a meta-analysis using only Microsoft Excel. More important, to our knowledge this is the first description of a method for producing a statistically adequate but graphically appealing forest plot summarizing descriptive data, using widely available software.
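
    The arithmetic behind such a spreadsheet is compact enough to show directly; the sketch below (illustrative study data, not from the paper) pools estimates with inverse-variance weights for a fixed-effect model and with a DerSimonian-Laird between-study variance for a random-effects model.

        import math

        # Illustrative study estimates (e.g., log risk ratios) and their variances
        effects = [0.10, 0.30, 0.25, -0.05, 0.20]
        variances = [0.04, 0.02, 0.05, 0.03, 0.01]

        def fixed_effect(effects, variances):
            w = [1 / v for v in variances]
            pooled = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
            return pooled, math.sqrt(1 / sum(w))

        def random_effects(effects, variances):
            w = [1 / v for v in variances]
            pooled_fe, _ = fixed_effect(effects, variances)
            q = sum(wi * (yi - pooled_fe) ** 2 for wi, yi in zip(w, effects))
            c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
            tau2 = max(0.0, (q - (len(effects) - 1)) / c)     # DerSimonian-Laird
            w_re = [1 / (v + tau2) for v in variances]
            pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
            return pooled, math.sqrt(1 / sum(w_re)), tau2

        fe, se_fe = fixed_effect(effects, variances)
        re, se_re, tau2 = random_effects(effects, variances)
        print(f"Fixed effect:   {fe:.3f} (SE {se_fe:.3f})")
        print(f"Random effects: {re:.3f} (SE {se_re:.3f}), tau^2 = {tau2:.3f}")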

  17. Meta-analyses and Forest plots using a microsoft excel spreadsheet: step-by-step guide focusing on descriptive data analysis

    Directory of Open Access Journals (Sweden)

    Neyeloff Jeruza L

    2012-01-01

    Background: Meta-analyses are necessary to synthesize data obtained from primary research, and in many situations reviews of observational studies are the only available alternative. General purpose statistical packages can meta-analyze data, but usually require external macros or coding. Commercial specialist software is available, but may be expensive and focused on a particular type of primary data. Most available software packages have limitations in dealing with descriptive data, and the graphical display of summary statistics such as incidence and prevalence is unsatisfactory. Analyses can be conducted using Microsoft Excel, but there was no previous guide available. Findings: We constructed a step-by-step guide to perform a meta-analysis in a Microsoft Excel spreadsheet, using either fixed-effect or random-effects models. We have also developed a second spreadsheet capable of producing customized forest plots. Conclusions: It is possible to conduct a meta-analysis using only Microsoft Excel. More important, to our knowledge this is the first description of a method for producing a statistically adequate but graphically appealing forest plot summarizing descriptive data, using widely available software.

  18. Patient-derived stem cells: pathways to drug discovery for brain diseases

    Directory of Open Access Journals (Sweden)

    Alan eMackay-Sim

    2013-03-01

    The concept of drug discovery through stem cell biology is based on technological developments whose genesis is now coincident. The first is automated cell microscopy with concurrent advances in image acquisition and analysis, known as high content screening (HCS). The second is patient-derived stem cells for modelling the cell biology of brain diseases. HCS has developed from the requirements of the pharmaceutical industry for high throughput assays to screen thousands of chemical compounds in the search for new drugs. HCS combines new fluorescent probes with automated microscopy and computational power to quantify the effects of compounds on cell functions. Stem cell biology has advanced greatly since the discovery of genetic reprogramming of somatic cells into induced pluripotent stem cells (iPSCs). There is now a rush of papers describing their generation from patients with various diseases of the nervous system. Although the majority of these have been genetic diseases, iPSCs have been generated from patients with complex diseases (schizophrenia and sporadic Parkinson’s disease). Some genetic diseases are also modelled in embryonic stem cells generated from blastocysts rejected during in vitro fertilisation. Neural stem cells have been isolated from post-mortem brain of Alzheimer’s patients and neural stem cells generated from biopsies of the olfactory organ of patients is another approach. These olfactory neurosphere-derived cells demonstrate robust disease-specific phenotypes in patients with schizophrenia and Parkinson’s disease. High content screening is already in use to find small molecules for the generation and differentiation of embryonic stem cells and induced pluripotent stem cells. The challenges for using stem cells for drug discovery are to develop robust stem cell culture methods that meet the rigorous requirements for repeatable, consistent quantities of defined cell types at the industrial scale necessary for high

  19. A simple model of hysteresis behavior using spreadsheet analysis

    Science.gov (United States)

    Ehrmann, A.; Blachowicz, T.

    2015-01-01

    Hysteresis loops occur in many scientific and technical problems, especially as field dependent magnetization of ferromagnetic materials, but also as stress-strain-curves of materials measured by tensile tests including thermal effects, liquid-solid phase transitions, in cell biology or economics. While several mathematical models exist which aim to calculate hysteresis energies and other parameters, here we offer a simple model for a general hysteretic system, showing different hysteresis loops depending on the defined parameters. The calculation which is based on basic spreadsheet analysis plus an easy macro code can be used by students to understand how these systems work and how the parameters influence the reactions of the system on an external field. Importantly, in the step-by-step mode, each change of the system state, compared to the last step, becomes visible. The simple program can be developed further by several changes and additions, enabling the building of a tool which is capable of answering real physical questions in the broad field of magnetism as well as in other scientific areas, in which similar hysteresis loops occur.
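
    One simple way to reproduce such loop behavior in a script (a generic relay-superposition sketch, not necessarily the model used in the article) is to sum many two-state "relay" elements, each of which switches up above its own upper threshold and down below its own lower threshold; sweeping the external field up and then down traces a hysteresis loop because the relays retain memory of the sweep direction. The thresholds and counts below are arbitrary illustrative choices.

        import random

        random.seed(0)

        # Each relay switches to +1 above 'up' and to -1 below 'down' (up > down).
        relays = []
        for _ in range(500):
            center = random.gauss(0.0, 0.5)
            width = abs(random.gauss(0.4, 0.1))
            relays.append({"down": center - width, "up": center + width, "state": -1})

        def magnetization(h):
            """Update every relay for the current field h and return the mean state."""
            for r in relays:
                if h >= r["up"]:
                    r["state"] = 1
                elif h <= r["down"]:
                    r["state"] = -1
                # otherwise the relay keeps its previous state (memory = hysteresis)
            return sum(r["state"] for r in relays) / len(relays)

        # Sweep the field up and then back down; compare the two branches at H = 0
        fields = [i / 50 for i in range(-100, 101)]          # -2 ... +2
        up_branch = {h: magnetization(h) for h in fields}
        down_branch = {h: magnetization(h) for h in reversed(fields)}
        print("M(H=0) on rising branch: ", round(up_branch[0.0], 3))
        print("M(H=0) on falling branch:", round(down_branch[0.0], 3))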

  20. A simple model of hysteresis behavior using spreadsheet analysis

    International Nuclear Information System (INIS)

    Ehrmann, A; Blachowicz, T

    2015-01-01

    Hysteresis loops occur in many scientific and technical problems, especially as field dependent magnetization of ferromagnetic materials, but also as stress-strain-curves of materials measured by tensile tests including thermal effects, liquid-solid phase transitions, in cell biology or economics. While several mathematical models exist which aim to calculate hysteresis energies and other parameters, here we offer a simple model for a general hysteretic system, showing different hysteresis loops depending on the defined parameters. The calculation which is based on basic spreadsheet analysis plus an easy macro code can be used by students to understand how these systems work and how the parameters influence the reactions of the system on an external field. Importantly, in the step-by-step mode, each change of the system state, compared to the last step, becomes visible. The simple program can be developed further by several changes and additions, enabling the building of a tool which is capable of answering real physical questions in the broad field of magnetism as well as in other scientific areas, in which similar hysteresis loops occur

  1. Artificial intelligence exploration of unstable protocells leads to predictable properties and discovery of collective behavior.

    Science.gov (United States)

    Points, Laurie J; Taylor, James Ward; Grizou, Jonathan; Donkers, Kevin; Cronin, Leroy

    2018-01-30

    Protocell models are used to investigate how cells might have first assembled on Earth. Some, like oil-in-water droplets, can be seemingly simple models, while still being able to exhibit complex and unpredictable behaviors. How such simple oil-in-water systems can come together to yield complex and life-like behaviors remains a key question. Herein, we illustrate how the combination of automated experimentation and image processing, physicochemical analysis, and machine learning allows significant advances to be made in understanding the driving forces behind oil-in-water droplet behaviors. Utilizing >7,000 experiments collected using an autonomous robotic platform, we illustrate how smart automation can not only help with exploration, optimization, and discovery of new behaviors, but can also be core to developing fundamental understanding of such systems. Using this process, we were able to relate droplet formulation to behavior via predicted physical properties, and to identify and predict more occurrences of a rare collective droplet behavior, droplet swarming. Proton NMR spectroscopic and qualitative pH methods enabled us to better understand oil dissolution, chemical change, phase transitions, and droplet and aqueous phase flows, illustrating the utility of the combination of smart-automation and traditional analytical chemistry techniques. We further extended our study for the simultaneous exploration of both the oil and aqueous phases using a robotic platform. Overall, this work shows that the combination of chemistry, robotics, and artificial intelligence enables discovery, prediction, and mechanistic understanding in ways that no one approach could achieve alone.

  2. ExcelAutomat: a tool for systematic processing of files as applied to quantum chemical calculations

    Science.gov (United States)

    Laloo, Jalal Z. A.; Laloo, Nassirah; Rhyman, Lydia; Ramasami, Ponnadurai

    2017-07-01

    The processing of the input and output files of quantum chemical calculations often necessitates a spreadsheet as a key component of the workflow. Spreadsheet packages with a built-in programming language editor can automate the steps involved and thus provide a direct link between processing files and the spreadsheet. This helps to reduce user-interventions as well as the need to switch between different programs to carry out each step. The ExcelAutomat tool is the implementation of this method in Microsoft Excel (MS Excel) using the default Visual Basic for Application (VBA) programming language. The code in ExcelAutomat was adapted to work with the platform-independent open-source LibreOffice Calc, which also supports VBA. ExcelAutomat provides an interface through the spreadsheet to automate repetitive tasks such as merging input files, splitting, parsing and compiling data from output files, and generation of unique filenames. Selected extracted parameters can be retrieved as variables which can be included in custom codes for a tailored approach. ExcelAutomat works with Gaussian files and is adapted for use with other computational packages including the non-commercial GAMESS. ExcelAutomat is available as a downloadable MS Excel workbook or as a LibreOffice workbook.

  3. Automation of cell-based drug absorption assays in 96-well format using permeable support systems.

    Science.gov (United States)

    Larson, Brad; Banks, Peter; Sherman, Hilary; Rothenberg, Mark

    2012-06-01

    Cell-based drug absorption assays, such as Caco-2 and MDCK-MDR1, are an essential component of lead compound ADME/Tox testing. The permeability and transport data they provide can determine whether a compound continues in the drug discovery process. Current methods typically incorporate 24-well microplates and are performed manually. Yet the need to generate absorption data earlier in the drug discovery process, on an increasing number of compounds, is driving the use of higher density plates. A simple, more efficient process that incorporates 96-well permeable supports and proper instrumentation in an automated process provides more reproducible data compared to manual methods. Here we demonstrate the ability to perform drug permeability and transport assays using Caco-2 or MDCKII-MDR1 cells. The assay procedure was automated in a 96-well format, including cell seeding, media and buffer exchanges, compound dispense, and sample removal using simple robotic instrumentation. Cell monolayer integrity was confirmed via transepithelial electrical resistance and Lucifer yellow measurements. Proper cell function was validated by analyzing apical-to-basolateral and basolateral-to-apical movement of rhodamine 123, a known P-glycoprotein substrate. Apparent permeability and efflux data demonstrate how the automated procedure provides a less variable method than manual processing, and delivers a more accurate assessment of a compound's absorption characteristics.
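
    The permeability numbers behind such assays follow a standard formula that is easy to reproduce; the sketch below uses illustrative values (not data from the study, and the insert area is only a typical assumed figure) to compute the apparent permeability Papp = (dQ/dt) / (A·C0) in each direction and the efflux ratio Papp(B→A) / Papp(A→B) that is commonly used to flag P-glycoprotein substrates.

        def apparent_permeability(dq_dt, area_cm2, c0):
            """Papp in cm/s: dq_dt in amount/s, c0 in amount/mL, area in cm^2."""
            return dq_dt / (area_cm2 * c0)

        # Illustrative numbers for a 96-well permeable support
        area = 0.11            # cm^2 insert area (typical 96-well value, assumed)
        c0 = 10.0              # nmol/mL donor concentration
        rate_ab = 2.0e-6       # nmol/s appearing basolaterally (apical -> basolateral)
        rate_ba = 1.2e-5       # nmol/s appearing apically (basolateral -> apical)

        papp_ab = apparent_permeability(rate_ab, area, c0)
        papp_ba = apparent_permeability(rate_ba, area, c0)
        print(f"Papp A->B = {papp_ab:.2e} cm/s")
        print(f"Papp B->A = {papp_ba:.2e} cm/s")
        print(f"Efflux ratio = {papp_ba / papp_ab:.1f}  (> 2 often suggests active efflux)")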

  4. Polar Domain Discovery with Sparkler

    Science.gov (United States)

    Duerr, R.; Khalsa, S. J. S.; Mattmann, C. A.; Ottilingam, N. K.; Singh, K.; Lopez, L. A.

    2017-12-01

    The scientific web is vast and ever growing. It encompasses millions of textual, scientific and multimedia documents describing research in a multitude of scientific streams. Most of these documents are hidden behind forms which require user action to retrieve and thus can't be directly accessed by content crawlers. These documents are hosted on web servers across the world, most often on outdated hardware and network infrastructure. Hence it is difficult and time-consuming to aggregate documents from the scientific web, especially those relevant to a specific domain. Thus generating meaningful domain-specific insights is currently difficult. We present an automated discovery system (Figure 1) using Sparkler, an open-source, extensible, horizontally scalable crawler which facilitates high throughput and focused crawling of documents pertinent to a particular domain such as information about polar regions. With this set of highly domain relevant documents, we show that it is possible to answer analytical questions about that domain. Our domain discovery algorithm leverages prior domain knowledge to reach out to commercial/scientific search engines to generate seed URLs. Subject matter experts then annotate these seed URLs manually on a scale from highly relevant to irrelevant. We leverage this annotated dataset to train a machine learning model which predicts the `domain relevance' of a given document. We extend Sparkler with this model to focus crawling on documents relevant to that domain. Sparkler avoids disruption of service by 1) partitioning URLs by hostname such that every node gets a different host to crawl and by 2) inserting delays between subsequent requests. With an NSF-funded supercomputer Wrangler, we scaled our domain discovery pipeline to crawl about 200k polar specific documents from the scientific web, within a day.
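
    A hedged sketch of the "domain relevance" scoring idea using generic scikit-learn components (the authors' actual model and features are not specified here; the snippets and labels below are invented): expert-annotated seed pages train a text classifier whose probability output can steer the crawl frontier toward polar-relevant documents.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Invented training snippets standing in for expert-annotated seed pages
        documents = [
            "sea ice extent and permafrost thaw observations in the Arctic",
            "Antarctic ice sheet mass balance from satellite altimetry",
            "quarterly earnings report for a retail company",
            "recipe collection for summer barbecue dishes",
        ]
        labels = [1, 1, 0, 0]   # 1 = polar-relevant, 0 = irrelevant

        model = make_pipeline(TfidfVectorizer(), LogisticRegression())
        model.fit(documents, labels)

        def domain_relevance(text):
            """Probability that a fetched page is relevant to the polar domain."""
            return model.predict_proba([text])[0][1]

        candidate = "new measurements of glacier retreat near the polar ice cap"
        print(f"Relevance score: {domain_relevance(candidate):.2f}")
        # A crawler could enqueue outlinks only when the score exceeds a threshold, e.g. 0.5.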

  5. Integration of lyoplate based flow cytometry and computational analysis for standardized immunological biomarker discovery.

    Directory of Open Access Journals (Sweden)

    Federica Villanova

    Discovery of novel immune biomarkers for monitoring of disease prognosis and response to therapy in immune-mediated inflammatory diseases is an important unmet clinical need. Here, we establish a novel framework for immunological biomarker discovery, comparing a conventional (liquid) flow cytometry platform (CFP) and a unique lyoplate-based flow cytometry platform (LFP) in combination with advanced computational data analysis. We demonstrate that LFP had higher sensitivity compared to CFP, with increased detection of cytokines (IFN-γ and IL-10) and activation markers (Foxp3 and CD25). Fluorescent intensity of cells stained with lyophilized antibodies was increased compared to cells stained with liquid antibodies. LFP, using a plate loader, allowed medium-throughput processing of samples with comparable intra- and inter-assay variability between platforms. Automated computational analysis identified novel immunophenotypes that were not detected with manual analysis. Our results establish a new flow cytometry platform for standardized and rapid immunological biomarker discovery with wide application to immune-mediated diseases.

  6. Integration of lyoplate based flow cytometry and computational analysis for standardized immunological biomarker discovery.

    Science.gov (United States)

    Villanova, Federica; Di Meglio, Paola; Inokuma, Margaret; Aghaeepour, Nima; Perucha, Esperanza; Mollon, Jennifer; Nomura, Laurel; Hernandez-Fuentes, Maria; Cope, Andrew; Prevost, A Toby; Heck, Susanne; Maino, Vernon; Lord, Graham; Brinkman, Ryan R; Nestle, Frank O

    2013-01-01

    Discovery of novel immune biomarkers for monitoring of disease prognosis and response to therapy in immune-mediated inflammatory diseases is an important unmet clinical need. Here, we establish a novel framework for immunological biomarker discovery, comparing a conventional (liquid) flow cytometry platform (CFP) and a unique lyoplate-based flow cytometry platform (LFP) in combination with advanced computational data analysis. We demonstrate that LFP had higher sensitivity compared to CFP, with increased detection of cytokines (IFN-γ and IL-10) and activation markers (Foxp3 and CD25). Fluorescent intensity of cells stained with lyophilized antibodies was increased compared to cells stained with liquid antibodies. LFP, using a plate loader, allowed medium-throughput processing of samples with comparable intra- and inter-assay variability between platforms. Automated computational analysis identified novel immunophenotypes that were not detected with manual analysis. Our results establish a new flow cytometry platform for standardized and rapid immunological biomarker discovery with wide application to immune-mediated diseases.

  7. Rapid discovery of peptide capture candidates with demonstrated specificity for structurally similar toxins

    Science.gov (United States)

    Sarkes, Deborah A.; Hurley, Margaret M.; Coppock, Matthew B.; Farrell, Mikella E.; Pellegrino, Paul M.; Stratis-Cullum, Dimitra N.

    2016-05-01

    Peptides have emerged as viable alternatives to antibodies for molecular-based sensing due to their similarity in recognition ability despite their relative structural simplicity. Various methods for peptide capture reagent discovery exist, including phage display, yeast display, and bacterial display. One of the primary advantages of peptide discovery by bacterial display technology is the speed to candidate peptide capture agent, due to both rapid growth of bacteria and direct utilization of the sorted cells displaying each individual peptide for the subsequent round of biopanning. We have previously isolated peptide affinity reagents towards protective antigen of Bacillus anthracis using a commercially available automated magnetic sorting platform with improved enrichment as compared to manual magnetic sorting. In this work, we focus on adapting our automated biopanning method to a more challenging sort, to demonstrate the specificity possible with peptide capture agents. This was achieved using non-toxic, recombinant variants of ricin and abrin, RiVax and abrax, respectively, which are structurally similar Type II ribosomal inactivating proteins with significant sequence homology. After only two rounds of biopanning, enrichment of peptide capture candidates binding abrax but not RiVax was achieved as demonstrated by Fluorescence Activated Cell Sorting (FACS) studies. Further sorting optimization included negative sorting against RiVax, proper selection of autoMACS programs for specific sorting rounds, and using freshly made buffer and freshly thawed protein target for each round of biopanning for continued enrichment over all four rounds. Most of the resulting candidates from biopanning for abrax binding peptides were able to bind abrax but not RiVax, demonstrating that short peptide sequences can be highly specific even at this early discovery stage.

  8. The Impacts of Mathematical Representations Developed through Webquest and Spreadsheet Activities on the Motivation of Pre-Service Elementary School Teachers

    Science.gov (United States)

    Halat, Erdogan; Peker, Murat

    2011-01-01

    The purpose of this study was to compare the influence of instruction using WebQuest activities with the influence of an instruction using spreadsheet activities on the motivation of pre-service elementary school teachers in mathematics teaching course. There were a total of 70 pre-service elementary school teachers involved in this study. Thirty…

  9. Methodology for the National Water Savings Model and Spreadsheet Tool Commercial/Institutional

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Long, Tim [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Alison [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Melody, Moya [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-01-01

    Lawrence Berkeley National Laboratory (LBNL) has developed a mathematical model to quantify the water and monetary savings attributable to the United States Environmental Protection Agency’s (EPA’s) WaterSense labeling program for commercial and institutional products. The National Water Savings–Commercial/Institutional (NWS-CI) model is a spreadsheet tool with which the EPA can evaluate the success of its program for encouraging buyers in the commercial and institutional (CI) sectors to purchase more water-efficient products. WaterSense has begun by focusing on three water-using products commonly used in the CI sectors: flushometer valve toilets, urinals, and pre-rinse spray valves. To estimate the savings attributable to WaterSense for each of the three products, LBNL applies an accounting method to national product shipments and lifetimes to estimate the shipments of each product.

  10. Automated docking screens: a feasibility study.

    Science.gov (United States)

    Irwin, John J; Shoichet, Brian K; Mysinger, Michael M; Huang, Niu; Colizzi, Francesco; Wassam, Pascal; Cao, Yiqun

    2009-09-24

    Molecular docking is the most practical approach to leverage protein structure for ligand discovery, but the technique retains important liabilities that make it challenging to deploy on a large scale. We have therefore created an expert system, DOCK Blaster, to investigate the feasibility of full automation. The method requires a PDB code, sometimes with a ligand structure, and from that alone can launch a full screen of large libraries. A critical feature is self-assessment, which estimates the anticipated reliability of the automated screening results using pose fidelity and enrichment. Against common benchmarks, DOCK Blaster recapitulates the crystal ligand pose within 2 Å RMSD 50-60% of the time; inferior to an expert, but respectable. Half the time the ligand also ranked among the top 5% of 100 physically matched decoys chosen on the fly. Further tests were undertaken culminating in a study of 7755 eligible PDB structures. In 1398 cases, the redocked ligand ranked in the top 5% of 100 property-matched decoys while also posing within 2 Å RMSD, suggesting that unsupervised prospective docking is viable. DOCK Blaster is available at http://blaster.docking.org.

  11. Exploring Customization in Higher Education: An Experiment in Leveraging Computer Spreadsheet Technology to Deliver Highly Individualized Online Instruction to Undergraduate Business Students

    Science.gov (United States)

    Kunzler, Jayson S.

    2012-01-01

    This dissertation describes a research study designed to explore whether customization of online instruction results in improved learning in a college business statistics course. The study involved utilizing computer spreadsheet technology to develop an intelligent tutoring system (ITS) designed to: a) collect and monitor individual real-time…

  12. Robofurnace: A semi-automated laboratory chemical vapor deposition system for high-throughput nanomaterial synthesis and process discovery

    International Nuclear Information System (INIS)

    Oliver, C. Ryan; Westrick, William; Koehler, Jeremy; Brieland-Shoultz, Anna; Anagnostopoulos-Politis, Ilias; Cruz-Gonzalez, Tizoc; Hart, A. John

    2013-01-01

    Laboratory research and development on new materials, such as nanostructured thin films, often utilizes manual equipment such as tube furnaces due to their relatively low cost and ease of setup. However, these systems can be prone to inconsistent outcomes due to variations in standard operating procedures, and limitations in performance such as heating and cooling rates restrict the parameter space that can be explored. Perhaps more importantly, maximization of research throughput and the successful and efficient translation of materials processing knowledge to production-scale systems rely on the attainment of consistent outcomes. In response to this need, we present a semi-automated lab-scale chemical vapor deposition (CVD) furnace system, called “Robofurnace.” Robofurnace is an automated CVD system built around a standard tube furnace, which automates sample insertion and removal and uses motion of the furnace to achieve rapid heating and cooling. The system has a 10-sample magazine and motorized transfer arm, which isolates the samples from the lab atmosphere and enables highly repeatable placement of the sample within the tube. The system is designed to enable continuous operation of the CVD reactor, with asynchronous loading/unloading of samples. To demonstrate its performance, Robofurnace is used to develop a rapid CVD recipe for carbon nanotube (CNT) forest growth, achieving a 10-fold improvement in CNT forest mass density compared to a benchmark recipe using a manual tube furnace. In the long run, multiple systems like Robofurnace may be linked to share data among laboratories by methods such as Twitter. Our hope is that Robofurnace and similar automation will enable machine learning to optimize and discover relationships in complex material synthesis processes.

  13. Robofurnace: A semi-automated laboratory chemical vapor deposition system for high-throughput nanomaterial synthesis and process discovery

    Energy Technology Data Exchange (ETDEWEB)

    Oliver, C. Ryan; Westrick, William; Koehler, Jeremy; Brieland-Shoultz, Anna; Anagnostopoulos-Politis, Ilias; Cruz-Gonzalez, Tizoc [Department of Mechanical Engineering, University of Michigan, Ann Arbor, Michigan 48109 (United States); Hart, A. John, E-mail: ajhart@mit.edu [Department of Mechanical Engineering, University of Michigan, Ann Arbor, Michigan 48109 (United States); Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States)

    2013-11-15

    Laboratory research and development on new materials, such as nanostructured thin films, often utilizes manual equipment such as tube furnaces due to their relatively low cost and ease of setup. However, these systems can be prone to inconsistent outcomes due to variations in standard operating procedures, and limitations in performance such as heating and cooling rates restrict the parameter space that can be explored. Perhaps more importantly, maximization of research throughput and the successful and efficient translation of materials processing knowledge to production-scale systems rely on the attainment of consistent outcomes. In response to this need, we present a semi-automated lab-scale chemical vapor deposition (CVD) furnace system, called “Robofurnace.” Robofurnace is an automated CVD system built around a standard tube furnace, which automates sample insertion and removal and uses motion of the furnace to achieve rapid heating and cooling. The system has a 10-sample magazine and motorized transfer arm, which isolates the samples from the lab atmosphere and enables highly repeatable placement of the sample within the tube. The system is designed to enable continuous operation of the CVD reactor, with asynchronous loading/unloading of samples. To demonstrate its performance, Robofurnace is used to develop a rapid CVD recipe for carbon nanotube (CNT) forest growth, achieving a 10-fold improvement in CNT forest mass density compared to a benchmark recipe using a manual tube furnace. In the long run, multiple systems like Robofurnace may be linked to share data among laboratories by methods such as Twitter. Our hope is that Robofurnace and similar automation will enable machine learning to optimize and discover relationships in complex material synthesis processes.

  14. An integrated dataset for in silico drug discovery

    Directory of Open Access Journals (Sweden)

    Cockell Simon J

    2010-12-01

    Full Text Available Drug development is expensive and prone to failure. It is potentially much less risky and expensive to reuse a drug developed for one condition for treating a second disease, than it is to develop an entirely new compound. Systematic approaches to drug repositioning are needed to increase throughput and find candidates more reliably. Here we address this need with an integrated systems biology dataset, developed using the Ondex data integration platform, for the in silico discovery of new drug repositioning candidates. We demonstrate that the information in this dataset allows known repositioning examples to be discovered. We also propose a means of automating the search for new treatment indications of existing compounds.

  15. Developing Students' Understanding of Co-Opetition and Multilevel Inventory Management Strategies in Supply Chains: An In-Class Spreadsheet Simulation Exercise

    Science.gov (United States)

    Fetter, Gary; Shockley, Jeff

    2014-01-01

    Instructors look for ways to explain to students how supply chains can be constructed so that competing suppliers can work together to improve inventory management performance (i.e., a phenomenon known as co-opetition). An Excel spreadsheet-driven simulation is presented that models a complete multilevel supply chain system--customer, retailer,…

  16. Geo-Enrichment and Semantic Enhancement of Metadata Sets to Augment Discovery in Geoportals

    Directory of Open Access Journals (Sweden)

    Bernhard Vockner

    2014-03-01

    Full Text Available Geoportals are established to function as main gateways to find, evaluate, and start “using” geographic information. Still, current geoportal implementations face problems in optimizing the discovery process due to semantic heterogeneity issues, which lead to low recall and low precision in text-based searches. Therefore, we propose an enhanced semantic discovery approach that supports multilingualism and information domain context. We present a workflow that enriches existing structured metadata with synonyms, toponyms, and translated terms derived from user-defined keywords, based on multilingual thesauri and ontologies. To make the results easier to understand, we also provide automated translation of the resource metadata, supporting the user in grasping the thematic content of the descriptive metadata even if it has been documented in a language the user is not familiar with. In addition, to enable text-based spatial filtering, we add location-name keywords to the metadata sets. These are derived from the existing bounding box and adjust discovery scores when single-line text queries are performed. To improve the user’s search experience, we tailor faceted search strategies, presenting an enhanced query interface for geo-metadata discovery that transparently leverages the underlying thesauri and ontologies.
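
    The enrichment step described above can be pictured as a simple keyword-expansion routine. The sketch below is a minimal Python illustration, assuming a hypothetical hard-coded thesaurus and gazetteer in place of the multilingual thesauri, ontologies and bounding-box lookups used in the actual workflow.

```python
# Hypothetical illustration of metadata keyword enrichment: expand user-supplied
# keywords with synonyms, translations, and toponyms before indexing/search.
# THESAURUS and GAZETTEER are stand-ins for the real thesauri/ontologies and
# bounding-box lookups; all entries are illustrative.

THESAURUS = {
    "river": ["stream", "watercourse", "Fluss"],        # synonyms + German translation
    "precipitation": ["rainfall", "Niederschlag"],
}

GAZETTEER = {
    # bounding box (min_lon, min_lat, max_lon, max_lat) -> location-name keywords
    (9.5, 46.4, 17.2, 49.0): ["Austria", "Salzburg", "Tyrol"],
}

def enrich_keywords(keywords, bbox):
    """Return the original keywords plus synonyms, translations and toponyms."""
    enriched = set(keywords)
    for kw in keywords:
        enriched.update(THESAURUS.get(kw.lower(), []))
    # add location-name keywords derived from the record's bounding box
    enriched.update(GAZETTEER.get(bbox, []))
    return sorted(enriched)

print(enrich_keywords(["river", "precipitation"], (9.5, 46.4, 17.2, 49.0)))
```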

  17. Applying Dataflow Architecture and Visualization Tools to In Vitro Pharmacology Data Automation.

    Science.gov (United States)

    Pechter, David; Xu, Serena; Kurtz, Marc; Williams, Steven; Sonatore, Lisa; Villafania, Artjohn; Agrawal, Sony

    2016-12-01

    The pace and complexity of modern drug discovery places ever-increasing demands on scientists for data analysis and interpretation. Data flow programming and modern visualization tools address these demands directly. Three different requirements-one for allosteric modulator analysis, one for a specialized clotting analysis, and one for enzyme global progress curve analysis-are reviewed, and their execution in a combined data flow/visualization environment is outlined. © 2016 Society for Laboratory Automation and Screening.

  18. Miniaturized embryo array for automated trapping, immobilization and microperfusion of zebrafish embryos.

    Directory of Open Access Journals (Sweden)

    Jin Akagi

    Full Text Available Zebrafish (Danio rerio) has recently emerged as a powerful experimental model in drug discovery and environmental toxicology. Drug discovery screens performed on zebrafish embryos mirror with a high level of accuracy the tests usually performed on mammalian animal models, and the fish embryo toxicity assay (FET) is one of the most promising alternative approaches to acute ecotoxicity testing with adult fish. Notwithstanding this, automated in-situ analysis of zebrafish embryos is still very much in its infancy. This is mostly due to the inherent limitations of conventional techniques and the fact that metazoan organisms are not easily amenable to laboratory automation. In this work, we describe the development of an innovative miniaturized chip-based device for the in-situ analysis of zebrafish embryos. We present evidence that automatic hydrodynamic positioning, trapping and long-term immobilization of single embryos inside the microfluidic chips can be combined with time-lapse imaging to provide real-time developmental analysis. Our platform, fabricated using biocompatible polymer molding technology, enables rapid trapping of embryos in low-shear-stress zones, uniform drug microperfusion and high-resolution imaging without the need for manual embryo handling at various developmental stages. The device provides a highly controllable fluidic microenvironment and post-analysis recovery at the eleuthero-embryo stage. Throughout the incubation, the position of individual embryos is registered. Importantly, we also show for the first time that microfluidic embryo array technology can be effectively used for the analysis of anti-angiogenic compounds using the transgenic zebrafish line fli1a:EGFP. The work provides a new rationale for rapid and automated manipulation and analysis of developing zebrafish embryos at a large scale.

  19. Modeling the Value of Micro Solutions in Air Force Financial Management

    National Research Council Canada - National Science Library

    O'Hare, Scott M; Krott, James E

    2005-01-01

    The purpose of this MBA Project was to develop a model that would estimate the value of applying available spreadsheet programming tools to automation opportunities in Air Force Financial Management (FM...

  20. ISA-TAB-Nano: A Specification for Sharing Nanomaterial Research Data in Spreadsheet-based Format

    Science.gov (United States)

    2013-01-01

    Background and motivation The high-throughput genomics communities have been successfully using standardized spreadsheet-based formats to capture and share data within labs and among public repositories. The nanomedicine community has yet to adopt similar standards to share the diverse and multi-dimensional types of data (including metadata) pertaining to the description and characterization of nanomaterials. Owing to the lack of standardization in representing and sharing nanomaterial data, most of the data currently shared via publications and data resources are incomplete, poorly integrated, and not suitable for meaningful interpretation and re-use. Specifically, in its current state, the data cannot be effectively utilized for the development of predictive models that will inform the rational design of nanomaterials. Results We have developed a specification called ISA-TAB-Nano, which comprises four spreadsheet-based file formats for representing and integrating various types of nanomaterial data. Three file formats (Investigation, Study, and Assay files) have been adapted from the established ISA-TAB specification, while the Material file format was developed de novo to more readily describe the complexity of nanomaterials and associated small molecules. In this paper, we discuss the main features of each file format and how to use them for sharing nanomaterial descriptions and assay metadata. Conclusion The ISA-TAB-Nano file formats provide a general and flexible framework to record and integrate nanomaterial descriptions, assay data (metadata and endpoint measurements) and protocol information. Like ISA-TAB, ISA-TAB-Nano supports the use of ontology terms to promote standardized descriptions and to facilitate search and integration of the data. The ISA-TAB-Nano specification has been submitted as an ASTM work item to obtain community feedback and to provide a nanotechnology data-sharing standard for public development and adoption.

  1. Full automation and validation of a flexible ELISA platform for host cell protein and protein A impurity detection in biopharmaceuticals.

    Science.gov (United States)

    Rey, Guillaume; Wendeler, Markus W

    2012-11-01

    Monitoring host cell protein (HCP) and protein A impurities is important to ensure successful development of recombinant antibody drugs. Here, we report the full automation and validation of an ELISA platform on a robotic system that allows the detection of Chinese hamster ovary (CHO) HCPs and residual protein A in in-process control samples and final drug substance. The ELISA setup is designed to serve three main goals: high sample throughput, high quality of results, and sample handling flexibility. The processing of analysis requests, determination of optimal sample dilutions, and calculation of impurity content are performed automatically by a spreadsheet. Up to 48 samples in three unspiked and spiked dilutions each are processed within 24 h. The dilution of each sample is individually prepared based on the drug concentration and the expected impurity content. Adaptable dilution protocols allow the analysis of sample dilutions ranging from 1:2 to 1:2×10^7. The validity of results is assessed by automatic testing for dilutional linearity and spike recovery for each sample. This automated impurity ELISA facilitates multi-project process development, is easily adaptable to other impurity ELISA formats, and increases analytical capacity by combining flexible sample handling with high data quality. Copyright © 2012 Elsevier B.V. All rights reserved.
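
    The automatic dilution selection mentioned above can be illustrated with a small sketch: pick a dilution factor that places the expected impurity concentration inside the quantifiable range of the standard curve. The assay range, the ppm convention and the sample values below are illustrative assumptions, not the validated method's parameters.

```python
# Hypothetical sketch: choose a sample dilution so that the expected impurity
# concentration lands inside the quantifiable range of the ELISA standard
# curve. The assay range, the ppm convention (ng impurity per mg drug) and the
# example values are illustrative assumptions, not the validated method.

ASSAY_RANGE_NG_PER_ML = (1.0, 100.0)      # quantifiable range of the standard curve

def choose_dilution(drug_mg_per_ml, expected_impurity_ppm):
    """Return a dilution factor placing the expected impurity near mid-range."""
    impurity_ng_per_ml = drug_mg_per_ml * expected_impurity_ppm   # ppm = ng per mg
    low, high = ASSAY_RANGE_NG_PER_ML
    target = (low * high) ** 0.5          # geometric mid-point of the range
    return max(round(impurity_ng_per_ml / target), 2)   # never less than 1:2

# e.g. 50 mg/mL drug substance with an expected HCP content of ~20 ppm
print("dilute 1:", choose_dilution(50.0, 20.0))
```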

  2. A controlled trial of automated classification of negation from clinical notes

    Directory of Open Access Journals (Sweden)

    Carruth William

    2005-05-01

    Full Text Available Abstract Background Identification of negation in electronic health records is essential if we are to understand the computable meaning of the records. Our objective is to compare the accuracy of an automated mechanism for assignment of Negation to clinical concepts within a compositional expression with Human Assigned Negation, and also to perform a failure analysis to identify the causes of poorly identified negation (i.e. Missed Conceptual Representation, Inaccurate Conceptual Representation, Missed Negation, Inaccurate identification of Negation). Methods 41 Clinical Documents (Medical Evaluations; sometimes outside of Mayo these are referred to as History and Physical Examinations) were parsed using the Mayo Vocabulary Server Parsing Engine. SNOMED-CT™ was used to provide concept coverage for the clinical concepts in the record. These records resulted in identification of Concepts and textual clues to Negation. These records were reviewed by an independent medical terminologist, and the results were tallied in a spreadsheet. Where questions arose on review, Internal Medicine Faculty were employed to make a final determination. Results SNOMED-CT was used to provide concept coverage of the 14,792 Concepts in 41 Health Records from Johns Hopkins University. Of these, 1,823 Concepts were identified as negative by Human review. The sensitivity (recall) of the assignment of negation was 97.2% (p Conclusion Automated assignment of negation to concepts identified in health records based on review of the text is feasible and practical. Lexical assignment of negation is a good test of true Negativity as judged by the high sensitivity, specificity and positive likelihood ratio of the test. SNOMED-CT had overall coverage of 88.7% of the concepts being negated.
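
    As a toy illustration of lexical negation assignment (deliberately much simpler than the Mayo Vocabulary Server Parsing Engine and its SNOMED-CT concept mapping), a regex-based sketch in the spirit of NegEx-style cue detection is shown below; the cue list and window size are arbitrary choices.

```python
import re

# Toy sketch of lexical negation detection: a concept found within a short
# window after a negation cue is marked negative. This is a simplification
# for illustration only; the study used a full parsing engine with SNOMED-CT.

NEGATION_CUES = r"\b(no|denies|without|negative for|absence of)\b"

def assign_negation(sentence, concept):
    """Return True if the concept appears shortly after a negation cue."""
    pattern = NEGATION_CUES + r"\W+(?:\w+\W+){0,4}?" + re.escape(concept.lower())
    return re.search(pattern, sentence.lower()) is not None

print(assign_negation("The patient denies chest pain or dyspnea.", "chest pain"))  # True
print(assign_negation("Chest pain started two days ago.", "chest pain"))           # False
```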

  3. A Cross-Layer Route Discovery Framework for Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Wu Jieyi

    2005-01-01

    Full Text Available Most reactive routing protocols in MANETs employ a random delay between rebroadcasting route requests (RREQ in order to avoid "broadcast storms." However this can lead to problems such as "next hop racing" and "rebroadcast redundancy." In addition to this, existing routing protocols for MANETs usually take a single routing strategy for all flows. This may lead to inefficient use of resources. In this paper we propose a cross-layer route discovery framework (CRDF to address these problems by exploiting the cross-layer information. CRDF solves the above problems efficiently and enables a new technique: routing strategy automation (RoSAuto. RoSAuto refers to the technique that each source node automatically decides the routing strategy based on the application requirements and each intermediate node further adapts the routing strategy so that the network resource usage can be optimized. To demonstrate the effectiveness and the efficiency of CRDF, we design and evaluate a macrobian route discovery strategy under CRDF.

  4. Altering user' acceptance of automation through prior automation exposure.

    Science.gov (United States)

    Bekier, Marek; Molesworth, Brett R C

    2017-06-01

    Air navigation service providers worldwide see increased use of automation as one solution to overcome the capacity constraints embedded in the present air traffic management (ATM) system. However, increased use of automation within any system is dependent on user acceptance. The present research sought to determine if the point at which an individual is no longer willing to accept or cooperate with automation can be manipulated. Forty participants underwent training on a computer-based air traffic control programme, followed by two ATM exercises (order counterbalanced), one with and one without the aid of automation. Results revealed that after exposure to a task with automation assistance, user acceptance of high(er) levels of automation (the 'tipping point') decreased, suggesting it is indeed possible to alter automation acceptance. Practitioner Summary: This paper investigates whether the point at which a user of automation rejects automation (i.e. the 'tipping point') is constant or can be manipulated. The results revealed that after exposure to a task with automation assistance, user acceptance of high(er) levels of automation decreased, suggesting it is possible to alter automation acceptance.

  5. Basic statistics with Microsoft Excel: a review.

    Science.gov (United States)

    Divisi, Duilio; Di Leonardo, Gabriella; Zaccagna, Gino; Crisci, Roberto

    2017-06-01

    The scientific world is enriched daily with new knowledge, due to new technologies and continuous discoveries. Mathematical functions underpin the statistical concepts, particularly mean, median and mode, along with frequency and frequency distribution as represented in histograms and graphical displays, and they determine the computational processes carried out through spreadsheet operations. The aim of the study is to highlight the mathematical basis of the statistical models that govern the operation of spreadsheets in Microsoft Excel.
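
    The spreadsheet functions the review discusses map directly onto a few lines of code. The sketch below reproduces the same descriptive statistics and a frequency distribution; the data values and bin edges are arbitrary illustrations.

```python
from statistics import mean, median, mode
from collections import Counter

# Sketch of the descriptive statistics a spreadsheet computes with AVERAGE,
# MEDIAN, MODE and FREQUENCY; the data and bins are illustrative only.
data = [4, 7, 7, 8, 9, 10, 10, 10, 12, 15]

print("mean  :", mean(data))
print("median:", median(data))
print("mode  :", mode(data))

# frequency distribution over arbitrary bins, as used to draw a histogram
bins = [(0, 5), (5, 10), (10, 15), (15, 20)]
freq = Counter()
for x in data:
    for lo, hi in bins:
        if lo <= x < hi:
            freq[(lo, hi)] += 1
            break
for (lo, hi), n in sorted(freq.items()):
    print(f"[{lo:2d},{hi:2d}) : {n}")
```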

  6. Proteomic Biomarker Discovery in 1000 Human Plasma Samples with Mass Spectrometry.

    Science.gov (United States)

    Cominetti, Ornella; Núñez Galindo, Antonio; Corthésy, John; Oller Moreno, Sergio; Irincheeva, Irina; Valsesia, Armand; Astrup, Arne; Saris, Wim H M; Hager, Jörg; Kussmann, Martin; Dayon, Loïc

    2016-02-05

    The overall impact of proteomics on clinical research and its translation has lagged behind expectations. One recognized caveat is the limited size (subject numbers) of (pre)clinical studies performed at the discovery stage, the findings of which fail to be replicated in larger verification/validation trials. Compromised study designs and insufficient statistical power are consequences of the to-date still limited capacity of mass spectrometry (MS)-based workflows to handle large numbers of samples in a realistic time frame, while delivering comprehensive proteome coverages. We developed a highly automated proteomic biomarker discovery workflow. Herein, we have applied this approach to analyze 1000 plasma samples from the multicentered human dietary intervention study "DiOGenes". Study design, sample randomization, tracking, and logistics were the foundations of our large-scale study. We checked the quality of the MS data and provided descriptive statistics. The data set was interrogated for proteins with most stable expression levels in that set of plasma samples. We evaluated standard clinical variables that typically impact forthcoming results and assessed body mass index-associated and gender-specific proteins at two time points. We demonstrate that analyzing a large number of human plasma samples for biomarker discovery with MS using isobaric tagging is feasible, providing robust and consistent biological results.

  7. Volatility Discovery

    DEFF Research Database (Denmark)

    Dias, Gustavo Fruet; Scherrer, Cristina; Papailias, Fotis

    The price discovery literature investigates how homogenous securities traded on different markets incorporate information into prices. We take this literature one step further and investigate how these markets contribute to stochastic volatility (volatility discovery). We formally show that the realized measures from homogenous securities share a fractional stochastic trend, which is a combination of the price and volatility discovery measures. Furthermore, we show that volatility discovery is associated with the way that market participants process information arrival (market sensitivity). Finally, we compute volatility discovery for 30 actively traded stocks in the U.S. and report that NYSE and Arca dominate Nasdaq.

  8. How To Use the Spreadsheet as a Tool in the Secondary School Mathematics Classroom. Second Edition (for Windows and Macintosh Operating Systems).

    Science.gov (United States)

    Masalski, William J.

    This book seeks to develop, enhance, and expand students' understanding of mathematics by using technology. Topics covered include the advantages of spreadsheets along with the opportunity to explore the 'what if?' type of questions encountered in the problem-solving process, enhancing the user's insight into the development and use of algorithms,…

  9. Complacency and Automation Bias in the Use of Imperfect Automation.

    Science.gov (United States)

    Wickens, Christopher D; Clegg, Benjamin A; Vieane, Alex Z; Sebok, Angelia L

    2015-08-01

    We examine the effects of two different kinds of decision-aiding automation errors on human-automation interaction (HAI), occurring at the first failure following repeated exposure to correctly functioning automation. The two errors are incorrect advice, triggering the automation bias, and missing advice, reflecting complacency. Contrasts between analogous automation errors in alerting systems, rather than decision aiding, have revealed that alerting false alarms are more problematic to HAI than alerting misses are. Prior research in decision aiding, although contrasting the two aiding errors (incorrect vs. missing), has confounded error expectancy. Participants performed an environmental process control simulation with and without decision aiding. For those with the aid, automation dependence was created through several trials of perfect aiding performance, and an unexpected automation error was then imposed in which automation was either gone (one group) or wrong (a second group). A control group received no automation support. The correct aid supported faster and more accurate diagnosis and lower workload. The aid failure degraded all three variables, but "automation wrong" had a much greater effect on accuracy, reflecting the automation bias, than did "automation gone," reflecting the impact of complacency. Some complacency was manifested for automation gone, by a longer latency and more modest reduction in accuracy. Automation wrong, creating the automation bias, appears to be a more problematic form of automation error than automation gone, reflecting complacency. Decision-aiding automation should indicate its lower degree of confidence in uncertain environments to avoid the automation bias. © 2015, Human Factors and Ergonomics Society.

  10. Parallel Density-Based Clustering for Discovery of Ionospheric Phenomena

    Science.gov (United States)

    Pankratius, V.; Gowanlock, M.; Blair, D. M.

    2015-12-01

    Ionospheric total electron content maps derived from global networks of dual-frequency GPS receivers can reveal a plethora of ionospheric features in real-time and are key to space weather studies and natural hazard monitoring. However, growing data volumes from expanding sensor networks are making manual exploratory studies challenging. As the community is heading towards Big Data ionospheric science, automation and Computer-Aided Discovery become indispensable tools for scientists. One problem of machine learning methods is that they require domain-specific adaptations in order to be effective and useful for scientists. Addressing this problem, our Computer-Aided Discovery approach allows scientists to express various physical models as well as perturbation ranges for parameters. The search space is explored through an automated system and parallel processing of batched workloads, which finds corresponding matches and similarities in empirical data. We discuss density-based clustering as a particular method we employ in this process. Specifically, we adapt Density-Based Spatial Clustering of Applications with Noise (DBSCAN). This algorithm groups geospatial data points based on density. Clusters of points can be of arbitrary shape, and the number of clusters is not predetermined by the algorithm; only two input parameters need to be specified: (1) a distance threshold, (2) a minimum number of points within that threshold. We discuss an implementation of DBSCAN for batched workloads that is amenable to parallelization on manycore architectures such as Intel's Xeon Phi accelerator with 60+ general-purpose cores. This manycore parallelization can cluster large volumes of ionospheric total electron content data quickly. Potential applications for cluster detection include the visualization, tracing, and examination of traveling ionospheric disturbances or other propagating phenomena. Acknowledgments. We acknowledge support from NSF ACI-1442997 (PI V. Pankratius).
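
    A minimal sketch of the density-based clustering step using scikit-learn's DBSCAN is shown below; the eps and min_samples values, and the synthetic (longitude, latitude) points, are illustrative rather than the parameters tuned for total electron content maps.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Sketch of DBSCAN applied to geospatial points (lon, lat). The eps and
# min_samples values are illustrative; the paper's system explores such
# parameters automatically over batched workloads.
rng = np.random.default_rng(0)
cluster_a = rng.normal(loc=[-100.0, 40.0], scale=0.2, size=(50, 2))
cluster_b = rng.normal(loc=[-95.0, 35.0], scale=0.2, size=(50, 2))
noise = rng.uniform(low=[-110.0, 30.0], high=[-90.0, 45.0], size=(20, 2))
points = np.vstack([cluster_a, cluster_b, noise])

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(points)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print("clusters found:", n_clusters, "| noise points:", int(np.sum(labels == -1)))
```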

  11. DEVELOPMENT OF A SPREADSHEET BASED VENDOR MANAGED INVENTORY MODEL FOR A SINGLE ECHELON SUPPLY CHAIN: A CASE STUDY

    Directory of Open Access Journals (Sweden)

    Karanam Prahlada Rao

    2010-11-01

    Full Text Available Vendor managed inventory (VMI) is a supply chain initiative where the supplier assumes the responsibility for managing inventories using advanced communication means such as online messaging and data retrieval systems. A well-coordinated vendor managed inventory system can improve supply chain performance by decreasing the inventory level and increasing the fill rate. This paper investigates the implementation of vendor managed inventory systems in a consumer goods industry. We consider an (r, Q) policy for replenishing inventory. The objective of this work is to minimize the inventory across the supply chain and maximize the service level. The major contributions of this work are the development of a spreadsheet model for the VMI system, evaluation of the total inventory cost using both the spreadsheet-based and analytical methods, quantification of the inventory reduction, estimation of the service efficiency level, and validation of the VMI spreadsheet model with randomly generated demand. In the application, VMI as an inventory control system is able to reduce the inventory cost without sacrificing the service level. The results furthermore show that the inventory reduction obtained from the analytical method is close to that of the spreadsheet-based approach, which confirms the success of VMI. However, VMI success is affected by the quality of buyer-supplier relationships, the quality of the IT system and the intensity of information sharing, but not by the quality of the information shared.
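
    A compact sketch of the (r, Q) replenishment logic that such a spreadsheet model implements is shown below, with illustrative demand, lead-time and policy parameters rather than the case-study data.

```python
import random

# Sketch of an (r, Q) policy simulation: when the inventory position falls to
# the reorder point r, an order of size Q is placed and arrives after a fixed
# lead time. All parameters are illustrative, not the case-study values.
random.seed(1)
r, Q, lead_time = 40, 120, 3          # reorder point, order quantity, days
on_hand, pipeline = 100, []           # pipeline holds (arrival_day, qty)
served = demanded = 0

for day in range(365):
    # receive any orders arriving today
    on_hand += sum(q for t, q in pipeline if t == day)
    pipeline = [(t, q) for t, q in pipeline if t > day]

    demand = random.randint(5, 20)
    demanded += demand
    served += min(demand, on_hand)
    on_hand = max(on_hand - demand, 0)

    inventory_position = on_hand + sum(q for _, q in pipeline)
    if inventory_position <= r:
        pipeline.append((day + lead_time, Q))

print(f"fill rate: {served / demanded:.2%}")
```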

  12. The use of kragten spreadsheets for uncertainty evaluation of uranium potentiometric analysis by the Brazilian Safeguards Laboratory

    International Nuclear Information System (INIS)

    Silva, Jose Wanderley S. da; Barros, Pedro Dionisio de; Araujo, Radier Mario S. de

    2009-01-01

    In safeguards, independent analysis of the uranium content and enrichment of nuclear materials to verify operators' declarations is an important tool for evaluating the accountability system applied by nuclear installations. This determination may be performed by nondestructive (NDA) methods, generally done in the field using portable radiation detection systems, or by destructive (DA) methods of chemical analysis when more accurate and precise results are necessary. Samples for DA analysis are collected by inspectors during safeguards inspections and sent to the Safeguards Laboratory (LASAL) of the Brazilian Nuclear Energy Commission (CNEN), where the analysis takes place. The method used by LASAL for the determination of uranium in different physical and chemical forms is the Davies and Gray/NBL method, using an automatic potentiometric titrator that performs the titration of uranium(IV) with a standard solution of K2Cr2O7. Uncertainty budgets have been determined based on the concepts of the ISO 'Guide to the Expression of Uncertainty in Measurement' (GUM). In order to simplify the calculation of the uncertainty, a computational tool named the Kragten spreadsheet was used. This spreadsheet uses the concepts established by the GUM and provides results that numerically approximate those obtained by propagation of uncertainty with analytically determined sensitivity coefficients. The main parameters (input quantities) influencing the uncertainty were studied. In order to evaluate their contribution to the final uncertainty, the uncertainties of all steps of the analytical method were estimated and compiled. (author)
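
    The Kragten approach amounts to shifting each input quantity by its standard uncertainty, taking the resulting change in the output as that input's contribution, and combining the contributions in quadrature. The sketch below shows the numerical recipe with a hypothetical measurement model and values, not the laboratory's actual Davies and Gray calculation.

```python
# Sketch of Kragten-style numerical uncertainty propagation: each input is
# shifted by its standard uncertainty, the change in the result approximates
# its contribution, and the contributions are combined in quadrature. The
# model function and values are hypothetical, not the laboratory's equation.

def kragten_uncertainty(model, values, uncertainties):
    y0 = model(**values)
    contributions = {}
    for name, u in uncertainties.items():
        shifted = dict(values, **{name: values[name] + u})
        contributions[name] = model(**shifted) - y0
    combined = sum(c ** 2 for c in contributions.values()) ** 0.5
    return y0, combined, contributions

# hypothetical titration-style model: result = titrant_volume * titer / sample_mass
model = lambda v, t, m: v * t / m
values = {"v": 10.25, "t": 0.9987, "m": 1.0503}           # mL, nominal titer, g
uncertainties = {"v": 0.02, "t": 0.0005, "m": 0.0004}      # standard uncertainties

y, u, parts = kragten_uncertainty(model, values, uncertainties)
dominant = max(parts, key=lambda k: abs(parts[k]))
print(f"result = {y:.4f} ± {u:.4f} (dominant contribution: {dominant})")
```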

  13. Robotic liquid handling and automation in epigenetics.

    Science.gov (United States)

    Gaisford, Wendy

    2012-10-01

    Automated liquid-handling robots and high-throughput screening (HTS) are widely used in the pharmaceutical industry for the screening of large compound libraries, small molecules for activity against disease-relevant target pathways, or proteins. HTS robots capable of low-volume dispensing reduce assay setup times and provide highly accurate and reproducible dispensing, minimizing variation between sample replicates and eliminating the potential for manual error. Low-volume automated nanoliter dispensers ensure accuracy of pipetting within volume ranges that are difficult to achieve manually. In addition, they have the ability to potentially expand the range of screening conditions from often limited amounts of valuable sample, as well as reduce the usage of expensive reagents. The ability to accurately dispense lower volumes provides the potential to achieve a greater amount of information than could be otherwise achieved using manual dispensing technology. With the emergence of the field of epigenetics, an increasing number of drug discovery companies are beginning to screen compound libraries against a range of epigenetic targets. This review discusses the potential for the use of low-volume liquid handling robots, for molecular biological applications such as quantitative PCR and epigenetics.

  14. Predicting changes in cardiac myocyte contractility during early drug discovery with in vitro assays

    International Nuclear Information System (INIS)

    Morton, M.J.; Armstrong, D.; Abi Gerges, N.; Bridgland-Taylor, M.; Pollard, C.E.; Bowes, J.; Valentin, J.-P.

    2014-01-01

    Cardiovascular-related adverse drug effects are a major concern for the pharmaceutical industry. Activity of an investigational drug at the L-type calcium channel could manifest in a number of ways, including changes in cardiac contractility. The aim of this study was to define which of the two assay technologies – radioligand-binding or automated electrophysiology – was most predictive of contractility effects in an in vitro myocyte contractility assay. The activity of reference and proprietary compounds at the L-type calcium channel was measured by radioligand-binding assays, conventional patch-clamp, automated electrophysiology, and by measurement of contractility in canine isolated cardiac myocytes. Activity in the radioligand-binding assay at the L-type Ca channel phenylalkylamine binding site was most predictive of an inotropic effect in the canine cardiac myocyte assay. The sensitivity was 73%, specificity 83% and predictivity 78%. The radioligand-binding assay may be run at a single test concentration and potency estimated. The least predictive assay was automated electrophysiology which showed a significant bias when compared with other assay formats. Given the importance of the L-type calcium channel, not just in cardiac function, but also in other organ systems, a screening strategy emerges whereby single concentration ligand-binding can be performed early in the discovery process with sufficient predictivity, throughput and turnaround time to influence chemical design and address a significant safety-related liability, at relatively low cost. - Highlights: • The L-type calcium channel is a significant safety liability during drug discovery. • Radioligand-binding to the L-type calcium channel can be measured in vitro. • The assay can be run at a single test concentration as part of a screening cascade. • This measurement is highly predictive of changes in cardiac myocyte contractility

  15. Predicting changes in cardiac myocyte contractility during early drug discovery with in vitro assays

    Energy Technology Data Exchange (ETDEWEB)

    Morton, M.J., E-mail: michael.morton@astrazeneca.com [Discovery Sciences, AstraZeneca, Macclesfield, Cheshire SK10 4TG (United Kingdom); Armstrong, D.; Abi Gerges, N. [Drug Safety and Metabolism, AstraZeneca, Macclesfield, Cheshire SK10 4TG (United Kingdom); Bridgland-Taylor, M. [Discovery Sciences, AstraZeneca, Macclesfield, Cheshire SK10 4TG (United Kingdom); Pollard, C.E.; Bowes, J.; Valentin, J.-P. [Drug Safety and Metabolism, AstraZeneca, Macclesfield, Cheshire SK10 4TG (United Kingdom)

    2014-09-01

    Cardiovascular-related adverse drug effects are a major concern for the pharmaceutical industry. Activity of an investigational drug at the L-type calcium channel could manifest in a number of ways, including changes in cardiac contractility. The aim of this study was to define which of the two assay technologies – radioligand-binding or automated electrophysiology – was most predictive of contractility effects in an in vitro myocyte contractility assay. The activity of reference and proprietary compounds at the L-type calcium channel was measured by radioligand-binding assays, conventional patch-clamp, automated electrophysiology, and by measurement of contractility in canine isolated cardiac myocytes. Activity in the radioligand-binding assay at the L-type Ca channel phenylalkylamine binding site was most predictive of an inotropic effect in the canine cardiac myocyte assay. The sensitivity was 73%, specificity 83% and predictivity 78%. The radioligand-binding assay may be run at a single test concentration and potency estimated. The least predictive assay was automated electrophysiology which showed a significant bias when compared with other assay formats. Given the importance of the L-type calcium channel, not just in cardiac function, but also in other organ systems, a screening strategy emerges whereby single concentration ligand-binding can be performed early in the discovery process with sufficient predictivity, throughput and turnaround time to influence chemical design and address a significant safety-related liability, at relatively low cost. - Highlights: • The L-type calcium channel is a significant safety liability during drug discovery. • Radioligand-binding to the L-type calcium channel can be measured in vitro. • The assay can be run at a single test concentration as part of a screening cascade. • This measurement is highly predictive of changes in cardiac myocyte contractility.

  16. Computer-Aided Experiment Planning toward Causal Discovery in Neuroscience.

    Science.gov (United States)

    Matiasz, Nicholas J; Wood, Justin; Wang, Wei; Silva, Alcino J; Hsu, William

    2017-01-01

    Computers help neuroscientists to analyze experimental results by automating the application of statistics; however, computer-aided experiment planning is far less common, due to a lack of similar quantitative formalisms for systematically assessing evidence and uncertainty. While ontologies and other Semantic Web resources help neuroscientists to assimilate required domain knowledge, experiment planning requires not only ontological but also epistemological (e.g., methodological) information regarding how knowledge was obtained. Here, we outline how epistemological principles and graphical representations of causality can be used to formalize experiment planning toward causal discovery. We outline two complementary approaches to experiment planning: one that quantifies evidence per the principles of convergence and consistency, and another that quantifies uncertainty using logical representations of constraints on causal structure. These approaches operationalize experiment planning as the search for an experiment that either maximizes evidence or minimizes uncertainty. Despite work in laboratory automation, humans must still plan experiments and will likely continue to do so for some time. There is thus a great need for experiment-planning frameworks that are not only amenable to machine computation but also useful as aids in human reasoning.

  17. Platform for Automated Real-Time High Performance Analytics on Medical Image Data.

    Science.gov (United States)

    Allen, William J; Gabr, Refaat E; Tefera, Getaneh B; Pednekar, Amol S; Vaughn, Matthew W; Narayana, Ponnada A

    2018-03-01

    Biomedical data are quickly growing in volume and in variety, providing clinicians an opportunity for better clinical decision support. Here, we demonstrate a robust platform that uses software automation and high performance computing (HPC) resources to achieve real-time analytics of clinical data, specifically magnetic resonance imaging (MRI) data. We used the Agave application programming interface to facilitate communication, data transfer, and job control between an MRI scanner and an off-site HPC resource. In this use case, Agave executed the graphical pipeline tool GRAphical Pipeline Environment (GRAPE) to perform automated, real-time, quantitative analysis of MRI scans. Same-session image processing will open the door for adaptive scanning and real-time quality control, potentially accelerating the discovery of pathologies and minimizing patient callbacks. We envision this platform can be adapted to other medical instruments, HPC resources, and analytics tools.

  18. Numeric calculation of celestial bodies with spreadsheet analysis

    Science.gov (United States)

    Koch, Alexander

    2016-04-01

    The motion of the planets and moons in our solar system can easily be calculated for any time using the Kepler laws of planetary motion. The Kepler laws are a special case of Newton's law of gravitation, which must be used directly if more than two celestial bodies are considered. It is therefore more fundamental to calculate the motion using the gravitational law. The problem is that, with the gravitational law, it is not possible to calculate the state of motion in a single calculation step. The motion has to be calculated numerically over many time intervals. For this reason, spreadsheet analysis is helpful for students. Skills in programmes like Excel, Calc or Gnumeric are important in professional life and can easily be learnt by students. These programmes can help to calculate the complex motions over many intervals: the more intervals are used, the more exact the calculated orbits become. The students will first get a quick course in Excel. After that, following instructions, they calculate the 2D coordinates of the orbits of the Moon and Mars. Step by step, the students code the formulae for calculating physical parameters such as coordinates, force, acceleration and velocity. The project is limited to 4 weeks or 8 lessons, so the calculation will only cover one body orbiting a central mass such as the Earth or the Sun. The three-body problem can only be discussed briefly at the end of the project.
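
    The row-by-row calculation such a spreadsheet performs corresponds to the explicit integration loop sketched below; the time step, mass and initial conditions are illustrative choices for a body orbiting the Earth.

```python
import math

# Sketch of the step-by-step orbit calculation a spreadsheet performs:
# acceleration from Newton's law of gravitation, then velocity and position
# updated each time step (semi-implicit Euler). Values approximate the Moon
# orbiting the Earth and are for illustration only.
G = 6.674e-11            # m^3 kg^-1 s^-2
M = 5.972e24             # mass of the Earth, kg
x, y = 3.844e8, 0.0      # initial position, m
vx, vy = 0.0, 1022.0     # initial velocity, m/s
dt = 60.0                # time step, s (smaller dt -> more accurate orbit)

for step in range(40 * 24 * 60):             # ~40 days of simulated time
    r = math.hypot(x, y)
    ax, ay = -G * M * x / r**3, -G * M * y / r**3
    vx, vy = vx + ax * dt, vy + ay * dt      # update velocity first ...
    x, y = x + vx * dt, y + vy * dt          # ... then position (Euler-Cromer)

print(f"after ~40 days: r = {math.hypot(x, y) / 1e8:.3f} x 10^8 m")
```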

  19. Improving the driver-automation interaction: an approach using automation uncertainty.

    Science.gov (United States)

    Beller, Johannes; Heesen, Matthias; Vollrath, Mark

    2013-12-01

    The aim of this study was to evaluate whether communicating automation uncertainty improves the driver-automation interaction. A false system understanding of infallibility may provoke automation misuse and can lead to severe consequences in case of automation failure. The presentation of automation uncertainty may prevent this false system understanding and, as was shown by previous studies, may have numerous benefits. Few studies, however, have clearly shown the potential of communicating uncertainty information in driving. The current study fills this gap. We conducted a driving simulator experiment, varying the presented uncertainty information between participants (no uncertainty information vs. uncertainty information) and the automation reliability (high vs. low) within participants. Participants interacted with a highly automated driving system while engaging in secondary tasks and were required to cooperate with the automation to drive safely. Quantile regressions and multilevel modeling showed that the presentation of uncertainty information increases the time to collision in the case of automation failure. Furthermore, the data indicated improved situation awareness and better knowledge of fallibility for the experimental group. Consequently, the automation with the uncertainty symbol received higher trust ratings and increased acceptance. The presentation of automation uncertainty through a symbol improves overall driver-automation cooperation. Most automated systems in driving could benefit from displaying reliability information. This display might improve the acceptance of fallible systems and further enhance driver-automation cooperation.

  20. Automation Hooks Architecture Trade Study for Flexible Test Orchestration

    Science.gov (United States)

    Lansdowne, Chatwin A.; Maclean, John R.; Graffagnino, Frank J.; McCartney, Patrick A.

    2010-01-01

    We describe the conclusions of a technology and communities survey supported by concurrent and follow-on proof-of-concept prototyping to evaluate feasibility of defining a durable, versatile, reliable, visible software interface to support strategic modularization of test software development. The objective is that test sets and support software with diverse origins, ages, and abilities can be reliably integrated into test configurations that assemble and tear down and reassemble with scalable complexity in order to conduct both parametric tests and monitored trial runs. The resulting approach is based on integration of three recognized technologies that are currently gaining acceptance within the test industry and when combined provide a simple, open and scalable test orchestration architecture that addresses the objectives of the Automation Hooks task. The technologies are automated discovery using multicast DNS Zero Configuration Networking (zeroconf), commanding and data retrieval using resource-oriented Restful Web Services, and XML data transfer formats based on Automatic Test Markup Language (ATML). This open-source standards-based approach provides direct integration with existing commercial off-the-shelf (COTS) analysis software tools.

  1. Automated Generation of Fault Management Artifacts from a Simple System Model

    Science.gov (United States)

    Kennedy, Andrew K.; Day, John C.

    2013-01-01

    Our understanding of off-nominal behavior - failure modes and fault propagation - in complex systems is often based purely on engineering intuition; specific cases are assessed in an ad hoc fashion as a (fallible) fault management engineer sees fit. This work is an attempt to provide a more rigorous approach to this understanding and assessment by automating the creation of a fault management artifact, the Failure Modes and Effects Analysis (FMEA) through querying a representation of the system in a SysML model. This work builds off the previous development of an off-nominal behavior model for the upcoming Soil Moisture Active-Passive (SMAP) mission at the Jet Propulsion Laboratory. We further developed the previous system model to more fully incorporate the ideas of State Analysis, and it was restructured in an organizational hierarchy that models the system as layers of control systems while also incorporating the concept of "design authority". We present software that was developed to traverse the elements and relationships in this model to automatically construct an FMEA spreadsheet. We further discuss extending this model to automatically generate other typical fault management artifacts, such as Fault Trees, to efficiently portray system behavior, and depend less on the intuition of fault management engineers to ensure complete examination of off-nominal behavior.
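
    The artifact-generation step reduces to walking the component/failure-mode relationships held in the model and emitting one FMEA row per failure effect. The sketch below uses a hand-coded nested dictionary as a stand-in for the SysML model query described above; components, failure modes and effects are hypothetical.

```python
import csv
from pathlib import Path

# Schematic sketch of generating an FMEA table by traversing a system model.
# The nested dictionary stands in for the SysML model query; components,
# failure modes and effects here are hypothetical.
system_model = {
    "Battery": {"cell short": ["loss of bus power", "thermal event"],
                "open circuit": ["loss of bus power"]},
    "Star tracker": {"sensor dropout": ["degraded attitude knowledge"]},
}

with open("fmea.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Component", "Failure mode", "Effect"])
    for component, modes in system_model.items():
        for mode, effects in modes.items():
            for effect in effects:
                writer.writerow([component, mode, effect])

print(Path("fmea.csv").read_text())
```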

  2. SkyDiscovery: Humans and Machines Working Together

    Science.gov (United States)

    Donalek, Ciro; Fang, K.; Drake, A. J.; Djorgovski, S. G.; Graham, M. J.; Mahabal, A.; Williams, R.

    2011-01-01

    Synoptic sky surveys are now discovering tens to hundreds of transient events every clear night, and that data rate is expected to increase dramatically as we move towards the LSST. A key problem is classification of transients, which determines their scientific interest and possible follow-up. Some of the relevant information is contextual, and easily recognizable by humans looking at images, but it is very hard to encode in the data pipelines. Crowdsourcing (aka Citizen Science) provides one possible way to gather such information. SkyDiscovery.org is a website that allows experts and citizen science enthusiasts to work together and share information in a collaborative scientific discovery environment. Currently there are two projects running on the website. In the Event Classification project users help finding candidate transients through a series of questions related to the images shown. Event classification depends very much form the contextual information and humans are remarkably effective at recognizing noise in incomplete heterogeneous data and figuring out which contextual information is important. In the SNHunt project users are requested to look for new objects appearing on images of galaxies taken by the Catalina Real-time Transient Survey, in order to find all the supernovae occurring in nearby bright galaxies. Images are served alongside with other tools that can help the discovery. A multi level approach allows the complexity of the interface to be tailored to the expertise level of the user. An entry level user can just review images and validate events as being real, while a more advanced user would be able to interact with the data associated to an event. The data gathered will not be only analyzed and used directly for some specific science project, but also to train well-defined algorithms to be used in automating such data analysis in the future.

  3. Towards agile large-scale predictive modelling in drug discovery with flow-based programming design principles.

    Science.gov (United States)

    Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola

    2016-01-01

    Predictive modelling in drug discovery is challenging to automate as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems lack the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system, which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.
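
    For orientation, a minimal two-task pipeline in plain Luigi (the system the paper extends) is sketched below. This is not the SciLuigi API itself, whose flow-based wiring differs; file names and task logic are illustrative placeholders.

```python
import luigi

# Minimal two-task Luigi pipeline for orientation: a preprocessing task feeds
# a (placeholder) modelling task. Plain Luigi, not the SciLuigi extension;
# file names and contents are illustrative.

class PrepareData(luigi.Task):
    def output(self):
        return luigi.LocalTarget("interactions_clean.tsv")

    def run(self):
        with self.output().open("w") as f:
            f.write("compound\ttarget\tactivity\n")   # placeholder content

class TrainModel(luigi.Task):
    def requires(self):
        return PrepareData()

    def output(self):
        return luigi.LocalTarget("model_report.txt")

    def run(self):
        with self.input().open() as fin, self.output().open("w") as fout:
            n_rows = sum(1 for _ in fin) - 1           # subtract header line
            fout.write(f"trained placeholder model on {n_rows} records\n")

if __name__ == "__main__":
    luigi.build([TrainModel()], local_scheduler=True)
```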

  4. Computational discovery of extremal microstructure families

    Science.gov (United States)

    Chen, Desai; Skouras, Mélina; Zhu, Bo; Matusik, Wojciech

    2018-01-01

    Modern fabrication techniques, such as additive manufacturing, can be used to create materials with complex custom internal structures. These engineered materials exhibit a much broader range of bulk properties than their base materials and are typically referred to as metamaterials or microstructures. Although metamaterials with extraordinary properties have many applications, designing them is very difficult and is generally done by hand. We propose a computational approach to discover families of microstructures with extremal macroscale properties automatically. Using efficient simulation and sampling techniques, we compute the space of mechanical properties covered by physically realizable microstructures. Our system then clusters microstructures with common topologies into families. Parameterized templates are eventually extracted from families to generate new microstructure designs. We demonstrate these capabilities on the computational design of mechanical metamaterials and present five auxetic microstructure families with extremal elastic material properties. Our study opens the way for the completely automated discovery of extremal microstructures across multiple domains of physics, including applications reliant on thermal, electrical, and magnetic properties. PMID:29376124

  5. Web-Scale Discovery Services Retrieve Relevant Results in Health Sciences Topics Including MEDLINE Content

    Directory of Open Access Journals (Sweden)

    Elizabeth Margaret Stovold

    2017-06-01

    coverage of MEDLINE, they recorded the first 50 results from each of the 6 PubMed searches in a spreadsheet. During data collection at the WSD sites, they searched for these references to discover if the WSD tool at each site indexed these known items. Authors adopted measures to control for any customisation of the product setup at each data collection site. In particular, they excluded local holdings from the results by limiting the searches to scholarly, peer-reviewed articles. Main results – Authors reported results for 5 of the 6 sites. All of the WSD tools retrieved between 50-60% relevant results. EDS retrieved the highest number of relevant records (195/360 and 216/360), while Primo retrieved the lowest (167/328 and 169/325). There was good observer agreement (k=0.725) for the relevance assessment. The duplicate detection rate was similar in EDS and Summon (between 96-97% unique articles), while the Primo searches returned 82.9-84.9% unique articles. All three tools retrieved relevant results that were not indexed in MEDLINE, and retrieved relevant material indexed in MEDLINE that was not retrieved in the PubMed searches. EDS and Summon retrieved more non-MEDLINE material than Primo. EDS performed best in the known-item searches, with 300/300 and 299/300 items retrieved, while Primo performed worst with 230/300 and 267/300 items retrieved. The Summon platform features an “automated query expansion” search function, where user-entered keywords are matched to related search terms and these are automatically searched along with the original keyword. The authors observed that this function resulted in a wholly relevant first page of results for one of the search questions tested in Summon. Conclusion – While EDS performed slightly better overall, the difference was not great enough in this small sample of test sites to recommend EDS over the other tools being tested. The automated query expansion found in Summon is a useful function that is worthy of further
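
    For readers unfamiliar with the agreement statistic quoted above, Cohen's kappa corrects raw agreement for chance. The sketch below works the calculation for a hypothetical 2x2 table of relevance judgements; the counts are made up and are not the study's data.

```python
# Sketch of Cohen's kappa for two raters judging result relevance.
# The 2x2 counts are hypothetical, not the study's data.
both_rel, both_irrel = 150, 120          # raters agree
a_only, b_only = 20, 30                  # raters disagree
n = both_rel + both_irrel + a_only + b_only

p_observed = (both_rel + both_irrel) / n
p_a_rel = (both_rel + a_only) / n        # rater A marks relevant
p_b_rel = (both_rel + b_only) / n        # rater B marks relevant
p_chance = p_a_rel * p_b_rel + (1 - p_a_rel) * (1 - p_b_rel)

kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"kappa = {kappa:.3f}")
```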

  6. Automated genome mining of ribosomal peptide natural products

    Energy Technology Data Exchange (ETDEWEB)

    Mohimani, Hosein; Kersten, Roland; Liu, Wei; Wang, Mingxun; Purvine, Samuel O.; Wu, Si; Brewer, Heather M.; Pasa-Tolic, Ljiljana; Bandeira, Nuno; Moore, Bradley S.; Pevzner, Pavel A.; Dorrestein, Pieter C.

    2014-07-31

    Ribosomally synthesized and posttranslationally modified peptides (RiPPs), especially from microbial sources, are a large group of bioactive natural products that are a promising source of new (bio)chemistry and bioactivity (1). In light of exponentially increasing microbial genome databases and improved mass spectrometry (MS)-based metabolomic platforms, there is a need for computational tools that connect natural product genotypes predicted from microbial genome sequences with their corresponding chemotypes from metabolomic datasets. Here, we introduce RiPPquest, a tandem mass spectrometry database search tool for identification of microbial RiPPs and apply it for lanthipeptide discovery. RiPPquest uses genomics to limit search space to the vicinity of RiPP biosynthetic genes and proteomics to analyze extensive peptide modifications and compute p-values of peptide-spectrum matches (PSMs). We highlight RiPPquest by connection of multiple RiPPs from extracts of Streptomyces to their gene clusters and by the discovery of a new class III lanthipeptide, informatipeptin, from Streptomyces viridochromogenes DSM 40736 as the first natural product to be identified in an automated fashion by genome mining. The presented tool is available at cyclo.ucsd.edu.

  7. Automated sample preparation using membrane microtiter extraction for bioanalytical mass spectrometry.

    Science.gov (United States)

    Janiszewski, J; Schneider, P; Hoffmaster, K; Swyden, M; Wells, D; Fouda, H

    1997-01-01

    The development and application of membrane solid phase extraction (SPE) in 96-well microtiter plate format are described for the automated analysis of drugs in biological fluids. The small bed volume of the membrane allows elution of the analyte in a very small solvent volume, permitting direct HPLC injection and negating the need for the time-consuming solvent evaporation step. A programmable liquid handling station (Quadra 96) was modified to automate all SPE steps. To avoid drying of the SPE bed and to enhance the analytical precision, a novel protocol for performing the condition, load and wash steps in rapid succession was utilized. A block of 96 samples can now be extracted in 10 min, about 30 times faster than manual solvent extraction or single-cartridge SPE methods. This processing speed complements the high-throughput speed of contemporary high performance liquid chromatography mass spectrometry (HPLC/MS) analysis. The quantitative analysis of a test analyte (Ziprasidone) in plasma demonstrates the utility and throughput of membrane SPE in combination with HPLC/MS. The results obtained with the current automated procedure compare favorably with those obtained using solvent and traditional solid phase extraction methods. The method has been used for the analysis of numerous drug prototypes in biological fluids to support drug discovery efforts.

  8. Shotgun Proteomics and Biomarker Discovery

    Directory of Open Access Journals (Sweden)

    W. Hayes McDonald

    2002-01-01

    Full Text Available Coupling large-scale sequencing projects with the amino acid sequence information that can be gleaned from tandem mass spectrometry (MS/MS) has made it much easier to analyze complex mixtures of proteins. The limits of this “shotgun” approach, in which the protein mixture is proteolytically digested before separation, can be further expanded by separating the resulting mixture of peptides prior to MS/MS analysis. Both single-dimensional high pressure liquid chromatography (LC) and multidimensional LC (LC/LC) can be directly interfaced with the mass spectrometer to allow for automated collection of tremendous quantities of data. While there is no single technique that addresses all proteomic challenges, the shotgun approaches, especially LC/LC-MS/MS-based techniques such as MudPIT (multidimensional protein identification technology), show advantages over gel-based techniques in speed, sensitivity, scope of analysis, and dynamic range. Advances in the ability to quantitate differences between samples and to detect an array of post-translational modifications allow for the discovery of classes of protein biomarkers that were previously unassailable.

  9. Closing the Loop: Automated Data-Driven Cognitive Model Discoveries Lead to Improved Instruction and Learning Gains

    Science.gov (United States)

    Liu, Ran; Koedinger, Kenneth R.

    2017-01-01

    As the use of educational technology becomes more ubiquitous, an enormous amount of learning process data is being produced. Educational data mining seeks to analyze and model these data, with the ultimate goal of improving learning outcomes. The most firmly grounded and rigorous evaluation of an educational data mining discovery is whether it…

  10. Low cost automation

    International Nuclear Information System (INIS)

    1987-03-01

    This book describes how to build an automation plan and design automation facilities, and covers the automation of chip-cutting processes, including the basics of cutting, NC machines and chip handling; automation units such as drilling, tapping, boring, milling and slide units; the application of hydraulics (oil pressure), covering their characteristics and basic hydraulic circuits; the application of pneumatics; and the kinds of automation and their application to processes such as assembly, transportation, automatic machines and factory automation.

  11. Modelling accidental releases of tritium in the environment: application as an excel spreadsheet

    International Nuclear Information System (INIS)

    Le Dizes, S.; Tamponnet, C.

    2004-01-01

    An Excel spreadsheet implementation of the simplified modelling approach to tritium transfer in the environment developed by Tamponnet (2002) is presented. Based on the use of growth models of biological systems (plants, animals, etc.), the two-pool model (organic tritium and tritiated water) that was developed estimates the concentration of tritium within the different compartments of the food chain and, ultimately, the ingestion dose to man in the case of a chronic or accidental release of tritium into a river or the atmosphere. Data and knowledge have been implemented in Excel using the object-oriented programming language Visual Basic (Microsoft Visual Basic 6.0). The structure of the conceptual model and the Excel sheet are first briefly presented. A numerical application of the model under a scenario of an accidental release of tritium into the atmosphere is then presented. Simulation results and perspectives are discussed. (author)
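
    The two-pool structure (tritiated water exchanging with the environment and feeding an organically bound tritium pool) leads to a pair of coupled first-order equations. The sketch below integrates them with hypothetical rate constants and an illustrative one-day exposure pulse; the actual parameter values live in the Excel implementation.

```python
# Sketch of a two-pool tritium model: a tritiated-water (HTO) pool exchanging
# with the environment and feeding an organically bound tritium (OBT) pool.
# Rate constants and the exposure pulse are hypothetical illustrations only.
dt = 0.1                      # time step, days
k_loss = 0.5                  # HTO loss rate to the environment, 1/day
k_incorp = 0.05               # HTO -> OBT incorporation rate, 1/day
k_turnover = 0.01             # OBT turnover (loss) rate, 1/day

hto, obt, peak_hto = 0.0, 0.0, 0.0   # pool activities, arbitrary Bq units
for step in range(int(60 / dt)):                 # 60 days of simulated time
    t = step * dt
    intake = 100.0 if t < 1.0 else 0.0           # one-day accidental exposure
    d_hto = intake - (k_loss + k_incorp) * hto
    d_obt = k_incorp * hto - k_turnover * obt
    hto += d_hto * dt
    obt += d_obt * dt
    peak_hto = max(peak_hto, hto)

print(f"peak HTO = {peak_hto:.1f} Bq; day 60: HTO = {hto:.2f} Bq, OBT = {obt:.2f} Bq")
```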

  12. SAMPL4 & DOCK3.7: lessons for automated docking procedures

    Science.gov (United States)

    Coleman, Ryan G.; Sterling, Teague; Weiss, Dahlia R.

    2014-03-01

    The SAMPL4 challenges were used to test current automated methods for solvation energy, virtual screening, pose and affinity prediction of the molecular docking pipeline DOCK 3.7. Additionally, first-order models of binding affinity were proposed as milestones for any method predicting binding affinity. Several important discoveries about the molecular docking software were made during the challenge: (1) solvation energies of ligands were five-fold worse than with any other method used in SAMPL4, including methods that were similarly fast; (2) HIV integrase is a challenging target, but automated docking on the correct allosteric site performed well in terms of virtual screening and pose prediction (compared to other methods), while affinity prediction, as expected, was very poor; (3) molecular docking grid sizes can be very important, and serious errors were discovered with default settings that have been adjusted for all future work. Overall, lessons from SAMPL4 suggest many changes to molecular docking tools, not just DOCK 3.7, that could improve the state of the art. Future difficulties and projects will be discussed.

  13. Topology Discovery Using Cisco Discovery Protocol

    OpenAIRE

    Rodriguez, Sergio R.

    2009-01-01

    In this paper we address the problem of discovering network topology in proprietary networks. Namely, we investigate topology discovery in Cisco-based networks. Cisco devices run Cisco Discovery Protocol (CDP) which holds information about these devices. We first compare properties of topologies that can be obtained from networks deploying CDP versus Spanning Tree Protocol (STP) and Management Information Base (MIB) Forwarding Database (FDB). Then we describe a method of discovering topology ...
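
    The paper's method walks the adjacencies that CDP-enabled Cisco devices report about their neighbors; assuming the per-device neighbor lists have already been collected (for example, parsed from 'show cdp neighbors' output), a breadth-first traversal is enough to assemble the topology graph. A minimal sketch with hypothetical device names and neighbor tables:

    ```python
    from collections import deque

    # Hypothetical CDP neighbor tables; real entries also carry platform,
    # capabilities and port identifiers.
    cdp_neighbors = {
        "core1":   ["dist1", "dist2"],
        "dist1":   ["core1", "access1", "access2"],
        "dist2":   ["core1", "access2"],
        "access1": ["dist1"],
        "access2": ["dist1", "dist2"],
    }

    def discover_topology(seed, neighbor_table):
        """Breadth-first walk over CDP adjacencies from a seed device, returning
        the set of discovered devices and the set of undirected links."""
        seen, links = {seed}, set()
        queue = deque([seed])
        while queue:
            device = queue.popleft()
            for peer in neighbor_table.get(device, []):
                links.add(tuple(sorted((device, peer))))
                if peer not in seen:
                    seen.add(peer)
                    queue.append(peer)
        return seen, links

    devices, links = discover_topology("core1", cdp_neighbors)
    print(len(devices), "devices,", len(links), "links:", sorted(links))
    ```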

  14. Autonomy and Automation

    Science.gov (United States)

    Shively, Jay

    2017-01-01

    A significant level of debate and confusion has surrounded the meaning of the terms autonomy and automation. Automation is a multi-dimensional concept, and we propose that Remotely Piloted Aircraft Systems (RPAS) automation should be described with reference to the specific system and task that has been automated, the context in which the automation functions, and other relevant dimensions. In this paper, we present definitions of automation, pilot in the loop, pilot on the loop and pilot out of the loop. We further propose that in future, the International Civil Aviation Organization (ICAO) RPAS Panel avoids the use of the terms autonomy and autonomous when referring to automated systems on board RPA. Work Group 7 proposes to develop, in consultation with other workgroups, a taxonomy of Levels of Automation for RPAS.

  15. Discovery of gigantic molecular nanostructures using a flow reaction array as a search engine.

    Science.gov (United States)

    Zang, Hong-Ying; de la Oliva, Andreu Ruiz; Miras, Haralampos N; Long, De-Liang; McBurney, Roy T; Cronin, Leroy

    2014-04-28

    The discovery of gigantic molecular nanostructures like coordination and polyoxometalate clusters is extremely time-consuming since a vast combinatorial space needs to be searched, and even a systematic and exhaustive exploration of the available synthetic parameters relies on a great deal of serendipity. Here we present a synthetic methodology that combines a flow reaction array and algorithmic control to give a chemical 'real-space' search engine leading to the discovery and isolation of a range of new molecular nanoclusters based on [Mo(2)O(2)S(2)](2+)-based building blocks with either fourfold (C4) or fivefold (C5) symmetry templates and linkers. This engine leads us to isolate six new nanoscale cluster compounds: 1, {Mo(10)(C5)}; 2, {Mo(14)(C4)4(C5)2}; 3, {Mo(60)(C4)10}; 4, {Mo(48)(C4)6}; 5, {Mo(34)(C4)4}; 6, {Mo(18)(C4)9}; in only 200 automated experiments from a parameter space spanning ~5 million possible combinations.

  16. A novel real-time data acquisition using an Excel spreadsheet in pendulum experiment tool with light-based timer

    Science.gov (United States)

    Adhitama, Egy; Fauzi, Ahmad

    2018-05-01

    In this study, a pendulum experiment tool with a light-based timer has been developed to measure the period of a simple pendulum. The acquired data are automatically recorded in an Excel spreadsheet. The intensity of monochromatic light, sensed by a 3DU5C phototransistor, changes dynamically as the pendulum swings. The changing intensity varies the phototransistor's resistance; this signal is processed by an ATmega328 microcontroller to obtain the period from the times at which the pendulum crosses the light beam. Using the average of the measured periods, the gravitational acceleration value has been determined accurately and precisely.
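
    The gravitational acceleration follows from the simple-pendulum relation T = 2π√(L/g), i.e. g = 4π²L/T². A minimal sketch of that last step, assuming the spreadsheet has logged beam-crossing timestamps and that the light beam sits at the equilibrium position (so one full period spans every second crossing); the timestamps below are hypothetical.

    ```python
    import math

    def g_from_period(length_m, period_s):
        """Simple-pendulum relation T = 2*pi*sqrt(L/g)  =>  g = 4*pi**2 * L / T**2."""
        return 4.0 * math.pi ** 2 * length_m / period_s ** 2

    # Hypothetical beam-crossing timestamps (s); the bob crosses the beam twice per
    # oscillation, so each period spans every second crossing.
    crossings = [0.00, 1.01, 2.01, 3.02, 4.02, 5.03, 6.03]
    periods = [crossings[i + 2] - crossings[i] for i in range(len(crossings) - 2)]
    T_avg = sum(periods) / len(periods)
    print("T =", round(T_avg, 3), "s ->  g =", round(g_from_period(1.0, T_avg), 3), "m/s^2")
    ```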

  17. Geena 2, improved automated analysis of MALDI/TOF mass spectra.

    Science.gov (United States)

    Romano, Paolo; Profumo, Aldo; Rocco, Mattia; Mangerini, Rosa; Ferri, Fabio; Facchiano, Angelo

    2016-03-02

    Mass spectrometry (MS) is producing high volumes of data supporting oncological sciences, especially for translational research. Most of the related elaborations can be carried out by combining existing tools at different levels, but little is currently available for the automation of the fundamental steps. For the analysis of MALDI/TOF spectra, a number of pre-processing steps are required, including joining of isotopic abundances for a given molecular species, normalization of signals against an internal standard, background noise removal, averaging multiple spectra from the same sample, and aligning spectra from different samples. In this paper, we present Geena 2, a public software tool for the automated execution of these pre-processing steps for MALDI/TOF spectra. Geena 2 has been developed in a Linux-Apache-MySQL-PHP web development environment, with scripts in PHP and Perl. Input and output are managed as simple formats that can be consumed by any database system and spreadsheet software. Input data may also be stored in a MySQL database. Processing methods are based on original heuristic algorithms which are introduced in the paper. Three simple and intuitive web interfaces are available: the Standard Search Interface, which allows complete control over all parameters, the Bright Search Interface, which leaves to the user the possibility to tune parameters for alignment of spectra, and the Quick Search Interface, which limits the number of parameters to a minimum by using default values for the majority of parameters. Geena 2 has been utilized, in conjunction with a statistical analysis tool, in three published experimental works: a proteomic study on the effects of long-term cryopreservation on the low molecular weight fraction of serum proteome, and two retrospective serum proteomic studies, one on the risk of developing breast cancer in patients affected by gross cystic disease of the breast (GCDB) and the other for the identification of a predictor of
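
    Geena 2 itself is implemented in PHP and Perl with its own heuristic algorithms; purely to illustrate two of the pre-processing steps listed above (normalisation against an internal standard and averaging of replicate spectra), here is a minimal Python sketch that assumes spectra are simple (m/z, intensity) lists sharing a common m/z grid, with a hypothetical internal-standard mass.

    ```python
    def normalize_to_internal_standard(spectrum, std_mz, tol=0.5):
        """Scale intensities so the peak nearest the internal standard equals 1.0."""
        candidates = [inten for mz, inten in spectrum if abs(mz - std_mz) <= tol]
        if not candidates:
            raise ValueError("internal standard not found within tolerance")
        ref = max(candidates)
        return [(mz, inten / ref) for mz, inten in spectrum]

    def average_replicates(spectra):
        """Average intensities of replicate spectra acquired on a common m/z grid."""
        n = len(spectra)
        return [(mz, sum(s[k][1] for s in spectra) / n)
                for k, (mz, _) in enumerate(spectra[0])]

    # Hypothetical replicate spectra with an internal standard at m/z 1570.6.
    rep1 = [(1000.2, 120.0), (1570.6, 800.0), (2100.9, 60.0)]
    rep2 = [(1000.2, 100.0), (1570.6, 760.0), (2100.9, 72.0)]
    avg = average_replicates([normalize_to_internal_standard(r, 1570.6)
                              for r in (rep1, rep2)])
    print(avg)
    ```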

  18. From Data to Knowledge to Discoveries: Artificial Intelligence and Scientific Workflows

    Directory of Open Access Journals (Sweden)

    Yolanda Gil

    2009-01-01

    Scientific computing has entered a new era of scale and sharing with the arrival of cyberinfrastructure facilities for computational experimentation. A key emerging concept is scientific workflows, which provide a declarative representation of complex scientific applications that can be automatically managed and executed in distributed shared resources. In the coming decades, computational experimentation will push the boundaries of current cyberinfrastructure in terms of inter-disciplinary scope and integrative models of scientific phenomena under study. This paper argues that knowledge-rich workflow environments will provide necessary capabilities for that vision by assisting scientists to validate and vet complex analysis processes and by automating important aspects of scientific exploration and discovery.

  19. The UNIX/XENIX Advantage: Applications in Libraries.

    Science.gov (United States)

    Gordon, Kelly L.

    1988-01-01

    Discusses the application of the UNIX/XENIX operating system to support administrative office automation functions--word processing, spreadsheets, database management systems, electronic mail, and communications--at the Central Michigan University Libraries. Advantages and disadvantages of the XENIX operating system and system configuration are…

  20. An automated Y-maze based on a reduced instruction set computer (RISC) microcontroller for the assessment of continuous spontaneous alternation in rats.

    Science.gov (United States)

    Heredia-López, Francisco J; Álvarez-Cervera, Fernando J; Collí-Alfaro, José G; Bata-García, José L; Arankowsky-Sandoval, Gloria; Góngora-Alfaro, José L

    2016-12-01

    Continuous spontaneous alternation behavior (SAB) in a Y-maze is used for evaluating working memory in rodents. Here, the design of an automated Y-maze equipped with three infrared optocouplers per arm, and commanded by a reduced instruction set computer (RISC) microcontroller is described. The software was devised for recording only true entries and exits to the arms. Experimental settings are programmed via a keyboard with three buttons and a display. The sequence of arm entries and the time spent in each arm and the neutral zone (NZ) are saved as a text file in a non-volatile memory for later transfer to a USB flash memory. Data files are analyzed with a program developed under LabVIEW® environment, and the results are exported to an Excel® spreadsheet file. Variables measured are: latency to exit the starting arm, sequence and number of arm entries, number of alternations, alternation percentage, and cumulative times spent in each arm and NZ. The automated Y-maze accurately detected the SAB decrease produced in rats by the muscarinic antagonist trihexyphenidyl, and its reversal by caffeine, having 100 % concordance with the alternation percentages calculated by two trained observers who independently watched videos of the same experiments. Although the values of time spent in the arms and NZ measured by the automated system had small discrepancies with those calculated by the observers, Bland-Altman analysis showed 95 % concordance in three pairs of comparisons, while in one it was 90 %, indicating that this system is a reliable and inexpensive alternative for the study of continuous SAB in rodents.
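
    The alternation percentage reported by such systems is conventionally computed from the sequence of arm entries: an alternation is an overlapping triplet of consecutive entries that visits three distinct arms, and the percentage is the number of alternations divided by (entries - 2). A minimal sketch of that calculation, independent of the authors' firmware and with a made-up entry sequence:

    ```python
    def alternation_percentage(entries):
        """Percentage of overlapping triplets of arm entries visiting three distinct arms."""
        if len(entries) < 3:
            return 0.0
        triplets = zip(entries, entries[1:], entries[2:])
        alternations = sum(1 for t in triplets if len(set(t)) == 3)
        return 100.0 * alternations / (len(entries) - 2)

    # Example sequence of arm entries logged by the maze (A, B, C are the three arms).
    sequence = list("ABCACBABCB")
    print(alternation_percentage(sequence))  # 5 alternating triplets out of 8 -> 62.5
    ```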

  1. Home Automation

    OpenAIRE

    Ahmed, Zeeshan

    2010-01-01

    In this paper I briefly discuss the importance of home automation system. Going in to the details I briefly present a real time designed and implemented software and hardware oriented house automation research project, capable of automating house's electricity and providing a security system to detect the presence of unexpected behavior.

  2. 14 CFR 406.143 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Discovery. 406.143 Section 406.143... Transportation Adjudications § 406.143 Discovery. (a) Initiation of discovery. Any party may initiate discovery... after a complaint has been filed. (b) Methods of discovery. The following methods of discovery are...

  3. Higgs Discovery

    DEFF Research Database (Denmark)

    Sannino, Francesco

    2013-01-01

    I discuss the impact of the discovery of a Higgs-like state on composite dynamics, starting by critically examining the reasons in favour of either an elementary or composite nature of this state. Accepting the standard model interpretation, I re-address the standard model vacuum stability within... has been challenged by the discovery of a not-so-heavy Higgs-like state. I will therefore review the recent discovery \cite{Foadi:2012bb} that the standard model top-induced radiative corrections naturally reduce the intrinsic non-perturbative mass of the composite Higgs state towards the desired... via first principle lattice simulations with encouraging results. The new findings show that the recent naive claims made about new strong dynamics at the electroweak scale being disfavoured by the discovery of a not-so-heavy composite Higgs are unwarranted. I will then introduce the more speculative...

  4. Optimization of automation: I. Estimation method of cognitive automation rates reflecting the effects of automation on human operators in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Seong, Poong Hyun

    2014-01-01

    Highlights:
    • We propose an estimation method of the automation rate by taking the advantages of automation as the estimation measures.
    • We conduct the experiments to examine the validity of the suggested method.
    • The higher the cognitive automation rate is, the greater the decreased rate of the working time will be.
    • The usefulness of the suggested estimation method is proved by statistical analyses.

    Abstract: Since automation was introduced in various industrial fields, the concept of the automation rate has been used to indicate the inclusion proportion of automation among all work processes or facilities. Expressions of the inclusion proportion of automation are predictable, as is the ability to express the degree of the enhancement of human performance. However, many researchers have found that a high automation rate does not guarantee high performance. Therefore, to reflect the effects of automation on human performance, this paper proposes a new estimation method of the automation rate that considers the effects of automation on human operators in nuclear power plants (NPPs). Automation in NPPs can be divided into two types: system automation and cognitive automation. Some general descriptions and characteristics of each type of automation are provided, and the advantages of automation are investigated. The advantages of each type of automation are used as measures of the estimation method of the automation rate. One advantage was found to be a reduction in the number of tasks, and another was a reduction in human cognitive task loads. The system and the cognitive automation rate were proposed as quantitative measures by taking advantage of the aforementioned benefits. To quantify the required human cognitive task loads and thus suggest the cognitive automation rate, Conant’s information-theory-based model was applied. The validity of the suggested method, especially as regards the cognitive automation rate, was proven by conducting

  5. An Automated Platform for Assessment of Congenital and Drug-Induced Arrhythmia with hiPSC-Derived Cardiomyocytes

    Directory of Open Access Journals (Sweden)

    Wesley L. McKeithan

    2017-10-01

    The ability to produce unlimited numbers of human induced pluripotent stem cell derived cardiomyocytes (hiPSC-CMs) harboring disease and patient-specific gene variants creates a new paradigm for modeling congenital heart diseases (CHDs) and predicting proarrhythmic liabilities of drug candidates. However, a major roadblock to implementing hiPSC-CM technology in drug discovery is that conventional methods for monitoring action potential (AP) kinetics and arrhythmia phenotypes in vitro have been too costly or technically challenging to execute in high throughput. Herein, we describe the first large-scale, fully automated and statistically robust analysis of AP kinetics and drug-induced proarrhythmia in hiPSC-CMs. The platform combines the optical recording of a small molecule fluorescent voltage sensing probe (VoltageFluor2.1.Cl), an automated high throughput microscope and automated image analysis to rapidly generate physiological measurements of cardiomyocytes (CMs). The technique can be readily adapted on any high content imager to study hiPSC-CM physiology and predict the proarrhythmic effects of drug candidates.

  6. VeriClick: an efficient tool for table format verification

    Science.gov (United States)

    Nagy, George; Tamhankar, Mangesh

    2012-01-01

    The essential layout attributes of a visual table can be defined by the location of four critical grid cells. Although these critical cells can often be located by automated analysis, some means of human interaction is necessary for correcting residual errors. VeriClick is a macro-enabled spreadsheet interface that provides ground-truthing, confirmation, correction, and verification functions for CSV tables. All user actions are logged. Experimental results of seven subjects on one hundred tables suggest that VeriClick can provide a ten- to twenty-fold speedup over performing the same functions with standard spreadsheet editing commands.

  7. Modelling accidental releases of carbon 14 in the environment: application as an excel spreadsheet

    International Nuclear Information System (INIS)

    Le Dizes, S.; Tamponnet, C.

    2004-01-01

    An Excel spreadsheet implementation of the simplified modelling approach to carbon 14 transfer in the environment developed by Tamponnet (2002) is presented. Based on the use of growth models of biological systems (plants, animals, etc.), the one-pool model (organic carbon) estimates the concentration of carbon 14 within the different compartments of the food chain and, ultimately, the dose to man by ingestion in the case of a chronic or accidental release of carbon 14 into a river or the atmosphere. Data and knowledge have been implemented in Excel using the object-oriented programming language Visual Basic (Microsoft Visual Basic 6.0). The structure of the conceptual model and the Excel sheet are first briefly presented. A numerical application of the model under a scenario of an accidental release of carbon 14 into the atmosphere is then presented. Simulation results and perspectives are discussed. (author)

  8. Development of a Genetic Algorithm to Automate Clustering of a Dependency Structure Matrix

    Science.gov (United States)

    Rogers, James L.; Korte, John J.; Bilardo, Vincent J.

    2006-01-01

    Much technology assessment and organization design data exists in Microsoft Excel spreadsheets. Tools are needed to put this data into a form that can be used by design managers to make design decisions. One need is to cluster data that is highly coupled. Tools such as the Dependency Structure Matrix (DSM) and a Genetic Algorithm (GA) can be of great benefit. However, no tool currently combines the DSM and a GA to solve the clustering problem. This paper describes a new software tool that interfaces a GA written as an Excel macro with a DSM in spreadsheet format. The results of several test cases are included to demonstrate how well this new tool works.
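
    The paper's tool is a GA written as an Excel macro that operates on a DSM held in a worksheet; purely as an illustration of the underlying idea, the Python sketch below evolves a cluster assignment that minimises the number of DSM couplings crossing cluster boundaries. It uses truncation selection and point mutation only (no crossover), a fixed number of clusters and a small made-up DSM, so it is a simplification rather than the authors' algorithm.

    ```python
    import random

    # Symmetric dependency structure matrix (1 = coupled) for six hypothetical elements.
    DSM = [
        [0, 1, 1, 0, 0, 0],
        [1, 0, 1, 0, 0, 0],
        [1, 1, 0, 0, 0, 0],
        [0, 0, 0, 0, 1, 1],
        [0, 0, 0, 1, 0, 1],
        [0, 0, 0, 1, 1, 0],
    ]
    N, N_CLUSTERS = len(DSM), 2

    def cost(assign):
        """Count couplings that cross cluster boundaries (lower is better)."""
        return sum(DSM[i][j] for i in range(N) for j in range(i + 1, N)
                   if assign[i] != assign[j])

    def evolve(pop_size=30, generations=200, mutation=0.2):
        pop = [[random.randrange(N_CLUSTERS) for _ in range(N)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=cost)
            survivors = pop[:pop_size // 2]          # truncation selection
            children = []
            while len(survivors) + len(children) < pop_size:
                child = random.choice(survivors)[:]
                for k in range(N):                   # point mutation
                    if random.random() < mutation:
                        child[k] = random.randrange(N_CLUSTERS)
                children.append(child)
            pop = survivors + children
        return min(pop, key=cost)

    best = evolve()
    print("assignment:", best, " cross-cluster couplings:", cost(best))
    ```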

  9. A procedure to improve the information flow in the assessment of discoveries of oil and gas resources in the Brazilian context

    Energy Technology Data Exchange (ETDEWEB)

    Rosa, Henrique; Suslick, Saul B.; Sousa, Sergio H.G. de [Universidade Estadual de Campinas, SP (Brazil). Inst. of Geosciences; Castro, Jonas Q. [ANP - Brazilian National Petroleum Agency, Rio de Janeiro, RJ (Brazil)

    2004-07-01

    This paper is focused on the elaboration of a standardization model for the existing flow of information between the Petroleum National Agency (ANP) and the concessionaire companies in the event of the discovery of any potentially commercial hydrocarbon resources inside their concession areas. The method proposed by Rosa (2003) included the analysis of a small sample of Oil and Gas Discovery Assessment Plans (PADs), elaborated by companies that operate in exploratory blocks in Brazil, under the regulatory context introduced by the Petroleum Law (Law 9478, August, 6th, 1997). The analysis of these documents made it possible to identify and target the problems originated from the lack of standardization. The results obtained facilitated the development of a model that helps the creation process of Oil and Gas Discovery Assessment Plans. It turns out that the standardization procedures suggested provide considerable advantages while speeding up several technical and regulatory steps. A software called 'ePADs' was developed to consolidate the automation of the several steps in the model for the standardization of the Oil and Gas Discovery Assessment Plans. A preliminary version has been tested with several different types of discoveries indicating a good performance by complying with all regulatory aspects and operational requirements. (author)

  10. BioinformatiqTM - integrating data types for proteomic discovery

    International Nuclear Information System (INIS)

    Arthur, J.W.; Harrison, M.; Manoharan, A.; Traini, M.; Shaw, E.; Wilkins, M.

    2001-01-01

    Proteomics (Wilkins et al. 1997) involves the large-scale analysis of expressed proteins. At each stage of the discovery process the researcher accumulates large volumes of data. These include: clinical or biological data about the sample being studied; details of sample purification and separation; images of 2D gels and associated information; MALDI mass spectra; MS/MS and PSD spectra; as well as meta-data relating to the projects undertaken and experiments performed. All this must be combined with existing databases of protein and EST sequences, post-translational modifications, and protein glycosylation, then processed with sophisticated bioinformatics tools in order to extract meaningful answers to questions of biological, clinical, and agricultural significance. BioinformatIQ™ is a web-based application for the storage, management, and automated bioinformatic analysis of proteomic information. This poster will demonstrate the integration of these disparate data sources in proteomics

  11. Deep data: discovery and visualization Application to hyperspectral ALMA imagery

    Science.gov (United States)

    Merényi, Erzsébet; Taylor, Joshua; Isella, Andrea

    2017-06-01

    Leading-edge telescopes such as the Atacama Large Millimeter and sub-millimeter Array (ALMA), and near-future ones, are capable of imaging the same sky area at hundreds-to-thousands of frequencies with both high spectral and spatial resolution. This provides unprecedented opportunities for discovery about the spatial, kinematical and compositional structure of sources such as molecular clouds or protoplanetary disks, and more. However, in addition to enormous volume, the data also exhibit unprecedented complexity, mandating new approaches for extracting and summarizing relevant information. Traditional techniques such as examining images at selected frequencies become intractable while tools that integrate data across frequencies or pixels (like moment maps) can no longer fully exploit and visualize the rich information. We present a neural map-based machine learning approach that can handle all spectral channels simultaneously, utilizing the full depth of these data for discovery and visualization of spectrally homogeneous spatial regions (spectral clusters) that characterize distinct kinematic behaviors. We demonstrate the effectiveness on an ALMA image cube of the protoplanetary disk HD142527. The tools we collectively name "NeuroScope" are efficient for "Big Data" due to intelligent data summarization that results in significant sparsity and noise reduction. We also demonstrate a new approach to automate our clustering for fast distillation of large data cubes.
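
    NeuroScope relies on neural self-organizing maps; purely as a simpler stand-in (not the authors' method), the sketch below clusters the per-pixel spectra of a hyperspectral cube with plain k-means, which conveys the idea of using every spectral channel at once to find spectrally homogeneous regions. The cube here is synthetic random data.

    ```python
    import numpy as np

    def kmeans_spectra(cube, k=4, iters=50, seed=0):
        """Cluster per-pixel spectra of a (ny, nx, nchan) cube with plain k-means,
        returning a 2-D label map; a simple stand-in for SOM-based clustering."""
        ny, nx, nchan = cube.shape
        X = cube.reshape(-1, nchan)
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = dist.argmin(axis=1)
            for c in range(k):
                if np.any(labels == c):
                    centers[c] = X[labels == c].mean(axis=0)
        return labels.reshape(ny, nx)

    # Synthetic stand-in cube: 32 x 32 pixels, 100 spectral channels.
    cube = np.random.rand(32, 32, 100)
    label_map = kmeans_spectra(cube)
    print(label_map.shape, np.unique(label_map))
    ```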

  12. Process automation

    International Nuclear Information System (INIS)

    Moser, D.R.

    1986-01-01

    Process automation technology has been pursued in the chemical processing industries and to a very limited extent in nuclear fuel reprocessing. Its effective use has been restricted in the past by the lack of diverse and reliable process instrumentation and the unavailability of sophisticated software designed for process control. The Integrated Equipment Test (IET) facility was developed by the Consolidated Fuel Reprocessing Program (CFRP) in part to demonstrate new concepts for control of advanced nuclear fuel reprocessing plants. A demonstration of fuel reprocessing equipment automation using advanced instrumentation and a modern, microprocessor-based control system is nearing completion in the facility. This facility provides for the synergistic testing of all chemical process features of a prototypical fuel reprocessing plant that can be attained with unirradiated uranium-bearing feed materials. The unique equipment and mission of the IET facility make it an ideal test bed for automation studies. This effort will provide for the demonstration of the plant automation concept and for the development of techniques for similar applications in a full-scale plant. A set of preliminary recommendations for implementing process automation has been compiled. Some of these concepts are not generally recognized or accepted. The automation work now under way in the IET facility should be useful to others in helping avoid costly mistakes because of the underutilization or misapplication of process automation. 6 figs

  13. A simple method to estimate the optimum iodine concentration of contrast material through microcatheters: hydrodynamic calculation with spreadsheet software

    International Nuclear Information System (INIS)

    Yamauchi, Teiyu; Hayashi, Toshihiko; Yamada, Takeshi; Futami, Choichiro; Tsukiyama, Yumiko; Harada, Motoko; Furui, Shigeru; Suzuki, Shigeru; Mimura, Kohshiro

    2008-01-01

    It is important to increase the iodine delivery rate (I), that is the iodine concentration of the contrast material (C) × the flow rate of the contrast material (Q), through microcatheters to obtain arteriograms of the highest contrast. It is known that C is an important factor that influences I. The purpose of this study is to establish a method of hydrodynamic calculation of the optimum iodine concentration (i.e., the iodine concentration at which I becomes maximum) of the contrast material and its flow rate through commercially available microcatheters. Iopamidol, ioversol and iohexol of ten iodine concentrations were used. Iodine delivery rates (I_meas) of each contrast material through ten microcatheters were measured. The calculated iodine delivery rate (I_cal) and calculated optimum iodine concentration (calculated C_opt) were obtained with spreadsheet software. The agreement between I_cal and I_meas was studied by correlation and logarithmic Bland-Altman analyses. The value of the calculated C_opt was within the optimum range of iodine concentrations (i.e. the range of iodine concentrations at which I_meas becomes 90% or more of the maximum) in all cases. A good correlation between I_cal and I_meas (I_cal = 1.08 × I_meas, r = 0.99) was observed. Logarithmic Bland-Altman analysis showed that the 95% confidence interval of I_cal/I_meas was between 0.82 and 1.29. In conclusion, hydrodynamic calculation with spreadsheet software is an accurate, generally applicable and cost-saving method to estimate the value of the optimum iodine concentration and its flow rate through microcatheters
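
    The optimisation balances I = C × Q against the steep rise of contrast-material viscosity with C, with Q estimated from Hagen-Poiseuille flow through the microcatheter. A minimal sketch of that calculation, using a hypothetical exponential viscosity law and made-up catheter geometry and driving pressure rather than the authors' measured data:

    ```python
    import math

    def flow_rate(delta_p, radius, length, mu):
        """Hagen-Poiseuille flow: Q = pi * r**4 * dP / (8 * mu * L), in m^3/s."""
        return math.pi * radius ** 4 * delta_p / (8.0 * mu * length)

    def viscosity(c):
        """Hypothetical exponential viscosity law (Pa*s) vs iodine concentration (mgI/mL)."""
        return 1.0e-3 * math.exp(c / 120.0)

    # Hypothetical microcatheter (0.5 mm inner diameter, 150 cm long), ~2 MPa drive pressure.
    radius, length, delta_p = 2.5e-4, 1.5, 2.0e6

    best_c = max(range(100, 401, 10),
                 key=lambda c: c * flow_rate(delta_p, radius, length, viscosity(c)))
    q = flow_rate(delta_p, radius, length, viscosity(best_c))          # m^3/s
    print("optimum ~", best_c, "mgI/mL, I =", round(best_c * q * 1e6, 1), "mgI/s")
    ```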

  14. The discovery of the periodic table as a case of simultaneous discovery.

    Science.gov (United States)

    Scerri, Eric

    2015-03-13

    The article examines the question of priority and simultaneous discovery in the context of the discovery of the periodic system. It is argued that rather than being anomalous, simultaneous discovery is the rule. Moreover, I argue that the discovery of the periodic system by at least six authors over a period of 7 years represents one of the best examples of a multiple discovery. This notion is supported by a new view of the evolutionary development of science through a mechanism that is dubbed Sci-Gaia by analogy with Lovelock's Gaia hypothesis. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  15. When social actions get translated into spreadsheets: economics and social work with children and youth in Denmark

    DEFF Research Database (Denmark)

    Schrøder, Ida Marie

    2013-01-01

    As a means of reducing public spending, social workers in Danish municipalities are expected to take into account public sector economy when deciding on how to solve social problems. Researchers have previously investigated the impact of social work on the public sector economy, the cost... interventions to help children and young people. Inspired by the sociologist John Law, my preliminary study suggests that taking into account economy often becomes a question of translating social interventions into spreadsheets, rather than making economically-based decisions. I classify three kinds... in order to strengthen collaborative knowledge of how to take into account public sector economy, and to reflect on how technologies can interfere with decision processes in social work.

  16. Beyond Discovery

    DEFF Research Database (Denmark)

    Korsgaard, Steffen; Sassmannshausen, Sean Patrick

    2017-01-01

    In this chapter we explore four alternatives to the dominant discovery view of entrepreneurship: the development view, the construction view, the evolutionary view, and the Neo-Austrian view. We outline the main critique points of the discovery view presented in these four alternatives, as well...

  17. "Eureka, Eureka!" Discoveries in Science

    Science.gov (United States)

    Agarwal, Pankaj

    2011-01-01

    Accidental discoveries have been of significant value in the progress of science. Although accidental discoveries are more common in pharmacology and chemistry, other branches of science have also benefited from such discoveries. While most discoveries are the result of persistent research, famous accidental discoveries provide a fascinating…

  18. 30 CFR 44.24 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Discovery. 44.24 Section 44.24 Mineral... Discovery. Parties shall be governed in their conduct of discovery by appropriate provisions of the Federal... discovery. Alternative periods of time for discovery may be prescribed by the presiding administrative law...

  19. 19 CFR 356.20 - Discovery.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Discovery. 356.20 Section 356.20 Customs Duties... § 356.20 Discovery. (a) Voluntary discovery. All parties are encouraged to engage in voluntary discovery... sanctions proceeding. (b) Limitations on discovery. The administrative law judge shall place such limits...

  20. Chemical Discovery

    Science.gov (United States)

    Brown, Herbert C.

    1974-01-01

    The role of discovery in the advance of the science of chemistry and the factors that are currently operating to handicap that function are considered. Examples are drawn from the author's work with boranes. The thesis that exploratory research and discovery should be encouraged is stressed. (DT)

  1. 24 CFR 180.500 - Discovery.

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Discovery. 180.500 Section 180.500... OPPORTUNITY CONSOLIDATED HUD HEARING PROCEDURES FOR CIVIL RIGHTS MATTERS Discovery § 180.500 Discovery. (a) In general. This subpart governs discovery in aid of administrative proceedings under this part. Discovery in...

  2. 22 CFR 224.21 - Discovery.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Discovery. 224.21 Section 224.21 Foreign....21 Discovery. (a) The following types of discovery are authorized: (1) Requests for production of... parties, discovery is available only as ordered by the ALJ. The ALJ shall regulate the timing of discovery...

  3. 19 CFR 207.109 - Discovery.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Discovery. 207.109 Section 207.109 Customs Duties... and Committee Proceedings § 207.109 Discovery. (a) Discovery methods. All parties may obtain discovery under such terms and limitations as the administrative law judge may order. Discovery may be by one or...

  4. 15 CFR 25.21 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Discovery. 25.21 Section 25.21... Discovery. (a) The following types of discovery are authorized: (1) Requests for production of documents for..., discovery is available only as ordered by the ALJ. The ALJ shall regulate the timing of discovery. (d...

  5. Isothermal Titration Calorimetry Can Provide Critical Thinking Opportunities

    Science.gov (United States)

    Moore, Dale E.; Goode, David R.; Seney, Caryn S.; Boatwright, Jennifer M.

    2016-01-01

    College chemistry faculties might not have considered including isothermal titration calorimetry (ITC) in their majors' curriculum because experimental data from this instrumental method are often analyzed via automation (software). However, the software-based data analysis can be replaced with a spreadsheet-based analysis that is readily…

  6. Proteomics wants cRacker: automated standardized data analysis of LC-MS derived proteomic data.

    Science.gov (United States)

    Zauber, Henrik; Schulze, Waltraud X

    2012-11-02

    The large-scale analysis of thousands of proteins under various experimental conditions or in mutant lines has gained more and more importance in hypothesis-driven scientific research and systems biology in the past years. Quantitative analysis by large scale proteomics using modern mass spectrometry usually results in long lists of peptide ion intensities. The main interest for most researchers, however, is to draw conclusions on the protein level. Postprocessing and combining peptide intensities of a proteomic data set requires expert knowledge, and the often repetitive and standardized manual calculations can be time-consuming. The analysis of complex samples can result in very large data sets (lists with several 1000s to 100,000 entries of different peptides) that cannot easily be analyzed using standard spreadsheet programs. To improve speed and consistency of the data analysis of LC-MS derived proteomic data, we developed cRacker. cRacker is an R-based program for automated downstream proteomic data analysis including data normalization strategies for metabolic labeling and label free quantitation. In addition, cRacker includes basic statistical analysis, such as clustering of data, or ANOVA and t tests for comparison between treatments. Results are presented in editable graphic formats and in list files.
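
    cRacker itself is written in R; as a language-neutral illustration of the kind of downstream step it automates, the Python sketch below normalises peptide ion intensities to each sample's total signal and sums them per protein. The peptide table is hypothetical and the normalisation scheme is just one of the strategies such tools offer.

    ```python
    from collections import defaultdict

    # Hypothetical peptide-level output: (protein, peptide, sample) -> ion intensity.
    peptide_intensities = {
        ("P1", "pepA", "ctrl"): 1.2e6, ("P1", "pepB", "ctrl"): 8.0e5,
        ("P2", "pepC", "ctrl"): 4.0e5,
        ("P1", "pepA", "treat"): 9.0e5, ("P1", "pepB", "treat"): 7.5e5,
        ("P2", "pepC", "treat"): 9.5e5,
    }

    def protein_abundances(data):
        """Normalise peptide intensities to each sample's total, then sum per protein."""
        totals = defaultdict(float)
        for (_, _, sample), inten in data.items():
            totals[sample] += inten
        proteins = defaultdict(float)
        for (protein, _, sample), inten in data.items():
            proteins[(protein, sample)] += inten / totals[sample]
        return dict(proteins)

    for (protein, sample), value in sorted(protein_abundances(peptide_intensities).items()):
        print(protein, sample, round(value, 3))
    ```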

  7. 39 CFR 963.14 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Discovery. 963.14 Section 963.14 Postal Service... PANDERING ADVERTISEMENTS STATUTE, 39 U.S.C. 3008 § 963.14 Discovery. Discovery is to be conducted on a... such discovery as he or she deems reasonable and necessary. Discovery may include one or more of the...

  8. Development of a VBA macro-based spreadsheet application for RELAP5 data post-processing

    International Nuclear Information System (INIS)

    Belchior Junior, Antonio; Andrade, Delvonei A.; Sabundjian, Gaiane; Macedo, Luiz A.; Angelo, Gabriel; Torres, Walmir M.; Umbehaun, Pedro E.; Conti, Thadeu N.; Bruel, Renata N.

    2011-01-01

    During the use of thermal-hydraulic codes such as RELAP5, large amounts of data have to be managed, both to prepare the input and to analyze the produced results. This work presents a helpful tool developed to make it easier to handle the RELAP5 output data file. The XTRIP application is an electronic spreadsheet containing programmed macros for post-processing the RELAP5 output file. It can directly read the RELAP5 restart-plot binary output file and, through a user-friendly interface, transient results can be chosen and exported directly into an electronic worksheet. XTRIP can also perform data unit conversion and export these data to other programs such as Wingraf, Grapher and COBRA. The main features of the developed Excel Visual Basic for Applications macro, as well as an example of use, are presented and discussed. (author)

  9. An automated high throughput screening-compatible assay to identify regulators of stem cell neural differentiation.

    Science.gov (United States)

    Casalino, Laura; Magnani, Dario; De Falco, Sandro; Filosa, Stefania; Minchiotti, Gabriella; Patriarca, Eduardo J; De Cesare, Dario

    2012-03-01

    The use of Embryonic Stem Cells (ESCs) holds considerable promise both for drug discovery programs and the treatment of degenerative disorders in regenerative medicine approaches. Nevertheless, the successful use of ESCs is still limited by the lack of efficient control of ESC self-renewal and differentiation capabilities. In this context, the possibility to modulate ESC biological properties and to obtain homogenous populations of correctly specified cells will help developing physiologically relevant screens, designed for the identification of stem cell modulators. Here, we developed a high throughput screening-suitable ESC neural differentiation assay by exploiting the Cell(maker) robotic platform and demonstrated that neural progenies can be generated from ESCs in complete automation, with high standards of accuracy and reliability. Moreover, we performed a pilot screening providing proof of concept that this assay allows the identification of regulators of ESC neural differentiation in full automation.

  10. Usability of Discovery Portals

    OpenAIRE

    Bulens, J.D.; Vullings, L.A.E.; Houtkamp, J.M.; Vanmeulebrouk, B.

    2013-01-01

    As INSPIRE progresses to be implemented in the EU, many new discovery portals are built to facilitate finding spatial data. Currently the structure of the discovery portals is determined by the way spatial data experts like to work. However, we argue that the main target group for discovery portals are not spatial data experts but professionals with limited spatial knowledge, and a focus outside the spatial domain. An exploratory usability experiment was carried out in which three discovery p...

  11. Lean automation development : applying lean principles to the automation development process

    OpenAIRE

    Granlund, Anna; Wiktorsson, Magnus; Grahn, Sten; Friedler, Niklas

    2014-01-01

    A broad empirical study indicates that automation development shows potential for improvement. In the paper, 13 lean product development principles are contrasted with the automation development process, and it is suggested why and how these principles can facilitate, support and improve the automation development process. The paper summarises what characterises a lean automation development process and what consequences it entails. Main differences compared to current pr...

  12. ATLAS distributed computing operation shift teams experience during the discovery year and beginning of the long shutdown 1

    International Nuclear Information System (INIS)

    Sedov, Alexey; Girolamo, Alessandro Di; Negri, Guidone; Sakamoto, Hiroshi; Schovancová, Jaroslava; Smirnov, Iouri; Vartapetian, Armen; Yu, Jaehoon

    2014-01-01

    ATLAS Distributed Computing Operation Shifts evolve to meet new requirements. New monitoring tools as well as operational changes lead to modifications in organization of shifts. In this paper we describe the structure of shifts, the roles of different shifts in ATLAS computing grid operation, the influence of a Higgs-like particle discovery on shift operation, the achievements in monitoring and automation that allowed extra focus on the experiment priority tasks, and the influence of the Long Shutdown 1 and operational changes related to the no beam period.

  13. Both Automation and Paper.

    Science.gov (United States)

    Purcell, Royal

    1988-01-01

    Discusses the concept of a paperless society and the current situation in library automation. Various applications of automation and telecommunications are addressed, and future library automation is considered. Automation at the Monroe County Public Library in Bloomington, Indiana, is described as an example. (MES)

  14. Automated Groundwater Screening

    International Nuclear Information System (INIS)

    Taylor, Glenn A.; Collard, Leonard B.

    2005-01-01

    The Automated Intruder Analysis has been extended to include an Automated Ground Water Screening option. This option screens 825 radionuclides while rigorously applying the National Council on Radiation Protection (NCRP) methodology. An extension to that methodology is presented to give a more realistic screening factor for those radionuclides which have significant daughters. The extension has the promise of reducing the number of radionuclides which must be tracked by the customer. By combining the Automated Intruder Analysis with the Automated Groundwater Screening a consistent set of assumptions and databases is used. A method is proposed to eliminate trigger values by performing rigorous calculation of the screening factor thereby reducing the number of radionuclides sent to further analysis. Using the same problem definitions as in previous groundwater screenings, the automated groundwater screening found one additional nuclide, Ge-68, which failed the screening. It also found that 18 of the 57 radionuclides contained in NCRP Table 3.1 failed the screening. This report describes the automated groundwater screening computer application

  15. Students' meaning making in science: solving energy resource problems in virtual worlds combined with spreadsheets to develop graphs

    Science.gov (United States)

    Krange, Ingeborg; Arnseth, Hans Christian

    2012-09-01

    The aim of this study is to scrutinize the characteristics of conceptual meaning making when students engage with virtual worlds in combination with a spreadsheet with the aim to develop graphs. We study how these tools and the representations they contain or enable students to construct serve to influence their understanding of energy resource consumption. The data were gathered in 1st grade upper-secondary science classes and they constitute the basis for the interaction analysis of students' meaning making with representations. Our analyses demonstrate the difficulties involved in developing students' orientation toward more conceptual orientations to representations of the knowledge domain. Virtual worlds do not in themselves represent a solution to this problem.

  16. 19 CFR 354.10 - Discovery.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Discovery. 354.10 Section 354.10 Customs Duties... ANTIDUMPING OR COUNTERVAILING DUTY ADMINISTRATIVE PROTECTIVE ORDER § 354.10 Discovery. (a) Voluntary discovery. All parties are encouraged to engage in voluntary discovery procedures regarding any matter, not...

  17. 36 CFR 1150.63 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Discovery. 1150.63 Section... PRACTICE AND PROCEDURES FOR COMPLIANCE HEARINGS Prehearing Conferences and Discovery § 1150.63 Discovery. (a) Parties are encouraged to engage in voluntary discovery procedures. For good cause shown under...

  18. 37 CFR 11.52 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Discovery. 11.52 Section 11... Disciplinary Proceedings; Jurisdiction, Sanctions, Investigations, and Proceedings § 11.52 Discovery. Discovery... establishes that discovery is reasonable and relevant, the hearing officer, under such conditions as he or she...

  19. Sign use and cognition in automated scientific discovery: are computers only special kinds of signs?

    Science.gov (United States)

    Giza, Piotr

    2018-04-01

    James Fetzer criticizes the computational paradigm prevailing in cognitive science by questioning what he takes to be its most elementary ingredient: that cognition is computation across representations. He argues that if cognition is taken to be a purposive, meaningful, algorithmic problem solving activity, then computers are incapable of cognition. Instead, they appear to be signs of a special kind that can facilitate computation. He proposes the conception of minds as semiotic systems as an alternative paradigm for understanding mental phenomena, one that seems to overcome the difficulties of computationalism. Now, I argue that with computer systems dealing with scientific discovery, the matter is not as simple as that. The alleged superiority of humans using signs to stand for something other than themselves over computers being merely "physical symbol systems" or "automatic formal systems" is easy to establish in everyday life, but becomes far from obvious when scientific discovery is at stake. In science, as opposed to everyday life, the meaning of symbols is, apart from very low-level experimental investigations, defined implicitly by the way the symbols are used in explanatory theories or experimental laws relevant to the field, and in consequence, human and machine discoverers are much more on a par. Moreover, the great practical success of the genetic programming method and recent attempts to apply it to the automatic generation of cognitive theories seem to show that computer systems are capable of very efficient problem solving activity in science, which is neither purposive, nor meaningful, nor algorithmic. This, I think, undermines Fetzer's argument that computer systems are incapable of cognition because computation across representations is bound to be a purposive, meaningful, algorithmic problem solving activity.

  20. Usability of Discovery Portals

    NARCIS (Netherlands)

    Bulens, J.D.; Vullings, L.A.E.; Houtkamp, J.M.; Vanmeulebrouk, B.

    2013-01-01

    As INSPIRE progresses to be implemented in the EU, many new discovery portals are built to facilitate finding spatial data. Currently the structure of the discovery portals is determined by the way spatial data experts like to work. However, we argue that the main target group for discovery portals

  1. 14 CFR 16.213 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Discovery. 16.213 Section 16.213... PRACTICE FOR FEDERALLY-ASSISTED AIRPORT ENFORCEMENT PROCEEDINGS Hearings § 16.213 Discovery. (a) Discovery... discovery permitted by this section if a party shows that— (1) The information requested is cumulative or...

  2. 28 CFR 76.21 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Discovery. 76.21 Section 76.21 Judicial... POSSESSION OF CERTAIN CONTROLLED SUBSTANCES § 76.21 Discovery. (a) Scope. Discovery under this part covers... as a general guide for discovery practices in proceedings before the Judge. However, unless otherwise...

  3. 40 CFR 27.21 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Discovery. 27.21 Section 27.21... Discovery. (a) The following types of discovery are authorized: (1) Requests for production of documents for..., discovery is available only as ordered by the presiding officer. The presiding officer shall regulate the...

  4. 37 CFR 41.150 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Discovery. 41.150 Section 41... COMMERCE PRACTICE BEFORE THE BOARD OF PATENT APPEALS AND INTERFERENCES Contested Cases § 41.150 Discovery. (a) Limited discovery. A party is not entitled to discovery except as authorized in this subpart. The...

  5. 14 CFR 13.220 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Discovery. 13.220 Section 13.220... INVESTIGATIVE AND ENFORCEMENT PROCEDURES Rules of Practice in FAA Civil Penalty Actions § 13.220 Discovery. (a) Initiation of discovery. Any party may initiate discovery described in this section, without the consent or...

  6. 49 CFR 604.38 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 7 2010-10-01 2010-10-01 false Discovery. 604.38 Section 604.38 Transportation... TRANSPORTATION CHARTER SERVICE Hearings. § 604.38 Discovery. (a) Permissible forms of discovery shall be within the discretion of the PO. (b) The PO shall limit the frequency and extent of discovery permitted by...

  7. 15 CFR 719.10 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Discovery. 719.10 Section 719.10... Discovery. (a) General. The parties are encouraged to engage in voluntary discovery regarding any matter... the Federal Rules of Civil Procedure relating to discovery apply to the extent consistent with this...

  8. 24 CFR 26.18 - Discovery.

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Discovery. 26.18 Section 26.18... PROCEDURES Hearings Before Hearing Officers Discovery § 26.18 Discovery. (a) General. The parties are encouraged to engage in voluntary discovery procedures, which may commence at any time after an answer has...

  9. 42 CFR 426.532 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Discovery. 426.532 Section 426.532 Public Health... § 426.532 Discovery. (a) General rule. If the Board orders discovery, the Board must establish a reasonable timeframe for discovery. (b) Protective order—(1) Request for a protective order. Any party...

  10. 49 CFR 1503.633 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 9 2010-10-01 2010-10-01 false Discovery. 1503.633 Section 1503.633... Rules of Practice in TSA Civil Penalty Actions § 1503.633 Discovery. (a) Initiation of discovery. Any party may initiate discovery described in this section, without the consent or approval of the ALJ, at...

  11. 14 CFR 1264.120 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Discovery. 1264.120 Section 1264.120... PENALTIES ACT OF 1986 § 1264.120 Discovery. (a) The following types of discovery are authorized: (1..., discovery is available only as ordered by the presiding officer. The presiding officer shall regulate the...

  12. 22 CFR 128.6 - Discovery.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Discovery. 128.6 Section 128.6 Foreign... Discovery. (a) Discovery by the respondent. The respondent, through the Administrative Law Judge, may... discovery if the interests of national security or foreign policy so require, or if necessary to comply with...

  13. 24 CFR 26.42 - Discovery.

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Discovery. 26.42 Section 26.42... PROCEDURES Hearings Pursuant to the Administrative Procedure Act Discovery § 26.42 Discovery. (a) General. The parties are encouraged to engage in voluntary discovery procedures, which may commence at any time...

  14. 49 CFR 386.37 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 5 2010-10-01 2010-10-01 false Discovery. 386.37 Section 386.37 Transportation... and Hearings § 386.37 Discovery. (a) Parties may obtain discovery by one or more of the following...; and requests for admission. (b) Discovery may not commence until the matter is pending before the...

  15. 29 CFR 1955.32 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Discovery. 1955.32 Section 1955.32 Labor Regulations...) PROCEDURES FOR WITHDRAWAL OF APPROVAL OF STATE PLANS Preliminary Conference and Discovery § 1955.32 Discovery... allow discovery by any other appropriate procedure, such as by interrogatories upon a party or request...

  16. 42 CFR 426.432 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Discovery. 426.432 Section 426.432 Public Health... § 426.432 Discovery. (a) General rule. If the ALJ orders discovery, the ALJ must establish a reasonable timeframe for discovery. (b) Protective order—(1) Request for a protective order. Any party receiving a...

  17. 10 CFR 13.21 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Discovery. 13.21 Section 13.21 Energy NUCLEAR REGULATORY COMMISSION PROGRAM FRAUD CIVIL REMEDIES § 13.21 Discovery. (a) The following types of discovery are...) Unless mutually agreed to by the parties, discovery is available only as ordered by the ALJ. The ALJ...

  18. 49 CFR 1121.2 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 8 2010-10-01 2010-10-01 false Discovery. 1121.2 Section 1121.2 Transportation... TRANSPORTATION RULES OF PRACTICE RAIL EXEMPTION PROCEDURES § 1121.2 Discovery. Discovery shall follow the procedures set forth at 49 CFR part 1114, subpart B. Discovery may begin upon the filing of the petition for...

  19. 38 CFR 42.21 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2010-07-01 2010-07-01 false Discovery. 42.21 Section... IMPLEMENTING THE PROGRAM FRAUD CIVIL REMEDIES ACT § 42.21 Discovery. (a) The following types of discovery are... creation of a document. (c) Unless mutually agreed to by the parties, discovery is available only as...

  20. 22 CFR 521.21 - Discovery.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Discovery. 521.21 Section 521.21 Foreign... Discovery. (a) The following types of discovery are authorized: (1) Requests for production of documents for... interpreted to require the creation of a document. (c) Unless mutually agreed to by the parties, discovery is...

  1. 31 CFR 10.71 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Discovery. 10.71 Section 10.71 Money... SERVICE Rules Applicable to Disciplinary Proceedings § 10.71 Discovery. (a) In general. Discovery may be... relevance, materiality and reasonableness of the requested discovery and subject to the requirements of § 10...

  2. 39 CFR 955.15 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Discovery. 955.15 Section 955.15 Postal Service... APPEALS § 955.15 Discovery. (a) The parties are encouraged to engage in voluntary discovery procedures. In connection with any deposition or other discovery procedure, the Board may issue any order which justice...

  3. 43 CFR 35.21 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 43 Public Lands: Interior 1 2010-10-01 2010-10-01 false Discovery. 35.21 Section 35.21 Public... AND STATEMENTS § 35.21 Discovery. (a) The following types of discovery are authorized: (1) Requests...) Unless mutually agreed to by the parties, discovery is available only as ordered by the ALJ. The ALJ...

  4. 15 CFR 766.9 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Discovery. 766.9 Section 766.9... PROCEEDINGS § 766.9 Discovery. (a) General. The parties are encouraged to engage in voluntary discovery... provisions of the Federal Rules of Civil Procedure relating to discovery apply to the extent consistent with...

  5. Automated model building

    CERN Document Server

    Caferra, Ricardo; Peltier, Nicholas

    2004-01-01

    This is the first book on automated model building, a discipline of automated deduction that is of growing importance. Although models and their construction are important per se, automated model building has appeared as a natural enrichment of automated deduction, especially in the attempt to capture the human way of reasoning. The book provides an historical overview of the field of automated deduction, and presents the foundations of different existing approaches to model construction, in particular those developed by the authors. Finite and infinite model building techniques are presented. The main emphasis is on calculi-based methods, and relevant practical results are provided. The book is of interest to researchers and graduate students in computer science, computational logic and artificial intelligence. It can also be used as a textbook in advanced undergraduate courses.

  6. Get Involved in Planetary Discoveries through New Worlds, New Discoveries

    Science.gov (United States)

    Shupla, Christine; Shipp, S. S.; Halligan, E.; Dalton, H.; Boonstra, D.; Buxner, S.; SMD Planetary Forum, NASA

    2013-01-01

    "New Worlds, New Discoveries" is a synthesis of NASA’s 50-year exploration history which provides an integrated picture of our new understanding of our solar system. As NASA spacecraft head to and arrive at key locations in our solar system, "New Worlds, New Discoveries" provides an integrated picture of our new understanding of the solar system to educators and the general public! The site combines the amazing discoveries of past NASA planetary missions with the most recent findings of ongoing missions, and connects them to the related planetary science topics. "New Worlds, New Discoveries," which includes the "Year of the Solar System" and the ongoing celebration of the "50 Years of Exploration," includes 20 topics that share thematic solar system educational resources and activities, tied to the national science standards. This online site and ongoing event offers numerous opportunities for the science community - including researchers and education and public outreach professionals - to raise awareness, build excitement, and make connections with educators, students, and the public about planetary science. Visitors to the site will find valuable hands-on science activities, resources and educational materials, as well as the latest news, to engage audiences in planetary science topics and their related mission discoveries. The topics are tied to the big questions of planetary science: how did the Sun’s family of planets and bodies originate and how have they evolved? How did life begin and evolve on Earth, and has it evolved elsewhere in our solar system? Scientists and educators are encouraged to get involved either directly or by sharing "New Worlds, New Discoveries" and its resources with educators, by conducting presentations and events, sharing their resources and events to add to the site, and adding their own public events to the site’s event calendar! Visit to find quality resources and ideas. Connect with educators, students and the public to

  7. 13 CFR 134.213 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Discovery. 134.213 Section 134.213... OFFICE OF HEARINGS AND APPEALS Rules of Practice for Most Cases § 134.213 Discovery. (a) Motion. A party may obtain discovery only upon motion, and for good cause shown. (b) Forms. The forms of discovery...

  8. 31 CFR 16.21 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Discovery. 16.21 Section 16.21 Money... FRAUD CIVIL REMEDIES ACT OF 1986 § 16.21 Discovery. (a) The following types of discovery are authorized... to require the creation of a document. (c) Unless mutually agreed to by the parties, discovery is...

  9. Automated discovery of functional generality of human gene expression programs.

    Directory of Open Access Journals (Sweden)

    Georg K Gerber

    2007-08-01

    Full Text Available An important research problem in computational biology is the identification of expression programs, sets of co-expressed genes orchestrating normal or pathological processes, and the characterization of the functional breadth of these programs. The use of human expression data compendia for discovery of such programs presents several challenges including cellular inhomogeneity within samples, genetic and environmental variation across samples, uncertainty in the numbers of programs and sample populations, and temporal behavior. We developed GeneProgram, a new unsupervised computational framework based on Hierarchical Dirichlet Processes that addresses each of the above challenges. GeneProgram uses expression data to simultaneously organize tissues into groups and genes into overlapping programs with consistent temporal behavior, to produce maps of expression programs, which are sorted by generality scores that exploit the automatically learned groupings. Using synthetic and real gene expression data, we showed that GeneProgram outperformed several popular expression analysis methods. We applied GeneProgram to a compendium of 62 short time-series gene expression datasets exploring the responses of human cells to infectious agents and immune-modulating molecules. GeneProgram produced a map of 104 expression programs, a substantial number of which were significantly enriched for genes involved in key signaling pathways and/or bound by NF-kappaB transcription factors in genome-wide experiments. Further, GeneProgram discovered expression programs that appear to implicate surprising signaling pathways or receptor types in the response to infection, including Wnt signaling and neurotransmitter receptors. We believe the discovered map of expression programs involved in the response to infection will be useful for guiding future biological experiments; genes from programs with low generality scores might serve as new drug targets that exhibit minimal

  10. Microsoft excel spreadsheets for calculation of P-V-T relations and thermodynamic properties from equations of state of MgO, diamond and nine metals as pressure markers in high-pressure and high-temperature experiments

    Science.gov (United States)

    Sokolova, Tatiana S.; Dorogokupets, Peter I.; Dymshits, Anna M.; Danilov, Boris S.; Litasov, Konstantin D.

    2016-09-01

    We present Microsoft Excel spreadsheets for calculation of thermodynamic functions and P-V-T properties of MgO, diamond and 9 metals, Al, Cu, Ag, Au, Pt, Nb, Ta, Mo, and W, depending on temperature and volume or temperature and pressure. The spreadsheets include the most common pressure markers used in in situ experiments with diamond anvil cell and multianvil techniques. The calculations are based on the equation of state formalism via the Helmholtz free energy. The program was developed using Visual Basic for Applications in Microsoft Excel and is a time-efficient tool to evaluate volume, pressure and other thermodynamic functions using T-P and T-V data only as input parameters. This application is aimed to solve practical issues of high pressure experiments in geosciences and mineral physics.
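    As a rough illustration of the kind of calculation such spreadsheets perform, the sketch below evaluates pressure from a Helmholtz free energy via the thermodynamic relation P = -(∂F/∂V)_T using a central finite difference. The functional form, constants and units are placeholders chosen for demonstration only and are not the published equations of state.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def helmholtz_energy(volume, temperature):
    """Illustrative Helmholtz free energy F(V, T) in J/mol.

    Placeholder form (roughly MgO-like constants), NOT the published EOS:
    a toy cold-compression term plus a volume-independent thermal term.
    """
    v0, k0, theta = 11.25e-6, 160e9, 770.0  # ref. volume (m^3/mol), bulk modulus (Pa), Einstein temp. (K)
    cold = 0.5 * k0 * v0 * (v0 / volume - 1.0) ** 2
    thermal = 3 * R * temperature * np.log(1.0 - np.exp(-theta / temperature))
    return cold + thermal

def pressure(volume, temperature, dv=1e-12):
    """P = -(dF/dV)_T, evaluated with a central finite difference."""
    f_hi = helmholtz_energy(volume + dv, temperature)
    f_lo = helmholtz_energy(volume - dv, temperature)
    return -(f_hi - f_lo) / (2.0 * dv)

print(round(pressure(10.0e-6, 1500.0) / 1e9, 1), "GPa")
```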

  11. Computer modeling in free spreadsheets OpenOffice.Calc as one of the modern methods of teaching physics and mathematics cycle subjects in primary and secondary schools

    Directory of Open Access Journals (Sweden)

    Markushevich M.V.

    2016-10-01

    Full Text Available the article details the use of computer simulation, a modern training method, applied to modelling various kinds of mechanical motion of a material point in the free spreadsheet OpenOffice.org Calc when designing physics and computer science lessons in primary and secondary schools. Particular attention is paid to the application of computer modeling integrated with other modern teaching methods.
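    A rough sketch of the kind of row-by-row model the article describes, transcribed into Python rather than Calc formulas: each loop iteration corresponds to one spreadsheet row computed with a simple Euler step. The initial conditions and time step are assumptions chosen only for illustration.

```python
# Projectile motion of a material point, tabulated like a spreadsheet.
g, dt = 9.81, 0.1                 # gravity (m/s^2), time step (s)
vx, vy = 10.0, 15.0               # assumed initial velocity components (m/s)
x, y, t = 0.0, 0.0, 0.0

print("  t      x      y")
while y >= 0.0:
    print(f"{t:4.1f} {x:6.2f} {y:6.2f}")
    x, y = x + vx * dt, y + vy * dt   # one spreadsheet row per time step (Euler)
    vy -= g * dt
    t += dt
```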

  12. 78 FR 53466 - Modification of Two National Customs Automation Program (NCAP) Tests Concerning Automated...

    Science.gov (United States)

    2013-08-29

    ... Customs Automation Program (NCAP) Tests Concerning Automated Commercial Environment (ACE) Document Image... National Customs Automation Program (NCAP) tests concerning document imaging, known as the Document Image... the National Customs Automation Program (NCAP) tests concerning document imaging, known as the...

  13. Optimization of automation: III. Development of optimization method for determining automation rate in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Kim, Man Cheol; Seong, Poong Hyun

    2016-01-01

    Highlights: • We propose an appropriate automation rate that enables the best human performance. • We analyze the shortest working time considering Situation Awareness Recovery (SAR). • The optimized automation rate is estimated by integrating the automation and ostracism rate estimation methods. • The process to derive the optimized automation rate is demonstrated through case studies. - Abstract: Automation has been introduced in various industries, including the nuclear field, because it is commonly believed that automation promises greater efficiency, lower workloads, and fewer operator errors by enhancing operator and system performance. However, the excessive introduction of automation has deteriorated operator performance due to the side effects of automation, which are referred to as Out-of-the-Loop (OOTL) problems, and this is a critical issue that must be resolved. Thus, in order to determine the optimal level of automation introduction that assures the best human operator performance, a quantitative method of optimizing the automation is proposed in this paper. To propose the optimization method for determining appropriate automation levels that enable the best human performance, the automation rate and the ostracism rate, estimation methods that quantitatively analyze the positive and negative effects of automation, respectively, are integrated. The integration was conducted to derive the shortest working time by considering the concept of situation awareness recovery (SAR), which states that the automation rate with the shortest working time assures the best human performance. The process to derive the optimized automation rate is demonstrated through an emergency operation scenario-based case study. In this case study, four types of procedures are assumed through redesigning the original emergency operating procedure according to the introduced automation and ostracism levels. Using the
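    A minimal sketch of the trade-off the abstract describes, with entirely assumed cost terms: manual working time falls as the automation rate rises, while a situation-awareness-recovery penalty grows with it, and the optimum is the rate that minimizes the total. This is not the authors' model; the functional forms and constants are invented for illustration.

```python
import numpy as np

def working_time(rate, base_time=100.0, sar_penalty=60.0):
    """Assumed total working time (arbitrary units) at a given automation rate."""
    manual = base_time * (1.0 - rate)      # time spent on tasks left to the operator
    recovery = sar_penalty * rate ** 2     # time to re-enter the loop when needed (SAR)
    return manual + recovery

rates = np.linspace(0.0, 1.0, 101)
times = np.array([working_time(r) for r in rates])
print(f"shortest working time at automation rate ~ {rates[times.argmin()]:.2f}")
```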

  14. Decades of Discovery

    Science.gov (United States)

    2011-06-01

    For the past two-and-a-half decades, the Office of Science at the U.S. Department of Energy has been at the forefront of scientific discovery. Over 100 important discoveries supported by the Office of Science are represented in this document.

  15. An Ontology-supported Approach for Automatic Chaining of Web Services in Geospatial Knowledge Discovery

    Science.gov (United States)

    di, L.; Yue, P.; Yang, W.; Yu, G.

    2006-12-01

    Recent developments in the geospatial semantic Web have shown promise for automatic discovery, access, and use of geospatial Web services to quickly and efficiently solve particular application problems. With semantic Web technology, it is highly feasible to construct intelligent geospatial knowledge systems that can provide answers to many geospatial application questions. A key challenge in constructing such an intelligent knowledge system is to automate the creation of a chain or process workflow that involves multiple services and highly diversified data and can generate the answer to a user's specific question. This presentation discusses an approach for automating the composition of geospatial Web service chains by employing geospatial semantics described by geospatial ontologies. It shows how ontology-based geospatial semantics are used to enable the automatic discovery, mediation, and chaining of geospatial Web services. OWL-S is used to represent the geospatial semantics of individual Web services, including the type of service each belongs to and the type of data it can handle. The hierarchy and classification of service types are described in the service ontology. The hierarchy and classification of data types are presented in the data ontology. To answer users' geospatial questions, an Artificial Intelligence (AI) planning algorithm is used to construct the service chain using the service and data logics expressed in the ontologies. The chain can be expressed as a graph with nodes representing services and connection weights representing degrees of semantic matching between nodes. The graph is a visual representation of the logical geo-processing path for answering a user's question, and it can be instantiated to a physical service workflow whose execution generates the answer. A prototype system, which includes real-world geospatial applications, is implemented to demonstrate the concept and approach.
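    A toy sketch of service chaining by backward chaining over input/output types, in the spirit of the planning approach described above. The service catalog, names and data types are assumptions made for illustration; the real prototype reasons over OWL-S descriptions and ontologies rather than plain strings.

```python
# Tiny, acyclic catalog of services, each with one input and one output type.
SERVICES = {
    "reproject": {"input": "raw_image", "output": "projected_image"},
    "classify":  {"input": "projected_image", "output": "landcover_map"},
    "summarize": {"input": "landcover_map", "output": "area_report"},
}

def plan_chain(goal, available, services=SERVICES):
    """Backward-chain from the requested product to the available data.

    Returns an ordered list of service names, or None if no chain exists.
    """
    if goal == available:
        return []
    for name, io in services.items():
        if io["output"] == goal:
            upstream = plan_chain(io["input"], available, services)
            if upstream is not None:
                return upstream + [name]
    return None

print(plan_chain("area_report", "raw_image"))   # ['reproject', 'classify', 'summarize']
```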

  16. Discovery and the atom

    International Nuclear Information System (INIS)

    1989-01-01

    "Discovery and the Atom" tells the story of the founding of nuclear physics. This programme looks at nuclear physics up to the discovery of the neutron in 1932. Animation explains the science of the classic experiments, such as the scattering of alpha particles by Rutherford and the discovery of the nucleus. Archive film shows the people: Lord Rutherford, James Chadwick, Marie Curie. (author)

  17. A Semantically Automated Protocol Adapter for Mapping SOAP Web Services to RESTful HTTP Format to Enable the Web Infrastructure, Enhance Web Service Interoperability and Ease Web Service Migration

    Directory of Open Access Journals (Sweden)

    Frank Doheny

    2012-04-01

    Full Text Available Semantic Web Services (SWS) are Web Service (WS) descriptions augmented with semantic information. SWS enable intelligent reasoning and automation in areas such as service discovery, composition, mediation, ranking and invocation. This paper applies SWS to a previous protocol adapter which, operating within clearly defined constraints, maps SOAP Web Services to RESTful HTTP format. However, in the previous adapter the configuration element is manual and the latency implications are locally based. This paper applies SWS technologies to automate the configuration element, and the latency tests are conducted in a more realistic Internet-based setting.
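    A minimal sketch of the kind of static SOAP-to-REST mapping that constitutes the manual configuration element mentioned above; the operation names and routes are invented for illustration, and the semantic adapter in the paper derives such mappings automatically.

```python
# Assumed operation names and routes, for illustration only.
SOAP_TO_REST = {
    "GetCustomer":    ("GET",    "/customers/{id}"),
    "CreateCustomer": ("POST",   "/customers"),
    "UpdateCustomer": ("PUT",    "/customers/{id}"),
    "DeleteCustomer": ("DELETE", "/customers/{id}"),
}

def translate(operation, params):
    """Resolve a SOAP operation name to an HTTP verb and concrete path."""
    verb, template = SOAP_TO_REST[operation]
    return verb, template.format(**params)

print(translate("GetCustomer", {"id": 42}))   # ('GET', '/customers/42')
```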

  18. World-wide distribution automation systems

    International Nuclear Information System (INIS)

    Devaney, T.M.

    1994-01-01

    A worldwide power distribution automation system is outlined. Distribution automation is defined and the status of utility automation is discussed. Other topics discussed include a distribution management system, substation, feeder, and customer functions, potential benefits, automation costs, planning and engineering considerations, automation trends, databases, system operation, computer modeling of the system, and distribution management systems

  19. WIDAFELS flexible automation systems

    International Nuclear Information System (INIS)

    Shende, P.S.; Chander, K.P.; Ramadas, P.

    1990-01-01

    After discussing the various aspects of automation, some typical examples of various levels of automation are given. One of the examples is of automated production line for ceramic fuel pellets. (M.G.B.)

  20. Calculation of economic and financing of NPP and conventional power plant using spreadsheet innovation

    International Nuclear Information System (INIS)

    Moch Djoko Birmano; Imam Bastori

    2008-01-01

    A study calculating the economics and financing of a Nuclear Power Plant (NPP) and a conventional power plant using a spreadsheet innovation has been carried out. As a case study, the PWR-type NPP of the 1050 MWe class is represented by the OPR-1000 (Optimized Power Reactor, 1000 MWe), and the conventional plant of the 600 MWe class is a coal power plant (Coal PP). The purpose of the study is to assess the economic and financial feasibility of the OPR-1000 and the Coal PP. The study concludes that, economically, the OPR-1000 is more feasible than the Coal PP because its generation cost is lower. Financially, the OPR-1000 is more beneficial than the Coal PP because of its higher net benefit at the end of the economic lifetime (NPV) and its higher benefit/cost ratio (B/C Ratio). For both the NPP and the Coal PP, a higher discount rate (%) is not beneficial. The NPP is more sensitive to changes in the discount rate than the Coal PP, whereas the Coal PP is more sensitive to changes in the power purchase price. (author)
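    To make the financial measures concrete, the sketch below computes a net present value and a benefit/cost ratio for an invented cash-flow profile at several discount rates; the figures are illustrative only and are not taken from the study.

```python
def present_value(rate, flows):
    """Discount yearly amounts back to year 0."""
    return sum(x / (1.0 + rate) ** t for t, x in enumerate(flows))

# Invented cash flows: capital cost then O&M per year, and revenue per year.
investment = [3000] + [40] * 30
revenue = [0] + [250] * 30

for rate in (0.05, 0.08, 0.10):
    npv = present_value(rate, revenue) - present_value(rate, investment)
    bc_ratio = present_value(rate, revenue) / present_value(rate, investment)
    print(f"discount rate {rate:.0%}: NPV = {npv:7.1f}, B/C = {bc_ratio:.2f}")
```

    As in the study, a higher discount rate makes the capital-intensive option look worse: the NPV falls and the B/C ratio drops below 1 as the rate rises.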

  1. Automation in Clinical Microbiology

    Science.gov (United States)

    Ledeboer, Nathan A.

    2013-01-01

    Historically, the trend toward automation in clinical pathology laboratories has largely bypassed the clinical microbiology laboratory. In this article, we review the historical impediments to automation in the microbiology laboratory and offer insight into the reasons why we believe that we are on the cusp of a dramatic change that will sweep a wave of automation into clinical microbiology laboratories. We review the currently available specimen-processing instruments as well as the total laboratory automation solutions. Lastly, we outline the types of studies that will need to be performed to fully assess the benefits of automation in microbiology laboratories. PMID:23515547

  2. Virtual automation.

    Science.gov (United States)

    Casis, E; Garrido, A; Uranga, B; Vives, A; Zufiaurre, C

    2001-01-01

    Total laboratory automation (TLA) can be substituted in mid-size laboratories by computer-controlled sample workflow (virtual automation). Such a solution has been implemented in our laboratory using PSM, software developed in cooperation with Roche Diagnostics (Barcelona, Spain) for this purpose. This software is connected to the online analyzers and to the laboratory information system and is able to control and direct the samples, working as an intermediate station. The only difference from TLA is the replacement of transport belts by laboratory personnel. The implementation of this virtual automation system has allowed us to achieve the main advantages of TLA: a workload increase (64%) with a reduction in the cost per test (43%), a significant reduction in the number of biochemistry primary tubes (from 8 to 2), less aliquoting (from 600 to 100 samples/day), automation of functional testing, a drastic reduction of preanalytical errors (from 11.7 to 0.4% of the tubes) and better total response time for both inpatients (from up to 48 hours to up to 4 hours) and outpatients (from up to 10 days to up to 48 hours). As an additional advantage, virtual automation could be implemented without hardware investment and with a significant headcount reduction (15% in our lab).

  3. Flood AI: An Intelligent Systems for Discovery and Communication of Disaster Knowledge

    Science.gov (United States)

    Demir, I.; Sermet, M. Y.

    2017-12-01

    Communities are not immune to extreme events or natural disasters that can lead to large-scale consequences for the nation and the public. Improving resilience to better prepare for, plan for, recover from, and adapt to disasters is critical to reducing the impacts of extreme events. The National Research Council (NRC) report discusses how to increase resilience to extreme events through a vision of a resilient nation in the year 2030. The report highlights the importance of data and information, the gaps and knowledge challenges that need to be addressed, and suggests that every individual have access to risk and vulnerability information to make their communities more resilient. This project presents an intelligent system for flooding, Flood AI, which improves societal preparedness by providing a knowledge engine using voice recognition, artificial intelligence, and natural language processing based on a generalized ontology for disasters with a primary focus on flooding. The knowledge engine utilizes the flood ontology and concepts to connect user input to relevant knowledge discovery channels on flooding through a data acquisition and processing framework built on environmental observations, forecast models, and knowledge bases. Communication channels of the framework include web-based systems, agent-based chat bots, smartphone applications, automated web workflows, and smart home devices, opening up knowledge discovery for flooding to many unique use cases.

  4. 29 CFR 2700.56 - Discovery; general.

    Science.gov (United States)

    2010-07-01

    ...(c) or 111 of the Act has been filed. 30 U.S.C. 815(c) and 821. (e) Completion of discovery... 29 Labor 9 2010-07-01 2010-07-01 false Discovery; general. 2700.56 Section 2700.56 Labor... Hearings § 2700.56 Discovery; general. (a) Discovery methods. Parties may obtain discovery by one or more...

  5. Prerequisites for the Establishment of the Automated Monitoring System and Accounting of the Displacement of the Roof of Underground Mines for the Improvement of Safety of Mining Work

    Science.gov (United States)

    Abramovich, Alexandr; Pudov, Evgeniy; Kuzin, Evgeny

    2017-11-01

    The article considers the necessity of continuous control over the condition of the roof of mine workings in order to increase safety in the conduct of mining operations. A rationale is provided for monitoring in complex mining and geological conditions, as well as in areas prone to rock bursts and sudden coal outbursts. The existing methods for controlling the displacement of the roof rocks are described, and their shortcomings are noted. An outline is given of an automated system for monitoring the displacement of the workings. The stages of developing the system as a whole are considered, including the choice of a linear displacement sensor, a platform for software development, and a programming language. To ensure integration with other systems and subsequent analysis of the results, the system is designed to output data to spreadsheets. The interfaces of the program and the output of the sensor readings to the monitors of the mine manager are shown.

  6. Automation of Test Cases for Web Applications : Automation of CRM Test Cases

    OpenAIRE

    Seyoum, Alazar

    2012-01-01

    The main theme of this project was to design a test automation framework for automating web-related test cases. Automating test cases designed for testing a web interface provides a means of improving a software development process by shortening the testing phase in the software development life cycle. In this project an existing AutoTester framework and iMacros test automation tools were used. CRM Test Agent was developed to integrate AutoTester with iMacros and to enable the AutoTester,...

  7. Automating Partial Period Bond Valuation with Excel's Day Counting Functions

    Science.gov (United States)

    Vicknair, David; Spruell, James

    2009-01-01

    An Excel model for calculating the actual price of bonds under a 30 day/month, 360 day/year day counting assumption by nesting the DAYS360 function within the PV function is developed. When programmed into an Excel spreadsheet, the model can accommodate annual and semiannual payment bonds sold on or between interest dates using six fundamental…
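    The sketch below reproduces the 30/360 day-count and partial-period discounting logic in Python rather than Excel, so the nesting of a DAYS360-style calculation inside a present-value calculation can be seen in one place. The day-count routine follows the common US (NASD) convention only approximately, and the coupon, yield and dates are invented for illustration.

```python
from datetime import date

def days360(start, end):
    """Approximate US (NASD) 30/360 day count, analogous to Excel's DAYS360."""
    d1, d2 = min(start.day, 30), end.day
    if d1 == 30 and d2 == 31:
        d2 = 30
    return (end.year - start.year) * 360 + (end.month - start.month) * 30 + (d2 - d1)

def dirty_price(settlement, next_coupon, coupon, yld, coupons_left, face=100.0, freq=2):
    """Price of a bond sold between coupon dates (illustrative only).

    Values all remaining coupons and the face amount as of the next coupon
    date, then discounts back over the 30/360 fraction of the period that
    remains between settlement and that date.
    """
    c, y = coupon / freq * face, yld / freq
    frac = days360(settlement, next_coupon) / (360.0 / freq)   # fraction of a period
    value_at_next = sum(c / (1 + y) ** t for t in range(coupons_left)) \
        + face / (1 + y) ** (coupons_left - 1)
    return value_at_next / (1 + y) ** frac

print(round(dirty_price(date(2024, 4, 15), date(2024, 7, 1), 0.06, 0.05, 10), 2))
```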

  8. An Automation Planning Primer.

    Science.gov (United States)

    Paynter, Marion

    1988-01-01

    This brief planning guide for library automation incorporates needs assessment and evaluation of options to meet those needs. A bibliography of materials on automation planning and software reviews, library software directories, and library automation journals is included. (CLB)

  9. Automated Search for new Quantum Experiments.

    Science.gov (United States)

    Krenn, Mario; Malik, Mehul; Fickler, Robert; Lapkiewicz, Radek; Zeilinger, Anton

    2016-03-04

    Quantum mechanics predicts a number of, at first sight, counterintuitive phenomena. It therefore remains a question whether our intuition is the best way to find new experiments. Here, we report the development of the computer algorithm Melvin which is able to find new experimental implementations for the creation and manipulation of complex quantum states. Indeed, the discovered experiments extensively use unfamiliar and asymmetric techniques which are challenging to understand intuitively. The results range from the first implementation of a high-dimensional Greenberger-Horne-Zeilinger state, to a vast variety of experiments for asymmetrically entangled quantum states-a feature that can only exist when both the number of involved parties and dimensions is larger than 2. Additionally, new types of high-dimensional transformations are found that perform cyclic operations. Melvin autonomously learns from solutions for simpler systems, which significantly speeds up the discovery rate of more complex experiments. The ability to automate the design of a quantum experiment can be applied to many quantum systems and allows the physical realization of quantum states previously thought of only on paper.

  10. Automation in Immunohematology

    Directory of Open Access Journals (Sweden)

    Meenu Bajpai

    2012-01-01

    Full Text Available There have been rapid technological advances in blood banking in the South Asian region over the past decade, with an increasing emphasis on the quality and safety of blood products. The conventional test tube technique has given way to newer techniques such as the column agglutination technique, solid phase red cell adherence assay, and erythrocyte-magnetized technique. These new technologies are adaptable to automation, and major manufacturers in this field have come up with semi- and fully automated equipment for immunohematology tests in the blood bank. Automation improves the objectivity and reproducibility of tests. It reduces human errors in patient identification and transcription errors. Documentation and traceability of tests, reagents and processes, and archiving of results are other major advantages of automation. Shifting from manual methods to automation is a major undertaking for any transfusion service seeking to provide quality patient care with shorter turnaround times for an ever-increasing workload. This article discusses the various issues involved in the process.

  11. Automated Budget System -

    Data.gov (United States)

    Department of Transportation — The Automated Budget System (ABS) automates management and planning of the Mike Monroney Aeronautical Center (MMAC) budget by providing enhanced capability to plan,...

  12. Bioprocessing automation in cell therapy manufacturing: Outcomes of special interest group automation workshop.

    Science.gov (United States)

    Ball, Oliver; Robinson, Sarah; Bure, Kim; Brindley, David A; Mccall, David

    2018-04-01

    Phacilitate held a Special Interest Group workshop event in Edinburgh, UK, in May 2017. The event brought together leading stakeholders in the cell therapy bioprocessing field to identify present and future challenges and propose potential solutions to automation in cell therapy bioprocessing. Here, we review and summarize discussions from the event. Deep biological understanding of a product, its mechanism of action and indication pathogenesis underpin many factors relating to bioprocessing and automation. To fully exploit the opportunities of bioprocess automation, therapeutics developers must closely consider whether an automation strategy is applicable, how to design an 'automatable' bioprocess and how to implement process modifications with minimal disruption. Major decisions around bioprocess automation strategy should involve all relevant stakeholders; communication between technical and business strategy decision-makers is of particular importance. Developers should leverage automation to implement in-process testing, in turn applicable to process optimization, quality assurance (QA)/ quality control (QC), batch failure control, adaptive manufacturing and regulatory demands, but a lack of precedent and technical opportunities can complicate such efforts. Sparse standardization across product characterization, hardware components and software platforms is perceived to complicate efforts to implement automation. The use of advanced algorithmic approaches such as machine learning may have application to bioprocess and supply chain optimization. Automation can substantially de-risk the wider supply chain, including tracking and traceability, cryopreservation and thawing and logistics. The regulatory implications of automation are currently unclear because few hardware options exist and novel solutions require case-by-case validation, but automation can present attractive regulatory incentives. Copyright © 2018 International Society for Cellular Therapy

  13. Rapid access to compound libraries through flow technology: fully automated synthesis of a 3-aminoindolizine library via orthogonal diversification.

    Science.gov (United States)

    Lange, Paul P; James, Keith

    2012-10-08

    A novel methodology for the synthesis of druglike heterocycle libraries has been developed through the use of flow reactor technology. The strategy employs orthogonal modification of a heterocyclic core, which is generated in situ, and was used to construct both a 25-membered library of druglike 3-aminoindolizines, and selected examples of a 100-member virtual library. This general protocol allows a broad range of acylation, alkylation and sulfonamidation reactions to be performed in conjunction with a tandem Sonogashira coupling/cycloisomerization sequence. All three synthetic steps were conducted under full automation in the flow reactor, with no handling or isolation of intermediates, to afford the desired products in good yields. This fully automated, multistep flow approach opens the way to highly efficient generation of druglike heterocyclic systems as part of a lead discovery strategy or within a lead optimization program.

  14. A Tale of Two Discoveries: Comparing the Usability of Summon and EBSCO Discovery Service

    Science.gov (United States)

    Foster, Anita K.; MacDonald, Jean B.

    2013-01-01

    Web-scale discovery systems are gaining momentum among academic libraries as libraries seek a means to provide their users with a one-stop searching experience. Illinois State University's Milner Library found itself in the unique position of having access to two distinct discovery products, EBSCO Discovery Service and Serials Solutions' Summon.…

  15. USING CLOUD COMPUTING IN SOLVING THE PROBLEMS OF LOGIC

    Directory of Open Access Journals (Sweden)

    Pavlo V. Mykytenko

    2017-02-01

    Full Text Available The article provides an overview of the most popular cloud services, in particular those that offer complete office suites, describes their basic functional characteristics, and highlights the advantages and disadvantages of cloud services in the educational process. A comparative analysis was made of the spreadsheets included in the office suites of such cloud services as Zoho Office Suite, Microsoft Office 365 and Google Docs. On the basis of the research and its findings, the most suitable cloud services for use in the educational process were suggested. The possibility of using spreadsheets in the study of logic was considered, from creating formulas that implement logical operations to building tools that automate the problem-solving process.
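    As a small example of the spreadsheet-style logic exercises the article describes, the sketch below tabulates a truth table for the formula (A AND B) → C, the same work a learner would do with the spreadsheet functions AND(), OR() and NOT(); the particular formula is an assumption chosen for illustration.

```python
from itertools import product

def formula(a, b, c):
    """(A AND B) -> C, expressed as material implication."""
    return (not (a and b)) or c

print("A B C | (A AND B) -> C")
for a, b, c in product([False, True], repeat=3):
    print(int(a), int(b), int(c), "|", int(formula(a, b, c)))
```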

  16. 29 CFR 2200.208 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Discovery. 2200.208 Section 2200.208 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH REVIEW COMMISSION RULES OF PROCEDURE Simplified Proceedings § 2200.208 Discovery. Discovery, including requests for admissions, will only be...

  17. 47 CFR 65.105 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Discovery. 65.105 Section 65.105... OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Procedures § 65.105 Discovery. (a) Participants... evidence. (c) Discovery requests pursuant to § 65.105(b), including written interrogatories, shall be filed...

  18. 49 CFR 209.313 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Discovery. 209.313 Section 209.313 Transportation... TRANSPORTATION RAILROAD SAFETY ENFORCEMENT PROCEDURES Disqualification Procedures § 209.313 Discovery. (a... parties. Discovery is designed to enable a party to obtain relevant information needed for preparation of...

  19. Sports Stars: Analyzing the Performance of Astronomers at Visualization-based Discovery

    Science.gov (United States)

    Fluke, C. J.; Parrington, L.; Hegarty, S.; MacMahon, C.; Morgan, S.; Hassan, A. H.; Kilborn, V. A.

    2017-05-01

    In this data-rich era of astronomy, there is a growing reliance on automated techniques to discover new knowledge. The role of the astronomer may change from being a discoverer to being a confirmer. But what do astronomers actually look at when they distinguish between “sources” and “noise?” What are the differences between novice and expert astronomers when it comes to visual-based discovery? Can we identify elite talent or coach astronomers to maximize their potential for discovery? By looking to the field of sports performance analysis, we consider an established, domain-wide approach, where the expertise of the viewer (i.e., a member of the coaching team) plays a crucial role in identifying and determining the subtle features of gameplay that provide a winning advantage. As an initial case study, we investigate whether the SportsCode performance analysis software can be used to understand and document how an experienced Hi astronomer makes discoveries in spectral data cubes. We find that the process of timeline-based coding can be applied to spectral cube data by mapping spectral channels to frames within a movie. SportsCode provides a range of easy to use methods for annotation, including feature-based codes and labels, text annotations associated with codes, and image-based drawing. The outputs, including instance movies that are uniquely associated with coded events, provide the basis for a training program or team-based analysis that could be used in unison with discipline specific analysis software. In this coordinated approach to visualization and analysis, SportsCode can act as a visual notebook, recording the insight and decisions in partnership with established analysis methods. Alternatively, in situ annotation and coding of features would be a valuable addition to existing and future visualization and analysis packages.

  20. Quantifying the Ease of Scientific Discovery.

    Science.gov (United States)

    Arbesman, Samuel

    2011-02-01

    It has long been known that scientific output proceeds on an exponential increase, or more properly, a logistic growth curve. The interplay between effort and discovery is clear, and the nature of the functional form has been thought to be due to many changes in the scientific process over time. Here I show a quantitative method for examining the ease of scientific progress, another necessary component in understanding scientific discovery. Using examples from three different scientific disciplines - mammalian species, chemical elements, and minor planets - I find the ease of discovery to conform to an exponential decay. In addition, I show how the pace of scientific discovery can be best understood as the outcome of both scientific output and ease of discovery. A quantitative study of the ease of scientific discovery in the aggregate, such as done here, has the potential to provide a great deal of insight into both the nature of future discoveries and the technical processes behind discoveries in science.
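    A toy numeric illustration of the paper's framing, with made-up growth and decay constants rather than the paper's data: if output grows exponentially while the ease of each discovery decays exponentially, the pace of discovery is their product.

```python
import numpy as np

years = np.arange(0, 101)
output = np.exp(0.04 * years)    # assumed exponential growth of scientific output/effort
ease = np.exp(-0.05 * years)     # assumed exponential decay of the ease of discovery
pace = output * ease             # discoveries per unit time (illustrative)

print("pace at year 0:", round(pace[0], 3), " pace at year 100:", round(pace[100], 3))
```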

  1. 15 CFR 280.210 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Discovery. 280.210 Section 280.210... STANDARDS AND TECHNOLOGY, DEPARTMENT OF COMMERCE ACCREDITATION AND ASSESSMENT PROGRAMS FASTENER QUALITY Enforcement § 280.210 Discovery. (a) General. The parties are encouraged to engage in voluntary discovery...

  2. 78 FR 66039 - Modification of National Customs Automation Program Test Concerning Automated Commercial...

    Science.gov (United States)

    2013-11-04

    ... Customs Automation Program Test Concerning Automated Commercial Environment (ACE) Cargo Release (Formerly...) plan to both rename and modify the National Customs Automation Program (NCAP) test concerning the... data elements required to obtain release for cargo transported by air. The test will now be known as...

  3. 10 CFR 1013.21 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Discovery. 1013.21 Section 1013.21 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) PROGRAM FRAUD CIVIL REMEDIES AND PROCEDURES § 1013.21 Discovery. (a) The following types of discovery are authorized: (1) Requests for production of documents for inspection and...

  4. 37 CFR 2.120 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Discovery. 2.120 Section 2... COMMERCE RULES OF PRACTICE IN TRADEMARK CASES Procedure in Inter Partes Proceedings § 2.120 Discovery. (a... to disclosure and discovery shall apply in opposition, cancellation, interference and concurrent use...

  5. 46 CFR 550.502 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 9 2010-10-01 2010-10-01 false Discovery. 550.502 Section 550.502 Shipping FEDERAL... Proceedings § 550.502 Discovery. The Commission may authorize a party to a proceeding to use depositions, written interrogatories, and discovery procedures that, to the extent practicable, are in conformity with...

  6. 15 CFR 785.8 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Discovery. 785.8 Section 785.8... INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE ADDITIONAL PROTOCOL REGULATIONS ENFORCEMENT § 785.8 Discovery. (a) General. The parties are encouraged to engage in voluntary discovery regarding any matter, not...

  7. 22 CFR 35.21 - Discovery.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Discovery. 35.21 Section 35.21 Foreign Relations DEPARTMENT OF STATE CLAIMS AND STOLEN PROPERTY PROGRAM FRAUD CIVIL REMEDIES § 35.21 Discovery. (a) The following types of discovery are authorized: (1) Requests for production of documents for...

  8. 45 CFR 96.65 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Discovery. 96.65 Section 96.65 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION BLOCK GRANTS Hearing Procedure § 96.65 Discovery. The use of interrogatories, depositions, and other forms of discovery shall not be allowed. ...

  9. 49 CFR 31.21 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 1 2010-10-01 2010-10-01 false Discovery. 31.21 Section 31.21 Transportation Office of the Secretary of Transportation PROGRAM FRAUD CIVIL REMEDIES § 31.21 Discovery. (a) The following types of discovery are authorized: (1) Requests for production of documents for inspection and...

  10. 43 CFR 4.1130 - Discovery methods.

    Science.gov (United States)

    2010-10-01

    ... 43 Public Lands: Interior 1 2010-10-01 2010-10-01 false Discovery methods. 4.1130 Section 4.1130... Special Rules Applicable to Surface Coal Mining Hearings and Appeals Discovery § 4.1130 Discovery methods. Parties may obtain discovery by one or more of the following methods— (a) Depositions upon oral...

  11. Automation-aided Task Loads Index based on the Automation Rate Reflecting the Effects on Human Operators in NPPs

    International Nuclear Information System (INIS)

    Lee, Seungmin; Seong, Poonghyun; Kim, Jonghyun

    2013-01-01

    Many researchers have found that a high automation rate does not guarantee high performance. Therefore, to reflect the effects of automation on human performance, a new estimation method of the automation rate that considers the effects of automation on human operators in nuclear power plants (NPPs) was suggested. These suggested measures express how much the automation supports human operators, but they cannot express the change in the human operators' workload, that is, whether the workload is increased or decreased. Before considering automation rates, whether the adopted automation is beneficial or not should be estimated in advance. In this study, to estimate the appropriateness of automation according to the change in the human operators' task loads, an automation-aided task load index is suggested based on the concept of the suggested automation rate. To ensure plant safety and efficiency on behalf of human operators, various automation systems have been installed in NPPs, and many tasks that were previously conducted by human operators can now be supported by computer-based operator aids. According to the characteristics of the automation types, estimation methods for the system automation rate and the cognitive automation rate were suggested. The proposed estimation method concentrates on the effects of introducing automation, so it directly expresses how much the automated system supports human operators. Based on the suggested automation rates, a way to estimate how much the automated system can affect the human operators' cognitive task load is suggested in this study. When there is no automation, the calculated index is 1, meaning there is no change in the human operators' task load

  12. Asleep at the automated wheel-Sleepiness and fatigue during highly automated driving.

    Science.gov (United States)

    Vogelpohl, Tobias; Kühn, Matthias; Hummel, Thomas; Vollrath, Mark

    2018-03-20

    Due to the lack of active involvement in the driving situation and due to monotonous driving environments drivers with automation may be prone to become fatigued faster than manual drivers (e.g. Schömig et al., 2015). However, little is known about the progression of fatigue during automated driving and its effects on the ability to take back manual control after a take-over request. In this driving simulator study with N = 60 drivers we used a three factorial 2 × 2 × 12 mixed design to analyze the progression (12 × 5 min; within subjects) of driver fatigue in drivers with automation compared to manual drivers (between subjects). Driver fatigue was induced as either mainly sleep related or mainly task related fatigue (between subjects). Additionally, we investigated the drivers' reactions to a take-over request in a critical driving scenario to gain insights into the ability of fatigued drivers to regain manual control and situation awareness after automated driving. Drivers in the automated driving condition exhibited facial indicators of fatigue after 15 to 35 min of driving. Manual drivers only showed similar indicators of fatigue if they suffered from a lack of sleep and then only after a longer period of driving (approx. 40 min). Several drivers in the automated condition closed their eyes for extended periods of time. In the driving with automation condition mean automation deactivation times after a take-over request were slower for a certain percentage (about 30%) of the drivers with a lack of sleep (M = 3.2; SD = 2.1 s) compared to the reaction times after a long drive (M = 2.4; SD = 0.9 s). Drivers with automation also took longer than manual drivers to first glance at the speed display after a take-over request and were more likely to stay behind a braking lead vehicle instead of overtaking it. Drivers are unable to stay alert during extended periods of automated driving without non-driving related tasks. Fatigued drivers could

  13. Knowledge Discovery in Data in Construction Projects

    Directory of Open Access Journals (Sweden)

    Szelka J.

    2016-06-01

    Full Text Available Decision-making processes, including those related to ill-structured problems, are of considerable significance in the area of construction projects. Computer-aided inference under such conditions requires the employment of specific (non-algorithmic) methods and tools, the best recognized and most successfully used in practice being expert systems. The knowledge indispensable for such systems to perform inference is most frequently acquired directly from experts (through a dialogue between a domain expert and a knowledge engineer) and from various source documents. Little is known, however, about the possibility of automating knowledge acquisition in this area, and as a result it is scarcely ever used in practice. It has to be noted that in numerous areas of management, more and more attention is being paid to the issue of acquiring knowledge from available data, and various methods and tools for this purpose are known and successfully employed in practice to aid decision-making. The paper attempts to select methods for knowledge discovery in data and presents possible ways of representing the acquired knowledge as well as sample tools (including programming ones) allowing for the use of this knowledge in the area under consideration.

  14. Procedure automation: the effect of automated procedure execution on situation awareness and human performance

    International Nuclear Information System (INIS)

    Andresen, Gisle; Svengren, Haakan; Heimdal, Jan O.; Nilsen, Svein; Hulsund, John-Einar; Bisio, Rossella; Debroise, Xavier

    2004-04-01

    As advised by the procedure workshop convened in Halden in 2000, the Halden Project conducted an experiment on the effect of automation of Computerised Procedure Systems (CPS) on situation awareness and human performance. The expected outcome of the study was to provide input for guidance on CPS design, and to support the Halden Project's ongoing research on human reliability analysis. The experiment was performed in HAMMLAB using the HAMBO BWR simulator and the COPMA-III CPS. Eight crews of operators from Forsmark 3 and Oskarshamn 3 participated. Three research questions were investigated: 1) Does procedure automation create Out-Of-The-Loop (OOTL) performance problems? 2) Does procedure automation affect situation awareness? 3) Does procedure automation affect crew performance? The independent variable, 'procedure configuration', had four levels: paper procedures, manual CPS, automation with breaks, and full automation. The results showed that the operators experienced OOTL problems in full automation, but that situation awareness and crew performance (response time) were not affected. One possible explanation for this is that the operators monitored the automated procedure execution conscientiously, something which may have prevented the OOTL problems from having negative effects on situation awareness and crew performance. In a debriefing session, the operators clearly expressed their dislike for the full automation condition, but that automation with breaks could be suitable for some tasks. The main reason why the operators did not like the full automation was that they did not feel being in control. A qualitative analysis addressing factors contributing to response time delays revealed that OOTL problems did not seem to cause delays, but that some delays could be explained by the operators having problems with the freeze function of the CPS. Also other factors such as teamwork and operator tendencies were of importance. Several design implications were drawn

  15. 6 CFR 13.21 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 6 Domestic Security 1 2010-01-01 2010-01-01 false Discovery. 13.21 Section 13.21 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY PROGRAM FRAUD CIVIL REMEDIES § 13.21 Discovery. (a) In general. (1) The following types of discovery are authorized: (i) Requests for production of...

  16. 45 CFR 99.23 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Discovery. 99.23 Section 99.23 Public Welfare... DEVELOPMENT FUND Hearing Procedures § 99.23 Discovery. The Department, the Lead Agency, and any individuals or groups recognized as parties shall have the right to conduct discovery (including depositions) against...

  17. 20 CFR 355.21 - Discovery.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Discovery. 355.21 Section 355.21 Employees... UNDER THE PROGRAM FRAUD CIVIL REMEDIES ACT OF 1986 § 355.21 Discovery. (a) The following types of discovery are authorized: (1) Requests for production of documents for inspection and copying; (2) Requests...

  18. 10 CFR 2.1018 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Discovery. 2.1018 Section 2.1018 Energy NUCLEAR REGULATORY... Geologic Repository § 2.1018 Discovery. (a)(1) Parties, potential parties, and interested governmental participants in the high-level waste licensing proceeding may obtain discovery by one or more of the following...

  19. 28 CFR 71.21 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Discovery. 71.21 Section 71.21 Judicial... REMEDIES ACT OF 1986 Implementation for Actions Initiated by the Department of Justice § 71.21 Discovery. (a) The following types of discovery are authorized: (1) Requests for production of documents for...

  20. 13 CFR 134.310 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Discovery. 134.310 Section 134.310 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION RULES OF PROCEDURE GOVERNING CASES BEFORE THE... Designations § 134.310 Discovery. Discovery will not be permitted in appeals from size determinations or NAICS...

  1. 34 CFR 33.21 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false Discovery. 33.21 Section 33.21 Education Office of the Secretary, Department of Education PROGRAM FRAUD CIVIL REMEDIES ACT § 33.21 Discovery. (a) The following types of discovery are authorized: (1) Requests for production of documents for inspection and copying...

  2. 28 CFR 18.7 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Discovery. 18.7 Section 18.7 Judicial Administration DEPARTMENT OF JUSTICE OFFICE OF JUSTICE PROGRAMS HEARING AND APPEAL PROCEDURES § 18.7 Discovery.... Such order may be entered upon a showing that the deposition is necessary for discovery purposes, and...

  3. 7 CFR 1.322 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 1 2010-01-01 2010-01-01 false Discovery. 1.322 Section 1.322 Agriculture Office of... Under the Program Fraud Civil Remedies Act of 1986 § 1.322 Discovery. (a) The following types of discovery are authorized: (1) Requests for production, inspection and photocopying of documents; (2...

  4. 45 CFR 1386.103 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 4 2010-10-01 2010-10-01 false Discovery. 1386.103 Section 1386.103 Public... Hearing Procedures § 1386.103 Discovery. The Department and any party named in the Notice issued pursuant to § 1386.90 has the right to conduct discovery (including depositions) against opposing parties as...

  5. 45 CFR 79.21 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Discovery. 79.21 Section 79.21 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION PROGRAM FRAUD CIVIL REMEDIES § 79.21 Discovery. (a) The following types of discovery are authorized: (1) Requests for production of documents for...

  6. 12 CFR 308.520 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Discovery. 308.520 Section 308.520 Banks and... PROCEDURE Program Fraud Civil Remedies and Procedures § 308.520 Discovery. (a) The following types of discovery are authorized: (1) Requests for production of documents for inspection and copying; (2) Requests...

  7. 47 CFR 1.729 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Discovery. 1.729 Section 1.729..., and Reports Involving Common Carriers Formal Complaints § 1.729 Discovery. (a) Subject to paragraph (i... seek discovery of any non-privileged matter that is relevant to the material facts in dispute in the...

  8. 7 CFR 283.12 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false Discovery. 283.12 Section 283.12 Agriculture... of $50,000 or More § 283.12 Discovery. (a) Dispositions—(1) Motion for taking deposition. Only upon a... exist if the information sought appears reasonably calculated to lead to the discovery of admissible...

  9. Reducing the cost of semi-automated in-gel tryptic digestion and GeLC sample preparation for high-throughput proteomics.

    Science.gov (United States)

    Ruelcke, Jayde E; Loo, Dorothy; Hill, Michelle M

    2016-10-21

    Peptide generation by trypsin digestion is typically the first step in mass spectrometry-based proteomics experiments, including 'bottom-up' discovery and targeted proteomics using multiple reaction monitoring. Manual tryptic digest and the subsequent clean-up steps can add variability even before the sample reaches the analytical platform. While specialized filter plates and tips have been designed for automated sample processing, the specialty reagents required may not be accessible or feasible due to their high cost. Here, we report a lower-cost semi-automated protocol for in-gel digestion and GeLC using standard 96-well microplates. Further cost savings were realized by re-using reagent tips with optimized sample ordering. To evaluate the methodology, we compared a simple mixture of 7 proteins and a complex cell-lysate sample. The results across three replicates showed that our semi-automated protocol had performance equal to or better than a manual in-gel digestion with respect to replicate variability and level of contamination. In this paper, we also provide the Agilent Bravo method file, which can be adapted to other liquid handlers. The simplicity, reproducibility, and cost-effectiveness of our semi-automated protocol make it ideal for routine in-gel and GeLC sample preparations, as well as high throughput processing of large clinical sample cohorts. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Automation-aided Task Loads Index based on the Automation Rate Reflecting the Effects on Human Operators in NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seungmin; Seong, Poonghyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Kim, Jonghyun [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2013-05-15

    Many researchers have found that a high automation rate does not guarantee high performance. Therefore, to reflect the effects of automation on human performance, a new estimation method of the automation rate that considers the effects of automation on human operators in nuclear power plants (NPPs) was suggested. These suggested measures express how much the automation supports human operators, but they cannot express the change in the human operators' workload, that is, whether the workload is increased or decreased. Before considering automation rates, whether the adopted automation is beneficial or not should be estimated in advance. In this study, to estimate the appropriateness of automation according to the change in the human operators' task loads, an automation-aided task load index is suggested based on the concept of the suggested automation rate. To ensure plant safety and efficiency on behalf of human operators, various automation systems have been installed in NPPs, and many tasks that were previously conducted by human operators can now be supported by computer-based operator aids. According to the characteristics of the automation types, estimation methods for the system automation rate and the cognitive automation rate were suggested. The proposed estimation method concentrates on the effects of introducing automation, so it directly expresses how much the automated system supports human operators. Based on the suggested automation rates, a way to estimate how much the automated system can affect the human operators' cognitive task load is suggested in this study. When there is no automation, the calculated index is 1, meaning there is no change in the human operators' task load.

  11. 42 CFR 1005.7 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Discovery. 1005.7 Section 1005.7 Public Health... OF EXCLUSIONS, CIVIL MONEY PENALTIES AND ASSESSMENTS § 1005.7 Discovery. (a) A party may make a... and any forms of discovery, other than those permitted under paragraph (a) of this section, are not...

  12. 29 CFR 1603.210 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 4 2010-07-01 2010-07-01 false Discovery. 1603.210 Section 1603.210 Labor Regulations... GOVERNMENT EMPLOYEE RIGHTS ACT OF 1991 Hearings § 1603.210 Discovery. (a) Unless otherwise ordered by the administrative law judge, discovery may begin as soon as the complaint has been transmitted to the administrative...

  13. 45 CFR 150.435 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Discovery. 150.435 Section 150.435 Public Welfare... AND INDIVIDUAL INSURANCE MARKETS Administrative Hearings § 150.435 Discovery. (a) The parties must identify any need for discovery from the opposing party as soon as possible, but no later than the time for...

  14. 34 CFR 81.16 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false Discovery. 81.16 Section 81.16 Education Office of the... Discovery. (a) The parties to a case are encouraged to exchange relevant documents and information voluntarily. (b) The ALJ, at a party's request, may order compulsory discovery described in paragraph (c) of...

  15. 29 CFR 1905.25 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 5 2010-07-01 2010-07-01 false Discovery. 1905.25 Section 1905.25 Labor Regulations... OCCUPATIONAL SAFETY AND HEALTH ACT OF 1970 Hearings § 1905.25 Discovery. (a) Depositions. (1) For reasons of... discovery. Whenever appropriate to a just disposition of any issue in a hearing, the presiding hearing...

  16. 12 CFR 1780.26 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Discovery. 1780.26 Section 1780.26 Banks and... OF PRACTICE AND PROCEDURE RULES OF PRACTICE AND PROCEDURE Prehearing Proceedings § 1780.26 Discovery. (a) Limits on discovery. Subject to the limitations set out in paragraphs (b), (d), and (e) of this...

  17. 45 CFR 160.516 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Discovery. 160.516 Section 160.516 Public Welfare... ADMINISTRATIVE REQUIREMENTS Procedures for Hearings § 160.516 Discovery. (a) A party may make a request to... forms of discovery, other than those permitted under paragraph (a) of this section, are not authorized...

  18. 42 CFR 430.86 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Discovery. 430.86 Section 430.86 Public Health... Plans and Practice to Federal Requirements § 430.86 Discovery. CMS and any party named in the notice issued under § 430.70 has the right to conduct discovery (including depositions) against opposing parties...

  19. Discovery Driven Growth

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj

    2009-01-01

    Review of Discovery Driven Growth: A breakthrough process to reduce risk and seize opportunity, by Rita G. McGrath & Ian C. MacMillan, Boston: Harvard Business Press. Publication date: 14 August ...

  20. Automated genomic DNA purification options in agricultural applications using MagneSil paramagnetic particles

    Science.gov (United States)

    Bitner, Rex M.; Koller, Susan C.

    2002-06-01

    The automated high-throughput purification of genomic DNA from plant materials can be performed using MagneSil paramagnetic particles on the Beckman-Coulter FX, the BioMek 2000, and the Tecan Genesis robots. Similar automated methods are available for DNA purification from animal blood. These methods eliminate organic extractions, lengthy incubations and cumbersome filter plates. The DNA is suitable for applications such as PCR and RAPD analysis. Methods are described for processing traditionally difficult samples, such as those containing large amounts of polyphenolics or oils, while still maintaining a high level of DNA purity. The robotic protocols have been optimized for agricultural applications such as marker-assisted breeding, seed-quality testing, and SNP discovery and scoring. In addition to high-yield purification of DNA from plant samples or animal blood, the use of Promega's DNA-IQ purification system is also described. This method allows for the purification of a narrow range of DNA regardless of the amount of additional DNA present in the initial sample. This simultaneous isolation and quantification of DNA allows the DNA to be used directly in applications such as PCR, SNP analysis, and RAPD, without the need for separate quantitation of the DNA.

  1. 42 CFR 405.1037 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Discovery. 405.1037 Section 405.1037 Public Health... Appeals Under Original Medicare (Part A and Part B) Alj Hearings § 405.1037 Discovery. (a) General rules. (1) Discovery is permissible only when CMS or its contractor elects to participate in an ALJ hearing...

  2. 20 CFR 498.207 - Discovery.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Discovery. 498.207 Section 498.207 Employees... § 498.207 Discovery. (a) For the purpose of inspection and copying, a party may make a request to...) Any form of discovery other than that permitted under paragraph (a) of this section, such as requests...

  3. 42 CFR 93.512 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Discovery. 93.512 Section 93.512 Public Health... Process § 93.512 Discovery. (a) Request to provide documents. A party may only request another party to...) Responses to a discovery request. Within 30 days of receiving a request for the production of documents, a...

  4. 42 CFR 3.516 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Discovery. 3.516 Section 3.516 Public Health PUBLIC... AND PATIENT SAFETY WORK PRODUCT Enforcement Program § 3.516 Discovery. (a) A party may make a request... and any forms of discovery, other than those permitted under paragraph (a) of this section, are not...

  5. 29 CFR 22.21 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true Discovery. 22.21 Section 22.21 Labor Office of the Secretary of Labor PROGRAM FRAUD CIVIL REMEDIES ACT OF 1986 § 22.21 Discovery. (a) The following types of discovery are authorized: (1) Requests for production of documents for inspection and copying; (2) Requests...

  6. Automation of radioimmunoassay

    International Nuclear Information System (INIS)

    Yamaguchi, Chisato; Yamada, Hideo; Iio, Masahiro

    1974-01-01

    Automation systems under development for measuring Australian antigen by radioimmunoassay are discussed. Samples were processed as follows: blood serum was dispensed into test tubes by an automated sampler and incubated under controlled time and temperature; the first counting was omitted; labelled antibody was dispensed into the serum after washing; the samples were incubated and then centrifuged; the radioactivity in the precipitate was counted with an auto-well counter; and the measurements were tabulated by an automated typewriter. Not only a well-type counter but also a position counter was studied. (Kanao, N.)

  7. Computational methods in drug discovery

    Directory of Open Access Journals (Sweden)

    Sumudu P. Leelananda

    2016-12-01

    The process of drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The Human Genome Project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power, has made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein–ligand docking, pharmacophore modeling and QSAR techniques are reviewed.
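
    As a rough illustration of the QSAR techniques mentioned above, the sketch below fits a simple regression model relating molecular descriptors to a measured activity. The descriptor values, activities and choice of scikit-learn are all assumptions made for the sake of the example; the review does not prescribe any particular toolkit or dataset.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import train_test_split

        # Hypothetical descriptor matrix: one row per compound; columns could be
        # molecular weight, logP and hydrogen-bond donor count (all invented values).
        X = np.array([
            [310.4, 2.1, 1],
            [275.3, 1.4, 2],
            [402.9, 3.8, 0],
            [350.1, 2.9, 1],
            [298.7, 1.9, 3],
            [380.5, 3.2, 2],
        ])
        # Hypothetical measured activities (e.g. pIC50) for the same compounds.
        y = np.array([6.2, 5.1, 7.4, 6.8, 5.0, 7.0])

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
        model = Ridge(alpha=1.0).fit(X_train, y_train)   # simple linear QSAR model
        print("predicted activities:", model.predict(X_test))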

  8. 77 FR 48527 - National Customs Automation Program (NCAP) Test Concerning Automated Commercial Environment (ACE...

    Science.gov (United States)

    2012-08-14

    ... National Customs Automation Program (NCAP) test concerning the simplified entry functionality in the... DEPARTMENT OF HOMELAND SECURITY U.S. Customs and Border Protection National Customs Automation Program (NCAP) Test Concerning Automated Commercial Environment (ACE) Simplified Entry: Modification of...

  9. Automated Detection of Small Bodies by Space Based Observation

    Science.gov (United States)

    Bidstrup, P. R.; Grillmayer, G.; Andersen, A. C.; Haack, H.; Jorgensen, J. L.

    The number of known comets and asteroids is increasing every year. To date it includes approximately 250,000 of the largest minor planets, as they are usually called. These discoveries are due to Earth-based observation, which has intensified over the previous decades. Additionally, larger telescopes and arrays of telescopes are being used to explore our Solar System. It is believed that all near-Earth and Main-Belt asteroids with diameters above 10 to 30 km have been discovered, leaving these groups of objects observationally complete. However, the cataloguing of smaller bodies is incomplete, as only a very small fraction of the expected number has been discovered. It is estimated that approximately 10^10 main-belt asteroids in the size range 1 m to 1 km are too faint to be observed using Earth-based telescopes. In order to observe these small bodies, a space-based search must be initiated to remove atmospheric disturbances and to minimize the distance to the asteroids, thereby minimizing the requirement for long camera integration times. A new method of space-based detection of moving non-stellar objects is currently being developed utilising the Advanced Stellar Compass (ASC), built for spacecraft attitude determination by Ørsted, Danish Technical University. The ASC serves as a backbone technology in the project, as it is capable of fully automated distinction between known and unknown celestial objects. By only processing objects of particular interest, i.e. moving objects, it will be possible to discover small bodies with a minimum of ground control, with the ultimate ambition of a fully automated space search probe. Currently, the ASC is being mounted on the Flying Laptop satellite of the Institute of Space Systems, Universität Stuttgart. After launch into a low Earth polar orbit in 2008, it will test the detection method with ASC equipment that already has significant in-flight experience. A future use of the ASC-based automated ...
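
    The automated distinction between catalogued and uncatalogued, moving objects described above can be sketched in a purely illustrative way: detections that match no star-catalogue entry in either of two frames, but reappear at a slightly shifted position, are flagged as candidate moving objects. The coordinates, tolerances and catalogue entries below are invented, and this is not the ASC's actual algorithm.

        import numpy as np

        # Hypothetical star catalogue and detections from two frames (RA/Dec in degrees).
        catalog = np.array([[10.001, -5.002], [10.110, -5.100], [10.250, -4.950]])
        frame1 = np.array([[10.001, -5.002], [10.110, -5.100], [10.180, -5.050]])
        frame2 = np.array([[10.001, -5.002], [10.110, -5.100], [10.183, -5.052]])

        MATCH_TOL = 0.005   # max separation (deg) to treat a detection as a catalogued star

        def unmatched(detections, reference, tol=MATCH_TOL):
            """Return detections with no counterpart in `reference` within `tol`."""
            return np.array([d for d in detections
                             if np.min(np.linalg.norm(reference - d, axis=1)) > tol])

        u1 = unmatched(frame1, catalog)   # non-stellar detections in frame 1
        u2 = unmatched(frame2, catalog)   # non-stellar detections in frame 2

        # Uncatalogued detections that moved slightly between frames are candidate
        # minor planets (0.05 deg is an arbitrary upper bound on frame-to-frame motion).
        for d in u1:
            sep = np.min(np.linalg.norm(u2 - d, axis=1)) if len(u2) else np.inf
            if 0.0 < sep < 0.05:
                print("candidate moving object near", d)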

  10. 10 CFR 205.198 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Discovery. 205.198 Section 205.198 Energy DEPARTMENT OF... of Proposed Disallowance, and Order of Disallowance § 205.198 Discovery. (a) If a person intends to file a Motion for Discovery, he must file it at the same time that he files his Statement of Objections...

  11. 12 CFR 908.46 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Discovery. 908.46 Section 908.46 Banks and... PRACTICE AND PROCEDURE IN HEARINGS ON THE RECORD Pre-Hearing Proceedings § 908.46 Discovery. (a) Limits on discovery. Subject to the limitations set out in paragraphs (b), (d), and (e) of this section, any party to...

  12. 21 CFR 17.23 - Discovery.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 1 2010-04-01 2010-04-01 false Discovery. 17.23 Section 17.23 Food and Drugs FOOD... HEARINGS § 17.23 Discovery. (a) No later than 60 days prior to the hearing, unless otherwise ordered by the..., depositions, and any forms of discovery, other than those permitted under paragraphs (a) and (e) of this...

  13. Automating crystallographic structure solution and refinement of protein–ligand complexes

    International Nuclear Information System (INIS)

    Echols, Nathaniel; Moriarty, Nigel W.; Klei, Herbert E.; Afonine, Pavel V.; Bunkóczi, Gábor; Headd, Jeffrey J.; McCoy, Airlie J.; Oeffner, Robert D.; Read, Randy J.; Terwilliger, Thomas C.; Adams, Paul D.

    2013-01-01

    A software system for automated protein–ligand crystallography has been implemented in the Phenix suite. This significantly reduces the manual effort required in high-throughput crystallographic studies. High-throughput drug-discovery and mechanistic studies often require the determination of multiple related crystal structures that only differ in the bound ligands, point mutations in the protein sequence and minor conformational changes. If performed manually, solution and refinement requires extensive repetition of the same tasks for each structure. To accelerate this process and minimize manual effort, a pipeline encompassing all stages of ligand building and refinement, starting from integrated and scaled diffraction intensities, has been implemented in Phenix. The resulting system is able to successfully solve and refine large collections of structures in parallel without extensive user intervention prior to the final stages of model completion and validation

  14. Laboratory Automation and Middleware.

    Science.gov (United States)

    Riben, Michael

    2015-06-01

    The practice of surgical pathology is under constant pressure to deliver the highest quality of service, reduce errors, increase throughput, and decrease turnaround time while at the same time dealing with an aging workforce, increasing financial constraints, and economic uncertainty. Although total laboratory automation has not been achievable, great progress continues to be made in workstation automation in all areas of the pathology laboratory. This report highlights the benefits and challenges of pathology automation, reviews middleware and its use to facilitate automation, and reviews the progress so far in the anatomic pathology laboratory.

  15. Leveraging Crowdsourcing and Linked Open Data for Geoscience Data Sharing and Discovery

    Science.gov (United States)

    Narock, T. W.; Rozell, E. A.; Hitzler, P.; Arko, R. A.; Chandler, C. L.; Wilson, B. D.

    2013-12-01

    Data citation standards can form the basis for increased incentives, recognition, and rewards for scientists. Additionally, knowing which data were utilized in a particular publication can enhance discovery and reuse. Yet, a lack of data citation information in existing publications as well as ambiguities across datasets can limit the accuracy of automated linking approaches. We describe a crowdsourcing approach, based on Linked Open Data, in which AGU abstracts are linked to the data used in those presentations. We discuss our efforts to incentivize participants through promotion of their research, the role that the Semantic Web can play in this effort, and how this work differs from existing platforms such as Mendeley and ResearchGate. Further, we discuss the benefits and challenges of Linked Open Data as a technical solution including the role of provenance, trust, and computational reasoning.
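
    As a rough sketch of the Linked Open Data approach described above, the example below records a single crowdsourced link between an AGU abstract and a dataset as RDF triples using rdflib. The identifiers, namespace and choice of predicate are hypothetical; the paper does not specify a particular vocabulary.

        from rdflib import Graph, Literal, Namespace, URIRef

        # Hypothetical namespaces and identifiers for an abstract and a dataset.
        EX = Namespace("http://example.org/agu/")
        DCT = Namespace("http://purl.org/dc/terms/")

        g = Graph()
        abstract = EX["abstract/2013-IN42A-01"]                   # made-up abstract ID
        dataset = URIRef("http://example.org/data/ctd-cast-42")   # made-up dataset URI

        # The crowdsourced assertion: this presentation used that dataset.
        g.add((abstract, DCT["references"], dataset))
        g.add((dataset, DCT["title"], Literal("Hypothetical CTD cast data")))

        print(g.serialize(format="turtle"))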

  16. Managing laboratory automation.

    Science.gov (United States)

    Saboe, T J

    1995-01-01

    This paper discusses the process of managing automated systems through their life cycles within the quality-control (QC) laboratory environment. The focus is on the process of directing and managing the evolving automation of a laboratory; system examples are given. The author shows how both task and data systems have evolved, and how they interrelate. A BIG picture, or continuum view, is presented and some of the reasons for success or failure of the various examples cited are explored. Finally, some comments on future automation needs are discussed.

  17. Determination of the Optimized Automation Rate considering Effects of Automation on Human Operators in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun; Kim, Jong Hyun; Kim, Man Cheol

    2015-01-01

    Automation refers to the use of a device or a system to perform a function previously performed by a human operator. It is introduced to reduce human errors and to enhance performance in various industrial fields, including the nuclear industry. However, these positive effects are not always achieved in complex systems such as nuclear power plants (NPPs). An excessive introduction of automation can generate new roles for human operators and change activities in unexpected ways. As more automation systems are adopted, the ability of human operators to detect automation failures and resume manual control is diminished. This disadvantage of automation is called the Out-of-the-Loop (OOTL) problem. The positive and negative effects of automation should be considered at the same time to determine the appropriate level of automation to introduce. Thus, in this paper, we suggest an estimation method that weighs both kinds of effects when determining the appropriate introduction of automation. The conventional concept of the automation rate is limited in that it does not consider the effects of automation on human operators; thus, a new estimation method for the automation rate is suggested to overcome this problem

  18. 78 FR 44142 - Modification of Two National Customs Automation Program (NCAP) Tests Concerning Automated...

    Science.gov (United States)

    2013-07-23

    ... Customs Automation Program (NCAP) Tests Concerning Automated Commercial Environment (ACE) Document Image... (CBP's) plan to modify the National Customs Automation Program (NCAP) tests concerning document imaging... entry process by reducing the number of data elements required to obtain release for cargo transported...

  19. The Science of Home Automation

    Science.gov (United States)

    Thomas, Brian Louis

    Smart home technologies and the concept of home automation have become more popular in recent years. This popularity has been accompanied by social acceptance of passive sensors installed throughout the home. The subsequent increase in smart homes facilitates the creation of home automation strategies. We believe that home automation strategies can be generated intelligently by utilizing smart home sensors and activity learning. In this dissertation, we hypothesize that home automation can benefit from activity awareness. To test this, we develop our activity-aware smart automation system, CARL (CASAS Activity-aware Resource Learning). CARL learns the associations between activities and device usage from historical data and utilizes the activity-aware capabilities to control the devices. To help validate CARL we deploy and test three different versions of the automation system in a real-world smart environment. To provide a foundation of activity learning, we integrate existing activity recognition and activity forecasting into CARL home automation. We also explore two alternatives to using human-labeled data to train the activity learning models. The first unsupervised method is Activity Detection, and the second is a modified DBSCAN algorithm that utilizes Dynamic Time Warping (DTW) as a distance metric. We compare the performance of activity learning with human-defined labels and with automatically-discovered activity categories. To provide evidence in support of our hypothesis, we evaluate CARL automation in a smart home testbed. Our results indicate that home automation can be boosted through activity awareness. We also find that the resulting automation has a high degree of usability and comfort for the smart home resident.
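
    The second clustering alternative mentioned above, DBSCAN with Dynamic Time Warping as the distance metric, can be sketched roughly as follows. The sensor-event sequences are invented, the DTW implementation is a plain textbook dynamic-programming version, and the eps and min_samples values are arbitrary; this is not the dissertation's actual code.

        import numpy as np
        from sklearn.cluster import DBSCAN

        def dtw(a, b):
            """Classic dynamic-programming DTW distance between two 1-D sequences."""
            cost = np.full((len(a) + 1, len(b) + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    d = abs(a[i - 1] - b[j - 1])
                    cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
            return cost[len(a), len(b)]

        # Hypothetical daily activity traces (e.g. hour of day of each motion-sensor event).
        sequences = [
            [7.0, 7.5, 8.0, 12.0, 18.0, 22.0],
            [7.0, 8.0, 12.5, 18.0, 22.0],
            [23.0, 1.0, 2.0, 3.0],
            [22.5, 1.0, 2.5, 3.0],
        ]

        # Precompute the pairwise DTW distance matrix so DBSCAN can use it directly.
        n = len(sequences)
        dist = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                dist[i, j] = dist[j, i] = dtw(sequences[i], sequences[j])

        labels = DBSCAN(eps=5.0, min_samples=2, metric="precomputed").fit_predict(dist)
        print(labels)   # cluster ID for each trace; -1 would mark noise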

  20. The circumstances of minor planet discovery

    International Nuclear Information System (INIS)

    Pilcher, F.

    1989-01-01

    The circumstances of the discoveries of minor planets are presented in tabular form. Complete data are given for planets 2125-4044, together with notes pertaining to these planets. The information in the table includes the permanent number and the official name; for planets 330 and onward, the table also includes the provisional designation attached to the discovery apparition, the year, month, and day of discovery, and the discovery place