WorldWideScience

Sample records for automating spreadsheet discovery

  1. Spreadsheet

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    The spreadsheet shown in tables is intended to show how environmental costs can be calculated, displayed, and modified. It is not intended to show the environmental costs of any real resource or its effects, although it could show such costs if actual data were used. It is based on a hypothetical coal plant emitting various quantities of pollutants to which people are exposed. The environmental cost of the plant consists of the economic value of the ensuing health risks. The values used in the table are intended to be illustrative only, although they are based on modified versions of actual data from a study for the Bonneville Power Administration. The formulas used to calculate the values are also displayed. Although only one environmental effect (health risks) is calculated and valued in this spreadsheet, the same or similar procedure could be used for a variety of other environmental effects. This spreadsheet is intended to be a model; a complete accounting for all environmental costs associated with a coal plant is beyond the scope of this project.
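
    The record describes a chained calculation: emissions lead to exposure, exposure to expected health cases, and cases to an economic cost. The minimal sketch below illustrates that arithmetic only; every figure is a hypothetical placeholder, not a value from the record's tables.

        # Illustrative sketch of the chain of calculation such a spreadsheet
        # performs; all figures are hypothetical placeholders.
        emissions_tons = 1000.0          # annual emissions of the hypothetical plant
        exposed_population = 500_000     # people exposed to the plume
        risk_per_ton_per_person = 2e-9   # hypothetical dose-response coefficient
        value_per_case = 4_000_000.0     # hypothetical economic value per health case

        expected_cases = emissions_tons * exposed_population * risk_per_ton_per_person
        environmental_cost = expected_cases * value_per_case
        print(f"expected health cases: {expected_cases:.2f}")
        print(f"environmental cost: ${environmental_cost:,.0f}")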

  2. Automated Supernova Discovery (Abstract)

    Science.gov (United States)

    Post, R. S.

    2015-12-01

    (Abstract only) We are developing a system of robotic telescopes for automatic recognition of supernovae and other transient events, in collaboration with the Puckett Supernova Search Team. The discovery program, SNARE, was first described at the SAS2014 meeting; since then, it has been continuously improved to handle searches under a wide variety of atmospheric conditions. Currently, two telescopes are used to build a reference library while searching for PSNs with a partial library. Because data are taken every cloud-free night, we must deal with varying atmospheric conditions and high background illumination from the moon. The software is configured to identify a PSN and reshoot for verification, with options to change the run plan to acquire photometric or spectrographic data. The telescopes are 24-inch CDK24s with Alta U230 cameras, one in CA and one in NM. Images and run plans are sent between sites so that the CA telescope can search while photometry is done in NM. Our goal is to find bright PSNs of magnitude 17.5 or brighter, which is the limit of our planned spectroscopy. We present results from our first automated PSN discoveries and plans for PSN data acquisition.

  3. Automated Discovery of Speech Act Categories in Educational Games

    Science.gov (United States)

    Rus, Vasile; Moldovan, Cristian; Niraula, Nobal; Graesser, Arthur C.

    2012-01-01

    In this paper we address the important task of automated discovery of speech act categories in dialogue-based, multi-party educational games. Speech acts are important in dialogue-based educational systems because they help infer the student speaker's intentions (the task of speech act classification), which in turn is crucial to providing adequate…

  4. Automated cell type discovery and classification through knowledge transfer

    Science.gov (United States)

    Lee, Hao-Chih; Kosoy, Roman; Becker, Christine E.

    2017-01-01

    Abstract Motivation: Recent advances in mass cytometry allow simultaneous measurements of up to 50 markers at single-cell resolution. However, the high dimensionality of mass cytometry data introduces computational challenges for automated data analysis and hinders translation of new biological understanding into clinical applications. Previous studies have applied machine learning to facilitate processing of mass cytometry data. However, manual inspection is still inevitable and has become the barrier to reliable large-scale analysis. Results: We present a new algorithm called Automated Cell-type Discovery and Classification (ACDC) that fully automates the classification of canonical cell populations and highlights novel cell types in mass cytometry data. Evaluations on real-world data show ACDC provides accurate and reliable estimations compared to manual gating results. Additionally, ACDC automatically classifies previously ambiguous cell types to facilitate discovery. Our findings suggest that ACDC substantially improves both reliability and interpretability of results obtained from high-dimensional mass cytometry profiling data. Availability and Implementation: A Python package (Python 3) and analysis scripts for reproducing the results are available at https://bitbucket.org/dudleylab/acdc. Contact: brian.kidd@mssm.edu or joel.dudley@mssm.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28158442
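
    ACDC matches cells against a table of canonical marker signatures before flagging poor matches as candidate novel types. The toy sketch below shows only that matching idea under simplifying assumptions; the marker panel, signatures, and score threshold are hypothetical, and this is not the actual semi-supervised ACDC algorithm.

        import numpy as np

        # Hypothetical signature table: +1 expressed, -1 absent, 0 irrelevant.
        # Marker order is CD3, CD19, CD14 (an assumed three-marker panel).
        signatures = {
            "T cell":   np.array([+1, -1, -1]),
            "B cell":   np.array([-1, +1, -1]),
            "Monocyte": np.array([-1, -1, +1]),
        }

        def classify(cell, threshold=0.8):
            # cell: binarized marker vector in {0, 1}
            signed = 2 * cell - 1                 # map {0, 1} -> {-1, +1}
            best_name, best_score = None, -1.0
            for name, sig in signatures.items():
                relevant = sig != 0               # ignore "don't care" markers
                score = np.mean(signed[relevant] == sig[relevant])
                if score > best_score:
                    best_name, best_score = name, score
            return best_name if best_score >= threshold else "candidate novel type"

        print(classify(np.array([1, 0, 0])))      # -> T cell
        print(classify(np.array([1, 1, 1])))      # -> candidate novel type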

  5. Automated vocabulary discovery for geo-parsing online epidemic intelligence.

    Science.gov (United States)

    Keller, Mikaela; Freifeld, Clark C; Brownstein, John S

    2009-11-24

    Automated surveillance of the Internet provides a timely and sensitive method for alerting on global emerging infectious disease threats. HealthMap is part of a new generation of online systems designed to monitor and visualize, on a real-time basis, disease outbreak alerts as reported by online news media and public health sources. HealthMap is of specific interest for national and international public health organizations and international travelers. A particular task that makes such surveillance useful is the automated discovery of the geographic references contained in the retrieved outbreak alerts. This task is sometimes referred to as "geo-parsing". A typical approach to geo-parsing would demand an expensive training corpus of alerts manually tagged by a human. Given that human readers perform this kind of task by using both their lexical and contextual knowledge, we developed an approach that relies on a relatively small expert-built gazetteer, thus limiting the need for human input, while focusing on learning the context in which geographic references appear. We show, in a set of experiments, that this approach exhibits a substantial capacity to discover geographic locations outside of its initial lexicon. The results of this analysis provide a framework for future automated global surveillance efforts that reduce manual input and improve timeliness of reporting.
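
    A rough sketch of the gazetteer-plus-context idea, assuming a tiny hypothetical seed lexicon and a fixed two-word context window (the authors' actual model is more sophisticated): learn which words surround known location mentions, then score unseen tokens by their context.

        import re
        from collections import Counter

        gazetteer = {"Nairobi", "Jakarta", "Lima"}        # hypothetical seed lexicon
        context_counts, background_counts = Counter(), Counter()

        def tokens(text):
            return re.findall(r"[A-Za-z]+", text)

        def train(alerts):
            # Count words appearing near gazetteer hits vs. near everything else
            for alert in alerts:
                toks = tokens(alert)
                for i, tok in enumerate(toks):
                    window = toks[max(0, i - 2):i] + toks[i + 1:i + 3]
                    target = context_counts if tok in gazetteer else background_counts
                    target.update(w.lower() for w in window)

        def location_score(sentence, candidate):
            # Fraction of context evidence suggesting the candidate is a place name
            toks = tokens(sentence)
            if candidate not in toks:
                return 0.0
            i = toks.index(candidate)
            window = [w.lower() for w in toks[max(0, i - 2):i] + toks[i + 1:i + 3]]
            hits = sum(context_counts[w] for w in window)
            total = hits + sum(background_counts[w] for w in window)
            return hits / total if total else 0.0

        train(["Cholera outbreak reported in Nairobi this week",
               "Officials in Jakarta confirmed new dengue cases"])
        print(location_score("Measles outbreak reported in Goma this week", "Goma"))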

  6. Automated vocabulary discovery for geo-parsing online epidemic intelligence

    Directory of Open Access Journals (Sweden)

    Freifeld Clark C

    2009-11-01

    Full Text Available Abstract Background Automated surveillance of the Internet provides a timely and sensitive method for alerting on global emerging infectious disease threats. HealthMap is part of a new generation of online systems designed to monitor and visualize, on a real-time basis, disease outbreak alerts as reported by online news media and public health sources. HealthMap is of specific interest for national and international public health organizations and international travelers. A particular task that makes such surveillance useful is the automated discovery of the geographic references contained in the retrieved outbreak alerts. This task is sometimes referred to as "geo-parsing". A typical approach to geo-parsing would demand an expensive training corpus of alerts manually tagged by a human. Results Given that human readers perform this kind of task by using both their lexical and contextual knowledge, we developed an approach that relies on a relatively small expert-built gazetteer, thus limiting the need for human input, while focusing on learning the context in which geographic references appear. We show, in a set of experiments, that this approach exhibits a substantial capacity to discover geographic locations outside of its initial lexicon. Conclusion The results of this analysis provide a framework for future automated global surveillance efforts that reduce manual input and improve timeliness of reporting.

  7. GWATCH: a web platform for automated gene association discovery analysis

    Science.gov (United States)

    2014-01-01

    Background As genome-wide sequence analyses for complex human disease determinants are expanding, it is increasingly necessary to develop strategies to promote discovery and validation of potential disease-gene associations. Findings Here we present a dynamic web-based platform – GWATCH – that automates and facilitates four steps in genetic epidemiological discovery: 1) Rapid gene association search and discovery analysis of large genome-wide datasets; 2) Expanded visual display of gene associations for genome-wide variants (SNPs, indels, CNVs), including Manhattan plots, 2D and 3D snapshots of any gene region, and a dynamic genome browser illustrating gene association chromosomal regions; 3) Real-time validation/replication of candidate or putative genes suggested from other sources, limiting Bonferroni genome-wide association study (GWAS) penalties; 4) Open data release and sharing by eliminating privacy constraints (The National Human Genome Research Institute (NHGRI) Institutional Review Board (IRB), informed consent, The Health Insurance Portability and Accountability Act (HIPAA) of 1996 etc.) on unabridged results, which allows for open access comparative and meta-analysis. Conclusions GWATCH is suitable for both GWAS and whole genome sequence association datasets. We illustrate the utility of GWATCH with three large genome-wide association studies for HIV-AIDS resistance genes screened in large multicenter cohorts; however, association datasets from any study can be uploaded and analyzed by GWATCH. PMID:25374661

  8. Automated cell type discovery and classification through knowledge transfer.

    Science.gov (United States)

    Lee, Hao-Chih; Kosoy, Roman; Becker, Christine E; Dudley, Joel T; Kidd, Brian A

    2017-06-01

    Recent advances in mass cytometry allow simultaneous measurements of up to 50 markers at single-cell resolution. However, the high dimensionality of mass cytometry data introduces computational challenges for automated data analysis and hinders translation of new biological understanding into clinical applications. Previous studies have applied machine learning to facilitate processing of mass cytometry data. However, manual inspection is still inevitable and has become the barrier to reliable large-scale analysis. We present a new algorithm called Automated Cell-type Discovery and Classification (ACDC) that fully automates the classification of canonical cell populations and highlights novel cell types in mass cytometry data. Evaluations on real-world data show ACDC provides accurate and reliable estimations compared to manual gating results. Additionally, ACDC automatically classifies previously ambiguous cell types to facilitate discovery. Our findings suggest that ACDC substantially improves both reliability and interpretability of results obtained from high-dimensional mass cytometry profiling data. A Python package (Python 3) and analysis scripts for reproducing the results are available at https://bitbucket.org/dudleylab/acdc . brian.kidd@mssm.edu or joel.dudley@mssm.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.

  9. Measuring Spreadsheet Formula Understandability

    NARCIS (Netherlands)

    Hermans, F.F.J.; Pinzger, M.; Van Deursen, A.

    2012-01-01

    Spreadsheets are widely used in industry because they are flexible and easy to use. Often they are used for business-critical applications. It is, however, difficult for spreadsheet users to correctly assess the quality of spreadsheets, especially with respect to their understandability.

  10. Optimization modeling with spreadsheets

    CERN Document Server

    Baker, Kenneth R

    2015-01-01

    An accessible introduction to optimization analysis using spreadsheets. Updated and revised, Optimization Modeling with Spreadsheets, Third Edition emphasizes model-building skills in optimization analysis. By emphasizing both spreadsheet modeling and optimization tools in the freely available Microsoft® Office Excel® Solver, the book illustrates how to find solutions to real-world optimization problems without needing additional specialized software. The Third Edition includes many practical applications of optimization models as well as a systematic framework that…

  11. Analyzing and Visualizing Spreadsheets

    NARCIS (Netherlands)

    Hermans, F.F.J.

    2013-01-01

    Spreadsheets are used extensively in industry: they are the number one tool for financial analysis and are also prevalent in other domains, such as logistics and planning. Their flexibility and immediate feedback make them easy to use for non-programmers. But as easy as spreadsheets are to build, so…

  12. Managing Cataloging Statistics with a Spreadsheet.

    Science.gov (United States)

    Shelton, Judith M.

    1995-01-01

    Presents aspects of Pullen Library's move from manual to automated management of cataloging statistics and offers advice to libraries in similar situations. The difficulties included staff resistance, finding the right software package, and spreadsheet training; the advantage was that the Quattro Pro program reduces complicated spreadsheets…

  13. AMD, an Automated Motif Discovery Tool Using Stepwise Refinement of Gapped Consensuses

    OpenAIRE

    Shi, Jiantao; Yang, Wentao; Chen, Mingjie; Du, Yanzhi; Zhang, Ji; Wang, Kankan

    2011-01-01

    Motif discovery is essential for deciphering regulatory codes from high throughput genomic data, such as those from ChIP-chip/seq experiments. However, there remains a lack of effective and efficient methods for the identification of long and gapped motifs in many relevant tools reported to date. We describe here an automated tool that allows for de novo discovery of transcription factor binding sites, regardless of whether the motifs are long or short, gapped or contiguous.

  14. CLAB Transuranic Waste Spreadsheets

    Energy Technology Data Exchange (ETDEWEB)

    Leyba, J.D.

    2000-08-11

    The Building 772-F Far-Field Transuranic (TRU) Waste Counting System is used to measure the radionuclide content of waste packages produced at the Central Laboratory Facilities (CLAB). Data from the instrument are entered into one of two Excel spreadsheets. The waste stream associated with the waste package determines which spreadsheet is actually used. The spreadsheets calculate the necessary information required for completion of the Transuranic Waste Characterization Form (OSR 29-90) and the Radioactive Solid Waste Burial Ground Record (OSR 7-375 or OSR 7-375A). In addition, the spreadsheets calculate the associated Low Level Waste (LLW) stream information that potentially could be useful if the waste container is ever downgraded from TRU to LLW. The spreadsheets also have the capability to sum activities from source material added to a waste container after assay. A validation data set for each spreadsheet along with the appropriate results are also presented in this report for spreadsheet verification prior to each use.

  15. Novel automated biomarker discovery work flow for urinary peptidomics

    DEFF Research Database (Denmark)

    Balog, Crina I.; Hensbergen, Paul J.; Derks, Rico

    2009-01-01

    Urine is potentially a rich source of peptide biomarkers, but reproducible, high-throughput peptidomic analysis is often hampered by the inherent variability in factors such as pH and salt concentration. Our goal was to develop a generally applicable, rapid, and robust method for screening large… eluted peptides using MALDI-TOF, Fourier transform ion cyclotron resonance, and liquid chromatography-iontrap mass spectrometry. We determined qualitative and quantitative reproducibility of the system and robustness of the method using BSA digests and urine samples, and we used a selected set of urine samples from Schistosoma haematobium-infected individuals to evaluate clinical applicability. RESULTS: The automated RP-SCX sample cleanup and fractionation system exhibits a high qualitative and quantitative reproducibility, with both BSA standards and urine samples. Because of the relatively high…

  16. Semi-automated knowledge discovery: identifying and profiling human trafficking

    Science.gov (United States)

    Poelmans, Jonas; Elzinga, Paul; Ignatov, Dmitry I.; Kuznetsov, Sergei O.

    2012-11-01

    We propose an iterative and human-centred knowledge discovery methodology based on formal concept analysis. The proposed approach recognizes the important role of the domain expert in mining real-world enterprise applications and makes use of specific domain knowledge, including human intelligence and domain-specific constraints. Our approach was empirically validated at the Amsterdam-Amstelland police to identify suspects and victims of human trafficking in 266,157 suspicious activity reports. Based on guidelines of the Attorneys General of the Netherlands, we first defined multiple early-warning indicators that were used to index the police reports. Using concept lattices, we revealed numerous unknown human trafficking and loverboy suspects. In-depth investigation by the police confirmed their involvement in illegal activities and resulted in actual arrests. Our human-centred approach was embedded into operational policing practice and is now successfully used on a daily basis to cope with the vastly growing amount of unstructured information.

  17. Automated discovery of functional generality of human gene expression programs.

    Directory of Open Access Journals (Sweden)

    Georg K Gerber

    2007-08-01

    Full Text Available An important research problem in computational biology is the identification of expression programs, sets of co-expressed genes orchestrating normal or pathological processes, and the characterization of the functional breadth of these programs. The use of human expression data compendia for discovery of such programs presents several challenges including cellular inhomogeneity within samples, genetic and environmental variation across samples, uncertainty in the numbers of programs and sample populations, and temporal behavior. We developed GeneProgram, a new unsupervised computational framework based on Hierarchical Dirichlet Processes that addresses each of the above challenges. GeneProgram uses expression data to simultaneously organize tissues into groups and genes into overlapping programs with consistent temporal behavior, to produce maps of expression programs, which are sorted by generality scores that exploit the automatically learned groupings. Using synthetic and real gene expression data, we showed that GeneProgram outperformed several popular expression analysis methods. We applied GeneProgram to a compendium of 62 short time-series gene expression datasets exploring the responses of human cells to infectious agents and immune-modulating molecules. GeneProgram produced a map of 104 expression programs, a substantial number of which were significantly enriched for genes involved in key signaling pathways and/or bound by NF-kappaB transcription factors in genome-wide experiments. Further, GeneProgram discovered expression programs that appear to implicate surprising signaling pathways or receptor types in the response to infection, including Wnt signaling and neurotransmitter receptors. We believe the discovered map of expression programs involved in the response to infection will be useful for guiding future biological experiments; genes from programs with low generality scores might serve as new drug targets that exhibit minimal…

  18. High Throughput Light Absorber Discovery, Part 1: An Algorithm for Automated Tauc Analysis.

    Science.gov (United States)

    Suram, Santosh K; Newhouse, Paul F; Gregoire, John M

    2016-11-14

    High-throughput experimentation provides efficient mapping of composition-property relationships, and its implementation for the discovery of optical materials enables advancements in solar energy and other technologies. In a high-throughput pipeline, automated data-processing algorithms are often required to match experimental throughput, and we present an automated Tauc analysis algorithm for estimating band gap energies from optical spectroscopy data. The algorithm mimics the judgment of an expert scientist, which is demonstrated through its application to a variety of high-throughput spectroscopy data, including the identification of indirect or direct band gaps in Fe2O3, Cu2V2O7, and BiVO4. The applicability of the algorithm to estimate a range of band gap energies for various materials is demonstrated by a comparison of direct-allowed band gaps estimated by expert scientists and by the automated algorithm for 60 optical spectra.
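
    A minimal sketch of the Tauc-plot step for a direct-allowed transition (not the published algorithm, which mimics expert judgment far more carefully): form (αhν)² versus photon energy, fit the steepest quasi-linear segment, and read the band gap off the energy-axis intercept. The synthetic spectrum and window size are assumptions.

        import numpy as np

        def tauc_direct_gap(energy_eV, absorbance, window=10):
            # Direct-allowed Tauc quantity: (alpha * h * nu)^2 vs. photon energy
            y = (absorbance * energy_eV) ** 2
            best = None
            for i in range(len(energy_eV) - window):
                e, t = energy_eV[i:i + window], y[i:i + window]
                slope, intercept = np.polyfit(e, t, 1)
                if best is None or slope > best[0]:
                    best = (slope, intercept)        # keep the steepest segment
            slope, intercept = best
            return -intercept / slope                # x-intercept = gap estimate

        # Synthetic spectrum with a ~2.1 eV direct gap plus a small flat baseline
        E = np.linspace(1.5, 3.0, 200)
        alpha = np.sqrt(np.clip(E - 2.1, 0, None)) / E + 0.01
        print(f"estimated gap: {tauc_direct_gap(E, alpha):.2f} eV")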

  19. The deductive spreadsheet

    CERN Document Server

    Cervesato, Iliano

    2013-01-01

    This book describes recent multidisciplinary research at the confluence of the fields of logic programming, database theory and human-computer interaction. The goal of this effort was to develop the basis of a deductive spreadsheet, a user productivity application that allows users without formal training in computer science to make decisions about generic data in the same simple way they currently use spreadsheets to make decisions about numerical data. The result is an elegant design supported by the most recent developments in the above disciplines. The first half of the book focuses on the…

  20. Implementing function spreadsheets

    DEFF Research Database (Denmark)

    Sestoft, Peter

    2008-01-01

    …that of turning an expression into a named function. Hence they proposed a way to define a function in terms of a worksheet with designated input and output cells; we shall call it a function sheet. The goal of our work is to develop implementations of function sheets and study their application to realistic examples. Therefore, we are also developing a simple yet comprehensive spreadsheet core implementation for experimentation with this technology. Here we report briefly on our experiments with function sheets as well as other uses of our spreadsheet core implementation.

  1. Spreadsheet analysis of gamma spectra for nuclear material measurements

    Energy Technology Data Exchange (ETDEWEB)

    Mosby, W.R.; Pace, D.M.

    1990-01-01

    A widely available commercial spreadsheet package for personal computers is used to calculate gamma spectra peak areas using both region-of-interest and peak-fitting methods. The gamma peak areas obtained are used for uranium enrichment assays and for isotopic analyses of mixtures of transuranics. The use of spreadsheet software with an internal processing language allows automation of routine analysis procedures, increasing ease of use and reducing processing errors, while providing great flexibility in addressing unusual measurement problems. 4 refs., 9 figs.
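
    A minimal sketch of the region-of-interest calculation the record automates in a spreadsheet, shown here in Python for compactness: sum counts over the peak ROI and subtract a linear baseline estimated from flanking channels. The spectrum and channel limits are synthetic.

        import numpy as np

        def roi_net_area(counts, lo, hi, bg_width=3):
            # Average continuum level on each side of the region of interest
            left = counts[lo - bg_width:lo].mean()
            right = counts[hi + 1:hi + 1 + bg_width].mean()
            n_channels = hi - lo + 1
            gross = counts[lo:hi + 1].sum()
            background = 0.5 * (left + right) * n_channels   # trapezoidal baseline
            return gross - background

        rng = np.random.default_rng(0)
        spectrum = rng.poisson(50, size=200).astype(float)   # flat continuum
        spectrum[95:106] += 400 * np.exp(-0.5 * ((np.arange(95, 106) - 100) / 2.0) ** 2)
        print(f"net peak area: {roi_net_area(spectrum, 92, 108):.0f} counts")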

  2. Early identification of hERG liability in drug discovery programs by automated patch clamp.

    Science.gov (United States)

    Danker, Timm; Möller, Clemens

    2014-01-01

    Blockade of the cardiac ion channel coded by the human ether-à-go-go-related gene (hERG) can lead to cardiac arrhythmia, which has become a major concern in drug discovery and development. Automated electrophysiological patch clamp allows assessment of hERG channel effects early in drug development to aid medicinal chemistry programs and has become routine in pharmaceutical companies. However, a number of potential sources of errors in setting up hERG channel assays by automated patch clamp can lead to misinterpretation of data or false effects being reported. This article describes protocols for automated electrophysiology screening of compound effects on the hERG channel current. Protocol details and the translation of criteria known from manual patch clamp experiments to automated patch clamp experiments to achieve good quality data are emphasized. Typical pitfalls and artifacts that may lead to misinterpretation of data are discussed. While this article focuses on hERG channel recordings using the QPatch (Sophion A/S, Copenhagen, Denmark) technology, many of the assay and protocol details given in this article can be transferred for setting up different ion channel assays by automated patch clamp and are similar on other planar patch clamp platforms.

  3. AutoDrug: fully automated macromolecular crystallography workflows for fragment-based drug discovery

    International Nuclear Information System (INIS)

    Tsai, Yingssu; McPhillips, Scott E.; González, Ana; McPhillips, Timothy M.; Zinn, Daniel; Cohen, Aina E.; Feese, Michael D.; Bushnell, David; Tiefenbrunn, Theresa; Stout, C. David; Ludaescher, Bertram; Hedman, Britt; Hodgson, Keith O.; Soltis, S. Michael

    2013-01-01

    New software has been developed for automating the experimental and data-processing stages of fragment-based drug discovery at a macromolecular crystallography beamline. A new workflow-automation framework orchestrates beamline-control and data-analysis software while organizing results from multiple samples. AutoDrug is software based upon the scientific workflow paradigm that integrates the Stanford Synchrotron Radiation Lightsource macromolecular crystallography beamlines and third-party processing software to automate the crystallography steps of the fragment-based drug-discovery process. AutoDrug screens a cassette of fragment-soaked crystals, selects crystals for data collection based on screening results and user-specified criteria and determines optimal data-collection strategies. It then collects and processes diffraction data, performs molecular replacement using provided models and detects electron density that is likely to arise from bound fragments. All processes are fully automated, i.e. are performed without user interaction or supervision. Samples can be screened in groups corresponding to particular proteins, crystal forms and/or soaking conditions. A single AutoDrug run is only limited by the capacity of the sample-storage dewar at the beamline: currently 288 samples. AutoDrug was developed in conjunction with RestFlow, a new scientific workflow-automation framework. RestFlow simplifies the design of AutoDrug by managing the flow of data and the organization of results and by orchestrating the execution of computational pipeline steps. It also simplifies the execution and interaction of third-party programs and the beamline-control system. Modeling AutoDrug as a scientific workflow enables multiple variants that meet the requirements of different user groups to be developed and supported. A workflow tailored to mimic the crystallography stages comprising the drug-discovery pipeline of CoCrystal Discovery Inc. has been deployed and successfully demonstrated.

  4. Automated search-model discovery and preparation for structure solution by molecular replacement.

    Science.gov (United States)

    Keegan, Ronan M; Winn, Martyn D

    2007-04-01

    A novel automation pipeline for macromolecular structure solution by molecular replacement is described. There is a special emphasis on the discovery and preparation of a large number of search models, all of which can be passed to the core molecular-replacement programs. For routine molecular-replacement problems, the pipeline automates what a crystallographer might do and its value is simply one of convenience. For more difficult cases, the pipeline aims to discover the particular template structure and model edits required to produce a viable search model and may succeed in finding an efficacious combination that would be missed otherwise. The pipeline is described in detail and a number of examples are given. The examples are chosen to illustrate successes in real crystallography problems and also particular features of the pipeline. It is concluded that exploring a range of search models automatically can be valuable in many cases.

  5. AutoDrug: fully automated macromolecular crystallography workflows for fragment-based drug discovery.

    Science.gov (United States)

    Tsai, Yingssu; McPhillips, Scott E; González, Ana; McPhillips, Timothy M; Zinn, Daniel; Cohen, Aina E; Feese, Michael D; Bushnell, David; Tiefenbrunn, Theresa; Stout, C David; Ludaescher, Bertram; Hedman, Britt; Hodgson, Keith O; Soltis, S Michael

    2013-05-01

    AutoDrug is software based upon the scientific workflow paradigm that integrates the Stanford Synchrotron Radiation Lightsource macromolecular crystallography beamlines and third-party processing software to automate the crystallography steps of the fragment-based drug-discovery process. AutoDrug screens a cassette of fragment-soaked crystals, selects crystals for data collection based on screening results and user-specified criteria and determines optimal data-collection strategies. It then collects and processes diffraction data, performs molecular replacement using provided models and detects electron density that is likely to arise from bound fragments. All processes are fully automated, i.e. are performed without user interaction or supervision. Samples can be screened in groups corresponding to particular proteins, crystal forms and/or soaking conditions. A single AutoDrug run is only limited by the capacity of the sample-storage dewar at the beamline: currently 288 samples. AutoDrug was developed in conjunction with RestFlow, a new scientific workflow-automation framework. RestFlow simplifies the design of AutoDrug by managing the flow of data and the organization of results and by orchestrating the execution of computational pipeline steps. It also simplifies the execution and interaction of third-party programs and the beamline-control system. Modeling AutoDrug as a scientific workflow enables multiple variants that meet the requirements of different user groups to be developed and supported. A workflow tailored to mimic the crystallography stages comprising the drug-discovery pipeline of CoCrystal Discovery Inc. has been deployed and successfully demonstrated. This workflow was run once on the same 96 samples that the group had examined manually and the workflow cycled successfully through all of the samples, collected data from the same samples that were selected manually and located the same peaks of unmodeled density in the resulting difference Fourier maps.

  6. Automated Discovery of Elementary Chemical Reaction Steps Using Freezing String and Berny Optimization Methods.

    Science.gov (United States)

    Suleimanov, Yury V; Green, William H

    2015-09-08

    We present a simple protocol that allows fully automated discovery of elementary chemical reaction steps using double- and single-ended transition-state optimization algorithms in cooperation--the freezing string and Berny optimization methods, respectively. To demonstrate the utility of the proposed approach, the reactivity of several single-molecule systems of importance in combustion and atmospheric chemistry is investigated. The proposed algorithm allowed us to detect, without any human intervention, not only "known" reaction pathways, manually detected in previous studies, but also new, previously "unknown" reaction pathways that involve significant atom rearrangements. We believe that applying such a systematic approach to elementary reaction path finding will greatly accelerate the discovery of new chemistry and will lead to more accurate computer simulations of various chemical processes.

  7. Predicting Causal Relationships from Biological Data: Applying Automated Causal Discovery on Mass Cytometry Data of Human Immune Cells

    KAUST Repository

    Triantafillou, Sofia

    2017-03-31

    Learning the causal relationships that define a molecular system allows us to predict how the system will respond to different interventions. Distinguishing causality from mere association typically requires randomized experiments. Methods for automated causal discovery from limited experiments exist, but have so far rarely been tested in systems biology applications. In this work, we apply state-of-the-art causal discovery methods on a large collection of public mass cytometry data sets, measuring intra-cellular signaling proteins of the human immune system and their response to several perturbations. We show how different experimental conditions can be used to facilitate causal discovery, and apply two fundamental methods that produce context-specific causal predictions. Causal predictions were reproducible across independent data sets from two different studies, but often disagree with the KEGG pathway databases. Within this context, we discuss the caveats we need to overcome for automated causal discovery to become a part of the routine data analysis in systems biology.

  8. Facilitating drug discovery: an automated high-content inflammation assay in zebrafish.

    Science.gov (United States)

    Wittmann, Christine; Reischl, Markus; Shah, Asmi H; Mikut, Ralf; Liebel, Urban; Grabher, Clemens

    2012-07-16

    Zebrafish larvae are particularly amenable to whole-animal small-molecule screens due to their small size and relative ease of manipulation and observation, as well as the fact that compounds can simply be added to the bathing water and are readily absorbed when administered in a … Introduction of the chemically induced inflammation (ChIn) assay eliminated these obstacles. Since wounding is inflicted chemically, the number of embryos that can be treated simultaneously is virtually unlimited. Temporary treatment of zebrafish larvae with copper sulfate selectively induces cell death in hair cells of the lateral line system and results in rapid granulocyte recruitment to injured neuromasts. The inflammatory response can be followed in real time by using compound transgenic cldnB::GFP/lysC::DsRED2 zebrafish larvae that express a green fluorescent protein in neuromast cells, as well as a red fluorescent protein labeling granulocytes. In order to devise a screening strategy that would allow both high-content and high-throughput analyses, we introduced robotic liquid handling and combined automated microscopy with a custom-developed software script. This script enables automated quantification of the inflammatory response by scoring the percent area occupied by red fluorescent leukocytes within an empirically defined area surrounding injured green fluorescent neuromasts. Furthermore, we automated data processing, handling, visualization, and storage, all based on custom-developed MATLAB and Python scripts. In brief, we introduce an automated HC/HT screen that allows testing of chemical compounds for their effect on initiation, progression or resolution of a granulocytic inflammatory response. This protocol serves as a good starting point for more in-depth analyses of drug mechanisms and pathways involved in the orchestration of an innate immune response. In the future, it may help identify intolerable toxic or off-target effects at earlier phases of drug discovery and thereby…

  9. Automated Antibody De Novo Sequencing and Its Utility in Biopharmaceutical Discovery

    Science.gov (United States)

    Sen, K. Ilker; Tang, Wilfred H.; Nayak, Shruti; Kil, Yong J.; Bern, Marshall; Ozoglu, Berk; Ueberheide, Beatrix; Davis, Darryl; Becker, Christopher

    2017-05-01

    Applications of antibody de novo sequencing in the biopharmaceutical industry range from the discovery of new antibody drug candidates to identifying reagents for research and determining the primary structure of innovator products for biosimilar development. When murine, phage display, or patient-derived monoclonal antibodies against a target of interest are available, but the cDNA or the original cell line is not, de novo protein sequencing is required to humanize and recombinantly express these antibodies, followed by in vitro and in vivo testing for functional validation. Availability of fully automated software tools for monoclonal antibody de novo sequencing enables efficient and routine analysis. Here, we present a novel method to automatically de novo sequence antibodies using mass spectrometry and the Supernovo software. The robustness of the algorithm is demonstrated through a series of stress tests.

  10. Automated discovery of safety and efficacy concerns for joint & muscle pain relief treatments from online reviews.

    Science.gov (United States)

    Adams, David Z; Gruss, Richard; Abrahams, Alan S

    2017-04-01

    Product issues can cost companies millions in lawsuits and have devastating effects on a firm's sales, image and goodwill, especially in the era of social media. The ability for a system to detect the presence of safety and efficacy (S&E) concerns early on could not only protect consumers from injuries due to safety hazards, but could also mitigate financial damage to the manufacturer. Prior studies in the field of automated defect discovery have found industry-specific techniques appropriate to the automotive, consumer electronics, home appliance, and toy industries, but have not investigated pain relief medicines and medical devices. In this study, we focus specifically on automated discovery of S&E concerns in over-the-counter (OTC) joint and muscle pain relief remedies and devices. We select a dataset of over 32,000 records for three categories of Joint & Muscle Pain Relief treatments from Amazon's online product reviews, and train "smoke word" dictionaries which we use to score holdout reviews, for the presence of safety and efficacy issues. We also score using conventional sentiment analysis techniques. Compared to traditional sentiment analysis techniques, we found that smoke term dictionaries were better suited to detect product concerns from online consumer reviews, and significantly outperformed the sentiment analysis techniques in uncovering both efficacy and safety concerns, across all product subcategories. Our research can be applied to the healthcare and pharmaceutical industry in order to detect safety and efficacy concerns, reducing risks that consumers face using these products. These findings can be highly beneficial to improving quality assurance and management in joint and muscle pain relief. Copyright © 2017 Elsevier B.V. All rights reserved.
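
    A minimal sketch of the smoke-term idea under simplifying assumptions (the paper's dictionary induction and scoring are more involved): rank words by their smoothed frequency ratio in concern-flagged versus ordinary reviews, then score held-out reviews by smoke-term density. The reviews and labels below are invented.

        from collections import Counter

        def train_smoke_terms(flagged, normal, top_k=100):
            # Smoothed ratio of word frequency in flagged vs. normal reviews
            f, n = Counter(), Counter()
            for r in flagged:
                f.update(r.lower().split())
            for r in normal:
                n.update(r.lower().split())
            ratio = {w: (c + 1) / (n[w] + 1) for w, c in f.items()}
            return set(sorted(ratio, key=ratio.get, reverse=True)[:top_k])

        def smoke_score(review, smoke_terms):
            # Fraction of the review's words that are smoke terms
            words = review.lower().split()
            return sum(w in smoke_terms for w in words) / max(len(words), 1)

        smoke = train_smoke_terms(
            ["the cream caused a burning rash", "swelling and blisters after use"],
            ["works great for my knee pain", "fast relief and easy to apply"])
        print(smoke_score("mild rash and burning after two days", smoke))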

  11. Spreadsheet interface for transfer of drug-use data from a mainframe to a personal computer.

    Science.gov (United States)

    Smith, S L; Lubiejewski, T; Farnum, J

    1990-11-01

    The development of spreadsheets and automated instructions for transfer of drug-use data from a hospital's mainframe computer to the pharmacy department computer is described. Drug-use data are down-loaded from the mainframe as an ASCII file on a floppy disk. Instructions within the spreadsheet ("macros") are used to import the data into the first spreadsheet, arrange the data to be compatible with the database portion of the spreadsheet, and transfer the data to another spreadsheet where drug use and cost are calculated and placed in the column for the appropriate month. A series of formulas are used to arrange the imported data to account for items that have not been used. By eliminating the need for manual data input into the spreadsheets, the macros and associated formulas saved pharmacists substantial time.

  12. Early detection of pharmacovigilance signals with automated methods based on false discovery rates: a comparative study.

    Science.gov (United States)

    Ahmed, Ismaïl; Thiessard, Frantz; Miremont-Salamé, Ghada; Haramburu, Françoise; Kreft-Jais, Carmen; Bégaud, Bernard; Tubert-Bitter, Pascale

    2012-06-01

    Improving the detection of drug safety signals has led several pharmacovigilance regulatory agencies to incorporate automated quantitative methods into their spontaneous reporting management systems. The three largest worldwide pharmacovigilance databases are routinely screened by the lower bound of the 95% confidence interval of proportional reporting ratio (PRR₀₂.₅), the 2.5% quantile of the Information Component (IC₀₂.₅) or the 5% quantile of the Gamma Poisson Shrinker (GPS₀₅). More recently, Bayesian and non-Bayesian False Discovery Rate (FDR)-based methods were proposed that address the arbitrariness of thresholds and allow for a built-in estimate of the FDR. These methods were also shown through simulation studies to be interesting alternatives to the currently used methods. The objective of this work was twofold. Based on an extensive retrospective study, we compared PRR₀₂.₅, GPS₀₅ and IC₀₂.₅ with two FDR-based methods derived from the Fisher's exact test and the GPS model (GPS(pH0) [posterior probability of the null hypothesis H₀ calculated from the Gamma Poisson Shrinker model]). Secondly, restricting the analysis to GPS(pH0), we aimed to evaluate the added value of using automated signal detection tools compared with 'traditional' methods, i.e. non-automated surveillance operated by pharmacovigilance experts. The analysis was performed sequentially, i.e. every month, and retrospectively on the whole French pharmacovigilance database over the period 1 January 1996-1 July 2002. Evaluation was based on a list of 243 reference signals (RSs) corresponding to investigations launched by the French Pharmacovigilance Technical Committee (PhVTC) during the same period. The comparison of detection methods was made on the basis of the number of RSs detected as well as the time to detection. Results comparing the five automated quantitative methods were in favour of GPS(pH0) in terms of both number of detections of true signals and…
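
    A minimal sketch of a non-Bayesian FDR-controlled detection step in the spirit of the Fisher's-exact-test variant compared in the study (not any agency's production pipeline): compute one-sided Fisher p-values from 2×2 disproportionality tables and apply Benjamini-Hochberg. All drug-event pairs and counts below are hypothetical.

        from scipy.stats import fisher_exact

        # Each 2x2 table: (a) reports with drug & event, (b) drug without event,
        # (c) event without drug, (d) neither. Hypothetical counts.
        tables = {
            "drugA-rash":   (40, 960, 200, 98800),
            "drugB-nausea": (5, 495, 1000, 98500),
            "drugC-fever":  (12, 188, 300, 99500),
        }

        pvals = {pair: fisher_exact([[a, b], [c, d]], alternative="greater")[1]
                 for pair, (a, b, c, d) in tables.items()}

        def benjamini_hochberg(pvals, fdr=0.05):
            # Flag the largest set of pairs whose ordered p-values stay under the line
            items = sorted(pvals.items(), key=lambda kv: kv[1])
            m = len(items)
            max_k = 0
            for k, (_, p) in enumerate(items, start=1):
                if p <= fdr * k / m:
                    max_k = k
            return [pair for pair, _ in items[:max_k]]

        print(benjamini_hochberg(pvals))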

  13. Auditing spreadsheets : with or without a tool?

    NARCIS (Netherlands)

    Felienne Hermans; Hans Duits; Michiel van der Ven; Simone Schalkwijk

    2015-01-01

    Spreadsheets are known to be error-prone. Over the last decade, research has been done to determine the causes of the high rate of errors in spreadsheets. This paper examines the added value of a spreadsheet tool (PerfectXL) that visualizes spreadsheet dependencies and determines possible errors in…

  14. The Semantic Automated Discovery and Integration (SADI) Web service Design-Pattern, API and Reference Implementation.

    Science.gov (United States)

    Wilkinson, Mark D; Vandervalk, Benjamin; McCarthy, Luke

    2011-10-24

    The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers. SADI Services are fully compliant with, and utilize only foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner very similar to data housed in…

  15. The Semantic Automated Discovery and Integration (SADI Web service Design-Pattern, API and Reference Implementation

    Directory of Open Access Journals (Sweden)

    Wilkinson Mark D

    2011-10-01

    Full Text Available Abstract Background The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. Description SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers. Conclusions SADI Services are fully compliant with, and utilize only foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner very similar to data housed in…

  16. A Romberg Integral Spreadsheet Calculator

    Directory of Open Access Journals (Sweden)

    Kim Gaik Tay

    2015-04-01

    Full Text Available Motivated by the work on a Richardson's extrapolation spreadsheet calculator up to level 4 for approximating definite differentiation, we have developed a Romberg integral spreadsheet calculator to approximate definite integrals. The main feature of this version of the spreadsheet calculator is a friendly graphical user interface developed to capture the information needed to solve the integral by the Romberg method. Users simply need to enter the variable in the integral, the function to be integrated, the lower and upper limits of the integral, select the desired accuracy of computation, select the exact function if it exists, and lastly click the Compute button, which is associated with VBA programming written to compute the Romberg integral table. The full solution of the Romberg integral table up to any level can be obtained quickly and easily using this method. The attached spreadsheet calculator together with this paper helps educators to prepare their marking schemes easily and assists students in checking their answers instead of reconstructing the answers from scratch. A summative evaluation of this Romberg spreadsheet calculator has been conducted with a sample of 36 students. The data were collected using a questionnaire. The findings showed that the majority of the students agreed that the Romberg spreadsheet calculator provides a structured learning environment that allows learners to be guided through a step-by-step solution.
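
    The table the calculator builds follows the standard Romberg recurrence, R[i][j] = R[i][j-1] + (R[i][j-1] - R[i-1][j-1]) / (4^j - 1), with column 0 holding trapezoidal estimates at halved step sizes. A compact Python sketch of that recurrence (rather than the paper's VBA) is below.

        import math

        def romberg(f, a, b, levels=5):
            R = [[0.0] * levels for _ in range(levels)]
            h = b - a
            R[0][0] = 0.5 * h * (f(a) + f(b))
            for i in range(1, levels):
                h /= 2.0
                # Trapezoid refinement: add only the newly introduced midpoints
                mids = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
                R[i][0] = 0.5 * R[i - 1][0] + h * mids
                for j in range(1, i + 1):
                    # Richardson extrapolation across the row
                    R[i][j] = R[i][j - 1] + (R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
            return R[levels - 1][levels - 1]

        print(romberg(math.sin, 0.0, math.pi))   # exact value is 2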

  17. Upscaling and automation of electrophysiology: toward high throughput screening in ion channel drug discovery

    DEFF Research Database (Denmark)

    Asmild, Margit; Oswald, Nicholas; Krzywkowski, Karen M

    2003-01-01

    …by developing two lines of automated patch clamp products: a traditional pipette-based system called Apatchi-1, and a silicon chip-based system, QPatch. The degree of automation spans from semi-automation (Apatchi-1), where a trained technician interacts with the system in a limited way, to complete automation…

  18. Automated Sample Preparation Platform for Mass Spectrometry-Based Plasma Proteomics and Biomarker Discovery

    Directory of Open Access Journals (Sweden)

    Vilém Guryča

    2014-03-01

    Full Text Available The identification of novel biomarkers from human plasma remains a critical need in order to develop and monitor drug therapies for nearly all disease areas. The discovery of novel plasma biomarkers is, however, significantly hampered by the complexity and dynamic range of proteins within plasma, as well as the inherent variability in composition from patient to patient. In addition, it is widely accepted that most soluble plasma biomarkers for diseases such as cancer will be represented by tissue leakage products, circulating in plasma at low levels. It is therefore necessary to find approaches with the prerequisite level of sensitivity in such a complex biological matrix. Strategies for fractionating the plasma proteome have been suggested, but improvements in sensitivity are often negated by the resultant process variability. Here we describe an approach using multidimensional chromatography and on-line protein derivatization, which allows for higher sensitivity, whilst minimizing the process variability. In order to evaluate this automated process fully, we demonstrate three levels of processing and compare sensitivity, throughput and reproducibility. We demonstrate that high sensitivity analysis of the human plasma proteome is possible down to the low ng/mL or even high pg/mL level with a high degree of technical reproducibility.

  19. molSimplify: A toolkit for automating discovery in inorganic chemistry.

    Science.gov (United States)

    Ioannidis, Efthymios I; Gani, Terry Z H; Kulik, Heather J

    2016-08-15

    We present an automated, open source toolkit for the first-principles screening and discovery of new inorganic molecules and intermolecular complexes. Challenges remain in the automatic generation of candidate inorganic molecule structures due to the high variability in coordination and bonding, which we overcome through a divide-and-conquer tactic that flexibly combines force-field preoptimization of organic fragments with alignment to first-principles-trained metal-ligand distances. Exploration of chemical space is enabled through random generation of ligands and intermolecular complexes from large chemical databases. We validate the generated structures with the root mean squared (RMS) gradients evaluated from density functional theory (DFT), which are around 0.02 Ha/au across a large 150 molecule test set. Comparison of molSimplify results to full optimization with the universal force field reveals that RMS DFT gradients are improved by 40%. Seamless generation of input files, preparation and execution of electronic structure calculations, and post-processing for each generated structure aids interpretation of underlying chemical and energetic trends. © 2016 Wiley Periodicals, Inc.

  20. Predicting Causal Relationships from Biological Data: Applying Automated Causal Discovery on Mass Cytometry Data of Human Immune Cells.

    Science.gov (United States)

    Triantafillou, Sofia; Lagani, Vincenzo; Heinze-Deml, Christina; Schmidt, Angelika; Tegner, Jesper; Tsamardinos, Ioannis

    2017-10-05

    Learning the causal relationships that define a molecular system allows us to predict how the system will respond to different interventions. Distinguishing causality from mere association typically requires randomized experiments. Methods for automated causal discovery from limited experiments exist, but have so far rarely been tested in systems biology applications. In this work, we apply state-of-the-art causal discovery methods on a large collection of public mass cytometry data sets, measuring intra-cellular signaling proteins of the human immune system and their response to several perturbations. We show how different experimental conditions can be used to facilitate causal discovery, and apply two fundamental methods that produce context-specific causal predictions. Causal predictions were reproducible across independent data sets from two different studies, but often disagree with the KEGG pathway databases. Within this context, we discuss the caveats we need to overcome for automated causal discovery to become a part of the routine data analysis in systems biology.

  1. Predicting Causal Relationships from Biological Data: Applying Automated Causal Discovery on Mass Cytometry Data of Human Immune Cells

    KAUST Repository

    Triantafillou, Sofia

    2017-09-29

    Learning the causal relationships that define a molecular system allows us to predict how the system will respond to different interventions. Distinguishing causality from mere association typically requires randomized experiments. Methods for automated causal discovery from limited experiments exist, but have so far rarely been tested in systems biology applications. In this work, we apply state-of-the-art causal discovery methods on a large collection of public mass cytometry data sets, measuring intra-cellular signaling proteins of the human immune system and their response to several perturbations. We show how different experimental conditions can be used to facilitate causal discovery, and apply two fundamental methods that produce context-specific causal predictions. Causal predictions were reproducible across independent data sets from two different studies, but often disagree with the KEGG pathway databases. Within this context, we discuss the caveats we need to overcome for automated causal discovery to become a part of the routine data analysis in systems biology.

  2. New Richardson's extrapolation spreadsheet calculator using VBA programming for numerical differentiations

    Science.gov (United States)

    Tay, Kim Gaik; Kek, Sie Long; Abdul-Kahar, Rosmila

    2015-05-01

    In this paper, we address the limitations of our previous two Richardson's extrapolation spreadsheet calculators for computing derivatives numerically. The new Richardson's extrapolation spreadsheet calculator is fully automated up to any level, based on stopping criteria implemented with VBA programming. The new version is more flexible because it is controlled programmatically, and it reduces computational time and memory usage.
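
    The automation described amounts to extending the Richardson table until successive diagonal entries agree to a tolerance. A compact Python sketch of that stopping rule (the paper implements it in VBA; the tolerance and starting step are assumptions) is below.

        import math

        def richardson_derivative(f, x, h=0.5, tol=1e-10, max_level=10):
            # Column 0: central differences with halved steps; later columns
            # extrapolate via D[i][j] = D[i][j-1] + (D[i][j-1]-D[i-1][j-1])/(4^j-1)
            D = [[(f(x + h) - f(x - h)) / (2 * h)]]
            for i in range(1, max_level):
                h /= 2.0
                row = [(f(x + h) - f(x - h)) / (2 * h)]
                for j in range(1, i + 1):
                    row.append(row[j - 1] + (row[j - 1] - D[i - 1][j - 1]) / (4 ** j - 1))
                D.append(row)
                if abs(D[i][i] - D[i - 1][i - 1]) < tol:   # stopping criterion
                    return D[i][i]
            return D[-1][-1]

        print(richardson_derivative(math.exp, 1.0))   # ~ e = 2.718281828...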

  3. The Devil and Daniel's Spreadsheet

    Science.gov (United States)

    Burke, Maurice J.

    2012-01-01

    "When making mathematical models, technology is valuable for varying assumptions, exploring consequences, and comparing predictions with data," notes the Common Core State Standards Initiative (2010, p. 72). This exploration of the recursive process in the Devil and Daniel Webster problem reveals that the symbolic spreadsheet fits this bill.…

  4. Detecting Code Smells in Spreadsheet Formulas

    NARCIS (Netherlands)

    Hermans, F.; Pinzger, M.; Van Deursen, A.

    2011-01-01

    Spreadsheets are used extensively in business processes around the world and, just like software, spreadsheets are changed throughout their lifetime, causing maintainability issues. This paper adapts known code smells to spreadsheet formulas. To that end we present a list of metrics by which we can…

  5. Detecting Problematic Lookup Functions in Spreadsheets

    NARCIS (Netherlands)

    Hermans, F.; Aivaloglou, E.; Jansen, B.

    2015-01-01

    Spreadsheets are used heavily in many business domains around the world. They are easy to use and as such enable end-user programmers to build and maintain all sorts of reports and analyses. In addition to using spreadsheets for modeling and calculation, spreadsheets are often also used for…

  6. Semi-automated Biopanning of Bacterial Display Libraries for Peptide Affinity Reagent Discovery and Analysis of Resulting Isolates.

    Science.gov (United States)

    Sarkes, Deborah A; Jahnke, Justin P; Stratis-Cullum, Dimitra N

    2017-12-06

    Biopanning bacterial display libraries is a proven technique for peptide affinity reagent discovery for recognition of both biotic and abiotic targets. Peptide affinity reagents can be used for similar applications to antibodies, including sensing and therapeutics, but are more robust and able to perform in more extreme environments. Specific enrichment of peptide capture agents to a protein target of interest is enhanced using semi-automated sorting methods which improve binding and wash steps and therefore decrease the occurrence of false positive binders. A semi-automated sorting method is described herein for use with a commercial automated magnetic-activated cell sorting device with an unconstrained bacterial display sorting library expressing random 15-mer peptides. With slight modifications, these methods are extendable to other automated devices, other sorting libraries, and other organisms. A primary goal of this work is to provide a comprehensive methodology and expound the thought process applied in analyzing and minimizing the resulting pool of candidates. These techniques include analysis of on-cell binding using fluorescence-activated cell sorting (FACS), to assess affinity and specificity during sorting and in comparing individual candidates, and the analysis of peptide sequences to identify trends and consensus sequences for understanding and potentially improving the affinity to and specificity for the target of interest.

  7. Simple Functions Spreadsheet tool presentation

    International Nuclear Information System (INIS)

    Grive, Mireia; Domenech, Cristina; Montoya, Vanessa; Garcia, David; Duro, Lara

    2010-09-01

    This document is a guide for users of the Simple Functions Spreadsheet tool. The Simple Functions Spreadsheet tool has been developed by Amphos 21 to determine the solubility limits of some radionuclides, and it has been especially designed for Performance Assessment exercises. The development of this tool was prompted by the need expressed by SKB for a reliable and easy-to-handle tool to calculate solubility limits in an agile and relatively fast manner. Its development started in 2005 and it has been improved since then, up to the current version. This document describes the careful preliminary study, based on expert criteria, that was used to select the simplified aqueous speciation and solid phase system included in the tool. This report also gives basic instructions for using the tool and interpreting its results. Finally, this document reports the different validation tests and sensitivity analyses that were performed during the verification process.

  8. GENPLAT: an automated platform for biomass enzyme discovery and cocktail optimization.

    Science.gov (United States)

    Walton, Jonathan; Banerjee, Goutami; Car, Suzana

    2011-10-24

    The high cost of enzymes for biomass deconstruction is a major impediment to the economic conversion of lignocellulosic feedstocks to liquid transportation fuels such as ethanol. We have developed an integrated high throughput platform, called GENPLAT, for the discovery and development of novel enzymes and enzyme cocktails for the release of sugars from diverse pretreatment/biomass combinations. GENPLAT comprises four elements: individual pure enzymes, statistical design of experiments, robotic pipetting of biomass slurries and enzymes, and automated colorimetric determination of released Glc and Xyl. Individual enzymes are produced by expression in Pichia pastoris or Trichoderma reesei, or by chromatographic purification from commercial cocktails or from extracts of novel microorganisms. Simplex lattice (fractional factorial) mixture models are designed using commercial Design of Experiment statistical software. Enzyme mixtures of high complexity are constructed using robotic pipetting into a 96-well format. The measurement of released Glc and Xyl is automated using enzyme-linked colorimetric assays. Optimized enzyme mixtures containing as many as 16 components have been tested on a variety of feedstock and pretreatment combinations. GENPLAT is adaptable to mixtures of pure enzymes, mixtures of commercial products (e.g., Accellerase 1000 and Novozyme 188), extracts of novel microbes, or combinations thereof. To make and test mixtures of ˜10 pure enzymes requires less than 100 μg of each protein and fewer than 100 total reactions, when operated at a final total loading of 15 mg protein/g glucan. We use enzymes from several sources. Enzymes can be purified from natural sources such as fungal cultures (e.g., Aspergillus niger, Cochliobolus carbonum, and Galerina marginata), or they can be made by expression of the encoding genes (obtained from the increasing number of microbial genome sequences) in hosts such as E. coli, Pichia pastoris, or a filamentous fungus such
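
    For illustration, a simplex-lattice {q, m} design of the kind used to plan enzyme mixtures can be enumerated in a few lines (a generic sketch of the statistical design itself, not of the commercial DOE software GENPLAT actually uses):

      from itertools import combinations_with_replacement

      def simplex_lattice(q, m):
          """All mixtures of q components whose proportions are multiples
          of 1/m and sum to 1 (the {q, m} simplex-lattice design)."""
          points = set()
          for combo in combinations_with_replacement(range(q), m):
              point = [0] * q
              for idx in combo:
                  point[idx] += 1
              points.add(tuple(p / m for p in point))
          return sorted(points)

      # e.g. 3 enzymes on a {3, 2} lattice: 6 candidate mixtures
      for mix in simplex_lattice(3, 2):
          print(mix)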

  9. VirusDetect: An automated pipeline for efficient virus discovery using deep sequencing of small RNAs

    Science.gov (United States)

    Accurate detection of viruses in plants and animals is critical for agriculture production and human health. Deep sequencing and assembly of virus-derived siRNAs has proven to be a highly efficient approach for virus discovery. However, to date no computational tools specifically designed for both k...

  10. Supporting professional spreadsheet users by generating leveled dataflow diagrams

    NARCIS (Netherlands)

    Hermans, F.; Pinzger, M.; Van Deursen, A.

    2010-01-01

    Thanks to their flexibility and intuitive programming model, spreadsheets are widely used in industry, often for businesscritical applications. Similar to software developers, professional spreadsheet users demand support for maintaining and transferring their spreadsheets. In this paper, we first

  11. DataSpread: Unifying Databases and Spreadsheets.

    Science.gov (United States)

    Bendre, Mangesh; Sun, Bofan; Zhang, Ding; Zhou, Xinyan; Chang, Kevin ChenChuan; Parameswaran, Aditya

    2015-08-01

    Spreadsheet software is often the tool of choice for ad-hoc tabular data management, processing, and visualization, especially on tiny data sets. On the other hand, relational database systems offer significant power, expressivity, and efficiency over spreadsheet software for data management, while lacking in the ease of use and ad-hoc analysis capabilities. We demonstrate DataSpread, a data exploration tool that holistically unifies databases and spreadsheets. It continues to offer a Microsoft Excel-based spreadsheet front-end, while in parallel managing all the data in a back-end database, specifically, PostgreSQL. DataSpread retains all the advantages of spreadsheets, including ease of use, ad-hoc analysis and visualization capabilities, and a schema-free nature, while also adding the advantages of traditional relational databases, such as scalability and the ability to use arbitrary SQL to import, filter, or join external or internal tables and have the results appear in the spreadsheet. DataSpread needs to reason about and reconcile differences in the notions of schema, addressing of cells and tuples, and the current "pane" (which exists in spreadsheets but not in traditional databases), and support data modifications at both the front-end and the back-end. Our demonstration will center on our first and early prototype of the DataSpread, and will give the attendees a sense for the enormous data exploration capabilities offered by unifying spreadsheets and databases.
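
    The core back-end idea, storing cells as rows in a relational table keyed by position while the front-end keeps spreadsheet semantics, can be sketched as follows (using Python and SQLite as a stand-in for DataSpread's actual PostgreSQL back-end; the schema is an illustrative assumption):

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("""CREATE TABLE cells (
          sheet TEXT, row INTEGER, col INTEGER, value TEXT,
          PRIMARY KEY (sheet, row, col))""")

      def set_cell(sheet, row, col, value):
          conn.execute("INSERT OR REPLACE INTO cells VALUES (?,?,?,?)",
                       (sheet, row, col, str(value)))

      def get_pane(sheet, r0, r1, c0, c1):
          # Fetch only the visible "pane", so scrolling a huge sheet
          # never loads the whole table into memory.
          cur = conn.execute(
              "SELECT row, col, value FROM cells WHERE sheet=? "
              "AND row BETWEEN ? AND ? AND col BETWEEN ? AND ?",
              (sheet, r0, r1, c0, c1))
          return cur.fetchall()

      set_cell("Sheet1", 1, 1, 42)
      print(get_pane("Sheet1", 1, 50, 1, 10))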

  12. Spreadsheet Design: An Optimal Checklist for Accountants

    Science.gov (United States)

    Barnes, Jeffrey N.; Tufte, David; Christensen, David

    2009-01-01

    Just as good grammar, punctuation, style, and content organization are important to well-written documents, basic fundamentals of spreadsheet design are essential to clear communication. In fact, the very principles of good writing should be integrated into spreadsheet workpaper design and organization. The unique contributions of this paper are…

  13. Automatically extracting class diagrams from spreadsheets

    NARCIS (Netherlands)

    Hermans, F.; Pinzger, M.; Van Deursen, A.

    2010-01-01

    The use of spreadsheets to capture information is widespread in industry. Spreadsheets can thus be a wealthy source of domain information. We propose to automatically extract this information and transform it into class diagrams. The resulting class diagram can be used by software engineers to

  14. The Growing Problems with Spreadsheet Budgeting

    Science.gov (United States)

    Solomon, Jeff; Johnson, Stella; Wilcox, Leon; Olson, Tom

    2010-01-01

    The ubiquitous spreadsheet in some version has been the sole and unrivaled instrument of financial management for decades. And it has served well. The spreadsheet provides the flexibility to design a unique business process. It allows users to create formulas that execute complex calculations, and it is available in the globally standardized Excel…

  15. Discovery of Overcoating Metal Oxides on Photoelectrode for Water Splitting by Automated Screening.

    Science.gov (United States)

    Saito, Rie; Miseki, Yugo; Nini, Wang; Sayama, Kazuhiro

    2015-10-12

    We applied an automated semiconductor synthesis and screen system to discover overcoating film materials and optimize coating conditions on the BiVO4/WO3 composite photoelectrode to enhance stability and photocurrent. Thirteen metallic elements for overcoating oxides were examined with various coating amounts. The stability of the BiVO4/WO3 photoelectrode in a highly concentrated carbonate electrolyte aqueous solution was significantly improved by overcoating with Ta2O5 film, which was amorphous and porous when calcined at 550 °C. The photocurrent for the water oxidation reaction was only minimally inhibited by the presence of the Ta2O5 film on the BiVO4/WO3 photoelectrode.

  16. Spreadsheets in the Cloud – Not Ready Yet

    Directory of Open Access Journals (Sweden)

    Bruce D. McCullogh

    2013-01-01

    Full Text Available Cloud computing is a relatively new technology that facilitates collaborative creation and modification of documents over the internet in real time. Here we provide an introductory assessment of the available statistical functions in three leading cloud spreadsheets namely Google Spreadsheet, Microsoft Excel Web App, and Zoho Sheet. Our results show that the developers of cloud-based spreadsheets are not performing basic quality control, resulting in statistical computations that are misleading and erroneous. Moreover, the developers do not provide sufficient information regarding the software and the hardware, which can change at any time without notice. Indeed, rerunning the tests after several months we obtained different and sometimes worsened results.

  17. A probabilistic approach for automated discovery of perturbed genes using expression data from microarray or RNA-Seq.

    Science.gov (United States)

    Sundaramurthy, Gopinath; Eghbalnia, Hamid R

    2015-12-01

    In complex diseases, alterations of multiple molecular and cellular components in response to perturbations are indicative of disease physiology. While the expression level of genes from high-throughput analysis can vary among patients, the common path of disease progression suggests that the underlying cellular sub-processes involving associated genes follow similar fates. Motivated by the interconnected nature of sub-processes, we have developed an automated methodology that combines ideas from biological networks, statistical models, and game theory to probe connected cellular processes. The core concept in our approach uses probability of change (POC) to indicate the probability that a gene's expression level has changed between two conditions. POC facilitates the definition of change at the neighborhood, pathway, and network levels and enables evaluation of the influence of diseases on the expression. The 'connected' disease-related genes (DRG) identified display coherent and concomitant differential expression levels along paths. RNA-Seq and microarray breast cancer subtyping expression data sets were used to identify DRG between subtypes. A machine-learning algorithm was trained for subtype discrimination using the DRG, and the training yielded a set of biomarkers. The discriminative power of the biomarkers was tested using an unseen data set. The identified biomarkers overlap with disease-specific genes, and we were able to classify disease subtypes with 100% and 80% agreement with PAM50, for the microarray and RNA-Seq data sets respectively. We present an automated probabilistic approach that offers unbiased and reproducible results, thus complementing existing methods in DRG and biomarker discovery for complex diseases. Copyright © 2015. Published by Elsevier Ltd.

  18. Whole animal automated platform for drug discovery against multi-drug resistant Staphylococcus aureus.

    Directory of Open Access Journals (Sweden)

    Rajmohan Rajamuthiah

    Full Text Available Staphylococcus aureus, the leading cause of hospital-acquired infections in the United States, is also pathogenic to the model nematode Caenorhabditis elegans. The C. elegans-S. aureus infection model was previously carried out on solid agar plates where the bacteriovorous C. elegans feeds on a lawn of S. aureus. However, agar-based assays are not amenable to large scale screens for antibacterial compounds. We have developed a high throughput liquid screening assay that uses robotic instrumentation to dispense a precise amount of methicillin resistant S. aureus (MRSA) and worms in 384-well assay plates, followed by automated microscopy and image analysis. In validation of the liquid assay, an MRSA cell wall defective mutant, MW2ΔtarO, which is attenuated for killing in the agar-based assay, was found to be less virulent in the liquid assay. This robust assay with a Z'-factor consistently greater than 0.5 was utilized to screen the Biomol 4 compound library consisting of 640 small molecules with well characterized bioactivities. As proof of principle, 27 of the 30 clinically used antibiotics present in the library conferred increased C. elegans survival and were identified as hits in the screen. Surprisingly, the antihelminthic drug closantel was also identified as a hit in the screen. In further studies, we confirmed the anti-staphylococcal activity of closantel against vancomycin-resistant S. aureus isolates and other Gram-positive bacteria. The liquid C. elegans-S. aureus assay described here allows screening for anti-staphylococcal compounds that are not toxic to the host.

  19. Whole animal automated platform for drug discovery against multi-drug resistant Staphylococcus aureus.

    Science.gov (United States)

    Rajamuthiah, Rajmohan; Fuchs, Beth Burgwyn; Jayamani, Elamparithi; Kim, Younghoon; Larkins-Ford, Jonah; Conery, Annie; Ausubel, Frederick M; Mylonakis, Eleftherios

    2014-01-01

    Staphylococcus aureus, the leading cause of hospital-acquired infections in the United States, is also pathogenic to the model nematode Caenorhabditis elegans. The C. elegans-S. aureus infection model was previously carried out on solid agar plates where the bacteriovorous C. elegans feeds on a lawn of S. aureus. However, agar-based assays are not amenable to large scale screens for antibacterial compounds. We have developed a high throughput liquid screening assay that uses robotic instrumentation to dispense a precise amount of methicillin resistant S. aureus (MRSA) and worms in 384-well assay plates, followed by automated microscopy and image analysis. In validation of the liquid assay, an MRSA cell wall defective mutant, MW2ΔtarO, which is attenuated for killing in the agar-based assay, was found to be less virulent in the liquid assay. This robust assay with a Z'-factor consistently greater than 0.5 was utilized to screen the Biomol 4 compound library consisting of 640 small molecules with well characterized bioactivities. As proof of principle, 27 of the 30 clinically used antibiotics present in the library conferred increased C. elegans survival and were identified as hits in the screen. Surprisingly, the antihelminthic drug closantel was also identified as a hit in the screen. In further studies, we confirmed the anti-staphylococcal activity of closantel against vancomycin-resistant S. aureus isolates and other Gram-positive bacteria. The liquid C. elegans-S. aureus assay described here allows screening for anti-staphylococcal compounds that are not toxic to the host.
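
    The Z'-factor quality metric mentioned above has a standard definition; a quick check of assay robustness from positive- and negative-control wells might look like this (the survival numbers are invented for illustration):

      from statistics import mean, stdev

      def z_prime(pos, neg):
          """Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|;
          assays with Z' > 0.5 are conventionally considered robust."""
          return 1 - 3 * (stdev(pos) + stdev(neg)) / abs(mean(pos) - mean(neg))

      # hypothetical % worm survival in control wells
      treated  = [92, 88, 95, 90, 91]   # antibiotic-treated (positive control)
      infected = [12, 18, 15, 10, 14]   # infected, no drug (negative control)
      print(round(z_prime(treated, infected), 2))   # ~0.78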

  20. Exploring the Birthday Problem with Spreadsheets.

    Science.gov (United States)

    Lesser, Lawrence M.

    1999-01-01

    Explores a birthday-related problem that asks about the probability of having people with the same birthday in a room. Utilizes spreadsheets to work on the problem and discusses related teaching issues. Contains 20 references. (ASK)
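
    The classic calculation the article builds in a spreadsheet is a one-line recurrence; a sketch (assuming the usual 365 equally likely birthdays):

      def p_shared_birthday(n, days=365):
          # P(at least two of n people share a birthday)
          p_distinct = 1.0
          for i in range(n):
              p_distinct *= (days - i) / days
          return 1 - p_distinct

      print(p_shared_birthday(23))   # ~0.507: 23 people suffice for >50%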

  1. Petrogenetic Modeling with a Spreadsheet Program.

    Science.gov (United States)

    Holm, Paul Eric

    1988-01-01

    Describes how interactive programs for scientific modeling may be created by using spreadsheet software such as LOTUS 1-2-3. Lists the advantages of using this method. Discusses fractional distillation, batch partial melting, and combination models as examples. (CW)
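
    The two models named lend themselves to one-line formulas; a sketch of the standard trace-element equations (batch partial melting and Rayleigh fractional crystallization, with D the bulk partition coefficient and F the melt fraction or fraction of liquid remaining):

      def batch_melting(D, F):
          # C_liquid / C_source for batch (equilibrium) partial melting
          return 1.0 / (D + F * (1.0 - D))

      def rayleigh_crystallization(D, F):
          # C_liquid / C_initial for Rayleigh fractional crystallization
          return F ** (D - 1.0)

      # incompatible element (D = 0.01) at 10% melting / 90% liquid remaining
      print(batch_melting(0.01, 0.10))             # ~9.2x enrichment
      print(rayleigh_crystallization(0.01, 0.90))  # ~1.11x enrichment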

  2. Declarative Parallel Programming in Spreadsheet End-User Development

    DEFF Research Database (Denmark)

    Biermann, Florian

    2016-01-01

    Spreadsheets are first-order functional languages and are widely used in research and industry as a tool to conveniently perform all kinds of computations. Because cells on a spreadsheet are immutable, there are possibilities for implicit parallelization of spreadsheet computations. In this literature study, we provide an overview of the publications on spreadsheet end-user programming and declarative array programming to inform further research on parallel programming in spreadsheets. Our results show that there is a clear overlap between spreadsheet programming and array programming and we can directly apply results from functional array programming to a spreadsheet model of computations.

  3. A spreadsheet approach to facilitate visualization of uncertainty in information.

    Science.gov (United States)

    Streit, Alexander; Pham, Binh; Brown, Ross

    2008-01-01

    Information uncertainty is inherent in many problems and is often subtle and complicated to understand. Although visualization is a powerful means for exploring and understanding information, information uncertainty visualization is ad hoc and not widespread. This paper identifies two main barriers to the uptake of information uncertainty visualization: firstly, the difficulty of modeling and propagating the uncertainty information; and secondly, the difficulty of mapping uncertainty to visual elements. To overcome these barriers, we extend the spreadsheet paradigm to encapsulate uncertainty details within cells. This creates an inherent awareness of the uncertainty associated with each variable. The spreadsheet can hide the uncertainty details, enabling the user to think simply in terms of variables. Furthermore, the system can aid with automated propagation of uncertainty information, since it is intrinsically aware of the uncertainty. The system also enables mapping the encapsulated uncertainty to visual elements via the formula language and a visualization sheet. Support for such low-level visual mapping provides flexibility to explore new techniques for information uncertainty visualization.
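
    A minimal sketch of the encapsulation idea (an assumed value-plus-standard-deviation cell type with first-order propagation for independent quantities; not the authors' implementation):

      import math

      class UCell:
          """A spreadsheet-style cell carrying a value and its uncertainty."""
          def __init__(self, value, sd=0.0):
              self.value, self.sd = value, sd

          def __add__(self, other):
              # variances add for independent quantities
              return UCell(self.value + other.value,
                           math.hypot(self.sd, other.sd))

          def __mul__(self, other):
              v = self.value * other.value
              rel = math.hypot(self.sd / self.value, other.sd / other.value)
              return UCell(v, abs(v) * rel)

          def __repr__(self):
              return f"{self.value:.3g} ± {self.sd:.2g}"

      a, b = UCell(10.0, 0.5), UCell(4.0, 0.2)
      print(a + b, a * b)   # 14 ± 0.54   40 ± 2.8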

  4. Spreadsheet tool for estimating noise reduction costs

    International Nuclear Information System (INIS)

    Frank, L.; Senden, V.; Leszczynski, Y.

    2009-01-01

    The Northeast Capital Industrial Association (NCIA) represents industry in Alberta's industrial heartland. The organization is in the process of developing a regional noise management plan (RNMP) for its member companies. The RNMP includes the development of a noise reduction cost spreadsheet tool that reviews the practical noise control treatments available for individual plant equipment, including the achievable ranges of noise attenuation, and produces a budgetary prediction of the installed cost of those treatments. This paper discussed the noise reduction cost spreadsheet tool, with particular reference to noise control best practices and to the tool's development, including prerequisites, assembly of the required data, approach, and the unit pricing database. Use and optimization of the noise reduction cost spreadsheet tool were also discussed. It was concluded that the tool is an easy, interactive way to estimate the implementation costs of different noise control strategies and mitigation options, and that it was very helpful in gaining insight for noise control planning purposes. 2 tabs.

  5. Enabling students to make investigations through spreadsheets

    Directory of Open Access Journals (Sweden)

    Erol - KARAKIRIK

    2015-01-01

    Full Text Available Spreadsheets are widely used in education to save time and simulate many scenarios in many different disciplines. However, they are not usually employed to provide rich interactions in the class, although they offer huge computational power and can relate data in different places. It is argued in this paper that spreadsheets can be turned into an empowering tool for making inferences in the class by enabling students to investigate an open-ended problem, to discuss it, and to share their results. A constructivist approach requires students to make conjectures and test them through cognitive tools such as dynamic geometry environments. Students can be asked to investigate an open-ended problem with the help of macros or templates provided by the instructor in a spreadsheet. This not only motivates them to participate in the class but also makes them think about the problem at hand. Our approach is exemplified through macros prepared for a mathematics class; divisibility features of binomial coefficients were investigated through spreadsheets in this paper.

  6. On the Numerical Accuracy of Spreadsheets

    Directory of Open Access Journals (Sweden)

    Alejandro C. Frery

    2010-10-01

    Full Text Available This paper discusses the numerical precision of five spreadsheets (Calc, Excel, Gnumeric, NeoOffice and Oleo) running on two hardware platforms (i386 and amd64) and on three operating systems (Windows Vista, Ubuntu Intrepid and Mac OS Leopard). The methodology consists of checking the number of correct significant digits returned by each spreadsheet when computing the sample mean, standard deviation, first-order autocorrelation, F statistic in ANOVA tests, linear and nonlinear regression and distribution functions. A discussion about the algorithms for pseudorandom number generation provided by these platforms is also conducted. We conclude that there is no safe choice among the spreadsheets here assessed: they all fail in nonlinear regression and they are not suited for Monte Carlo experiments.
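
    The kind of failure such audits look for is easy to reproduce; the textbook one-pass variance formula loses all significant digits on data with a large mean, which is why counting correct digits is so revealing (a generic illustration, not one of the paper's actual test cases):

      def sd_one_pass(xs):
          # algebraically correct, numerically disastrous formula
          n = len(xs)
          return ((sum(x * x for x in xs) - sum(xs) ** 2 / n) / (n - 1)) ** 0.5

      def sd_two_pass(xs):
          n, m = len(xs), sum(xs) / len(xs)
          return (sum((x - m) ** 2 for x in xs) / (n - 1)) ** 0.5

      data = [1e8 + d for d in (1.0, 2.0, 3.0)]   # true sd = 1.0
      print(sd_one_pass(data), sd_two_pass(data))  # cancellation ruins the first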

  7. LICSS - a chemical spreadsheet in microsoft excel.

    Science.gov (United States)

    Lawson, Kevin R; Lawson, Jonty

    2012-02-02

    Representations of chemical datasets in spreadsheet format are important for ready data assimilation and manipulation. In addition to the normal spreadsheet facilities, chemical spreadsheets need to have visualisable chemical structures and data searchable by chemical as well as textual queries. Many such chemical spreadsheet tools are available, some operating in the familiar Microsoft Excel environment. However, within this group, the performance of Excel is often compromised, particularly in terms of the number of compounds which can usefully be stored on a sheet. LICSS is a lightweight chemical spreadsheet within Microsoft Excel for Windows. LICSS stores structures solely as Smiles strings. Chemical operations are carried out by calling Java code modules which use the CDK, JChemPaint and OPSIN libraries to provide cheminformatics functionality. Compounds in sheets or charts may be visualised (individually or en masse), and sheets may be searched by substructure or similarity. All the molecular descriptors available in CDK may be calculated for compounds (in batch or on-the-fly), and various cheminformatic operations such as fingerprint calculation, Sammon mapping, clustering and R group table creation may be carried out. We detail here the features of LICSS and how they are implemented. We also explain the design criteria, particularly in terms of potential corporate use, which led to this particular implementation. LICSS is an Excel-based chemical spreadsheet with a difference:
    • It can usefully be used on sheets containing hundreds of thousands of compounds; it doesn't compromise the normal performance of Microsoft Excel
    • It is designed to be installed and run in environments in which users do not have admin privileges; installation involves merely file copying, and sharing of LICSS sheets invokes automatic installation
    • It is free and extensible
    LICSS is open source software and we hope sufficient detail is provided here to enable developers to add their

  8. Three Spreadsheet Models Of A Simple Pendulum

    Directory of Open Access Journals (Sweden)

    Jan Benacka

    2008-10-01

    Full Text Available The paper gives three spreadsheet models of simple pendulum motion. In two of them, the graph of the pendulum angle is drawn from the exact integral solution using numeric integration – the simpler model uses only spreadsheet functions, while the general model uses a VBA program. The period results directly from the calculations. In the third model, the period is calculated first using the power series formula. The graph of the pendulum angle is drawn afterwards using Euler’s method of solving differential equations. The error in gravity acceleration when calculated with the standard cosine approximation instead of the exact formula is graphed.
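
    The third model's core, Euler's method on the exact pendulum equation θ'' = -(g/L)·sin θ, fits in a few lines (a generic sketch in Python, not the paper's VBA; the step size is kept small because explicit Euler slowly gains energy):

      import math

      def pendulum_euler(theta0_deg=60.0, L=1.0, g=9.81, dt=1e-4, t_end=10.0):
          """Explicit Euler integration of theta'' = -(g/L) * sin(theta)."""
          theta = math.radians(theta0_deg)
          omega, trace = 0.0, []
          for n in range(int(t_end / dt)):
              theta, omega = (theta + dt * omega,
                              omega - dt * (g / L) * math.sin(theta))
              trace.append((n * dt, theta))
          return trace

      trace = pendulum_euler()
      # max angle creeps slightly above 60 degrees: Euler's energy drift
      print(max(math.degrees(th) for _, th in trace))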

  9. Translating mainframe computer data to spreadsheet format.

    Science.gov (United States)

    Burnakis, T G

    1991-12-01

    The translation of mainframe-stored information in ASCII into spreadsheet format for use in Lotus 1-2-3 is explained. Details are presented on how to use the Data Parse command to create a translation template that tells Lotus 1-2-3 how to interpret a file written in ASCII. Lotus 1-2-3 can also translate some files in formats other than ASCII. It can translate files in the Data Interchange Format directly into its own format. Translating mainframe computer data into spreadsheet format is relatively simple and obviates the rekeying of those data.

  10. Algorithms using Java for Spreadsheet Dependent Cell Recomputation

    National Research Council Canada - National Science Library

    Francoeur, Joe

    2002-01-01

    Java implementations of algorithms used by spreadsheets to automatically recompute the set of cells dependent on a changed cell are described using a mathematical model for spreadsheets based on graph theory...
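
    The standard approach such algorithms build on is a topological pass over the cell dependency graph; a compact sketch (Kahn's algorithm, assuming an acyclic sheet; not the report's Java code):

      from collections import defaultdict, deque

      def recompute_order(depends_on, changed):
          """Cells to recompute, in dependency order, after `changed` edits.
          depends_on maps a cell to the cells its formula reads."""
          dependents = defaultdict(set)            # reverse edges
          for cell, srcs in depends_on.items():
              for s in srcs:
                  dependents[s].add(cell)
          # collect every cell reachable from the changed set
          dirty, stack = set(), list(changed)
          while stack:
              c = stack.pop()
              for d in dependents[c]:
                  if d not in dirty:
                      dirty.add(d)
                      stack.append(d)
          # Kahn's algorithm restricted to the dirty subgraph
          indeg = {c: sum(s in dirty for s in depends_on.get(c, ())) for c in dirty}
          queue = deque(c for c in dirty if indeg[c] == 0)
          order = []
          while queue:
              c = queue.popleft()
              order.append(c)
              for d in dependents[c]:
                  if d in dirty:
                      indeg[d] -= 1
                      if indeg[d] == 0:
                          queue.append(d)
          return order

      deps = {"B1": ["A1"], "C1": ["A1", "B1"], "D1": ["C1"]}
      print(recompute_order(deps, {"A1"}))   # ['B1', 'C1', 'D1']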

  11. A Literature Review of Spreadsheet Technology

    DEFF Research Database (Denmark)

    Bock, Alexander

    2016-01-01

    It was estimated that there would be over 55 million end-user programmers in 2012 in many different fields such as engineering, insurance and banking, and the numbers are not expected to have dwindled since. Consequently, technological advancements in spreadsheets are of great interest to a wide

  12. Enron versus EUSES : A comparison of two spreadsheet corpora

    NARCIS (Netherlands)

    Jansen, B.

    2015-01-01

    Spreadsheets are widely used within companies and often form the basis for business decisions. Numerous cases are known where incorrect information in spreadsheets led to incorrect decisions. Such cases underline the relevance of research on the professional use of spreadsheets. Recently a new

  13. Computerized cost estimation spreadsheet and cost data base for fusion devices

    International Nuclear Information System (INIS)

    Hamilton, W.R.; Rothe, K.E.

    1985-01-01

    An automated approach to performing and cataloging cost estimates has been developed at the Fusion Engineering Design Center (FEDC), wherein the cost estimate record is stored in the LOTUS 1-2-3 spreadsheet on an IBM personal computer. The cost estimation spreadsheet is based on the cost coefficient/cost algorithm approach and incorporates a detailed generic code of cost accounts for both tokamak and tandem mirror devices. Component design parameters (weight, surface area, etc.) and cost factors are input, and direct and indirect costs are calculated. The cost data base file derived from actual cost experience within the fusion community and refined to be compatible with the spreadsheet costing approach is a catalog of cost coefficients, algorithms, and component costs arranged into data modules corresponding to specific components and/or subsystems. Each data module contains engineering, equipment, and installation labor cost data for different configurations and types of the specific component or subsystem. This paper describes the assumptions, definitions, methodology, and architecture incorporated in the development of the cost estimation spreadsheet and cost data base, along with the type of input required and the output format
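
    A toy version of the cost coefficient/cost algorithm idea (the component names, parameters, and coefficients below are purely illustrative, not values from the FEDC data base):

      # cost modules: component -> (design parameter value, unit, $/unit coefficient)
      modules = {
          "TF coils":      (1.2e6, "kg",  55.0),    # weight-based algorithm
          "vacuum vessel": (3.4e3, "m^2", 9.0e3),   # area-based algorithm
      }

      direct = sum(qty * coeff for qty, _, coeff in modules.values())
      indirect = 0.35 * direct          # assumed indirect-cost fraction
      print(f"direct ${direct/1e6:.1f}M, total ${(direct + indirect)/1e6:.1f}M")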

  14. Semi-automated high-throughput fluorescent intercalator displacement-based discovery of cytotoxic DNA binding agents from a large compound library.

    Science.gov (United States)

    Glass, Lateca S; Bapat, Aditi; Kelley, Mark R; Georgiadis, Millie M; Long, Eric C

    2010-03-01

    High-throughput fluorescent intercalator displacement (HT-FID) was adapted to the semi-automated screening of a commercial compound library containing 60,000 molecules resulting in the discovery of cytotoxic DNA-targeted agents. Although commercial libraries are routinely screened in drug discovery efforts, the DNA binding potential of the compounds they contain has largely been overlooked. HT-FID led to the rapid identification of a number of compounds for which DNA binding properties were validated through demonstration of concentration-dependent DNA binding and increased thermal melting of A/T- or G/C-rich DNA sequences. Selected compounds were assayed further for cell proliferation inhibition in glioblastoma cells. Seven distinct compounds emerged from this screening procedure that represent structures unknown previously to be capable of targeting DNA leading to cell death. These agents may represent structures worthy of further modification to optimally explore their potential as cytotoxic anti-cancer agents. In addition, the general screening strategy described may find broader impact toward the rapid discovery of DNA targeted agents with biological activity. Copyright 2010 Elsevier Ltd. All rights reserved.

  15. Closing the Loop: Automated Data-Driven Cognitive Model Discoveries Lead to Improved Instruction and Learning Gains

    Science.gov (United States)

    Liu, Ran; Koedinger, Kenneth R.

    2017-01-01

    As the use of educational technology becomes more ubiquitous, an enormous amount of learning process data is being produced. Educational data mining seeks to analyze and model these data, with the ultimate goal of improving learning outcomes. The most firmly grounded and rigorous evaluation of an educational data mining discovery is whether it…

  16. Robofurnace: A semi-automated laboratory chemical vapor deposition system for high-throughput nanomaterial synthesis and process discovery

    Science.gov (United States)

    Oliver, C. Ryan; Westrick, William; Koehler, Jeremy; Brieland-Shoultz, Anna; Anagnostopoulos-Politis, Ilias; Cruz-Gonzalez, Tizoc; Hart, A. John

    2013-11-01

    Laboratory research and development on new materials, such as nanostructured thin films, often utilizes manual equipment such as tube furnaces due to its relatively low cost and ease of setup. However, these systems can be prone to inconsistent outcomes due to variations in standard operating procedures, and limitations in performance, such as heating and cooling rates, restrict the parameter space that can be explored. Perhaps more importantly, maximization of research throughput, and the successful and efficient translation of materials processing knowledge to production-scale systems, rely on the attainment of consistent outcomes. In response to this need, we present a semi-automated lab-scale chemical vapor deposition (CVD) furnace system, called "Robofurnace." Robofurnace is an automated CVD system built around a standard tube furnace, which automates sample insertion and removal and uses motion of the furnace to achieve rapid heating and cooling. The system has a 10-sample magazine and motorized transfer arm, which isolates the samples from the lab atmosphere and enables highly repeatable placement of the sample within the tube. The system is designed to enable continuous operation of the CVD reactor, with asynchronous loading/unloading of samples. To demonstrate its performance, Robofurnace is used to develop a rapid CVD recipe for carbon nanotube (CNT) forest growth, achieving a 10-fold improvement in CNT forest mass density compared to a benchmark recipe using a manual tube furnace. In the long run, multiple systems like Robofurnace may be linked to share data among laboratories by methods such as Twitter. Our hope is that Robofurnace and automation like it will enable machine learning to optimize and discover relationships in complex material synthesis processes.

  17. Robofurnace: A semi-automated laboratory chemical vapor deposition system for high-throughput nanomaterial synthesis and process discovery

    International Nuclear Information System (INIS)

    Oliver, C. Ryan; Westrick, William; Koehler, Jeremy; Brieland-Shoultz, Anna; Anagnostopoulos-Politis, Ilias; Cruz-Gonzalez, Tizoc; Hart, A. John

    2013-01-01

    Laboratory research and development on new materials, such as nanostructured thin films, often utilizes manual equipment such as tube furnaces due to its relatively low cost and ease of setup. However, these systems can be prone to inconsistent outcomes due to variations in standard operating procedures, and limitations in performance, such as heating and cooling rates, restrict the parameter space that can be explored. Perhaps more importantly, maximization of research throughput, and the successful and efficient translation of materials processing knowledge to production-scale systems, rely on the attainment of consistent outcomes. In response to this need, we present a semi-automated lab-scale chemical vapor deposition (CVD) furnace system, called “Robofurnace.” Robofurnace is an automated CVD system built around a standard tube furnace, which automates sample insertion and removal and uses motion of the furnace to achieve rapid heating and cooling. The system has a 10-sample magazine and motorized transfer arm, which isolates the samples from the lab atmosphere and enables highly repeatable placement of the sample within the tube. The system is designed to enable continuous operation of the CVD reactor, with asynchronous loading/unloading of samples. To demonstrate its performance, Robofurnace is used to develop a rapid CVD recipe for carbon nanotube (CNT) forest growth, achieving a 10-fold improvement in CNT forest mass density compared to a benchmark recipe using a manual tube furnace. In the long run, multiple systems like Robofurnace may be linked to share data among laboratories by methods such as Twitter. Our hope is that Robofurnace and automation like it will enable machine learning to optimize and discover relationships in complex material synthesis processes.

  18. Enron’s Spreadsheets and Related Emails : A Dataset and Analysis

    NARCIS (Netherlands)

    Hermans, F.; Murphy-Hill, E.

    2014-01-01

    Spreadsheets are used extensively in business processes around the world and as such, a topic of research interest. Over the past few years, many spreadsheet studies have been performed on the EUSES spreadsheet corpus. While this corpus has served the spreadsheet community well, the spreadsheets it

  19. NET PRESENT VALUE SIMULATING WITH A SPREADSHEET

    Directory of Open Access Journals (Sweden)

    Maria CONSTANTINESCU

    2010-01-01

    Full Text Available Decision making has always been a difficult process, based on various combinations of objectivity (when scientific tools are used) and subjectivity (considering that decisions are finally made by people, with their strengths and weaknesses). The IT revolution has also reached the areas of management and decision making, helping managers make better and more informed decisions by providing them with a variety of tools, from personal computers to specialized software. Most simulations are performed in a spreadsheet, because the number of calculations required soon overwhelms human capability.
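
    A minimal Monte Carlo NPV simulation of the sort usually built in a spreadsheet (the cash-flow distribution and discount rate below are invented for illustration):

      import random
      import statistics

      def npv(rate, cashflows):
          # cashflows[0] is the time-0 investment (negative)
          return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

      def simulate(n=10_000, rate=0.08):
          results = []
          for _ in range(n):
              flows = [-1000.0] + [random.gauss(300.0, 75.0) for _ in range(5)]
              results.append(npv(rate, flows))
          return results

      r = simulate()
      print(f"mean NPV {statistics.mean(r):.0f}, "
            f"P(NPV<0) = {sum(v < 0 for v in r) / len(r):.2%}")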

  20. Viscoelastic Pavement Modeling with a Spreadsheet

    DEFF Research Database (Denmark)

    Levenberg, Eyal

    2016-01-01

    The aim herein was to equip civil engineers and students with an advanced pavement modeling tool that is both easy to use and highly adaptive. To achieve this, a mathematical solution for a layered viscoelastic half-space subjected to a moving load was developed and subsequently implemented in a spreadsheet environment. The final program can consider up to five fully bonded layers, each isotropic, homogeneous and weightless. The top layer (as well as others if desired) is linear viscoelastic, while the remaining layers are linear elastic. The load is applied vertically to the surface of the system...

  1. Using Spreadsheets to Produce Acid-Base Titration Curves.

    Science.gov (United States)

    Cawley, Martin James; Parkinson, John

    1995-01-01

    Describes two spreadsheets for producing acid-base titration curves: one uses relatively simple cell formulae that can be written into the spreadsheet by inexperienced students, and the second uses more complex formulae that are best written by the teacher. (JRH)
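
    The simpler of the two spreadsheets boils down to a charge-balance calculation per increment of titrant; a sketch for a strong acid titrated with a strong base (stoichiometric formulas only, activity effects ignored; not the article's actual cell formulae):

      import math

      def ph_strong_strong(Va_mL, Ca, Vb_mL, Cb, Kw=1e-14):
          """pH when Vb mL of strong base (Cb M) is added to
          Va mL of strong acid (Ca M)."""
          V = (Va_mL + Vb_mL) / 1000.0                       # total volume, L
          excess = (Va_mL * Ca - Vb_mL * Cb) / 1000.0 / V    # net [H+] - [OH-]
          # solve [H+] - Kw/[H+] = excess  (quadratic in [H+])
          h = (excess + math.sqrt(excess ** 2 + 4 * Kw)) / 2
          return -math.log10(h)

      for vb in (0.0, 24.9, 25.0, 25.1):   # 25 mL of 0.1 M HCl vs 0.1 M NaOH
          print(vb, round(ph_strong_strong(25.0, 0.1, vb, 0.1), 2))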

  2. Integrating Critical Spreadsheet Competencies into the Accounting Curriculum

    Science.gov (United States)

    Walters, L. Melissa; Pergola, Teresa M.

    2012-01-01

    The American Institute of Certified Public Accountants (AICPA) and the International Accounting Education Standards Board (IAESB) identify spreadsheet technology as a key information technology (IT) competency for accounting professionals. However, requisite spreadsheet competencies are not specifically defined by the AICPA or IAESB, nor are they…

  3. Spreadsheet-Based Program for Simulating Atomic Emission Spectra

    Science.gov (United States)

    Flannigan, David J.

    2014-01-01

    A simple Excel spreadsheet-based program for simulating atomic emission spectra from the properties of neutral atoms (e.g., energies and statistical weights of the electronic states, electronic partition functions, transition probabilities, etc.) is described. The contents of the spreadsheet (i.e., input parameters, formulas for calculating…
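
    The physics such a spreadsheet encodes is compact; relative line intensities follow from a Boltzmann population and the transition probability (a generic sketch with made-up line data, not the spreadsheet's own tables):

      import math

      K_B = 8.617333e-5        # Boltzmann constant, eV/K

      def line_intensity(g_upper, A, E_upper_eV, wavelength_nm, T, Z):
          """Relative emission intensity of one line from a neutral atom
          in LTE at temperature T (arbitrary units)."""
          population = g_upper * math.exp(-E_upper_eV / (K_B * T)) / Z
          return population * A / wavelength_nm   # photon energy ~ 1/lambda

      # hypothetical two-line example at T = 5000 K, partition function Z = 2.1
      I1 = line_intensity(3, 6.2e7, 2.10, 589.0, 5000, 2.1)
      I2 = line_intensity(5, 1.0e7, 3.60, 330.0, 5000, 2.1)
      print(I1 / I2)            # predicted intensity ratio of the two lines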

  4. A Spreadsheet-Based Approach for Operations Research Teaching

    Science.gov (United States)

    Munisamy, Susila

    2009-01-01

    This paper considers the use of spreadsheet for introducing students to a variety of quantitative models covered in an introductory Operations Research (OR) course at the University of Malaya, Malaysia. This approach allows students to develop skills in modeling as they learn to apply the various quantitative models in a spreadsheet. Indeed,…

  5. Detecting and Visualizing Inter-Worksheet Smells in Spreadsheets

    NARCIS (Netherlands)

    Hermans, F.; Pinzger, M.; Van Deursen, A.

    2011-01-01

    Spreadsheets are often used in business for simple tasks as well as for mission-critical tasks such as finance or forecasting. Similar to software, some spreadsheets are of better quality than others, for instance with respect to usability, maintainability or reliability. In contrast with software

  6. Detecting and refactoring code smells in spreadsheet formula

    NARCIS (Netherlands)

    Hermans, F.F.J.; Pinzger, M.; Van Deursen, A.

    2013-01-01

    Preprint of article published in: Empirical Software Engineering, February 2014, Springer Science+Business Media New York, doi:10.1007/s10664-013-9296-2 Spreadsheets are used extensively in business processes around the world and just like software, spreadsheets are changed throughout their lifetime

  7. Excel Spreadsheets for Algebra: Improving Mental Modeling for Problem Solving

    Science.gov (United States)

    Engerman, Jason; Rusek, Matthew; Clariana, Roy

    2014-01-01

    This experiment investigates the effectiveness of Excel spreadsheets in a high school algebra class. Students in the experiment group convincingly outperformed the control group on a post-lesson assessment. Student responses and teacher observations revealed that the Excel spreadsheet operated as a mindtool, which formed the users'…

  8. Investigation of divisibility in a spreadsheet environment

    Directory of Open Access Journals (Sweden)

    Stanislav Lukáč

    2016-07-01

    Full Text Available This classroom note is focused on the application of inquiry approaches to teaching divisibility in the set of whole numbers. The main attention is devoted to the composition of a sequence of questions, implemented within the spreadsheet environment, whose solution should encourage students to actively learn and discover the divisibility rule for the number eleven. This is an extra-curricular topic in Slovakia, taught in extended mathematics classes, as many of their students encounter it in competitions. An important part of the development of the questions on workbook sheets is the implementation of feedback, which provides evaluation of students' solutions and auxiliary instructions for the guidance of learning.
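
    The rule the questions lead students toward, a number is divisible by 11 exactly when the alternating sum of its digits is, can be checked directly (a sketch of the rule itself, not of the workbook's question sequence):

      def alternating_digit_sum(n):
          digits = [int(d) for d in str(abs(n))]
          return sum(d if i % 2 == 0 else -d for i, d in enumerate(digits))

      def divisible_by_11(n):
          return alternating_digit_sum(n) % 11 == 0

      for n in (121, 918082, 987654):
          print(n, alternating_digit_sum(n), divisible_by_11(n), n % 11 == 0)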

  9. Excel spreadsheet in teaching numerical methods

    Science.gov (United States)

    Djamila, Harimi

    2017-09-01

    One of the important objectives in teaching numerical methods to undergraduate students is to bring about comprehension of numerical method algorithms. Although manual calculation is important in understanding the procedure, it is time consuming and prone to error. This is specifically the case when considering the iteration procedures used in many numerical methods. Currently, many commercial programs are useful in teaching numerical methods, such as Matlab, Maple, and Mathematica. These are usually not user-friendly for the uninitiated. An Excel spreadsheet offers an initial level of programming, and it can be used either on or off campus. The students will not be distracted by writing code. It must be emphasized that general commercial software should still be introduced later for more elaborate problems. This article aims to report on a strategy for teaching numerical methods in undergraduate engineering programs. It is directed to students, lecturers and researchers in the engineering field.
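
    As an example of the kind of iteration table students typically build, here is Newton's method for √2 laid out one row per step (a generic illustration of the approach, not an example from the article):

      def newton_sqrt2(x0=1.0, steps=6):
          f  = lambda x: x * x - 2.0
          df = lambda x: 2.0 * x
          x = x0
          for k in range(steps):        # one spreadsheet row per iteration
              print(f"{k}  x={x:.12f}  f(x)={f(x): .2e}")
              x = x - f(x) / df(x)
          return x

      newton_sqrt2()   # converges to 1.414213562373...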

  10. Spreadsheet eases heat balance, payback calculations

    International Nuclear Information System (INIS)

    Conner, K.P.

    1992-01-01

    This paper reports that a generalized Lotus-type spreadsheet program has been developed to perform the heat balance and simple payback calculations for various turbine-generator (TG) inlet steam pressures. It can be used for potential plant expansions or new cogeneration installations. The program performs the basic heat balance calculations that are associated with the turbine-generator, feedwater heating, process steam requirements and desuperheating. The printout shows the basic data and formulation used in the calculations. The turbine efficiency data used are applicable for automatic extraction turbine-generators in the 30-80 MW range. Simple payback calculations are for chemical recovery boilers and power boilers used in the pulp and paper industry. However, the program will also accommodate boilers common to other industries

  11. Phylogenetic Conflict in Bears Identified by Automated Discovery of Transposable Element Insertions in Low-Coverage Genomes

    Science.gov (United States)

    Gallus, Susanne; Janke, Axel

    2017-01-01

    Abstract Phylogenetic reconstruction from transposable elements (TEs) offers an additional perspective to study evolutionary processes. However, detecting phylogenetically informative TE insertions requires tedious experimental work, limiting the power of phylogenetic inference. Here, we analyzed the genomes of seven bear species using high-throughput sequencing data to detect thousands of TE insertions. The newly developed pipeline for TE detection called TeddyPi (TE detection and discovery for Phylogenetic Inference) identified 150,513 high-quality TE insertions in the genomes of ursine and tremarctine bears. By integrating different TE insertion callers and using a stringent filtering approach, the TeddyPi pipeline produced highly reliable TE insertion calls, which were confirmed by extensive in vitro validation experiments. Analysis of single nucleotide substitutions in the flanking regions of the TEs shows that these substitutions correlate with the phylogenetic signal from the TE insertions. Our phylogenomic analyses show that TEs are a major driver of genomic variation in bears and enabled phylogenetic reconstruction of a well-resolved species tree, despite strong signals for incomplete lineage sorting and introgression. The analyses show that the Asiatic black, sun, and sloth bear form a monophyletic clade, in which phylogenetic incongruence originates from incomplete lineage sorting. TeddyPi is open source and can be adapted to various TE and structural variation callers. The pipeline makes it possible to confidently extract thousands of TE insertions even from low-coverage genomes (∼10×) of nonmodel organisms. This opens new possibilities for biologists to study phylogenies and evolutionary processes as well as rates and patterns of (retro-)transposition and structural variation. PMID:28985298

  12. Phylogenetic Conflict in Bears Identified by Automated Discovery of Transposable Element Insertions in Low-Coverage Genomes.

    Science.gov (United States)

    Lammers, Fritjof; Gallus, Susanne; Janke, Axel; Nilsson, Maria A

    2017-10-01

    Phylogenetic reconstruction from transposable elements (TEs) offers an additional perspective to study evolutionary processes. However, detecting phylogenetically informative TE insertions requires tedious experimental work, limiting the power of phylogenetic inference. Here, we analyzed the genomes of seven bear species using high-throughput sequencing data to detect thousands of TE insertions. The newly developed pipeline for TE detection called TeddyPi (TE detection and discovery for Phylogenetic Inference) identified 150,513 high-quality TE insertions in the genomes of ursine and tremarctine bears. By integrating different TE insertion callers and using a stringent filtering approach, the TeddyPi pipeline produced highly reliable TE insertion calls, which were confirmed by extensive in vitro validation experiments. Analysis of single nucleotide substitutions in the flanking regions of the TEs shows that these substitutions correlate with the phylogenetic signal from the TE insertions. Our phylogenomic analyses show that TEs are a major driver of genomic variation in bears and enabled phylogenetic reconstruction of a well-resolved species tree, despite strong signals for incomplete lineage sorting and introgression. The analyses show that the Asiatic black, sun, and sloth bear form a monophyletic clade, in which phylogenetic incongruence originates from incomplete lineage sorting. TeddyPi is open source and can be adapted to various TE and structural variation callers. The pipeline makes it possible to confidently extract thousands of TE insertions even from low-coverage genomes (∼10×) of nonmodel organisms. This opens new possibilities for biologists to study phylogenies and evolutionary processes as well as rates and patterns of (retro-)transposition and structural variation. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  13. Affordances of Spreadsheets In Mathematical Investigation: Potentialities For Learning

    Directory of Open Access Journals (Sweden)

    Nigel Calder

    2009-10-01

    Full Text Available This article is concerned with the ways learning is shaped when mathematics problems are investigated in spreadsheet environments. It considers how the opportunities and constraints the digital media affords influenced the decisions the students made, and the direction of their enquiry pathway. How might the learning trajectory unfold, and how might the learning process and mathematical understanding emerge? Will the spreadsheet, as the pedagogical medium, evoke learning in a distinctive manner? The article reports on an aspect of an ongoing study involving students as they engage with mathematical investigative tasks through digital media, the spreadsheet in particular. It considers the affordances of this learning environment for primary-aged students.

  14. Service discovery at home

    NARCIS (Netherlands)

    Sundramoorthy, V.; Scholten, Johan; Jansen, P.G.; Hartel, Pieter H.

    2003-01-01

    Service discovery is a fairly new field that kicked off since the advent of ubiquitous computing and has been found essential in the making of intelligent networks by implementing automated discovery and remote control between devices. This paper provides an overview and comparison of several

  15. Service Discovery At Home

    NARCIS (Netherlands)

    Sundramoorthy, V.; Scholten, Johan; Jansen, P.G.; Hartel, Pieter H.

    Service discovery is a fairly new field that kicked off since the advent of ubiquitous computing and has been found essential in the making of intelligent networks by implementing automated discovery and remote control between devices. This paper provides an overview and comparison of several prominent

  16. Spreadsheet Decision Support Model for Training Exercise Material Requirements Planning

    National Research Council Canada - National Science Library

    Tringali, Arthur

    1997-01-01

    ... associated with military training exercises. The model combines the business practice of Material Requirements Planning and the commercial spreadsheet software capabilities of Lotus 1-2-3 to calculate the requirements for food, consumable...

  17. Spreadsheet-Enhanced Problem Solving in Context as Modeling

    Directory of Open Access Journals (Sweden)

    Sergei Abramovich

    2003-07-01

    development through situated mathematical problem solving. Modeling activities described in this paper support the epistemological position regarding the interplay that exists between the development of mathematical concepts and available methods of calculation. The spreadsheet used is Microsoft Excel 2001

  18. Spreadsheet Decision Support Model for Training Exercise Material Requirements Planning

    National Research Council Canada - National Science Library

    Tringali, Arthur

    1997-01-01

    This thesis focuses on developing a spreadsheet decision support model that can be used by combat engineer platoon and company commanders in determining the material requirements and estimated costs...

  19. A Spreadsheet for Estimating Soil Water Characteristic Curves (SWCC)

    Science.gov (United States)

    2017-05-01

    SWCCs. The spreadsheet presented herein utilizes different methods that use basic soil properties, such as grain size distribution and Atterberg … model. The spreadsheet calculates the SWCCs using seven different methods for comparison. It also compares four closed-form models, Gardner (1958) … installed instrumentation. To perform a transient seepage analysis using modeling programs like SEEP/W (Geo-Slope 2012), it is required to provide the

  20. Using spreadsheet modelling to teach about feedback in physics

    Science.gov (United States)

    Lingard, Michael

    2003-09-01

    This article looks generally at spreadsheet modelling of feedback situations. It has several benefits as a teaching tool. Additionally, a consideration of the limitations of calculating at many discrete points can lead, at A-level, to an appreciation of the need for the calculus. Feedback situations can be used to introduce the idea of differential equations. Microsoft Excel™ is the spreadsheet used.
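
    A sketch of the discrete-step modelling the article describes (an assumed RC-discharge feedback loop; halving the time step shows why the continuous limit, and hence the calculus, is needed):

      import math

      def discharge(V0=10.0, R=1e3, C=1e-3, dt=0.1, t_end=1.0):
          """Step V[n+1] = V[n] - dt*V[n]/(R*C): the output feeds back on itself."""
          V, t = V0, 0.0
          while t < t_end - 1e-12:
              V -= dt * V / (R * C)
              t += dt
          return V

      for dt in (0.1, 0.01, 0.001):        # finer steps approach the true decay
          print(dt, discharge(dt=dt))
      print("exact:", 10.0 * math.exp(-1.0))   # RC = 1 s, so V(1) = V0 / e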

  1. 18 excel spreadsheets by species and year giving reproduction and growth data. One excel spreadsheet of herbicide treatment chemistry.

    Data.gov (United States)

    U.S. Environmental Protection Agency — Excel spreadsheets by species (4 letter code is abbreviation for genus and species used in study, year 2010 or 2011 is year data collected, SH indicates data for...

  2. Electronic spreadsheet to acquire the reflectance from the TM and ETM+ Landsat images

    Directory of Open Access Journals (Sweden)

    Antonio R. Formaggio

    2005-08-01

    Full Text Available The reflectance of agricultural cultures and other terrestrial surface "targets" is an intrinsic parameter of these targets, so in many situations it must be used instead of the values of "gray levels" that are found in the satellite images. In order to get reflectance values, it is necessary to eliminate the atmospheric interference and to make a set of calculations that uses sensor parameters and information regarding the original image. The automation of this procedure has the advantage of speeding up the process and reducing the possibility of errors during the calculations. The objective of this paper is to present an electronic spreadsheet that simplifies and automates the transformation of the digital numbers of the TM/Landsat-5 and ETM+/Landsat-7 images into reflectance. The method employed for atmospheric correction was the dark object subtraction (DOS). The electronic spreadsheet described here is freely available to users and can be downloaded at the following website: http://www.dsr.inpe.br/Calculo_Reflectancia.xls.
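
    The conversion chain the spreadsheet automates follows the textbook TM/ETM+ formulas; a sketch (the calibration numbers below are placeholders, since real gain, offset, and ESUN values are band- and scene-specific):

      import math

      def dn_to_reflectance(dn, gain, offset, esun, d_au, sun_elev_deg, dn_haze=0):
          """DN -> at-sensor radiance -> reflectance, with a simple dark-object
          subtraction (DOS): radiance of the darkest pixel is treated as
          atmospheric path radiance and removed."""
          radiance = gain * dn + offset                 # W/(m^2 sr um)
          haze = gain * dn_haze + offset
          theta = math.radians(90.0 - sun_elev_deg)     # solar zenith angle
          return math.pi * (radiance - haze) * d_au ** 2 / (esun * math.cos(theta))

      # placeholder calibration for one band of one scene
      print(dn_to_reflectance(dn=87, gain=0.7757, offset=-6.2, esun=1983.0,
                              d_au=1.0124, sun_elev_deg=52.0, dn_haze=8))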

  3. An integration of spreadsheet and project management software for cost optimal time scheduling in construction

    Directory of Open Access Journals (Sweden)

    Valenko Tadej

    2017-12-01

    Full Text Available Successful performance and completion of construction projects depend highly on adequate time scheduling of the project activities. When implementing a time schedule, the execution modes of activities most often need to be set in a manner that achieves the minimum total project cost. This paper presents an approach to cost-optimal time scheduling that integrates a spreadsheet application with data transfer to project management software (PMS). Here, the optimization problem of project time scheduling is modelled in Microsoft Excel and solved to optimality using Solver, while the organization of data is handled by macros. Microsoft Project software is then utilized for further management and presentation of the optimized time scheduling solution. In this way, the data flow between the programs is automated and the possibility of errors occurring during the scheduling process is reduced to a minimum. Moreover, the integration of a spreadsheet and PMS for cost-optimal time scheduling in construction is performed within a well-known program environment, which increases the possibilities of its wider use in practice. An application example is shown in this paper to demonstrate the advantages of the proposed approach.

  4. Automated Discovery of Mimicry Attacks

    National Research Council Canada - National Science Library

    Giffin, Jonathon T; Jha, Somesh; Miller, Barton P

    2006-01-01

    .... These systems are useful only if they detect actual attacks. Previous research developed manually-constructed mimicry and evasion attacks that avoided detection by hiding a malicious series of system calls within a valid sequence allowed by the model...

  5. Novel Spreadsheet Direct Method for Optimal Control Problems

    Directory of Open Access Journals (Sweden)

    Chahid Kamel Ghaddar

    2018-01-01

    Full Text Available We devise a simple yet highly effective technique for solving general optimal control problems in Excel spreadsheets. The technique exploits Excel’s native nonlinear programming (NLP) Solver command, in conjunction with two calculus worksheet functions, namely an initial value problem solver and a discrete data integrator, in a direct solution paradigm adapted to the spreadsheet. The technique is tested on several highly nonlinear constrained multivariable control problems with remarkable results in terms of reliability, consistency with reported pseudo-spectral answers, and computing times on the order of seconds. The technique requires no more than defining a few formulas analogous to the problem's mathematical equations using basic spreadsheet operations, and no programming skills are needed. It introduces an alternative, simpler tool for solving optimal control problems in social and natural science disciplines.
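
    The direct paradigm described, parameterize the control on a grid, integrate the dynamics, and hand the cost to a generic NLP solver, can be sketched outside Excel as well (here with SciPy's SLSQP standing in for Excel's Solver; the double-integrator test problem is an assumption, not one of the paper's cases):

      import numpy as np
      from scipy.optimize import minimize

      N, T = 20, 1.0
      dt = T / N

      def simulate(u):
          # double integrator: x' = v, v' = u, from x=1, v=0 (Euler steps)
          x, v, cost = 1.0, 0.0, 0.0
          for uk in u:
              x, v = x + dt * v, v + dt * uk
              cost += dt * uk ** 2          # integral of control effort
          return x, v, cost

      def objective(u):
          return simulate(u)[2]

      def terminal_state(u):
          x, v, _ = simulate(u)
          return [x, v]                      # drive both to zero at t = T

      res = minimize(objective, np.zeros(N),
                     constraints={"type": "eq", "fun": terminal_state},
                     method="SLSQP")
      print(res.success, round(res.fun, 3))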

  6. A Comparative Study of Spreadsheet Applications on Mobile Devices

    Directory of Open Access Journals (Sweden)

    Veera V. S. M. Chintapalli

    2016-01-01

    Full Text Available Advances in mobile screen sizes and feature enhancement for mobile applications have increased the number of users accessing spreadsheets on mobile devices. This paper reports a comparative usability study on four popular mobile spreadsheet applications: OfficeSuite Viewer 6, Documents To Go, ThinkFree Online, and Google Drive. We compare them against three categories of usability criteria: visibility; navigation, scrolling, and feedback; and interaction, satisfaction, simplicity, and convenience. Measures for each criterion were derived in a survey. Questionnaires were designed to address the measures based on the comparative criteria provided in the analysis.

  7. Using the Talbot_Lau_interferometer_parameters Spreadsheet

    Energy Technology Data Exchange (ETDEWEB)

    Kallman, Jeffrey S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-06-04

    Talbot-Lau interferometers allow incoherent X-ray sources to be used for phase contrast imaging. A spreadsheet for exploring the parameter space of Talbot and Talbot-Lau interferometers has been assembled. This spreadsheet allows the user to examine the consequences of choosing phase grating pitch, source energy, and source location on the overall geometry of a Talbot or Talbot-Lau X-ray interferometer. For the X-ray energies required to penetrate scanned luggage the spacing between gratings is large enough that the mechanical tolerances for amplitude grating positioning are unlikely to be met.

  8. Spreadsheets, Graphing Calculators and the Line of Best Fit

    Directory of Open Access Journals (Sweden)

    Bernie O'Sullivan

    2003-07-01

    One technique that can now be done, almost mindlessly, is the line of best fit. Both the graphing calculator and the Excel spreadsheet produce models for collected data that appear to be very good fits, but upon closer scrutiny, are revealed to be quite poor. This article will examine one such case. I will couch the paper within the framework of a very good classroom investigation that will help generate students’ understanding of the basic principles of curve fitting and will enable them to produce a very accurate model of collected data by combining the technology of the graphing calculator and the spreadsheet.
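
    One frequently cited source of this mismatch — assumed here for illustration, not necessarily the case examined in the article — is that spreadsheet exponential trendlines are usually fitted by linear least squares on log-transformed data, which weights small values as heavily as large ones. A direct nonlinear fit on the raw data can give noticeably different parameters:

        import numpy as np
        from scipy.optimize import curve_fit

        x = np.arange(1, 9, dtype=float)
        y = np.array([2.1, 4.3, 8.2, 17.0, 30.5, 65.0, 132.0, 255.0])  # made-up data

        # "Trendline" approach: straight-line fit to log(y).
        b1, log_a1 = np.polyfit(x, np.log(y), 1)
        a1 = np.exp(log_a1)

        # Direct nonlinear least squares on the original data.
        (a2, b2), _ = curve_fit(lambda x, a, b: a * np.exp(b * x), x, y, p0=(1.0, 0.5))

        for name, a, b in (("log-transformed", a1, b1), ("direct", a2, b2)):
            sse = np.sum((y - a * np.exp(b * x)) ** 2)
            print(f"{name}: a = {a:.3f}, b = {b:.3f}, SSE = {sse:.1f}")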

  9. Information Spreadsheet for Engines and Vehicles Compliance Information System (EV-CIS) User Registration

    Science.gov (United States)

    In this spreadsheet, user(s) provide their company’s manufacturer code, user contact information for EV-CIS, and user roles. This spreadsheet is used for the Company Authorizing Official (CAO), CROMERR Signer, and EV-CIS Submitters.

  10. Breaking Out of the Cell: On The Benefits of a New Spreadsheet User-Interaction Paradigm

    OpenAIRE

    Hellman, Ziv

    2008-01-01

    Contemporary spreadsheets are plagued by a profusion of errors, auditing difficulties, lack of uniform development methodologies, and barriers to easy comprehension of the underlying business models they represent. This paper presents a case that most of these difficulties stem from the fact that the standard spreadsheet user-interaction paradigm - the 'cell-matrix' approach - is appropriate for spreadsheet data presentation but has significant drawbacks with respect to spreadsheet creation, ...

  11. When Spreadsheets Become Software - Quality Control Challenges and Approaches - 13360

    International Nuclear Information System (INIS)

    Fountain, Stefanie A.; Chen, Emmie G.; Beech, John F.; Wyatt, Elizabeth E.; Quinn, Tanya B.; Seifert, Robert W.; Bonczek, Richard R.

    2013-01-01

    As part of a preliminary waste acceptance criteria (PWAC) development, several commercial models were employed, including the Hydrologic Evaluation of Landfill Performance model (HELP) [1], the Disposal Unit Source Term - Multiple Species model (DUSTMS) [2], and the Analytical Transient One, Two, and Three-Dimensional model (AT123D) [3]. The results of these models were post-processed in MS Excel spreadsheets to convert the model results to alternate units, compare the groundwater concentrations to the groundwater concentration thresholds, and then to adjust the waste contaminant masses (based on average concentration over the waste volume) as needed in an attempt to achieve groundwater concentrations at the limiting point of assessment that would meet the compliance concentrations while maximizing the potential use of the landfill (i.e., maximizing the volume of projected waste being generated that could be placed in the landfill). During the course of the PWAC calculation development, one of the Microsoft (MS) Excel spreadsheets used to post-process the results of the commercial model packages grew to include more than 575,000 formulas across 18 worksheets. This spreadsheet was used to assess six base scenarios as well as nine uncertainty/sensitivity scenarios. The complexity of the spreadsheet resulted in the need for a rigorous quality control (QC) procedure to verify data entry and confirm the accuracy of formulas. (authors)

  12. Introducing Artificial Neural Networks through a Spreadsheet Model

    Science.gov (United States)

    Rienzo, Thomas F.; Athappilly, Kuriakose K.

    2012-01-01

    Business students taking data mining classes are often introduced to artificial neural networks (ANN) through point and click navigation exercises in application software. Even if correct outcomes are obtained, students frequently do not obtain a thorough understanding of ANN processes. This spreadsheet model was created to illuminate the roles of…

  13. A Spreadsheet-based GIS tool for planning aerial photography

    Science.gov (United States)

    The U.S.EPA's Pacific Coastal Ecology Branch has developed a tool which facilitates planning aerial photography missions. This tool is an Excel spreadsheet which accepts various input parameters such as desired photo-scale and boundary coordinates of the study area and compiles ...

  14. Using a general purpose spreadsheet software package to estimate ...

    African Journals Online (AJOL)

    The objective of this analysis was to evaluate the accuracy of a standard spreadsheet software package to estimate best-fit parameters for an exponential plus constant model (y=a+b.e cx) applied to blood lactate concentration versus work rate data. During an incremental cycle test, blood lactate concentrations were ...

  15. Negative Effects of Learning Spreadsheet Management on Learning Database Management

    Science.gov (United States)

    Vágner, Anikó; Zsakó, László

    2015-01-01

    A lot of students learn spreadsheet management before database management. Their similarities can cause a lot of negative effects when learning database management. In this article, we consider these similarities and explain what can cause problems. First, we analyse the basic concepts such as table, database, row, cell, reference, etc. Then, we…

  16. Development of an Excel spreadsheet for mean glandular dose in mammography

    International Nuclear Information System (INIS)

    Nagoshi, Kazuyo; Fujisaki, Tatsuya

    2008-01-01

    The purpose of this study was to develop an Excel spreadsheet to calculate the mean glandular dose (Dg) in mammography using clinical exposure data. Dg can be calculated as the product of the incident air kerma (Ka) and the normalized glandular dose coefficient DgN (i.e., Dg = Ka × DgN). According to the method of Klein et al (Phys Med Biol 1997; 42: 651-671), Ka was measured at the entrance surface with an ionization dosimeter. Normalized glandular dose (DgN) coefficients, taking into account breast glandularity, were computed using Boone's method (Med Phys 2002; 29: 869-875). DgN coefficients can be calculated for any arbitrary X-ray spectrum. These calculation procedures were input into a Microsoft Excel spreadsheet. The resulting Excel spreadsheet is easy to use and is always applicable in the field of mammography. The exposure conditions concerning Dg in clinical practice were also investigated in 22 women. Four exposure conditions (target/filter combination and tube voltage) were automatically selected in this study. This investigation found that the average Dg for each exposure was 1.9 mGy. Because it is recommended that quality control of radiation dose management in mammography be done using an American College of Radiology (ACR) phantom, information about patient dose is not obtained in many facilities. The present Excel spreadsheet was accordingly considered useful for optimization of exposure conditions and for explanation of mammography to patients. (author)
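
    The core calculation the spreadsheet automates is a single product per exposure. A minimal sketch with made-up numbers (real DgN values come from Boone-style tables and depend on spectrum, half-value layer and glandularity):

        # Hypothetical inputs for one exposure:
        K_a  = 8.5    # mGy, incident air kerma measured with an ionization dosimeter
        D_gN = 0.22   # dimensionless coefficient, assumed from Boone-style tables

        D_g = K_a * D_gN      # mean glandular dose
        print(f"mean glandular dose: {D_g:.2f} mGy")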

  17. Solving L-L Extraction Problems with Excel Spreadsheet

    Science.gov (United States)

    Teppaitoon, Wittaya

    2016-01-01

    This work aims to demonstrate the use of Excel spreadsheets for solving L-L extraction problems. The key to solving the problems successfully is to be able to determine a tie line on the ternary diagram where the calculation must be carried out. This enables the reader to analyze the extraction process starting with a simple operation, the…

  18. Spreadsheet based teaching aids in chemical engineering education

    OpenAIRE

    Heyen, Georges; Kalitventzeff, Boris

    1999-01-01

    Modern spreadsheet programs offer a wide range of calculation, charting and dialog options, allowing the development of effective teaching aids. Examples of worksheets illustrating numerical analysis, thermodynamics and unit operations are shown. All offer complete interaction between the problem statement and the presentation of results.

  19. A methodology for constructing the calculation model of scientific spreadsheets

    NARCIS (Netherlands)

    Vos, de M.; Wielemaker, J.; Schreiber, G.; Wielinga, B.; Top, J.L.

    2015-01-01

    Spreadsheets models are frequently used by scientists to analyze research data. These models are typically described in a paper or a report, which serves as single source of information on the underlying research project. As the calculation workflow in these models is not made explicit, readers are

  20. When Spreadsheets Become Software - Quality Control Challenges and Approaches - 13360

    Energy Technology Data Exchange (ETDEWEB)

    Fountain, Stefanie A.; Chen, Emmie G.; Beech, John F. [Geosyntec Consultants, Inc., 1255 Roberts Boulevard NW, Suite 200, Kennesaw, GA 30144 (United States); Wyatt, Elizabeth E. [LATA Environmental Services of Kentucky, LLC, 761 Veterans Ave, Kevil, KY 42053 (United States); Quinn, Tanya B. [Geosyntec Consultants, Inc., 2002 Summit Boulevard NE, Suite 885, Atlanta, GA 30319 (United States); Seifert, Robert W. [Portsmouth/Paducah Project Office, United States Department of Energy, 5600 Hobbs Rd, Kevil, KY 42053 (United States); Bonczek, Richard R. [Portsmouth/Paducah Project Office, United States Department of Energy, 1017 Majestic Drive, Lexington, KY 40513 (United States)

    2013-07-01

    As part of a preliminary waste acceptance criteria (PWAC) development, several commercial models were employed, including the Hydrologic Evaluation of Landfill Performance model (HELP) [1], the Disposal Unit Source Term - Multiple Species model (DUSTMS) [2], and the Analytical Transient One, Two, and Three-Dimensional model (AT123D) [3]. The results of these models were post-processed in MS Excel spreadsheets to convert the model results to alternate units, compare the groundwater concentrations to the groundwater concentration thresholds, and then to adjust the waste contaminant masses (based on average concentration over the waste volume) as needed in an attempt to achieve groundwater concentrations at the limiting point of assessment that would meet the compliance concentrations while maximizing the potential use of the landfill (i.e., maximizing the volume of projected waste being generated that could be placed in the landfill). During the course of the PWAC calculation development, one of the Microsoft (MS) Excel spreadsheets used to post-process the results of the commercial model packages grew to include more than 575,000 formulas across 18 worksheets. This spreadsheet was used to assess six base scenarios as well as nine uncertainty/sensitivity scenarios. The complexity of the spreadsheet resulted in the need for a rigorous quality control (QC) procedure to verify data entry and confirm the accuracy of formulas. (authors)

  1. A Spreadsheet-Based, Matrix Formulation Linear Programming Lesson

    DEFF Research Database (Denmark)

    Harrod, Steven

    2009-01-01

    The article focuses on the spreadsheet-based, matrix formulation linear programming lesson. According to the article, it makes a higher level of theoretical mathematics approachable by a wide spectrum of students wherein many may not be decision sciences or quantitative methods majors. Moreover...

  2. Using a Spreadsheet Scroll Bar to Solve Equilibrium Concentrations

    Science.gov (United States)

    Raviolo, Andres

    2012-01-01

    A simple, conceptual method is described for using the spreadsheet scroll bar to find the composition of a system at chemical equilibrium. Simulation of any kind of chemical equilibrium can be carried out using this method, and the effects of different disturbances can be predicted. This simulation, which can be used in general chemistry…

  3. Knowing what was done: uses of a spreadsheet log file

    Directory of Open Access Journals (Sweden)

    Andy Adler

    2005-10-01

    Full Text Available Spreadsheet use in educational environments has become widespread, likely because of the flexibility and ease of use of these tools. However, they have serious shortcomings if the teacher is to understand exactly what students or others have done. It is far too easy for students to replace a formula that gives an apparently unacceptable answer with a number that they believe to be correct. The same concern applies to recorded marks, as well as to business spreadsheets and to other reports that are used for decision-making. While intentionally misleading changes to spreadsheet files receive much attention, simple mistakes are probably more common. Some of these, such as the Trans-Alta Utilities cut-and-paste error that cost the firm $24 million (US) (Globe and Mail, 2003), have extreme consequences. Few are merely embarrassing. A log file or audit trail, enhanced by suitable filters, can allow both intentional and accidental changes that cause erroneous results to be caught. In order to meet these requirements, we have developed a server-based software tool (“TellTable”) which allows editing, version control, and auditing of spreadsheet files. Users connect to the server using a standard web browser, and are able to access and edit spreadsheet files in a Java applet in the browser window. TellTable has been used in a pilot study to maintain marks and course information for a multi-section course with several instructors and teaching assistants. This paper describes the TellTable software and preliminary results of the pilot test.

  4. Spreadsheets Across the Curriculum, 1: The Idea and the Resource

    Directory of Open Access Journals (Sweden)

    H.L. Vacher

    2010-07-01

    Full Text Available This paper introduces Spreadsheets Across the Curriculum, a workshop-based educational materials development project to build a resource to facilitate connecting mathematics and context in undergraduate college courses where mathematical problem solving is relevant. The central idea is “spreadsheet modules,” which, in essence, are elaborate word problems in the form of short PowerPoint presentations with embedded Excel spreadsheets. Students work through the presentations on their own, making and/or completing the spreadsheets displayed on the slides in order to perform calculations or draw graphs that address the issue (context posed in the word problem. The end result of the project is the resource: an online library of 55 modules developed by 40 authors from 21 institutions that touch on 26 subjects as differentiated by Library of Congress classification categories. Judging from online requests for instructor versions, the SSAC Web site disseminated the SSAC module idea to an additional 60 institutions via instructors of courses with 67 different titles. The disciplinary distribution of authors and requests for instructor versions shows that the SSAC resource serves both sides of the mathematics-in-context interpretation of quantitative literacy: mathematics educators seeking ways of bringing context into their teaching of mathematics; non-mathematics educators seeking to infuse mathematics into their teaching of disciplinary subjects. The SSAC experience suggests two answers to the question: “What works to spread teaching of QL across the curriculum?”—spreadsheet exercises in which students do math to solve problems, and workshops or workshop sessions that focus on educational materials.

  5. The Flight of the Space Shuttle "Discovery" (STS-119)

    Science.gov (United States)

    Stinner, Arthur; Metz, Don

    2010-01-01

    This article is intended to model the ascent of the space shuttle for high school teachers and students. It provides a background for a sufficiently comprehensive description of the physics (kinematics and dynamics) of the March 16, 2009, "Discovery" launch. Our data are based on a comprehensive spreadsheet kindly sent to us by Bill Harwood, the…

  6. A spreadsheet-coupled SOLGAS: A computerized thermodynamic equilibrium calculation tool. Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    Trowbridge, L.D.; Leitnaker, J.M. [Oak Ridge K-25 Site, TN (United States). Technical Analysis and Operations Div.

    1995-07-01

    SOLGAS, an early computer program for calculating equilibrium in a chemical system, has been made more user-friendly, and several “bells and whistles” have been added. The necessity to include elemental species has been eliminated. The input of large numbers of starting conditions has been automated. A revised spreadsheet-based format for entering data, including non-ideal binary and ternary mixtures, simplifies and reduces chances for error. Calculational errors by SOLGAS are flagged, and several programming errors are corrected. Auxiliary programs are available to assemble and partially automate plotting of large amounts of data. Thermodynamic input data can be changed on line. The program can be operated with or without a co-processor. Copies of the program, suitable for the IBM-PC or compatibles with at least 384K bytes of low RAM, are available from the authors. This user manual contains appendices with examples of the use of SOLGAS. These range from elementary examples, such as the relationships among water, ice, and water vapor, to more complex systems: phase diagram calculation of the UF₄ and UF₆ system; burning UF₄ in fluorine; thermodynamic calculation of the Cl-F-O-H system; equilibria calculations in the CCl₄-CH₃OH system; and limitations applicable to aqueous solutions. An appendix also contains the source code.

  7. Discrete Phase-Locked Loop Systems and Spreadsheets

    Directory of Open Access Journals (Sweden)

    Sergei Abramovich

    2005-10-01

    Full Text Available This paper demonstrates the use of a spreadsheet in exploring non-linear difference equations that describe digital control systems used in radio engineering, communication and computer architecture. These systems, being the focus of intensive studies of mathematicians and engineers over the last 40 years, may exhibit extremely complicated behavior interpreted in contemporary terms as transition from global asymptotic stability to chaos through period-doubling bifurcations. The authors argue that embedding advanced mathematical ideas in the technological tool enables one to introduce fundamentals of discrete control systems in tertiary curricula without learners having to deal with complex machinery that rigorous mathematical methods of investigation require. In particular, in the appropriately designed spreadsheet environment, one can effectively visualize a qualitative difference in the behavior of systems with different types of non-linear characteristic.
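
    A spreadsheet exploration of this kind boils down to iterating a one-dimensional non-linear map down a column and watching how the orbit changes with a gain parameter. The sketch below uses a generic first-order digital-PLL phase map (a standard textbook form assumed here, not necessarily the equation studied in the paper); sweeping the gain shows the orbit settling, cycling, or wandering:

        import math

        def iterate_pll(gain, omega=0.6, phi0=0.1, n=2000, keep=8):
            # phi[n+1] = phi[n] + omega - gain * sin(phi[n])   (mod 2*pi)
            phi = phi0
            tail = []
            for i in range(n):
                phi = (phi + omega - gain * math.sin(phi)) % (2 * math.pi)
                if i >= n - keep:              # keep only post-transient values
                    tail.append(round(phi, 4))
            return tail

        for gain in (1.0, 2.5, 3.5):           # assumed sample gains
            print(gain, sorted(set(iterate_pll(gain))))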

  8. Introduction to supercritical fluids a spreadsheet-based approach

    CERN Document Server

    Smith, Richard; Peters, Cor

    2013-01-01

    This text provides an introduction to supercritical fluids with easy-to-use Excel spreadsheets suitable for both specialized-discipline (chemistry or chemical engineering student) and mixed-discipline (engineering/economic student) classes. Each chapter contains worked examples, tip boxes and end-of-the-chapter problems and projects. Part I covers web-based chemical information resources, applications and simplified theory presented in a way that allows students of all disciplines to delve into the properties of supercritical fluids and to design energy, extraction and materials formation systems for real-world processes that use supercritical water or supercritical carbon dioxide. Part II takes a practical approach and addresses the thermodynamic framework, equations of state, fluid phase equilibria, heat and mass transfer, chemical equilibria and reaction kinetics of supercritical fluids. Spreadsheets are arranged as Visual Basic for Applications (VBA) functions and macros that are completely (source code) ...

  9. The Architecture of a Complex GIS & Spreadsheet Based DSS

    Directory of Open Access Journals (Sweden)

    Dinu Airinei

    2010-01-01

    Full Text Available The decision support applications available on today’s market tend to combine the decision analysis of historical data, based on On-Line Analytical Processing (OLAP) products or spreadsheet pivot tables, with new reporting facilities such as alerts or key performance indicators available in portal dashboards or in complex spreadsheet-like reports, both corresponding to a new approach to the field called Business Intelligence. Moreover, the geographical features of GIS added to DSS applications are becoming more and more required by many kinds of businesses. In fact, they are more useful this way than as distinctive parts. The paper tries to present a DSS architecture based on the association between such approaches and technologies. The particular examples are meant to support the theoretical arguments and to complete the understanding of the interaction schemas available.

  10. A Spreadsheet-Based, Matrix Formulation Linear Programming Lesson

    DEFF Research Database (Denmark)

    Harrod, Steven

    2009-01-01

    The article focuses on the spreadsheet-based, matrix formulation linear programming lesson. According to the article, it makes a higher level of theoretical mathematics approachable by a wide spectrum of students wherein many may not be decision sciences or quantitative methods majors. Moreover......, it is consistent with the Arganbright Principles because the arrays and functions are clear in their operation and easily manipulated by the user....

  11. Firing Range Contaminants and Climate Change Tool: Spreadsheet User Instructions

    Science.gov (United States)

    2017-09-18

    The Firing Range Contaminants and Climate Change spreadsheet was developed as a tool to help assess the cost of range management strategies for various potential climate futures. The spreadsheet evaluates scenarios using the discount rate input by the user, and range managers may use the resulting graph to choose a primary management strategy for the range. The tool also utilizes the “Likelihood of Occurrence” provided for each assessed climate scenario, which is useful in choosing a management strategy that considers multiple climate futures.

  12. Automated Phase Mapping with AgileFD and its Application to Light Absorber Discovery in the V-Mn-Nb Oxide System.

    Science.gov (United States)

    Suram, Santosh K; Xue, Yexiang; Bai, Junwen; Le Bras, Ronan; Rappazzo, Brendan; Bernstein, Richard; Bjorck, Johan; Zhou, Lan; van Dover, R Bruce; Gomes, Carla P; Gregoire, John M

    2017-01-09

    Rapid construction of phase diagrams is a central tenet of combinatorial materials science with accelerated materials discovery efforts often hampered by challenges in interpreting combinatorial X-ray diffraction data sets, which we address by developing AgileFD, an artificial intelligence algorithm that enables rapid phase mapping from a combinatorial library of X-ray diffraction patterns. AgileFD models alloying-based peak shifting through a novel expansion of convolutional nonnegative matrix factorization, which not only improves the identification of constituent phases but also maps their concentration and lattice parameter as a function of composition. By incorporating Gibbs' phase rule into the algorithm, physically meaningful phase maps are obtained with unsupervised operation, and more refined solutions are attained by injecting expert knowledge of the system. The algorithm is demonstrated through investigation of the V-Mn-Nb oxide system where decomposition of eight oxide phases, including two with substantial alloying, provides the first phase map for this pseudoternary system. This phase map enables interpretation of high-throughput band gap data, leading to the discovery of new solar light absorbers and the alloying-based tuning of the direct-allowed band gap energy of MnV₂O₆. The open-source family of AgileFD algorithms can be implemented into a broad range of high throughput workflows to accelerate materials discovery.
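
    AgileFD itself extends convolutional non-negative matrix factorization; as a far simpler illustration of the underlying decomposition — expressing a library of diffraction patterns as mixtures of a few basis patterns — here is plain NMF on synthetic data (every number below is made up):

        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        q = np.linspace(10, 60, 300)                      # diffraction-angle grid

        def peak(center, width=0.4):
            return np.exp(-((q - center) ** 2) / (2 * width ** 2))

        # Three hypothetical pure-phase patterns mixed into 40 samples.
        phases = np.vstack([peak(18) + peak(33), peak(25) + peak(41), peak(29) + peak(52)])
        X = rng.random((40, 3)) @ phases + 0.01 * rng.random((40, 300))

        model = NMF(n_components=3, init="nndsvd", max_iter=500, random_state=0)
        W = model.fit_transform(X)      # per-sample phase concentrations
        H = model.components_           # recovered basis patterns
        print(W.shape, H.shape, round(model.reconstruction_err_, 3))

    Plain NMF cannot represent alloying-based peak shifts or enforce Gibbs' phase rule; those are precisely the gaps AgileFD's convolutional expansion and constraint injection address.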

  13. Automated security management

    CERN Document Server

    Al-Shaer, Ehab; Xie, Geoffrey

    2013-01-01

    In this contributed volume, leading international researchers explore configuration modeling and checking, vulnerability and risk assessment, configuration analysis, and diagnostics and discovery. The authors equip readers to understand automated security management systems and techniques that increase overall network assurability and usability. These constantly changing networks defend against cyber attacks by integrating hundreds of security devices such as firewalls, IPSec gateways, IDS/IPS, authentication servers, authorization/RBAC servers, and crypto systems. Automated Security Managemen

  14. Dismal: a spreadsheet for sequential data analysis and HCI experimentation.

    Science.gov (United States)

    Ritter, Frank E; Wood, Alexander B

    2005-02-01

    Dismal is a spreadsheet that works within GNU Emacs, a widely available programmable editor. Dismal has three features of particular interest to those who study behavior: (1) the ability to manipulate and align sequential data, (2) an open architecture that allows users to expand it to meet their particular needs, and (3) an instrumented and accessible interface for studies of human-computer interaction (HCI). Example uses of each of these capabilities are provided, including cognitive models that have had their sequential behavior aligned with subjects' protocols, extensions useful for teaching and doing HCI design, and studies in which keystroke logs from the timing package in Dismal have been used.

  15. Head First Excel A learner's guide to spreadsheets

    CERN Document Server

    Milton, Michael

    2010-01-01

    Do you use Excel for simple lists, but get confused and frustrated when it comes to actually doing something useful with all that data? Stop tearing your hair out: Head First Excel helps you painlessly move from spreadsheet dabbler to savvy user. Whether you're completely new to Excel or an experienced user looking to make the program work better for you, this book will help you incorporate Excel into every aspect of your workflow, from a scratch pad for data-based brainstorming to exploratory analysis with PivotTables, optimizing outcomes with Goal Seek, and presenting your conclusions wit

  16. Use of a commercial spreadsheet for quality control in radiotherapy

    International Nuclear Information System (INIS)

    Sales, D.A.G.; Batista, D.V.S.

    2001-01-01

    This work presents the results obtained from the development of a spreadsheet for quality control of the physical and clinical dosimetry of a radiotherapy service. It was developed using the resources of a commercial software package, so as to provide independent verification of the manual calculations and therapy planning system calculations used in routine procedures of the radiotherapy service of Instituto Nacional de Cancer. Its validation was made against the manual calculations currently proposed in the literature and against the results of the therapy planning system for test cases. (author)

  17. Application of an automated natural language processing (NLP) workflow to enable federated search of external biomedical content in drug discovery and development.

    Science.gov (United States)

    McEntire, Robin; Szalkowski, Debbie; Butler, James; Kuo, Michelle S; Chang, Meiping; Chang, Man; Freeman, Darren; McQuay, Sarah; Patel, Jagruti; McGlashen, Michael; Cornell, Wendy D; Xu, Jinghai James

    2016-05-01

    External content sources such as MEDLINE®, National Institutes of Health (NIH) grants and conference websites provide access to the latest breaking biomedical information, which can inform pharmaceutical and biotechnology company pipeline decisions. The value of the sites for industry, however, is limited by the use of the public internet, the limited synonyms, the rarity of batch searching capability and the disconnected nature of the sites. Fortunately, many sites now offer their content for download and we have developed an automated internal workflow that uses text mining and tailored ontologies for programmatic search and knowledge extraction. We believe such an efficient and secure approach provides a competitive advantage to companies needing access to the latest information for a range of use cases and complements manually curated commercial sources.

  18. Automated methods of corrosion measurement

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov; Bech-Nielsen, Gregers; Reeve, John Ch

    1997-01-01

    to revise assumptions regarding the basis of the method, which sometimes leads to the discovery of as-yet unnoticed phenomena. The present selection of automated methods for corrosion measurements is not motivated simply by the fact that a certain measurement can be performed automatically. Automation...

  19. End-User Service Computing: Spreadsheets as a Service Composition Tool

    NARCIS (Netherlands)

    Z. Obrenovic; D. Gasevic

    2008-01-01

    In this paper, we show how spreadsheets, an end-user development paradigm proven to be highly productive and simple to learn and use, can be used for complex service compositions. We identify the requirements for spreadsheet-based service composition, and present our framework that

  20. Combining information on structure and content to automatically annotate natural science spreadsheets

    NARCIS (Netherlands)

    Vos, de Martine; Wielemaker, Jan; Rijgersberg, Hajo; Schreiber, Guus; Wielinga, Bob; Top, Jan

    2017-01-01

    In this paper we propose several approaches for automatic annotation of natural science spreadsheets using a combination of structural properties of the tables and external vocabularies. During the design process of their spreadsheets, domain scientists implicitly include their domain model in

  1. Combining information on structure and content to automatically annotate natural science spreadsheets

    NARCIS (Netherlands)

    de Vos, Martine; Wielemaker, Jan; Rijgersberg, Hajo; Schreiber, Guus; Wielinga, Bob; Top, Jan

    2017-01-01

    In this paper we propose several approaches for automatic annotation of natural science spreadsheets using a combination of structural properties of the tables and external vocabularies. During the design process of their spreadsheets, domain scientists implicitly include their domain model in the

  2. Electrochemical Impedance Spectra of Dye-Sensitized Solar Cells: Fundamentals and Spreadsheet Calculation

    Directory of Open Access Journals (Sweden)

    Subrata Sarker

    2014-01-01

    Full Text Available Electrochemical impedance spectroscopy (EIS is one of the most important tools to elucidate the charge transfer and transport processes in various electrochemical systems including dye-sensitized solar cells (DSSCs. Even though there are many books and reports on EIS, it is often very difficult to explain the EIS spectra of DSSCs. Understanding EIS through calculating EIS spectra on spreadsheet can be a powerful approach as the user, without having any programming knowledge, can go through each step of calculation on a spreadsheet and get instant feedback by visualizing the calculated results or plot on the same spreadsheet. Here, a brief account of the EIS of DSSCs is given with fundamental aspects and their spreadsheet calculation. The review should help one to develop a basic understanding about EIS of DSSCs through interacting with spreadsheet.
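
    The row-per-frequency calculation such a spreadsheet performs can be reproduced in a few lines. This sketch evaluates a simplified Randles-type circuit — series resistance plus a parallel resistor-capacitor pair, a common starting point for cell impedance, with arbitrary assumed values rather than anything from the review:

        import numpy as np

        Rs, Rct, C = 10.0, 50.0, 2e-5          # ohm, ohm, farad (assumed)

        f = np.logspace(-1, 5, 50)             # 0.1 Hz to 100 kHz sweep
        w = 2 * np.pi * f
        Z = Rs + 1 / (1 / Rct + 1j * w * C)    # complex impedance per frequency

        for fi, zi in zip(f[::10], Z[::10]):
            print(f"{fi:10.2f} Hz   Z' = {zi.real:6.2f}   -Z'' = {-zi.imag:6.2f}")

    In a spreadsheet, each printed row corresponds to one worksheet row, and the Nyquist plot is simply a chart of -Z'' against Z'.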

  3. Cutting solid figures by plane - analytical solution and spreadsheet implementation

    Science.gov (United States)

    Benacka, Jan

    2012-07-01

    In some secondary mathematics curricula, there is a topic called Stereometry that deals with investigating the position and finding the intersection, angle, and distance of lines and planes defined within a prism or pyramid. A coordinate system is not used. The metric tasks are solved using Pythagoras' theorem, trigonometric functions, and the sine and cosine rules. The basic problem is to find the section of the figure by a plane that is defined by three points related to the figure. In this article, a formula is derived that gives the positions of the intersection points of such a plane and the figure edges, that is, the vertices of the section polygon. Spreadsheet implementations of the formula for cuboids and right rectangular pyramids are presented. The user can check his/her graphical solution, or proceed if he/she is not able to complete the section.
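
    The kernel of such a formula is ordinary line-plane intersection applied to each edge. Here is a sketch (not the article's derivation) for a plane through three points cutting a unit cube; the three defining points are arbitrary examples:

        import numpy as np
        from itertools import combinations

        P0 = np.array([0.2, 0.0, 0.0])          # assumed defining points
        P1 = np.array([1.0, 0.7, 0.0])
        P2 = np.array([0.0, 1.0, 0.9])
        n = np.cross(P1 - P0, P2 - P0)          # plane normal

        # Unit-cube edges: vertex pairs differing in exactly one coordinate.
        verts = [np.array(v) for v in np.ndindex(2, 2, 2)]
        edges = [(a, b) for a, b in combinations(verts, 2) if np.abs(a - b).sum() == 1]

        section = []
        for a, b in edges:
            denom = np.dot(n, b - a)
            if abs(denom) < 1e-12:
                continue                        # edge parallel to the plane
            t = np.dot(n, P0 - a) / denom       # solve n . (a + t(b-a) - P0) = 0
            if 0.0 <= t <= 1.0:
                section.append(a + t * (b - a)) # a vertex of the section polygon

        print(np.round(section, 3))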

  4. The Flight of the Space Shuttle Discovery (STS-119)

    Science.gov (United States)

    Stinner, Arthur; Metz, Don

    2010-03-01

    This article is intended to model the ascent of the space shuttle for high school teachers and students. It provides a background for a sufficiently comprehensive description of the physics (kinematics and dynamics) of the March 16, 2009, Discovery launch. Our data are based on a comprehensive spreadsheet kindly sent to us by Bill Harwood, the "CBS News" space consultant. The spreadsheet provides detailed and authentic information about the prediction of the ascent of flight STS-119, the 36th flight of Discovery and the 125th shuttle flight to date. We have used the data for our calculations and the production of the graphs. A limited version of the ascent data is available on the "CBS News" STS-119 trajectory timeline.

  5. Volatility Discovery

    DEFF Research Database (Denmark)

    Dias, Gustavo Fruet; Scherrer, Cristina; Papailias, Fotis

    The price discovery literature investigates how homogenous securities traded on different markets incorporate information into prices. We take this literature one step further and investigate how these markets contribute to stochastic volatility (volatility discovery). We formally show that the realized measures from homogenous securities share a fractional stochastic trend, which is a combination of the price and volatility discovery measures. Furthermore, we show that volatility discovery is associated with the way that market participants process information arrival (market sensitivity). Finally, we compute volatility discovery for 30 actively traded stocks in the U.S. and report that Nyse and Arca dominate Nasdaq.

  6. A system for automated quantification of cutaneous electrogastrograms

    DEFF Research Database (Denmark)

    Paskaranandavadivel, Niranchan; Bull, Simon Henry; Parsell, Doug

    2015-01-01

    Clinical evaluation of cutaneous electrogastrograms (EGG) is important for understanding the role of slow waves in functional motility disorders and may be a useful diagnostic aid. An automated software package has been developed which computes metrics of interest from EGG and from slow wave ... and amplitude were compared to automated estimates. The methods were packaged into a software executable which processes the data and presents the results in intuitive graphical and spreadsheet formats. Automated EGG analysis allows for clinical translation of bio-electrical analysis for potential...

  7. The FAO/IAEA interactive spreadsheet for design and operation of insect mass rearing facilities

    International Nuclear Information System (INIS)

    Caceres, Carlos; Rendon, Pedro

    2006-01-01

    An electronic spreadsheet is described which helps users to design, equip and operate facilities for the mass rearing of insects for use in insect pest control programmes integrating the sterile insect technique. The spreadsheet was designed based on experience accumulated in the mass rearing of the Mediterranean fruit fly, Ceratitis capitata (Wiedemann), using genetic sexing strains based on a temperature sensitive lethal (tsl) mutation. The spreadsheet takes into account the biological, production, and quality control parameters of the species to be mass reared, as well as the diets and equipment required. All this information is incorporated into the spreadsheet for user-friendly calculation of the main components involved in facility design and operation. Outputs of the spreadsheet include size of the different rearing areas, rearing equipment, volumes of diet ingredients, other consumables, as well as personnel requirements. By adding cost factors to these components, the spreadsheet can estimate the costs of facility construction, equipment, and operation. All the output parameters can be easily generated by simply entering the target number of sterile insects required per week. For other insect species, the biological and production characteristics need to be defined and inputted accordingly to obtain outputs relevant to these species. This spreadsheet, available under http://www-naweb.iaea.org/nafa/ipc/index.html, is a powerful tool for project and facility managers as it can be used to estimate facility cost, production cost, and production projections under different rearing efficiency scenarios. (author)

  8. The governance of risk arising from the use of spreadsheets in organisations

    Directory of Open Access Journals (Sweden)

    Tessa Minter

    2014-06-01

    Full Text Available The key to maximising the effectiveness of spreadsheet models for critical decision making is appropriate risk governance. Those responsible for governance need, at a macro level, to identify the specific spreadsheet risks, determine the reasons for such exposures, and establish where and when risk exposures occur, from point of initiation to usage and storage. It is essential to identify which parties could create the exposure, taking cognisance of the entire supply chain of the organisation. If management’s risk strategy is to control the risks, then the question becomes how these risks can be prevented and/or detected and corrected. This paper attempts to address each of these critical issues and to offer guidance on the governance of spreadsheet risk. The paper identifies the risk exposures and sets out the responsibilities of directors in relation to spreadsheets and the spreadsheet cycle. Spreadsheet risk exposure can be managed by setting the control environment, undertaking risk assessment, providing the requisite information and communicating with internal and external parties, as well as implementing spreadsheet lifecycle application controls and monitoring activities.

  9. Quick Correct: A Method to Automatically Evaluate Student Work in MS Excel Spreadsheets

    Directory of Open Access Journals (Sweden)

    Laura R Wetzel

    2007-11-01

    Full Text Available The quick correct method allows instructors to easily assess Excel spreadsheet assignments and notifies students immediately if their answers are acceptable. The instructor creates a spreadsheet template for students to complete. To evaluate student answers within the template, the instructor places logic functions (e.g., IF, AND, OR into a column adjacent to student responses. These “quick correct” formulae are then password protected and hidden from view. If a student enters an incorrect answer while completing the spreadsheet template, the logic function returns an appropriate warning that encourages corrections.
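
    The hidden logic cells amount to a tolerance comparison per answer. Below is a Python analogue of that grading logic; the tolerance and messages are assumptions, and in Excel the same check would be an IF formula comparing the response cell against a hidden key:

        def quick_correct(student_value, expected, tol=0.01):
            # Analogue of a hidden =IF(...) grading cell.
            if student_value is None:
                return "no answer entered"
            if abs(student_value - expected) <= tol:
                return "OK"
            return "check your formula and try again"

        print(quick_correct(3.14, 3.14159))   # OK (within tolerance)
        print(quick_correct(2.50, 3.14159))   # check your formula and try again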

  10. Spreadsheet Activities with Conditional Progression and Automatically Generated Feedback and Grades

    Directory of Open Access Journals (Sweden)

    Thomas C Juster

    2013-02-01

    Full Text Available Spreadsheet activities following the Spreadsheets Across the Curriculum (SSAC model have been modified using VBA programming to automatically generate feedback, calculate grades, and ensure that students complete them in a linear fashion. Feedback is based not only on the value of cells, but also on the formulas used to compute the values. These changes greatly ease the burden of grading on instructors, and help students more quickly master tasks and concepts by providing immediate and directed feedback to their answers. Students performed significantly better on the new spreadsheet activities compared to traditional SSAC versions, with 87% achieving perfect scores of 100%.

  11. The Computer Bulletin Board. Modified Gran Plots of Very Weak Acids on a Spreadsheet.

    Science.gov (United States)

    Chau, F. T.; And Others

    1990-01-01

    Presented are two applications of computer technology to chemistry instruction: the use of a spreadsheet program to analyze acid-base titration curves and the use of database software to catalog stockroom inventories. (CW)

  12. Integrating Computer Spreadsheet Modeling into a Microeconomics Curriculum: Principles to Managerial.

    Science.gov (United States)

    Clark, Joy L.; Hegji, Charles E.

    1997-01-01

    Notes that using spreadsheets to teach microeconomics principles enables learning by doing in the exploration of basic concepts. Introduction of increasingly complex topics leads to exploration of theory and managerial decision making. (SK)

  13. Spreadsheet Error Detection: an Empirical Examination in the Context of Greece

    Directory of Open Access Journals (Sweden)

    Dimitrios Maditinos

    2012-06-01

    Full Text Available The personal computer era made advanced programming tasks available to end users. Spreadsheet models are one of the most widely used applications and can produce valuable results with minimal training and effort. However, errors contained in most spreadsheets may be catastrophic and difficult to detect. This study attempts to investigate the influence of experience and spreadsheet presentation on end users' error-finding performance. To reach the target of the study, 216 business and finance students participated in a task of finding errors in a simple free cash flow model. The findings of the study reveal that the presentation of the spreadsheet is of major importance for error-finding performance, while experience does not seem to affect performance. Proposals for further research and limitations of the study are also discussed.

  14. FURTHER CONSIDERATIONS ON SPREADSHEET-BASED AUTOMATIC TREND LINES

    Directory of Open Access Journals (Sweden)

    DANIEL HOMOCIANU

    2015-12-01

    Full Text Available Most of today’s business applications working with data sets allow exports to the spreadsheet format. This is related to the familiarity of common business users with such products, and to the possibility of coupling what they have with something containing many models, functions and possibilities to process and represent data, thereby getting something dynamic and much more useful than a simple static report. The purpose of Business Intelligence is to identify clusters, profiles, association rules, decision trees and many other patterns or even behaviours, but also to generate alerts for exceptions, determine trends and make predictions about the future based on historical data. In this context, the paper shows some practical results obtained after testing both the automatic creation of scatter charts and trend lines corresponding to the user’s preferences, and the automatic suggestion of the most appropriate trend for the tested data, based mostly on a statistical measure of how close the data are to the regression function.

  15. Abdominal surgery process modeling framework for simulation using spreadsheets.

    Science.gov (United States)

    Boshkoska, Biljana Mileva; Damij, Talib; Jelenc, Franc; Damij, Nadja

    2015-08-01

    We provide a continuation of the existing Activity Table Modeling methodology with a modular spreadsheet simulation. The simulation model developed comprises 28 modeling elements for the abdominal surgery cycle process. The simulation of a two-week patient flow in an abdominal clinic with 75 beds demonstrates the applicability of the methodology. The simulation does not include macros, thus programming experience is not essential for replicating or upgrading the model. Unlike existing methods, the proposed solution employs a modular approach for modeling the activities that ensures better readability, the possibility of easily upgrading the model with other activities, and its easy extension and connectivity with other similar models. We propose a first-in-first-served approach for simulating the servicing of multiple patients. The uncertain time duration of the activities is modeled using the function "rand()". The patients' movements from one activity to the next are tracked with nested "if()" functions, thus allowing easy re-creation of the process without the need for complex programming.

  16. Illustrating Probability through Roulette: A Spreadsheet Simulation Model

    Directory of Open Access Journals (Sweden)

    Kala Chand Seal

    2005-11-01

    Full Text Available Teaching probability can be challenging because the mathematical formulas often are too abstract and complex for the students to fully grasp the underlying meaning and effect of the concepts. Games can provide a way to address this issue. For example, the game of roulette can be an exciting application for teaching probability concepts. In this paper, we implement a model of roulette in a spreadsheet that can simulate outcomes of various betting strategies. The simulations can be analyzed to gain better insights into the corresponding probability structures. We use the model to simulate a particular betting strategy known as the bet-doubling, or Martingale, strategy. This strategy is quite popular and is often erroneously perceived as a winning strategy even though the probability analysis shows that such a perception is incorrect. The simulation allows us to present the true implications of such a strategy for a player with a limited betting budget and relate the results to the underlying theoretical probability structure. The overall validation of the model, its use for teaching, and its application to the analysis of other types of betting strategies are discussed.
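
    The bet-doubling experiment is straightforward to replicate. Here is a compact Monte Carlo sketch of a Martingale player with a limited budget (the budget, bet sizes and spin count are assumptions; 18/38 is the even-money win probability on an American wheel):

        import random

        def martingale(budget=1000, base_bet=10, spins=100, p_win=18 / 38):
            bankroll, bet = budget, base_bet
            for _ in range(spins):
                if bet > bankroll:
                    return bankroll          # cannot cover the doubled bet
                if random.random() < p_win:
                    bankroll += bet
                    bet = base_bet           # win: return to the base bet
                else:
                    bankroll -= bet
                    bet *= 2                 # loss: double the bet
            return bankroll

        random.seed(1)
        results = [martingale() for _ in range(10_000)]
        print("mean final bankroll:", sum(results) / len(results))
        print("fraction ending below the base bet:", sum(r < 10 for r in results) / len(results))

    The average final bankroll comes out below the starting budget, which is the point the probability analysis makes against the strategy.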

  17. Numeric calculation of celestial bodies with spreadsheet analysis

    Science.gov (United States)

    Koch, Alexander

    2016-04-01

    The motion of the planets and moons in our solar system can easily be calculated for any time by the Kepler laws of planetary motion. The Kepler laws are a special case of the gravitational law of Newton, especially if you consider more than two celestial bodies. Therefore it is more fundamental to calculate the motion using the gravitational law. The problem is that, with the gravitational law, the state of motion cannot be obtained in a single step of calculation: the motion has to be calculated numerically over many time intervals. For this reason, spreadsheet analysis is helpful for students. Skills in programs like Excel, Calc or Gnumeric are important in professional life and can easily be learnt by students. These programs can help to calculate the complex motions over many intervals; the more intervals are used, the more exact the calculated orbits become. The students will first get a quick course in Excel. After that they calculate, following instructions, the 2-D coordinates of the orbits of Moon and Mars. Step by step the students code the formulae for calculating physical parameters like coordinates, force, acceleration and velocity. The project is limited to 4 weeks or 8 lessons, so the calculation will only include the motion of one body around a central mass like the Earth or Sun. The three-body problem can only be discussed briefly at the end of the project.
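
    The interval-by-interval calculation the students build in spreadsheet columns transcribes directly into a loop. Here is a sketch using Euler steps for a body orbiting the Earth; the initial conditions, step size and step count are assumed classroom values:

        import math

        GM = 3.986e14             # m^3/s^2, Earth's gravitational parameter
        x, y = 7.0e6, 0.0         # m, initial position
        vx, vy = 0.0, 7550.0      # m/s, roughly circular initial velocity
        dt = 1.0                  # s, one spreadsheet row per time step

        for _ in range(6000):     # about one orbital period
            r = math.hypot(x, y)
            ax, ay = -GM * x / r**3, -GM * y / r**3   # Newton's gravitational law
            vx += ax * dt
            vy += ay * dt
            x += vx * dt
            y += vy * dt

        print(f"radius after 6000 s: {math.hypot(x, y) / 1e3:.0f} km")

    Halving dt (doubling the number of intervals) brings the computed orbit closer to the exact Kepler ellipse, which is the point of the exercise.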

  18. Beyond Discovery

    DEFF Research Database (Denmark)

    Korsgaard, Steffen; Sassmannshausen, Sean Patrick

    2017-01-01

    In this chapter we explore four alternatives to the dominant discovery view of entrepreneurship: the development view, the construction view, the evolutionary view, and the Neo-Austrian view. We outline the main critique points of the discovery view presented in these four alternatives, as well as the...

  19. Using Spreadsheets for Teaching Principles of On-line Checking of Logic Circuits

    OpenAIRE

    Talis, V.; Levin, I.

    2004-01-01

    This paper examines the use of spreadsheets as a tool for learning theoretical principles of concurrent error detection. Basic concepts of concurrent checking are presented by using specific spreadsheet templates. A matrix representation of a system of logical functions is used for this aim. A specific technique is described for constructing a logic simulator implementing this matrix representation. After the logic simulator construction, students are able to solve practical tasks due to unde...

  20. Spreadsheet software to assess locomotor disability to quantify permanent physical impairment

    Directory of Open Access Journals (Sweden)

    Sunderraj Ellur

    2012-01-01

    Full Text Available Context: Assessment of physical disability is an important duty of a plastic surgeon, especially for those of us who are in institutional practice. Aim: The Gazette of India notification gives guidelines regarding the assessment of disability. However, the calculations as per the guidelines are time consuming. In this article, a spreadsheet program based on the notification is presented. The aim of this article is to design a spreadsheet program which is simple, reproducible, user friendly, less time consuming and accurate. Materials and Methods: The spreadsheet program was designed using Microsoft Excel, on the basis of the guidelines in the Gazette of India Notification regarding the assessment of locomotor disability to quantify permanent physical impairment. Two representative examples are presented to help understand the application of this program. Results: Two spreadsheet programs, one for the upper limb and another for the lower limb, are presented. The representative examples show that the program accurately matches the results of the traditional method of calculation. Conclusion: A simple spreadsheet program can be designed to assess disability as per the Gazette of India Notification. This program is easy to use and is accurate.

  1. Automated Discovery of Flight Track Anomalies

    Data.gov (United States)

    National Aeronautics and Space Administration — As new technologies are developed to handle the complexities of the Next Generation Air Transportation System (NextGen), it is increasingly important to address both...

  2. The automated discovery of hybrid processes

    NARCIS (Netherlands)

    Maggi, F.M.; Slaats, T.; Reijers, H.A.

    2014-01-01

    The declarative-procedural dichotomy is highly relevant when choosing the most suitable process modeling language to represent a discovered process. Less-structured processes with a high level of variability can be described in a more compact way using a declarative language. By contrast, procedural

  3. The Automated Discovery of Hybrid Processes

    DEFF Research Database (Denmark)

    Slaats, Tijs; Reijers, Hajo; Maggi, Fabrizio Maria

    2014-01-01

    The declarative-procedural dichotomy is highly relevant when choosing the most suitable process modeling language to represent a discovered process. Less-structured processes with a high level of variability can be described in a more compact way using a declarative language. By contrast, procedu...

  4. Automating Information Discovery Within the Invisible Web

    Science.gov (United States)

    Sweeney, Edwina; Curran, Kevin; Xie, Ermai

    A Web crawler or spider crawls through the Web looking for pages to index, and when it locates a new page it passes the page on to an indexer. The indexer identifies links, keywords, and other content and stores these within its database. This database is searched by entering keywords through an interface and suitable Web pages are returned in a results page in the form of hyperlinks accompanied by short descriptions. The Web, however, is increasingly moving away from being a collection of documents to a multidimensional repository for sounds, images, audio, and other formats. This is leading to a situation where certain parts of the Web are invisible or hidden. The term known as the "Deep Web" has emerged to refer to the mass of information that can be accessed via the Web but cannot be indexed by conventional search engines. The concept of the Deep Web makes searches quite complex for search engines. Google states that the claim that conventional search engines cannot find such documents as PDFs, Word, PowerPoint, Excel, or any non-HTML page is not fully accurate and steps have been taken to address this problem by implementing procedures to search items such as academic publications, news, blogs, videos, books, and real-time information. However, Google still only provides access to a fraction of the Deep Web. This chapter explores the Deep Web and the current tools available in accessing it.

  5. Development of Spreadsheet-Based Integrated Transaction Processing Systems and Financial Reporting Systems

    Science.gov (United States)

    Ariana, I. M.; Bagiada, I. M.

    2018-01-01

    Development of spreadsheet-based integrated transaction processing systems and financial reporting systems is intended to optimize the capabilities of spreadsheets in accounting data processing. The purposes of this study are: 1) to describe the spreadsheet-based integrated transaction processing systems and financial reporting systems; 2) to test their technical and operational feasibility. This study is of the research-and-development type. The main steps of the study are: 1) needs analysis (needs assessment); 2) developing the spreadsheet-based integrated transaction processing systems and financial reporting systems; and 3) testing their feasibility. Technical feasibility covers the ability of hardware and operating systems to run the accounting application, and its simplicity and ease of use. Operational feasibility covers the ability of users to use the accounting application, the ability of the application to produce information, and the controls built into the application. The instrument used to assess technical and operational feasibility is an expert-perception questionnaire with a 4-point Likert scale, from 1 (strongly disagree) to 4 (strongly agree). Data were analyzed using percentage analysis, comparing the number of actual answers for each item with the ideal number of answers for that item. The spreadsheet-based integrated transaction processing systems and financial reporting systems integrate sales, purchases, and cash transaction processing to produce financial reports (statement of profit or loss and other comprehensive income, statement of changes in equity, statement of financial position, and statement of cash flows) and other reports. The systems are feasible from the technical aspect (87.50%) and the operational aspect (84.17%).

  6. Standard Evaluation Procedures (SEPs) and Data Entry Spreadsheet Templates (DESTs) for Endocrine Disruptor Screening Program (EDSP) Tier 1 Assays

    Science.gov (United States)

    This page provides information and access to Standard Evaluation Procedures (SEPs) and Data Entry Spreadsheet Templates (DESTs) developed by EPA's Office of Chemical Safety and Pollution Prevention (OCSPP).

  7. Comparison of two spreadsheets for calculation of radiation exposure following hyperthyroidism treatment with iodine-131

    International Nuclear Information System (INIS)

    Vrigneaud, J.M.; Carlier, T.

    2006-01-01

    Comparison of the two spreadsheets did not show any significant differences, provided that proper biological models were used to follow iodine-131 clearance. This means that even simple assumptions can be used to give reasonable radiation safety recommendations. Nevertheless, a complete understanding of the formalism is required to use these spreadsheets correctly. Initial parameters must be chosen carefully and validation of the computed results must be done. Published guidelines are found to be in accordance with those issued from these spreadsheets. Furthermore, both programs make it possible to collect biological data from each patient and use them as input to calculate individually tailored radiation safety advice. Also, the measured exposure rate may be entered into the spreadsheets to calculate patient-specific close-contact delays required to reduce the dose to specified limits. These spreadsheets may be used to compute restriction times for any given radiopharmaceutical, provided that input parameters are chosen correctly. They can be of great help to physicians in providing patients with guidance on how to maintain doses to other individuals as low as reasonably achievable. (authors)

  8. Spreadsheet design and validation for characteristic limits determination in gross alpha and beta measurement

    International Nuclear Information System (INIS)

    Prado, Rodrigo G.P. do; Dalmazio, Ilza

    2013-01-01

    The identification and detection of ionizing radiation are essential requisites of radiation protection. Gross alpha and beta measurements are widely applied as a screening method in radiological characterization, environmental monitoring, and industrial applications. As in any other analytical technique, test performance depends on the quality of the instrumental measurements and the reliability of the calculations. Characteristic limits refer to three specific statistics, namely the decision threshold, the detection limit, and the limits of the confidence interval, which are fundamental to ensuring the quality of determinations. This work describes a way to calculate characteristic limits for measurements of gross alpha and beta activity using spreadsheets. The approach used to determine the decision threshold, the detection limit, and the limits of the confidence interval, together with the mathematical expressions for the measurands and their uncertainty, followed standard guidelines. A succinct overview of this approach and examples are presented, and the spreadsheets were validated using specific software. Furthermore, these spreadsheets could be used as a tool to instruct beginners in methods for ionizing radiation measurement. (author)
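
    A minimal sketch of the characteristic-limit formulas such a spreadsheet implements, assuming the simplified Poisson counting model of the ISO 11929-style guidelines (calibration uncertainty neglected); the function name and example counts are illustrative:

        import math

        def characteristic_limits(n_b, t_b, t_g, k=1.645):
            """Decision threshold and detection limit for a net count rate (counts/s).

            n_b: background counts over t_b seconds; t_g: gross counting time in
            seconds; k: quantile for alpha = beta = 0.05. Only Poisson counting
            statistics are propagated in this sketch.
            """
            r_b = n_b / t_b
            u0 = math.sqrt(r_b / t_g + r_b / t_b)   # uncertainty of net rate at true value 0
            decision_threshold = k * u0
            a = k * k / (2.0 * t_g)
            detection_limit = a + math.sqrt(a * a + decision_threshold ** 2)
            return decision_threshold, detection_limit

        dt_, dl = characteristic_limits(n_b=120, t_b=3600, t_g=3600)
        print(f"decision threshold {dt_:.5f} cps, detection limit {dl:.5f} cps")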

  9. A spreadsheet-based microcomputer application for determining cost-effectiveness of commercial lighting retrofit opportunities

    International Nuclear Information System (INIS)

    Spain, T.K.

    1992-01-01

    Lighting accounts for 20-25% of electricity use in the United States. With energy engineers estimating potential reductions of 50-70%, lighting is a promising area for cost-effective energy conservation projects in commercial buildings. With an extensive array of alternatives available to replace or modify existing lighting systems, simple but effective calculation tools are needed to help energy auditors evaluate lighting retrofits. This paper describes a spreadsheet-based microcomputer application for determining the cost-effectiveness of commercial lighting retrofits. Developed to support walk-through energy audits conducted by the Industrial Energy Advisory Service (IdEAS), the spreadsheet provides essential comparative data for evaluating the payback of alternatives. The impact of alternatives on environmental emissions is calculated to help communicate external costs and sell the project, if appropriate. The methodology and calculations are fully documented to allow the user to duplicate the spreadsheet and modify it as needed.
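
    The core of such a tool is a simple payback comparison. A hedged sketch of the arithmetic, with an illustrative emission factor and example numbers that are not taken from the paper:

        def retrofit_payback(kw_old, kw_new, hours_per_year, rate_per_kwh,
                             install_cost, co2_kg_per_kwh=0.7):
            """Simple payback and avoided emissions for a lighting retrofit.
            A real audit would also weigh demand charges, lamp life, and
            maintenance savings; the emission factor here is illustrative."""
            kwh_saved = (kw_old - kw_new) * hours_per_year
            annual_savings = kwh_saved * rate_per_kwh
            payback_years = install_cost / annual_savings
            co2_avoided_kg = kwh_saved * co2_kg_per_kwh
            return payback_years, annual_savings, co2_avoided_kg

        # e.g. replacing 10 kW of fixtures with 6 kW, run 3000 h/yr at $0.10/kWh,
        # for an installed cost of $4000
        years, savings, co2 = retrofit_payback(10.0, 6.0, 3000, 0.10, 4000)
        print(f"payback {years:.1f} yr, saves ${savings:.0f}/yr, avoids {co2:.0f} kg CO2/yr")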

  10. [Development of an Excel spreadsheet for meta-analysis of indirect and mixed treatment comparisons].

    Science.gov (United States)

    Tobías, Aurelio; Catalá-López, Ferrán; Roqué, Marta

    2014-01-01

    Meta-analyses in clinical research usually aim to evaluate treatment efficacy and safety in direct comparison with a single comparator. Indirect comparisons, using Bucher's method, can summarize primary data when information from direct comparisons is limited or nonexistent. Mixed comparisons combine estimates from direct and indirect comparisons, increasing statistical power. There is a need for simple applications for meta-analysis of indirect and mixed comparisons, and these can easily be conducted using a Microsoft Office Excel spreadsheet. We developed an easy-to-use spreadsheet for indirect and mixed comparisons, aimed at clinical researchers interested in systematic reviews but unfamiliar with more advanced statistical packages. The proposed Excel spreadsheet for indirect and mixed comparisons can be of great use in clinical epidemiology, extending the knowledge provided by traditional meta-analysis when evidence from direct comparisons is limited or nonexistent.
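
    The underlying arithmetic is compact enough to sketch. Assuming effects on the log scale (e.g., log odds ratios) with standard errors, Bucher's indirect comparison and an inverse-variance mixed estimate can be written as follows; the function names and numbers are illustrative:

        import math

        def bucher_indirect(log_eff_ac, se_ac, log_eff_bc, se_bc):
            """Bucher's adjusted indirect comparison of A vs B via common
            comparator C, on the log scale (e.g. log odds ratios)."""
            log_eff_ab = log_eff_ac - log_eff_bc
            se_ab = math.sqrt(se_ac ** 2 + se_bc ** 2)
            return log_eff_ab, se_ab

        def mixed_estimate(log_direct, se_direct, log_indirect, se_indirect):
            """Inverse-variance weighted combination of direct and indirect
            estimates (a fixed-effect mixed comparison)."""
            w_d, w_i = 1.0 / se_direct ** 2, 1.0 / se_indirect ** 2
            log_mixed = (w_d * log_direct + w_i * log_indirect) / (w_d + w_i)
            return log_mixed, math.sqrt(1.0 / (w_d + w_i))

        ab, se = bucher_indirect(math.log(0.80), 0.10, math.log(1.10), 0.12)
        print(f"indirect OR(A vs B) = {math.exp(ab):.2f}, SE(log OR) = {se:.3f}")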

  11. Excel Programming Your Visual Blueprint for Creating Interactive Spreadsheets

    CERN Document Server

    Etheridge, Denise

    2010-01-01

    A great guide to Excel programming that is perfect for visual learners and takes you beyond Excel basics! This book is the perfect reference for Excel users who want to delve deeper into the application to create powerful and dynamic programs. From creating macros to customizing dialog boxes, this step-by-step guide helps you get more out of Excel than you knew was possible. Each step has callouts so you can see exactly where the action takes place, and the companion web site offers tons of usable code and sample macros that you can put to use instantly. Explains step-by-step how to automate Excel, th

  12. Towards tool support for spreadsheet-based domain-specific languages

    DEFF Research Database (Denmark)

    Adam, Marian Sorin; Schultz, Ulrik Pagh

    2015-01-01

    the syntax of such spreadsheet-based DSLs (SDSLs), and there is no tool support for automatically generating language infrastructure such as parsers and IDE support. In this paper we define a simple notion of two-dimensional grammars for SDSLs, and show how such grammars can be used for automatically generating parsers that extract structured data from a spreadsheet in the form of an AST. We demonstrate automatic generation of parsers for a number of examples, including the questionnaire DSL from LWC2014 and a DSL for writing safety specifications.

  13. Rewriting High-Level Spreadsheet Structures into Higher-Order Functional Programs

    DEFF Research Database (Denmark)

    Biermann, Florian; Dou, Wensheng; Sestoft, Peter

    2017-01-01

    usually have some high-level structure that can be used to improve performance by performing independent computation in parallel. In this paper, we devise rules for rewriting high-level spreadsheet structure in the form of so-called cell arrays into higher-order functional programs that can be easily parallelized on multicore processors. We implement our rule set for the experimental Funcalc spreadsheet engine, which already implements parallelizable higher-order array functions as well as user-defined higher-order functions. Benchmarks show that our rewriting approach improves recalculation performance...

  14. Spinning the Big Wheel on “The Price is Right”: A Spreadsheet Simulation Exercise

    Directory of Open Access Journals (Sweden)

    Keith A Willoughby

    2010-04-01

    A popular game played in each broadcast of the United States television game show “The Price is Right” has contestants spinning a large wheel comprising twenty different monetary values (in 5-cent increments from $0.05 to $1.00). A player wins by scoring closest to, without exceeding, $1.00. Players may accomplish this in one or a total of two spins. We develop a spreadsheet modeling exercise, useful in an introductory undergraduate Spreadsheet Analytics course, to simulate the spinning of the wheel and to determine optimal spinning strategies.
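
    A minimal sketch of such a simulation, here in Python rather than a spreadsheet; it evaluates only the first spinner's expected final score for a range of stay-or-spin thresholds, ignoring the game-theoretic effect of later opponents, and the threshold values are illustrative:

        import random

        WHEEL = [v / 100 for v in range(5, 105, 5)]   # $0.05 ... $1.00 in 5-cent steps

        def final_score(threshold, rng):
            """Spin once; spin again only if the first result is at or below the
            stay threshold. Returns 0.0 if the two spins exceed $1.00 (a bust)."""
            first = rng.choice(WHEEL)
            if first > threshold:
                return first
            total = first + rng.choice(WHEEL)
            return total if total <= 1.00 else 0.0

        rng = random.Random(42)
        trials = 100_000
        for threshold in (0.50, 0.55, 0.60, 0.65, 0.70):
            mean = sum(final_score(threshold, rng) for _ in range(trials)) / trials
            print(f"spin again at or below ${threshold:.2f}: mean score ${mean:.3f}")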

  15. On Mathematical Problem Posing by Elementary Pre-teachers: The Case of Spreadsheets

    Directory of Open Access Journals (Sweden)

    Sergei Abramovich

    2008-07-01

    This article concerns the use of an electronic spreadsheet in mathematical problem posing by prospective elementary teachers. It introduces a didactic construct dealing with three types of a problem's coherence -- numerical, contextual, and pedagogical. The main thesis of the article is that technological support of problem posing proves to be insufficient without the use of this construct. The article reflects on work done with the teachers in a number of education courses. It suggests that including mathematical problem posing with spreadsheets in coursework for teachers provides them with research-like experience in curriculum development.

  16. Application of magnetic sensors in automation control

    International Nuclear Information System (INIS)

    Hou Chunhong; Qian Zhenghong

    2011-01-01

    Controls in automation need speed and position feedback. The feedback device is often referred to as an encoder. Feedback technologies include mechanical, optical, and magnetic approaches, all of which advance with new inventions and discoveries. Magnetic sensing as a feedback technology offers certain advantages over other technologies such as optical ones. With discoveries like GMR (Giant Magneto-Resistance) and TMR (Tunneling Magneto-Resistance) becoming feasible for commercialization, more and more applications will use advanced magnetic sensors in automation. This paper offers a general review of encoders and applications of magnetic sensors in automation control.

  17. Calculation spreadsheet for uncertainty estimation of measurement results in gamma-ray spectrometry and its validation for quality assurance purpose

    International Nuclear Information System (INIS)

    Ceccatelli, Alessia; Dybdal, Ashild; Fajgelj, Ales; Pitois, Aurelien

    2017-01-01

    An Excel calculation spreadsheet has been developed to estimate the uncertainty of measurement results in γ-ray spectrometry. It considers all relevant uncertainty components and calculates the combined standard uncertainty of the measurement result. The calculation spreadsheet has been validated using two independent open-access software packages and is available for download free of charge. It provides a simple and easy-to-use template for estimating the uncertainty of γ-ray spectrometry measurement results and supports radioanalytical laboratories seeking accreditation for their measurements using γ-ray spectrometry. - Highlights: • A calculation spreadsheet for estimating the uncertainty of measurement results in γ-ray spectrometry was developed. • Two independent software packages were used to validate the calculation spreadsheet. • The calculation spreadsheet is available for download free of charge. • The work presented supports quality assurance in radioanalytical laboratories.
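
    For a single gamma line the propagation reduces to a multiplicative model. A minimal sketch, assuming the activity is computed as A = n_net/(eff · p_gamma · t · m) and that counting-time and mass uncertainties are negligible; the example values are illustrative, not taken from the spreadsheet:

        import math

        def activity_with_uncertainty(n_net, u_n_net, eff, u_eff, p_gamma, u_p,
                                      live_time_s, mass_kg):
            """Massic activity (Bq/kg) from a net peak area, with the combined
            standard uncertainty of a purely multiplicative model:
                A = n_net / (eff * p_gamma * live_time * mass)
            Counting-time and mass uncertainties are taken as negligible here."""
            a = n_net / (eff * p_gamma * live_time_s * mass_kg)
            rel_u = math.sqrt((u_n_net / n_net) ** 2
                              + (u_eff / eff) ** 2
                              + (u_p / p_gamma) ** 2)
            return a, a * rel_u

        a, u = activity_with_uncertainty(n_net=5400, u_n_net=90, eff=0.032,
                                         u_eff=0.0016, p_gamma=0.851, u_p=0.004,
                                         live_time_s=36000, mass_kg=0.5)
        print(f"A = {a:.2f} ± {u:.2f} Bq/kg (k = 1)")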

  18. Data Discovery

    Directory of Open Access Journals (Sweden)

    Gerhard Weikum

    2013-07-01

    Discovery of documents, data sources, facts, and opinions is at the very heart of digital information and knowledge services. Being able to search, discover, compile, and analyse relevant information for a user’s specific tasks is of utmost importance in science (e.g., computational life sciences, digital humanities), business (e.g., market and media analytics, customer relationship management), and society at large (e.g., consumer information, traffic logistics, health discussions).

  19. Cosmic Discovery

    Science.gov (United States)

    Harwit, Martin

    1984-04-01

    In the remarkable opening section of this book, a well-known Cornell astronomer gives precise thumbnail histories of the 43 basic cosmic discoveries - stars, planets, novae, pulsars, comets, gamma-ray bursts, and the like - that form the core of our knowledge of the universe. Many of them, he points out, were made accidentally and outside the mainstream of astronomical research and funding. This observation leads him to speculate on how many more major phenomena there might be and how they might be most effectively sought out in a field now dominated by large instruments and complex investigative modes and observational conditions. The book also examines discovery in terms of its political, financial, and sociological context - the role of new technologies and of industry and the military in revealing new knowledge, and methods of funding, of peer review, and of allotting time on our largest telescopes. It concludes with specific recommendations for organizing astronomy in ways that will best lead to the discovery of the many - at least sixty - phenomena that Harwit estimates are still waiting to be found.

  20. A new iterative heuristic to solve the joint replenishment problem using a spreadsheet technique

    NARCIS (Netherlands)

    Nilsson, A.; Segerstedt, A.; van der Sluis, E.

    2007-01-01

    In this paper, a heuristic method is presented which gives a novel approach to solve joint replenishment problems (JRP) with strict cycle policies. The heuristic solves the JRP in an iterative procedure and is based on a spreadsheet technique. The principle of the recursion procedure is to find a

  1. Handling Math Expressions in Economics: Recoding Spreadsheet Teaching Tool of Growth Models

    Science.gov (United States)

    Moro-Egido, Ana I.; Pedauga, Luis E.

    2017-01-01

    In the present paper, we develop a teaching methodology for economic theory. The main contribution of this paper lies in combining the interactive characteristics of spreadsheet programs such as Excel with the Unicode plain-text linear format for mathematical expressions. The advantage of the Unicode standard rests on its ease of writing and reading…

  2. VENTSAR XL - A spreadsheet for analyzing building effects and plume rise

    Energy Technology Data Exchange (ETDEWEB)

    Simpkins, A.A.

    1997-03-01

    VENTSAR XL is a Microsoft Excel spreadsheet that analyzes flow patterns of pollutants on or near buildings. Plume rise may be considered. This report provides a complete description and verification of all models within VENTSAR XL. User instructions also are included.

  3. Fuels planning: science synthesis and integration; environmental consequences fact sheet 11: Smoke Impact Spreadsheet (SIS) model

    Science.gov (United States)

    Trent Wickman; Ann Acheson

    2005-01-01

    The Smoke Impact Spreadsheet (SIS) is a simple-to-use planning model for calculating particulate matter (PM) emissions and concentrations downwind of wildland fires. This fact sheet identifies the intended users and uses, required inputs, what the model does and does not do, and tells the user how to obtain the model.

  4. Data Analysis and Graphing in an Introductory Physics Laboratory: Spreadsheet versus Statistics Suite

    Science.gov (United States)

    Peterlin, Primoz

    2010-01-01

    Two methods of data analysis are compared: spreadsheet software and a statistics software suite. Their use is compared by analysing data collected in three selected experiments from an introductory physics laboratory, which include a linear dependence, a nonlinear dependence, and a histogram. The merits of each method are compared.

  5. A Computer Simulation Using Spreadsheets for Learning Concept of Steady-State Equilibrium

    Science.gov (United States)

    Sharda, Vandana; Sastri, O. S. K. S.; Bhardwaj, Jyoti; Jha, Arbind K.

    2016-01-01

    In this paper, we present a simple spreadsheet-based simulation activity that can be performed by students at the undergraduate level. The simulation is implemented in the free open source software (FOSS) LibreOffice Calc, which is available for both the Windows and Linux platforms. This activity aims at building the probability distribution for the…

  6. Pre-service mathematics teachers’ learning and teaching of activity-based lessons supported with spreadsheets

    NARCIS (Netherlands)

    Agyei, Douglas; Voogt, Joke

    2016-01-01

    In this study, 12 pre-service mathematics teachers worked in teams to develop their knowledge and skills in using teacher-led spreadsheet demonstrations to help students explore mathematics concepts, stimulate discussions and perform authentic tasks through activity-based lessons. Pre-service

  7. PROFESSIONAL ORIENTATION OF THE COURSE «COMPUTER INFORMATION TECHNOLOGY» AT STUDYING OF SPREADSHEETS.

    Directory of Open Access Journals (Sweden)

    N.V. Valko

    2010-11-01

    This paper presents examples of teaching the topic «Studying of spreadsheets» in the course «Computer information technology» that promote effective mastery of the material and the subsequent application of the acquired knowledge in workplace settings.

  8. Learning Binomial Probability Concepts with Simulation, Random Numbers and a Spreadsheet

    Science.gov (United States)

    Rochowicz, John A., Jr.

    2005-01-01

    This paper introduces the reader to the concepts of binomial probability and simulation, using a spreadsheet to illustrate these concepts. Random number generators are great technological tools for demonstrating the concepts of probability. Ideas of approximation, estimation, and mathematical usefulness provide numerous ways of learning…
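
    The spreadsheet experiment translates directly into a few lines of code. A sketch that simulates columns of random Bernoulli trials and compares the empirical frequencies with the exact binomial probabilities (all numbers illustrative):

        import random
        from math import comb

        def simulate_binomial(n, p, trials, seed=1):
            """Empirical distribution of successes in n Bernoulli(p) trials,
            mirroring a spreadsheet column of RAND()-based experiments."""
            rng = random.Random(seed)
            counts = [0] * (n + 1)
            for _ in range(trials):
                successes = sum(rng.random() < p for _ in range(n))
                counts[successes] += 1
            return [c / trials for c in counts]

        n, p = 10, 0.3
        empirical = simulate_binomial(n, p, trials=20_000)
        for k in range(n + 1):
            exact = comb(n, k) * p ** k * (1 - p) ** (n - k)
            print(f"k={k:2d}  simulated {empirical[k]:.4f}  exact {exact:.4f}")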

  9. Random Numbers Demonstrate the Frequency of Type I Errors: Three Spreadsheets for Class Instruction

    Science.gov (United States)

    Duffy, Sean

    2010-01-01

    This paper describes three spreadsheet exercises demonstrating the nature and frequency of type I errors using random number generation. The exercises are designed specifically to address issues related to testing multiple relations using correlation (Demonstration I), t tests varying in sample size (Demonstration II), and multiple comparisons…
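
    The essence of the demonstrations can be sketched as follows: draw both samples from the same population, so every "significant" result is a type I error, and count how often that happens. A normal approximation stands in for the spreadsheet's t-test to keep the sketch dependency-free; sample sizes and repetition counts are illustrative:

        import random
        import statistics as stats
        from math import erf, sqrt

        def p_value_two_sample(x, y):
            """Two-sided p-value for equal-size samples via a normal
            approximation, standing in for the spreadsheet's t-test."""
            n = len(x)
            se = sqrt(stats.variance(x) / n + stats.variance(y) / n)
            z = abs(stats.mean(x) - stats.mean(y)) / se
            return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

        rng = random.Random(7)
        reps, n, alpha = 5000, 30, 0.05
        false_pos = 0
        for _ in range(reps):
            x = [rng.gauss(0, 1) for _ in range(n)]   # both samples come from the
            y = [rng.gauss(0, 1) for _ in range(n)]   # same population: H0 is true
            if p_value_two_sample(x, y) < alpha:
                false_pos += 1
        print(f"observed type I error rate ~ {false_pos / reps:.3f} (nominal {alpha})")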

  10. Simplified risk assessment of noise induced hearing loss by means of 2 spreadsheet models

    Directory of Open Access Journals (Sweden)

    Arve Lie

    2016-12-01

    Objectives: The objective of this study has been to test 2 spreadsheet models to compare the observed with the expected hearing loss for a Norwegian reference population. Material and Methods: The prevalence rates of the Norwegian and the National Institute for Occupational Safety and Health (NIOSH) definitions of hearing outcomes were calculated in terms of sex and age, 20–64 years old, for a screened (with no occupational noise exposure) (N = 18 858) and unscreened (N = 38 333) Norwegian reference population from the Nord-Trøndelag Hearing Loss Study (NTHLS). Based on the prevalence rates, 2 different spreadsheet models were constructed in order to compare the prevalence rates of various groups of workers with the expected rates. The spreadsheets were then tested on 10 different occupational groups with varying degrees of hearing loss as compared to a reference population. Results: Hearing of office workers, train drivers, conductors and teachers differed little from the screened reference values based on the Norwegian and the NIOSH criterion. The construction workers, miners, farmers and military had an impaired hearing and railway maintenance workers and bus drivers had a mildly impaired hearing. The spreadsheet models give a valid assessment of the hearing loss. Conclusions: The use of spreadsheet models to compare hearing in occupational groups with that of a reference population is a simple and quick method. The results are in line with comparable hearing thresholds, and allow for significance testing. The method is believed to be useful for occupational health services in the assessment of risk of noise induced hearing loss (NIHL) and the preventive potential in groups of noise-exposed workers. Int J Occup Med Environ Health 2016;29(6):991–999

  11. Simplified risk assessment of noise induced hearing loss by means of 2 spreadsheet models.

    Science.gov (United States)

    Lie, Arve; Engdahl, Bo; Tambs, Kristian

    2016-11-18

    The objective of this study has been to test 2 spreadsheet models to compare the observed with the expected hearing loss for a Norwegian reference population. The prevalence rates of the Norwegian and the National Institute for Occupational Safety and Health (NIOSH) definitions of hearing outcomes were calculated in terms of sex and age, 20-64 years old, for a screened (with no occupational noise exposure) (N = 18 858) and unscreened (N = 38 333) Norwegian reference population from the Nord-Trøndelag Hearing Loss Study (NTHLS). Based on the prevalence rates, 2 different spreadsheet models were constructed in order to compare the prevalence rates of various groups of workers with the expected rates. The spreadsheets were then tested on 10 different occupational groups with varying degrees of hearing loss as compared to a reference population. Hearing of office workers, train drivers, conductors and teachers differed little from the screened reference values based on the Norwegian and the NIOSH criterion. The construction workers, miners, farmers and military had an impaired hearing and railway maintenance workers and bus drivers had a mildly impaired hearing. The spreadsheet models give a valid assessment of the hearing loss. The use of spreadsheet models to compare hearing in occupational groups with that of a reference population is a simple and quick method. The results are in line with comparable hearing thresholds, and allow for significance testing. The method is believed to be useful for occupational health services in the assessment of risk of noise induced hearing loss (NIHL) and the preventive potential in groups of noise-exposed workers. Int J Occup Med Environ Health 2016;29(6):991-999. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.

  12. Library Automation

    OpenAIRE

    Dhakne, B. N.; Giri, V. V; Waghmode, S. S.

    2010-01-01

    New technologies provide libraries with several new materials, media, and modes of storing and communicating information. Library automation reduces the drudgery of repeated manual effort in library routines, and supports collection management, storage, administration, processing, preservation, communication, and related functions.

  13. Discovery Mondays

    CERN Multimedia

    2003-01-01

    Many people don't realise quite how much is going on at CERN. Would you like to gain first-hand knowledge of CERN's scientific and technological activities and their many applications? Try out some experiments for yourself, or pick the brains of the people in charge? If so, then the «Lundis Découverte», or Discovery Mondays, will be right up your street. Starting on May 5th, every first Monday of the month you will be introduced to a different facet of the Laboratory. CERN staff, non-scientists, and members of the general public - everyone is welcome. So tell your friends and neighbours and make sure you don't miss this opportunity to satisfy your curiosity and enjoy yourself at the same time. You won't have to listen to a lecture, as the idea is to have an open exchange with the expert in question and for each subject to be illustrated with experiments and demonstrations. There's no need to book, as Microcosm, CERN's interactive museum, will be open non-stop from 7.30 p.m. to 9 p.m. On the first Discovery M...

  14. A procedure to compute equilibrium concentrations in multicomponent systems by Gibbs energy minimization on spreadsheets

    International Nuclear Information System (INIS)

    Lima da Silva, Aline; Heck, Nestor Cesar

    2003-01-01

    Equilibrium concentrations are traditionally calculated with the help of equilibrium-constant equations for selected reactions. This procedure, however, is only useful for simpler problems. Analysis of the equilibrium state in a multicomponent and multiphase system necessarily involves the solution of several simultaneous equations, and, as the number of system components grows, the required computation becomes more complex and tedious. A more direct and general method for solving the problem is direct minimization of the Gibbs energy function. The solution of this nonlinear problem consists in minimizing the objective function (the Gibbs energy of the system) subject to the constraints of the elemental mass balance. Usually a computer code is developed to solve it, which requires considerable testing and debugging effort. In this work, a simple method to predict equilibrium composition in multicomponent systems is presented, which makes use of an electronic spreadsheet. The ability to carry out these calculations within a spreadsheet environment has several advantages. First, spreadsheets are available 'universally' on nearly all personal computers. Second, the input and output capabilities of spreadsheets can be used effectively to monitor calculated results. Third, no additional systems or programs need to be learned. In this way, spreadsheets are as suitable for computing equilibrium concentrations as they are for use as teaching and learning aids. This work therefore describes the use of the Solver tool, contained in the Microsoft Excel spreadsheet package, for computing equilibrium concentrations in a multicomponent system by the method of direct Gibbs energy minimization. The four-phase Fe-Cr-O-C-Ni system is used as an example to illustrate the proposed method. The pure stoichiometric phases considered in the equilibrium calculations are Cr2O3(s) and FeO·Cr2O3(s). The atmosphere consists of O2, CO and CO2 constituents. The liquid iron
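
    A minimal sketch of the same direct-minimization idea, with SciPy's SLSQP solver standing in for Excel's Solver and a toy C-O gas mixture standing in for the Fe-Cr-O-C-Ni system; the species set and the dimensionless standard chemical potentials are illustrative, not data from the paper:

        import numpy as np
        from scipy.optimize import minimize

        # Species: CO, CO2, O2.  Rows of A give moles of each element (C, O)
        # per mole of species; b is the elemental inventory of the feed.
        A = np.array([[1.0, 1.0, 0.0],     # carbon balance
                      [1.0, 2.0, 2.0]])    # oxygen balance
        b = np.array([1.0, 1.5])
        mu0 = np.array([-10.0, -20.0, 0.0])   # illustrative mu0/RT values, not real data

        def gibbs(n):
            """Dimensionless Gibbs energy G/RT of an ideal-gas mixture at 1 bar."""
            n = np.maximum(n, 1e-12)          # guard the logarithm near zero
            return float(np.sum(n * (mu0 + np.log(n / n.sum()))))

        res = minimize(gibbs, x0=np.array([0.6, 0.4, 0.05]),   # feasible start
                       method="SLSQP",
                       bounds=[(1e-10, None)] * 3,
                       constraints={"type": "eq", "fun": lambda n: A @ n - b})
        print(dict(zip(["CO", "CO2", "O2"], np.round(res.x, 4))))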

  15. AXAOTHER XL -- A spreadsheet for determining doses for incidents caused by tornadoes or high-velocity straight winds

    International Nuclear Information System (INIS)

    Simpkins, A.A.

    1996-09-01

    AXAOTHER XL is an Excel spreadsheet used to determine the dose to the maximally exposed offsite individual during high-velocity straight winds or tornado conditions. Both individual and population doses may be considered. The potential exposure pathways are inhalation and plume shine. For high-velocity straight winds the spreadsheet can determine the downwind relative air concentration; for tornado conditions, the user must enter the relative air concentration. Theoretical models are discussed and hand calculations are performed to ensure proper application of the methodologies. A section containing user instructions for the spreadsheet has also been included.

  16. Discovery of new natural products by application of X-hitting, a novel algorithm for automated comparison of full UV-spectra, combined with structural determination by NMR spectroscopy

    DEFF Research Database (Denmark)

    Larsen, Thomas Ostenfeld; Petersen, Bent O.; Duus, Jens Øllgaard

    2005-01-01

    X-hitting, a newly developed algorithm for automated comparison of UV data, has been used for the tracking of two novel spiro-quinazoline metabolites, lapatins A (1) and B (2), in a screening study targeting quinazolines. The structures of 1 and 2 were elucidated by analysis of spectroscopic data...

  17. Process automation

    International Nuclear Information System (INIS)

    Moser, D.R.

    1986-01-01

    Process automation technology has been pursued in the chemical processing industries and to a very limited extent in nuclear fuel reprocessing. Its effective use has been restricted in the past by the lack of diverse and reliable process instrumentation and the unavailability of sophisticated software designed for process control. The Integrated Equipment Test (IET) facility was developed by the Consolidated Fuel Reprocessing Program (CFRP) in part to demonstrate new concepts for control of advanced nuclear fuel reprocessing plants. A demonstration of fuel reprocessing equipment automation using advanced instrumentation and a modern, microprocessor-based control system is nearing completion in the facility. This facility provides for the synergistic testing of all chemical process features of a prototypical fuel reprocessing plant that can be attained with unirradiated uranium-bearing feed materials. The unique equipment and mission of the IET facility make it an ideal test bed for automation studies. This effort will provide for the demonstration of the plant automation concept and for the development of techniques for similar applications in a full-scale plant. A set of preliminary recommendations for implementing process automation has been compiled. Some of these concepts are not generally recognized or accepted. The automation work now under way in the IET facility should be useful to others in helping avoid costly mistakes because of the underutilization or misapplication of process automation. 6 figs

  18. A simple spreadsheet-based, MIAME-supportive format for microarray data: MAGE-TAB

    Directory of Open Access Journals (Sweden)

    White Joseph

    2006-11-01

    Background: Sharing of microarray data within the research community has been greatly facilitated by the development of the disclosure and communication standards MIAME and MAGE-ML by the MGED Society. However, the complexity of the MAGE-ML format has made its use impractical for laboratories lacking dedicated bioinformatics support. Results: We propose a simple tab-delimited, spreadsheet-based format, MAGE-TAB, which will become a part of the MAGE microarray data standard and can be used for annotating and communicating microarray data in a MIAME-compliant fashion. Conclusion: MAGE-TAB will enable laboratories without bioinformatics experience or support to manage, exchange and submit well-annotated microarray data in a standard format using a spreadsheet. The MAGE-TAB format is self-contained, and does not require an understanding of MAGE-ML or XML.

  19. Integrating numerical computation into the undergraduate education physics curriculum using spreadsheet excel

    Science.gov (United States)

    Fauzi, Ahmad

    2017-11-01

    Numerical computation has many pedagogical advantages: it develops analytical and problem-solving skills, helps students learn through visualization, and enhances physics education. Unfortunately, numerical computation is not taught to undergraduate physics education students in Indonesia. Incorporating numerical computation into the undergraduate physics education curriculum presents many challenges, chiefly a dense curriculum that makes it difficult to add a new numerical computation course, and the fact that most students have no programming experience. In this research, we used a case study to review how to integrate numerical computation into the undergraduate physics education curriculum. The participants were 54 fourth-semester students of the physics education department. We concluded that numerical computation can be integrated into the undergraduate physics education curriculum using the Excel spreadsheet combined with another course. The results of this research complement studies on how to integrate numerical computation into physics learning using the Excel spreadsheet.

  20. Spreadsheet Implementation of Numerical and Analytical Solutions to Some Classical Partial Differential Equations

    Directory of Open Access Journals (Sweden)

    Mark A Lau

    2016-09-01

    This paper presents the implementation of numerical and analytical solutions of some of the classical partial differential equations using Excel spreadsheets. In particular, the heat equation, the wave equation, and Laplace’s equation are presented herein, since these equations have well-known analytical solutions. The numerical solutions can be easily obtained once the differential equations are discretized via finite differences, using cell formulas to implement the resulting recursive algorithms and other iterative methods such as successive over-relaxation (SOR). The graphing capabilities of spreadsheets can be exploited to enhance the visualization of the solutions to these equations. Furthermore, using Visual Basic for Applications (VBA) can greatly facilitate the implementation of the analytical solutions to these equations, and in the process, one obtains Fourier series approximations to functions governing initial and/or boundary conditions.
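
    As an illustration of the finite-difference recursion the paper implements with cell formulas, here is a minimal sketch of the explicit (FTCS) scheme for the one-dimensional heat equation; grid sizes and the initial pulse are illustrative:

        import numpy as np

        # 1-D heat equation u_t = alpha * u_xx, explicit FTCS scheme: the same
        # recursion a spreadsheet implements with one row per time step.
        alpha, length, nx, nt = 1.0, 1.0, 21, 200
        dx = length / (nx - 1)
        dt = 0.4 * dx * dx / alpha          # stability requires alpha*dt/dx**2 <= 0.5
        r = alpha * dt / dx ** 2

        u = np.zeros(nx)
        u[nx // 2] = 1.0                    # initial condition: a central heat pulse
        for _ in range(nt):
            u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
            u[0] = u[-1] = 0.0              # fixed (Dirichlet) boundary values
        print(np.round(u, 4))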

  1. Computerized cost estimation spreadsheet and cost data base for fusion devices

    International Nuclear Information System (INIS)

    Hamilton, W.R.; Rothe, K.E.

    1985-01-01

    Component design parameters (weight, surface area, etc.) and cost factors are input, and direct and indirect costs are calculated. The cost data base file, derived from actual cost experience within the fusion community and refined to be compatible with the spreadsheet costing approach, is a catalog of cost coefficients, algorithms, and component costs arranged into data modules corresponding to specific components and/or subsystems. Each data module contains engineering, equipment, and installation labor cost data for different configurations and types of the specific component or subsystem. This paper describes the assumptions, definitions, methodology, and architecture incorporated in the development of the cost estimation spreadsheet and cost data base, along with the type of input required and the output format.

  2. Teaching graphical simulations of Fourier series expansion of some periodic waves using spreadsheets

    Science.gov (United States)

    Singh, Iqbal; Kaur, Bikramjeet

    2018-05-01

    The present article demonstrates a way of using an Excel spreadsheet to teach Fourier series expansion in schools and colleges without knowledge of any typical programming language. By using it, a student learns to approximate the partial sum of the first n terms of the Fourier series for some periodic signals, such as square wave, sawtooth wave, half-wave rectifier, and full-wave rectifier signals.
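
    The spreadsheet's column formulas amount to evaluating partial Fourier sums. A minimal sketch for the square-wave case, f(x) = (4/pi) * sum of sin((2m+1)x)/(2m+1); the sampling point and term counts are illustrative:

        import math

        def square_wave_partial_sum(x, n_terms):
            """Partial Fourier sum of a unit square wave of period 2*pi:
            f(x) ~ (4/pi) * sum over m of sin((2m+1)x) / (2m+1)."""
            return (4 / math.pi) * sum(math.sin((2 * m + 1) * x) / (2 * m + 1)
                                       for m in range(n_terms))

        # The approximation sharpens (while the Gibbs overshoot near the jumps
        # persists) as the number of terms grows.
        for n in (1, 3, 10, 50):
            print(n, round(square_wave_partial_sum(math.pi / 2, n), 4))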

  3. Maxine: A spreadsheet for estimating dose from chronic atmospheric radioactive releases

    Energy Technology Data Exchange (ETDEWEB)

    Jannik, Tim [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Bell, Evaleigh [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Dixon, Kenneth [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-07-24

    MAXINE is an EXCEL© spreadsheet used to estimate dose to individuals from routine and accidental atmospheric releases of radioactive materials. MAXINE does not contain an atmospheric dispersion model; rather, doses are estimated using air and ground concentrations as input. Minimal input is required to run the program, and site-specific parameters are used when possible. A complete code description, verification of the models, and a user’s manual have been included.

  4. Spreadsheet decision support model for MK 16 underwater breathing apparatus repair parts inventory management

    OpenAIRE

    Butler, Peter B.

    1994-01-01

    This thesis proposes a spreadsheet-based decision support model for determining the most effective repair parts inventory for the MK 16 Underwater Breathing Apparatus (MK 16). Incorporating U.S. Navy demand information, the model provides the inventory manager the ability to modify repair parts inventories as changes occur to the order and shipping times, tempo of operations, or the number of MK 16 assigned. The thesis explores the current methods of MK 16 repair parts inventory design and re...

  5. Number Theory, Dialogue, and the Use of Spreadsheets in Teacher Education

    Directory of Open Access Journals (Sweden)

    Sergei Abramovich

    2011-04-01

    This paper demonstrates the use of a spreadsheet in teaching topics in elementary number theory. It emphasizes both the power and deficiency of inductive reasoning using a number of historically significant examples. The notion of computational experiment as a modern approach to the teaching of mathematics is discussed. The paper, grounded in a teacher-student dialogue as an instructional method, is a reflection on the author’s work over the years with prospective teachers of secondary mathematics.

  6. SEDMIN - Microsoft Excel™ spreadsheet for calculating fine-grained sedimentary rock mineralogy from bulk geochemical analysis

    Science.gov (United States)

    Kackstaetter, Uwe

    2014-06-01

    Normative mineralogical calculations from the bulk geochemistry of sedimentary rocks are problematic because of variable depositional environments, particle hydraulics, and sedimentary source systems. The development of SEDMIN, a Microsoft Excel™ spreadsheet solution, is a practical attempt at a computational routine focusing specifically on smectite, chlorite, kaolinite, illite, and the ambiguous sericite within various pelitic sedimentary lithologies. While in essence a mathematical approach, the use of statistical evaluation of empirical lithogeochemical data combined with modal analytical procedures yields reasonable geochemical associations, more precise chemical phases, and revised procedural allotment paradigms. Thus, an algorithm using TiO2 as a key to the normative calculation of kaolinite is proposed. Incorporating additional parameters, such as LOI (loss-on-ignition) in conjunction with carbon, sulfur, carbonate, and sulfate, means that clay phases can be determined more accurately than from bulk oxides alone. Even when presented with atypical sample data, the spreadsheet solution is able to accurately predict the predominant clay minerals. Despite some drawbacks, the likely benefit of SEDMIN is the incorporation of its results in classification norms and diagrams indicative of sedimentary lithologies. The "SEDMIN Sedimentary Mineral Calculator.xlsx" spreadsheet can be freely downloaded from http://earthscienceeducation.net/SEDMINSedimentaryMineralCalculator.xlsx.

  7. Spreadsheets Across the Curriculum, 2: Assessing Our Success with Students at Eckerd College

    Directory of Open Access Journals (Sweden)

    Laura Reiser Wetzel

    2011-01-01

    The Spreadsheets Across the Curriculum (SSAC) library consists of activities to reinforce or teach quantitative literacy or mathematical concepts and skills in context. Each SSAC “module” consists of a PowerPoint presentation with embedded Excel spreadsheets. Each student works through a presentation, thinks about the in-context problem, figures out how to solve it mathematically, and builds spreadsheets to calculate and examine answers. To assess the effectiveness of SSAC modules, I surveyed Eckerd College undergraduates in two separate studies. Two undergraduate research assistants and I generated pre- and post-tests for 10 SSAC modules. We hired 21 undergraduates who conducted 62 individual module assessments during their free time in exchange for modest stipends. To complement this initial study, 12 students assessed three modules in the context of an upper-level geology course. In both the individual and in-class experiments, students with a wide variety of academic interests and expertise showed improvements in quantitative and Excel skills. Based on my experiences, I recommend that instructors wishing to use SSAC modules carefully match student ability with module difficulty, use more than one module over the course of a semester, ensure that students have realistic expectations before starting, and facilitate student use in a supervised setting.

  8. INVARIANT PRACTICAL TASKS FOR WORK WITH ELECTRONIC SPREADSHEETS AT THE SECONDARY SCHOOL

    Directory of Open Access Journals (Sweden)

    Л И Карташова

    2016-12-01

    This article gives examples of practical tasks on creating and editing spreadsheets for pupils of the basic school. To consolidate their knowledge and skills in cell formatting, pupils are asked, for example, to create a table in the spreadsheet processor and format it according to a sample shown on the computer monitor, printed on a colour printer, and shared over the local area network as an image. While learning data types, pupils are given tasks to determine and explain the data types of various strings. To learn the specifics of writing formulas, pupils are asked to write various mathematical expressions in the form suitable for use in spreadsheets. The tasks reflect a fundamental, invariant approach to working with spreadsheets, as they do not depend on specific versions of computer programs, so they can be used when studying any spreadsheet processor. Learning activities based on such invariant tasks foster generalized methods of working with numerical information, which allows pupils to form a systematic view of information technologies and to apply them consciously to problem solving.

  9. A Spreadsheet Simulation to Teach Concepts of Sampling Distributions and the Central Limit Theorem

    Directory of Open Access Journals (Sweden)

    Mark H. Haney

    2015-11-01

    This paper presents an interactive spreadsheet simulation model that may be used to help students understand the concept of sampling distributions and the implications of the central limit theorem for sampling distributions. The spreadsheet model simulates an approximation to a sampling distribution by taking 1,000 random samples from a population, calculating the mean of each sample, and then using percentage polygons to display the distribution of the sample means compared to the distribution of the population. A normal probability plot of the sample means is also created as a second tool for understanding the distribution of the sample means. The user may vary the size of the samples taken, and then observe the effects of sample size on the range and shape of the approximated sampling distribution. The spreadsheet model is built without macros or VBA programming, using only standard formulas and tools. The instructor may choose to build the model with students, or simply present it to them and lead them in experimenting with it, depending on the needs of the class.
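
    The same experiment is easy to sketch outside the spreadsheet. Assuming a deliberately skewed (exponential) population, the code below draws repeated samples, computes their means, and shows the spread shrinking roughly as 1/sqrt(n), as the central limit theorem predicts; population size, sample sizes, and seeds are illustrative:

        import random
        import statistics as stats

        def sampling_distribution(population, sample_size, n_samples=1000, seed=3):
            """Means of n_samples random samples: the same experiment the
            spreadsheet runs with 1,000 rows of formulas."""
            rng = random.Random(seed)
            return [stats.mean(rng.choices(population, k=sample_size))
                    for _ in range(n_samples)]

        # Strongly skewed population (exponential); sample means still approach
        # normality, and their spread shrinks roughly as 1/sqrt(n).
        rng = random.Random(1)
        population = [rng.expovariate(1.0) for _ in range(10_000)]
        for n in (2, 10, 30):
            means = sampling_distribution(population, n)
            print(f"n={n:2d}: mean {stats.mean(means):.3f}, sd {stats.stdev(means):.3f}")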

  10. Station Program Note Pull Automation

    Science.gov (United States)

    Delgado, Ivan

    2016-01-01

    Upon commencement of my internship, I was in charge of maintaining the CoFR (Certificate of Flight Readiness) Tool. The tool acquires data from existing Excel workbooks on NASA's and Boeing's databases to create a new spreadsheet listing out all the potential safety concerns for upcoming flights and software transitions. Since the application was written in Visual Basic, I had to learn a new programming language and prepare to handle any malfunctions within the program. Shortly afterwards, I was given the assignment to automate the Station Program Note (SPN) Pull process. I developed an application, in Python, that generated a GUI (Graphical User Interface) that will be used by the International Space Station Safety & Mission Assurance team here at Johnson Space Center. The application will allow its users to download online files with the click of a button, import SPN's based on three different pulls, instantly manipulate and filter spreadsheets, and compare the three sources to determine which active SPN's (Station Program Notes) must be reviewed for any upcoming flights, missions, and/or software transitions. Initially, to perform the NASA SPN pull (one of three), I had created the program to allow the user to login to a secure webpage that stores data, input specific parameters, and retrieve the desired SPN's based on their inputs. However, to avoid any conflicts with sustainment, I altered it so that the user may login and download the NASA file independently. After the user has downloaded the file with the click of a button, I defined the program to check for any outdated or pre-existing files, for successful downloads, to acquire the spreadsheet, convert it from a text file to a comma separated file and finally into an Excel spreadsheet to be filtered and later scrutinized for specific SPN numbers. Once this file has been automatically manipulated to provide only the SPN numbers that are desired, they are stored in a global variable, shown on the GUI, and

  11. Towards Robot Scientists for autonomous scientific discovery.

    Science.gov (United States)

    Sparkes, Andrew; Aubrey, Wayne; Byrne, Emma; Clare, Amanda; Khan, Muhammed N; Liakata, Maria; Markham, Magdalena; Rowland, Jem; Soldatova, Larisa N; Whelan, Kenneth E; Young, Michael; King, Ross D

    2010-01-04

    We review the main components of autonomous scientific discovery, and how they lead to the concept of a Robot Scientist. This is a system which uses techniques from artificial intelligence to automate all aspects of the scientific discovery process: it generates hypotheses from a computer model of the domain, designs experiments to test these hypotheses, runs the physical experiments using robotic systems, analyses and interprets the resulting data, and repeats the cycle. We describe our two prototype Robot Scientists: Adam and Eve. Adam has recently proven the potential of such systems by identifying twelve genes responsible for catalysing specific reactions in the metabolic pathways of the yeast Saccharomyces cerevisiae. This work has been formally recorded in great detail using logic. We argue that the reporting of science needs to become fully formalised and that Robot Scientists can help achieve this. This will make scientific information more reproducible and reusable, and promote the integration of computers in scientific reasoning. We believe the greater automation of both the physical and intellectual aspects of scientific investigations to be essential to the future of science. Greater automation improves the accuracy and reliability of experiments, increases the pace of discovery and, in common with conventional laboratory automation, removes tedious and repetitive tasks from the human scientist.

  12. Advances in synthetic peptides reagent discovery

    Science.gov (United States)

    Adams, Bryn L.; Sarkes, Deborah A.; Finch, Amethist S.; Stratis-Cullum, Dimitra N.

    2013-05-01

    Bacterial display technology offers a number of advantages over competing display technologies (e.g., phage) for the rapid discovery and development of peptides with interactions targeted to materials ranging from biological hazards to inorganic metals. We have previously shown that the discovery of synthetic peptide reagents utilizing bacterial display technology is relatively simple and rapid, making laboratory automation possible. This included extensive study of the protective antigen system of Bacillus anthracis, including the development of discovery, characterization, and computational biology capabilities for in-silico optimization. Although the benefits towards CBD goals are evident, the impact is far-reaching due to our ability to understand and harness peptide interactions that are ultimately extendable to the hybrid biomaterials of the future. In this paper, we describe advances in peptide discovery, including new target systems (e.g., non-biological materials), advanced library development, and clone analysis including integrated reporting.

  13. Automated External Defibrillator

    Science.gov (United States)


  14. Library Automation.

    Science.gov (United States)

    Husby, Ole

    1990-01-01

    The challenges and potential benefits of automating university libraries are reviewed, with special attention given to cooperative systems. Aspects discussed include database size, the role of the university computer center, storage modes, multi-institutional systems, resource sharing, cooperative system management, networking, and intelligent…

  15. Automated DBS microsampling, microscale automation and microflow LC-MS for therapeutic protein PK.

    Science.gov (United States)

    Zhang, Qian; Tomazela, Daniela; Vasicek, Lisa A; Spellman, Daniel S; Beaumont, Maribel; Shyong, BaoJen; Kenny, Jacqueline; Fauty, Scott; Fillgrove, Kerry; Harrelson, Jane; Bateman, Kevin P

    2016-04-01

    Reduce animal usage for discovery-stage PK studies for biologics programs using microsampling-based approaches and microscale LC-MS. We report the development of an automated DBS-based serial microsampling approach for studying the PK of therapeutic proteins in mice. Automated sample preparation and microflow LC-MS were used to enable assay miniaturization and improve overall assay throughput. Serial sampling of mice was possible over the full 21-day study period with the first six time points over 24 h being collected using automated DBS sample collection. Overall, this approach demonstrated comparable data to a previous study using single mice per time point liquid samples while reducing animal and compound requirements by 14-fold. Reduction in animals and drug material is enabled by the use of automated serial DBS microsampling for mice studies in discovery-stage studies of protein therapeutics.

  16. Validation and configuration management plan for the KE basins KE-PU spreadsheet code

    International Nuclear Information System (INIS)

    Harris, R.A.

    1996-01-01

    This report documents the spreadsheet KE-PU software that is used to verify compliance with the Operational Safety Requirement and Process Standard limit on the amount of plutonium in the KE-Basin sandfilter backwash pit. Included are: a summary of the verification of the method and technique used in KE-PU, which were documented elsewhere; the requirements, plans, and results of validation tests that confirm the proper functioning of the software; the procedures and approvals required to make changes to the software; and the method used to maintain configuration control over the software.

  17. Randomization for clinical research: an easy-to-use spreadsheet method.

    Science.gov (United States)

    Padhye, Nikhil S; Cron, Stanley G; Gusick, Gary M; Hamlin, Shannan K; Hanneman, Sandra K

    2009-10-01

    In this article, we illustrate a new method for random selection and random assignment that we developed in a pilot study for a randomized clinical trial. The randomization database is supported by a commonly available spreadsheet. Formulas were written for randomizing participants and for creating a "shadow" system to verify integrity of the randomization. Advantages of this method are that it is easy to use, effective, and portable, allowing it to be shared among multiple investigators at multiple study sites. Clinical researchers may find the method useful for research projects that are pilot studies or conducted with limited funding.
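
    The essential idea (reproducible assignment plus an independent way to verify it) can be sketched as follows. A seeded pseudo-random block randomization stands in for the spreadsheet formulas; the block size, arm labels, and seed are illustrative, not the authors' design:

        import random

        def block_randomize(ids, arms=("A", "B"), block_size=4, seed=2023):
            """Reproducible blocked random assignment. Rerunning with the same
            seed regenerates the identical allocation, which plays the role of
            the 'shadow' copy used to verify the randomization's integrity."""
            rng = random.Random(seed)
            per_block = block_size // len(arms)
            assignments = []
            for _ in range(0, len(ids), block_size):
                block = list(arms) * per_block
                rng.shuffle(block)              # balanced arms within every block
                assignments.extend(block)
            return dict(zip(ids, assignments))

        allocation = block_randomize([f"P{i:03d}" for i in range(1, 13)])
        for pid, arm in allocation.items():
            print(pid, arm)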

  18. Using Spreadsheets in Geoscience Education: Survey and Annotated Bibliography of Articles in the Journal of Geoscience Education through 2003

    Directory of Open Access Journals (Sweden)

    Beth Fratesi

    2005-10-01

    Thirty-eight papers published in the Journal of Geoscience Education (JGE) from 1989 through 2003 explicitly use or recommend the use of spreadsheets as part of classroom or field exercises, projects, or entire courses. Many of the papers include the spreadsheets, and some include the equations. The papers demonstrate how spreadsheets allow students to explore a subject through problem-oriented, interactive, and quantitative exercises. We provide an annotated bibliography and classify the 38 JGE papers by spreadsheet use, mathematics skill area, and geologic subdiscipline. Our discussion of five selected articles — abundance of elements in the Earth’s crust; directional properties of inclined strata; U-shaped valleys scoured by mountain glaciers; the Laplace Equation for groundwater flow; the location of our solar system within the Milky Way galaxy — demonstrates the huge breadth of topics in the earth science curriculum. The 38 papers collectively, and the five examples individually, make the point that spreadsheets developed for geoscience education can provide context for principles taught in courses of other disciplines, including mathematics. Our classification by mathematics skill area follows the content standards of the National Council of Teachers of Mathematics (USA) and may prove useful for educators seeking problems for skills-based assessment.

  19. Automated Motivic Analysis

    DEFF Research Database (Denmark)

    Lartillot, Olivier

    2016-01-01

    Motivic analysis provides very detailed understanding of musical compositions, but is also particularly difficult to formalize and systematize. A computational automation of the discovery of motivic patterns cannot be reduced to a mere extraction of all possible sequences of descriptions. The systematic approach inexorably leads to a proliferation of redundant structures that needs to be addressed properly. Global filtering techniques cause a drastic elimination of interesting structures that damages the quality of the analysis. On the other hand, a selection of closed patterns allows for lossless compression. The structural complexity resulting from successive repetitions of patterns can be controlled through a simple modelling of cycles. Generally, motivic patterns cannot always be defined solely as sequences of descriptions in a fixed set of dimensions: throughout the descriptions...

  20. Ladtap XL Version 2017: A Spreadsheet For Estimating Dose Resulting From Aqueous Releases

    Energy Technology Data Exchange (ETDEWEB)

    Minter, K. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Jannik, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-06-15

    LADTAP XL© is an EXCEL© spreadsheet used to estimate dose to offsite individuals and populations resulting from routine and accidental releases of radioactive materials to the Savannah River. LADTAP XL© contains two worksheets: LADTAP and IRRIDOSE. The LADTAP worksheet estimates dose for environmental pathways including external exposure resulting from recreational activities on the Savannah River and internal exposure resulting from ingestion of water, fish, and invertebrates originating from the Savannah River. IRRIDOSE estimates offsite dose to individuals and populations from irrigation of foodstuffs with contaminated water from the Savannah River. In 2004, a complete description of the LADTAP XL© code and an associated user’s manual were documented in LADTAP XL©: A Spreadsheet for Estimating Dose Resulting from Aqueous Release (WSRC-TR-2004-00059), and revised input parameters, dose coefficients, and radionuclide decay constants were incorporated into LADTAP XL© Version 2013 (SRNL-STI-2011-00238). LADTAP XL© Version 2017 is a slight modification of Version 2013 with minor changes made for more user-friendly parameter inputs and organization, updates to the time conversion factors used within the dose calculations, and a fix for an issue with the expected time build-up parameter referenced within the population shoreline dose calculations. This manual has been produced to update the code description, verify the models, and provide an updated user’s manual. LADTAP XL© Version 2017 has been verified by Minter (2017) and is ready for use at the Savannah River Site (SRS).

  1. USE OF ELECTRONIC EDUCATIONAL RESOURCES WHEN TRAINING IN WORK WITH SPREADSHEETS

    Directory of Open Access Journals (Sweden)

    Х А Гербеков

    2017-12-01

    Today, tools for supporting training courses based on the capabilities of information and communication technologies are well developed. Electronic textbooks and self-study manuals have been created for practically all areas of study and all subjects. Nevertheless, the industry of computer-based educational and methodical materials continues to develop actively and to reach more and more areas of development and deployment. In this regard, the problem of developing electronic educational resources adequate to modern educational requirements becomes increasingly urgent. The creation and organization of training courses using electronic educational resources, in particular on the basis of Internet technologies, remains a difficult methodical task. This article considers questions connected with the development of electronic educational resources for studying the content line “Information technologies” of a school informatics course, in particular for studying spreadsheets. It also analyses the content of the school course and of the unified state examination from the point of view of the tasks corresponding to this content line: mastering the technology of information processing in spreadsheets and the methods of visualizing data by means of charts and graphs.

  2. Peningkatan Hasil Belajar Operasional Spreadsheet Jenis dan Fungsi dengan Rumus Statistik Akuntansi melalui Demonstrasi dan Presentasi

    Directory of Open Access Journals (Sweden)

    NURBAITI SALPIDA GINAYANTI

    2016-08-01

    The research was aimed at finding out the effectiveness of demonstration and presentation models in improving students’ learning outcomes in operating spreadsheet types and functions with statistical formulas in class X Accounting 1 at SMKN 48 in the 2014/2015 academic year. The research was conducted from August to November 2014. The method was classroom action research (PTK), conducted in two cycles, with demonstration and presentation used as the learning model in each cycle. A cycle consisted of three meetings, and a post-test was given at the third meeting. The research indicators were achieved: in the second cycle, the proportion of students attaining the highest scores was 97.45% and the average score was 80.79. In conclusion, demonstration and presentation can improve students’ learning outcomes in operating spreadsheet types and functions with statistical formulas when implemented appropriately in class X Accounting 1 at SMKN 48 Jakarta.

  3. PLAN: a web platform for automating high-throughput BLAST searches and for managing and mining results

    Directory of Open Access Journals (Sweden)

    Zhao Xuechun

    2007-02-01

    Background: BLAST searches are widely used for sequence alignment. The search results are commonly adopted for various functional and comparative genomics tasks such as annotating unknown sequences, investigating gene models and comparing two sequence sets. Advances in sequencing technologies pose challenges for high-throughput analysis of large-scale sequence data. A number of programs and hardware solutions exist for efficient BLAST searching, but there is a lack of generic software solutions for mining and personalized management of the results. Systematically reviewing the results and identifying information of interest remains tedious and time-consuming. Results: Personal BLAST Navigator (PLAN) is a versatile web platform that helps users to carry out various personalized pre- and post-BLAST tasks, including: (1) query and target sequence database management, (2) automated high-throughput BLAST searching, (3) indexing and searching of results, (4) filtering results online, (5) managing results of personal interest in favorite categories, and (6) automated sequence annotation (such as NCBI NR and ontology-based annotation). PLAN integrates, by default, the Decypher hardware-based BLAST solution provided by Active Motif Inc., with greatly improved efficiency over conventional BLAST software. BLAST results are visualized in spreadsheets and graphs and are full-text searchable. BLAST results and sequence annotations can be exported, in part or in full, in various formats including Microsoft Excel and FASTA. Sequences and BLAST results are organized in projects, the data publication levels of which are controlled by the registered project owners. In addition, all analytical functions are provided to public users without registration. Conclusion: PLAN has proved a valuable addition to the community for automated high-throughput BLAST searches and, more importantly, for knowledge discovery, management and sharing based on sequence alignment results.

  4. PLAN: a web platform for automating high-throughput BLAST searches and for managing and mining results.

    Science.gov (United States)

    He, Ji; Dai, Xinbin; Zhao, Xuechun

    2007-02-09

    BLAST searches are widely used for sequence alignment. The search results are commonly adopted for various functional and comparative genomics tasks such as annotating unknown sequences, investigating gene models and comparing two sequence sets. Advances in sequencing technologies pose challenges for high-throughput analysis of large-scale sequence data. A number of programs and hardware solutions exist for efficient BLAST searching, but there is a lack of generic software solutions for mining and personalized management of the results. Systematically reviewing the results and identifying information of interest remains tedious and time-consuming. Personal BLAST Navigator (PLAN) is a versatile web platform that helps users to carry out various personalized pre- and post-BLAST tasks, including: (1) query and target sequence database management, (2) automated high-throughput BLAST searching, (3) indexing and searching of results, (4) filtering results online, (5) managing results of personal interest in favorite categories, (6) automated sequence annotation (such as NCBI NR and ontology-based annotation). PLAN integrates, by default, the Decypher hardware-based BLAST solution provided by Active Motif Inc. with a greatly improved efficiency over conventional BLAST software. BLAST results are visualized by spreadsheets and graphs and are full-text searchable. BLAST results and sequence annotations can be exported, in part or in full, in various formats including Microsoft Excel and FASTA. Sequences and BLAST results are organized in projects, the data publication levels of which are controlled by the registered project owners. In addition, all analytical functions are provided to public users without registration. PLAN has proved a valuable addition to the community for automated high-throughput BLAST searches, and, more importantly, for knowledge discovery, management and sharing based on sequence alignment results. The PLAN web interface is platform
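    PLAN drives a hardware BLAST appliance, but the core pattern of automated high-throughput searching can be sketched with the standard NCBI BLAST+ command-line tools. A minimal sketch, assuming `blastn` is installed and a database named `targets` has been built beforehand with `makeblastdb` (both assumptions; this is not PLAN's own code):

```python
# Minimal sketch of batch BLAST automation with NCBI BLAST+ (not PLAN's code).
# Assumes `blastn` is on the PATH and a database named "targets" was built
# beforehand with `makeblastdb`.
import subprocess
from pathlib import Path

def run_blast(query_fasta: Path, out_tsv: Path, db: str = "targets") -> None:
    """Run blastn and store tabular results (outfmt 6) for later mining."""
    subprocess.run(
        ["blastn",
         "-query", str(query_fasta),
         "-db", db,
         "-outfmt", "6",      # tab-separated: qseqid sseqid pident length ...
         "-evalue", "1e-5",
         "-out", str(out_tsv)],
        check=True,
    )

Path("results").mkdir(exist_ok=True)
for fasta in sorted(Path("queries").glob("*.fasta")):
    run_blast(fasta, Path("results") / (fasta.stem + ".tsv"))
```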

  5. On the use of a standard spreadsheet to model physical systems in school teaching*

    Science.gov (United States)

    Quale, Andreas

    2012-05-01

    In the teaching of physics at upper secondary school level (K10-K12), students are generally taught to solve problems analytically, i.e. using the dynamics describing a system (typically in the form of differential equations) to compute its evolution in time, e.g. the motion of a body along a straight line or in a plane. This reduces the scope of problems, i.e. the kind of problems that are within students' capabilities. To make the tasks mathematically solvable, one is restricted to very idealized situations; more realistic problems are too difficult (or even impossible) to handle analytically with the mathematical abilities that may be expected from students at this level. For instance, ordinary ballistic trajectories under the action of gravity, when air resistance is included, have been 'out of reach'; in school textbooks such trajectories are generally assumed to take place in a vacuum. Another example is that according to Newton's law of universal gravitation satellites will in general move around a large central body in elliptical orbits, but students can only deal with the special case where the orbit is circular, thus precluding (for example) a verification and discussion of Kepler's laws. It is shown that standard spreadsheet software offers a tool that can handle many such realistic situations in a uniform way, and display the results both numerically and graphically on a computer screen, quite independently of whether the formal description of the physical system itself is 'mathematically tractable'. The method employed, which is readily accessible to high school students, is to perform a numerical integration of the equations of motion, exploiting the spreadsheet's capability of successive iterations. The software is used to model and study motion of bodies in external force fields; specifically, ballistic trajectories in a homogeneous gravity field with air resistance and satellite motion in a centrally symmetric gravitational field.
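    The spreadsheet method described here is just repeated application of a finite-difference step. A minimal sketch of the same idea outside a spreadsheet: an Euler-Cromer integration of a ballistic trajectory with quadratic air resistance (all parameter values illustrative):

```python
# Euler-Cromer integration of projectile motion with quadratic air drag,
# mirroring what each spreadsheet row computes from the row above it.
import math

g = 9.81          # gravitational acceleration, m/s^2
k = 0.02          # drag constant (rho*Cd*A / 2m), 1/m -- illustrative value
dt = 0.01         # time step, s

x, y = 0.0, 0.0
v0 = 50.0
angle = math.radians(45)
vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)

while y >= 0.0:
    speed = math.hypot(vx, vy)
    # drag opposes velocity, with magnitude proportional to speed^2
    vx += (-k * speed * vx) * dt
    vy += (-g - k * speed * vy) * dt
    x += vx * dt
    y += vy * dt

print(f"Range with air resistance: {x:.1f} m")
```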

  6. ThinTool: a spreadsheet model to evaluate fuel reduction thinning cost, net energy output, and nutrient impacts

    Science.gov (United States)

    Sang-Kyun Han; Han-Sup Han; William J. Elliot; Edward M. Bilek

    2017-01-01

    We developed a spreadsheet-based model, named ThinTool, to evaluate the cost of mechanical fuel reduction thinning including biomass removal, to predict net energy output, and to assess nutrient impacts from thinning treatments in northern California and southern Oregon. A combination of literature reviews, field-based studies, and contractor surveys was used to...

  7. The Euler’s Graphical User Interface Spreadsheet Calculator for Solving Ordinary Differential Equations by Visual Basic for Application Programming

    Science.gov (United States)

    Gaik Tay, Kim; Cheong, Tau Han; Foong Lee, Ming; Kek, Sie Long; Abdul-Kahar, Rosmila

    2017-08-01

    In previous work on a Euler's-method spreadsheet calculator for solving an ordinary differential equation, Visual Basic for Applications (VBA) programming was used, but no graphical user interface was developed to capture user input. This weakness can confuse users, since input and output are displayed in the same worksheet. Besides, the existing Euler's spreadsheet calculator is not interactive: there is no prompt message if a mistake is made when entering the parameters, and there are no instructions to guide users in entering the derivative function. Hence, in this paper we address these limitations by developing a user-friendly and interactive graphical user interface that captures users' input with instructions and interactive error prompts, implemented in VBA. This Euler's graphical user interface spreadsheet calculator does not act as a black box: users can click on any cell in the worksheet to see the formula used to implement the numerical scheme. In this way, it can enhance self-directed and life-long learning in implementing the numerical scheme in a spreadsheet and, later, in any programming language.
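    For reference, the numerical scheme the calculator implements is the forward Euler update y_{n+1} = y_n + h·f(x_n, y_n). A minimal Python sketch (the example equation and step size are arbitrary illustrations):

```python
# Forward Euler for y' = f(x, y): the update each spreadsheet row performs.
def euler(f, x0: float, y0: float, h: float, n_steps: int):
    """Return lists of x and y values from n_steps forward-Euler updates."""
    xs, ys = [x0], [y0]
    for _ in range(n_steps):
        y0 = y0 + h * f(x0, y0)   # y_{n+1} = y_n + h * f(x_n, y_n)
        x0 = x0 + h
        xs.append(x0)
        ys.append(y0)
    return xs, ys

# Example: y' = x + y, y(0) = 1, solved on [0, 1] with h = 0.1.
xs, ys = euler(lambda x, y: x + y, 0.0, 1.0, 0.1, 10)
print(f"y(1) ≈ {ys[-1]:.4f}")   # exact solution: 2e - 2 ≈ 3.4366
```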

  8. Teaching Students to Model Neural Circuits and Neural Networks Using an Electronic Spreadsheet Simulator. Microcomputing Working Paper Series.

    Science.gov (United States)

    Hewett, Thomas T.

    There are a number of areas in psychology where an electronic spreadsheet simulator can be used to study and explore functional relationships among a number of parameters. For example, when dealing with sensation, perception, and pattern recognition, it is sometimes desirable for students to understand both the basic neurophysiology and the…

  9. Pre-service teachers’ TPACK competencies for spreadsheet integration: insights from a mathematics-specific instructional technology course

    NARCIS (Netherlands)

    Agyei, D.D.; Voogt, J.M.

    2015-01-01

    This article explored the impact of strategies applied in a mathematics instructional technology course for developing technology integration competencies, in particular in the use of spreadsheets, in pre-service teachers. In this respect, 104 pre-service mathematics teachers from a teacher training

  10. A Novel Real-Time Data Acquisition Using an Excel Spreadsheet in Pendulum Experiment Tool with Light-Based Timer

    Science.gov (United States)

    Adhitama, Egy; Fauzi, Ahmad

    2018-01-01

    In this study, a pendulum experimental tool with a light-based timer has been developed to measure the period of a simple pendulum. The obtained data was automatically recorded in an Excel spreadsheet. The intensity of monochromatic light, sensed by a 3DU5C phototransistor, dynamically changes as the pendulum swings. The changed intensity varies…

  11. Teaching Simulation and Computer-Aided Separation Optimization in Liquid Chromatography by Means of Illustrative Microsoft Excel Spreadsheets

    Science.gov (United States)

    Fasoula, S.; Nikitas, P.; Pappa-Louisi, A.

    2017-01-01

    A series of Microsoft Excel spreadsheets were developed to simulate the process of separation optimization under isocratic and simple gradient conditions. The optimization procedure is performed in a stepwise fashion using simple macros for an automatic application of this approach. The proposed optimization approach involves modeling of the peak…

  12. Autonomous Systems: Habitat Automation

    Data.gov (United States)

    National Aeronautics and Space Administration — The Habitat Automation Project Element within the Autonomous Systems Project is developing software to automate the operation of habitats and other spacecraft. This...

  13. An Automation Planning Primer.

    Science.gov (United States)

    Paynter, Marion

    1988-01-01

    This brief planning guide for library automation incorporates needs assessment and evaluation of options to meet those needs. A bibliography of materials on automation planning and software reviews, library software directories, and library automation journals is included. (CLB)

  14. Methodology for the National Water Savings Model and Spreadsheet Tool Commercial/Institutional

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Long, Tim [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Alison [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Melody, Moya [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-01-01

    Lawrence Berkeley National Laboratory (LBNL) has developed a mathematical model to quantify the water and monetary savings attributable to the United States Environmental Protection Agency’s (EPA’s) WaterSense labeling program for commercial and institutional products. The National Water Savings–Commercial/Institutional (NWS-CI) model is a spreadsheet tool with which the EPA can evaluate the success of its program for encouraging buyers in the commercial and institutional (CI) sectors to purchase more water-efficient products. WaterSense has begun by focusing on three water-using products commonly used in the CI sectors: flushometer valve toilets, urinals, and pre-rinse spray valves. To estimate the savings attributable to WaterSense for each of the three products, LBNL applies an accounting method to national product shipments and lifetimes to estimate the installed stock of each product.
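    The shipments-and-lifetimes accounting alluded to here can be sketched simply: units shipped in past years that are still within their service lifetime make up the installed stock, and each efficient unit in the stock contributes its per-unit savings. A minimal sketch with made-up numbers (this is not the NWS-CI model itself):

```python
# Toy shipments/lifetime accounting sketch (illustrative, not NWS-CI).
# Stock in year t = sum of shipments from years still within the lifetime.
SHIPMENTS = {2008: 100_000, 2009: 120_000, 2010: 150_000, 2011: 170_000}
LIFETIME_YEARS = 3                  # assumed service life
SAVINGS_PER_UNIT_GAL = 1_200        # assumed annual water savings per unit

def stock(year: int) -> int:
    """Units shipped within the last LIFETIME_YEARS still in service."""
    return sum(units for y, units in SHIPMENTS.items()
               if 0 <= year - y < LIFETIME_YEARS)

for year in (2010, 2011):
    s = stock(year)
    print(f"{year}: stock={s:,} units, "
          f"savings={s * SAVINGS_PER_UNIT_GAL:,} gal/yr")
```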

  15. ISA-TAB-Nano: A Specification for Sharing Nanomaterial Research Data in Spreadsheet-based Format

    Science.gov (United States)

    2013-01-01

    Background and motivation: The high-throughput genomics communities have been successfully using standardized spreadsheet-based formats to capture and share data within labs and among public repositories. The nanomedicine community has yet to adopt similar standards to share the diverse and multi-dimensional types of data (including metadata) pertaining to the description and characterization of nanomaterials. Owing to the lack of standardization in representing and sharing nanomaterial data, most of the data currently shared via publications and data resources are incomplete, poorly integrated, and not suitable for meaningful interpretation and re-use. Specifically, in its current state, data cannot be effectively utilized for the development of predictive models that will inform the rational design of nanomaterials. Results: We have developed a specification called ISA-TAB-Nano, which comprises four spreadsheet-based file formats for representing and integrating various types of nanomaterial data. Three file formats (Investigation, Study, and Assay files) have been adapted from the established ISA-TAB specification, while the Material file format was developed de novo to more readily describe the complexity of nanomaterials and associated small molecules. In this paper, we discuss the main features of each file format and how to use them for sharing nanomaterial descriptions and assay metadata. Conclusion: The ISA-TAB-Nano file formats provide a general and flexible framework to record and integrate nanomaterial descriptions, assay data (metadata and endpoint measurements) and protocol information. Like ISA-TAB, ISA-TAB-Nano supports the use of ontology terms to promote standardized descriptions and to facilitate search and integration of the data. The ISA-TAB-Nano specification has been submitted as an ASTM work item to obtain community feedback and to provide a nanotechnology data-sharing standard for public development and adoption.

  16. Topology Discovery Using Cisco Discovery Protocol

    OpenAIRE

    Rodriguez, Sergio R.

    2009-01-01

    In this paper we address the problem of discovering network topology in proprietary networks. Namely, we investigate topology discovery in Cisco-based networks. Cisco devices run Cisco Discovery Protocol (CDP) which holds information about these devices. We first compare properties of topologies that can be obtained from networks deploying CDP versus Spanning Tree Protocol (STP) and Management Information Base (MIB) Forwarding Database (FDB). Then we describe a method of discovering topology ...
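    A common way to exploit CDP for topology discovery is to collect `show cdp neighbors detail` output from each device and turn the (local device, neighbor) pairs into graph edges. A minimal sketch, assuming the command output has already been captured to one text file per device; the `Device ID:` regular expression is an assumption about the exact CLI output format:

```python
# Sketch: build a topology graph from captured `show cdp neighbors detail`
# output, one text file per device. The "Device ID:" pattern is an assumption
# about the CLI output format and may need adjusting per IOS version.
import re
from collections import defaultdict
from pathlib import Path

DEVICE_ID = re.compile(r"^Device ID:\s*(\S+)", re.MULTILINE)

topology = defaultdict(set)            # adjacency list: device -> neighbors
for dump in Path("cdp_dumps").glob("*.txt"):
    local = dump.stem                  # file named after the local device
    for neighbor in DEVICE_ID.findall(dump.read_text()):
        topology[local].add(neighbor)

for device, neighbors in sorted(topology.items()):
    print(f"{device}: {', '.join(sorted(neighbors))}")
```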

  17. Automated Budget System -

    Data.gov (United States)

    Department of Transportation — The Automated Budget System (ABS) automates management and planning of the Mike Monroney Aeronautical Center (MMAC) budget by providing enhanced capability to plan,...

  18. Automation 2017

    CERN Document Server

    Zieliński, Cezary; Kaliczyńska, Małgorzata

    2017-01-01

    This book consists of papers presented at Automation 2017, an international conference held in Warsaw from March 15 to 17, 2017. It discusses research findings associated with the concepts behind INDUSTRY 4.0, with a focus on offering a better understanding of and promoting participation in the Fourth Industrial Revolution. Each chapter presents a detailed analysis of a specific technical problem, in most cases followed by a numerical analysis, simulation and description of the results of implementing the solution in a real-world context. The theoretical results, practical solutions and guidelines presented are valuable for both researchers working in the area of engineering sciences and practitioners looking for solutions to industrial problems.

  19. Marketing automation

    Directory of Open Access Journals (Sweden)

    TODOR Raluca Dania

    2017-01-01

    The automation of the marketing process seems, nowadays, to be the only way to cope with the major changes brought by the fast evolution of technology and the continuous increase in supply and demand. In order to achieve the desired marketing results, businesses have to employ digital marketing and communication services. These services are efficient and measurable thanks to the marketing technology used to track, score and implement each campaign. Due to technical progress, the fragmentation of marketing, and the demand for customized products and services on one side, and the need to maintain a constructive dialogue with customers, respond immediately and flexibly, and measure investments and results on the other, the classical marketing approach has changed and continues to improve substantially.

  20. Polar Domain Discovery with Sparkler

    Science.gov (United States)

    Duerr, R.; Khalsa, S. J. S.; Mattmann, C. A.; Ottilingam, N. K.; Singh, K.; Lopez, L. A.

    2017-12-01

    The scientific web is vast and ever growing. It encompasses millions of textual, scientific and multimedia documents describing research in a multitude of scientific streams. Most of these documents are hidden behind forms which require user action to retrieve, and thus can't be directly accessed by content crawlers. These documents are hosted on web servers across the world, most often on outdated hardware and network infrastructure. Hence it is difficult and time-consuming to aggregate documents from the scientific web, especially those relevant to a specific domain, and generating meaningful domain-specific insights is currently difficult. We present an automated discovery system (Figure 1) using Sparkler, an open-source, extensible, horizontally scalable crawler which facilitates high-throughput and focused crawling of documents pertinent to a particular domain, such as information about polar regions. With this set of highly domain-relevant documents, we show that it is possible to answer analytical questions about that domain. Our domain discovery algorithm leverages prior domain knowledge to reach out to commercial/scientific search engines to generate seed URLs. Subject matter experts then annotate these seed URLs manually on a scale from highly relevant to irrelevant. We leverage this annotated dataset to train a machine learning model which predicts the 'domain relevance' of a given document, and we extend Sparkler with this model to focus crawling on documents relevant to that domain. Sparkler avoids disruption of service by (1) partitioning URLs by hostname, such that every node gets a different host to crawl, and (2) inserting delays between subsequent requests. With Wrangler, an NSF-funded supercomputer, we scaled our domain discovery pipeline to crawl about 200k polar-specific documents from the scientific web within a day.
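    The 'domain relevance' model described here (trained on expert-annotated seed pages, then used to score newly fetched ones) can be sketched with a standard text classifier. A minimal sketch using scikit-learn; the training snippets, labels, and threshold are placeholders, and Sparkler's actual model is not reproduced:

```python
# Sketch of a domain-relevance scorer for focused crawling (illustrative;
# not Sparkler's implementation). Expert-labeled pages train a classifier
# that predicts whether a newly crawled page is worth keeping/expanding.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder annotated corpus: 1 = relevant to the polar domain, 0 = not.
docs = ["sea ice extent in the Arctic basin",
        "permafrost thaw and polar carbon flux",
        "quarterly earnings call transcript",
        "celebrity gossip and entertainment news"]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(docs, labels)

page = "new measurements of Antarctic ice shelf melt"
score = model.predict_proba([page])[0, 1]   # P(relevant)
if score > 0.5:                             # threshold is a tuning choice
    print(f"keep page and expand crawl frontier (score={score:.2f})")
```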

  1. The State of the Art in Library Discovery 2010

    Science.gov (United States)

    Breeding, Marshall

    2010-01-01

    Resource discovery tops the charts as the foremost issue within the realm of library automation. As a new year commences, the author sees a more pressing need to accelerate the pace with which libraries deliver content and services in ways that users will find compelling, relevant, and convenient. The evolution of the web advances relentlessly,…

  2. Quimiometria II: planilhas eletrônicas para cálculos de planejamentos experimentais, um tutorial Chemometrics II: spreadsheets for experimental design calculations, a tutorial

    Directory of Open Access Journals (Sweden)

    Reinaldo F. Teófilo

    2006-04-01

    This work describes, through examples, a simple way to carry out experimental design calculations using spreadsheets. The aim of this tutorial is to introduce an alternative to sophisticated commercial programs, which are normally too complex in data input and output. An overview of the principal methods is also briefly presented. The spreadsheets are suitable for handling different types of computations, such as screening procedures applying factorial designs and optimization procedures based on response surface methodology. Furthermore, the spreadsheets are sufficiently versatile to be adapted to specific experimental designs.
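    The screening designs mentioned here are also easy to generate programmatically. A minimal sketch producing a two-level full factorial design matrix in coded units (−1/+1), the same table a spreadsheet template would hold:

```python
# Two-level full factorial design matrix in coded units (-1/+1) --
# the table at the heart of a screening-design spreadsheet.
from itertools import product

def full_factorial(n_factors: int):
    """All 2**n combinations of low (-1) and high (+1) factor levels."""
    return list(product((-1, 1), repeat=n_factors))

for run, levels in enumerate(full_factorial(3), start=1):
    print(f"run {run}: {levels}")
# run 1: (-1, -1, -1) ... run 8: (1, 1, 1)
```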

  3. Can automation in radiotherapy reduce costs?

    Science.gov (United States)

    Massaccesi, Mariangela; Corti, Michele; Azario, Luigi; Balducci, Mario; Ferro, Milena; Mantini, Giovanna; Mattiucci, Gian Carlo; Valentini, Vincenzo

    2015-01-01

    Computerized automation is likely to play an increasingly important role in radiotherapy. The objective of this study was to report the results of the first part of a program to implement a model for economic evaluation based on the micro-costing method. To test the efficacy of the model, the financial impact of the introduction of an automation tool was estimated; a single- and multi-center validation of the model by prospective data collection is planned as the second step of the program. The model was implemented using an interactive spreadsheet (Microsoft Excel, 2010). The variables to be included were identified across three components: productivity, staff, and equipment. To calculate staff requirements, the workflow of the Gemelli ART center was mapped out and relevant workload measures were defined. Profit and loss, productivity and staffing were identified as significant outcomes, with results presented in terms of earnings before interest and taxes (EBIT). Three different scenarios were hypothesized: the baseline situation at Gemelli ART (scenario 1); reduction by 2 minutes of the average duration of treatment fractions (scenario 2); and increased incidence of advanced treatment modalities (scenario 3). Using the model, predicted EBIT values for each scenario were calculated across a period of eight years (from 2015 to 2022). For both scenarios 2 and 3, costs are expected to increase slightly compared to the baseline, mainly due to a small increase in clinical personnel costs. However, in both cases EBIT values are more favorable than in the baseline situation (EBIT values: scenario 1, 27%; scenario 2, 30%; scenario 3, 28% of revenues). A model based on the micro-costing method was able to estimate the financial consequences of the introduction of an automation tool in our radiotherapy department. A prospective collection of data at Gemelli ART and in a consortium of centers is currently under way to prospectively validate the model.

  4. Both Automation and Paper.

    Science.gov (United States)

    Purcell, Royal

    1988-01-01

    Discusses the concept of a paperless society and the current situation in library automation. Various applications of automation and telecommunications are addressed, and future library automation is considered. Automation at the Monroe County Public Library in Bloomington, Indiana, is described as an example. (MES)

  5. Academic Drug Discovery Centres

    DEFF Research Database (Denmark)

    Kirkegaard, Henriette Schultz; Valentin, Finn

    2014-01-01

    Academic drug discovery centres (ADDCs) are seen as one of the solutions to fill the innovation gap in early drug discovery, which has proven challenging for previous organisational models. Prior studies of ADDCs have identified the need to analyse them from the angle of their economic and organisational context, and of their performance.

  6. Mass spectrometry for protein quantification in biomarker discovery.

    Science.gov (United States)

    Wang, Mu; You, Jinsam

    2012-01-01

    Major technological advances have made proteomics an extremely active field for biomarker discovery in recent years due primarily to the development of newer mass spectrometric technologies and the explosion in genomic and protein bioinformatics. This leads to an increased emphasis on larger scale, faster, and more efficient methods for detecting protein biomarkers in human tissues, cells, and biofluids. Most current proteomic methodologies for biomarker discovery, however, are not highly automated and are generally labor-intensive and expensive. More automation and improved software programs capable of handling a large amount of data are essential to reduce the cost of discovery and to increase throughput. In this chapter, we discuss and describe mass spectrometry-based proteomic methods for quantitative protein analysis.

  7. MrBUMP: an automated pipeline for molecular replacement

    OpenAIRE

    Keegan, Ronan M.; Winn, Martyn D.

    2007-01-01

    A novel automation pipeline for macromolecular structure solution by molecular replacement is described. There is a special emphasis on the discovery and preparation of a large number of search models, all of which can be passed to the core molecular-replacement programs. For routine molecular-replacement problems, the pipeline automates what a crystallographer might do and its value is simply one of convenience. For more difficult cases, the pipeline aims to discover the particular template ...

  8. An Excel Spreadsheet Model for States and Districts to Assess the Cost-Benefit of School Nursing Services.

    Science.gov (United States)

    Wang, Li Yan; O'Brien, Mary Jane; Maughan, Erin D

    2016-11-01

    This paper describes a user-friendly Excel spreadsheet model and two data collection instruments constructed by the authors to help states and districts perform cost-benefit analyses of school nursing services delivered by full-time school nurses. Prior to applying the model, states or districts need to collect data using two forms: the "Daily Nurse Data Collection Form" and the "Teacher Survey." The former is used to record daily nursing activities, including the number of student health encounters, number of medications administered, number of student early dismissals, and number of medical procedures performed. The latter is used to obtain estimates of the time teachers spend addressing student health issues. Once inputs are entered in the model, outputs are automatically calculated, including program costs, total benefits, net benefits, and the benefit-cost ratio. The spreadsheet model, data collection tools, and instructions are available at the NASN website (http://www.nasn.org/The/CostBenefitAnalysis).
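    The model's bottom-line outputs follow directly from standard cost-benefit arithmetic: net benefits = total benefits − program costs, and the benefit-cost ratio = total benefits / program costs. A minimal sketch with invented figures (the NASN spreadsheet computes its inputs from the two collection forms):

```python
# Standard cost-benefit outputs, as in the school nursing spreadsheet model.
# Input figures are invented for illustration.
program_costs = 68_000.0      # e.g., full-time school nurse salary + benefits
total_benefits = 130_000.0    # e.g., value of teacher time saved, early
                              # dismissals avoided, procedures handled on-site

net_benefits = total_benefits - program_costs
benefit_cost_ratio = total_benefits / program_costs

print(f"Net benefits:        ${net_benefits:,.0f}")
print(f"Benefit-cost ratio:  {benefit_cost_ratio:.2f}")   # > 1 favors program
```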

  9. Add-in macros for rapid and versatile calculation of non-compartmental pharmacokinetic parameters on Microsoft Excel spreadsheets.

    Science.gov (United States)

    Sato, H; Sato, S; Wang, Y M; Horikoshi, I

    1996-06-01

    We developed a package of macro programs (named PK_MOMENT) to automatically calculate non-compartmental pharmacokinetic parameters on Microsoft Excel spreadsheets. These macros include rigorous algorithms to execute moment calculations in a comprehensive manner. An optimum number of terminal data points for infinite-time extrapolation can be calculated with one of these macros so that automatic calculation of infinite moment parameters is possible. The moment calculation with PK_MOMENT provided satisfactory results using the hybrid (mixed linear-logarithmic) trapezoidal method rather than the conventional linear trapezoidal method. The macro-aided pharmacokinetic analyses turned out to be useful in that the macro-containing cells can be easily copied and pasted to analyze other data sets and that powerful tools of Excel can be utilized. The use of our macros will be significantly time-saving for routine pharmacokinetic analyses, considering that pharmacokinetic data are usually stored in a spreadsheet format, typically with Excel.
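    The "hybrid (mixed linear-logarithmic) trapezoidal method" the macros favor is the common linear-up/log-down rule: use the linear trapezoid when concentration rises or is flat, and the logarithmic trapezoid when it falls. A minimal sketch of that generic NCA arithmetic (not the PK_MOMENT macro code, and with invented data):

```python
# Linear-up/log-down (hybrid) trapezoidal AUC -- generic non-compartmental
# arithmetic, not the PK_MOMENT macros themselves.
import math

def auc_hybrid(times, concs):
    """AUC(0-tlast): linear trapezoid on rising segments, log on falling."""
    auc = 0.0
    for (t1, c1), (t2, c2) in zip(zip(times, concs), zip(times[1:], concs[1:])):
        dt = t2 - t1
        if c2 < c1 and c1 > 0 and c2 > 0:
            auc += dt * (c1 - c2) / math.log(c1 / c2)   # log trapezoid
        else:
            auc += dt * (c1 + c2) / 2.0                 # linear trapezoid
    return auc

times = [0, 0.5, 1, 2, 4, 8]            # h
concs = [0.0, 4.2, 6.1, 5.0, 2.8, 0.9]  # mg/L (invented profile)
print(f"AUC(0-8h) = {auc_hybrid(times, concs):.2f} mg·h/L")
```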

  10. A timetable organizer for the planning and implementation of screenings in manual or semi-automation mode.

    Science.gov (United States)

    Goktug, Asli N; Chai, Sergio C; Chen, Taosheng

    2013-09-01

    We have designed an Excel spreadsheet to facilitate the planning and execution of screenings performed manually or in semi-automation mode, following a sequential set of events. Many assays involve multiple steps, often including time-sensitive stages, thus complicating the proper implementation to ensure that all plates are treated equally to achieve reliable outcomes. The spreadsheet macro presented in this study analyzes and breaks down the timings for all tasks, calculates the limitation in the number of plates that suit the desired parameters, and allows for optimization based on tolerance of time delay and equal treatment of plates when possible. The generated Gantt charts allow for visual inspection of the screening process and provide timings in a tabulated form to assist the user to conduct the experiments as projected by the software. The program can be downloaded from http://sourceforge.net/projects/sams-hts/.

  11. "Eureka, Eureka!" Discoveries in Science

    Science.gov (United States)

    Agarwal, Pankaj

    2011-01-01

    Accidental discoveries have been of significant value in the progress of science. Although accidental discoveries are more common in pharmacology and chemistry, other branches of science have also benefited from such discoveries. While most discoveries are the result of persistent research, famous accidental discoveries provide a fascinating…

  12. A SPREADSHEET MAPPING APPROACH FOR ERROR CHECKING AND SHARING COLLECTION POINT DATA

    Directory of Open Access Journals (Sweden)

    Desmond Foley

    2010-11-01

    The ready availability of online maps of plant and animal collection locations has drawn attention to the need for georeference accuracy. Many obvious georeference errors (for example, points that place land animals in the sea, in the wrong hemisphere, or in the wrong country) could be avoided if collectors and data providers could easily map their data points prior to publication. Various tools are available for quality control of georeference data, but many involve an investment of time to learn the software involved. This paper presents a method for rapid map display of longitude and latitude data using the chart function in Microsoft Office Excel®, arguably the most ubiquitous spreadsheet software. Advantages of this method include immediate visual feedback on data point accuracy, and results that can be easily shared with others. Methods for making custom Excel chart maps are given, and we provide free charts for the world and a selection of countries at http://www.vectormap.org/resources.htm.
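    The same quick visual check is easy outside Excel as well. A minimal sketch that scatter-plots longitude/latitude pairs so gross errors (swapped coordinates, wrong hemisphere) stand out at a glance; the file name and column headers are assumptions about the input data:

```python
# Quick-look map of collection points to spot gross georeference errors.
# Assumes a CSV with "longitude" and "latitude" columns (an assumption;
# adjust to match your spreadsheet headers).
import matplotlib.pyplot as plt
import pandas as pd

points = pd.read_csv("collections.csv")

fig, ax = plt.subplots(figsize=(8, 4))
ax.scatter(points["longitude"], points["latitude"], s=8, alpha=0.6)
ax.set_xlim(-180, 180)
ax.set_ylim(-90, 90)
ax.axhline(0, linewidth=0.5)   # equator: hemisphere mix-ups show up here
ax.axvline(0, linewidth=0.5)   # prime meridian: sign errors show up here
ax.set_xlabel("Longitude")
ax.set_ylabel("Latitude")
plt.show()
```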

  13. MINFIT: A Spreadsheet-Based Tool for Parameter Estimation in an Equilibrium Speciation Software Program.

    Science.gov (United States)

    Xie, Xiongfei; Giammar, Daniel E; Wang, Zimeng

    2016-10-07

    Determination of equilibrium constants describing chemical reactions in the aqueous phase and at the solid-water interface relies on inverse modeling and parameter estimation. Although existing tools are available, the steep learning curve prevents the wider community of environmental engineers and chemists from adopting them. Stemming from classical chemical equilibrium codes, MINEQL+ has been one of the most widely used chemical equilibrium software programs. We developed a spreadsheet-based tool, which we are calling MINFIT, that interacts with MINEQL+ to perform parameter estimations that optimize model fits to experimental data sets. MINFIT enables automatic and convenient screening of a large number of parameter sets toward the optimal solutions by calling MINEQL+ to perform iterative forward calculations, following either exhaustive equidistant grid search or randomized search algorithms. The combined use of the two algorithms can securely guide the searches toward the global optima. We developed interactive interfaces so that the optimization processes are transparent. Benchmark examples, including both aqueous and surface complexation problems, illustrate the parameter estimation and associated sensitivity analysis. MINFIT is accessible at http://minfit.strikingly.com .
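    The search strategy described, an exhaustive grid search scored against experimental data by repeated forward calculations, is generic enough to sketch. A minimal sketch where the forward model is a stand-in function rather than a MINEQL+ call (which is what MINFIT actually drives through the spreadsheet):

```python
# Sketch of exhaustive grid search for parameter estimation, with the
# forward model as a stand-in function (MINFIT instead calls MINEQL+).
import numpy as np

def forward_model(log_k: float, x: np.ndarray) -> np.ndarray:
    """Stand-in forward calculation, e.g., a simple sorption isotherm."""
    k = 10.0 ** log_k
    return k * x / (1.0 + k * x)

x_obs = np.array([0.1, 0.5, 1.0, 2.0, 5.0])    # invented observations
y_obs = np.array([0.09, 0.33, 0.51, 0.66, 0.83])

# Exhaustive equidistant grid over the parameter, scored by sum of squares.
grid = np.linspace(-2.0, 2.0, 401)
sse = [np.sum((forward_model(lk, x_obs) - y_obs) ** 2) for lk in grid]
best = grid[int(np.argmin(sse))]
print(f"best-fit log K = {best:.2f}, SSE = {min(sse):.4f}")
```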

  14. Spreadsheet modeling of optimal maintenance schedule for components in wear-out phase

    International Nuclear Information System (INIS)

    Artana, K.B.; Ishida, K.

    2002-01-01

    This paper addresses a method for determining the optimum maintenance schedule for components in the wear-out phase. The interval between maintenance actions is optimized by minimizing the total cost, which consists of maintenance cost, operational cost, downtime cost and penalty cost. A decision to replace a component must also be taken when the component cannot attain the minimum reliability and availability index requirement. Premium Solver Platform, a spreadsheet-modeling tool, is utilized to model the optimization problem. Constraints, the considerations to be fulfilled, direct this process: a minimum and a maximum value are set on each constraint to define the working area of the optimization. The optimization investigates n equally spaced maintenance actions at an interval of Tr, taking into account the increase in operational and maintenance costs due to the deterioration of the components. The paper also presents a case study and sensitivity analysis on a liquid ring primer of a ship's bilge system.

  15. Spreadsheets Across the Curriculum, 3: Finding a List of Mathematical Skills for Quantitative Literacy Empirically

    Directory of Open Access Journals (Sweden)

    H L. Vacher

    2011-01-01

    What mathematical topics do educators committed to teaching mathematics in context choose for their students when given the opportunity to develop an educational resource explicitly to teach mathematics in context? This paper examines the choices made for the 55 modules by 40 authors in the General Collection of the Spreadsheets Across the Curriculum (SSAC) library. About half of the modules were made by authors from natural science, and about 60% of the other modules were by authors from mathematics. The modules are tagged with terms of a search vocabulary developed for the browse page of the collection. The four terms most frequently used to tag the modules are: visual display of data (particularly XY plots and bar graphs); ratio and proportion; rates; and forward modeling (e.g., what-if?). Subdividing the modules into those authored by instructors from mathematics vs. natural science vs. other disciplines shows universal popularity of the first three choices. Forward modeling was a favorite of authors from mathematics and natural science. Manipulating equations, unit conversions, and logarithms (orders of magnitude, scientific notation) were called for by authors from natural science. The paper concludes with a list of 15 concepts and skills that received the most “votes.”

  16. Technical Innovation: The Automated Residency Match Rank List.

    Science.gov (United States)

    Strickland, Colin; Rubinstein, David

    2016-01-01

    The creation of the final rank list for the National Residency Matching Program every year is a laborious task requiring the time and input of numerous faculty members and residents. This article describes the creation of an automated visual rank list to efficiently organize and guide discussion at the yearly rank meeting so that the task may be completed efficiently and fairly. The rank list was created using a PowerPoint (Microsoft) macro that can pull information directly from a spreadsheet to generate a visual rank list that can be modified on the fly during the final rank list meeting. An automatically created visual rank list helps facilitate an efficient meeting and creates an open and transparent process leading to the final ranking.

  17. Automated Determination of a Package's Center of Mass

    Directory of Open Access Journals (Sweden)

    Ayaz Hemani

    2010-01-01

    In order to address the issue of increased efficiency and better planning for parcel shipments, an automated computer program was developed in Microsoft Excel that calculates the center of mass and moments of mass with greater speed and reliability than currently implemented systems. This simple program requires only a variable density function and limits of integration for a given object as input within the spreadsheet system. Once the required input has been provided, a series of chained calculations, with the help of a Visual Basic for Applications (VBA) script, processes the input through integration via a Riemann sum. Furthermore, the foundation of the program can also be used for calculating other physical quantities of interest, such as the moment of inertia or surface area of an object.
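    The underlying computation is the first moment of mass divided by total mass, approximated by a Riemann sum. A minimal one-dimensional sketch (the density function and limits are illustrative):

```python
# Center of mass of a 1-D rod via a Riemann sum:
#   x_cm = ∫ x·ρ(x) dx / ∫ ρ(x) dx
# Density function and limits are illustrative.
def center_of_mass(rho, a: float, b: float, n: int = 10_000) -> float:
    dx = (b - a) / n
    mass = moment = 0.0
    for i in range(n):
        x = a + (i + 0.5) * dx        # midpoint of each slice
        m = rho(x) * dx               # mass of the slice
        mass += m
        moment += x * m
    return moment / mass

# Linearly increasing density rho(x) = 1 + x on [0, 2]: x_cm = 7/6 ≈ 1.1667.
print(f"x_cm = {center_of_mass(lambda x: 1.0 + x, 0.0, 2.0):.4f}")
```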

  18. Automated Service Discovery using Autonomous Control Technologies Project

    Data.gov (United States)

    National Aeronautics and Space Administration — With the advent of mobile commerce technologies, the realization of pervasive computing and the formation of ad-hoc networks can be leveraged to the benefit of the...

  19. Automated Atmospheric Composition Dataset Level Metadata Discovery. Difficulties and Surprises

    Science.gov (United States)

    Strub, R. F.; Falke, S. R.; Kempler, S.; Fialkowski, E.; Goussev, O.; Lynnes, C.

    2015-12-01

    The Atmospheric Composition Portal (ACP) is an aggregator and curator of information related to remotely sensed atmospheric composition data and analysis. It uses existing tools and technologies and, where needed, enhances those capabilities to provide interoperable access, tools, and contextual guidance for scientists and value-adding organizations using remotely sensed atmospheric composition data. The initial focus is on Essential Climate Variables identified by the Global Climate Observing System: CH4, CO, CO2, NO2, O3, SO2 and aerosols. This poster addresses our efforts in building the ACP Data Table, an interface to help discover and understand remotely sensed data related to atmospheric composition science and applications. We harvested the GCMD, CWIC, and GEOSS metadata catalogs using machine-to-machine technologies (OpenSearch, Web Services), and manually investigated the plethora of CEOS data provider portals and other catalogs where the data might be aggregated. This poster presents our experience of the excellence, variety, and challenges we encountered. Conclusions: (1) The significant benefit the major catalogs provide is their machine-to-machine tooling, such as OpenSearch and Web Services, rather than GUI usability improvements, given the large amount of data in their catalogs. (2) There is a trend at the large catalogs towards simulating small data provider portals through advanced services. (3) Populating metadata catalogs using ISO 19115 is too complex for users to do consistently, difficult to parse visually or with XML libraries, and too complex for Java XML binders like Castor. (4) The ability to search for IDs first and then for data (GCMD and ECHO) suits machine-to-machine operations better than the timeouts experienced when returning an entire metadata entry at once. (5) Metadata harvest and export activities between the major catalogs have led to a significant amount of duplication (currently being addressed). (6) Most, if not all, Earth science atmospheric composition data providers store a reference to their data at GCMD.

  20. Automated Discovery of Internet Censorship by Web Crawling

    OpenAIRE

    Darer, Alexander; Farnan, Oliver; Wright, Joss

    2018-01-01

    Censorship of the Internet is widespread around the world. As access to the web becomes increasingly ubiquitous, filtering of this resource becomes more pervasive. Transparency about the specific content that citizens are denied access to is atypical. To counter this, numerous techniques for maintaining URL filter lists have been proposed by various individuals and organisations, with the aim of providing empirical data on censorship for the benefit of the public and the wider censorship research community. We present ...

  1. Automated Service Discovery using Autonomous Control Technologies, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — With the advent of mobile commerce technologies, the realization of pervasive computing and the formation of ad-hoc networks can be leveraged to the benefit of the...

  2. The Greatest Mathematical Discovery?

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.; Borwein, Jonathan M.

    2010-05-12

    What mathematical discovery more than 1500 years ago: (1) Is one of the greatest, if not the greatest, single discovery in the field of mathematics? (2) Involved three subtle ideas that eluded the greatest minds of antiquity, even geniuses such as Archimedes? (3) Was fiercely resisted in Europe for hundreds of years after its discovery? (4) Even today, in historical treatments of mathematics, is often dismissed with scant mention, or else is ascribed to the wrong source? Answer: Our modern system of positional decimal notation with zero, together with the basic arithmetic computational schemes, which were discovered in India about 500 CE.

  3. Using Computer Resources (Spreadsheets) to Comprehend Rational Numbers / Utilizando recursos computacionais (planilhas) na compreensão dos Números Racionais

    Directory of Open Access Journals (Sweden)

    Rosane Ratzlaff da Rosa

    2008-12-01

    This article reports on an investigation which sought to determine whether the use of spreadsheets in the teaching of rational numbers in elementary education contributes to learning and improved learning retention. The study was carried out with a sample of students from two sixth-grade classes in a public school in Porto Alegre. Results indicated that the use of spreadsheets favored learning and made the classes more participatory for the students, who were able to visualize the processes they were working with. A second test applied five months after the first showed that students who used the spreadsheets had greater retention of the contents. The results also show that the students felt comfortable with the technology, and almost all reported being more motivated by the use of computers in the classroom, despite less-than-ideal laboratory conditions. Keywords: Rational Numbers. Teaching with Spreadsheets. Teaching Rational Numbers using Spreadsheets.

  4. Autonomy and Automation

    Science.gov (United States)

    Shively, Jay

    2017-01-01

    A significant level of debate and confusion has surrounded the meaning of the terms autonomy and automation. Automation is a multi-dimensional concept, and we propose that Remotely Piloted Aircraft Systems (RPAS) automation should be described with reference to the specific system and task that has been automated, the context in which the automation functions, and other relevant dimensions. In this paper, we present definitions of automation, pilot in the loop, pilot on the loop and pilot out of the loop. We further propose that in future, the International Civil Aviation Organization (ICAO) RPAS Panel avoids the use of the terms autonomy and autonomous when referring to automated systems on board RPA. Work Group 7 proposes to develop, in consultation with other workgroups, a taxonomy of Levels of Automation for RPAS.

  5. An automated swimming respirometer

    DEFF Research Database (Denmark)

    STEFFENSEN, JF; JOHANSEN, K; BUSHNELL, PG

    1984-01-01

    An automated respirometer is described that can be used for computerized respirometry of trout and sharks.

  6. Configuration Management Automation (CMA) -

    Data.gov (United States)

    Department of Transportation — Configuration Management Automation (CMA) will provide an automated, integrated enterprise solution to support CM of FAA NAS and Non-NAS assets and investments. CMA...

  7. Context-sensitive service discovery experimental prototype and evaluation

    DEFF Research Database (Denmark)

    Balken, Robin; Haukrogh, Jesper; L. Jensen, Jens

    2007-01-01

    The number of different networks and services available to users today is increasing. This introduces the need for a way to locate relevant services, and to sort out irrelevant ones, in the process of discovering the services available to a user. This paper describes and evaluates a prototype of an automated discovery and selection system, which locates services relevant to a user based on his/her context and the context of the available services. The prototype includes a multi-level, hierarchical system approach and introduces entities called User-nodes, Super-nodes and Root-nodes. These entities separate the network into domains that handle the complex distributed service discovery, which is based on dynamically changing context information. In the prototype, a method for performing context-sensitive service discovery has been realised. The service discovery part utilizes UPnP, which has been expanded in order...

  8. Fateful discovery almost forgotten

    CERN Multimedia

    1989-01-01

    "The discovery of the fission of uranium exactly half a century ago is at risk of passing unremarked because of the general ambivalence towards the consequences of this development. Can that be wise?" (4 pages)

  9. On the antiproton discovery

    International Nuclear Information System (INIS)

    Piccioni, O.

    1989-01-01

    The author of this article describes his own role in the discovery of the antiproton. Although Segre and Chamberlain received the Nobel Prize in 1959 for its discovery, the author claims that their experimental method was his idea which he communicated to them informally in December 1954. He describes how his application for citizenship (he was Italian), and other scientists' manipulation, prevented him from being at Berkeley to work on the experiment himself. (UK)

  10. Excemplify: A Flexible Template Based Solution, Parsing and Managing Data in Spreadsheets for Experimentalists

    Directory of Open Access Journals (Sweden)

    Shi Lei

    2013-06-01

    In systems biology, quantitative experimental data is the basis for building mathematical models. In most cases it is stored in Excel files and hosted locally. A public database for collecting, retrieving and citing experimental raw data, as well as experimental conditions, is important for both experimentalists and modelers. However, the great effort needed for data handling and data submission is the crucial limitation preventing experimentalists from contributing to a database, thereby impeding the database from delivering its benefit. Moreover, the manual copy-and-paste operations commonly used in those procedures increase the chance of making mistakes. Excemplify, a web-based application, proposes a flexible and adaptable template-based solution to these problems. In contrast to the usual template-based uploading approach supported by some public databases, which predefines a format that is potentially impractical, Excemplify allows users to create their own experiment-specific content templates for different experiment stages and to build corresponding knowledge bases for parsing. Utilizing the embedded knowledge of the templates in use, Excemplify is able to parse experimental data from the initial setup stage and generate the spreadsheets for the following stages automatically. The proposed solution standardizes data flows according to the standard procedures of the experiment, cuts down the amount of manual effort and reduces the chance of mistakes caused by manual data handling. In addition, it maintains the context of metadata from the initial preparation manuscript and improves data consistency. It interoperates with and complements RightField and SEEK.

  11. Automation in College Libraries.

    Science.gov (United States)

    Werking, Richard Hume

    1991-01-01

    Reports the results of a survey of the "Bowdoin List" group of liberal arts colleges. The survey obtained information about (1) automation modules in place and when they had been installed; (2) financing of automation and its impacts on the library budgets; and (3) library director's views on library automation and the nature of the…

  12. The beautiful cell: high-content screening in drug discovery.

    Science.gov (United States)

    Bickle, Marc

    2010-09-01

    The term "high-content screening" has become synonymous with imaging screens using automated microscopes and automated image analysis. The term was coined a little over 10 years ago. Since then the technology has evolved considerably and has established itself firmly in the drug discovery and development industry. Both the instruments and the software controlling the instruments and analyzing the data have come to maturity, so the full benefits of high-content screening can now be realized. Those benefits are the capability of carrying out phenotypic multiparametric cellular assays in an unbiased, fully automated, and quantitative fashion. Automated microscopes and automated image analysis are being applied at all stages of the drug discovery and development pipeline. All major pharmaceutical companies have adopted the technology and it is in the process of being embraced broadly by the academic community. This review aims at describing the current capabilities and limits of the technology as well as highlighting necessary developments that are required to exploit fully the potential of high-content screening and analysis.

  13. Shotgun Proteomics and Biomarker Discovery

    Directory of Open Access Journals (Sweden)

    W. Hayes McDonald

    2002-01-01

    Coupling large-scale sequencing projects with the amino acid sequence information that can be gleaned from tandem mass spectrometry (MS/MS) has made it much easier to analyze complex mixtures of proteins. The limits of this “shotgun” approach, in which the protein mixture is proteolytically digested before separation, can be further expanded by separating the resulting mixture of peptides prior to MS/MS analysis. Both single-dimensional high pressure liquid chromatography (LC) and multidimensional LC (LC/LC) can be directly interfaced with the mass spectrometer to allow for automated collection of tremendous quantities of data. While there is no single technique that addresses all proteomic challenges, the shotgun approaches, especially LC/LC-MS/MS-based techniques such as MudPIT (multidimensional protein identification technology), show advantages over gel-based techniques in speed, sensitivity, scope of analysis, and dynamic range. Advances in the ability to quantitate differences between samples and to detect an array of post-translational modifications allow for the discovery of classes of protein biomarkers that were previously unassailable.

  14. Automation in Clinical Microbiology

    Science.gov (United States)

    Ledeboer, Nathan A.

    2013-01-01

    Historically, the trend toward automation in clinical pathology laboratories has largely bypassed the clinical microbiology laboratory. In this article, we review the historical impediments to automation in the microbiology laboratory and offer insight into the reasons why we believe that we are on the cusp of a dramatic change that will sweep a wave of automation into clinical microbiology laboratories. We review the currently available specimen-processing instruments as well as the total laboratory automation solutions. Lastly, we outline the types of studies that will need to be performed to fully assess the benefits of automation in microbiology laboratories.

  15. Automation of industrial bioprocesses.

    Science.gov (United States)

    Beyeler, W; DaPra, E; Schneider, K

    2000-01-01

    The dramatic development of new electronic devices within the last 25 years has had a substantial influence on the control and automation of industrial bioprocesses. Within this short period of time the method of controlling industrial bioprocesses has changed completely. In this paper, the authors will use a practical approach focusing on the industrial applications of automation systems. From the early attempts to use computers for the automation of biotechnological processes up to the modern process automation systems some milestones are highlighted. Special attention is given to the influence of Standards and Guidelines on the development of automation systems.

  16. Automated and comprehensive link engineering supporting branched, ring, and mesh network topologies

    Science.gov (United States)

    Farina, J.; Khomchenko, D.; Yevseyenko, D.; Meester, J.; Richter, A.

    2016-02-01

    Link design, while relatively easy in the past, can become quite cumbersome with complex channel plans and equipment configurations. The task of designing optical transport systems and selecting equipment is often performed by an applications or sales engineer using simple tools, such as custom Excel spreadsheets. Eventually, every individual has their own version of the spreadsheet as well as their own methodology for building the network. This approach becomes unmanageable very quickly and leads to mistakes, bending of the engineering rules and installations that do not perform as expected. We demonstrate a comprehensive planning environment, which offers an efficient approach to unify, control and expedite the design process by controlling libraries of equipment and engineering methodologies, automating the process and providing the analysis tools necessary to predict system performance throughout the system and for all channels. In addition to the placement of EDFAs and DCEs, performance analysis metrics are provided at every step of the way. Metrics that can be tracked include power, CD and OSNR, SPM, XPM, FWM and SBS. Automated routine steps assist in design aspects such as equalization, padding and gain setting for EDFAs, the placement of ROADMs and transceivers, and creating regeneration points. DWDM networks consisting of a large number of nodes and repeater huts, interconnected in linear, branched, mesh and ring network topologies, can be designed much faster when compared with conventional design methods. Using flexible templates for all major optical components, our technology-agnostic planning approach supports the constant advances in optical communications.

  17. A novel real-time data acquisition using an Excel spreadsheet in pendulum experiment tool with light-based timer

    Science.gov (United States)

    Adhitama, Egy; Fauzi, Ahmad

    2018-05-01

    In this study, a pendulum experiment tool with a light-based timer has been developed to measure the period of a simple pendulum. The data obtained were automatically recorded in an Excel spreadsheet. The intensity of monochromatic light, sensed by a 3DU5C phototransistor, changes dynamically as the pendulum swings. The changing intensity varies the resistance value, and this signal was processed by an ATmega328 microcontroller to obtain the period as a function of time and brightness each time the pendulum crossed the light. Using the calculated average periods, the gravitational acceleration was determined accurately and precisely.
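
    The record stops at the result; as a minimal sketch of the final computation (with a hypothetical set of logged periods and a known pendulum length), g follows from T = 2*pi*sqrt(L/g):

        import math
        import statistics

        def gravitational_acceleration(periods_s, length_m):
            # T = 2*pi*sqrt(L/g)  =>  g = 4*pi^2 * L / T^2
            t_mean = statistics.mean(periods_s)
            return 4.0 * math.pi ** 2 * length_m / t_mean ** 2

        periods = [1.42, 1.41, 1.43, 1.42, 1.42]  # hypothetical logged periods, in s
        print(f"g ~ {gravitational_acceleration(periods, 0.50):.2f} m/s^2")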

  18. Spreadsheet as a motivational tool in learning and professional development in Agricultural Engineering

    Science.gov (United States)

    Medina, Silvia; Moratiel, Ruben; Tarquis, Ana Maria; María Durán, Jose

    2013-04-01

    For the past few decades, Spanish universities have gradually introduced the use of so-called New Technologies in the classroom, because their use contributes to improved educational outcomes at all levels. In this sense, technology not only helps to expand knowledge, as in traditional education, but teaches students to learn, encourages them to be more independent, and helps them develop and apply their knowledge in practice and in their future employment. The aim of this paper is to analyse the educational content and the degree of satisfaction students gain through the use of a spreadsheet program to perform various practical exercises in Agricultural Engineering courses at the Polytechnic University of Madrid. Each week the professor sets an exercise with a detailed explanation of what is required, and students have the opportunity to submit the work they develop as many times as they want over two weeks. Students are encouraged to work individually and to submit the exercise on the same day it is set, because the earlier the results are presented, the more opportunities there are to correct mistakes. Regardless of students' prior knowledge of Excel, the professor explains each of the Excel features to be employed in the exercise. Students then have the opportunity to ask about them, avoiding the scenario of not completing the exercise because some Excel features are unfamiliar. The number of exercises performed per year depends on the hours/credits assigned to each subject. To check the degree of student satisfaction with these exercises, an anonymous questionnaire was administered, consisting of 15 questions grouped into four categories: consolidation of knowledge (4 questions), practice organization (7 questions), following instructions (2 questions) and knowledge of Excel (2 questions). Results show a high degree of student satisfaction with the learning process and its future applicability. Acknowledgments: Funding provided by educational

  19. Excel, Earthquakes, and Moneyball: exploring Cascadia earthquake probabilities using spreadsheets and baseball analogies

    Science.gov (United States)

    Campbell, M. R.; Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.

    2017-12-01

    Much recent media attention focuses on Cascadia's earthquake hazard. A widely cited magazine article starts "An earthquake will destroy a sizable portion of the coastal Northwest. The question is when." Stories include statements like "a massive earthquake is overdue", "in the next 50 years, there is a 1-in-10 chance a 'really big one' will erupt," or "the odds of the big Cascadia earthquake happening in the next fifty years are roughly one in three." These lead students to ask where the quoted probabilities come from and what they mean. These probability estimates involve two primary choices: what data are used to describe when past earthquakes happened, and what models are used to forecast when future earthquakes will happen. The data come from a 10,000-year record of large paleoearthquakes compiled from subsidence data on land and turbidites, offshore deposits recording submarine slope failure. Earthquakes seem to have happened in clusters of four or five events, separated by gaps. Earthquakes within a cluster occur more frequently and regularly than in the full record. Hence the next earthquake is more likely if we assume that we are in the recent cluster that started about 1700 years ago than if we assume the cluster is over. Students can explore how changing assumptions drastically changes probability estimates using easy-to-write, easy-to-display spreadsheets. Insight can also come from baseball analogies. The cluster issue is like deciding whether to assume that a hitter's performance in the next game is better described by his lifetime record or by the past few games, since he may be hitting unusually well or in a slump. The other big choice is whether to assume that the probability of an earthquake is constant with time, or is small immediately after one occurs and then grows with time. This is like whether to assume that a player's performance is the same from year to year, or changes over their career. Thus saying "the chance of
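
    As a minimal sketch of the time-independent choice (the recurrence intervals below are illustrative, not the paper's values), the probability of at least one event in a 50-year window under a Poisson model is 1 - exp(-t/tau):

        import math

        def poisson_prob(mean_recurrence_yr, window_yr):
            # Time-independent probability of at least one event in the window.
            return 1.0 - math.exp(-window_yr / mean_recurrence_yr)

        # Illustrative: ~500 yr recurrence from a full record vs. ~300 yr if we
        # assume we are still inside the recent cluster.
        for tau in (500.0, 300.0):
            print(f"mean recurrence {tau:.0f} yr -> "
                  f"P(event in 50 yr) ~ {poisson_prob(tau, 50.0):.2f}")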

  20. Methodology for Outdoor Water Savings Model and Spreadsheet Tool for U.S. and Selected States

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Alison A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chen, Yuting [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dunham, Camilla [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fuchs, Heidi [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Price, Sarah [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Stratton, Hannah [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-07-31

    Green lawns and landscaping are archetypical of the populated American landscape, and typically require irrigation, which accounts for a significant fraction of residential, commercial, and institutional water use. In North American cities, the estimated portion of residential water used for outdoor purposes ranges from 22-38% in cooler climates up to 59-67% in dry and hot environments, while turfgrass coverage within the United States spans 11.1-20.2 million hectares (Milesi et al. 2009). One national estimate uses satellite and aerial photography data to develop a relationship between impervious surface and lawn surface area, yielding a conservative estimate of 16.4 (± 3.6) million hectares of lawn surface area in the United States—an area three times larger than that devoted to any irrigated crop (Milesi et al. 2005). One approach that holds promise for cutting unnecessary outdoor water use is the increased deployment of "smart" irrigation controllers to increase the water efficiency of irrigation systems. This report describes the methodology and inputs employed in a mathematical model that quantifies the effects of the U.S. Environmental Protection Agency's WaterSense labeling program for one such type of controller, weather-based irrigation controllers (WBIC). This model builds on that described in "Methodology for National Water Savings Model and Spreadsheet Tool–Outdoor Water Use" and uses a two-tiered approach to quantify outdoor water savings attributable to the WaterSense program for WBIC, as well as the net present value (NPV) of those savings. While the first iteration of the model assessed national impacts using averaged national values, this version begins by evaluating impacts in three key large states that make up a sizable portion of the irrigation market: California, Florida, and Texas. These states are considered to be the principal market of "smart" irrigation controllers that may result in the bulk of national savings. Modeled
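
    The report's actual inputs are not reproduced in the record; as a minimal sketch of the net-present-value step with made-up numbers, discounting a constant annual savings stream looks like:

        def npv(annual_savings, discount_rate, years):
            # Net present value of a constant annual savings stream.
            return sum(annual_savings / (1.0 + discount_rate) ** t
                       for t in range(1, years + 1))

        # Hypothetical inputs: $1.2M/yr savings, 3% discount rate, 15-yr horizon
        print(f"NPV ~ ${npv(1.2e6, 0.03, 15):,.0f}")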

  1. iPTF Discoveries of Recent Core-Collapse Supernovae

    Science.gov (United States)

    Taddia, F.; Ferretti, R.; Papadogiannakis, S.; Petrushevska, T.; Fremling, C.; Karamehmetoglu, E.; Nyholm, A.; Roy, R.; Hangard, L.; Horesh, A.; Khazov, D.; Knezevic, S.; Johansson, J.; Leloudas, G.; Manulis, I.; Rubin, A.; Soumagnac, M.; Vreeswijk, P.; Yaron, O.; Bar, I.; Cao, Y.; Kulkarni, S.; Blagorodnova, N.

    2016-05-01

    The intermediate Palomar Transient Factory (ATel #4807) reports the discovery and classification of the following core-collapse SNe. Our automated candidate vetting to distinguish a real astrophysical source (1.0) from bogus artifacts (0.0) is powered by three generations of machine learning algorithms: RB2 (Brink et al. 2013MNRAS.435.1047B), RB4 (Rebbapragada et al. 2015AAS...22543402R) and RB5 (Wozniak et al. 2013AAS...22143105W).

  2. iPTF Discoveries of Recent Type Ia Supernovae

    Science.gov (United States)

    Papadogiannakis, S.; Taddia, F.; Petrushevska, T.; Ferretti, R.; Fremling, C.; Karamehmetoglu, E.; Nyholm, A.; Roy, R.; Hangard, L.; Vreeswijk, P.; Horesh, A.; Manulis, I.; Rubin, A.; Yaron, O.; Leloudas, G.; Khazov, D.; Soumagnac, M.; Knezevic, S.; Johansson, J.; Nir, G.; Cao, Y.; Blagorodnova, N.; Kulkarni, S.

    2016-05-01

    The intermediate Palomar Transient Factory (ATel #4807) reports the discovery and classification of the following Type Ia SNe. Our automated candidate vetting to distinguish a real astrophysical source (1.0) from bogus artefacts (0.0) is powered by three generations of machine learning algorithms: RB2 (Brink et al. 2013MNRAS.435.1047B), RB4 (Rebbapragada et al. 2015AAS...22543402R) and RB5 (Wozniak et al. 2013AAS...22143105W).

  3. Discovery of Fullerenes

    Indian Academy of Sciences (India)

    Discovery of Fullerenes: Giving a New Shape to Carbon Chemistry. Rathna Ananthaiah. Research News, Resonance – Journal of Science Education, Volume 2, Issue 1, January 1997, pp. 68-73. Permanent link: http://www.ias.ac.in/article/fulltext/reso/002/01/0068-0073

  4. Landmark Discoveries in Neurosciences

    Indian Academy of Sciences (India)

    Landmark Discoveries in Neurosciences. Niranjan Kambi and Neeraj Jain. General Article, Resonance – Journal of Science Education, Volume 17, Issue 11, November 2012, pp. 1054-1064.

  5. The discovery of fission

    International Nuclear Information System (INIS)

    McKay, H.A.C.

    1978-01-01

    In this article by the retired head of the Separation Processes Group of the Chemistry Division, Atomic Energy Research Establishment, Harwell, U.K., the author recalls what he terms 'an exciting drama, the unravelling of the nature of the atomic nucleus' in the years before the Second World War, including the discovery of fission. 12 references. (author)

  6. Automation systems for radioimmunoassay

    International Nuclear Information System (INIS)

    Yamasaki, Paul

    1974-01-01

    The application of automation systems to radioimmunoassay (RIA) was discussed. Automated systems could be useful in the second of the four basic steps in the course of RIA, i.e., preparation of the sample for reaction. There were two types of instrumentation: a semi-automatic pipette, and a fully automated pipetting station, both providing fast and accurate dispensing of reagent or dilution of sample with reagent. Illustrations of the instruments were shown. (Mukohata, S.)

  7. Automated stopcock actuator

    OpenAIRE

    Vandehey, N. T.; O\\'Neil, J. P.

    2015-01-01

    Introduction: We have developed a low-cost stopcock valve actuator for radiochemistry automation built using a stepper motor and an Arduino, an open-source single-board microcontroller. The controller hardware can be programmed to run by serial communication or via two 5-24 V digital lines for simple integration into any automation control system. This valve actuator allows for automated use of a single, disposable stopcock, providing a number of advantages over stopcock manifold systems ...
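
    The record does not specify the actuator's command set; the sketch below only illustrates the serial path from Python using the pyserial package, with a made-up port name and a made-up single-byte position command:

        import serial  # pyserial

        # Hypothetical protocol: send a one-byte position command, read a reply.
        with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as port:
            port.write(b"1")  # e.g., rotate the stopcock to position 1
            reply = port.readline()
            print(reply.decode(errors="replace").strip())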

  8. Automated Analysis of Accountability

    DEFF Research Database (Denmark)

    Bruni, Alessandro; Giustolisi, Rosario; Schürmann, Carsten

    2017-01-01

    …that are amenable to automated verification. Our definitions are general enough to be applied to different classes of protocols and different automated security verification tools. Furthermore, we formally point out the relation between verifiability and accountability. We validate our definitions with the automatic verification of three protocols: a secure exam protocol, Google's Certificate Transparency, and an improved version of Bingo Voting. We find through automated verification that all three protocols satisfy verifiability, while only the first two protocols meet accountability.

  9. [Use of spreadsheet for statistical and graphical processing of records from the ambulatory blood pressure monitor Spacelabs 90207].

    Science.gov (United States)

    Borges, N; Polónia, J

    1993-04-01

    The introduction of portable devices for non-invasive ambulatory blood-pressure measurement is recognized as an advance in the study of human arterial hypertension, allowing a significant improvement in the selection of hypertensive patients as well as in the analysis of the effects of antihypertensive drugs during clinical trials. The Spacelabs 90207 is a recent example of this kind of apparatus, being highly portable and highly rated in validation studies. Nevertheless, the software of this apparatus (like that of other similar devices) has severe limitations with regard to calculating the area under the curve of blood pressure during the measurement period, as well as grouping several records in a database for easy statistical and graphical analysis of different groups of records. In order to overcome these difficulties, the authors describe the development of a group of programs, using Microsoft Excel v3.0 spreadsheets and macros, that allow a direct import of individual files from the Spacelabs software to a spreadsheet and their further processing in three phases. These three phases, which we designate "conversion", "export to database" and "statistical and graphical analysis", permit easy and fast statistical and graphical analysis of selected groups of records.
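
    The original Excel macros are not reproduced in the record; as a minimal modern sketch of the area-under-the-curve step (with hypothetical hourly systolic readings), the trapezoidal rule gives:

        import numpy as np

        def pressure_auc(times_h, pressures_mmhg):
            # Area under the blood-pressure curve over the monitoring
            # period (trapezoidal rule), in mmHg*h.
            return np.trapz(pressures_mmhg, times_h)

        t = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)                  # h
        sbp = np.array([128, 131, 125, 122, 119, 124, 127], dtype=float)  # mmHg
        print(f"AUC ~ {pressure_auc(t, sbp):.0f} mmHg*h")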

  10. DEVELOPMENT OF A SPREADSHEET BASED VENDOR MANAGED INVENTORY MODEL FOR A SINGLE ECHELON SUPPLY CHAIN: A CASE STUDY

    Directory of Open Access Journals (Sweden)

    Karanam Prahlada Rao

    2010-11-01

    Full Text Available Vendor managed inventory (VMI) is a supply chain initiative where the supplier assumes the responsibility for managing inventories using advanced communication means such as online messaging and data retrieval systems. A well-collaborated vendor managed inventory system can improve supply chain performance by decreasing the inventory level and increasing the fill rate. This paper investigates the implementation of vendor managed inventory systems in a consumer goods industry. We consider an (r, Q) policy for replenishing inventory. The objective of the work is to minimize the inventory across the supply chain and maximize the service level. The major contributions of this work are: developing a spreadsheet model for the VMI system; evaluating total inventory cost by both the spreadsheet-based method and the analytical method; quantifying the inventory reduction; estimating the service efficiency level; and validating the VMI spreadsheet model with randomly generated demand. In the application, VMI as an inventory control system is able to reduce the inventory cost without sacrificing the service level. The results furthermore show that the inventory reduction obtained from the analytical method is close to that from the spreadsheet-based approach, which supports the validity of the VMI model. However, the success of VMI is impacted by the quality of buyer-supplier relationships, the quality of the IT system and the intensity of information sharing, but not by the quality of information shared.
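
    The paper's spreadsheet is not reproduced in the record; as a minimal sketch of the kind of total-cost calculation an (r, Q) model makes (shortage costs omitted, all values made up):

        def rq_annual_cost(demand, order_qty, reorder_pt, mean_lt_demand,
                           order_cost, holding_cost):
            # Ordering cost plus holding cost on cycle stock (Q/2) and
            # safety stock (r minus mean lead-time demand).
            ordering = (demand / order_qty) * order_cost
            holding = holding_cost * (order_qty / 2.0 + reorder_pt - mean_lt_demand)
            return ordering + holding

        # Hypothetical: D=12000/yr, Q=1000, r=450, mean lead-time demand=400
        print(f"annual cost ~ {rq_annual_cost(12000, 1000, 450, 400, 150.0, 2.5):,.0f}")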

  11. The use of kragten spreadsheets for uncertainty evaluation of uranium potentiometric analysis by the Brazilian Safeguards Laboratory

    International Nuclear Information System (INIS)

    Silva, Jose Wanderley S. da; Barros, Pedro Dionisio de; Araujo, Radier Mario S. de

    2009-01-01

    In safeguards, independent analysis of the uranium content and enrichment of nuclear materials to verify operators' declarations is an important tool for evaluating the accountability system applied by nuclear installations. This determination may be performed by nondestructive (NDA) methods, generally done in the field using portable radiation detection systems, or by destructive (DA) methods, i.e., chemical analysis, when more accurate and precise results are necessary. Samples for DA analysis are collected by inspectors during safeguards inspections and sent to the Safeguards Laboratory (LASAL) of the Brazilian Nuclear Energy Commission (CNEN), where the analyses take place. The method used by LASAL for the determination of uranium in different physical and chemical forms is the Davies and Gray/NBL method, using an automatic potentiometric titrator that performs the titration of uranium(IV) by a standard solution of K₂Cr₂O₇. Uncertainty budgets have been determined based on the concepts of the ISO 'Guide to the Expression of Uncertainty in Measurement' (GUM). In order to simplify the calculation of the uncertainty, a computational tool named the Kragten spreadsheet was used. Such a spreadsheet uses the concepts established by the GUM and provides results that numerically approximate those obtained by propagation of uncertainty with analytically determined sensitivity coefficients. The main parameters (input quantities) affecting the uncertainty were studied. In order to evaluate their contribution to the final uncertainty, the uncertainties of all steps of the analytical method were estimated and compiled. (author)
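
    The Kragten approach itself is simple enough to sketch: perturb each input by its standard uncertainty, re-evaluate the measurement model, and combine the differences in quadrature (the GUM first-order approximation). The model and values below are invented for illustration and are not LASAL's actual budget.

        def kragten_uncertainty(f, x, u):
            # Kragten's numerical propagation of uncertainty.
            y0 = f(*x)
            contributions = []
            for i in range(len(x)):
                xp = list(x)
                xp[i] += u[i]
                contributions.append(f(*xp) - y0)
            return y0, sum(c * c for c in contributions) ** 0.5

        # Illustrative model: concentration = titrant volume * titer / sample mass
        conc, u_conc = kragten_uncertainty(
            lambda v, t, m: v * t / m,
            x=[11.00, 0.00400, 0.0500],   # mL, g U/mL, g (made-up values)
            u=[0.02, 0.000008, 0.0001])
        print(f"c = {conc:.4f} +/- {u_conc:.4f} g U per g sample")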

  12. Management Planning for Workplace Automation.

    Science.gov (United States)

    McDole, Thomas L.

    Several factors must be considered when implementing office automation. Included among these are whether or not to automate at all, the effects of automation on employees, requirements imposed by automation on the physical environment, effects of automation on the total organization, and effects on clientele. The reasons behind the success or…

  13. Laboratory Automation and Middleware.

    Science.gov (United States)

    Riben, Michael

    2015-06-01

    The practice of surgical pathology is under constant pressure to deliver the highest quality of service, reduce errors, increase throughput, and decrease turnaround time, while at the same time dealing with an aging workforce, increasing financial constraints, and economic uncertainty. Although laboratories have not been able to implement total laboratory automation, great progress continues to be made in workstation automation in all areas of the pathology laboratory. This report highlights the benefits and challenges of pathology automation, reviews middleware and its use to facilitate automation, and reviews the progress so far in the anatomic pathology laboratory. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Automated cloning methods.

    International Nuclear Information System (INIS)

    Collart, F.

    2001-01-01

    Argonne has developed a series of automated protocols to generate bacterial expression clones using a robotic system designed for procedures associated with molecular biology. The system provides plate storage, temperature control from 4 to 37 °C at various locations, and Biomek and Multimek pipetting stations. The automated system consists of a robot that transports sources between the active stations of the automation system. Protocols for the automated generation of bacterial expression clones can be grouped into three categories (Figure 1). Fragment generation protocols are initiated on day one of the expression cloning procedure and encompass those protocols involved in generating purified coding region (PCR)

  15. Complacency and Automation Bias in the Use of Imperfect Automation.

    Science.gov (United States)

    Wickens, Christopher D; Clegg, Benjamin A; Vieane, Alex Z; Sebok, Angelia L

    2015-08-01

    We examine the effects of two different kinds of decision-aiding automation errors on human-automation interaction (HAI), occurring at the first failure following repeated exposure to correctly functioning automation. The two errors are incorrect advice, triggering the automation bias, and missing advice, reflecting complacency. Contrasts between analogous automation errors in alerting systems, rather than decision aiding, have revealed that alerting false alarms are more problematic to HAI than alerting misses are. Prior research in decision aiding, although contrasting the two aiding errors (incorrect vs. missing), has confounded error expectancy. Participants performed an environmental process control simulation with and without decision aiding. For those with the aid, automation dependence was created through several trials of perfect aiding performance, and an unexpected automation error was then imposed in which automation was either gone (one group) or wrong (a second group). A control group received no automation support. The correct aid supported faster and more accurate diagnosis and lower workload. The aid failure degraded all three variables, but "automation wrong" had a much greater effect on accuracy, reflecting the automation bias, than did "automation gone," reflecting the impact of complacency. Some complacency was manifested for automation gone, by a longer latency and more modest reduction in accuracy. Automation wrong, creating the automation bias, appears to be a more problematic form of automation error than automation gone, reflecting complacency. Decision-aiding automation should indicate its lower degree of confidence in uncertain environments to avoid the automation bias. © 2015, Human Factors and Ergonomics Society.

  16. Advances in Computer, Communication, Control and Automation

    CERN Document Server

    2011 International Conference on Computer, Communication, Control and Automation

    2012-01-01

    The volume includes a set of selected papers extended and revised from the 2011 International Conference on Computer, Communication, Control and Automation (3CA 2011), held in Zhuhai, China, November 19-20, 2011. Topics covered in this volume include signal and image processing, speech and audio processing, video processing and analysis, artificial intelligence, computing and intelligent systems, machine learning, sensor and neural networks, knowledge discovery and data mining, fuzzy mathematics and applications, knowledge-based systems, hybrid systems modeling and design, risk analysis and management, and system modeling and simulation. We hope that researchers, graduate students and other interested readers benefit scientifically from the proceedings and also find them stimulating.

  17. Pengembangan Modul Pembelajaran Pengolah Lembar Kerja Excel Berbasis Multimedia [Developing an Excel Spreadsheet Multimedia Learning Module

    Directory of Open Access Journals (Sweden)

    Yanuard Putro Dwikristanto

    2018-01-01

    Full Text Available Competence in the use of Information and Communication Technology (ICT) has become something that prospective student-teachers need to master, as students are expected to use ICT appropriately in education. A problem that often arises is that many students have difficulty working on tasks outside the classroom, especially when using Microsoft Excel. Many students appear to understand and seem able to follow guided practice in class, but outside of class they are confused and often forget the instructions given by lecturers. One solution to these problems is to provide learners with a multimedia module for learning how to create and use spreadsheets. It is hoped that this module will help students solve their learning difficulties in ICT courses, including those with different learning styles. The purpose of this research is to evaluate the usefulness of this multimedia module. The development of the module followed the ASSURE model, which consists of six stages: (1) analyze the learners; (2) state objectives; (3) select appropriate methods, media, and materials; (4) utilize materials; (5) require learners' participation; and (6) evaluate and revise. The results of a questionnaire indicate that the candidate teachers tend to be stronger in learning through visual-auditory means. In addition, students' mastery of the Excel spreadsheet program was the lowest among the Word, PowerPoint, and Excel programs. The design of this learning module therefore takes into account the visual-auditory aspects of the Microsoft Excel topic. The module will be developed with an interactive design using the PowerPoint 2016 program.

  18. Methods for Automated and Continuous Commissioning of Building Systems

    Energy Technology Data Exchange (ETDEWEB)

    Larry Luskay; Michael Brambley; Srinivas Katipamula

    2003-04-30

    Avoidance of poorly installed HVAC systems is best accomplished at the close of construction by having a building and its systems put ''through their paces'' with a well-conducted commissioning process. This research project focused on developing key components to enable the development of tools that will automatically detect and correct equipment operating problems, thus providing continuous and automatic commissioning of the HVAC systems throughout the life of a facility. A study of pervasive operating problems revealed that the following would most benefit from an automated and continuous commissioning process: (1) faulty economizer operation; (2) malfunctioning sensors; (3) malfunctioning valves and dampers; and (4) access to project design data. Methodologies for detecting system operation faults in these areas were developed and validated in ''bare-bones'' form within standard software such as spreadsheets, databases, and statistical or mathematical packages. Demonstrations included flow diagrams and simplified mock-up applications. Techniques to manage data were demonstrated by illustrating how test forms could be populated with original design information and the recommended sequence of operation for equipment systems. The proposed tools would use measured data, design data, and equipment operating parameters to diagnose system problems. Steps for future research are suggested to help move toward practical application of automated commissioning and its high potential to improve equipment availability, increase occupant comfort, and extend the life of system equipment.
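
    The report's detection methodologies are not reproduced in the record; as a minimal sketch of one commonly published economizer rule (simplified, with made-up thresholds), a trend-log check might look like:

        def economizer_fault(t_outdoor, t_return, t_mixed, damper_cmd, tol=1.0):
            # Flag one common fault: free cooling is available (outdoor air
            # cooler than return air) and the damper is commanded open, yet
            # the mixed-air temperature still tracks the return air.
            free_cooling = t_outdoor < t_return - tol
            damper_open = damper_cmd > 0.5
            mixed_near_return = abs(t_mixed - t_return) < tol
            return free_cooling and damper_open and mixed_near_return

        # Hypothetical trend-log sample (deg C); expected output: True
        print(economizer_fault(t_outdoor=10.0, t_return=22.0,
                               t_mixed=21.8, damper_cmd=1.0))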

  19. The neutron discovery

    International Nuclear Information System (INIS)

    Six, J.

    1987-01-01

    The neutron: who first had the idea, who discovered it, who established its main properties? To these apparently simple questions, multiple answers exist. The progressive discovery of the neutron is a marvellous illustration of certain characteristics of scientific research, where the unforeseen may be combined with the expected. This discovery is set in the context of the scientific effervescence of the 1930s that followed the revolutionary introduction of quantum mechanics. This book describes the work of Bothe, the Joliot-Curies and Chadwick, which led to the neutron in an unexpected way. A historical analysis allows a new interpretation to be given of the hypothesis suggested by the Joliot-Curies. Texts from those days will help the reader relive this fascinating story [fr]

  20. Discovery of charm

    Energy Technology Data Exchange (ETDEWEB)

    Goldhaber, G.

    1984-11-01

    In my talk I will cover the period 1973 to 1976, which saw the discoveries of the J/psi and psi' resonances and most of the psion spectroscopy, the tau lepton, and the D⁰, D⁺ charmed meson doublet. Occasionally I will refer briefly to more recent results. Since this conference is on the history of the weak interactions, I will deal primarily with the properties of naked charm and in particular the weakly decaying doublet of charmed mesons. Most of the discoveries I will mention were made with the SLAC-LBL Magnetic Detector, or MARK I, which we operated at SPEAR from 1973 to 1976. 27 references.

  1. Atlas of Astronomical Discoveries

    CERN Document Server

    Schilling, Govert

    2011-01-01

    Four hundred years ago in Middelburg, in the Netherlands, the telescope was invented. The invention unleashed a revolution in the exploration of the universe. Galileo Galilei discovered mountains on the Moon, spots on the Sun, and moons around Jupiter. Christiaan Huygens saw details on Mars and rings around Saturn. William Herschel discovered a new planet and mapped binary stars and nebulae. Other astronomers determined the distances to stars, unraveled the structure of the Milky Way, and discovered the expansion of the universe. And, as telescopes became bigger and more powerful, astronomers delved deeper into the mysteries of the cosmos. In his Atlas of Astronomical Discoveries, astronomy journalist Govert Schilling tells the story of 400 years of telescopic astronomy. He looks at the 100 most important discoveries since the invention of the telescope. In his direct and accessible style, the author takes his readers on an exciting journey encompassing the highlights of four centuries of astronomy. Spectacul...

  2. Automated System Marketplace 1994.

    Science.gov (United States)

    Griffiths, Jose-Marie; Kertis, Kimberly

    1994-01-01

    Reports results of the 1994 Automated System Marketplace survey based on responses from 60 vendors. Highlights include changes in the library automation marketplace; estimated library systems revenues; minicomputer and microcomputer-based systems; marketplace trends; global markets and mergers; research needs; new purchase processes; and profiles…

  3. Automation benefits BWR customers

    International Nuclear Information System (INIS)

    Anon.

    1982-01-01

    A description is given of the increasing use of automation at General Electric's Wilmington fuel fabrication plant. Computerised systems and automated equipment perform a large number of inspections, inventory and process operations, and new advanced systems are being continuously introduced to reduce operator errors and expand product reliability margins. (U.K.)

  4. Automate functional testing

    Directory of Open Access Journals (Sweden)

    Ramesh Kalindri

    2014-06-01

    Full Text Available Currently, software engineers are increasingly turning to the option of automating functional tests, but they are not always successful in this endeavor. Reasons range from poor planning to cost overruns in the process. Some principles that can guide teams in automating these tests are described in this article.

  5. Automation in Warehouse Development

    NARCIS (Netherlands)

    Hamberg, R.; Verriet, J.

    2012-01-01

    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and

  6. Identity Management Processes Automation

    Directory of Open Access Journals (Sweden)

    A. Y. Lavrukhin

    2010-03-01

    Full Text Available Implementation of identity management systems consists of two main parts, consulting and automation. The consulting part includes development of a role model and identity management processes description. The automation part is based on the results of consulting part. This article describes the most important aspects of IdM implementation.

  7. Work and Programmable Automation.

    Science.gov (United States)

    DeVore, Paul W.

    A new industrial era based on electronics and the microprocessor has arrived, an era that is being called intelligent automation. Intelligent automation, in the form of robots, replaces workers, and the new products, using microelectronic devices, require significantly less labor to produce than the goods they replace. The microprocessor thus…

  8. Library Automation in Pakistan.

    Science.gov (United States)

    Haider, Syed Jalaluddin

    1998-01-01

    Examines the state of library automation in Pakistan. Discusses early developments; financial support by the Netherlands Library Development Project (Pakistan); lack of automated systems in college/university and public libraries; usage by specialist libraries; efforts by private-sector libraries and the National Library in Pakistan; commonly used…

  9. Library Automation Style Guide.

    Science.gov (United States)

    Gaylord Bros., Liverpool, NY.

    This library automation style guide lists specific terms and names often used in the library automation industry. The terms and/or acronyms are listed alphabetically and each is followed by a brief definition. The guide refers to the "Chicago Manual of Style" for general rules, and a notes section is included for the convenience of individual…

  10. Planning for Office Automation.

    Science.gov (United States)

    Sherron, Gene T.

    1982-01-01

    The steps taken toward office automation by the University of Maryland are described. Office automation is defined and some types of word processing systems are described. Policies developed in the writing of a campus plan are listed, followed by a section on procedures adopted to implement the plan. (Author/MLW)

  11. The Automated Office.

    Science.gov (United States)

    Naclerio, Nick

    1979-01-01

    Clerical personnel may be able to climb career ladders as a result of office automation and expanded job opportunities in the word processing area. Suggests opportunities in an automated office system and lists books and periodicals on word processing for counselors and teachers. (MF)

  12. Automating the Small Library.

    Science.gov (United States)

    Skapura, Robert

    1987-01-01

    Discusses the use of microcomputers for automating school libraries, both for entire systems and for specific library tasks. Highlights include available library management software, newsletters that evaluate software, constructing an evaluation matrix, steps to consider in library automation, and a brief discussion of computerized card catalogs.…

  13. Discoveries of isotopes by fission

    Indian Academy of Sciences (India)

    …also contributed to the discovery of new isotopes. More recently, most of the very neutron-rich isotopes have been discovered by projectile fission. After a brief summary of the discovery of the fission process itself, these production mechanisms will be discussed. The paper concludes with an outlook on future discoveries of ...

  14. Recent Discoveries and Bible Translation.

    Science.gov (United States)

    Harrelson, Walter

    1990-01-01

    Discusses recent discoveries for "Bible" translation with a focus on the "Dead Sea Scrolls." Examines recent discoveries that provide direct support for alternative reading of biblical passages and those discoveries that have contributed additional insight to knowledge of cultural practices, especially legal and religious…

  15. Fateful discovery almost forgotten

    International Nuclear Information System (INIS)

    Anon.

    1989-01-01

    The paper reviews the discovery of the fission of uranium, which took place fifty years ago. A description is given of the work of Meitner and Frisch in interpreting the Fermi data on the bombardment of uranium nuclei with neutrons, i.e. proposing fission. The historical events associated with the development and exploitation of uranium fission are described, including the Manhattan Project, Hiroshima and Nagasaki, Shippingport, and Chernobyl. (U.K.)

  16. Discovery as a process

    Energy Technology Data Exchange (ETDEWEB)

    Loehle, C.

    1994-05-01

    The three great myths, which form a sort of triumvirate of misunderstanding, are the Eureka! myth, the hypothesis myth, and the measurement myth. These myths are prevalent among scientists as well as among observers of science. The Eureka! myth asserts that discovery occurs as a flash of insight, and as such is not subject to investigation. This leads to the perception that discovery or deriving a hypothesis is a moment or event rather than a process. Events are singular and not subject to description. The hypothesis myth asserts that proper science is motivated by testing hypotheses, and that if something is not experimentally testable then it is not scientific. This myth leads to absurd posturing by some workers conducting empirical descriptive studies, who dress up their study with a 'hypothesis' to obtain funding or get it published. Methods papers are often rejected because they do not address a specific scientific problem. The fact is that many of the great breakthroughs in science involve methods and not hypotheses, or arise from largely descriptive studies. Those captured by this myth also try to block funding for those developing methods. The third myth is the measurement myth, which holds that determining what to measure is straightforward, so one doesn't need a lot of introspection to do science. As one ecologist put it to me: 'Don't give me any of that philosophy junk, just let me out in the field. I know what to measure.' These myths lead to difficulties for scientists who must face peer review to obtain funding and to get published. These myths also inhibit the study of science as a process. Finally, these myths inhibit creativity and suppress innovation. In this paper I first explore these myths in more detail and then propose a new model of discovery that opens the supposedly miraculous process of discovery to closer scrutiny.

  17. Discovery of TUG-770

    DEFF Research Database (Denmark)

    Christiansen, Elisabeth; Hansen, Steffen Vissing Fahnøe; Urban, Christian

    2013-01-01

    Free fatty acid receptor 1 (FFA1 or GPR40) enhances glucose-stimulated insulin secretion from pancreatic β-cells and currently attracts high interest as a new target for the treatment of type 2 diabetes. We here report the discovery of a highly potent FFA1 agonist with favorable physicochemical and pharmacokinetic properties. The compound efficiently normalizes glucose tolerance in diet-induced obese mice, an effect that is fully sustained after 29 days of chronic dosing.

  18. Discovery concepts for Mars

    Science.gov (United States)

    Luhmann, J. G.; Russell, C. T.; Brace, L. H.; Nagy, A. F.; Jakosky, B. M.; Barth, C. A.; Waite, J. H.

    1992-01-01

    Two focused Mars missions that would fit within the guidelines for the proposed Discovery line are discussed. The first mission would address the escape of the Martian atmosphere to space. A complete understanding of this topic is crucial to deciphering the evolution of the atmosphere, climate change, and volatile inventories. The second mission concerns the investigation of the remanent magnetization of the crust and its relationship to the ionosphere and the atmosphere.

  19. Advances in inspection automation

    Science.gov (United States)

    Weber, Walter H.; Mair, H. Douglas; Jansen, Dion; Lombardi, Luciano

    2013-01-01

    This new session at QNDE reflects the growing interest in inspection automation. Our paper describes a newly developed platform that makes the complex NDE automation possible without the need for software programmers. Inspection tasks that are tedious, error-prone or impossible for humans to perform can now be automated using a form of drag and drop visual scripting. Our work attempts to rectify the problem that NDE is not keeping pace with the rest of factory automation. Outside of NDE, robots routinely and autonomously machine parts, assemble components, weld structures and report progress to corporate databases. By contrast, components arriving in the NDT department typically require manual part handling, calibrations and analysis. The automation examples in this paper cover the development of robotic thickness gauging and the use of adaptive contour following on the NRU reactor inspection at Chalk River.

  20. Automated model building

    CERN Document Server

    Caferra, Ricardo; Peltier, Nicholas

    2004-01-01

    This is the first book on automated model building, a discipline of automated deduction that is of growing importance. Although models and their construction are important per se, automated model building has appeared as a natural enrichment of automated deduction, especially in the attempt to capture the human way of reasoning. The book provides an historical overview of the field of automated deduction, and presents the foundations of different existing approaches to model construction, in particular those developed by the authors. Finite and infinite model building techniques are presented. The main emphasis is on calculi-based methods, and relevant practical results are provided. The book is of interest to researchers and graduate students in computer science, computational logic and artificial intelligence. It can also be used as a textbook in advanced undergraduate courses.

  1. Automation in Warehouse Development

    CERN Document Server

    Verriet, Jacques

    2012-01-01

    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and supports the quality of picking processes. Secondly, the development of models to simulate and analyse warehouse designs and their components facilitates the challenging task of developing warehouses that take into account each customer’s individual requirements and logistic processes. Automation in Warehouse Development addresses both types of automation from the innovative perspective of applied science. In particular, it describes the outcomes of the Falcon project, a joint endeavour by a consortium of industrial and academic partners. The results include a model-based approach to automate warehouse control design, analysis models for warehouse design, concepts for robotic item handling and computer vision, and auton...

  2. Automation in Immunohematology

    Directory of Open Access Journals (Sweden)

    Meenu Bajpai

    2012-01-01

    Full Text Available There have been rapid technological advances in blood banking in the South Asian region over the past decade, with an increasing emphasis on the quality and safety of blood products. The conventional test tube technique has given way to newer techniques such as the column agglutination technique, solid phase red cell adherence assay, and erythrocyte-magnetized technique. These new technologies are adaptable to automation, and major manufacturers in this field have come up with semi- and fully automated equipment for immunohematology tests in the blood bank. Automation improves the objectivity and reproducibility of tests. It reduces human errors in patient identification and transcription errors. Documentation and traceability of tests, reagents and processes, and archiving of results are further major advantages of automation. Shifting from manual methods to automation is a major undertaking for any transfusion service seeking to provide quality patient care with a shorter turnaround time for an ever-increasing workload. This article discusses the various issues involved in the process.

  3. How To Use the Spreadsheet as a Tool in the Secondary School Mathematics Classroom. Second Edition (for Windows and Macintosh Operating Systems).

    Science.gov (United States)

    Masalski, William J.

    This book seeks to develop, enhance, and expand students' understanding of mathematics by using technology. Topics covered include the advantages of spreadsheets along with the opportunity to explore the 'what if?' type of questions encountered in the problem-solving process, enhancing the user's insight into the development and use of algorithms,…

  4. The Impacts of Mathematical Representations Developed through Webquest and Spreadsheet Activities on the Motivation of Pre-Service Elementary School Teachers

    Science.gov (United States)

    Halat, Erdogan; Peker, Murat

    2011-01-01

    The purpose of this study was to compare the influence of instruction using WebQuest activities with that of instruction using spreadsheet activities on the motivation of pre-service elementary school teachers in a mathematics teaching course. There were a total of 70 pre-service elementary school teachers involved in this study. Thirty…

  5. Developing Students' Understanding of Co-Opetition and Multilevel Inventory Management Strategies in Supply Chains: An In-Class Spreadsheet Simulation Exercise

    Science.gov (United States)

    Fetter, Gary; Shockley, Jeff

    2014-01-01

    Instructors look for ways to explain to students how supply chains can be constructed so that competing suppliers can work together to improve inventory management performance (i.e., a phenomenon known as co-opetition). An Excel spreadsheet-driven simulation is presented that models a complete multilevel supply chain system--customer, retailer,…

  6. CalTOX®, a multimedia total exposure model spreadsheet user's guide. Version 4.0 (Beta)

    Energy Technology Data Exchange (ETDEWEB)

    McKone, T.E.; Enoch, K.G.

    2002-08-01

    CalTOX has been developed as a set of spreadsheet models and spreadsheet data sets to assist in assessing human exposures from continuous releases to multiple environmental media, i.e., air, soil, and water. It has also been used for waste classification and for setting soil clean-up levels at uncontrolled hazardous waste sites. The modeling components of CalTOX include a multimedia transport and transformation model, multi-pathway exposure scenario models, and add-ins to quantify and evaluate uncertainty and variability. All parameter values used as inputs to CalTOX are distributions, described in terms of mean values and a coefficient of variation, rather than point estimates or plausible upper values as most other models employ. This probabilistic approach allows both sensitivity and uncertainty analyses to be directly incorporated into the model operation. This manual provides CalTOX users with a brief overview of the CalTOX spreadsheet model and provides instructions for using the spreadsheet to make deterministic and probabilistic calculations of source-dose-risk relationships.
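
    The manual's input format is not reproduced here; as a minimal sketch of how a mean-and-CV specification can feed a probabilistic run (assuming, purely for illustration, a lognormal shape and made-up parameter values):

        import math
        import random

        def sample_lognormal(mean, cv, n=10000, seed=1):
            # Convert a (mean, CV) specification to lognormal parameters
            # and draw n Monte Carlo samples.
            sigma2 = math.log(1.0 + cv * cv)
            mu = math.log(mean) - sigma2 / 2.0
            rng = random.Random(seed)
            return [rng.lognormvariate(mu, math.sqrt(sigma2)) for _ in range(n)]

        draws = sample_lognormal(100.0, 0.3)  # e.g., a partition coefficient
        print(sum(draws) / len(draws))        # sample mean, close to 100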

  7. SimpleTreat: a spreadsheet-based box model to predict the fate of xenobiotics in a municipal waste water treatment plant

    NARCIS (Netherlands)

    Struijs J; van de Meent D; Stoltenkamp J

    1991-01-01

    A non-equilibrium steady-state box model is reported that predicts the fate of new chemicals in a conventional sewage treatment plant from a minimal input data set. The model, written in an electronic spreadsheet (Lotus 1-2-3™), requires a minimum input: some basic properties of the chemical, its

  8. SU-G-BRB-04: Automated Output Factor Measurements Using Continuous Data Logging for Linac Commissioning

    International Nuclear Information System (INIS)

    Zhu, X; Li, S; Zheng, D; Wang, S; Lei, Y; Zhang, M; Ma, R; Fan, Q; Wang, X; Li, X; Verma, V; Enke, C; Zhou, S

    2016-01-01

    Purpose: Linac commissioning is a time-consuming and labor-intensive process, the streamlining of which is highly desirable. In particular, manual measurement of output factors for a variety of field sizes and energies greatly hinders commissioning efficiency. In this study, automated measurement of output factors was demonstrated as 'one-click' using the data logging of an electrometer. Methods: Beams to be measured were created in the recording and verifying (R&V) system and configured for continuous delivery. An electrometer with an automatic data-logging feature enabled continuous data collection for all fields without human intervention. The electrometer saved data into a spreadsheet every 0.5 seconds. A Matlab program was developed to analyze the spreadsheet data in order to monitor and check the data quality. Results: For each photon energy, output factors were measured for five configurations, including an open field and four wedges. Each configuration includes 72 field sizes, ranging from 4×4 to 20×30 cm². Using automation, it took 50 minutes to complete the measurement of 72 field sizes, in contrast to 80 minutes with the manual approach. The automation avoided the need for redundant Linac status checks between fields, as in the manual approach. In fact, the only limiting factor in such automation is Linac overheating. The data collection beams in the R&V system are reusable, and the simplified process is less error-prone. In addition, our Matlab program extracted the output factors faithfully from the data log, and the discrepancy between the automatic and manual measurements is within ±0.3%. For two separate automated measurements 30 days apart, a consistency check shows a discrepancy within ±1% for 6 MV photons with a 60-degree wedge. Conclusion: Automated output factor measurements can save time by 40% compared with the conventional manual approach. This work lays the groundwork for further automation of Linac commissioning.
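
    The group's Matlab analysis is not reproduced in the record; as a minimal sketch of the core idea (splitting a continuous 0.5 s log into beam-on plateaus and normalizing each field to a reference), with a synthetic trace standing in for real data:

        def plateau_means(readings, threshold=0.05):
            # Split a continuous data log into beam-on plateaus (signal
            # above threshold) and return the mean of each plateau.
            means, current = [], []
            for r in readings:
                if r > threshold:
                    current.append(r)
                elif current:
                    means.append(sum(current) / len(current))
                    current = []
            if current:
                means.append(sum(current) / len(current))
            return means

        # Synthetic stand-in for a logged trace: three beam-on plateaus
        log = [0.0] * 5 + [1.00] * 20 + [0.0] * 5 + [0.97] * 20 + [0.0] * 5 + [1.05] * 20
        fields = plateau_means(log)
        ref = fields[0]  # treat the first field as the reference
        print([round(m / ref, 3) for m in fields])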

  9. When social actions get translated into spreadsheets: economics and social work with children and youth in Denmark

    DEFF Research Database (Denmark)

    Schrøder, Ida Marie

    2013-01-01

    As a means of reducing public spending, social workers in Danish municipalities are expected to take public sector economy into account when deciding how to solve social problems. Researchers have previously investigated the impact of social work on the public sector economy, the cost and outcomes of social work, and the impact of regulating social workers, but far less explored is what actually happens when social workers deal with economy in their everyday practice. My study takes some first steps to fill this knowledge gap. Through a mixed-method design, the study explores social workers' interventions to help children and young people. Inspired by the sociologist John Law, my preliminary study suggests that taking economy into account often becomes a question of translating social interventions into spreadsheets, rather than making economically based decisions. I classify three kinds…

  10. Systematic review automation technologies

    Science.gov (United States)

    2014-01-01

    Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise, and timeliness are often quoted as major contributors to the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that see the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of the systematic review or each of its tasks. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed to realize automation of the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage that the optimized workflow will lead to a system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time. PMID:25005128

  11. On-Site School Library Automation: Automation Anywhere with Laptops.

    Science.gov (United States)

    Gunn, Holly; Oxner, June

    2000-01-01

    Four years after the Halifax Regional School Board was formed through amalgamation, over 75% of its school libraries were automated. On-site automation with laptops was a quicker, more efficient way of automating than sending a shelf list to the Technical Services Department. The Eastern Shore School Library Automation Project was a successful…

  12. Automated electron microprobe

    International Nuclear Information System (INIS)

    Thompson, K.A.; Walker, L.R.

    1986-01-01

    The Plant Laboratory at the Oak Ridge Y-12 Plant has recently obtained a Cameca MBX electron microprobe with a Tracor Northern TN5500 automation system. This allows full stage and spectrometer automation and digital beam control. The capabilities of the system include qualitative and quantitative elemental microanalysis for all elements above and including boron in atomic number, high- and low-magnification imaging and processing, elemental mapping and enhancement, and particle size, shape, and composition analyses. Very low magnification, quantitative elemental mapping using stage control (which is of particular interest) has been accomplished along with automated size, shape, and composition analysis over a large relative area

  13. Operational proof of automation

    International Nuclear Information System (INIS)

    Jaerschky, R.; Reifenhaeuser, R.; Schlicht, K.

    1976-01-01

    Automation of the power plant process may imply quite a number of problems. The automation of dynamic operations requires complicated programmes, often interfering in several branched areas. This reduces clarity for the operating and maintenance staff, whilst increasing the possibility of errors. The synthesis and organization of standardized equipment have proved very successful. The possibilities offered by this kind of automation for improving the operation of power plants will, however, only be sufficiently and correctly turned to profit if the application of this equipment technique is further improved and if its volume is tallied with a definite etc. (orig.) [de]

  14. Chef infrastructure automation cookbook

    CERN Document Server

    Marschall, Matthias

    2013-01-01

    Chef Infrastructure Automation Cookbook contains practical recipes on everything you will need to automate your infrastructure using Chef. The book is packed with illustrated code examples to automate your server and cloud infrastructure.The book first shows you the simplest way to achieve a certain task. Then it explains every step in detail, so that you can build your knowledge about how things work. Eventually, the book shows you additional things to consider for each approach. That way, you can learn step-by-step and build profound knowledge on how to go about your configuration management

  15. Applying 'Evidence-Based Medicine' Theory to Interventional Radiology.Part 2: A Spreadsheet for Swift Assessment of Procedural Benefit and Harm

    International Nuclear Information System (INIS)

    MacEneaney, Peter M.; Malone, Dermot E.

    2000-01-01

    AIM: To design a spreadsheet program to rapidly analyse interventional radiology (IR) data, produced in local research or reported in the literature, using 'evidence-based medicine' (EBM) parameters of treatment benefit and harm. MATERIALS AND METHODS: Microsoft Excel™ was used. The spreadsheet consists of three worksheets. The first shows the 'Levels of Evidence and Grades of Recommendations' that can be assigned to therapeutic studies as defined by the Oxford Centre for EBM. The second and third worksheets facilitate the EBM assessment of therapeutic benefit and harm. Validity criteria are described, including the assessment of the adequacy of sample size for the detection of possible procedural complications. A contingency (2×2) table for raw data on comparative outcomes in treated patients and controls has been incorporated. Formulae for the EBM calculations are related to these numerators and denominators in the spreadsheet. The parameters calculated are, for benefit: relative risk reduction, absolute risk reduction, and number needed to treat (NNT); for harm: relative risk, relative odds, and number needed to harm (NNH). Ninety-five per cent confidence intervals are calculated for all these indices. The results change automatically when the data in the therapeutic outcome cells are changed. A final section allows the user to correct the NNT or NNH for application to individual patients. RESULTS: This spreadsheet can be used on desktop and palmtop computers. The MS Excel™ version can be downloaded via the Internet from the URL ftp://radiography.com/pub/TxHarm00.xls. CONCLUSION: A spreadsheet is useful for the rapid analysis of the clinical benefit and harm from IR procedures.
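
    As a minimal sketch of the same calculations outside a spreadsheet (the 2×2 counts are hypothetical, and the confidence interval shown uses the standard log method for relative risk only):

        import math

        def benefit_harm(a, b, c, d):
            # a/b: events/non-events in treated; c/d: events/non-events in controls.
            cer = c / (c + d)            # control event rate
            eer = a / (a + b)            # experimental event rate
            arr = cer - eer              # absolute risk reduction
            rrr = arr / cer              # relative risk reduction
            nnt = 1.0 / arr              # number needed to treat
            rr = eer / cer               # relative risk
            odds = (a / b) / (c / d)     # relative odds (odds ratio)
            se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))  # SE of ln(RR)
            ci = tuple(math.exp(math.log(rr) + s * 1.96 * se) for s in (-1, 1))
            return dict(ARR=arr, RRR=rrr, NNT=nnt, RR=rr, OR=odds, RR_95CI=ci)

        # Hypothetical trial: 12/88 events/non-events treated, 30/70 in controls
        print(benefit_harm(12, 88, 30, 70))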

  16. Automating Groundwater Sampling At Hanford, The Next Step

    International Nuclear Information System (INIS)

    Connell, C.W.; Conley, S.F.; Hildebrand, R.D.; Cunningham, D.E.

    2010-01-01

    Historically, the groundwater monitoring activities at the Department of Energy's Hanford Site in southeastern Washington State have been very 'people intensive.' Approximately 1500 wells are sampled each year by field personnel or 'samplers.' These individuals have been issued pre-printed forms showing information about the well(s) for a particular sampling evolution. This information is taken from two official electronic databases: the Hanford Well Information System (HWIS) and the Hanford Environmental Information System (HEIS). The samplers used these hardcopy forms to document the groundwater samples and well water levels. After recording the entries in the field, the samplers turned the forms in at the end of the day, and other personnel posted the collected information onto a spreadsheet that was then printed and included in a log book. The log book was then used to make manual entries of the new information into the software application(s) for the HEIS and HWIS databases. A pilot project for automating this extremely tedious process was launched in 2008. Initially, the automation was focused on water-level measurements. Now, the effort is being extended to automate the metadata associated with collecting groundwater samples. The project allowed electronic forms produced in the field by samplers to be used in a workflow process where the data are transferred to the database and the electronic form is filed in managed records, thus eliminating manually completed forms. Eliminating the manual forms and streamlining the data entry not only improved the accuracy of the information recorded, but also enhanced the efficiency and sampling capacity of field office personnel.

  17. Recent advances in inkjet dispensing technologies: applications in drug discovery.

    Science.gov (United States)

    Zhu, Xiangcheng; Zheng, Qiang; Yang, Hu; Cai, Jin; Huang, Lei; Duan, Yanwen; Xu, Zhinan; Cen, Peilin

    2012-09-01

    Inkjet dispensing technology is a promising fabrication methodology widely applied in drug discovery. Its automated, programmable characteristics and high-throughput efficiency make this approach potentially very useful in miniaturizing the design patterns for assays and drug screening. Various custom-made inkjet dispensing systems, as well as specialized bio-inks and substrates, have been developed and applied to fulfill the increasing demands of basic drug discovery studies. The incorporation of other modern technologies has further exploited the potential of inkjet dispensing technology in drug discovery and development. This paper reviews and discusses the recent developments and practical applications of inkjet dispensing technology in several areas of drug discovery and development, including fundamental assays of cells and proteins, microarrays, biosensors, tissue engineering, and basic biological and pharmaceutical studies. Progress in a number of areas of research, including biomaterials, inkjet mechanical systems and modern analytical techniques, as well as the exploration and accumulation of profound biological knowledge, has enabled different inkjet dispensing technologies to be developed and adapted for high-throughput pattern fabrication and miniaturization. This in turn presents a great opportunity to propel inkjet dispensing technology into drug discovery.

  18. Automation Interface Design Development

    Data.gov (United States)

    National Aeronautics and Space Administration — Our research makes its contributions at two levels. At one level, we addressed the problems of interaction between humans and computers/automation in a particular...

  19. Automated Vehicles Symposium 2014

    CERN Document Server

    Beiker, Sven; Road Vehicle Automation 2

    2015-01-01

    This paper collection is the second volume of the LNMOB series on Road Vehicle Automation. The book contains a comprehensive review of current technical, socio-economic, and legal perspectives written by experts coming from public authorities, companies and universities in the U.S., Europe and Japan. It originates from the Automated Vehicle Symposium 2014, which was jointly organized by the Association for Unmanned Vehicle Systems International (AUVSI) and the Transportation Research Board (TRB) in Burlingame, CA, in July 2014. The contributions discuss the challenges arising from the integration of highly automated and self-driving vehicles into the transportation system, with a focus on human factors and different deployment scenarios. This book is an indispensable source of information for academic researchers, industrial engineers, and policy makers interested in the topic of road vehicle automation.

  20. Fixed automated spray technology.

    Science.gov (United States)

    2011-04-19

    This research project evaluated the construction and performance of Boschung's Fixed Automated Spray Technology (FAST) system. The FAST system automatically sprays de-icing material on the bridge when icing conditions are about to occur. The FA...

  1. Automated Vehicles Symposium 2015

    CERN Document Server

    Beiker, Sven

    2016-01-01

    This edited book comprises papers about the impacts, benefits and challenges of connected and automated cars. It is the third volume of the LNMOB series dealing with Road Vehicle Automation. The book includes contributions from researchers, industry practitioners and policy makers, covering perspectives from the U.S., Europe and Japan. It is based on the Automated Vehicles Symposium 2015, which was jointly organized by the Association for Unmanned Vehicle Systems International (AUVSI) and the Transportation Research Board (TRB) in Ann Arbor, Michigan, in July 2015. The topical spectrum includes, but is not limited to, public sector activities, human factors, ethical and business aspects, energy and technological perspectives, vehicle systems and transportation infrastructure. This book is an indispensable source of information for academic researchers, industrial engineers and policy makers interested in the topic of road vehicle automation.

  2. Automation synthesis modules review

    International Nuclear Information System (INIS)

    Boschi, S.; Lodi, F.; Malizia, C.; Cicoria, G.; Marengo, M.

    2013-01-01

    The introduction of 68Ga-labelled tracers has changed the diagnostic approach to neuroendocrine tumours, and the availability of a reliable, long-lived 68Ge/68Ga generator has been at the basis of the development of 68Ga radiopharmacy. The huge increase in clinical demand, the impact of regulatory issues and the careful radioprotection of the operators have pushed for extensive automation of the production process. The development of automated systems for 68Ga radiochemistry, different engineering and software strategies, and post-processing of the eluate are discussed, along with the impact of regulations on automation. - Highlights: ► Generator availability and robust chemistry have driven the wide diffusion of 68Ga radiopharmaceuticals. ► Different technological approaches for 68Ga radiopharmaceuticals are discussed. ► Post-processing of the generator eluate and the evolution to cassette-based systems were the major issues in automation. ► The impact of regulations on the technological development is also considered

  3. Disassembly automation automated systems with cognitive abilities

    CERN Document Server

    Vongbunyong, Supachai

    2015-01-01

    This book presents a number of aspects to be considered in the development of disassembly automation, including the mechanical system, vision system and intelligent planner. The implementation of cognitive robotics increases the flexibility and degree of autonomy of the disassembly system. Disassembly, as a step in the treatment of end-of-life products, can allow the recovery of embodied value left within disposed products, as well as the appropriate separation of potentially-hazardous components. In the end-of-life treatment industry, disassembly has largely been limited to manual labor, which is expensive in developed countries. Automation is one possible solution for economic feasibility. The target audience primarily comprises researchers and experts in the field, but the book may also be beneficial for graduate students.

  4. Automated Lattice Perturbation Theory

    Energy Technology Data Exchange (ETDEWEB)

    Monahan, Christopher

    2014-11-01

    I review recent developments in automated lattice perturbation theory. Starting with an overview of lattice perturbation theory, I focus on the three automation packages currently "on the market": HiPPy/HPsrc, Pastor and PhySyCAl. I highlight some recent applications of these methods, particularly in B physics. In the final section I briefly discuss the related, but distinct, approach of numerical stochastic perturbation theory.

  5. Automated ISMS control auditability

    OpenAIRE

    Suomu, Mikko

    2015-01-01

    This thesis focuses on researching a possible reference model for the automated auditability of an ISMS's (Information Security Management System) technical controls. The main objective was to develop a generic framework for automated compliance status monitoring against the ISO 27001:2013 standard which could be re-used in any ISMS. The framework was tested with Proof of Concept (PoC) empirical research in a test infrastructure which simulates the framework's target deployment environment. To fulfi...

  6. Marketing automation supporting sales

    OpenAIRE

    Sandell, Niko

    2016-01-01

    The past couple of decades have been a time of major change in marketing. Digitalization has become a permanent part of marketing and has at the same time enabled efficient collection of data. Personalization and customization of content play a crucial role in marketing when new customers are acquired. This has also created a need for automation to facilitate the distribution of targeted content. As a result of successful marketing automation, more information about the customers is gathered ...

  7. Automated lattice data generation

    Directory of Open Access Journals (Sweden)

    Ayyar Venkitesh

    2018-01-01

    Full Text Available The process of generating ensembles of gauge configurations (and measuring various observables over them) can be tedious and error-prone when done "by hand". In practice, most of this procedure can be automated with the use of a workflow manager. We discuss how this automation can be accomplished using Taxi, a minimal Python-based workflow manager built for generating lattice data. We present a case study demonstrating this technology.
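    As a rough illustration of what a workflow manager automates, the sketch below runs tasks once their dependencies have completed. The Task/run_all API is hypothetical; it is not Taxi's actual interface.

    ```python
    # Minimal dependency-driven task runner in the spirit of a lattice workflow
    # manager; assumes the dependency graph is acyclic.
    from collections import deque

    class Task:
        def __init__(self, name, action, deps=()):
            self.name, self.action, self.deps = name, action, list(deps)
            self.done = False

    def run_all(tasks):
        pending = deque(tasks)
        while pending:
            task = pending.popleft()
            if all(d.done for d in task.deps):
                task.action()          # e.g. run an MCMC update or a measurement
                task.done = True
            else:
                pending.append(task)   # requeue until its dependencies finish

    gen = Task("generate_config_100", lambda: print("MCMC update -> cfg 100"))
    meas = Task("measure_plaquette_100", lambda: print("measuring cfg 100"), deps=[gen])
    run_all([meas, gen])   # runs the generation first, then the measurement
    ```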

  8. Mapping the Stacks: Sustainability and User Experience of Animated Maps in Library Discovery Interfaces

    Science.gov (United States)

    McMillin, Bill; Gibson, Sally; MacDonald, Jean

    2016-01-01

    Animated maps of the library stacks were integrated into the catalog interface at Pratt Institute and into the EBSCO Discovery Service interface at Illinois State University. The mapping feature was developed for optimal automation of the update process to enable a range of library personnel to update maps and call-number ranges. The development…

  9. Automated detection of structural alerts (chemical fragments) in (eco)toxicology

    Directory of Open Access Journals (Sweden)

    Ronan Bureau

    2013-02-01

    Full Text Available This mini-review describes the evolution of different algorithms dedicated to the automated discovery of chemical fragments associated with (eco)toxicological endpoints. These structural alerts correspond to one of the most interesting approaches of in silico toxicology, due to their direct link with specific toxicological mechanisms. A number of expert systems are already available but, since the first work in this field, which considered a binomial distribution of chemical fragments between two datasets, new data miners have been developed and applied with success in chemoinformatics. The frequency of a chemical fragment in a dataset is often at the core of the process for the definition of its toxicological relevance. However, recent progress in data mining provides new insights into the automated discovery of new rules. In particular, this review highlights the notion of Emerging Patterns, which can capture contrasts between classes of data.

  10. AUTOMATED DETECTION OF STRUCTURAL ALERTS (CHEMICAL FRAGMENTS) IN (ECO)TOXICOLOGY

    Directory of Open Access Journals (Sweden)

    Alban Lepailleur

    2013-02-01

    Full Text Available This mini-review describes the evolution of different algorithms dedicated to the automated discovery of chemical fragments associated with (eco)toxicological endpoints. These structural alerts correspond to one of the most interesting approaches of in silico toxicology, due to their direct link with specific toxicological mechanisms. A number of expert systems are already available but, since the first work in this field, which considered a binomial distribution of chemical fragments between two datasets, new data miners have been developed and applied with success in chemoinformatics. The frequency of a chemical fragment in a dataset is often at the core of the process for the definition of its toxicological relevance. However, recent progress in data mining provides new insights into the automated discovery of new rules. In particular, this review highlights the notion of Emerging Patterns, which can capture contrasts between classes of data.
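    A toy sketch of the Emerging Pattern idea described above: a fragment "emerges" when its support in one class is much higher than in the other. The fragment sets, molecules and growth-rate threshold below are invented for illustration.

    ```python
    # Fragments whose support contrasts sharply between toxic and non-toxic sets.
    toxic = [{"nitro", "aromatic"}, {"nitro", "halide"}, {"epoxide"}]
    nontoxic = [{"aromatic"}, {"halide", "ester"}, {"ester"}]

    def support(fragment, dataset):
        return sum(fragment in mol for mol in dataset) / len(dataset)

    fragments = set().union(*toxic, *nontoxic)
    for frag in sorted(fragments):
        s_tox, s_non = support(frag, toxic), support(frag, nontoxic)
        growth = s_tox / s_non if s_non else float("inf")  # growth rate of the pattern
        if growth >= 2:  # threshold at which the pattern "emerges" in the toxic class
            print(f"{frag}: support {s_tox:.2f} vs {s_non:.2f} -> growth {growth}")
    ```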

  11. Protocol for Automated Zooplankton Analysis

    Science.gov (United States)

    2010-01-01

    The filter cube position is manually set to position 2 for the Green Fluorescent Protein (GFP) filter cube. Again, the operator is prompted by the spreadsheet ... [list of abbreviations omitted] ... parameters are selected by the microscope operator: the GFP (green fluorescent pigment) camera exposure time and the gain settings. Based on recent work, it ...

  12. Googling your hand hygiene data: Using Google Forms, Google Sheets, and R to collect and automate analysis of hand hygiene compliance monitoring.

    Science.gov (United States)

    Wiemken, Timothy L; Furmanek, Stephen P; Mattingly, William A; Haas, Janet; Ramirez, Julio A; Carrico, Ruth M

    2018-02-26

    Hand hygiene is one of the most important interventions in the quest to eliminate healthcare-associated infections, and rates in healthcare facilities are markedly low. Since hand hygiene observation and feedback are critical to improve adherence, we created an easy-to-use, platform-independent hand hygiene data collection process and an automated, on-demand reporting engine. A 3-step approach was used for this project: 1) creation of a data collection form using Google Forms, 2) transfer of data from the form to a spreadsheet using Google Spreadsheets, and 3) creation of an automated, cloud-based analytics platform for report generation using R and RStudio Shiny software. A video tutorial of all steps in the creation and use of this free tool can be found on our YouTube channel: https://www.youtube.com/watch?v=uFatMR1rXqU&t. The on-demand reporting tool can be accessed at: https://crsp.louisville.edu/shiny/handhygiene. This data collection and automated analytics engine provides an easy-to-use environment for evaluating hand hygiene data; it also provides rapid feedback to healthcare workers. By reducing some of the data management workload required of the infection preventionist, more focused interventions may be instituted to increase global hand hygiene rates and reduce infection. Copyright © 2018 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
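    The analysis step can be sketched outside R as well. Assuming the Google Sheet is published as CSV (a standard Sheets export), the hypothetical Python snippet below computes a per-unit compliance rate; the sheet URL and column names are invented for illustration and are not the authors' actual schema.

    ```python
    import csv
    import urllib.request

    SHEET_CSV = "https://docs.google.com/spreadsheets/d/<sheet-id>/export?format=csv"

    def compliance_by_unit(url=SHEET_CSV):
        # each row is one hand hygiene observation recorded via the Google Form
        lines = urllib.request.urlopen(url).read().decode().splitlines()
        totals = {}
        for row in csv.DictReader(lines):
            unit = row["Unit"]                                # hypothetical column
            ok = row["Hand hygiene performed"] == "Yes"       # hypothetical column
            seen, compliant = totals.get(unit, (0, 0))
            totals[unit] = (seen + 1, compliant + ok)
        return {u: c / n for u, (n, c) in totals.items()}     # compliance rate 0..1
    ```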

  13. Representation Discovery using Harmonic Analysis

    CERN Document Server

    Mahadevan, Sridhar

    2008-01-01

    Representations are at the heart of artificial intelligence (AI). This book is devoted to the problem of representation discovery: how can an intelligent system construct representations from its experience? Representation discovery re-parameterizes the state space - prior to the application of information retrieval, machine learning, or optimization techniques - facilitating later inference processes by constructing new task-specific bases adapted to the state space geometry. This book presents a general approach to representation discovery using the framework of harmonic analysis, in particu

  14. Quantitative DMS mapping for automated RNA secondary structure inference

    OpenAIRE

    Cordero, Pablo; Kladwang, Wipapat; VanLang, Christopher C.; Das, Rhiju

    2012-01-01

    For decades, dimethyl sulfate (DMS) mapping has informed manual modeling of RNA structure in vitro and in vivo. Here, we incorporate DMS data into automated secondary structure inference using a pseudo-energy framework developed for 2'-OH acylation (SHAPE) mapping. On six non-coding RNAs with crystallographic models, DMS-guided modeling achieves overall false negative and false discovery rates of 9.5% and 11.6%, comparable to or better than SHAPE-guided modeling; and non-parametric bootstrappin...

  15. Optogenetics enlightens neuroscience drug discovery.

    Science.gov (United States)

    Song, Chenchen; Knöpfel, Thomas

    2016-02-01

    Optogenetics - the use of light and genetics to manipulate and monitor the activities of defined cell populations - has already had a transformative impact on basic neuroscience research. Now, the conceptual and methodological advances associated with optogenetic approaches are providing fresh momentum to neuroscience drug discovery, particularly in areas that are stalled on the concept of 'fixing the brain chemistry'. Optogenetics is beginning to translate and transit into drug discovery in several key domains, including target discovery, high-throughput screening and novel therapeutic approaches to disease states. Here, we discuss the exciting potential of optogenetic technologies to transform neuroscience drug discovery.

  16. Discovery of neptunium

    International Nuclear Information System (INIS)

    Abelson, P.H.

    1990-01-01

    A number of distinguished scientists irradiated uranium with neutrons during 1934-1938. All were knowledgeable about the periodic table. They observed a number of beta-emitting activities that seemed to come from transuranic elements. They assumed that elements 93 and 94 would have chemical properties similar to rhenium and osmium, respectively. In consequence, the discovery of fission and of neptunium was delayed. After fission was finally demonstrated, a new search for element 93 was initiated by McMillan. He showed that when thin films of uranium are exposed to neutrons, the high-energy fission products leave the film, while 23-minute and 2.3-day activities remain. The 23-minute activity was known to be an isotope of uranium. Chemistry performed by Abelson in May 1940 produced conclusive evidence that the 2.3-day activity was from the transuranic element 93, later named neptunium

  17. Hippocampus discovery First steps

    Directory of Open Access Journals (Sweden)

    Eliasz Engelhardt

    Full Text Available The first steps of the discovery of the hippocampus, and its main discoverers, are outlined. Arantius was the first to describe a structure he named "hippocampus" or "white silkworm". Despite numerous controversies and alternative designations, the term hippocampus has prevailed to this day as the most widely used term. Duvernoy provided an illustration of the hippocampus and surrounding structures, considered the first by most authors, which appeared more than a century and a half after Arantius' description. Some authors have identified other drawings and texts which they claim predate Duvernoy's depiction, in studies by Vesalius, Varolio, Willis, and Eustachio, albeit unconvincingly. Considering the definition of the hippocampal formation as comprising the hippocampus proper, dentate gyrus and subiculum, Arantius and Duvernoy apparently described the gross anatomy of this complex. The pioneering studies of Arantius and Duvernoy revealed a relatively small hidden formation that would become one of the most valued brain structures.

  18. Semi-automated software service integration in virtual organisations

    Science.gov (United States)

    Afsarmanesh, Hamideh; Sargolzaei, Mahdi; Shadi, Mahdieh

    2015-08-01

    To enhance their business opportunities, organisations involved in many service industries are increasingly active in the pursuit of both online provision of their business services (BSs) and collaboration with others. Collaborative Networks (CNs) in the service industry sector, however, face many challenges related to sharing and integration of their collections of provided BSs and the corresponding software services. Therefore, the topic of service interoperability, for which this article introduces a framework, is gaining momentum in research on supporting CNs. The framework contributes to the generation of formal, machine-readable specifications for business processes, aimed at providing the unambiguous definitions needed for developing their equivalent software services. It provides a model and implementation architecture for the discovery and composition of shared services, to support the semi-automated development of integrated value-added services. In support of service discovery, a main contribution of this research is the formal representation of service behaviour, and the application of the desired service behaviour specified by users to automated matchmaking with existing services. Furthermore, to support service integration, mechanisms are developed for the automated selection of the most suitable service(s) according to a number of service quality aspects. Two scenario cases are presented, which exemplify several specific features related to service discovery and service integration.
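    A rough sketch of behaviour-based matchmaking with quality-weighted selection, in the spirit of (but not reproducing) the framework described above; all service names, operations and weights are invented.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Service:
        name: str
        operations: set                                # behaviour the service exposes
        qos: dict = field(default_factory=dict)        # e.g. {"availability": 0.99}

    def rank(candidates, required_ops, weights):
        # keep services whose behaviour covers the request, then score by QoS
        matches = [s for s in candidates if required_ops <= s.operations]
        return sorted(matches,
                      key=lambda s: sum(w * s.qos.get(q, 0) for q, w in weights.items()),
                      reverse=True)

    catalog = [Service("PayFast", {"authorize", "capture"},
                       {"availability": 0.999, "latency": 0.7}),
               Service("PayCheap", {"authorize", "capture", "refund"},
                       {"availability": 0.95, "latency": 0.9})]
    best = rank(catalog, {"authorize", "capture"}, {"availability": 2.0, "latency": 1.0})
    print([s.name for s in best])   # highest-scoring matching service first
    ```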

  19. Automated parallel recordings of topologically identified single ion channels.

    Science.gov (United States)

    Kawano, Ryuji; Tsuji, Yutaro; Sato, Koji; Osaki, Toshihisa; Kamiya, Koki; Hirano, Minako; Ide, Toru; Miki, Norihisa; Takeuchi, Shoji

    2013-01-01

    Although ion channels are attractive targets for drug discovery, the systematic screening of ion channel-targeted drugs remains challenging. To facilitate automated single ion-channel recordings for the analysis of drug interactions with the intra- and extracellular domain, we have developed a parallel recording methodology using artificial cell membranes. The use of stable lipid bilayer formation in droplet chamber arrays facilitated automated, parallel, single-channel recording from reconstituted native and mutated ion channels. Using this system, several types of ion channels, including mutated forms, were characterised by determining the protein orientation. In addition, we provide evidence that both intra- and extracellular amyloid-beta fragments directly inhibit the channel open probability of the hBK channel. This automated methodology provides a high-throughput drug screening system for the targeting of ion channels and a data-intensive analysis technique for studying ion channel gating mechanisms.

  20. SEALEX — Internal reef chronology and virtual drill logs from a spreadsheet-based reef growth model

    Science.gov (United States)

    Koelling, Martin; Webster, Jody Michael; Camoin, Gilbert; Iryu, Yasufumi; Bard, Edouard; Seard, Claire

    2009-03-01

    A reef growth model has been developed using an Excel spreadsheet. The 1D forward model is driven by a user-definable sea-level curve. Other adjustable model parameters include maximum coral growth rate, the depth dependence of coral growth rate and light attenuation, subaerial erosion, and subsidence. A time lag for the establishment of significant reef accretion may also be set. During the model run, both the external shape and the internal chronologic structure of the growing reef, as well as the paleo-water depths, are continuously displayed and recorded. We tested the model on fossil reef systems growing in a range of different tectonic settings, such as slowly subsiding islands like Tahiti (subsidence rate of 0.25 m ka⁻¹), rapidly subsiding islands like Hawaii (subsidence rate of 2.5 m ka⁻¹), rapidly uplifting coastal settings like Huon Peninsula (uplift rates of 0.5 to 4 m ka⁻¹), and more slowly uplifting settings like Haiti (uplift rate of 0.55 m ka⁻¹). The model runs show the sensitivity of the resulting overall morphology and internal age structure to the different model parameters. Additionally, the water depth at the time of deposition is recorded. This allows the construction of virtual borehole logs, with the coral age profiles and the paleo-water depth at the time of growth both displayed and recorded. Because the model is implemented as a macro in a popular spreadsheet program, it may be easily adapted or extended to model the growth of different reef and carbonate platform settings. Single model runs take a few minutes on a standard (2 GHz Core Duo) desktop computer under Windows XP. The model may be used to investigate the effects of different boundary conditions such as maximum reef growth, erosion rates, subsidence or uplift on both the general morphology of the reefs and the internal chronologic structure. These results can then be compared to observed data, allowing different hypotheses concerning reef development to be
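    The core loop of such a 1D forward model can be sketched in a few lines: at each time step, accretion is the depth-attenuated growth rate capped by the available accommodation space, and the reef surface moves with subsidence. The parameterization below is an assumption for illustration, not the SEALEX formulation.

    ```python
    import math

    def growth_rate(depth, g_max=10.0, k=0.1):
        # light-limited coral growth (m/ka): maximum rate attenuated with water depth
        if depth < 0:
            return 0.0               # subaerial exposure: no growth
        return g_max * math.exp(-k * depth)

    def run(sea_level, dt=1.0, subsidence=0.25, elev0=-30.0):
        """sea_level: sea-level elevation (m) per time step (ka); returns a drill log."""
        elev, log = elev0, []
        for t, sl in enumerate(sea_level):
            depth = sl - elev
            # accretion cannot outgrow sea level (accommodation cap)
            acc = min(growth_rate(depth) * dt, max(depth, 0.0))
            elev += acc - subsidence * dt
            log.append((t, depth, acc))  # virtual log: age, paleo-water depth, accretion
        return log

    # e.g. 120 ka of constant sea level at 0 m on a slowly subsiding island:
    log = run([0.0] * 120)
    ```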

  1. Discoveries of isotopes by fission

    Indian Academy of Sciences (India)

    2015-08-28

    Of the about 3000 isotopes presently known, about 20% have been discovered in fission. The history of fission as it relates to the discovery of isotopes, as well as the various reaction mechanisms leading to isotope discoveries involving fission, are presented.

  2. Drug Discovery of Therapies for Duchenne Muscular Dystrophy.

    Science.gov (United States)

    Blat, Yuval; Blat, Shachar

    2015-12-01

    Duchenne muscular dystrophy (DMD) is a genetic, lethal, muscle disorder caused by the loss of the muscle protein, dystrophin, leading to progressive loss of muscle fibers and muscle weakness. Drug discovery efforts targeting DMD have used two main approaches: (1) the restoration of dystrophin expression or the expression of a compensatory protein, and (2) the mitigation of downstream pathological mechanisms, including dysregulated calcium homeostasis, oxidative stress, inflammation, fibrosis, and muscle ischemia. The aim of this review is to introduce the disease, its pathophysiology, and the available research tools to a drug discovery audience. This review will also detail the most promising therapies that are currently being tested in clinical trials or in advanced preclinical models. © 2015 Society for Laboratory Automation and Screening.

  3. Automation Hooks Architecture for Flexible Test Orchestration - Concept Development and Validation

    Science.gov (United States)

    Lansdowne, C. A.; Maclean, John R.; Winton, Chris; McCartney, Pat

    2011-01-01

    The Automation Hooks Architecture Trade Study for Flexible Test Orchestration sought a standardized, data-driven alternative to conventional automated test programming interfaces. The study recommended composing the interface using multicast DNS service discovery (mDNS/SD), Representational State Transfer (RESTful) web services, and Automatic Test Markup Language (ATML). We describe additional efforts to rapidly mature the Automation Hooks Architecture candidate interface definition by validating it in a broad spectrum of applications. These activities have allowed us to further refine our concepts and provide observations directed toward the objectives of economy, scalability, versatility, performance, severability, maintainability, scriptability and others.

  4. Automation of PCXMC and ImPACT for NASA Astronaut Medical Imaging Dose and Risk Tracking

    Science.gov (United States)

    Bahadori, Amir; Picco, Charles; Flores-McLaughlin, John; Shavers, Mark; Semones, Edward

    2011-01-01

    To automate astronaut organ and effective dose calculations from occupational X-ray and computed tomography (CT) examinations incorporating PCXMC and ImPACT tools and to estimate the associated lifetime cancer risk per the National Council on Radiation Protection & Measurements (NCRP) using MATLAB(R). Methods: NASA follows guidance from the NCRP on its operational radiation safety program for astronauts. NCRP Report 142 recommends that astronauts be informed of the cancer risks from reported exposures to ionizing radiation from medical imaging. MATLAB(R) code was written to retrieve exam parameters for medical imaging procedures from a NASA database, calculate associated dose and risk, and return results to the database, using the Microsoft .NET Framework. This code interfaces with the PCXMC executable and emulates the ImPACT Excel spreadsheet to calculate organ doses from X-rays and CTs, respectively, eliminating the need to utilize the PCXMC graphical user interface (except for a few special cases) and the ImPACT spreadsheet. Results: Using MATLAB(R) code to interface with PCXMC and replicate ImPACT dose calculation allowed for rapid evaluation of multiple medical imaging exams. The user inputs the exam parameter data into the database and runs the code. Based on the imaging modality and input parameters, the organ doses are calculated. Output files are created for record, and organ doses, effective dose, and cancer risks associated with each exam are written to the database. Annual and post-flight exposure reports, which are used by the flight surgeon to brief the astronaut, are generated from the database. Conclusions: Automating PCXMC and ImPACT for evaluation of NASA astronaut medical imaging radiation procedures allowed for a traceable and rapid method for tracking projected cancer risks associated with over 12,000 exposures. This code will be used to evaluate future medical radiation exposures, and can easily be modified to accommodate changes to the risk
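    The batch pattern described above (retrieve exam parameters, run an external dose calculator, post the results back) might be sketched as follows. The sketch is in Python rather than MATLAB, and the executable name, command-line flags, database schema and risk coefficient are all hypothetical placeholders, not PCXMC's or NASA's actual interfaces.

    ```python
    import sqlite3
    import subprocess

    def process_exams(db_path="exams.db"):
        con = sqlite3.connect(db_path)
        exams = con.execute("SELECT id, modality, params FROM exams WHERE dose IS NULL")
        for exam_id, modality, params in exams.fetchall():
            # invoke a stand-in external dose calculator (hypothetical CLI)
            out = subprocess.run(["dosecalc", "--modality", modality, "--params", params],
                                 capture_output=True, text=True, check=True)
            dose_msv = float(out.stdout.strip())       # parse the calculator's output
            risk = 0.00005 * dose_msv                  # placeholder risk coefficient
            con.execute("UPDATE exams SET dose=?, risk=? WHERE id=?",
                        (dose_msv, risk, exam_id))
        con.commit()
    ```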

  5. Automating the CMS DAQ

    Energy Technology Data Exchange (ETDEWEB)

    Bauer, G.; et al.

    2014-01-01

    We present the automation mechanisms that have been added to the Data Acquisition and Run Control systems of the Compact Muon Solenoid (CMS) experiment during Run 1 of the LHC, ranging from the automation of routine tasks to automatic error recovery and context-sensitive guidance to the operator. These mechanisms helped CMS to maintain a data taking efficiency above 90% and to even improve it to 95% towards the end of Run 1, despite an increase in the occurrence of single-event upsets in sub-detector electronics at high LHC luminosity.

  6. Control and automation systems

    International Nuclear Information System (INIS)

    Schmidt, R.; Zillich, H.

    1986-01-01

    A survey is given of the development of control and automation systems for energy applications. General remarks about control and automation schemes are followed by a description of modern process control systems and of process control as such. After discussing the particular process control requirements of nuclear power plants, the paper deals with the reliability and availability of process control systems and refers to computerized simulation processes. The subsequent paragraphs are dedicated to descriptions of the operating floor, ergonomic conditions, existing systems, flue gas desulfurization systems, and the electromagnetic influences on digital circuits, as well as light-wave applications. (HAG) [de]

  7. Automated nuclear materials accounting

    International Nuclear Information System (INIS)

    Pacak, P.; Moravec, J.

    1982-01-01

    An automated state system of accounting for nuclear materials data was established in Czechoslovakia in 1979. A file was compiled of 12 programs in the PL/1 language. The file is divided into four groups according to logical associations, namely programs for data input and checking, programs for handling the basic data file, programs for report outputs in the form of worksheets and magnetic tape records, and programs for book inventory listing, document inventory handling and materials balance listing. A similar automated system of nuclear fuel inventory for a light water reactor was introduced for internal purposes in the Institute of Nuclear Research (UJV). (H.S.)

  8. Computer modeling in free spreadsheets OpenOffice.Calc as one of the modern methods of teaching physics and mathematics cycle subjects in primary and secondary schools

    Directory of Open Access Journals (Sweden)

    Markushevich M.V.

    2016-10-01

    Full Text Available The article details the use of computer simulation, a modern teaching method, applied to modelling various kinds of mechanical motion of a material point in the free spreadsheet OpenOffice.org Calc when designing physics and computer science lessons in primary and secondary schools. Particular attention is paid to the integration of computer modelling with other modern teaching methods.
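    Such a spreadsheet model amounts to a finite-difference scheme with one row per time step. A sketch of the same scheme follows, with illustrative parameter values; each loop iteration corresponds to one row a pupil would fill and chart in Calc.

    ```python
    G = 9.81    # gravitational acceleration, m/s^2
    DT = 0.05   # time step, s (one spreadsheet row)

    def projectile(v0x=10.0, v0y=15.0):
        t, x, y, vy = 0.0, 0.0, 0.0, v0y
        rows = []
        while y >= 0.0:
            rows.append((t, x, y))   # the columns a pupil would chart in Calc
            x += v0x * DT            # x_{n+1} = x_n + vx * dt
            y += vy * DT             # y_{n+1} = y_n + vy_n * dt
            vy -= G * DT             # vy_{n+1} = vy_n - g * dt
            t += DT
        return rows

    for t, x, y in projectile()[::20]:
        print(f"t={t:5.2f}s  x={x:6.2f}m  y={y:6.2f}m")
    ```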

  9. Discovery Mondays: Surveyors' Tools

    CERN Multimedia

    2003-01-01

    Surveyors of all ages, have your rulers and compasses at the ready! This sixth edition of Discovery Monday is your chance to learn about the surveyor's tools - the state of the art in measuring instruments - and see for yourself how they work. With their usual daunting precision, the members of CERN's Surveying Group have prepared some demonstrations and exercises for you to try. Find out the techniques for ensuring accelerator alignment and learn about high-tech metrology systems such as deviation indicators, tracking lasers and total stations. The surveyors will show you how they precisely measure magnet positioning, with accuracy of a few thousandths of a millimetre. You can try your hand at precision measurement using different types of sensor and a modern-day version of the Romans' bubble level, accurate to within a thousandth of a millimetre. You will learn that photogrammetry techniques can transform even a simple digital camera into a remarkable measuring instrument. Finally, you will have a chance t...

  10. Altering user' acceptance of automation through prior automation exposure.

    Science.gov (United States)

    Bekier, Marek; Molesworth, Brett R C

    2017-06-01

    Air navigation service providers worldwide see increased use of automation as one solution to overcome the capacity constraints embedded in the present air traffic management (ATM) system. However, increased use of automation within any system is dependent on user acceptance. The present research sought to determine if the point at which an individual is no longer willing to accept or cooperate with automation can be manipulated. Forty participants underwent training on a computer-based air traffic control programme, followed by two ATM exercises (order counterbalanced), one with and one without the aid of automation. Results revealed that after exposure to a task with automation assistance, user acceptance of high(er) levels of automation (the 'tipping point') decreased, suggesting it is indeed possible to alter automation acceptance. Practitioner Summary: This paper investigates whether the point at which a user of automation rejects automation (i.e. the 'tipping point') is constant or can be manipulated. The results revealed that after exposure to a task with automation assistance, user acceptance of high(er) levels of automation decreased, suggesting it is possible to alter automation acceptance.

  11. Review of Literature for Inputs to the National Water Savings Model and Spreadsheet Tool-Commercial/Institutional

    Energy Technology Data Exchange (ETDEWEB)

    Whitehead, Camilla Dunham; Melody, Moya; Lutz, James

    2009-05-29

    Lawrence Berkeley National Laboratory (LBNL) is developing a computer model and spreadsheet tool for the United States Environmental Protection Agency (EPA) to help estimate the water savings attributable to their WaterSense program. WaterSense has developed a labeling program for three types of plumbing fixtures commonly used in commercial and institutional settings: flushometer valve toilets, urinals, and pre-rinse spray valves. This National Water Savings-Commercial/Institutional (NWS-CI) model is patterned after the National Water Savings-Residential model, which was completed in 2008. Calculating the quantity of water and money saved through the WaterSense labeling program requires three primary inputs: (1) the quantity of a given product in use; (2) the frequency with which units of the product are replaced or are installed in new construction; and (3) the number of times or the duration the product is used in various settings. To obtain the information required for developing the NWS-CI model, LBNL reviewed various resources pertaining to the three WaterSense-labeled commercial/institutional products. The data gathered ranged from the number of commercial buildings in the United States to numbers of employees in various sectors of the economy and plumbing codes for commercial buildings. This document summarizes information obtained about the three products' attributes, quantities, and use in commercial and institutional settings that is needed to estimate how much water EPA's WaterSense program saves.

  12. The spreadsheet as a tool for teaching set theory: Part 1 – an Excel lesson plan to help solve Sudokus

    Directory of Open Access Journals (Sweden)

    Stephen J Sugden

    2008-04-01

    Full Text Available This paper is intended to be used in the classroom. It describes essentially every step of the construction of an Excel model to help solve Sudoku puzzles. For puzzles up to moderate difficulty, it will usually solve the puzzle to completion. For the more difficult ones, it still provides a platform for decision support. The paper may be found useful for a lesson in which students who have some basic knowledge of Excel are learning some of its lesser-known features, such as conditional formatting. It also generates a useful tool for working with Sudoku puzzles, from the very easiest right up to the ones often labelled as fiendish or diabolical. Fundamental mathematical concepts such as set intersection, set partition and the reduction of a set partition to singletons are very graphically illustrated by the present Excel model for Sudoku. Prominent spreadsheet concepts presented here are conditional formatting, names, COUNTIF, and CONCATENATE. The paper is accompanied by a completed Excel model, constructed by using the steps described herein. No VBA code is employed; the whole thing is done with Excel formulas and conditional formatting.
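    A sketch of the underlying set logic (in Python, not the article's Excel formulas): each empty cell's candidate set is the digits 1-9 minus those used in its row, column and 3x3 box, and cells whose candidate sets reduce to singletons are filled repeatedly, mirroring the COUNTIF/conditional-formatting mechanics described above.

    ```python
    def candidates(grid, r, c):
        # set difference against the row, column, and 3x3 box (0 marks empty cells)
        row = set(grid[r])
        col = {grid[i][c] for i in range(9)}
        br, bc = 3 * (r // 3), 3 * (c // 3)
        box = {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
        return set(range(1, 10)) - row - col - box

    def singleton_pass(grid):
        # fill every cell whose candidate set has been reduced to a singleton
        changed = False
        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0:
                    cand = candidates(grid, r, c)
                    if len(cand) == 1:
                        grid[r][c] = cand.pop()
                        changed = True
        return changed

    def solve_easy(grid):
        while singleton_pass(grid):
            pass    # iterate until no forced cells remain; harder puzzles need search
        return grid
    ```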

  13. A Spreadsheet Algorithm for Determining the Economic Feasibility of Micro-CHP Systems in the Arkansas Manufacturing Sector

    Science.gov (United States)

    Lewallen, Ford

    Combined heat and power (CHP) systems are not new to the market. However, advances in technology, specifically microturbines, have presented new opportunities for installations of micro-CHP units (defined as 50 kWe to 300 kWe), specifically at small- to medium-sized industrial facilities. One pressing concern is whether or not an industrial plant has a high enough process thermal load requirement to fully utilize the energy output. This thesis will discuss simulations that were run on several actual electric and thermal load combinations, which correspond to types of manufacturing facilities commonly found in Arkansas. Analysis of the plant usage profiles will identify economically feasible scenarios for CHP production based on electric and thermal loads, electric demand and energy costs, the cost of natural gas, and CHP unit size and efficiencies. The spreadsheet algorithm will be written in a form that allows a user to select utility rate structures from major utility companies in Arkansas, or customize their own rate schedule, and enter their monthly energy usages and demands. The user can then compare and contrast the costs and savings of different CHP units, and make informed decisions on whether a company would benefit from installing a CHP system.
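    The monthly feasibility arithmetic such a spreadsheet encodes can be sketched as below; the rate structure, parameter values and the example unit are illustrative assumptions, not Arkansas tariffs or the thesis's actual algorithm.

    ```python
    def monthly_chp_savings(kwh_gen, therms_recovered,
                            elec_rate=0.09,       # $/kWh of displaced grid energy
                            demand_credit=200.0,  # $ of avoided demand charge
                            gas_rate=0.80,        # $/therm of natural gas
                            chp_fuel_therms=1.0,  # fuel burned by the CHP unit, therms
                            maint_per_kwh=0.015):
        displaced_elec = kwh_gen * elec_rate + demand_credit
        displaced_boiler_fuel = therms_recovered * gas_rate  # heat the boiler no longer makes
        fuel_cost = chp_fuel_therms * gas_rate
        maintenance = kwh_gen * maint_per_kwh
        return displaced_elec + displaced_boiler_fuel - fuel_cost - maintenance

    # e.g. a 65 kWe unit running 700 hours in a month at full load:
    kwh = 65 * 700
    print(f"net monthly savings: "
          f"${monthly_chp_savings(kwh, therms_recovered=1800, chp_fuel_therms=6100):,.0f}")
    ```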

  14. Perovskite classification: An Excel spreadsheet to determine and depict end-member proportions for the perovskite- and vapnikite-subgroups of the perovskite supergroup

    Science.gov (United States)

    Locock, Andrew J.; Mitchell, Roger H.

    2018-04-01

    Perovskite mineral oxides commonly exhibit extensive solid-solution, and are therefore classified on the basis of the proportions of their ideal end-members. A uniform sequence of calculation of the end-members is required if comparisons are to be made between different sets of analytical data. A Microsoft Excel spreadsheet has been programmed to assist with the classification and depiction of the minerals of the perovskite- and vapnikite-subgroups following the 2017 nomenclature of the perovskite supergroup recommended by the International Mineralogical Association (IMA). Compositional data for up to 36 elements are input into the spreadsheet as oxides in weight percent. For each analysis, the output includes the formula, the normalized proportions of 15 end-members, and the percentage of cations which cannot be assigned to those end-members. The data are automatically plotted onto the ternary and quaternary diagrams recommended by the IMA for depiction of perovskite compositions. Up to 200 analyses can be entered into the spreadsheet, which is accompanied by data calculated for 140 perovskite compositions compiled from the literature.
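    The first step of any such classification is converting oxide weight percent into normalized molar cation proportions, which can be sketched as follows; the oxide list and example composition are illustrative, and the end-member assignment itself (the spreadsheet's main task) is not shown.

    ```python
    OXIDES = {  # oxide: (molar mass g/mol, cations per formula unit of the oxide)
        "CaO":   (56.08, 1),
        "TiO2":  (79.87, 1),
        "Na2O":  (61.98, 2),
        "Nb2O5": (265.81, 2),
    }

    def cation_proportions(wt_percent):
        # moles of cations contributed by each oxide, then normalize to fractions
        moles = {ox: wt / OXIDES[ox][0] * OXIDES[ox][1] for ox, wt in wt_percent.items()}
        total = sum(moles.values())
        return {ox: m / total for ox, m in moles.items()}

    print(cation_proportions({"CaO": 40.0, "TiO2": 57.0, "Na2O": 1.5}))
    ```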

  15. Idaho: Library Automation and Connectivity.

    Science.gov (United States)

    Bolles, Charles

    1996-01-01

    Provides an overview of the development of cooperative library automation and connectivity in Idaho, including telecommunications capacity, library networks, the Internet, and the role of the state library. Information on six shared automation systems in Idaho is included. (LRW)

  16. Automated three-dimensional analysis of particle measurements using an optical profilometer and image analysis software.

    Science.gov (United States)

    Bullman, V

    2003-07-01

    The automated collection of topographic images from an optical profilometer coupled with existing image analysis software offers the unique ability to quantify three-dimensional particle morphology. Optional software available with most optical profilers permits automated collection of adjacent topographic images of particles dispersed onto a suitable substrate. Particles are recognized in the image as a set of contiguous pixels with grey-level values above the grey level assigned to the substrate, whereas particle height or thickness is represented in the numerical differences between these grey levels. These images are loaded into remote image analysis software where macros automate image processing, and then distinguish particles for feature analysis, including standard two-dimensional measurements (e.g. projected area, length, width, aspect ratios) and three-dimensional measurements (e.g. maximum height, mean height). Feature measurements from each calibrated image are automatically added to cumulative databases and exported to a commercial spreadsheet or statistical program for further data processing and presentation. An example is given that demonstrates the superiority of quantitative three-dimensional measurements by optical profilometry and image analysis in comparison with conventional two-dimensional measurements for the characterization of pharmaceutical powders with plate-like particles.
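    The measurement step can be sketched with standard tools: threshold the height map above the substrate grey level, label contiguous regions, and take area and height statistics per particle. The snippet below uses numpy/scipy on a synthetic height map; it is a sketch of the technique, not the commercial software's implementation.

    ```python
    import numpy as np
    from scipy import ndimage

    height = np.zeros((200, 200))
    height[50:80, 60:120] = 2.5           # synthetic plate-like particle, 2.5 um tall

    substrate_level = 0.1                 # grey level assigned to the substrate
    labels, n = ndimage.label(height > substrate_level)   # contiguous particles
    for i in range(1, n + 1):
        mask = labels == i
        area = mask.sum()                 # projected area, pixels
        max_h = height[mask].max()        # maximum height
        mean_h = height[mask].mean()      # mean height/thickness
        print(f"particle {i}: area={area}px max={max_h:.2f} mean={mean_h:.2f}")
    ```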

  17. Automated HAZOP revisited

    DEFF Research Database (Denmark)

    Taylor, J. R.

    2017-01-01

    Hazard and operability analysis (HAZOP) has developed from a tentative approach to hazard identification for process plants in the early 1970s to an almost universally accepted approach today, and a central technique of safety engineering. Techniques for automated HAZOP analysis were developed...

  18. Automated data model evaluation

    International Nuclear Information System (INIS)

    Kazi, Zoltan; Kazi, Ljubica; Radulovic, Biljana

    2012-01-01

    The modeling process is an essential phase within information systems development and implementation. This paper presents methods and techniques for the analysis and evaluation of data model correctness. Recent methodologies and development results regarding the automation of model correctness analysis, and its relations with ontology tools, are presented. Key words: Database modeling, Data model correctness, Evaluation

  19. Automated Vehicle Monitoring System

    OpenAIRE

    Wibowo, Agustinus Deddy Arief; Heriansyah, Rudi

    2014-01-01

    An automated vehicle monitoring system is proposed in this paper. The surveillance system is based on image processing techniques such as background subtraction, colour balancing, chain-code-based shape detection, and blob analysis. The proposed system detects any human head that appears at the side mirrors. The detected head is then tracked and recorded for further action.
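    The background-subtraction stage such a system builds on can be sketched with OpenCV; the video path, subtractor parameters and area threshold below are illustrative assumptions, not the paper's configuration.

    ```python
    import cv2

    cap = cv2.VideoCapture("side_mirror.mp4")   # hypothetical input video
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)          # foreground (moving-object) mask
        # blob extraction: outer contours of the foreground regions
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        blobs = [c for c in contours if cv2.contourArea(c) > 200]  # drop small noise
        print(f"{len(blobs)} candidate blob(s) in frame")
    cap.release()
    ```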

  20. Automated Accounting. Instructor Guide.

    Science.gov (United States)

    Moses, Duane R.

    This curriculum guide was developed to assist business instructors using Dac Easy Accounting College Edition Version 2.0 software in their accounting programs. The module consists of four units containing assignment sheets and job sheets designed to enable students to master competencies identified in the area of automated accounting. The first…

  1. Mechatronic Design Automation

    DEFF Research Database (Denmark)

    Fan, Zhun

    successfully design analogue filters, vibration absorbers, micro-electro-mechanical systems, and vehicle suspension systems, all in an automatic or semi-automatic way. It also investigates the very important issue of co-designing plant-structures and dynamic controllers in automated design of Mechatronic...

  2. Protokoller til Home Automation

    DEFF Research Database (Denmark)

    Kjær, Kristian Ellebæk

    2008-01-01

    computer that can switch between predefined settings. Sometimes the computer can be controlled remotely over the internet, so that the status of the home can be viewed from a computer, or perhaps even from a mobile phone. While the applications mentioned are classics within home automation, additional functionality has appeared...

  3. Automated Water Extraction Index

    DEFF Research Database (Denmark)

    Feyisa, Gudina Legese; Meilby, Henrik; Fensholt, Rasmus

    2014-01-01

    of various sorts of environmental noise and at the same time offers a stable threshold value. Thus we introduced a new Automated Water Extraction Index (AWEI) improving classification accuracy in areas that include shadow and dark surfaces that other classification methods often fail to classify correctly...

  4. Myths in test automation

    Directory of Open Access Journals (Sweden)

    Jazmine Francis

    2015-01-01

    Full Text Available Myths in the automation of software testing are a recurring topic of discussion in the software validation services industry. Probably the first thought that would occur to a knowledgeable reader is: why this old topic again? What is new to discuss? But, for the first time, everyone agrees that automation testing today is not what it used to be ten or fifteen years ago, because it has evolved in scope and magnitude. What began as simple linear scripts for web applications today has a complex architecture and hybrid frameworks to facilitate the testing of applications developed on various platforms and technologies. Undoubtedly automation has advanced, but so have the myths associated with it. The change in people's perspective and knowledge of automation has altered the terrain. This article reflects the points of view and experience of the author with regard to the transformation of the original myths into new versions, and how they are derived; it also provides his thoughts on the new generation of myths.

  5. Driver Psychology during Automated Platooning

    NARCIS (Netherlands)

    Heikoop, D.D.

    2017-01-01

    With the rapid increase in vehicle automation technology, the call for understanding how humans behave while driving in an automated vehicle becomes more urgent. Vehicles that have automated systems such as Lane Keeping Assist (LKA) or Adaptive Cruise Control (ACC) not only support drivers in their

  6. Supernovae Discovery Efficiency

    Science.gov (United States)

    John, Colin

    2018-01-01

    Abstract: We present supernova (SN) search efficiency measurements for recent Hubble Space Telescope (HST) surveys. Efficiency is a key component of any search and an important correction factor for SN rates. To achieve an accurate value for efficiency, many supernovae need to be discoverable in surveys. This cannot be achieved with real SN alone, due to their scarcity, so fake SN are planted. These fake supernovae, built with a goal of realism in mind, yield an understanding of efficiency based on brightness and on position relative to other celestial objects. To improve realism, we built a more accurate model of supernovae using a point-spread function. The next improvement to realism is planting these objects close to galaxies and across a range of parameters: brightness, magnitude, local galactic brightness and redshift. Once these are planted, a very realistic SN is visible and discoverable by the searcher. It is very important to find the factors that affect this discovery efficiency; exploring them yields a more accurate correction factor. Further inquiries into efficiency give us a better understanding of image processing, searching techniques and survey strategies, and result in an overall higher likelihood of finding these events in future surveys with the Hubble, James Webb, and WFIRST telescopes. After efficiency is measured and refined across many unique surveys, it factors into measurements of SN rates versus redshift. By comparing SN rates versus redshift against the star formation rate, we can test models to determine how long star systems take from inception to explosion (the delay time distribution). This delay time distribution is compared to SN progenitor models to get an accurate idea of what these stars were like before their deaths.

  7. Automated Diatom Analysis Applied to Traditional Light Microscopy: A Proof-of-Concept Study

    Science.gov (United States)

    Little, Z. H. L.; Bishop, I.; Spaulding, S. A.; Nelson, H.; Mahoney, C.

    2017-12-01

    Diatom identification and enumeration by high-resolution light microscopy is required for many areas of research and water quality assessment. Such analyses, however, are both expertise- and labor-intensive. These challenges motivate the need for an automated process to efficiently and accurately identify and enumerate diatoms. Improvements in particle analysis software have increased the likelihood that diatom enumeration can be automated. VisualSpreadsheet software provides a possible solution for automated particle analysis of high-resolution light microscope diatom images. We applied the software, independent of its complementary FlowCam hardware, to automated analysis of light microscope images containing diatoms. Through numerous trials, we arrived at threshold settings to correctly segment 67% of the total possible diatom valves and fragments from broad fields of view. (183 light microscope images containing 255 diatom particles were examined. Of the 255 diatom particles present, 216 diatom valves and fragments of valves were processed, with 170 properly analyzed and focused upon by the software.) Manual analysis of the images yielded 255 particles in 400 seconds, whereas the software yielded a total of 216 particles in 68 seconds, highlighting that the software has an approximately five-fold advantage in particle analysis time. As in past efforts, incomplete or incorrect recognition was found for images with multiple valves in contact or valves with little contrast. The software has the potential to be an effective tool in assisting taxonomists with diatom enumeration by completing a large portion of analyses. Benefits and limitations of the approach are presented to allow for the development of future work in image analysis and automated enumeration of traditional light microscope images containing diatoms.

  8. REdiii: a pipeline for automated structure solution.

    Science.gov (United States)

    Bohn, Markus Frederik; Schiffer, Celia A

    2015-05-01

    High-throughput crystallographic approaches require integrated software solutions to minimize the need for manual effort. REdiii is a system that allows fully automated crystallographic structure solution by integrating existing crystallographic software into an adaptive and partly autonomous workflow engine. The program can be initiated after collecting the first frame of diffraction data and is able to perform processing, molecular-replacement phasing, chain tracing, ligand fitting and refinement without further user intervention. Preset values for each software component allow efficient progress with high-quality data and known parameters. The adaptive workflow engine can determine whether some parameters require modifications and choose alternative software strategies in case the preconfigured solution is inadequate. This integrated pipeline is targeted at providing a comprehensive and efficient approach to screening for ligand-bound co-crystal structures while minimizing repetitiveness and allowing a high-throughput scientific discovery process.

  9. A novel in silico approach to drug discovery via computational intelligence.

    Science.gov (United States)

    Hecht, David; Fogel, Gary B

    2009-04-01

    A computational intelligence drug discovery platform is introduced as an innovative technology designed to accelerate high-throughput drug screening for generalized protein-targeted drug discovery. This technology results in collections of novel small molecule compounds that bind to protein targets as well as details on predicted binding modes and molecular interactions. The approach was tested on dihydrofolate reductase (DHFR) for novel antimalarial drug discovery; however, the methods developed can be applied broadly in early stage drug discovery and development. For this purpose, an initial fragment library was defined, and an automated fragment assembly algorithm was generated. These were combined with a computational intelligence screening tool for prescreening of compounds relative to DHFR inhibition. The entire method was assayed relative to spaces of known DHFR inhibitors and with chemical feasibility in mind, leading to experimental validation in future studies.

  10. Antibody informatics for drug discovery

    DEFF Research Database (Denmark)

    Shirai, Hiroki; Prades, Catherine; Vita, Randi

    2014-01-01

    to the antibody science in every project in antibody drug discovery. Recent experimental technologies allow for the rapid generation of large-scale data on antibody sequences, affinity, potency, structures, and biological functions; this should accelerate drug discovery research. Therefore, a robust bioinformatic infrastructure for these large data sets has become necessary. In this article, we first identify and discuss the typical obstacles faced during the antibody drug discovery process. We then summarize the current status of three sub-fields of antibody informatics as follows: (i) recent progress in technologies for antibody rational design using computational approaches to affinity and stability improvement, as well as ab-initio and homology-based antibody modeling; (ii) resources for antibody sequences, structures, and immune epitopes and open drug discovery resources for development of antibody drugs; and (iii...

  11. Discovery of the cadmium isotopes

    International Nuclear Information System (INIS)

    Amos, S.; Thoennessen, M.

    2010-01-01

    Thirty-seven cadmium isotopes have been observed so far and the discovery of these isotopes is discussed here. For each isotope a brief summary of the first refereed publication, including the production and identification method, is presented.

  12. Synthetic biology of antimicrobial discovery

    Science.gov (United States)

    Zakeri, Bijan; Lu, Timothy K.

    2012-01-01

    Antibiotic discovery has a storied history. From the discovery of penicillin by Sir Alexander Fleming to the relentless quest for antibiotics by Selman Waksman, the stories have become like folklore, used to inspire future generations of scientists. However, recent discovery pipelines have run dry at a time when multidrug resistant pathogens are on the rise. Nature has proven to be a valuable reservoir of antimicrobial agents, which are primarily produced by modularized biochemical pathways. Such modularization is well suited to remodeling by an interdisciplinary approach that spans science and engineering. Herein, we discuss the biological engineering of small molecules, peptides, and non-traditional antimicrobials and provide an overview of the growing applicability of synthetic biology to antimicrobials discovery. PMID:23654251

  13. Scientific discovery through weighted sampling

    NARCIS (Netherlands)

    E. Sidirourgos (Eleftherios); M.L. Kersten (Martin); P.A. Boncz (Peter)

    2013-01-01

    Scientific discovery has shifted from being an exercise of theory and computation, to become the exploration of an ocean of observational data. Scientists explore data originated from modern scientific instruments in order to discover

  14. Exosomes in urine biomarker discovery.

    Science.gov (United States)

    Huebner, Alyssa R; Somparn, Poorichaya; Benjachat, Thitima; Leelahavanichkul, Asada; Avihingsanon, Yingyos; Fenton, Robert A; Pisitkun, Trairak

    2015-01-01

    Nanovesicles present in urine, the so-called urinary exosomes, have been found to be secreted by every epithelial cell type lining the human urinary tract. Urinary exosomes are an appealing source for biomarker discovery, as they contain molecular constituents of their cell of origin, including proteins and genetic material, and they can be isolated in a non-invasive manner. Following the discovery of urinary exosomes in 2004, many studies have used urinary exosomes as a starting material to identify biomarkers in various renal, urogenital, and systemic diseases. Here, we describe the discovery of urinary exosomes and address issues in the collection, isolation, and normalization of urinary exosomes, as well as delineate the systems biology approach to biomarker discovery using urinary exosomes.

  15. Radioactivity. Centenary of radioactivity discovery

    International Nuclear Information System (INIS)

    Charpak, G.; Tubiana, M.; Bimbot, R.

    1997-01-01

    This small booklet was published on the occasion of the exhibitions celebrating the centenary of the discovery of radioactivity, which took place in various locations in France from 1996 to 1998. It recalls some basic knowledge concerning radioactivity and its applications: history of the discovery, atoms and isotopes, radiations, measurement of ionizing radiations, natural and artificial radioactivity, isotope dating and labelling, radiotherapy, nuclear power and reactors, fission and fusion, nuclear wastes, dosimetry, effects and radioprotection. (J.S.)

  16. Computational methods in drug discovery

    Directory of Open Access Journals (Sweden)

    Sumudu P. Leelananda

    2016-12-01

    The process of drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The Human Genome Project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power, have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein–ligand docking, pharmacophore modeling and QSAR techniques are reviewed.

  17. Knowledge Discovery in Data in Construction Projects

    Directory of Open Access Journals (Sweden)

    Szelka J.

    2016-06-01

    Decision-making processes, including those related to ill-structured problems, are of considerable significance in the area of construction projects. Computer-aided inference under such conditions requires the employment of specific (non-algorithmic) methods and tools, the best recognized and most successfully used in practice being expert systems. The knowledge indispensable for such systems to perform inference is most frequently acquired directly from experts (through a dialogue between a domain expert and a knowledge engineer) and from various source documents. Little is known, however, about the possibility of automating knowledge acquisition in this area, and as a result it is scarcely ever used in practice. It has to be noted that in numerous areas of management, more and more attention is paid to the issue of acquiring knowledge from available data. Various methods and tools for this purpose are known and successfully employed in the practice of decision support. The paper attempts to select methods for knowledge discovery in data and presents possible ways of representing the acquired knowledge, as well as sample tools (including programming tools) that allow for the use of this knowledge in the area under consideration.

  18. Faults Discovery By Using Mined Data

    Science.gov (United States)

    Lee, Charles

    2005-01-01

    Fault discovery in complex systems consists of model-based reasoning, fault tree analysis, rule-based inference methods, and other approaches. Model-based reasoning builds models for the systems either by mathematical formulation or by experimental modeling. Fault tree analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule-based inference builds the model based on expert knowledge. These models and methods have one thing in common: they presume some prior conditions. Complex systems often use fault trees to analyze faults. Fault diagnosis, when an error occurs, is performed by engineers and analysts through extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on the data fed back from the system, and decisions are made based on threshold values using fault trees. Since those decision-making tasks are safety critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real time and to capture the contents of fault trees as the initial state of the trees.
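
    As a generic illustration of the approach (not the paper's actual code), a decision tree can be trained on labeled telemetry so that threshold-style fault rules are learned from data rather than hand-built; the features and values below are invented.

    ```python
    # Hypothetical sketch: learn fault/no-fault rules from telemetry samples.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Toy telemetry: [pressure, temperature]; label 1 = fault, 0 = nominal.
    X = [[14.7, 21.0], [14.6, 22.5], [30.1, 21.3], [14.8, 95.0], [29.5, 90.2]]
    y = [0, 0, 1, 1, 1]

    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # The learned thresholds play the role of a hand-built fault tree.
    print(export_text(tree, feature_names=["pressure", "temperature"]))
    print(tree.predict([[15.0, 20.0]]))  # -> [0] (nominal)
    ```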

  19. High-throughput protein crystallography and drug discovery.

    Science.gov (United States)

    Tickle, Ian; Sharff, Andrew; Vinkovic, Mladen; Yon, Jeff; Jhoti, Harren

    2004-10-20

    Single crystal X-ray diffraction is the technique of choice for studying the interactions of small organic molecules with proteins by determining their three-dimensional structures; however, the requirement for highly purified protein and the lack of process automation have traditionally limited its use in this field. Despite these shortcomings, the use of crystal structures of therapeutically relevant drug targets in pharmaceutical research has increased significantly over the last decade. The application of structure-based drug design has resulted in several marketed drugs and is now an established discipline in most pharmaceutical companies. Furthermore, the recently published full genome sequences of Homo sapiens and a number of micro-organisms have provided a plethora of new potential drug targets that could be utilised in structure-based drug design programs. In order to take maximum advantage of this explosion of information, techniques have been developed to automate and speed up the various procedures required to obtain protein crystals of suitable quality, to collect and process the raw X-ray diffraction data into usable structural information, and to use three-dimensional protein structure as a basis for drug discovery and lead optimisation. This tutorial review covers the various technologies involved in the process pipeline for high-throughput protein crystallography as it is currently being applied to drug discovery. It is aimed at synthetic and computational chemists, as well as structural biologists, in both academia and industry, who are interested in structure-based drug design.

  20. Automated campaign system

    Science.gov (United States)

    Vondran, Gary; Chao, Hui; Lin, Xiaofan; Beyer, Dirk; Joshi, Parag; Atkins, Brian; Obrador, Pere

    2006-02-01

    Running a targeted campaign involves coordination and management across numerous organizations and complex process flows. Everything from market analytics on customer databases, acquiring content and images, composing the materials, meeting the sponsoring enterprise's brand standards, and driving through production and fulfillment, to evaluating results: all of these processes are currently performed by experienced, highly trained staff. Presented is a developed solution that not only brings together technologies that automate each process, but also automates the entire flow so that a novice user can easily run a successful campaign from their desktop. This paper presents the technologies, structure, and process flows used to bring this system together. Highlighted is how the complexity of running a targeted campaign is hidden from the user through technology, all while providing the benefits of a professionally managed campaign.

  1. Rapid automated nuclear chemistry

    International Nuclear Information System (INIS)

    Meyer, R.A.

    1979-01-01

    Rapid Automated Nuclear Chemistry (RANC) can be thought of as the Z-separation of Neutron-rich Isotopes by Automated Methods. The range of RANC studies of fission and its products is large. In a sense, the studies can be categorized into various energy ranges from the highest where the fission process and particle emission are considered, to low energies where nuclear dynamics are being explored. This paper presents a table which gives examples of current research using RANC on fission and fission products. The remainder of this text is divided into three parts. The first contains a discussion of the chemical methods available for the fission product elements, the second describes the major techniques, and in the last section, examples of recent results are discussed as illustrations of the use of RANC

  2. ATLAS Distributed Computing Automation

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Borrego, C; Campana, S; Di Girolamo, A; Elmsheuser, J; Hejbal, J; Kouba, T; Legger, F; Magradze, E; Medrano Llamas, R; Negri, G; Rinaldi, L; Sciacca, G; Serfon, C; Van Der Ster, D C

    2012-01-01

    The ATLAS Experiment benefits from computing resources distributed worldwide at more than 100 WLCG sites. The ATLAS Grid sites provide over 100k CPU job slots and over 100 PB of storage space on disk or tape. Monitoring the status of such a complex infrastructure is essential. The ATLAS Grid infrastructure is monitored 24/7 by two teams of shifters distributed world-wide, by the ATLAS Distributed Computing experts, and by site administrators. In this paper we summarize automation efforts performed within the ATLAS Distributed Computing team in order to reduce manpower costs and improve the reliability of the system. Different aspects of the automation process are described: from the ATLAS Grid site topology provided by the ATLAS Grid Information System, via automatic site testing by HammerCloud, to automatic exclusion from production or analysis activities.

  3. Automated Assembly Center (AAC)

    Science.gov (United States)

    Stauffer, Robert J.

    1993-01-01

    The objectives of this project are as follows: to integrate advanced assembly and assembly support technology under a comprehensive architecture; to implement automated assembly technologies in the production of high-visibility DOD weapon systems; and to document the improved cost, quality, and lead time. This will enhance the production of DOD weapon systems by utilizing the latest commercially available technologies combined into a flexible system that will be able to readily incorporate new technologies as they emerge. Automated assembly encompasses the following areas: product data, process planning, information management policies and framework, three schema architecture, open systems communications, intelligent robots, flexible multi-ability end effectors, knowledge-based/expert systems, intelligent workstations, intelligent sensor systems, and PDES/PDDI data standards.

  4. Automated drawing generation system

    International Nuclear Information System (INIS)

    Yoshinaga, Toshiaki; Kawahata, Junichi; Yoshida, Naoto; Ono, Satoru

    1991-01-01

    Since automated CAD drawing generation systems still require human intervention, improvements were focussed on the interactive processing section (data input and correcting operations), which necessitates a vast amount of work. As a result, human intervention was eliminated, achieving the original objective of a computerized system. This is the first step taken towards complete automation. The effects of development and commercialization of the system are as described below. (1) The interactive processing time required for generating drawings was reduced; it was determined that introduction of the CAD system has shortened the time required for generating drawings. (2) The difference in skills between workers preparing drawings has been eliminated and the quality of drawings has been made uniform. (3) The extent of knowledge and experience demanded of workers has been reduced. (author)

  5. Automated fingerprint identification system

    International Nuclear Information System (INIS)

    Bukhari, U.A.; Sheikh, N.M.; Khan, U.I.; Mahmood, N.; Aslam, M.

    2002-01-01

    In this paper we present selected stages of an automated fingerprint identification system. The software for the system is developed employing algorithms for two-tone conversion, thinning, feature extraction and matching. Taking FBI standards into account, it has been ensured that no details of the image are lost in the comparison process. We have deployed a general parallel thinning algorithm for specialized images like fingerprints and modified the original algorithm after a series of experiments, selecting the variant giving the best results. We also propose an application-based approach for designing automated fingerprint identification systems, keeping system requirements in view. We show that by using our system, the precision and efficiency of current fingerprint matching techniques are increased. (author)
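
    Two of the stages named above, two-tone conversion and thinning, can be sketched with standard image-processing primitives; this is a generic illustration, not the authors' modified parallel algorithm.

    ```python
    # Generic sketch of binarization ("two-tone conversion") and thinning,
    # two of the preprocessing stages named in the abstract.
    import numpy as np
    from skimage.filters import threshold_otsu
    from skimage.morphology import skeletonize

    def preprocess_fingerprint(gray):
        """gray: 2-D grayscale array. Returns a one-pixel-wide skeleton."""
        binary = gray < threshold_otsu(gray)   # ridges darker than background
        return skeletonize(binary)             # thin ridges to 1-px skeleton

    # Toy "image": a thick diagonal band standing in for a ridge.
    img = np.ones((32, 32))
    for i in range(32):
        img[i, max(0, i - 2):min(32, i + 3)] = 0.0
    print(preprocess_fingerprint(img).sum(), "skeleton pixels")
    ```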

  6. Automated breeder fuel fabrication

    International Nuclear Information System (INIS)

    Goldmann, L.H.; Frederickson, J.R.

    1983-01-01

    The objective of the Secure Automated Fabrication (SAF) Project is to develop remotely operated equipment for the processing and manufacturing of breeder reactor fuel pins. The SAF line will be installed in the Fuels and Materials Examination Facility (FMEF). The FMEF is presently under construction at the Department of Energy's (DOE) Hanford site near Richland, Washington, and is operated by the Westinghouse Hanford Company (WHC). The fabrication and support systems of the SAF line are designed for computer-controlled operation from a centralized control room. Remote and automated fuel fabrication operations will result in: reduced radiation exposure to workers; enhanced safeguards; improved product quality; near real-time accountability; and increased productivity. The present schedule calls for installation of SAF line equipment in the FMEF beginning in 1984, with qualifying runs starting in 1986 and production commencing in 1987. 5 figures

  7. Universal Knowledge Discovery from Big Data: Towards a Paradigm Shift from 'Knowledge Discovery' to 'Wisdom Discovery'

    OpenAIRE

    Shen, Bin

    2014-01-01

    Many people hold a vision that big data will provide big insights and have a big impact in the future, and big-data-assisted scientific discovery is seen as an emerging and promising scientific paradigm. However, how to turn big data into deep insights with tremendous value still remains obscure. To meet the challenge, universal knowledge discovery from big data (UKD) is proposed. The new concept focuses on discovering universal knowledge, which exists in the statistical analyses of big data ...

  8. Automated Instrumentation System Verification.

    Science.gov (United States)

    1983-04-01

    [Only fragments of the scanned abstract (AFWL-TR-82-137) are legible:] ... automatic measurement should arise. ... The necessity to measure data ... measurement (Ref. 8). Finally, when the necessity for automation was recognized and funds were provided, the effort described in this report was started.

  9. Cavendish Balance Automation

    Science.gov (United States)

    Thompson, Bryan

    2000-01-01

    This is the final report for a project carried out to modify a manual commercial Cavendish Balance for automated use in a cryostat. The scope of this project was to modify an off-the-shelf manually operated Cavendish Balance to allow for automated operation for periods of hours or days in a cryostat. The purpose of this modification was to allow the balance to be used in the study of the effects of superconducting materials on the local gravitational field strength, to determine if the strength of gravitational fields can be reduced. A Cavendish Balance was chosen because it is a fairly simple piece of equipment for measuring the gravitational constant, one of the least accurately known and least understood physical constants. The principal activities that occurred under this purchase order were: (1) All the components necessary to hold and automate the Cavendish Balance in a cryostat were designed. Engineering drawings were made of custom parts to be fabricated; other off-the-shelf parts were procured. (2) Software was written in LabView to control the automation process via a stepper motor controller and stepper motor, and to collect data from the balance during testing. (3) Software was written to take the data collected from the Cavendish Balance and reduce it to give a value for the gravitational constant. (4) The components of the system were assembled and fitted to a cryostat. Also, the LabView hardware including the control computer, stepper motor driver, data collection boards, and necessary cabling was assembled. (5) The system was operated for a number of periods, and the data collected were reduced to give an average value for the gravitational constant.

  10. Autonomy, Automation, and Systems

    Science.gov (United States)

    Turner, Philip R.

    1987-02-01

    Aerospace industry interest in autonomy and automation, given fresh impetus by the national goal of establishing a Space Station, is becoming a major item of research and technology development. The promise of new technology arising from research in Artificial Intelligence (AI) has focused much attention on its potential in autonomy and automation. These technologies can improve performance in autonomous control functions that involve planning, scheduling, and fault diagnosis of complex systems. There are, however, many aspects of system and subsystem design in an autonomous system that impact AI applications, but do not directly involve AI technology. Development of a system control architecture, establishment of an operating system within the design, providing command and sensory data collection features appropriate to automated operation, and the use of design analysis tools to support system engineering are specific examples of major design issues. Aspects such as these must also receive attention and technology development support if we are to implement complex autonomous systems within the realistic limitations of mass, power, cost, and available flight-qualified technology that are all-important to a flight project.

  11. Automation in biological crystallization.

    Science.gov (United States)

    Stewart, Patrick Shaw; Mueller-Dieckmann, Jochen

    2014-06-01

    Crystallization remains the bottleneck in the crystallographic process leading from a gene to a three-dimensional model of the encoded protein or RNA. Automation of the individual steps of a crystallization experiment, from the preparation of crystallization cocktails for initial or optimization screens to the imaging of the experiments, has been the response to address this issue. Today, large high-throughput crystallization facilities, many of them open to the general user community, are capable of setting up thousands of crystallization trials per day. It is thus possible to test multiple constructs of each target for their ability to form crystals on a production-line basis. This has improved success rates and made crystallization much more convenient. High-throughput crystallization, however, cannot relieve users of the task of producing samples of high quality. Moreover, the time gained from eliminating manual preparations must now be invested in the careful evaluation of the increased number of experiments. The latter requires a sophisticated data and laboratory information-management system. A review of the current state of automation at the individual steps of crystallization with specific attention to the automation of optimization is given.

  12. Coupling Visualization and Data Analysis for Knowledge Discovery from Multi-dimensional Scientific Data

    International Nuclear Information System (INIS)

    Rubel, Oliver; Ahern, Sean; Bethel, E. Wes; Biggin, Mark D.; Childs, Hank; Cormier-Michel, Estelle; DePace, Angela; Eisen, Michael B.; Fowlkes, Charless C.; Geddes, Cameron G.R.; Hagen, Hans; Hamann, Bernd; Huang, Min-Yu; Keranen, Soile V.E.; Knowles, David W.; Hendriks, Chris L. Luengo; Malik, Jitendra; Meredith, Jeremy; Messmer, Peter; Prabhat; Ushizima, Daniela; Weber, Gunther H.; Wu, Kesheng

    2010-01-01

    Knowledge discovery from large and complex scientific data is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for effective data analysis and data exploration methods and tools. The combination and close integration of methods from scientific visualization, information visualization, automated data analysis, and other enabling technologies (such as efficient data management) supports knowledge discovery from multi-dimensional scientific data. This paper surveys two distinct applications in developmental biology and accelerator physics, illustrating the effectiveness of the described approach.

  13. New Generation Discovery: A Systematic View for Its Development, Issues and Future

    KAUST Repository

    Yu, Yi

    2012-11-01

    Collecting, storing, discovering, and locating are integral parts of the composition of the library. To fully utilize the library and achieve its ultimate value, the construction and production of discovery has always been a central part of the library's practice and identity. That is why new generation discovery (also called next-generation discovery) has had such a striking effect since it came into the library automation arena. However, when we talk about new generation discovery in the library domain, we should see it in the entirety of the library, as one of its organic parts, and consider its progress along with the evolution of the whole library world. We should have a deeper understanding of its relationship and interaction with the internet, the rapidly changing digital environment, and the elements and chain of library services. To address the above issues, this paper reviews the different versions of the definition of new generation discovery, combined with our own understanding. The paper also gives our own description of its properties and characteristics. The paper points out the challenges faced by discovery applications, which extend beyond the technology domain to commercial interests and business strategy, and discusses how libraries and library professionals deal with those challenges. Finally, the paper elaborates on the promise brought by new discovery development and what the next explorations for its future might be.

  14. The Europa Ocean Discovery mission

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, B.C. [Los Alamos National Lab., NM (United States)]; Chyba, C.F. [Univ. of Arizona, Tucson, AZ (United States)]; Abshire, J.B. [National Aeronautics and Space Administration, Greenbelt, MD (United States). Goddard Space Flight Center] [and others]

    1997-06-01

    Since it was first proposed that tidal heating of Europa by Jupiter might lead to liquid water oceans below Europa's ice cover, there has been speculation over the possible exobiological implications of such an ocean. Liquid water is the essential ingredient for life as it is known, and the existence of a second water ocean in the Solar System would be of paramount importance for seeking the origin and existence of life beyond Earth. The authors present here a Discovery-class mission concept (Europa Ocean Discovery) to determine the existence of a liquid water ocean on Europa and to characterize Europa's surface structure. The technical goal of the Europa Ocean Discovery mission is to study Europa with an orbiting spacecraft. This goal is challenging but entirely feasible within the Discovery envelope. There are four key challenges: entering Europan orbit, generating power, surviving long enough in the radiation environment to return valuable science, and completing the mission within the Discovery program's launch vehicle and budget constraints. The authors will present here a viable mission that meets these challenges.

  15. Automation, Performance and International Competition

    DEFF Research Database (Denmark)

    Kromann, Lene; Sørensen, Anders

    This paper presents new evidence on trade‐induced automation in manufacturing firms using unique data combining a retrospective survey that we have assembled with register data for 2005‐2010. In particular, we establish a causal effect where firms that have specialized in product types for which...... the Chinese exports to the world market has risen sharply invest more in automated capital compared to firms that have specialized in other product types. We also study the relationship between automation and firm performance and find that firms with high increases in scale and scope of automation have faster...... productivity growth than other firms. Moreover, automation improves the efficiency of all stages of the production process by reducing setup time, run time, and inspection time and increasing uptime and quantity produced per worker. The efficiency improvement varies by type of automation....

  16. Label-free drug discovery

    Directory of Open Access Journals (Sweden)

    Ye eFang

    2014-03-01

    Current drug discovery is dominated by label-dependent molecular approaches, which screen drugs in the context of a predefined and target-based hypothesis in vitro. Given that target-based discovery has not transformed the industry, phenotypic screening, which identifies drugs based on a specific phenotype of cells, tissues, or animals, has gained renewed interest. However, owing to the intrinsic complexity of drug-target interactions, there is often a significant gap between the phenotype screened and the ultimate molecular mechanism of action sought. This paper presents a label-free strategy for early drug discovery. This strategy combines label-free cell phenotypic profiling with computational approaches, and holds promise to bridge the gap by offering a kinetic and holistic representation of the functional consequences of drugs in disease-relevant cells that is amenable to mechanistic deconvolution.

  17. Deep Learning in Drug Discovery.

    Science.gov (United States)

    Gawehn, Erik; Hiss, Jan A; Schneider, Gisbert

    2016-01-01

    Artificial neural networks had their first heyday in molecular informatics and drug discovery approximately two decades ago. Currently, we are witnessing renewed interest in adapting advanced neural network architectures for pharmaceutical research by borrowing from the field of "deep learning". Compared with some of the other life sciences, their application in drug discovery is still limited. Here, we provide an overview of this emerging field of molecular informatics, present the basic concepts of prominent deep learning methods and offer motivation to explore these techniques for their usefulness in computer-assisted drug discovery and design. We specifically emphasize deep neural networks, restricted Boltzmann machine networks and convolutional networks. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Bioinformatics in translational drug discovery.

    Science.gov (United States)

    Wooller, Sarah K; Benstead-Hume, Graeme; Chen, Xiangrong; Ali, Yusuf; Pearl, Frances M G

    2017-08-31

    Bioinformatics approaches are becoming ever more essential in translational drug discovery both in academia and within the pharmaceutical industry. Computational exploitation of the increasing volumes of data generated during all phases of drug discovery is enabling key challenges of the process to be addressed. Here, we highlight some of the areas in which bioinformatics resources and methods are being developed to support the drug discovery pipeline. These include the creation of large data warehouses, bioinformatics algorithms to analyse 'big data' that identify novel drug targets and/or biomarkers, programs to assess the tractability of targets, and prediction of repositioning opportunities that use licensed drugs to treat additional indications. © 2017 The Author(s).

  19. In defence of discovery learning.

    Science.gov (United States)

    Vereijken, B; Whiting, H T

    1990-06-01

    The present paper discusses the influence of different training methods--i.e., knowledge of results, preferred frequency, and the availability of a model--on the learning of a complex motor skill, in this case the learning of slalom ski-type movements on a ski-simulator. Results of three experiments performed on this apparatus showed that, although the training methods used influence the course of learning, none of the methods used was actually superior to discovery learning. It is suggested that discovery learning forces the learner to explore the dynamics of the system in which he or she operates, in an iterative way. Possibilities for cooperative working between prescription and discovery learning are discussed.

  20. Optical imaging for the new grammar of drug discovery.

    Science.gov (United States)

    Krucker, Thomas; Sandanaraj, Britto S

    2011-11-28

    Optical technologies used in biomedical research have undergone tremendous development in the last decade and enabled important insight into biochemical, cellular and physiological phenomena at the microscopic and macroscopic level. Historically in drug discovery, to increase throughput in screening, or increase efficiency through automation of image acquisition and analysis in pathology, efforts in imaging were focused on the reengineering of established microscopy solutions. However, with the emergence of the new grammar for drug discovery, other requirements and expectations have created unique opportunities for optical imaging. The new grammar of drug discovery provides rules for translating the wealth of genomic and proteomic information into targeted medicines with a focus on complex interactions of proteins. This paradigm shift requires highly specific and quantitative imaging at the molecular level with tools that can be used in cellular assays, animals and finally translated into patients. The development of fluorescent targeted and activatable 'smart' probes, fluorescent proteins and new reporter gene systems as functional and dynamic markers of molecular events in vitro and in vivo is therefore playing a pivotal role. An enabling optical imaging platform will combine optical hardware refinement with a strong emphasis on creating and validating highly specific chemical and biological tools.

  1. An Approach to Office Automation

    OpenAIRE

    Ischenko, A.N.; Tumeo, M.A.

    1983-01-01

    In recent years, the increasing scale of production and degree of specialization within firms has led to a significant growth in the amount of information needed for their successful management. As a result, the use of computer systems (office automation) has become increasingly common. However, no manuals or set automation procedures exist to help organizations design and implement an efficient and effective office automation system. The goals of this paper are to outline some important...

  2. Embedded system for building automation

    OpenAIRE

    Rolih, Andrej

    2014-01-01

    Home automation is a fast developing field of computer science and electronics. Companies offer many different products for home automation, ranging anywhere from complete systems for building management and control to simple smart lights that can be connected to the internet. These products offer the user greater living comfort and lower expenses by reducing energy usage. This thesis shows the development of a simple home automation system that focuses mainly on the enhance...

  3. DiseasePlan - a spreadsheet application for training people to assess disease severity and to assist with standard area diagram development

    Directory of Open Access Journals (Sweden)

    Marcos Robson Sachet

    ABSTRACT: Several computer applications have been proposed for the visual assessment of plant disease severity; however, they have some restrictions, such as not permitting the user to make modifications. Therefore, a VBA-Excel application was developed as a means to train people to estimate disease severity and to validate standard area diagrams through images and disease severity values (percentage, rating or score) inserted into a database. The main performance statistics are displayed on screen and recorded in the spreadsheet. Finally, the authors hope to receive evaluations and feedback regarding this application.
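
    The abstract does not name the performance statistics the application reports; one statistic widely used when comparing severity estimates against actual values is Lin's concordance correlation coefficient, sketched here as a hypothetical example.

    ```python
    # Lin's concordance correlation coefficient (CCC), a statistic often used
    # to compare estimated vs. actual disease severity. Whether DiseasePlan
    # reports exactly this statistic is an assumption for illustration.
    import numpy as np

    def concordance_ccc(actual, estimated):
        a = np.asarray(actual, float)
        e = np.asarray(estimated, float)
        cov = np.mean((a - a.mean()) * (e - e.mean()))
        return 2 * cov / (a.var() + e.var() + (a.mean() - e.mean()) ** 2)

    actual    = [5, 10, 25, 50, 75]      # true severity (%)
    estimated = [8, 12, 20, 55, 70]      # a rater's visual estimates (%)
    print(round(concordance_ccc(actual, estimated), 3))   # -> 0.923
    ```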

  4. CREST Cost of Renewable Energy Spreadsheet Tool: A Model for Developing Cost-based Incentives in the United States. User Manual Version 1

    Energy Technology Data Exchange (ETDEWEB)

    Gifford, Jason S. [Sustainable Energy Advantage, LLC, Framingham, MA (United States); Grace, Robert C. [Sustainable Energy Advantage, LLC, Framingham, MA (United States)

    2011-03-01

    This user manual helps model users understand how to use the CREST model to support renewable energy incentives, FITs, and other renewable energy rate-setting processes. It reviews the spreadsheet tool, including its layout and conventions, offering context on how and why it was created. It also provides instructions on how to populate the model with inputs that are appropriate for a specific jurisdiction's policymaking objectives and context. And, it describes the results and outlines how these results may inform decisions about long-term renewable energy support programs.

  5. Advancements in Aptamer Discovery Technologies.

    Science.gov (United States)

    Gotrik, Michael R; Feagin, Trevor A; Csordas, Andrew T; Nakamoto, Margaret A; Soh, H Tom

    2016-09-20

    Affinity reagents that specifically bind to their target molecules are invaluable tools in nearly every field of modern biomedicine. Nucleic acid-based aptamers offer many advantages in this domain, because they are chemically synthesized, stable, and economical. Despite these compelling features, aptamers are currently not widely used in comparison to antibodies. This is primarily because conventional aptamer-discovery techniques such as SELEX are time-consuming and labor-intensive and often fail to produce aptamers with comparable binding performance to antibodies. This Account describes a body of work from our laboratory in developing advanced methods for consistently producing high-performance aptamers with higher efficiency, fewer resources, and, most importantly, a greater probability of success. We describe our efforts in systematically transforming each major step of the aptamer discovery process: selection, analysis, and characterization. To improve selection, we have developed microfluidic devices (M-SELEX) that enable discovery of high-affinity aptamers after a minimal number of selection rounds by precisely controlling the target concentration and washing stringency. In terms of improving aptamer pool analysis, our group was the first to use high-throughput sequencing (HTS) for the discovery of new aptamers. We showed that tracking the enrichment trajectory of individual aptamer sequences enables the identification of high-performing aptamers without requiring full convergence of the selected aptamer pool. HTS is now widely used for aptamer discovery, and open-source software has become available to facilitate analysis. To improve binding characterization, we used HTS data to design custom aptamer arrays to measure the affinity and specificity of up to ∼10^4 DNA aptamers in parallel as a means to rapidly discover high-quality aptamers. Most recently, our efforts have culminated in the invention of the "particle display" (PD) screening system, which

  6. Estimating Policy-Driven Greenhouse Gas Emissions Trajectories in California: The California Greenhouse Gas Inventory Spreadsheet (GHGIS) Model

    Energy Technology Data Exchange (ETDEWEB)

    Greenblatt, Jeffery B.

    2013-10-10

    A California Greenhouse Gas Inventory Spreadsheet (GHGIS) model was developed to explore the impact of combinations of state policies on state greenhouse gas (GHG) and regional criteria pollutant emissions. The model included representations of all GHG-emitting sectors of the California economy (including those outside the energy sector, such as high global warming potential gases, waste treatment, agriculture and forestry) in varying degrees of detail, and was carefully calibrated using available data and projections from multiple state agencies and other sources. Starting from basic drivers such as population, numbers of households, gross state product, numbers of vehicles, etc., the model calculated energy demands by type (various types of liquid and gaseous hydrocarbon fuels, electricity and hydrogen), and finally calculated emissions of GHGs and three criteria pollutants: reactive organic gases (ROG), nitrogen oxides (NOx), and fine (2.5 µm) particulate matter (PM2.5). Calculations were generally statewide, but in some sectors, criteria pollutants were also calculated for two regional air basins: the South Coast Air Basin (SCAB) and the San Joaquin Valley (SJV). Three scenarios were developed that attempt to model: (1) all committed policies, (2) additional, uncommitted policy targets and (3) potential technology and market futures. Each scenario received extensive input from state energy planning agencies, in particular the California Air Resources Board. Results indicate that all three scenarios are able to meet the 2020 statewide GHG targets, and by 2030, statewide GHG emissions range from between 208 and 396 MtCO2/yr. However, none of the scenarios are able to meet the 2050 GHG target of 85 MtCO2/yr, with emissions ranging from 188 to 444 MtCO2/yr, so additional policies will need to be developed for California to meet this stringent future target. A full sensitivity study of major scenario assumptions was also performed. In terms of criteria pollutants
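
    To make the driver-to-emissions accounting concrete, here is a minimal sketch of the kind of calculation chain the abstract describes; the energy intensity, fuel shares, and emission factors below are illustrative stand-ins, not GHGIS values.

    ```python
    # Toy inventory calculation in the GHGIS style: an activity driver is
    # converted to energy demand and then to emissions via emission factors.
    # All numbers below are illustrative, not California data.

    def sector_emissions(driver, energy_intensity, fuel_shares, emission_factors):
        """driver: activity level (e.g., number of households)
        energy_intensity: energy per unit driver (GJ/unit/yr)
        fuel_shares: fraction of demand met by each fuel
        emission_factors: tCO2 per GJ for each fuel"""
        demand = driver * energy_intensity
        return sum(demand * share * emission_factors[fuel]
                   for fuel, share in fuel_shares.items())

    factors = {"natural_gas": 0.0561, "electricity": 0.03}  # tCO2/GJ (illustrative)
    households = 13_000_000
    tco2 = sector_emissions(households, 35.0,
                            {"natural_gas": 0.6, "electricity": 0.4}, factors)
    print(f"{tco2 / 1e6:.1f} MtCO2/yr")   # -> 20.8 MtCO2/yr
    ```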

  7. World-wide distribution automation systems

    International Nuclear Information System (INIS)

    Devaney, T.M.

    1994-01-01

    A worldwide power distribution automation system is outlined. Distribution automation is defined and the status of utility automation is discussed. Other topics discussed include a distribution management system, substation feeder, and customer functions, potential benefits, automation costs, planning and engineering considerations, automation trends, databases, system operation, computer modeling of system, and distribution management systems

  8. Contaminant analysis automation, an overview

    International Nuclear Information System (INIS)

    Hollen, R.; Ramos, O. Jr.

    1996-01-01

    To meet the environmental restoration and waste minimization goals of government and industry, several government laboratories, universities, and private companies have formed the Contaminant Analysis Automation (CAA) team. The goal of this consortium is to design and fabricate robotics systems that standardize and automate the hardware and software of the most common environmental chemical methods. In essence, the CAA team takes conventional, regulatory-approved (EPA Methods) chemical analysis processes and automates them. The automation consists of standard laboratory modules (SLMs) that perform the work in a much more efficient, accurate, and cost-effective manner

  9. Financiering met Excel spreadsheets

    NARCIS (Netherlands)

    van der Goot, T.

    2010-01-01

    This book provides a summary of the most important financing topics. It covers, for example, return and risk, the present value of cash flows, leverage, dividend policy, and options.

  10. Optimization Modeling with Spreadsheets

    CERN Document Server

    Baker, Kenneth R

    2011-01-01

    This introductory book on optimization (mathematical programming) includes coverage on linear programming, nonlinear programming, integer programming and heuristic programming; as well as an emphasis on model building using Excel and Solver.  The emphasis on model building (rather than algorithms) is one of the features that makes this book distinctive. Most books devote more space to algorithmic details than to formulation principles. These days, however, it is not necessary to know a great deal about algorithms in order to apply optimization tools, especially when relying on the sp
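
    As an illustration of the kind of linear program such a book builds in Excel with Solver, here is an analogous model expressed with scipy; this is an assumption for illustration, since the book itself works in Excel.

    ```python
    # A small product-mix linear program of the kind typically modeled with
    # Solver: maximize 3x + 5y subject to resource limits. scipy minimizes,
    # so the objective is negated.
    from scipy.optimize import linprog

    c = [-3, -5]                      # maximize 3x + 5y
    A_ub = [[1, 0], [0, 2], [3, 2]]   # resource usage per unit of x, y
    b_ub = [4, 12, 18]                # resource availability
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None)], method="highs")
    print(res.x, -res.fun)            # -> [2. 6.] 36.0
    ```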

  11. Beyond the Spreadsheet

    DEFF Research Database (Denmark)

    Tell, Paolo; Cholewa, Jacob; Kuhrmann, Marco

    2016-01-01

    Objective: Even though a number of tools are reported to be used by researchers undertaking systematic reviews, important shortcomings are still reported, revealing how such solutions are unable to satisfy current needs. Method: Two researchers independently provided a competing design for a tool supp...

  12. Fossil power plant automation

    International Nuclear Information System (INIS)

    Divakaruni, S.M.; Touchton, G.

    1991-01-01

    This paper elaborates on issues facing the utilities industry and seeks to address how new computer-based control and automation technologies, resulting from recent microprocessor evolution, can improve fossil plant operations and maintenance. This in turn can assist utilities to emerge stronger from the challenges ahead. Many presentations at the first ISA/EPRI co-sponsored conference are targeted towards improving the use of computer and control systems in fossil and nuclear power plants, and we believe this to be the right forum to share our ideas.

  13. Statistical data analytics foundations for data mining, informatics, and knowledge discovery

    CERN Document Server

    Piegorsch, Walter W

    2015-01-01

    A comprehensive introduction to statistical methods for data mining and knowledge discovery. Applications of data mining and 'big data' increasingly take center stage in our modern, knowledge-driven society, supported by advances in computing power, automated data acquisition, social media development and interactive, linkable internet software. This book presents a coherent, technical introduction to modern statistical learning and analytics, starting from the core foundations of statistics and probability. It includes an overview of probability and statistical distributions, basic

  14. MrBUMP: an automated pipeline for molecular replacement.

    Science.gov (United States)

    Keegan, Ronan M; Winn, Martyn D

    2008-01-01

    A novel automation pipeline for macromolecular structure solution by molecular replacement is described. There is a special emphasis on the discovery and preparation of a large number of search models, all of which can be passed to the core molecular-replacement programs. For routine molecular-replacement problems, the pipeline automates what a crystallographer might do and its value is simply one of convenience. For more difficult cases, the pipeline aims to discover the particular template structure and model edits required to produce a viable search model and may succeed in finding an efficacious combination that would be missed otherwise. An overview of MrBUMP is given and some recent additions to its functionality are highlighted.

  15. Automating expert role to determine design concept in Kansei Engineering

    Science.gov (United States)

    Lokman, Anitawati Mohd; Haron, Mohammad Bakri Che; Abidin, Siti Zaleha Zainal; Khalid, Noor Elaiza Abd

    2016-02-01

    Affect has become imperative in product quality. In the field of affective design, Kansei Engineering (KE) has been recognized as a technology that enables the discovery of consumers' emotions and the formulation of guidelines for designing products that win consumers in the competitive market. Albeit a powerful technology, there is no rule of thumb for its analysis and interpretation process. KE expertise is required to determine sets of related Kansei and the significant concept of emotion. Many research endeavors are handicapped by the limited number of available and accessible KE experts. This work simulates the role of experts with the Natphoric algorithm, thus providing a sound solution to the complexity and flexibility of KE. The algorithm is designed to learn the process by implementing training datasets taken from previous KE research works. A framework for automated KE is then designed to realize the development of an automated KE system. A comparative analysis is performed to determine the feasibility of the developed prototype to automate the process. The results show that the significant Kansei determined by the manual KE implementation and by the automated process are highly similar. KE research advocates will benefit from this system to automatically determine significant design concepts.

  16. Automated sampling and data processing derived from biomimetic membranes

    International Nuclear Information System (INIS)

    Perry, M; Vissing, T; Hansen, J S; Nielsen, C H; Boesen, T P; Emneus, J

    2009-01-01

    Recent advances in biomimetic membrane systems have resulted in an increase in membrane lifetimes from hours to days and months. Long-lived membrane systems demand the development of both new automated monitoring equipment capable of measuring electrophysiological membrane characteristics and new data processing software to analyze and organize the large amounts of data generated. In this work, we developed an automated instrumental voltage clamp solution based on a custom-designed software controller application (the WaveManager), which enables automated on-line voltage clamp data acquisition applicable to long-time series experiments. We designed another software program for off-line data processing. The automation of the on-line voltage clamp data acquisition and off-line processing was furthermore integrated with a searchable database (DiscoverySheet(TM)) for efficient data management. The combined solution provides a cost efficient and fast way to acquire, process and administrate large amounts of voltage clamp data that may be too laborious and time consuming to handle manually. (communication)

  17. An automated graphics tool for comparative genomics: the Coulson plot generator.

    Science.gov (United States)

    Field, Helen I; Coulson, Richard M R; Field, Mark C

    2013-04-27

    Comparative analysis is an essential component of biology. When applied to genomics, for example, analysis may require comparisons between the predicted presence and absence of genes in a group of genomes under consideration. Frequently, genes can be grouped into small categories based on functional criteria, for example membership of a multimeric complex, participation in a metabolic or signaling pathway, or shared sequence features and/or paralogy. These patterns of retention and loss are highly informative for the prediction of function, and hence possible biological context, and can provide great insights into the evolutionary history of cellular functions. However, representation of such information in a standard spreadsheet is a poor visual means from which to extract patterns within a dataset. We devised the Coulson plot, a new graphical representation that exploits a matrix of pie charts to display comparative genomics data. Each pie is used to describe a complex or process from a separate taxon, and is divided into sectors corresponding to the number of proteins (subunits) in the complex/process. The predicted presence or absence of proteins in each complex is delineated by occupancy of a given sector; this format is visually highly accessible and makes pattern recognition rapid and reliable. A key to the identity of each subunit, plus hierarchical naming of taxa and coloring, are included. A Java-based application, the Coulson plot generator (CPG), automates graphic production, taking a tab- or comma-delimited text file as input and generating an editable portable document format or SVG file. CPG software may be used to rapidly convert spreadsheet data to a graphical matrix pie chart format. The representation essentially retains all of the information from the spreadsheet but presents it in a graphically rich format, making comparisons and identification of patterns significantly clearer. While the Coulson plot format is highly useful in comparative genomics, its
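
    A matrix-of-pies figure in the spirit of the Coulson plot is straightforward to emulate with matplotlib; the sketch below is not CPG itself, and the taxa, complexes, and presence calls are invented.

    ```python
    # Minimal matrix-of-pie-charts sketch: rows are taxa, columns are
    # complexes; filled sectors mark subunits predicted present.
    import matplotlib.pyplot as plt

    taxa = ["Taxon A", "Taxon B"]
    complexes = {"Complex I": 4, "Complex II": 3}   # subunit counts
    present = {                                     # per-taxon presence calls
        "Taxon A": {"Complex I": [1, 1, 1, 0], "Complex II": [1, 1, 1]},
        "Taxon B": {"Complex I": [1, 0, 0, 0], "Complex II": [0, 1, 1]},
    }

    fig, axes = plt.subplots(len(taxa), len(complexes), figsize=(4, 4))
    for i, taxon in enumerate(taxa):
        for j, (cplx, n) in enumerate(complexes.items()):
            colors = ["tab:blue" if p else "white" for p in present[taxon][cplx]]
            axes[i][j].pie([1] * n, colors=colors,
                           wedgeprops={"edgecolor": "black"})
            if i == 0:
                axes[i][j].set_title(cplx)
            if j == 0:
                axes[i][j].set_ylabel(taxon)
    plt.savefig("coulson_sketch.png")
    ```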

  18. OPEN DATA FOR DISCOVERY SCIENCE.

    Science.gov (United States)

    Payne, Philip R O; Huang, Kun; Shah, Nigam H; Tenenbaum, Jessica

    2017-01-01

    The modern healthcare and life sciences ecosystem is moving towards an increasingly open and data-centric approach to discovery science. This evolving paradigm is predicated on a complex set of information needs related to our collective ability to share, discover, reuse, integrate, and analyze open biological, clinical, and population level data resources of varying composition, granularity, and syntactic or semantic consistency. Such an evolution is further impacted by a concomitant growth in the size of data sets that can and should be employed for both hypothesis discovery and testing. When such open data can be accessed and employed for discovery purposes, a broad spectrum of high impact end-points is made possible. These span the spectrum from identification of de novo biomarker complexes that can inform precision medicine, to the repositioning or repurposing of extant agents for new and cost-effective therapies, to the assessment of population level influences on disease and wellness. Of note, these types of uses of open data can be either primary, wherein open data is the substantive basis for inquiry, or secondary, wherein open data is used to augment or enrich project-specific or proprietary data that is not open in and of itself. This workshop is concerned with the key challenges, opportunities, and methodological best practices whereby open data can be used to drive the advancement of discovery science in all of the aforementioned capacities.

  19. Hubble 15 years of discovery

    CERN Document Server

    Lindberg Christensen, Lars; Kornmesser, M

    2006-01-01

    Hubble: 15 Years of Discovery was a key element of the European Space Agency's 15th anniversary celebration activities for the 1990 launch of the NASA/ESA Hubble Space Telescope. As an observatory in space, Hubble is one of the most successful scientific projects of all time, both in terms of scientific output and its immediate public appeal.

  20. Smartphones: A Potential Discovery Tool

    Directory of Open Access Journals (Sweden)

    Wendy Starkweather

    2009-09-01

    The anticipated wide adoption of smartphones by researchers is viewed by the authors as a basis for developing mobile-based services. In response to the UNLV Libraries’ strategic plan’s focus on experimentation and outreach, the authors investigate the current and potential role of smartphones as a valuable discovery tool for library users.

  1. Translational medicine and drug discovery

    National Research Council Canada - National Science Library

    Littman, Bruce H; Krishna, Rajesh

    2011-01-01

    ..., and examples of their application to real-life drug discovery and development. The latest thinking is presented by researchers from many of the world's leading pharmaceutical companies, including Pfizer, Merck, Eli Lilly, Abbott, and Novartis, as well as from academic institutions and public- private partnerships that support translational research...

  2. Structural Biology Guides Antibiotic Discovery

    Science.gov (United States)

    Polyak, Steven

    2014-01-01

    Modern drug discovery programs require the contribution of researchers in a number of specialist areas. One of these areas is structural biology. Using X-ray crystallography, the molecular basis of how a drug binds to its biological target and exerts its mode of action can be defined. For example, a drug that binds into the active site of an…

  3. A Discovery Approach to Movement.

    Science.gov (United States)

    O'Hagin, Isabel B.

    1998-01-01

    Investigates the effects of the discovery approach to movement-based instruction on children's level of musicality. Finds that the students with the highest musicality were girls, demonstrated reflective movements and a personal sense of style while moving, and made sense of the music by organizing, categorizing, and developing movement ideas.…

  4. Discoveries of isotopes by fission

    Indian Academy of Sciences (India)

    activities as the potential discovery of elements heavier than uranium [5]. He drew this conclusion ... alkaline earth metals in the irradiation of uranium by neutrons) Hahn and Strassmann did. ... the production of active barium isotopes from uranium and thorium by neutron irradiation; Proof of further active ...

  5. An Automated System for Citizen Searches for Exoplanets

    Science.gov (United States)

    Edberg, Stephen J.

    2016-05-01

    The Panoptic Astronomical Networked OPtical observatory for Transiting Exoplanets Survey (PANOPTES) is a citizen science project which aims to build low-cost, automated, robotic sky patrol camera systems that can be used to detect transiting exoplanets: planets orbiting other stars. The goal is to establish a worldwide network to image the nighttime celestial hemisphere 24/7/365. PANOPTES will search for exoplanets using the reduction in starlight caused when an exoplanet transits its host star. Individuals or groups can construct a PANOPTES station, tie it into the data reporting system, and contribute to the discovery of exoplanets across the large area of the sky not yet surveyed.
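
    The detection principle is easy to quantify: the fractional dimming during a transit is the squared ratio of planetary to stellar radius, depth = (R_p / R_s)^2. A quick illustration:

    ```python
    # Transit depth: fraction of starlight blocked when a planet crosses
    # its host star's disk, depth = (R_planet / R_star)**2.
    R_SUN_KM = 695_700.0
    R_JUPITER_KM = 71_492.0
    R_EARTH_KM = 6_371.0

    def transit_depth(r_planet_km, r_star_km=R_SUN_KM):
        return (r_planet_km / r_star_km) ** 2

    print(f"Jupiter-size planet: {transit_depth(R_JUPITER_KM):.3%} dimming")
    print(f"Earth-size planet:   {transit_depth(R_EARTH_KM):.5%} dimming")
    ```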

  6. Automated Test Case Generation

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I would like to present the concept of automated test case generation. I work on it as part of my PhD and I think it would be interesting also for other people. It is also the topic of a workshop paper that I am introducing in Paris. (abstract below) Please note that the talk itself would be more general and not about the specifics of my PhD, but about the broad field of Automated Test Case Generation. I would introduce the main approaches (combinatorial testing, symbolic execution, adaptive random testing) and their advantages and problems. (oracle problem, combinatorial explosion, ...) Abstract of the paper: Over the last decade code-based test case generation techniques such as combinatorial testing or dynamic symbolic execution have seen growing research popularity. Most algorithms and tool implementations are based on finding assignments for input parameter values in order to maximise the execution branch coverage. Only few of them consider dependencies from outside the Code Under Test’s scope such...

  7. Automation from pictures

    International Nuclear Information System (INIS)

    Kozubal, A.J.

    1992-01-01

    The state transition diagram (STD) model has been helpful in the design of real-time software, especially with the emergence of graphical computer-aided software engineering (CASE) tools. Nevertheless, the translation of the STD to real-time code has in the past been primarily a manual task. At Los Alamos we have automated this process. The designer constructs the STD using a CASE tool (Cadre Teamwork) with a special notation for events and actions. A translator converts the STD into an intermediate state notation language (SNL), and this SNL is compiled directly into C code (a state program). Execution of the state program is driven by external events, allowing multiple state programs to effectively share the resources of the host processor. Since the design and the code are tightly integrated through the CASE tool, the design and code never diverge, and we avoid design obsolescence. Furthermore, the CASE tool automates the production of formal technical documents from the graphic description encapsulated by the CASE tool. (author)
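
    The core idea, a transition table driven purely by external events, can be sketched in a few lines. The states, events, and actions below are invented for illustration; the actual translator emits C from SNL, not Python.

        # A "state program": transitions keyed by (state, event), each with
        # an attached action, driven entirely by incoming events.
        TRANSITIONS = {
            ("idle",    "start_cmd"): ("ramping", "open_valve"),
            ("ramping", "at_power"):  ("running", "log_status"),
            ("running", "fault"):     ("idle",    "close_valve"),
        }

        def make_state_program(actions):
            state = "idle"
            def handle(event):
                nonlocal state
                if (state, event) in TRANSITIONS:
                    state, action = TRANSITIONS[(state, event)]
                    actions[action]()    # side effect tied to the transition
                return state
            return handle

        actions = {name: (lambda n=name: print("action:", n))
                   for name in ("open_valve", "log_status", "close_valve")}
        step = make_state_program(actions)
        for event in ("start_cmd", "at_power", "fault"):
            print("state ->", step(event))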

  8. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT ...

    Science.gov (United States)

    The Automated Geospatial Watershed Assessment tool (AGWA) is a GIS interface jointly developed by the USDA Agricultural Research Service, the U.S. Environmental Protection Agency, the University of Arizona, and the University of Wyoming to automate the parameterization and execution of the Soil Water Assessment Tool (SWAT) and KINEmatic Runoff and EROSion (KINEROS2) hydrologic models. The application of these two models allows AGWA to conduct hydrologic modeling and watershed assessments at multiple temporal and spatial scales. AGWA’s current outputs are runoff (volumes and peaks) and sediment yield, plus nitrogen and phosphorus with the SWAT model. AGWA uses commonly available GIS data layers to fully parameterize, execute, and visualize results from both models. Through an intuitive interface, the user selects an outlet from which AGWA delineates and discretizes the watershed using a Digital Elevation Model (DEM) based on the individual model requirements. The watershed model elements are then intersected with soils and land cover data layers to derive the requisite model input parameters. The chosen model is then executed, and the results are imported back into AGWA for visualization. This allows managers to identify potential problem areas where additional monitoring can be undertaken or mitigation activities can be focused. AGWA also has tools to apply an array of best management practices. There are currently two versions of AGWA available: AGWA 1.5 for

  9. Maneuver Automation Software

    Science.gov (United States)

    Uffelman, Hal; Goodson, Troy; Pellegrin, Michael; Stavert, Lynn; Burk, Thomas; Beach, David; Signorelli, Joel; Jones, Jeremy; Hahn, Yungsun; Attiyah, Ahlam; hide

    2009-01-01

    The Maneuver Automation Software (MAS) automates the process of generating commands for maneuvers to keep the spacecraft of the Cassini-Huygens mission on a predetermined prime mission trajectory. Before MAS became available, a team of approximately 10 members had to work about two weeks to design, test, and implement each maneuver in a process that involved running many maneuver-related application programs and then serially handing off data products to other parts of the team. MAS enables a three-member team to design, test, and implement a maneuver in about one-half hour after Navigation has processed tracking data. MAS accepts more than 60 parameters and 22 files as input directly from users. MAS consists of Practical Extraction and Reporting Language (PERL) scripts that link, sequence, and execute the maneuver-related application programs: "Pushing a single button" on a graphical user interface causes MAS to run navigation programs that design a maneuver; programs that create sequences of commands to execute the maneuver on the spacecraft; and a program that generates predictions about maneuver performance and generates reports and other files that enable users to quickly review and verify the maneuver design. MAS can also generate presentation materials, initiate electronic command request forms, and archive all data products for future reference.
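
    The "single button" behavior described above amounts to sequencing programs and handing each product to the next step. A hedged Python sketch of that pattern follows; the program and file names are hypothetical stand-ins, not actual MAS tools, so unknown executables are only dry-run:

        # Chain maneuver-related programs, stopping on the first failure.
        import shutil, subprocess, sys

        PIPELINE = [
            ["design_maneuver", "--tracking", "tracking.dat", "-o", "burn.plan"],
            ["build_sequence", "burn.plan", "-o", "commands.seq"],
            ["predict_and_report", "commands.seq", "-o", "report.txt"],
        ]

        for cmd in PIPELINE:
            if shutil.which(cmd[0]) is None:       # hypothetical tool: dry-run
                print("would run:", " ".join(cmd))
                continue
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode != 0:             # halt the hand-off chain
                sys.exit(f"step failed: {cmd[0]}\n{result.stderr}")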

  10. A Demonstration of Automated DNA Sequencing.

    Science.gov (United States)

    Latourelle, Sandra; Seidel-Rogol, Bonnie

    1998-01-01

    Details a simulation that employs a paper-and-pencil model to demonstrate the principles behind automated DNA sequencing. Discusses the advantages of automated sequencing as well as the chemistry of automated DNA sequencing. (DDR)

  11. Meta-analyses and Forest plots using a microsoft excel spreadsheet: step-by-step guide focusing on descriptive data analysis

    Directory of Open Access Journals (Sweden)

    Neyeloff Jeruza L

    2012-01-01

    Full Text Available. Background: Meta-analyses are necessary to synthesize data obtained from primary research, and in many situations reviews of observational studies are the only available alternative. General purpose statistical packages can meta-analyze data, but usually require external macros or coding. Commercial specialist software is available, but may be expensive and focused on a particular type of primary data. Most available software packages have limitations in dealing with descriptive data, and the graphical display of summary statistics such as incidence and prevalence is unsatisfactory. Analyses can be conducted using Microsoft Excel, but there was no previous guide available. Findings: We constructed a step-by-step guide to perform a meta-analysis in a Microsoft Excel spreadsheet, using either fixed-effect or random-effects models. We have also developed a second spreadsheet capable of producing customized forest plots. Conclusions: It is possible to conduct a meta-analysis using only Microsoft Excel. More importantly, to our knowledge this is the first description of a method for producing a statistically adequate but graphically appealing forest plot summarizing descriptive data, using widely available software.

  12. Meta-analyses and Forest plots using a microsoft excel spreadsheet: step-by-step guide focusing on descriptive data analysis.

    Science.gov (United States)

    Neyeloff, Jeruza L; Fuchs, Sandra C; Moreira, Leila B

    2012-01-20

    Meta-analyses are necessary to synthesize data obtained from primary research, and in many situations reviews of observational studies are the only available alternative. General purpose statistical packages can meta-analyze data, but usually require external macros or coding. Commercial specialist software is available, but may be expensive and focused on a particular type of primary data. Most available software packages have limitations in dealing with descriptive data, and the graphical display of summary statistics such as incidence and prevalence is unsatisfactory. Analyses can be conducted using Microsoft Excel, but there was no previous guide available. We constructed a step-by-step guide to perform a meta-analysis in a Microsoft Excel spreadsheet, using either fixed-effect or random-effects models. We have also developed a second spreadsheet capable of producing customized forest plots. It is possible to conduct a meta-analysis using only Microsoft Excel. More importantly, to our knowledge this is the first description of a method for producing a statistically adequate but graphically appealing forest plot summarizing descriptive data, using widely available software.
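
    The arithmetic behind both spreadsheets is compact enough to show directly. The sketch below reproduces standard inverse-variance pooling with the DerSimonian-Laird random-effects adjustment, written in Python for brevity rather than as the guide's Excel formulas; the study data are invented:

        # Fixed-effect and DerSimonian-Laird random-effects pooling
        # (requires two or more studies).
        import math

        def meta_analysis(effects, variances):
            w = [1.0 / v for v in variances]           # inverse-variance weights
            fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)

            # Between-study variance tau^2 (DerSimonian-Laird)
            q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
            c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
            tau2 = max(0.0, (q - (len(effects) - 1)) / c)

            w_re = [1.0 / (v + tau2) for v in variances]
            pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
            se = math.sqrt(1.0 / sum(w_re))
            return fixed, pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

        # Three toy study prevalences (proportions) and their variances:
        print(meta_analysis([0.12, 0.18, 0.15], [0.0004, 0.0009, 0.0005]))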

  13. Opening up Library Automation Software

    Science.gov (United States)

    Breeding, Marshall

    2009-01-01

    Throughout the history of library automation, the author has seen a steady advancement toward more open systems. In the early days of library automation, when proprietary systems dominated, the need for standards was paramount since other means of inter-operability and data exchange weren't possible. Today's focus on Application Programming…

  14. Automated Power-Distribution System

    Science.gov (United States)

    Ashworth, Barry; Riedesel, Joel; Myers, Chris; Miller, William; Jones, Ellen F.; Freeman, Kenneth; Walsh, Richard; Walls, Bryan K.; Weeks, David J.; Bechtel, Robert T.

    1992-01-01

    Autonomous power-distribution system includes power-control equipment and automation equipment. System automatically schedules connection of power to loads and reconfigures itself when it detects fault. Potential terrestrial applications include optimization of consumption of power in homes, power supplies for autonomous land vehicles and vessels, and power supplies for automated industrial processes.

  15. Automation in Catholic College Libraries.

    Science.gov (United States)

    Stussy, Susan A.

    1981-01-01

    Reports on a 1980 survey of library automation in 105 Catholic colleges with collections containing less than 300,000 bibliographic items. The report indicates that network membership and grant funding were a vital part of library automation in the schools surveyed. (Author/LLS)

  16. Library Automation: A Year on.

    Science.gov (United States)

    Electronic Library, 1997

    1997-01-01

    A follow-up interview with librarians from Hong Kong, Mexico, Australia, Canada, and New Zealand about library automation systems in their libraries and their plans for the future. Discusses system performance, upgrades, services, resources, intranets, trends in automation, Web interfaces, full-text image/document systems, document delivery, OPACs…

  17. Library Automation: A Balanced View

    Science.gov (United States)

    Avram, Henriette

    1972-01-01

    Ellsworth Mason's two recently published papers, severely criticizing library automation, are refuted. While admitting to the failures and problems, this paper also presents the positive accomplishments in a brief evaluation of the status of library automation in 1971. (16 references) (Author/SJ)

  18. Library Automation: A Critical Review.

    Science.gov (United States)

    Overmyer, LaVahn

    This report has two main purposes: (1) To give an account of the use of automation in selected libraries throughout the country and in the development of networks; and (2) To discuss some of the fundamental considerations relevant to automation and the implications for library education, library research and the library profession. The first part…

  19. Automated Methods Of Corrosion Measurements

    DEFF Research Database (Denmark)

    Bech-Nielsen, Gregers; Andersen, Jens Enevold Thaulov; Reeve, John Ch

    1997-01-01

    The chapter describes the following automated measurements: Corrosion Measurements by Titration, Imaging Corrosion by Scanning Probe Microscopy, Critical Pitting Temperature and Application of the Electrochemical Hydrogen Permeation Cell.

  20. Automated Test-Form Generation

    Science.gov (United States)

    van der Linden, Wim J.; Diao, Qi

    2011-01-01

    In automated test assembly (ATA), the methodology of mixed-integer programming is used to select test items from an item bank to meet the specifications for a desired test form and optimize its measurement accuracy. The same methodology can be used to automate the formatting of the set of selected items into the actual test form. Three different…
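
    To make the selection step concrete, the toy sketch below assembles a short form from a small item bank by exhaustive search, maximizing summed item information subject to a content constraint. Real ATA delegates this to a mixed-integer programming solver; the bank and constraint here are invented:

        # Pick `form_length` items maximizing information, covering every
        # content area (a stand-in for MIP constraints).
        from itertools import combinations

        bank = [(0.42, "algebra"), (0.35, "algebra"), (0.51, "geometry"),
                (0.28, "geometry"), (0.47, "numbers"), (0.33, "numbers"),
                (0.40, "algebra"), (0.39, "geometry")]
        areas = {"algebra", "geometry", "numbers"}

        def assemble(bank, form_length=4):
            best, best_info = None, -1.0
            for form in combinations(range(len(bank)), form_length):
                if {bank[i][1] for i in form} != areas:   # content constraint
                    continue
                info = sum(bank[i][0] for i in form)
                if info > best_info:
                    best, best_info = form, info
            return best, best_info

        print(assemble(bank))   # indices of the optimal form and its information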

  1. A Survey of Automated Deduction

    OpenAIRE

    Bundy, Alan

    1999-01-01

    We survey research in the automation of deductive inference, from its beginnings in the early history of computing to the present day. We identify and describe the major areas of research interest and their applications. The area is characterised by its wide variety of proof methods, forms of automated deduction and applications.

  2. The Science of Home Automation

    Science.gov (United States)

    Thomas, Brian Louis

    Smart home technologies and the concept of home automation have become more popular in recent years. This popularity has been accompanied by social acceptance of passive sensors installed throughout the home. The subsequent increase in smart homes facilitates the creation of home automation strategies. We believe that home automation strategies can be generated intelligently by utilizing smart home sensors and activity learning. In this dissertation, we hypothesize that home automation can benefit from activity awareness. To test this, we develop our activity-aware smart automation system, CARL (CASAS Activity-aware Resource Learning). CARL learns the associations between activities and device usage from historical data and utilizes the activity-aware capabilities to control the devices. To help validate CARL we deploy and test three different versions of the automation system in a real-world smart environment. To provide a foundation of activity learning, we integrate existing activity recognition and activity forecasting into CARL home automation. We also explore two alternatives to using human-labeled data to train the activity learning models. The first unsupervised method is Activity Detection, and the second is a modified DBSCAN algorithm that utilizes Dynamic Time Warping (DTW) as a distance metric. We compare the performance of activity learning with human-defined labels and with automatically-discovered activity categories. To provide evidence in support of our hypothesis, we evaluate CARL automation in a smart home testbed. Our results indicate that home automation can be boosted through activity awareness. We also find that the resulting automation has a high degree of usability and comfort for the smart home resident.
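
    Since the modified DBSCAN described above hinges on dynamic time warping as its distance metric, a generic textbook DTW (not necessarily the author's exact variant) is sketched below:

        # Classic O(n*m) dynamic-time-warping distance between two series.
        def dtw(a, b):
            inf = float("inf")
            n, m = len(a), len(b)
            d = [[inf] * (m + 1) for _ in range(n + 1)]
            d[0][0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    d[i][j] = cost + min(d[i - 1][j],       # insertion
                                         d[i][j - 1],       # deletion
                                         d[i - 1][j - 1])   # match
            return d[n][m]

        # Time-shifted copies of a shape stay close under DTW:
        print(dtw([0, 1, 2, 3, 2, 1], [0, 0, 1, 2, 3, 2, 1]))   # small
        print(dtw([0, 1, 2, 3, 2, 1], [3, 3, 0, 0, 1, 1, 0]))   # larger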

  3. Arthritis Genetics Analysis Aids Drug Discovery

    Science.gov (United States)

    NIH Research Matters, January 13, 2014: Arthritis Genetics Analysis Aids Drug Discovery. An international research team identified 42 new ...

  4. Berkeley automated supernova search

    Energy Technology Data Exchange (ETDEWEB)

    Kare, J.T.; Pennypacker, C.R.; Muller, R.A.; Mast, T.S.; Crawford, F.S.; Burns, M.S.

    1981-01-01

    The Berkeley automated supernova search employs a computer controlled 36-inch telescope and charge coupled device (CCD) detector to image 2500 galaxies per night. A dedicated minicomputer compares each galaxy image with stored reference data to identify supernovae in real time. The threshold for detection is m_v = 18.8. We plan to monitor roughly 500 galaxies in Virgo and closer every night, and an additional 6000 galaxies out to 70 Mpc on a three night cycle. This should yield very early detection of several supernovae per year for detailed study, and reliable premaximum detection of roughly 100 supernovae per year for statistical studies. The search should be operational in mid-1982.
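
    The real-time comparison step reduces to image differencing against a stored reference. A toy Python version follows; NumPy arrays stand in for calibrated CCD frames, and the threshold is an arbitrary assumption:

        # Flag pixels that brighten well above the difference-image noise.
        import numpy as np

        def detect_transients(new, reference, n_sigma=5.0):
            diff = new.astype(float) - reference.astype(float)
            noise = np.std(diff)
            return np.argwhere(diff > n_sigma * noise)   # (row, col) candidates

        rng = np.random.default_rng(42)
        reference = 100 + rng.normal(0, 3, (64, 64))
        new = reference + rng.normal(0, 3, (64, 64))
        new[20, 31] += 80                                # injected "supernova"
        print(detect_transients(new, reference))         # -> [[20 31]]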

  5. Berkeley automated supernova search

    International Nuclear Information System (INIS)

    Kare, J.T.; Pennypacker, C.R.; Muller, R.A.; Mast, T.S.

    1981-01-01

    The Berkeley automated supernova search employs a computer controlled 36-inch telescope and charge coupled device (CCD) detector to image 2500 galaxies per night. A dedicated minicomputer compares each galaxy image with stored reference data to identify supernovae in real time. The threshold for detection is m_v = 18.8. We plan to monitor roughly 500 galaxies in Virgo and closer every night, and an additional 6000 galaxies out to 70 Mpc on a three night cycle. This should yield very early detection of several supernovae per year for detailed study, and reliable premaximum detection of roughly 100 supernovae per year for statistical studies. The search should be operational in mid-1982.

  6. Automated attendance accounting system

    Science.gov (United States)

    Chapman, C. P. (Inventor)

    1973-01-01

    An automated accounting system useful for applying data to a computer from any or all of a multiplicity of data terminals is disclosed. The system essentially includes a preselected number of data terminals which are each adapted to convert data words of decimal form to another form, i.e., binary, usable with the computer. Each data terminal may take the form of a keyboard unit having a number of depressable buttons or switches corresponding to selected data digits and/or function digits. A bank of data buffers, one of which is associated with each data terminal, is provided as a temporary storage. Data from the terminals is applied to the data buffers on a digit by digit basis for transfer via a multiplexer to the computer.

  7. Robust automated knowledge capture.

    Energy Technology Data Exchange (ETDEWEB)

    Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt

    2011-10-01

    This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project has developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task and experience related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high-consequence national security applications, and in particular, the individual characteristics that underlie adaptive thinking.

  8. (No) Security in Automation!?

    CERN Document Server

    Lüders, S

    2008-01-01

    Modern Information Technologies like Ethernet, TCP/IP, web server or FTP are nowadays increasingly used in distributed control and automation systems. Thus, information from the factory floor is now directly available at the management level (From Shop-Floor to Top-Floor) and can be manipulated from there. Despite the benefits coming with this (r)evolution, new vulnerabilities are inherited, too: worms and viruses spread within seconds via Ethernet and attackers are becoming interested in control systems. Unfortunately, control systems lack the standard security features that usual office PCs have. This contribution will elaborate on these problems, discuss the vulnerabilities of modern control systems and present international initiatives for mitigation.

  9. Printing quality control automation

    Science.gov (United States)

    Trapeznikova, O. V.

    2018-04-01

    One of the most important problems in standardizing the offset printing process is the control of print quality and its automation. To solve this problem, software has been developed that takes into account the specifics of the printing system's components and their behavior during printing. To characterize the distribution of the ink layer on the printed substrate, the deviation of the ink layer thickness on the sheet from a nominal surface is proposed. Constructing surface projections of the color gamut bodies from the geometric data makes it possible to visualize the color reproduction gamut of printing systems across brightness ranges and specific color sectors, providing a qualitative comparison of systems by their reproduction of individual colors over varying ranges of brightness.

  10. Automated electronic filter design

    CERN Document Server

    Banerjee, Amal

    2017-01-01

    This book describes a novel, efficient and powerful scheme for designing and evaluating the performance characteristics of any electronic filter designed with predefined specifications. The author explains techniques that enable readers to eliminate complicated manual, and thus error-prone and time-consuming, steps of traditional design techniques. The presentation includes demonstration of efficient automation, using an ANSI C language program, which accepts any filter design specification (e.g. Chebyshev low-pass filter, cut-off frequency, pass-band ripple etc.) as input and generates as output a SPICE (Simulation Program with Integrated Circuit Emphasis) format netlist. Readers then can use this netlist to run simulations with any version of the popular SPICE simulator, increasing accuracy of the final results, without violating any of the key principles of the traditional design scheme.
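
    As a flavor of this spec-in, netlist-out automation (the book's program is ANSI C; Python is used here for brevity), the sketch below sizes a normalized Butterworth low-pass ladder, g_k = 2*sin((2k-1)*pi/(2n)), denormalizes it to the requested cutoff and termination, and emits a SPICE netlist:

        # Emit a SPICE netlist for an n-th order Butterworth low-pass filter.
        import math

        def butterworth_lowpass_netlist(order, f_cutoff, r0=50.0):
            w = 2 * math.pi * f_cutoff
            lines = [f"* Butterworth low-pass, n={order}, fc={f_cutoff:g} Hz",
                     "VIN in 0 AC 1",
                     f"RS in 1 {r0:g}"]
            node = 1
            for k in range(1, order + 1):
                g = 2 * math.sin((2 * k - 1) * math.pi / (2 * order))
                if k % 2:    # odd elements: shunt capacitors
                    lines.append(f"C{k} {node} 0 {g / (r0 * w):.4e}")
                else:        # even elements: series inductors
                    lines.append(f"L{k} {node} {node + 1} {g * r0 / w:.4e}")
                    node += 1
            lines.append(f"RL {node} 0 {r0:g}")
            return "\n".join(lines)

        print(butterworth_lowpass_netlist(order=3, f_cutoff=1e6))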

  11. Automated Essay Scoring

    Directory of Open Access Journals (Sweden)

    Semire DIKLI

    2006-01-01

    Full Text Available. The impacts of computers on writing have been widely studied for three decades. Even basic computer functions, i.e. word processing, have been of great assistance to writers in modifying their essays. Research on Automated Essay Scoring (AES) has revealed that computers have the capacity to function as a more effective cognitive tool (Attali, 2004). AES is defined as the computer technology that evaluates and scores written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). Revision and feedback are essential aspects of the writing process. Students need to receive feedback in order to increase their writing quality. However, responding to student papers can be a burden for teachers; particularly if they have large numbers of students and assign frequent writing assignments, providing individual feedback to student essays might be quite time consuming. AES systems can be very useful because they can provide the student with a score as well as feedback within seconds (Page, 2003). Four types of AES systems are widely used by testing companies, universities, and public schools: Project Essay Grader (PEG), Intelligent Essay Assessor (IEA), E-rater, and IntelliMetric. AES is a developing technology. Many AES systems are used to overcome time, cost, and generalizability issues in writing assessment. The accuracy and reliability of these systems have been proven to be high. The search for excellence in machine scoring of essays is continuing, and numerous studies are being conducted to improve the effectiveness of AES systems.

  12. Automated endoscope reprocessors.

    Science.gov (United States)

    Desilets, David; Kaul, Vivek; Tierney, William M; Banerjee, Subhas; Diehl, David L; Farraye, Francis A; Kethu, Sripathi R; Kwon, Richard S; Mamula, Petar; Pedrosa, Marcos C; Rodriguez, Sarah A; Wong Kee Song, Louis-Michel

    2010-10-01

    The ASGE Technology Committee provides reviews of existing, new, or emerging endoscopic technologies that have an impact on the practice of GI endoscopy. Evidence-based methodology is used, with a MEDLINE literature search to identify pertinent clinical studies on the topic and a MAUDE (U.S. Food and Drug Administration Center for Devices and Radiological Health) database search to identify the reported complications of a given technology. Both are supplemented by accessing the "related articles" feature of PubMed and by scrutinizing pertinent references cited by the identified studies. Controlled clinical trials are emphasized, but in many cases data from randomized, controlled trials are lacking. In such cases, large case series, preliminary clinical studies, and expert opinions are used. Technical data are gathered from traditional and Web-based publications, proprietary publications, and informal communications with pertinent vendors. Technology Status Evaluation Reports are drafted by 1 or 2 members of the ASGE Technology Committee, reviewed and edited by the committee as a whole, and approved by the Governing Board of the ASGE. When financial guidance is indicated, the most recent coding data and list prices at the time of publication are provided. For this review, the MEDLINE database was searched through February 2010 for articles related to automated endoscope reprocessors, using the words endoscope reprocessing, endoscope cleaning, automated endoscope reprocessors, and high-level disinfection. Technology Status Evaluation Reports are scientific reviews provided solely for educational and informational purposes. Technology Status Evaluation Reports are not rules and should not be construed as establishing a legal standard of care or as encouraging, advocating, requiring, or discouraging any particular treatment or payment for such treatment. Copyright © 2010 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.

  13. [Artificial Intelligence in Drug Discovery].

    Science.gov (United States)

    Fujiwara, Takeshi; Kamada, Mayumi; Okuno, Yasushi

    2018-04-01

    With the increase in data generated by analytical instruments, the application of artificial intelligence (AI) technology in the medical field is indispensable. In particular, practical application of AI technology is strongly required in "genomic medicine" and "genomic drug discovery", which conduct medical practice and novel drug development based on individual genomic information. In our laboratory, we have been developing a database that integrates genome data and clinical information obtained by clinical genome analysis, and a computational support system for the clinical interpretation of variants using AI. In addition, with the aim of creating new therapeutic targets in genomic drug discovery, we have been working on the development of a system that predicts the binding affinity between mutated proteins and drugs by molecular dynamics simulation using the supercomputer "Kei". We have also tackled problems in virtual compound screening: our AI technology has successfully generated virtual compound libraries, and deep learning has enabled us to predict interactions between compounds and target proteins.
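
    The authors' systems are not described in detail here, but one standard ingredient of virtual screening, similarity ranking of a compound library against a known active, is easy to illustrate. The fingerprints below are toy bit sets, not real chemical descriptors:

        # Rank library compounds by Tanimoto similarity to a query active.
        def tanimoto(a, b):
            return len(a & b) / len(a | b) if (a | b) else 0.0

        query = {1, 4, 7, 9, 15, 22}            # fingerprint of a known active
        library = {
            "cmpd_A": {1, 4, 7, 9, 15, 23},
            "cmpd_B": {2, 5, 8, 30},
            "cmpd_C": {1, 4, 9, 15, 22, 40, 41},
        }
        for name, fp in sorted(library.items(),
                               key=lambda kv: tanimoto(query, kv[1]),
                               reverse=True):
            print(name, round(tanimoto(query, fp), 3))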

  14. Glycoscience aids in biomarker discovery

    Directory of Open Access Journals (Sweden)

    Serenus Hua & Hyun Joo An

    2012-06-01

    Full Text Available. The glycome consists of all glycans (or carbohydrates) within a biological system, and modulates a wide range of important biological activities, from protein folding to cellular communications. The mining of the glycome for disease markers represents a new paradigm for biomarker discovery; however, this effort is severely complicated by the vast complexity and structural diversity of glycans. This review summarizes recent developments in analytical technology and methodology as applied to the fields of glycomics and glycoproteomics. Mass spectrometric strategies for glycan compositional profiling are described, as are potential refinements which allow structure-specific profiling. Analytical methods that can discern protein glycosylation at a specific site of modification are also discussed in detail. Biomarker discovery applications are shown at each level of analysis, highlighting the key role that glycoscience can play in helping scientists understand disease biology.

  15. Enteric Neurobiology: Discoveries and Directions.

    Science.gov (United States)

    Wood, Jackie D

    Discovery and documentation of noncholinergic-nonadrenergic neurotransmission in the enteric nervous system started a revolution in mechanisms of neural control of the digestive tract that continues into a twenty-first-century era of translational gastroenterology, now firmly embedded in the term neurogastroenterology. This chapter, on Enteric Neurobiology: Discoveries and Directions, tracks the step-by-step advances in enteric neuronal electrophysiology and synaptic behavior, and progresses to the higher-order functions of central pattern generators, hard-wired synaptic circuits, and libraries of neural programs in the brain-in-the-gut that underlie the several different patterns of motility and secretory behavior occurring in the specialized, serially connected compartments extending from the esophagus to the anus.

  16. A quantum causal discovery algorithm

    Science.gov (United States)

    Giarmatzi, Christina; Costa, Fabio

    2018-03-01

    Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.
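
    The grouping step, ordering events into causally ordered non-signaling sets, has a simple classical analogue. The sketch below is a toy on a boolean signaling relation, not the quantum algorithm itself: events are layered so that no event can signal to its own or an earlier layer.

        # Layer events by a "receives no signal from the rest" rule.
        def causal_layers(events, signals_to):
            """signals_to[e] = set of events that e can influence."""
            remaining, layers = set(events), []
            while remaining:
                layer = {e for e in remaining
                         if not any(e in signals_to[o] for o in remaining - {e})}
                if not layer:
                    raise ValueError("cyclic signaling: no causal order exists")
                layers.append(sorted(layer))
                remaining -= layer
            return layers

        signals_to = {"A": {"C"}, "B": {"C", "D"}, "C": {"D"}, "D": set()}
        print(causal_layers("ABCD", signals_to))   # [['A', 'B'], ['C'], ['D']]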

  17. The discovery of immunoglobulin E.

    Science.gov (United States)

    Ribatti, Domenico

    2016-03-01

    The discovery of immunoglobulin E (IgE) was a breakthrough in the field of allergy and immunology. Our understanding of the mechanisms of allergic reactions and the role of IgE in these disorders has paralleled the discovery of treatment modalities for patients with allergy. The first clue to the existence of a substance responsible for hypersensitivity reactions was demonstrated in 1921 by Prausnitz and Kustner, and after four decades it was identified as an immunoglobulin subclass by the Ishizakas and co-workers. In 1968, the WHO International Reference Centre for Immunoglobulins announced the presence of a fifth immunoglobulin isotype, IgE. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Androgenetic alopecia: stress of discovery.

    Science.gov (United States)

    Passchier, Jan; Erdman, Jeroen; Hammiche, Fatima; Erdman, Ruud A M

    2006-02-01

    The psychological problems of men in the initial stages of alopecia androgenetica (hereditary male hair loss) have seldom been studied. We retrospectively evaluated, by questionnaire and interview, two groups of 80 men with alopecia androgenetica in Stages II to IV, which indicate the amount of hair loss, who visited a dermatology clinic for benign dermatological complaints other than hair loss (overall N=160; Group I: M=48 yr., SD=18.2; Group II: M=50 yr., SD=18.0). As predicted, hair problems were reported to be significantly greater overall at the moment of discovery of hair loss than later. About half of the men reported feeling annoyed to very annoyed about the discovery of their hair loss. For those patients, provision of information via the internet might facilitate a visit to the dermatologist.

  19. Cyber-Enabled Scientific Discovery

    International Nuclear Information System (INIS)

    Chan, Tony; Jameson, Leland

    2007-01-01

    It is often said that numerical simulation is third in the group of three ways to explore modern science: theory, experiment and simulation. Carefully executed modern numerical simulations can, however, be considered at least as relevant as experiment and theory. In comparison to physical experimentation, with numerical simulation one has the numerically simulated values of every field variable at every grid point in space and time. In comparison to theory, with numerical simulation one can explore sets of very complex non-linear equations such as the Einstein equations that are very difficult to investigate theoretically. Cyber-enabled scientific discovery is not just about numerical simulation but about every possible issue related to scientific discovery by utilizing cyberinfrastructure such as the analysis and storage of large data sets, the creation of tools that can be used by broad classes of researchers and, above all, the education and training of a cyber-literate workforce

  20. 12 CFR 308.107 - Document discovery.

    Science.gov (United States)

    2010-01-01

    12 CFR 308.107 (Banks and Banking), Rules of Practice and Procedure, General Rules of Procedure, § 308.107 Document discovery. (a) Parties to proceedings ... only through the production of documents. No other form of discovery shall be allowed. (b) Any ...

  1. 34 CFR 81.16 - Discovery.

    Science.gov (United States)

    2010-07-01

    34 CFR 81.16 (Education), Office of the ... § 81.16 Discovery. ... voluntarily. (b) The ALJ, at a party's request, may order compulsory discovery described in paragraph (c) of ... respect to an issue in the case; (3) The discovery request was not made primarily for the purposes of ...

  2. 42 CFR 426.532 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... purpose of this section, the term documents includes relevant information, reports, answers, records... § 426.532 Discovery. (a) General rule. If the Board orders discovery, the Board must establish a... or burdensome; or (iii) Will unduly delay the proceeding. (c) Types of discovery available. A party...

  3. The discovery of the antiproton

    International Nuclear Information System (INIS)

    Chamberlain, Owen

    1989-01-01

    A number of groups of particle physicists competed to provide track evidence of the existence of Dirac's postulated antiproton in the mid-1950s. The work of the several teams is described briefly. The author describes the work of his own group on the Bevatron in more detail, and how they finally observed the antiproton. The article finishes with an assessment of the importance of this discovery. (UK)

  4. Model organisms and target discovery.

    Science.gov (United States)

    Muda, Marco; McKenna, Sean

    2004-09-01

    The wealth of information harvested from full genomic sequencing projects has not generated a parallel increase in the number of novel targets for therapeutic intervention. Several pharmaceutical companies have realized that novel drug targets can be identified and validated using simple model organisms. After decades of service in basic research laboratories, yeasts, worms, flies, fishes, and mice are now the cornerstones of modern drug discovery programs. © 2004 Elsevier Ltd. All rights reserved.

  5. Gas reserves, discoveries and production

    International Nuclear Information System (INIS)

    Saniere, A.

    2006-01-01

    Between 2000 and 2004, new discoveries, located mostly in the Asia/Pacific region, permitted a 71% replacement rate of produced reserves. The Middle East and the offshore sector represent a growing proportion of world gas production. Non-conventional gas resources are substantial but are not exploited to any significant extent, except in the United States, where they account for 30% of U.S. gas production. (author)

  6. Specdata: Automated Analysis Software for Broadband Spectra

    Science.gov (United States)

    Oliveira, Jasmine N.; Martin-Drumel, Marie-Aline; McCarthy, Michael C.

    2017-06-01

    With the advancement of chirped-pulse techniques, broadband rotational spectra with a few tens to several hundred GHz of spectral coverage are now routinely recorded. When studying the multi-component mixtures that might result, for example, with the use of an electrical discharge, lines of new chemical species are often obscured by those of known compounds, and analysis can be laborious. To address this issue, we have developed SPECdata, an open-source, interactive tool designed to simplify and greatly accelerate spectral analysis and discovery. Our software tool combines both automated and manual components that free the user from computation, while giving them considerable flexibility to assign, manipulate, interpret and export their analysis. The automated - and key - component of the new software is a database query system that rapidly assigns transitions of known species in an experimental spectrum. For each experiment, the software identifies spectral features, and subsequently assigns them to known molecules within an in-house database (Pickett .cat files, lists of frequencies...), or those catalogued in Splatalogue (using automatic on-line queries). With suggested assignments, control is then handed over to the user, who can choose to accept, decline or add additional species. Data visualization, statistical information, and interactive widgets assist the user in making decisions about their data. SPECdata has several other useful features intended to improve the user experience. Exporting a full report of the analysis, or a peak file in which assigned lines are removed, are among several options. A user may also save their progress to continue at another time. Additional features of SPECdata help the user to maintain and expand their database for future use. A user-friendly interface allows one to search, upload, edit or update catalog or experiment entries.
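
    The automated assignment step boils down to matching measured peaks against catalogued transition frequencies within a tolerance. A minimal sketch follows; the catalog entries and tolerance are invented, and the real tool queries local .cat files and Splatalogue rather than an in-memory list:

        # Assign each peak to the nearest catalog line within tol_mhz.
        def assign_lines(peaks_mhz, catalog, tol_mhz=0.10):
            assigned, unidentified = [], []
            for peak in peaks_mhz:
                freq, species = min(catalog, key=lambda c: abs(c[0] - peak))
                if abs(freq - peak) <= tol_mhz:
                    assigned.append((peak, species, round(freq - peak, 3)))
                else:
                    unidentified.append(peak)    # candidate new species
            return assigned, unidentified

        catalog = [(9098.332, "HC3N"), (14488.490, "c-C3H2"), (18343.145, "HC5N")]
        assigned, unknown = assign_lines([9098.35, 12345.67, 18343.10], catalog)
        print(assigned)   # matched transitions with frequency offsets
        print(unknown)    # lines left over for discovery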

  7. Sea Level Rise Data Discovery

    Science.gov (United States)

    Quach, N.; Huang, T.; Boening, C.; Gill, K. M.

    2016-12-01

    Research related to sea level rise crosses multiple disciplines from sea ice to land hydrology. The NASA Sea Level Change Portal (SLCP) is a one-stop source for current sea level change information and data, including interactive tools for accessing and viewing regional data, a virtual dashboard of sea level indicators, and ongoing updates through a suite of editorial products that include content articles, graphics, videos, and animations. The architecture behind the SLCP makes it possible to integrate web content and data relevant to sea level change that are archived across various data centers as well as new data generated by sea level change principal investigators. The Extensible Data Gateway Environment (EDGE) is incorporated into the SLCP architecture to provide a unified platform for web content and science data discovery. EDGE is a data integration platform designed to facilitate high-performance geospatial data discovery and access with the ability to support multi-metadata standard specifications. EDGE has the capability to retrieve data from one or more sources and package the resulting sets into a single response to the requestor. With this unified endpoint, the Data Analysis Tool that is available on the SLCP can retrieve dataset and granule level metadata as well as perform geospatial search on the data. This talk focuses on the architecture that makes it possible to seamlessly integrate and enable discovery of disparate data relevant to sea level rise.

  8. Discovery of a Makemakean Moon

    Science.gov (United States)

    Parker, Alex H.; Buie, Marc W.; Grundy, Will M.; Noll, Keith S.

    2016-01-01

    We describe the discovery of a satellite in orbit about the dwarf planet (136472) Makemake. This satellite, provisionally designated S/2015 (136472) 1, was detected in imaging data collected with the Hubble Space Telescope's Wide Field Camera 3 on UTC 2015 April 27, at 7.80 +/- 0.04 mag fainter than Makemake and at a separation of 0.57 arcsec. It likely evaded detection in previous satellite searches due to a nearly edge-on orbital configuration, placing it deep within the glare of Makemake during a substantial fraction of its orbital period. This configuration would place Makemake and its satellite near a mutual event season. Insufficient orbital motion was detected to make a detailed characterization of its orbital properties, prohibiting a measurement of the system mass with the discovery data alone. Preliminary analysis indicates that if the orbit is circular, its orbital period must be longer than 12.4 days and it must have a semimajor axis of at least approximately 21,000 km. We find that the properties of Makemake's moon suggest that the majority of the dark material detected in the system by thermal observations may not reside on the surface of Makemake, but may instead be attributable to S/2015 (136472) 1 having a uniform dark surface. This dark moon hypothesis can be directly tested with future James Webb Space Telescope observations. We discuss the implications of this discovery for the spin state, figure, and thermal properties of Makemake and the apparent ubiquity of trans-Neptunian dwarf planet satellites.

  9. A New Universe of Discoveries

    Science.gov (United States)

    Córdova, France A.

    2016-01-01

    The convergence of emerging advances in astronomical instruments, computational capabilities and talented practitioners (both professional and civilian) is creating an extraordinary new environment for making numerous fundamental discoveries in astronomy, ranging from the nature of exoplanets to understanding the evolution of solar systems and galaxies. The National Science Foundation is playing a critical role in supporting, stimulating, and shaping these advances. NSF is more than an agency of government or a funding mechanism for the infrastructure of science. The work of NSF is a sacred trust that every generation of Americans makes to those of the next generation, that we will build on the body of knowledge we inherit and continue to push forward the frontiers of science. We never lose sight of NSF's obligation to "explore the unexplored" and inspire all of humanity with the wonders of discovery. As the only Federal agency dedicated to the support of basic research and education in all fields of science and engineering, NSF has empowered discoveries across a broad spectrum of scientific inquiry for more than six decades. The result is fundamental scientific research that has had a profound impact on our nation's innovation ecosystem and kept our nation at the very forefront of the world's science-and-engineering enterprise.

  10. Automating the radiographic NDT process

    International Nuclear Information System (INIS)

    Aman, J.K.

    1986-01-01

    Automation, the removal of the human element in inspection, has not been generally applied to film radiographic NDT. The justification for automating is not only productivity but also reliability of results. Film remains in the automated system of the future because of its extremely high image content, approximately 8 x 10^9 bits per 14 x 17 (inch) film, the equivalent of 2200 computer floppy discs. Parts handling systems and robotics, applied in manufacturing and some NDT modalities, should now be applied to film radiographic NDT systems. Automatic film handling can be achieved with the daylight NDT film handling system. Automatic film processing is becoming the standard in industry and can be coupled to the daylight system. Robots offer the opportunity to fully automate the exposure step. Finally, computer-aided interpretation appears on the horizon. A unit which laser-scans a 14 x 17 (inch) film in 6-8 seconds can digitize film information for further manipulation and possible automatic interrogation (computer-aided interpretation). The system, called FDRS (for Film Digital Radiography System), is moving toward 50 micron (approximately 16 lines/mm) resolution. This is believed to meet the majority of image-content needs. We expect the automated system to appear first in parts (modules) as certain operations are automated. The future will see it all come together in an automated film radiographic NDT system (author)

  11. Sparse MBPLSR for Metabolomics Data and Biomarker Discovery

    DEFF Research Database (Denmark)

    Karaman, İbrahim

    2014-01-01

    Metabolomics is part of systems biology and a rapidly evolving field. It is a tool to analyze multiple metabolic changes in biofluids and tissues and aims at determining biomarkers in the metabolism. LC-MS (liquid chromatography – mass spectrometry), GC-MS (gas chromatography – mass spectrometry) ... the link between high-throughput metabolomics data generated on different analytical platforms, discover important metabolites deriving from the digestion processes in the gut, and automate metabolic pathway discovery from mass spectrometry. PLS (partial least squares) based chemometric methods were ... potential biomarkers from LC-MS and NMR data could be detected and the relationships among the measurement variables of both analytical methods could be studied. Detection of potential biomarkers is followed up by an identification process through online metabolite and pathway databases. This process ...
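
    As background for the PLS-based methods mentioned above, here is the first latent variable of PLS1 computed directly: a generic sketch, not the thesis's sparse multi-block variant, run on simulated data.

        # First PLS1 component: weights, scores, and loadings.
        import numpy as np

        def pls1_first_component(X, y):
            Xc = X - X.mean(axis=0)          # column-center the predictors
            yc = y - y.mean()
            w = Xc.T @ yc
            w /= np.linalg.norm(w)           # weight vector
            t = Xc @ w                       # scores
            p = Xc.T @ t / (t @ t)           # X-loadings
            q = (t @ yc) / (t @ t)           # y-loading (regression on scores)
            return w, t, p, q

        rng = np.random.default_rng(3)
        X = rng.normal(size=(20, 6))
        y = X @ np.array([1.0, 0, 0, 0.5, 0, 0]) + rng.normal(0, 0.1, 20)
        w, t, p, q = pls1_first_component(X, y)
        print(np.round(w, 2))                # weights highlight variables 1 and 4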

  12. BioinformatIQ™ - integrating data types for proteomic discovery

    International Nuclear Information System (INIS)

    Arthur, J.W.; Harrison, M.; Manoharan, A.; Traini, M.; Shaw, E.; Wilkins, M.

    2001-01-01

    Proteomics (Wilkins et al. 1997) involves the large-scale analysis of expressed proteins. At each stage of the discovery process the researcher accumulates large volumes of data. These include: clinical or biological data about the sample being studied; details of sample purification and separation; images of 2D gels and associated information; MALDI mass spectra; MS/MS and PSD spectra; as well as meta-data relating to the projects undertaken and experiments performed. All this must be combined with existing databases of protein and EST sequences, post-translational modifications, and protein glycosylation, then processed with sophisticated bioinformatics tools in order to extract meaningful answers to questions of biological, clinical, and agricultural significance. BioinformatIQ™ is a web-based application for the storage, management, and automated bioinformatic analysis of proteomic information. This poster will demonstrate the integration of these disparate data sources in proteomics

  13. An Automation Survival Guide for Media Centers.

    Science.gov (United States)

    Whaley, Roger E.

    1989-01-01

    Reviews factors that should affect the decision to automate a school media center and offers suggestions for the automation process. Topics discussed include getting the library collection ready for automation, deciding what automated functions are needed, evaluating software vendors, selecting software, and budgeting. (CLB)

  14. National Automated Conformity Inspection Process -

    Data.gov (United States)

    Department of Transportation — The National Automated Conformity Inspection Process (NACIP) Application is intended to expedite the workflow process as it pertains to the FAA Form 81 0-10 Request...

  15. Home automation with Intel Galileo

    CERN Document Server

    Dundar, Onur

    2015-01-01

    This book is for anyone who wants to learn Intel Galileo for home automation and cross-platform software development. No knowledge of programming with Intel Galileo is assumed, but knowledge of the C programming language is essential.

  16. Fully automated parallel oligonucleotide synthesizer

    Czech Academy of Sciences Publication Activity Database

    Lebl, M.; Burger, Ch.; Ellman, B.; Heiner, D.; Ibrahim, G.; Jones, A.; Nibbe, M.; Thompson, J.; Mudra, Petr; Pokorný, Vít; Poncar, Pavel; Ženíšek, Karel

    2001-01-01

    Vol. 66, No. 8 (2001), pp. 1299-1314. ISSN 0010-0765. Institutional research plan: CEZ:AV0Z4055905. Keywords: automated oligonucleotide synthesizer. Subject RIV: CC - Organic Chemistry. Impact factor: 0.778, year: 2001

  17. Office Automation Boosts University's Productivity.

    Science.gov (United States)

    School Business Affairs, 1986

    1986-01-01

    The University of Pittsburgh has a 2-year agreement designating the Xerox Corporation as the primary supplier of word processing and related office automation equipment, in order to increase productivity and make more efficient use of campus resources. (MLF)

  18. Office Automation at Memphis State.

    Science.gov (United States)

    Smith, R. Eugene; And Others

    1986-01-01

    The development of a university-wide office automation plan, beginning with a short-range pilot project and a five-year plan for the entire organization with the potential for modular implementation, is described. (MSE)

  19. The Evaluation of Automated Systems

    National Research Council Canada - National Science Library

    McDougall, Jeffrey

    2004-01-01

    .... The Army has recognized this change and is adapting to operate in this new environment. It has developed a number of automated tools to assist leaders in the command and control of their organizations...

  20. Automation and Human Resource Management.

    Science.gov (United States)

    Taft, Michael

    1988-01-01

    Discussion of the automation of personnel administration in libraries covers (1) new developments in human resource management systems; (2) system requirements; (3) software evaluation; (4) vendor evaluation; (5) selection of a system; (6) training and support; and (7) benefits. (MES)