WorldWideScience

Sample records for protocols computational methods

  1. IMPROVED COMPUTATIONAL NEUTRONICS METHODS AND VALIDATION PROTOCOLS FOR THE ADVANCED TEST REACTOR

    Energy Technology Data Exchange (ETDEWEB)

    David W. Nigg; Joseph W. Nielsen; Benjamin M. Chase; Ronnie K. Murray; Kevin A. Steuhm

    2012-04-01

    The Idaho National Laboratory (INL) is in the process of modernizing the various reactor physics modeling and simulation tools used to support operation and safety assurance of the Advanced Test Reactor (ATR). Key accomplishments so far have encompassed both computational as well as experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (HELIOS, KENO6/SCALE, NEWT/SCALE, ATTILA, and an extended implementation of MCNP5) has been installed at the INL. Corresponding models of the ATR and ATRC are now operational with all five codes, demonstrating the basic feasibility of the new code packages for their intended purpose. Of particular importance, a set of as-run core depletion HELIOS calculations for all ATR cycles since August 2009 was successfully completed during 2011. This demonstration supported a decision late in the year to proceed with the phased incorporation of the HELIOS methodology into the ATR fuel cycle management process beginning in 2012. On the experimental side of the project, new hardware was fabricated, measurement protocols were finalized, and the first four of six planned physics code validation experiments based on neutron activation spectrometry were conducted at the ATRC facility. Data analysis for the first three experiments, focused on characterization of the neutron spectrum in one of the ATR flux traps, has been completed. The six experiments will ultimately form the basis for a flexible, easily-repeatable ATR physics code validation protocol that is consistent with applicable ASTM standards.

  2. Immunocytochemical methods and protocols

    National Research Council Canada - National Science Library

    Javois, Lorette C

    1999-01-01

    ... monoclonal antibodies to study cell differentiation during embryonic development. For a select few disciplines volumes have been published focusing on the specific application of immunocytochemical techniques to that discipline. What distinguished Immunocytochemical Methods and Protocols from earlier books when it was first published four years ago was i...

  3. Antibody engineering: methods and protocols

    National Research Council Canada - National Science Library

    Chames, Patrick

    2012-01-01

    "Antibody Engineering: Methods and Protocols, Second Edition was compiled to give complete and easy access to a variety of antibody engineering techniques, starting from the creation of antibody repertoires and efficient...

  4. Adaptive security protocol selection for mobile computing

    NARCIS (Netherlands)

    Pontes Soares Rocha, B.; Costa, D.N.O.; Moreira, R.A.; Rezende, C.G.; Loureiro, A.A.F.; Boukerche, A.

    2010-01-01

    The mobile computing paradigm has introduced new problems for application developers. Challenges include heterogeneity of hardware, software, and communication protocols, variability of resource limitations and varying wireless channel quality. In this scenario, security becomes a major concern for

  5. Essential numerical computer methods

    CERN Document Server

    Johnson, Michael L

    2010-01-01

    The use of computers and computational methods has become ubiquitous in biological and biomedical research. During the last 2 decades most basic algorithms have not changed, but what has is the huge increase in computer speed and ease of use, along with the corresponding orders of magnitude decrease in cost. A general perception exists that the only applications of computers and computer methods in biological and biomedical research are either basic statistical analysis or the searching of DNA sequence data bases. While these are important applications they only scratch the surface of the current and potential applications of computers and computer methods in biomedical research. The various chapters within this volume include a wide variety of applications that extend far beyond this limited perception. As part of the Reliable Lab Solutions series, Essential Numerical Computer Methods brings together chapters from volumes 210, 240, 321, 383, 384, 454, and 467 of Methods in Enzymology. These chapters provide ...

  6. Caltech computer scientists develop FAST protocol to speed up Internet

    CERN Multimedia

    2003-01-01

    "Caltech computer scientists have developed a new data transfer protocol for the Internet fast enough to download a full-length DVD movie in less than five seconds. The protocol is called FAST, standing for Fast Active queue management Scalable Transmission Control Protocol" (1 page).

  7. Language, Semantics, and Methods for Security Protocols

    DEFF Research Database (Denmark)

    Crazzolara, Federico

    events. Methods like strand spaces and the inductive method of Paulson have been designed to support an intensional, event-based, style of reasoning. These methods have successfully tackled a number of protocols though in an ad hoc fashion. They make an informal spring from a protocol to its......-nets. They have persistent conditions and as we show in this thesis, unfold under reasonable assumptions to a more basic kind of nets. We relate SPL-nets to strand spaces and inductive rules, as well as trace languages and event structures so unifying a range of approaches, as well as providing conditions under...... reveal. The last few years have seen the emergence of successful intensional, event-based, formal approaches to reasoning about security protocols. The methods are concerned with reasoning about the events that a security protocol can perform, and make use of a causal dependency that exists between...

  8. Clostridium difficile: methods and protocols

    National Research Council Canada - National Science Library

    Mullany, Peter; Roberts, Adam P

    2010-01-01

    .... difficile research to describe the recently developed methods for studying the organism. These range from methods for isolation of the organism, molecular typing, genomics, genetic manipulation, and the use of animal models...

  9. Light microscopy - Methods and protocols

    Directory of Open Access Journals (Sweden)

    CarloAlberto Redi

    2011-11-01

    Full Text Available The first part of the book (six chapters) is devoted to selected applications of bright-field microscopy, while the second part (eight chapters) covers fluorescence microscopy studies. Both animal and plant biology investigations are presented, covering multiple fields such as immunology, cell signaling, cancer biology and, surprisingly to me, ecology. This chapter is titled "Light microscopy in aquatic ecology: Methods for plankton communities studies" and is contributed by Maria Carolina S. Soares and colleagues from the Laboratory of Aquatic Ecology, Dept. of Biology, Federal University of Juiz de Fora (Brazil). Here they present methods to quantify the different components of planktonic communities in a step-by-step manner, so that viruses, bacteria, algae and animals belonging to different taxa can be recognized and their contribution to the plankton composition evaluated. It follows that even changes in plankton composition due to environmental variations can be accurately determined....

  10. Protocol design and implementation using formal methods

    NARCIS (Netherlands)

    van Sinderen, Marten J.; Ferreira Pires, Luis; Pires, L.F.; Vissers, C.A.

    1992-01-01

    This paper reports on a number of formal methods that support correct protocol design and implementation. These methods are placed in the framework of a design methodology for distributed systems that was studied and developed within the ESPRIT II Lotosphere project (2304). The paper focuses on

  11. Minimal computational-space implementation of multiround quantum protocols

    International Nuclear Information System (INIS)

    Bisio, Alessandro; D'Ariano, Giacomo Mauro; Perinotti, Paolo; Chiribella, Giulio

    2011-01-01

    A single-party strategy in a multiround quantum protocol can be implemented by sequential networks of quantum operations connected by internal memories. Here, we provide an efficient realization in terms of computational-space resources.

  12. Blind quantum computation protocol in which Alice only makes measurements

    Science.gov (United States)

    Morimae, Tomoyuki; Fujii, Keisuke

    2013-05-01

    Blind quantum computation is a new secure quantum computing protocol which enables Alice (who does not have sufficient quantum technology) to delegate her quantum computation to Bob (who has a full-fledged quantum computer) in such a way that Bob cannot learn anything about Alice's input, output, and algorithm. In previous protocols, Alice needs to have a device which generates quantum states, such as single-photon states. Here we propose another type of blind computing protocol in which Alice only makes measurements, such as polarization measurements with a threshold detector. In several experimental setups, such as optical systems, the measurement of a state is much easier than the generation of a single-qubit state. Therefore our protocols ease Alice's burden. Furthermore, the security of our protocol is based on the no-signaling principle, which is more fundamental than quantum physics. Finally, our protocols are device independent in the sense that Alice does not need to trust her measurement device in order to guarantee the security.

  13. Zero-Knowledge Protocols and Multiparty Computation

    DEFF Research Database (Denmark)

    Pastro, Valerio

    majority, in which all players but one are controlled by the adversary. In Chapter 5 we present both the preprocessing and the online phase of [DKL+ 13], while in Chapter 2 we describe only the preprocessing phase of [DPSZ12] since the combination of this preprocessing phase with the online phase of [DKL...... on information-theoretic message authentication codes, requires only a linear amount of data from the preprocessing, and improves on the number of field multiplications needed to perform one secure multiplication (linear, instead of quadratic as in earlier work). The preprocessing phase in Chapter 5 comes...... in an actively secure flavour and in a covertly secure one, both of which compare favourably to previous work in terms of efficiency and provable security. Moreover, the covertly secure solution includes a key generation protocol that allows players to obtain a public key and shares of a corresponding secret key...

  14. Dose optimization for multislice computed tomography protocols of the midface

    International Nuclear Information System (INIS)

    Lorenzen, M.; Wedegaertner, U.; Weber, C.; Adam, G.; Lorenzen, J.; Lockemann, U.

    2005-01-01

    Purpose: to optimize multislice computed tomography (MSCT) protocols of the midface for dose reduction and adequate image quality. Materials and methods: MSCT (Somatom Volume Zoom, Siemens) of the midface was performed on 3 cadavers within 24 hours of death with successive reduction of the tube current, applying 150, 100, 70 and 30 mAs at 120 kV as well as 40 and 21 mAs at 80 kV. At 120 kV, a pitch of 0.875 and collimation of 4 x 1 mm were used, and at 80 kV, a pitch of 0.7 and collimation of 2 x 0.5 mm. Images were reconstructed in transverse and coronal orientation. Qualitative image analysis was separately performed by two radiologists using a five-point scale (1 = excellent; 5 = poor) applying the following parameters: image quality, demarcation and sharpness of lamellar bone, overall image quality, and image noise (1 = minor; 5 = strong). The effective body dose [mSv] and organ dose [mSv] of the ocular lens (using the dosimetry system "WINdose") were calculated, and the interobserver agreement (kappa coefficient) was determined. Results: for the evaluation of the lamellar bone, adequate sharpness, demarcation and image quality was demonstrated at 120 kV/30 mAs, and for the overall image quality and noise, 120 kV/40 mAs was acceptable. With regard to image quality, the effective body dose could be reduced from 1.89 mSv to 0.34 mSv and the organ dose of the ocular lens from 27.2 mSv to 4.8 mSv. Interobserver agreement was moderate (kappa = 0.39). Conclusion: adequate image quality was achieved for MSCT protocols of the midface with 30 mAs at 120 kV, resulting in a dose reduction of 70% in comparison to standard protocols. (orig.)

  15. A Distributed Snapshot Protocol for Efficient Artificial Intelligence Computation in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    JongBeom Lim

    2018-01-01

    Full Text Available Many artificial intelligence applications often require a huge amount of computing resources. As a result, cloud computing adoption rates are increasing in the artificial intelligence field. To support the demand for artificial intelligence applications and guarantee the service level agreement, cloud computing should provide not only computing resources but also fundamental mechanisms for efficient computing. In this regard, a snapshot protocol has been used to create a consistent snapshot of the global state in cloud computing environments. However, the existing snapshot protocols are not optimized in the context of artificial intelligence applications, where large-scale iterative computation is the norm. In this paper, we present a distributed snapshot protocol for efficient artificial intelligence computation in cloud computing environments. The proposed snapshot protocol is based on a distributed algorithm to run interconnected multiple nodes in a scalable fashion. Our snapshot protocol is able to deal with artificial intelligence applications, in which a large number of computing nodes are running. We reveal that our distributed snapshot protocol guarantees the correctness, safety, and liveness conditions.
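
    The record above describes a marker-based global snapshot mechanism for interconnected computing nodes. As a rough illustration of the general idea only (a Chandy-Lamport-style snapshot sketch, not the paper's protocol or its optimizations for iterative AI workloads), the Python fragment below shows how a single node records its local state and the messages in flight on incoming channels; the class, channel names and the single-process walk-through are assumptions made for illustration.

```python
class Node:
    """Minimal Chandy-Lamport-style snapshot logic for one node (illustrative only)."""

    def __init__(self, name):
        self.name = name
        self.state = 0                  # application state (e.g., a partial iterative result)
        self.recorded_state = None      # local state captured for the snapshot
        self.recording = {}             # incoming channel -> messages seen while recording
        self.channel_states = {}        # finished per-channel snapshot content

    def start_snapshot(self, incoming_channels):
        self.recorded_state = self.state
        for ch in incoming_channels:
            self.recording[ch] = []     # begin recording in-flight messages on each channel

    def on_message(self, channel, payload, is_marker, incoming_channels):
        if is_marker:
            if self.recorded_state is None:                  # first marker: take local snapshot
                others = [c for c in incoming_channels if c != channel]
                self.start_snapshot(others)
                self.channel_states[channel] = []            # nothing in flight on this channel
            else:
                self.channel_states[channel] = self.recording.pop(channel, [])
        else:
            self.state += payload                            # normal application message
            if channel in self.recording:
                self.recording[channel].append(payload)      # message was in flight


# Tiny single-process walk-through: node B snapshots after receiving a marker from A.
b = Node("B")
b.on_message("A->B", 5, False, ["A->B", "C->B"])
b.on_message("A->B", None, True, ["A->B", "C->B"])   # marker from A triggers B's snapshot
b.on_message("C->B", 7, False, ["A->B", "C->B"])     # recorded as in flight on C->B
b.on_message("C->B", None, True, ["A->B", "C->B"])   # marker from C closes that channel
print(b.recorded_state, b.channel_states)            # -> 5 {'A->B': [], 'C->B': [7]}
```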

  16. Quantitative methods for studying design protocols

    CERN Document Server

    Kan, Jeff WT

    2017-01-01

    This book is aimed at researchers and students who would like to engage in and deepen their understanding of design cognition research. The book presents new approaches for analyzing design thinking and proposes methods of measuring design processes. These methods seek to quantify design issues and design processes that are defined based on notions from the Function-Behavior-Structure (FBS) design ontology and from linkography. A linkograph is a network of linked design moves or segments. FBS ontology concepts have been used in both design theory and design thinking research and have yielded numerous results. Linkography is one of the most influential and elegant design cognition research methods. In this book Kan and Gero provide novel and state-of-the-art methods of analyzing design protocols that offer insights into design cognition by integrating segmentation with linkography by assigning FBS-based codes to design moves or segments and treating links as FBS transformation processes. They propose and test ...

  17. Protocol independent transmission method in software defined optical network

    Science.gov (United States)

    Liu, Yuze; Li, Hui; Hou, Yanfang; Qiu, Yajun; Ji, Yuefeng

    2016-10-01

    With the development of big data and cloud computing technology, the traditional software-defined network is facing new challenges (i.e., ubiquitous accessibility, higher bandwidth, more flexible management and greater security). Using a proprietary protocol or encoding format is one way to improve information security. However, a flow carried by a proprietary protocol or encoding cannot traverse the traditional IP network. In addition, ultra-high-definition video transmission services have once again become a hot topic. Traditionally, in the IP network, the Serial Digital Interface (SDI) signal must be compressed. This approach offers additional advantages but also brings some disadvantages, such as signal degradation and high latency. To some extent, HD-SDI can also be regarded as a proprietary protocol, which needs transparent transmission such as an optical channel. However, traditional optical networks cannot support flexible traffic. In response to the aforementioned challenges for future networks, one immediate solution would be to use NFV technology to abstract the network infrastructure and provide an all-optical switching topology graph for the SDN control plane. This paper proposes a new service-based software-defined optical network architecture, including an infrastructure layer, a virtualization layer, a service abstraction layer and an application layer. We then dwell on the corresponding service-providing method in order to implement protocol-independent transport. Finally, we experimentally demonstrate that the proposed service-providing method can be applied to transmit the HD-SDI signal in the software-defined optical network.

  18. Is computer-assisted instruction more effective than other educational methods in achieving ECG competence among medical students and residents? Protocol for a systematic review and meta-analysis.

    Science.gov (United States)

    Viljoen, Charle André; Scott Millar, Rob; Engel, Mark E; Shelton, Mary; Burch, Vanessa

    2017-12-26

    Although ECG interpretation is an essential skill in clinical medicine, medical students and residents often lack ECG competence. Novel teaching methods are increasingly being implemented and investigated to improve ECG training. Computer-assisted instruction is one such method under investigation; however, its efficacy in achieving better ECG competence among medical students and residents remains uncertain. This article describes the protocol for a systematic review and meta-analysis that will compare the effectiveness of computer-assisted instruction with other teaching methods used for the ECG training of medical students and residents. Only studies with a comparative research design will be considered. Articles will be searched for in electronic databases (PubMed, Scopus, Web of Science, Academic Search Premier, CINAHL, PsycINFO, Education Resources Information Center, Africa-Wide Information and Teacher Reference Center). In addition, we will review citation indexes and conduct a grey literature search. Data extraction will be done on articles that met the predefined eligibility criteria. A descriptive analysis of the different teaching modalities will be provided and their educational impact will be assessed in terms of effect size and the modified version of Kirkpatrick framework for the evaluation of educational interventions. This systematic review aims to provide evidence as to whether computer-assisted instruction is an effective teaching modality for ECG training. It is hoped that the information garnered from this systematic review will assist in future curricular development and improve ECG training. As this research is a systematic review of published literature, ethical approval is not required. The results will be reported according to the Preferred Reporting Items for Systematic Review and Meta-Analysis statement and will be submitted to a peer-reviewed journal. The protocol and systematic review will be included in a PhD dissertation. CRD

  19. Computer network time synchronization the network time protocol

    CERN Document Server

    Mills, David L

    2006-01-01

    What started with the sundial has, thus far, been refined to a level of precision based on atomic resonance: Time. Our obsession with time is evident in this continued scaling down to nanosecond resolution and beyond. But this obsession is not without warrant. Precision and time synchronization are critical in many applications, such as air traffic control and stock trading, and pose complex and important challenges in modern information networks.Penned by David L. Mills, the original developer of the Network Time Protocol (NTP), Computer Network Time Synchronization: The Network Time Protocol

  20. Private quantum computation: an introduction to blind quantum computing and related protocols

    Science.gov (United States)

    Fitzsimons, Joseph F.

    2017-06-01

    Quantum technologies hold the promise of not only faster algorithmic processing of data, via quantum computation, but also of more secure communications, in the form of quantum cryptography. In recent years, a number of protocols have emerged which seek to marry these concepts for the purpose of securing computation rather than communication. These protocols address the task of securely delegating quantum computation to an untrusted device while maintaining the privacy, and in some instances the integrity, of the computation. We present a review of the progress to date in this emerging area.

  1. Computationally Developed Sham Stimulation Protocol for Multichannel Desynchronizing Stimulation

    Directory of Open Access Journals (Sweden)

    Magteld Zeitler

    2018-05-01

    Full Text Available A characteristic pattern of abnormal brain activity is abnormally strong neuronal synchronization, as found in several brain disorders, such as tinnitus, Parkinson's disease, and epilepsy. As observed in several diseases, different therapeutic interventions may induce a placebo effect that may be strong and hinder reliable clinical evaluations. Hence, to distinguish between specific, neuromodulation-induced effects and unspecific, placebo effects, it is important to mimic the therapeutic procedure as precisely as possible, thereby providing controls that actually lack specific effects. Coordinated Reset (CR) stimulation has been developed to specifically counteract abnormally strong synchronization by desynchronization. CR is a spatio-temporally patterned multichannel stimulation which reduces the extent of coincident neuronal activity and aims at an anti-kindling, i.e., an unlearning of both synaptic connectivity and neuronal synchrony. Apart from acute desynchronizing effects, CR may cause sustained, long-lasting desynchronizing effects, as already demonstrated in pre-clinical and clinical proof of concept studies. In this computational study, we set out to computationally develop a sham stimulation protocol for multichannel desynchronizing stimulation. To this end, we compare acute effects and long-lasting effects of six different spatio-temporally patterned stimulation protocols, including three variants of CR, using a no-stimulation condition as additional control. This is to provide an inventory of different stimulation algorithms with similar fundamental stimulation parameters (e.g., mean stimulation rates) but qualitatively different acute and/or long-lasting effects. Stimulation protocols sharing basic parameters, but nevertheless inducing completely different or even no acute effects and/or after-effects, might serve as controls to validate the specific effects of particular desynchronizing protocols such as CR. In particular, based on

  2. Computing Nash equilibria through computational intelligence methods

    Science.gov (United States)

    Pavlidis, N. G.; Parsopoulos, K. E.; Vrahatis, M. N.

    2005-03-01

    Nash equilibrium constitutes a central solution concept in game theory. The task of detecting the Nash equilibria of a finite strategic game remains a challenging problem to date. This paper investigates the effectiveness of three computational intelligence techniques, namely covariance matrix adaptation evolution strategies, particle swarm optimization, and differential evolution, to compute Nash equilibria of finite strategic games as global minima of a real-valued, nonnegative function. An issue of particular interest is to detect more than one Nash equilibrium of a game. The performance of the considered computational intelligence methods on this problem is investigated using multistart and deflection.
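
    As a hedged sketch of the formulation described above (Nash equilibria as global minima of a nonnegative real-valued function), the snippet below encodes a 2x2 bimatrix game, defines a regret-style objective that vanishes exactly at equilibria, and minimizes it with SciPy's differential evolution. The payoff matrices and the exact form of the objective are illustrative assumptions, not necessarily those used in the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Prisoner's-Dilemma-like bimatrix game (row player payoffs A, column player payoffs B).
A = np.array([[3.0, 0.0], [5.0, 1.0]])
B = np.array([[3.0, 5.0], [0.0, 1.0]])

def to_simplex(v):
    """Map an unconstrained vector to a mixed strategy (nonnegative, sums to 1)."""
    v = np.abs(v) + 1e-12
    return v / v.sum()

def regret(z):
    """Nonnegative objective whose global minima (value 0) are Nash equilibria."""
    x, y = to_simplex(z[:2]), to_simplex(z[2:])
    u1, u2 = x @ A @ y, x @ B @ y
    r1 = max(0.0, (A @ y).max() - u1)   # row player's best-response improvement
    r2 = max(0.0, (x @ B).max() - u2)   # column player's best-response improvement
    return r1 ** 2 + r2 ** 2

res = differential_evolution(regret, bounds=[(0.0, 1.0)] * 4, seed=1, tol=1e-12)
print(to_simplex(res.x[:2]), to_simplex(res.x[2:]), res.fun)   # ~ (0,1), (0,1), ~0
```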

  3. A model based security testing method for protocol implementation.

    Science.gov (United States)

    Fu, Yu Long; Xin, Xiao Long

    2014-01-01

    The security of a protocol implementation is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them to generate suitable test cases to verify the security of the protocol implementation.

  4. Security Protocol Review Method Analyzer(SPRMAN)

    OpenAIRE

    Navaz, A. S. Syed; Narayanan, H. Iyyappa; Vinoth, R.

    2013-01-01

    This paper describes a system built using J2EE (JSP, Servlet) and HTML as the front end, with Oracle 9i as the back end. SPRMAN was developed for the client British Telecom (BT), UK, a telecom company. BT provides network security related products to its IT customers, such as Virtusa, Wipro and HCL. The product is framed by a set of protocols, and these protocols are associated with a set of components. By grouping all these protocols and components together, the product is...

  5. Method for computed tomography

    International Nuclear Information System (INIS)

    Wagner, W.

    1980-01-01

    In transversal computer tomography apparatus, in which the positioning zone in which the patient can be positioned is larger than the scanning zone in which a body slice can be scanned, reconstruction errors are liable to occur. These errors are caused by incomplete irradiation of the body during examination. They become manifest not only as an incorrect image of the area not irradiated, but also have an adverse effect on the image of the other, completely irradiated areas. The invention enables reduction of these errors

  6. Dosimetric evaluation of cone beam computed tomography scanning protocols

    International Nuclear Information System (INIS)

    Soares, Maria Rosangela

    2015-01-01

    Scanning protocols of cone beam computed tomography (CBCT) were evaluated. CBCT was introduced in dental radiology at the end of the 1990s and quickly became a fundamental examination for various procedures; its main characteristic, distinguishing it from medical CT, is the beam shape. This study aimed to calculate the absorbed dose in eight tissues/organs of the head and neck, and to estimate the effective dose for 13 protocols and two techniques (stitched FOV and single FOV) on five cone beam CT units from different manufacturers. For that purpose, a female anthropomorphic phantom representing a reference woman was used, in which thermoluminescent dosimeters were inserted at several points representing organs/tissues with the weighting factors given in ICRP Publication 103. The results were evaluated by comparing the dose according to the purpose of the tomographic image. Effective dose differed by up to 325% among protocols with the same imaging goal. Regarding the image acquisition technique, the stitched FOV technique resulted in an effective dose 5.3 times greater than the single FOV technique for protocols with the same imaging goal. Among individual contributions, the salivary glands are responsible for 31% of the effective dose in CBCT exams, and the remaining tissues also make a significant contribution of 36%. The results draw attention to the need for estimating the effective dose for the different units and protocols on the market, in addition to knowledge of the radiation parameters and of the equipment engineering used to obtain the image. (author)
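
    For context, the effective dose referred to above is the weighted sum of organ equivalent doses, E = sum over tissues T of w_T * H_T, using the tissue weighting factors of ICRP Publication 103. The sketch below shows that arithmetic only; the organ doses are made-up placeholders rather than values from this study, and the 0.12 "remainder" factor is in reality shared among the remainder tissues.

```python
# Tissue weighting factors from ICRP Publication 103 (subset; the 0.12 remainder
# factor applies to the mean dose of the remainder tissues).
W_T = {"salivary_glands": 0.01, "thyroid": 0.04, "brain": 0.01,
       "bone_surface": 0.01, "skin": 0.01, "red_bone_marrow": 0.12,
       "oesophagus": 0.04, "remainder_mean": 0.12}

# Placeholder organ equivalent doses in mSv (illustrative only, not measured values).
H_T = {"salivary_glands": 3.1, "thyroid": 1.2, "brain": 2.0,
       "bone_surface": 0.9, "skin": 0.4, "red_bone_marrow": 0.3,
       "oesophagus": 0.2, "remainder_mean": 0.8}

effective_dose_mSv = sum(W_T[t] * H_T[t] for t in H_T)   # E = sum_T w_T * H_T
print(f"Effective dose ~ {effective_dose_mSv:.3f} mSv")
```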

  7. Computational methods working group

    International Nuclear Information System (INIS)

    Gabriel, T.A.

    1997-09-01

    During the Cold Moderator Workshop several working groups were established including one to discuss calculational methods. The charge for this working group was to identify problems in theory, data, program execution, etc., and to suggest solutions considering both deterministic and stochastic methods including acceleration procedures.

  8. Survey of computed tomography doses in head and chest protocols

    International Nuclear Information System (INIS)

    Souza, Giordana Salvi de; Silva, Ana Maria Marques da

    2016-01-01

    Computed tomography is a clinical tool for the diagnosis of patients. However, the patient is subjected to a complex dose distribution. The aim of this study was to survey dose indicators in head and chest CT protocols, in terms of Dose-Length Product (DLP) and effective dose for adult and pediatric patients, comparing them with diagnostic reference levels from the literature. Patients were divided into age groups and the following image acquisition parameters were collected: age, kV, mAs, Volumetric Computed Tomography Dose Index (CTDIvol) and DLP. The effective dose was obtained by multiplying the DLP by conversion factors. The results were obtained from the third quartile and showed the importance of determining kV and mAs values for each patient depending on the studied region, age and thickness. (author)
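
    The DLP-to-effective-dose step mentioned above is a simple multiplication by a region- and age-specific conversion factor k (mSv per mGy*cm). The sketch below uses the commonly cited adult values for head and chest, of the kind tabulated in AAPM Report 96 and the European CT quality criteria, purely as examples; the factors actually applied in the study are not given here.

```python
# Illustrative adult conversion factors (mSv per mGy*cm); the study's own factors may differ.
K_FACTOR = {"head": 0.0021, "chest": 0.014}

def effective_dose_mSv(region: str, dlp_mGy_cm: float) -> float:
    """E ~ k * DLP for a given anatomical region."""
    return K_FACTOR[region] * dlp_mGy_cm

print(effective_dose_mSv("head", 900.0))    # e.g. DLP = 900 mGy*cm -> ~1.9 mSv
print(effective_dose_mSv("chest", 400.0))   # e.g. DLP = 400 mGy*cm -> ~5.6 mSv
```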

  9. Computational Methods in Medicine

    Directory of Open Access Journals (Sweden)

    Angel Garrido

    2010-01-01

    Full Text Available Artificial Intelligence requires Logic. But its Classical version shows too many insufficiencies, so it is absolutely necessary to introduce more sophisticated tools, such as Fuzzy Logic, Modal Logic, Non-Monotonic Logic, and so on [2]. Among the things that AI needs to represent are Categories, Objects, Properties, Relations between objects, Situations, States, Time, Events, Causes and effects, Knowledge about knowledge, and so on. The problems in AI can be classified into two general types [3, 4]: Search Problems and Representation Problems. There exist different ways to reach this objective: we have [3] Logics, Rules, Frames, Associative Nets, Scripts and so on, which are often interconnected. Also, in dealing with problems of uncertainty and causality, it is very useful to introduce Bayesian Networks and, in particular, a principal tool such as the Essential Graph. We attempt here to show the scope of application of such versatile methods, currently fundamental in Medicine.

  10. Numerical methods in matrix computations

    CERN Document Server

    Björck, Åke

    2015-01-01

    Matrix algorithms are at the core of scientific computing and are indispensable tools in most applications in engineering. This book offers a comprehensive and up-to-date treatment of modern methods in matrix computation. It uses a unified approach to direct and iterative methods for linear systems, least squares and eigenvalue problems. A thorough analysis of the stability, accuracy, and complexity of the treated methods is given. Numerical Methods in Matrix Computations is suitable for use in courses on scientific computing and applied technical areas at advanced undergraduate and graduate level. A large bibliography is provided, which includes both historical and review papers as well as recent research papers. This makes the book useful also as a reference and guide to further study and research work. Åke Björck is a professor emeritus at the Department of Mathematics, Linköping University. He is a Fellow of the Society of Industrial and Applied Mathematics.

  11. Mouse cell culture - Methods and protocols

    Directory of Open Access Journals (Sweden)

    CarloAlberto Redi

    2010-12-01

    Full Text Available The mouse is, without any doubt, the experimental animal par excellence for many colleagues within the scientific community, notably for those working in mammalian biology (in a broad sense, from basic genetics to the modeling of human diseases), starting at least from Robert Hooke's 1664 experiments on the properties of air. It is not surprising, then, that mouse cell culture is a well-established field of research in itself and that there are several handbooks devoted to this discipline. Here, Andrew Ward and David Tosh provide a necessary update of the protocols currently needed. In fact, nearly half of the book is devoted to stem cell culture protocols, mainly embryonic, for several organs (kidney, lung, oesophagus and intestine, pancreas and liver, to mention some)........

  12. Numerical computer methods part D

    CERN Document Server

    Johnson, Michael L

    2004-01-01

    The aim of this volume is to brief researchers of the importance of data analysis in enzymology, and of the modern methods that have developed concomitantly with computer hardware. It is also to validate researchers' computer programs with real and synthetic data to ascertain that the results produced are what they expected. Selected Contents: Prediction of protein structure; modeling and studying proteins with molecular dynamics; statistical error in isothermal titration calorimetry; analysis of circular dichroism data; model comparison methods.

  13. Computational Methods in Plasma Physics

    CERN Document Server

    Jardin, Stephen

    2010-01-01

    Assuming no prior knowledge of plasma physics or numerical methods, Computational Methods in Plasma Physics covers the computational mathematics and techniques needed to simulate magnetically confined plasmas in modern magnetic fusion experiments and future magnetic fusion reactors. Largely self-contained, the text presents the basic concepts necessary for the numerical solution of partial differential equations. Along with discussing numerical stability and accuracy, the author explores many of the algorithms used today in enough depth so that readers can analyze their stability, efficiency,

  14. Computational methods in earthquake engineering

    CERN Document Server

    Plevris, Vagelis; Lagaros, Nikos

    2017-01-01

    This is the third book in a series on Computational Methods in Earthquake Engineering. The purpose of this volume is to bring together the scientific communities of Computational Mechanics and Structural Dynamics, offering a wide coverage of timely issues on contemporary Earthquake Engineering. This volume will facilitate the exchange of ideas in topics of mutual interest and can serve as a platform for establishing links between research groups with complementary activities. The computational aspects are emphasized in order to address difficult engineering problems of great social and economic importance. .

  15. Methods for computing color anaglyphs

    Science.gov (United States)

    McAllister, David F.; Zhou, Ya; Sullivan, Sophia

    2010-02-01

    A new computation technique is presented for calculating pixel colors in anaglyph images. The method depends upon knowing the RGB spectral distributions of the display device and the transmission functions of the filters in the viewing glasses. It requires the solution of a nonlinear least-squares program for each pixel in a stereo pair and is based on minimizing color distances in the CIEL*a*b* uniform color space. The method is compared with several techniques for computing anaglyphs including approximation in CIE space using the Euclidean and Uniform metrics, the Photoshop method and its variants, and a method proposed by Peter Wimmer. We also discuss the methods of desaturation and gamma correction for reducing retinal rivalry.
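
    Below is a rough, self-contained sketch of the per-pixel least-squares idea described above: a single anaglyph RGB value is chosen so that the colors seen through the left and right filters are close, in CIE L*a*b*, to the intended left- and right-eye colors. The 3x3 filter "transmission" matrices, the linear-RGB assumption and the D65 white point are simplifications of my own; the paper works from measured spectral distributions of the display and glasses.

```python
import numpy as np
from scipy.optimize import least_squares

# Linear sRGB -> CIE XYZ (D65); a simplification of working from measured spectra.
M_RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]])
WHITE = M_RGB2XYZ @ np.ones(3)

def xyz_to_lab(xyz):
    f = lambda t: np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    x, y, z = f(xyz / WHITE)
    return np.array([116 * y - 16, 500 * (x - y), 200 * (y - z)])

def anaglyph_pixel(rgb_left, rgb_right, T_left, T_right):
    """Pick one anaglyph pixel by nonlinear least squares in CIE L*a*b* space.
    T_left / T_right are hypothetical 3x3 filter matrices (what each lens passes)."""
    target = np.concatenate([xyz_to_lab(M_RGB2XYZ @ rgb_left),
                             xyz_to_lab(M_RGB2XYZ @ rgb_right)])
    def residual(rgb):
        seen = np.concatenate([xyz_to_lab(M_RGB2XYZ @ (T_left @ rgb)),
                               xyz_to_lab(M_RGB2XYZ @ (T_right @ rgb))])
        return seen - target
    sol = least_squares(residual, x0=0.5 * np.ones(3), bounds=(0.0, 1.0))
    return sol.x

# Toy filters: a reddish left lens and a cyan right lens.
T_left = np.diag([0.9, 0.05, 0.05])
T_right = np.diag([0.05, 0.9, 0.9])
print(anaglyph_pixel(np.array([0.2, 0.6, 0.3]), np.array([0.25, 0.55, 0.35]), T_left, T_right))
```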

  16. Protocol Analysis as a Method for Analyzing Conversational Data.

    Science.gov (United States)

    Aleman, Carlos G.; Vangelisti, Anita L.

    Protocol analysis, a technique that uses people's verbal reports about their cognitions as they engage in an assigned task, has been used in a number of applications to provide insight into how people mentally plan, assess, and carry out those assignments. Using a system of networked computers where actors communicate with each other over…

  17. Reduction of cancer risk by optimization of Computed Tomography head protocols: far eastern Cuban experience

    International Nuclear Information System (INIS)

    Miller Clemente, R.; Adame Brooks, D.; Lores Guevara, M.; Perez Diaz, M.; Arias Garlobo, M. L.; Ortega Rodriguez, O.; Nepite Haber, R.; Grinnan Hernandez, O.; Guillama Llosas, A.

    2015-01-01

    Cancer risk estimation is one way to evaluate public health impact with regard to computed tomography (CT) exposures. Starting from the hypothesis that optimizing CT protocols would significantly reduce the added cancer risk, the purpose of this research was to apply optimization strategies to head CT protocols in order to reduce the factors affecting the risk of induced cancer. The systemic approach applied included technological and human components, represented by quantitative physical factors. The volumetric kerma indexes were compared against standard, optimized and reference values using a multiple-means comparison method. The added cancer risk resulted from applying the methodology for evaluating biological effects at low doses with low Linear Energy Transfer. Human observers evaluated the image quality in all scenarios. The reduced dose was significantly lower than for standard head protocols and reference levels: (1) for pediatric patients (ages 10-14), a reduction of 31% compared with the standard protocol was obtained using an Automatic Exposure Control system, and (2) for adults, a reduction of 62% relative to the standard head protocol was obtained using a Bilateral Filter on images acquired at low dose. The risk reduction was higher than 25%. The systemic approach used allows the effective identification of factors involved in the cancer risk related to CT exposures. The combination of dose modulation and image restoration with a Bilateral Filter provides a significant reduction of cancer risk with acceptable diagnostic image quality. (Author)
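
    Since the study above relies on bilateral filtering of low-dose images, here is a minimal sketch of that step using OpenCV; the simulated noisy slice and the filter parameters (neighbourhood diameter and the two sigmas) are arbitrary placeholders, not the values used in the study.

```python
import numpy as np
import cv2

# Fake noisy low-dose "CT slice" (float32, single channel) just to exercise the call.
rng = np.random.default_rng(0)
noisy_slice = rng.normal(loc=100.0, scale=20.0, size=(512, 512)).astype(np.float32)

# Edge-preserving smoothing: pixels are averaged with neighbours that are close both
# spatially (sigmaSpace) and in intensity (sigmaColor).
denoised = cv2.bilateralFilter(noisy_slice, d=9, sigmaColor=40.0, sigmaSpace=5.0)
print(noisy_slice.std(), denoised.std())   # noise is reduced while edges are preserved
```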

  18. Computational methods in drug discovery

    Directory of Open Access Journals (Sweden)

    Sumudu P. Leelananda

    2016-12-01

    Full Text Available The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein–ligand docking, pharmacophore modeling and QSAR techniques are reviewed.

  19. Fast protocol for radiochromic film dosimetry using a cloud computing web application.

    Science.gov (United States)

    Calvo-Ortega, Juan-Francisco; Pozo, Miquel; Moragues, Sandra; Casals, Joan

    2017-07-01

    To investigate the feasibility of a fast protocol for radiochromic film dosimetry to verify intensity-modulated radiotherapy (IMRT) plans. EBT3 film dosimetry was conducted in this study using the triple-channel method implemented in the cloud computing application (Radiochromic.com). We described a fast protocol for radiochromic film dosimetry to obtain measurement results within 1h. Ten IMRT plans were delivered to evaluate the feasibility of the fast protocol. The dose distribution of the verification film was derived at 15, 30, 45min using the fast protocol and also at 24h after completing the irradiation. The four dose maps obtained per plan were compared using global and local gamma index (5%/3mm) with the calculated one by the treatment planning system. Gamma passing rates obtained for 15, 30 and 45min post-exposure were compared with those obtained after 24h. Small differences respect to the 24h protocol were found in the gamma passing rates obtained for films digitized at 15min (global: 99.6%±0.9% vs. 99.7%±0.5%; local: 96.3%±3.4% vs. 96.3%±3.8%), at 30min (global: 99.5%±0.9% vs. 99.7%±0.5%; local: 96.5%±3.2% vs. 96.3±3.8%) and at 45min (global: 99.2%±1.5% vs. 99.7%±0.5%; local: 96.1%±3.8% vs. 96.3±3.8%). The fast protocol permits dosimetric results within 1h when IMRT plans are verified, with similar results as those reported by the standard 24h protocol. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
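
    The passing rates quoted above come from a gamma comparison between measured and calculated dose maps. As a loose illustration (not the Radiochromic.com implementation), the following sketch computes a simplified global gamma passing rate for two 2D dose grids on the same geometry, using a brute-force search within the distance-to-agreement radius; the 5%/3 mm criteria and the low-dose threshold are parameters.

```python
import numpy as np

def gamma_pass_rate(ref, meas, spacing_mm, dose_tol=0.05, dist_tol_mm=3.0, low_dose_cut=0.10):
    """Simplified global gamma (5%/3 mm by default) for 2D dose grids sharing one geometry."""
    ref = np.asarray(ref, dtype=float)
    meas = np.asarray(meas, dtype=float)
    dmax = ref.max()
    radius = int(np.ceil(dist_tol_mm / spacing_mm))           # search radius in pixels
    ny, nx = ref.shape
    gamma = np.full(ref.shape, np.nan)
    for j in range(ny):
        for i in range(nx):
            if meas[j, i] < low_dose_cut * dmax:               # ignore the low-dose region
                continue
            best = np.inf
            for dj in range(-radius, radius + 1):
                for di in range(-radius, radius + 1):
                    jj, ii = j + dj, i + di
                    if 0 <= jj < ny and 0 <= ii < nx:
                        dose_term = (meas[j, i] - ref[jj, ii]) / (dose_tol * dmax)
                        dist_term = spacing_mm * np.hypot(dj, di) / dist_tol_mm
                        best = min(best, dose_term ** 2 + dist_term ** 2)
            gamma[j, i] = np.sqrt(best)
    evaluated = ~np.isnan(gamma)
    return 100.0 * np.mean(gamma[evaluated] <= 1.0)

# Toy example: a smooth reference field and a measurement with a 2% global overdose.
y, x = np.mgrid[0:64, 0:64]
reference = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 400.0)
measured = reference * 1.02
print(gamma_pass_rate(reference, measured, spacing_mm=1.0))    # expected: 100.0
```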

  20. Combinatorial methods with computer applications

    CERN Document Server

    Gross, Jonathan L

    2007-01-01

    Combinatorial Methods with Computer Applications provides in-depth coverage of recurrences, generating functions, partitions, and permutations, along with some of the most interesting graph and network topics, design constructions, and finite geometries. Requiring only a foundation in discrete mathematics, it can serve as the textbook in a combinatorial methods course or in a combined graph theory and combinatorics course.After an introduction to combinatorics, the book explores six systematic approaches within a comprehensive framework: sequences, solving recurrences, evaluating summation exp

  1. Tracing Method with Intra and Inter Protocols Correlation

    Directory of Open Access Journals (Sweden)

    Marin Mangri

    2009-05-01

    Full Text Available MEGACO or H.248 is a protocol enabling a centralized Softswitch (or MGC) to control MGs between Voice over Packet (VoP) networks and traditional ones. To analyze real implementations much more deeply, it is useful to use a tracing system with intra- and inter-protocol correlation. For this reason, in the case of MEGACO-H.248 it is necessary to find the appropriate method of correlation with all protocols involved. Starting from Rel4, a separation of CP (Control Plane) and UP (User Plane) management within the networks appears. The MEGACO protocol plays an important role in the migration to the new releases or from a monolithic platform to a network with distributed components.

  2. Analysis and Verification of a Key Agreement Protocol over Cloud Computing Using Scyther Tool

    OpenAIRE

    Hazem A Elbaz

    2015-01-01

    Most cloud computing authentication mechanisms use public key infrastructure (PKI). Hierarchical Identity Based Cryptography (HIBC) has several advantages that align well with the demands of cloud computing. The main objectives of cloud computing authentication protocols are security and efficiency. In this paper, we clarify the Hierarchical Identity Based Authentication Key Agreement (HIB-AKA) protocol, providing a lightweight key management approach for cloud computing users. Then, we...

  3. Computational methods for fluid dynamics

    CERN Document Server

    Ferziger, Joel H

    2002-01-01

    In its 3rd revised and extended edition the book offers an overview of the techniques used to solve problems in fluid mechanics on computers and describes in detail those most often used in practice. Included are advanced methods in computational fluid dynamics, like direct and large-eddy simulation of turbulence, multigrid methods, parallel computing, moving grids, structured, block-structured and unstructured boundary-fitted grids, free surface flows. The 3rd edition contains a new section dealing with grid quality and an extended description of discretization methods. The book shows common roots and basic principles for many different methods. The book also contains a great deal of practical advice for code developers and users, it is designed to be equally useful to beginners and experts. The issues of numerical accuracy, estimation and reduction of numerical errors are dealt with in detail, with many examples. A full-feature user-friendly demo-version of a commercial CFD software has been added, which ca...

  4. Numerical computer methods part E

    CERN Document Server

    Johnson, Michael L

    2004-01-01

    The contributions in this volume emphasize analysis of experimental data and analytical biochemistry, with examples taken from biochemistry. They serve to inform biomedical researchers of the modern data analysis methods that have developed concomitantly with computer hardware. Selected Contents: A practical approach to interpretation of SVD results; modeling of oscillations in endocrine networks with feedback; quantifying asynchronous breathing; sample entropy; wavelet modeling and processing of nasal airflow traces.

  5. Computational methods for stellarator configurations

    International Nuclear Information System (INIS)

    Betancourt, O.

    1992-01-01

    This project had two main objectives. The first one was to continue to develop computational methods for the study of three dimensional magnetic confinement configurations. The second one was to collaborate and interact with researchers in the field who can use these techniques to study and design fusion experiments. The first objective has been achieved with the development of the spectral code BETAS and the formulation of a new variational approach for the study of magnetic island formation in a self consistent fashion. The code can compute the correct island width corresponding to the saturated island, a result shown by comparing the computed island with the results of unstable tearing modes in Tokamaks and with experimental results in the IMS Stellarator. In addition to studying three dimensional nonlinear effects in Tokamaks configurations, these self consistent computed island equilibria will be used to study transport effects due to magnetic island formation and to nonlinearly bifurcated equilibria. The second objective was achieved through direct collaboration with Steve Hirshman at Oak Ridge, D. Anderson and R. Talmage at Wisconsin as well as through participation in the Sherwood and APS meetings

  6. Validation of internal dosimetry protocols based on stochastic method

    International Nuclear Information System (INIS)

    Mendes, Bruno M.; Fonseca, Telma C.F.; Almeida, Iassudara G.; Trindade, Bruno M.; Campos, Tarcisio P.R.

    2015-01-01

    Computational phantoms adapted to Monte Carlo codes have been applied successfully in radiation dosimetry. The NRI research group has been developing Internal Dosimetry Protocols (IDPs), addressing distinct methodologies, software and computational human simulators to perform internal dosimetry, especially for new radiopharmaceuticals. Validation of the IDPs is critical to ensure the reliability of simulation results, and intercomparison of data from the literature with data produced by our IDPs is a suitable validation method. The aim of this study was to validate the IDPs following such an intercomparison procedure. The Golem phantom was reconfigured to run on MCNP5. The specific absorbed fractions (SAF) for photons at 30, 100 and 1000 keV were simulated with the IDPs and compared with reference values (RV) published by Zankl and Petoussi-Henss, 1998. The average difference between the SAF obtained in the IDP simulations and the RV was 2.3 %. The largest SAF differences were found for low-energy photons at 30 keV. The adrenals and thyroid, i.e. the lowest-mass organs, showed the highest SAF discrepancies from the RV, 7.2 % and 3.8 %, respectively. The statistical differences between the SAF obtained with our IDPs and the reference values were considered acceptable for the 30, 100 and 1000 keV spectra. We believe that the main reason for the discrepancies found in lower-mass organs in the IDP runs was our source definition methodology. Improvements in the source spatial distribution within the voxels may provide outputs more consistent with the reference values for lower-mass organs. (author)
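
    The intercomparison above boils down to percent differences between simulated SAFs and published reference values. A trivial sketch of that bookkeeping follows; the SAF magnitudes are placeholders chosen only to produce differences of the same order as those quoted, not data from the study or from Zankl and Petoussi-Henss.

```python
# Placeholder SAF values (kg^-1) for 30 keV photons; illustrative numbers only.
saf_reference = {"liver": 1.20e-2, "thyroid": 5.00e-2, "adrenals": 8.00e-2}
saf_simulated = {"liver": 1.22e-2, "thyroid": 5.19e-2, "adrenals": 8.58e-2}

for organ, ref in saf_reference.items():
    diff_pct = 100.0 * (saf_simulated[organ] - ref) / ref
    print(f"{organ:8s}  {diff_pct:+.1f} %")
```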

  7. Validation of internal dosimetry protocols based on stochastic method

    Energy Technology Data Exchange (ETDEWEB)

    Mendes, Bruno M.; Fonseca, Telma C.F., E-mail: bmm@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil); Almeida, Iassudara G.; Trindade, Bruno M.; Campos, Tarcisio P.R., E-mail: tprcampos@yahoo.com.br [Universidade Federal de Minas Gerais (DEN/UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear

    2015-07-01

    Computational phantoms adapted to Monte Carlo codes have been applied successfully in radiation dosimetry. The NRI research group has been developing Internal Dosimetry Protocols (IDPs), addressing distinct methodologies, software and computational human simulators to perform internal dosimetry, especially for new radiopharmaceuticals. Validation of the IDPs is critical to ensure the reliability of simulation results, and intercomparison of data from the literature with data produced by our IDPs is a suitable validation method. The aim of this study was to validate the IDPs following such an intercomparison procedure. The Golem phantom was reconfigured to run on MCNP5. The specific absorbed fractions (SAF) for photons at 30, 100 and 1000 keV were simulated with the IDPs and compared with reference values (RV) published by Zankl and Petoussi-Henss, 1998. The average difference between the SAF obtained in the IDP simulations and the RV was 2.3 %. The largest SAF differences were found for low-energy photons at 30 keV. The adrenals and thyroid, i.e. the lowest-mass organs, showed the highest SAF discrepancies from the RV, 7.2 % and 3.8 %, respectively. The statistical differences between the SAF obtained with our IDPs and the reference values were considered acceptable for the 30, 100 and 1000 keV spectra. We believe that the main reason for the discrepancies found in lower-mass organs in the IDP runs was our source definition methodology. Improvements in the source spatial distribution within the voxels may provide outputs more consistent with the reference values for lower-mass organs. (author)

  8. Computational methods for molecular imaging

    CERN Document Server

    Shi, Kuangyu; Li, Shuo

    2015-01-01

    This volume contains original submissions on the development and application of molecular imaging computing. The editors invited authors to submit high-quality contributions on a wide range of topics including, but not limited to: • Image Synthesis & Reconstruction of Emission Tomography (PET, SPECT) and other Molecular Imaging Modalities • Molecular Imaging Enhancement • Data Analysis of Clinical & Pre-clinical Molecular Imaging • Multi-Modal Image Processing (PET/CT, PET/MR, SPECT/CT, etc.) • Machine Learning and Data Mining in Molecular Imaging. Molecular imaging is an evolving clinical and research discipline enabling the visualization, characterization and quantification of biological processes taking place at the cellular and subcellular levels within intact living subjects. Computational methods play an important role in the development of molecular imaging, from image synthesis to data analysis and from clinical diagnosis to therapy individualization. This work will bring readers fro...

  9. A Protocol for Provably Secure Authentication of a Tiny Entity to a High Performance Computing One

    Directory of Open Access Journals (Sweden)

    Siniša Tomović

    2016-01-01

    Full Text Available The problem of developing authentication protocols dedicated to a specific scenario, in which an entity with limited computational capabilities should prove its identity to a computationally powerful Verifier, is addressed. An authentication protocol suitable for the considered scenario, which jointly employs the learning parity with noise (LPN) problem and a paradigm of random selection, is proposed. It is shown that the proposed protocol is secure against active attacking scenarios and so-called GRS man-in-the-middle (MIM) attacking scenarios. In comparison with related, previously reported authentication protocols, the proposed one reduces the implementation complexity and provides at least the same level of cryptographic security.
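
    For intuition about the LPN ingredient mentioned above, here is a toy HB-style authentication round in Python: the prover answers noisy inner products of random challenges with a shared secret, and the verifier accepts if the error rate stays below a threshold. This is only the basic LPN idea; the protocol in the paper adds a random-selection paradigm and countermeasures against GRS man-in-the-middle attacks that this sketch does not capture, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rounds, eta, threshold = 128, 256, 0.125, 0.35   # key length, rounds, noise rate, acceptance bound

secret = rng.integers(0, 2, n)                      # shared secret s in {0,1}^n

def prover_response(challenge):
    """Prover (the tiny entity) returns <challenge, s> XOR a Bernoulli(eta) noise bit."""
    noise = rng.random() < eta
    return (int(challenge @ secret) & 1) ^ int(noise)

errors = 0
for _ in range(rounds):
    a = rng.integers(0, 2, n)                       # Verifier's random challenge
    z = prover_response(a)
    errors += z ^ (int(a @ secret) & 1)             # Verifier checks against its own parity
print("accept" if errors <= threshold * rounds else "reject")
```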

  10. Computer methods in general relativity: algebraic computing

    CERN Document Server

    Araujo, M E; Skea, J E F; Koutras, A; Krasinski, A; Hobill, D; McLenaghan, R G; Christensen, S M

    1993-01-01

    Karlhede & MacCallum [1] gave a procedure for determining the Lie algebra of the isometry group of an arbitrary pseudo-Riemannian manifold, which they intended to implement using the symbolic manipulation package SHEEP but never did. We have recently finished making this procedure explicit by giving an algorithm suitable for implementation on a computer [2]. Specifically, we have written an algorithm for determining the isometry group of a spacetime (in four dimensions), and partially implemented this algorithm using the symbolic manipulation package CLASSI, which is an extension of SHEEP.

  11. Evaluation of condyle defects using different reconstruction protocols of cone-beam computed tomography

    International Nuclear Information System (INIS)

    Bastos, Luana Costa; Campos, Paulo Sergio Flores; Ramos-Perez, Flavia Maria de Moraes; Pontual, Andrea dos Anjos; Almeida, Solange Maria

    2013-01-01

    This study was conducted to investigate how well cone-beam computed tomography (CBCT) can detect simulated cavitary defects in condyles, and to test the influence of the reconstruction protocols. Defects were created with spherical diamond burs (numbers 1013, 1016, 3017) in superior and / or posterior surfaces of twenty condyles. The condyles were scanned, and cross-sectional reconstructions were performed with nine different protocols, based on slice thickness (0.2, 0.6, 1.0 mm) and on the filters (original image, Sharpen Mild, S9) used. Two observers evaluated the defects, determining their presence and location. Statistical analysis was carried out using simple Kappa coefficient and McNemar’s test to check inter- and intra-rater reliability. The chi-square test was used to compare the rater accuracy. Analysis of variance (Tukey's test) assessed the effect of the protocols used. Kappa values for inter- and intra-rater reliability demonstrate almost perfect agreement. The proportion of correct answers was significantly higher than that of errors for cavitary defects on both condyle surfaces (p < 0.01). Only in identifying the defects located on the posterior surface was it possible to observe the influence of the 1.0 mm protocol thickness and no filter, which showed a significantly lower value. Based on the results of the current study, the technique used was valid for identifying the existence of cavities in the condyle surface. However, the protocol of a 1.0 mm-thick slice and no filter proved to be the worst method for identifying the defects on the posterior surface. (author)

  12. Evaluation of condyle defects using different reconstruction protocols of cone-beam computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Bastos, Luana Costa; Campos, Paulo Sergio Flores, E-mail: bastosluana@ymail.com [Universidade Federal da Bahia (UFBA), Salvador, BA (Brazil). Fac. de Odontologia. Dept. de Radiologia Oral e Maxilofacial; Ramos-Perez, Flavia Maria de Moraes [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Fac. de Odontologia. Dept. de Clinica e Odontologia Preventiva; Pontual, Andrea dos Anjos [Universidade Federal de Pernambuco (UFPE), Camaragibe, PE (Brazil). Fac. de Odontologia. Dept. de Radiologia Oral; Almeida, Solange Maria [Universidade Estadual de Campinas (UNICAMP), Piracicaba, SP (Brazil). Fac. de Odontologia. Dept. de Radiologia Oral

    2013-11-15

    This study was conducted to investigate how well cone-beam computed tomography (CBCT) can detect simulated cavitary defects in condyles, and to test the influence of the reconstruction protocols. Defects were created with spherical diamond burs (numbers 1013, 1016, 3017) in superior and / or posterior surfaces of twenty condyles. The condyles were scanned, and cross-sectional reconstructions were performed with nine different protocols, based on slice thickness (0.2, 0.6, 1.0 mm) and on the filters (original image, Sharpen Mild, S9) used. Two observers evaluated the defects, determining their presence and location. Statistical analysis was carried out using simple Kappa coefficient and McNemar’s test to check inter- and intra-rater reliability. The chi-square test was used to compare the rater accuracy. Analysis of variance (Tukey's test) assessed the effect of the protocols used. Kappa values for inter- and intra-rater reliability demonstrate almost perfect agreement. The proportion of correct answers was significantly higher than that of errors for cavitary defects on both condyle surfaces (p < 0.01). Only in identifying the defects located on the posterior surface was it possible to observe the influence of the 1.0 mm protocol thickness and no filter, which showed a significantly lower value. Based on the results of the current study, the technique used was valid for identifying the existence of cavities in the condyle surface. However, the protocol of a 1.0 mm-thick slice and no filter proved to be the worst method for identifying the defects on the posterior surface. (author)

  13. Fast computation of the characteristics method on vector computers

    International Nuclear Information System (INIS)

    Kugo, Teruhiko

    2001-11-01

    Fast computation of the characteristics method to solve the neutron transport equation in a heterogeneous geometry has been studied. Two vector computation algorithms, an odd-even sweep (OES) method and an independent sequential sweep (ISS) method, have been developed, and their efficiency for a typical fuel assembly calculation has been investigated. For both methods, a vector computation is 15 times faster than a scalar computation. From a comparison between the OES and ISS methods, the following is found: 1) there is a small difference in computation speed, 2) the ISS method shows faster convergence, and 3) the ISS method saves about 80% of the computer memory size compared with the OES method. It is, therefore, concluded that the ISS method is superior to the OES method as a vectorization method. In the vector computation, a table-look-up method to reduce the computation time of the exponential function saves only 20% of the whole computation time. Both the coarse mesh rebalance method and the Aitken acceleration method are effective as acceleration methods for the characteristics method; a combination of them saves 70-80% of the outer iterations compared with free iteration. (author)
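
    The table-look-up idea mentioned above can be sketched as follows; this is a generic illustration (precompute exp(-x) on a grid and interpolate), not the authors' implementation:

        import numpy as np

        # Precompute exp(-x) on a uniform grid once, then interpolate instead of calling exp()
        X_MAX, N_TABLE = 20.0, 4096
        grid = np.linspace(0.0, X_MAX, N_TABLE)
        table = np.exp(-grid)

        def exp_neg_lookup(x):
            """Approximate exp(-x) for x >= 0 via linear interpolation in the precomputed table."""
            return np.interp(np.minimum(x, X_MAX), grid, table)

        # Example: attenuation factors for many ray segments at once (hypothetical optical thicknesses)
        tau = np.random.default_rng(0).random(100_000) * 5.0
        att = exp_neg_lookup(tau)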

  14. Readjustment of abdominal computed tomography protocols in a university hospital: impact on radiation dose

    Energy Technology Data Exchange (ETDEWEB)

    Romano, Ricardo Francisco Tavares; Salvadori, Priscila Silveira; Torres, Lucas Rios; Bretas, Elisa Almeida Sathler; Bekhor, Daniel; Medeiros, Regina Bitelli; D' Ippolito, Giuseppe, E-mail: ricardo.romano@unifesp.br [Universidade Federal de Sao Paulo (EPM/UNIFESP), Sao Paulo, SP (Brazil). Escola Paulista de Medicina; Caldana, Rogerio Pedreschi [Fleury Medicina e Saude, Sao Paulo, SP (Brazil)

    2015-09-15

    Objective: To assess the reduction of estimated radiation dose in abdominal computed tomography following the implementation of new scan protocols based on clinical suspicion and on adjusted image acquisition parameters. Materials and Methods: Retrospective and prospective review of radiation dose reports from abdominal CT scans performed three months before (group A - 551 studies) and three months after (group B - 788 studies) implementation of new scan protocols proposed as a function of clinical indications. Image acquisition parameters were also adjusted to reduce the radiation dose at each scan phase. The groups were compared for mean number of acquisition phases, mean CTDIvol per phase, mean DLP per phase, and mean DLP per scan. Results: A significant reduction was observed for group B in all the analyzed aspects: 33.9%, 25.0%, 27.0% and 52.5%, respectively, for number of acquisition phases, CTDIvol per phase, DLP per phase and DLP per scan (p < 0.001). Conclusion: The rational use of abdominal computed tomography scan phases based on clinical suspicion, in conjunction with adjusted image acquisition parameters, allows for a 50% reduction in the radiation dose from abdominal computed tomography scans. (author)
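
    The reported reductions are relative differences between group means. A minimal sketch of that arithmetic in Python, using hypothetical dose values rather than the study's data:

        def percent_reduction(before, after):
            """Relative reduction of a mean dose metric, in percent."""
            return 100.0 * (before - after) / before

        # Hypothetical mean DLP per scan (mGy*cm) before and after protocol adjustment
        print(round(percent_reduction(1200.0, 570.0), 1))   # -> 52.5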

  15. [Multidisciplinary protocol for computed tomography imaging and angiographic embolization of splenic injury due to trauma: assessment of pre-protocol and post-protocol outcomes].

    Science.gov (United States)

    Koo, M; Sabaté, A; Magalló, P; García, M A; Domínguez, J; de Lama, M E; López, S

    2011-11-01

    To assess conservative treatment of splenic injury due to trauma, following a protocol for computed tomography (CT) and angiographic embolization. To quantify the predictive value of CT for detecting bleeding and need for embolization. The care protocol developed by the multidisciplinary team consisted of angiography with embolization of lesions revealed by contrast extravasation under CT as well as embolization of grade III-V injuries observed, or grade I-II injuries causing hemodynamic instability and/or need for blood transfusion. We collected data on demographic variables, injury severity score (ISS), angiographic findings, and injuries revealed by CT. Pre-protocol and post-protocol outcomes were compared. The sensitivity and specificity of CT findings were calculated for all patients who required angiographic embolization. Forty-four and 30 angiographies were performed in the pre- and post-protocol periods, respectively. The mean (SD) ISSs in the two periods were 25 (11) and 26 (12), respectively. A total of 24 (54%) embolizations were performed in the pre-protocol period and 28 (98%) after implementation of the protocol. Two and 7 embolizations involved the spleen in the 2 periods, respectively; abdominal laparotomies numbered 32 and 25, respectively, and 10 (31%) vs 4 (16%) splenectomies were performed. The specificity and sensitivity values for contrast extravasation found on CT and followed by embolization were 77.7% and 79.5%. The implementation of this multidisciplinary protocol using CT imaging and angiographic embolization led to a decrease in the number of splenectomies. The protocol allows us to take a more conservative treatment approach.
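
    The reported specificity and sensitivity follow from the standard 2x2 counts of CT extravasation versus embolization performed. A minimal sketch of the arithmetic in Python, with hypothetical counts (the abstract reports only the resulting percentages):

        def sensitivity_specificity(tp, fp, fn, tn):
            """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
            return tp / (tp + fn), tn / (tn + fp)

        # Hypothetical confusion-matrix counts: CT extravasation (test) vs. embolization performed (reference)
        sens, spec = sensitivity_specificity(tp=35, fp=6, fn=9, tn=21)
        print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")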

  16. Computational Methodologies for Developing Structure–Morphology–Performance Relationships in Organic Solar Cells: A Protocol Review

    KAUST Repository

    Do, Khanh; Ravva, Mahesh Kumar; Wang, Tonghui; Bredas, Jean-Luc

    2016-01-01

    We outline a step-by-step protocol that incorporates a number of theoretical and computational methodologies to evaluate the structural and electronic properties of pi-conjugated semiconducting materials in the condensed phase. Our focus

  17. Backpressure-based control protocols: design and computational aspects

    NARCIS (Netherlands)

    Miretskiy, D.I.; Scheinhardt, Willem R.W.; Mandjes, M.R.H.

    2009-01-01

    Congestion control in packet-based networks is often realized by feedback protocols. In this paper we assess their performance under a back-pressure mechanism that has been proposed and standardized for Ethernet metropolitan networks. In such a mechanism the service rate of an upstream queue is

  18. Backpressure-based control protocols: Design and computational aspects

    NARCIS (Netherlands)

    Miretskiy, D.I.; Scheinhardt, W.R.W.; Mandjes, M.R.H.

    2009-01-01

    Congestion control in packet-based networks is often realized by feedback protocols. In this paper we assess their performance under a back-pressure mechanism that has been proposed and standardized for Ethernet metropolitan networks. In such a mechanism the service rate of an upstream queue is

  19. Introduction to basic immunological methods : Generalities, Principles, Protocols and Variants of basic protocols

    International Nuclear Information System (INIS)

    Mejri, Naceur

    2013-01-01

    This manuscript is dedicated to students of the biological sciences. It provides the information necessary to perform the practical work most commonly used in immunology. During my doctoral and post-doctoral periods, a panoply of methods was employed across the diverse subjects of my research. The technical means used in my investigations were diverse enough that I could extract a set of techniques covering most of the basic immunological methods. Each chapter of this manuscript contains a fairly complete description of an immunological method. In each topic, the basic protocol and its variants are preceded by background information in paragraphs covering the principle and generalities. The emphasis is placed on describing the situations in which each method and its variants are used. These basic immunological methods are useful for students and researchers studying the immune system of humans, mice and other species. Each subject presents not only detailed protocols but also photos and/or schemas used to illustrate theoretical or practical knowledge. I hope that students will find this manual interesting and easy to use, and that it contains the information necessary to acquire skills in immunological practice. (Author)

  20. Computational Methods and Function Theory

    CERN Document Server

    Saff, Edward; Salinas, Luis; Varga, Richard

    1990-01-01

    The volume is devoted to the interaction of modern scientific computation and classical function theory. Many problems in pure and more applied function theory can be tackled using modern computing facilities: numerically as well as in the sense of computer algebra. On the other hand, computer algorithms are often based on complex function theory, and dedicated research on their theoretical foundations can lead to great enhancements in performance. The contributions - original research articles, a survey and a collection of problems - cover a broad range of such problems.

  1. Epidemic Protocols for Pervasive Computing Systems - Moving Focus from Architecture to Protocol

    DEFF Research Database (Denmark)

    Mogensen, Martin

    2009-01-01

    Pervasive computing systems are inherently running on unstable networks and devices, subject to constant topology changes, network failures, and high churn. For this reason, pervasive computing infrastructures need to handle these issues as part of their design. This is, however, not feasible, si...

  2. Transgenic mouse - Methods and protocols, 2nd edition

    Directory of Open Access Journals (Sweden)

    Carlo Alberto Redi

    2011-09-01

    Full Text Available Marten H. Hofner (from the Dept. of Pathology of Groningen University) and Jan M. van Deursen (from the Mayo College of Medicine at Rochester, MN, USA) provided us with the valuable second edition of Transgenic mouse: in fact, even though we are in the –omics era and already equipped with state-of-the-art techniques in every field, we still need gene(s) functional analysis data to understand common and complex diseases. Transgenesis is still an irreplaceable method, and protocols for performing it well are more than welcome. Here, how to obtain genetically modified mice (the quintessential model of so many human diseases, considering how many human genes are conserved in the mouse and the great blocks of genic synteny existing between the two genomes) is analysed in depth and presented in clearly detailed, step-by-step protocols....

  3. DNA arrays : methods and protocols [Methods in molecular biology, v. 170

    National Research Council Canada - National Science Library

    Rampal, Jang B

    2001-01-01

    "In DNA Arrays: Methods and Protocols, Jang Rampal and a authoritative panel of researchers, engineers, and technologists explain in detail how to design and construct DNA microarrays, as well as how to...

  4. Computational methods for reversed-field equilibrium

    International Nuclear Information System (INIS)

    Boyd, J.K.; Auerbach, S.P.; Willmann, P.A.; Berk, H.L.; McNamara, B.

    1980-01-01

    Investigating the temporal evolution of reversed-field equilibrium caused by transport processes requires the solution of the Grad-Shafranov equation and computation of field-line-averaged quantities. The technique for field-line averaging and the computation of the Grad-Shafranov equation are presented. Application of Green's function to specify the Grad-Shafranov equation boundary condition is discussed. Hill's vortex formulas used to verify certain computations are detailed. Use of computer software to implement computational methods is described
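
    For reference (the abstract itself does not quote it), the Grad-Shafranov equation solved here for the poloidal flux psi(R, Z), with p(psi) the pressure profile and F(psi) = R*B_phi, has the standard form:

        \Delta^{*}\psi \equiv R\,\frac{\partial}{\partial R}\!\left(\frac{1}{R}\,\frac{\partial\psi}{\partial R}\right) + \frac{\partial^{2}\psi}{\partial Z^{2}} = -\mu_{0} R^{2}\,\frac{dp}{d\psi} - F\,\frac{dF}{d\psi}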

  5. An Empirical Study and some Improvements of the MiniMac Protocol for Secure Computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Lauritsen, Rasmus; Toft, Tomas

    2014-01-01

    Recent developments in Multi-party Computation (MPC) have resulted in very efficient protocols for dishonest majority in the preprocessing model. In particular, two very promising protocols for Boolean circuits have been proposed by Nielsen et al. (nicknamed TinyOT) and by Damgård and Zakarias...... suggest a modification of MiniMac that achieves increased parallelism at no extra communication cost. This gives an asymptotic improvement of the original protocol as well as an 8-fold speed-up of our implementation. We compare the resulting protocol to TinyOT for the case of secure computation in parallel...... of a large number of AES encryptions and find that it performs better than results reported so far on TinyOT, on the same hardware...

  6. A new method for improving security in MANETs AODV Protocol

    Directory of Open Access Journals (Sweden)

    Zahra Alishahi

    2012-10-01

    Full Text Available In a mobile ad hoc network (MANET), secure communication is a challenging task due to the network's fundamental characteristics: limited infrastructure, wireless links, distributed cooperation, dynamic topology, lack of association, resource constraints and the physical vulnerability of nodes. In MANETs, attacks can be broadly classified in two categories: routing attacks and data forwarding attacks. Any action not following the rules of the routing protocol belongs to routing attacks. The main objective of routing attacks is to disrupt the normal functioning of the network by advertising false routing updates. On the other hand, data forwarding attacks include actions such as modifying or dropping data packets, which do not disrupt the routing protocol. In this paper, we address the “Packet Drop Attack”, which is a serious threat to operational mobile ad hoc networks. Not forwarding or dropping other nodes' packets prevents any kind of communication from being established in the network. Therefore, addressing the packet dropping event takes high priority if mobile ad hoc networks are to emerge and operate successfully. In this paper, we propose a method to secure the ad hoc on-demand distance vector (AODV) routing protocol. The proposed method provides security for routing packets where the malicious node acts as a black hole and drops packets. In this method, the collaboration of a group of nodes is used to make accurate decisions. Validating received RREPs allows the source to select a trusted path to its destination. The simulation results show that the proposed mechanism is able to detect any number of attackers.

  7. Privacy-Preserving Data Aggregation Protocol for Fog Computing-Assisted Vehicle-to-Infrastructure Scenario

    Directory of Open Access Journals (Sweden)

    Yanan Chen

    2018-01-01

    Full Text Available Vehicle-to-infrastructure (V2I) communication enables moving vehicles to upload real-time data about the road surface situation to the Internet via fixed roadside units (RSU). Owing to the resource restrictions of mobile vehicles, the fog computing-enhanced V2I communication scenario has received increasing attention recently. However, how to aggregate the sensed data from vehicles securely and efficiently remains an open problem in the V2I communication scenario. In this paper, a lightweight and anonymous aggregation protocol is proposed for the fog computing-based V2I communication scenario. With the proposed protocol, the data collected by the vehicles can be efficiently obtained by the RSU in a privacy-preserving manner. In particular, we first suggest a certificateless aggregate signcryption (CL-A-SC) scheme and prove its security in the random oracle model. The suggested CL-A-SC scheme, which is of independent interest, achieves the merits of certificateless cryptography and signcryption schemes simultaneously. We then put forward the anonymous aggregation protocol for the V2I communication scenario as an extension of the suggested CL-A-SC scheme. Security analysis demonstrates that the proposed aggregation protocol achieves the desired security properties. The performance comparison shows that the proposed protocol significantly reduces the computation and communication overhead compared with the up-to-date protocols in this field.

  8. Novel methods in computational finance

    CERN Document Server

    Günther, Michael; Maten, E

    2017-01-01

    This book discusses the state-of-the-art and open problems in computational finance. It presents a collection of research outcomes and reviews of the work from the STRIKE project, an FP7 Marie Curie Initial Training Network (ITN) project in which academic partners trained early-stage researchers in close cooperation with a broader range of associated partners, including from the private sector. The aim of the project was to arrive at a deeper understanding of complex (mostly nonlinear) financial models and to develop effective and robust numerical schemes for solving linear and nonlinear problems arising from the mathematical theory of pricing financial derivatives and related financial products. This was accomplished by means of financial modelling, mathematical analysis and numerical simulations, optimal control techniques and validation of models. In recent years the computational complexity of mathematical models employed in financial mathematics has witnessed tremendous growth. Advanced numerical techni...

  9. COMPUTER METHODS OF GENETIC ANALYSIS.

    Directory of Open Access Journals (Sweden)

    A. L. Osipov

    2017-02-01

    Full Text Available The basic statistical methods used in conducting the genetic analysis of human traits are considered. We studied segregation analysis, linkage analysis and allelic associations, and developed software to support the implementation of these methods.

  10. Computational methods in drug discovery

    OpenAIRE

    Sumudu P. Leelananda; Steffen Lindert

    2016-01-01

    The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery project...

  11. Hybrid Monte Carlo methods in computational finance

    NARCIS (Netherlands)

    Leitao Rodriguez, A.

    2017-01-01

    Monte Carlo methods are highly appreciated and intensively employed in computational finance in the context of financial derivatives valuation or risk management. The method offers valuable advantages like flexibility, easy interpretation and straightforward implementation. Furthermore, the
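
    The thesis summary gives no code; as a generic, hedged illustration of plain Monte Carlo valuation (not the hybrid schemes studied in the thesis), a European call under geometric Brownian motion can be priced as follows:

        import numpy as np

        def mc_european_call(s0, k, r, sigma, t, n_paths=100_000, seed=0):
            """Plain Monte Carlo price of a European call under geometric Brownian motion."""
            rng = np.random.default_rng(seed)
            z = rng.standard_normal(n_paths)
            st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
            payoff = np.maximum(st - k, 0.0)
            return np.exp(-r * t) * payoff.mean()

        print(mc_european_call(s0=100, k=100, r=0.05, sigma=0.2, t=1.0))   # close to the Black-Scholes value (~10.45)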

  12. Advanced computational electromagnetic methods and applications

    CERN Document Server

    Li, Wenxing; Elsherbeni, Atef; Rahmat-Samii, Yahya

    2015-01-01

    This new resource covers the latest developments in computational electromagnetic methods, with emphasis on cutting-edge applications. This book is designed to extend existing literature to the latest development in computational electromagnetic methods, which are of interest to readers in both academic and industrial areas. The topics include advanced techniques in MoM, FEM and FDTD, spectral domain method, GPU and Phi hardware acceleration, metamaterials, frequency and time domain integral equations, and statistics methods in bio-electromagnetics.

  13. Computational Methods for Biomolecular Electrostatics

    Science.gov (United States)

    Dong, Feng; Olsen, Brett; Baker, Nathan A.

    2008-01-01

    An understanding of intermolecular interactions is essential for insight into how cells develop, operate, communicate and control their activities. Such interactions include several components: contributions from linear, angular, and torsional forces in covalent bonds, van der Waals forces, as well as electrostatics. Among the various components of molecular interactions, electrostatics are of special importance because of their long range and their influence on polar or charged molecules, including water, aqueous ions, and amino or nucleic acids, which are some of the primary components of living systems. Electrostatics, therefore, play important roles in determining the structure, motion and function of a wide range of biological molecules. This chapter presents a brief overview of electrostatic interactions in cellular systems with a particular focus on how computational tools can be used to investigate these types of interactions. PMID:17964951

  14. What is the best contrast injection protocol for 64-row multi-detector cardiac computed tomography?

    International Nuclear Information System (INIS)

    Lu Jinguo; Lv Bin; Chen Xiongbiao; Tang Xiang; Jiang Shiliang; Dai Ruping

    2010-01-01

    Objective: To determine the optimal contrast injection protocol for 64-MDCT coronary angiography. Materials and methods: One hundred and fifty consecutive patients were scheduled to undergo retrospectively electrocardiographically gated 64-MDCT. Each group of 30 patients was assigned a different contrast protocol: group 1: uniphasic protocol (contrast injection without saline flush); group 2: biphasic protocol (contrast injection with saline flush); groups 3A, 3B and 3C: triphasic protocol (contrast media + differently saline-diluted contrast media + saline flush). Image quality scores and artifacts were compared and evaluated on both transaxial and three-dimensional coronary artery images for each contrast protocol. Results: Among the triphasic protocol groups, group 3A (a 30%:70% contrast media-saline mixture used in the second phase) used the least contrast media and had the lowest frequency of streak artifacts, but there were no significant differences in coronary artery attenuation, image quality, or visualization of right and left heart structures. Among the uniphasic protocol group (group 1), the biphasic protocol group (group 2) and the triphasic protocol subgroup (group 3A), there were no significant differences in coronary artery image quality scores (P = 0.18); the uniphasic protocol group had the highest frequency of streak artifacts (20 cases) (P < 0.05) and used the largest amount of contrast media (67.0 ± 5.3 ml); the biphasic protocol group used the least contrast media (59.9 ± 4.9 ml) (P < 0.05) and had the highest attenuation of the left main coronary artery and right coronary artery (P < 0.01), but had the fewest cases of clear visualization of right heart structures (6 cases); the triphasic protocol group (group 3A) had the most cases of clear visualization of right heart structures (29 cases) among the three groups (P < 0.05). Conclusion: Biphasic protocols are superior to the traditional uniphasic protocols for using the least total contrast media, having the least

  15. What is the best contrast injection protocol for 64-row multi-detector cardiac computed tomography?

    Energy Technology Data Exchange (ETDEWEB)

    Lu Jinguo [Department of Radiology, Cardiovascular Institute and Fuwai Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, 167 Beilishi Road, Beijing (China); Lv Bin, E-mail: blu@vip.sina.co [Department of Radiology, Cardiovascular Institute and Fuwai Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, 167 Beilishi Road, Beijing (China); Chen Xiongbiao; Tang Xiang; Jiang Shiliang; Dai Ruping [Department of Radiology, Cardiovascular Institute and Fuwai Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, 167 Beilishi Road, Beijing (China)

    2010-08-15

    Objective: To determine the optimal contrast injection protocol for 64-MDCT coronary angiography. Materials and methods: One hundred and fifty consecutive patients were scheduled to undergo retrospectively electrocardiographically gated 64-MDCT. Each group of 30 patients was assigned a different contrast protocol: group 1: uniphasic protocol (contrast injection without saline flush); group 2: biphasic protocol (contrast injection with saline flush); groups 3A, 3B and 3C: triphasic protocol (contrast media + differently saline-diluted contrast media + saline flush). Image quality scores and artifacts were compared and evaluated on both transaxial and three-dimensional coronary artery images for each contrast protocol. Results: Among the triphasic protocol groups, group 3A (a 30%:70% contrast media-saline mixture used in the second phase) used the least contrast media and had the lowest frequency of streak artifacts, but there were no significant differences in coronary artery attenuation, image quality, or visualization of right and left heart structures. Among the uniphasic protocol group (group 1), the biphasic protocol group (group 2) and the triphasic protocol subgroup (group 3A), there were no significant differences in coronary artery image quality scores (P = 0.18); the uniphasic protocol group had the highest frequency of streak artifacts (20 cases) (P < 0.05) and used the largest amount of contrast media (67.0 ± 5.3 ml); the biphasic protocol group used the least contrast media (59.9 ± 4.9 ml) (P < 0.05) and had the highest attenuation of the left main coronary artery and right coronary artery (P < 0.01), but had the fewest cases of clear visualization of right heart structures (6 cases); the triphasic protocol group (group 3A) had the most cases of clear visualization of right heart structures (29 cases) among the three groups (P < 0.05). Conclusion: Biphasic protocols are superior to the traditional uniphasic protocols for using the least total contrast media, having the least

  16. A Novel UDT-Based Transfer Speed-Up Protocol for Fog Computing

    Directory of Open Access Journals (Sweden)

    Zhijie Han

    2018-01-01

    Full Text Available Fog computing is a distributed computing model that serves as the middle layer between the cloud data center and the IoT devices/sensors. It provides computing, network, and storage devices so that cloud-based services can be brought closer to IoT devices and sensors. Cloud computing requires a lot of bandwidth, and the bandwidth of the wireless network is limited; in contrast, the amount of bandwidth required for “fog computing” is much less. In this paper, we improved a new protocol, the Peer Assistant UDT-Based Data Transfer Protocol (PaUDT), applied to IoT-Cloud computing. Furthermore, we compared the efficiency of the congestion control algorithm of UDT with that of Adobe's Secure Real-Time Media Flow Protocol (RTMFP), which is based entirely on UDP at the transport layer. Finally, we built an evaluation model of UDT performance in terms of RTT and bit error ratio. The theoretical analysis and experimental results have shown that UDT has good performance in IoT-Cloud computing.

  17. Estimating Return on Investment in Translational Research: Methods and Protocols

    Science.gov (United States)

    Trochim, William; Dilts, David M.; Kirk, Rosalind

    2014-01-01

    Assessing the value of clinical and translational research funding on accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health and its Clinical and Translational Awards (CTSA). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, this model produces multiple levels of ROI: investigator, program and institutional estimates. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered, analyzed, and evaluated. This paper provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities. PMID:23925706
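
    Once inputs (investments) and outputs (returns) are monetized, the ROI itself is a simple ratio; a minimal sketch with hypothetical figures, not the authors' model:

        def roi(outputs, inputs):
            """Return on investment as a fraction of the investment."""
            return (outputs - inputs) / inputs

        # Hypothetical: $1.0M invested in a clinical research unit, $1.4M in attributable returns
        print(f"{roi(1_400_000, 1_000_000):.0%}")   # -> 40%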

  18. Estimating return on investment in translational research: methods and protocols.

    Science.gov (United States)

    Grazier, Kyle L; Trochim, William M; Dilts, David M; Kirk, Rosalind

    2013-12-01

    Assessing the value of clinical and translational research funding on accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health (NIH) and its Clinical and Translational Awards (CTSAs). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, this model produces multiple levels of ROI: investigator, program, and institutional estimates. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered, analyzed, and evaluated. This article provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities.

  19. Computational methods in power system analysis

    CERN Document Server

    Idema, Reijer

    2014-01-01

    This book treats state-of-the-art computational methods for power flow studies and contingency analysis. In the first part the authors present the relevant computational methods and mathematical concepts. In the second part, power flow and contingency analysis are treated. Furthermore, traditional methods to solve such problems are compared to modern solvers, developed using the knowledge of the first part of the book. Finally, these solvers are analyzed both theoretically and experimentally, clearly showing the benefits of the modern approach.

  20. Computational methods for data evaluation and assimilation

    CERN Document Server

    Cacuci, Dan Gabriel

    2013-01-01

    Data evaluation and data combination require the use of a wide range of probability theory concepts and tools, from deductive statistics mainly concerning frequencies and sample tallies to inductive inference for assimilating non-frequency data and a priori knowledge. Computational Methods for Data Evaluation and Assimilation presents interdisciplinary methods for integrating experimental and computational information. This self-contained book shows how the methods can be applied in many scientific and engineering areas. After presenting the fundamentals underlying the evaluation of experiment

  1. Computational Aspects of Sensor Network Protocols (Distributed Sensor Network Simulator

    Directory of Open Access Journals (Sweden)

    Vasanth Iyer

    2009-08-01

    Full Text Available In this work, we model the sensor network as an unsupervised learning and clustering process. We classify nodes according to their static distribution to form known class densities (CCPD). These densities are chosen from specific cross-layer features which maximize the lifetime of power-aware routing algorithms. To circumvent the computational complexities of a power-aware communication stack, we introduce path-loss models at the nodes only for high-density deployments. We study the cluster heads and formulate the data handling capacity for an expected deployment, and use localized probability models to fuse the data with its side information before transmission. Each cluster head thus has a unique Pmax, but not all cluster heads have the same measured value. In a lossless mode, if there are no faults in the sensor network, then we can show that the highest probability given by Pmax is ambiguous if its frequency is ≤ n/2; otherwise it can be determined by a local function. We further show that event detection at the cluster heads can be modelled with a pattern 2m, where m, the number of bits, can be a correlated pattern of 2 bits; for a tight lower bound we use 3-bit Huffman codes, which have entropy < 1. These local algorithms are further studied to optimize power and fault detection and to maximize the performance of the distributed routing algorithm used at the higher layers. From these bounds it is observed that, in a large network, the power dissipation is invariant to network size. The performance of the routing algorithms is based solely on the success of finding healthy nodes in a large distribution. It is also observed that if the network size is kept constant and the nodes are deployed more densely, then the local path-loss model affects the performance of the routing algorithms. We also obtain the maximum intensity of transmitting nodes for a given category of routing algorithms under an outage constraint, i.e., the lifetime of the sensor network.

  2. Comparison of low- and ultralow-dose computed tomography protocols for quantitative lung and airway assessment.

    Science.gov (United States)

    Hammond, Emily; Sloan, Chelsea; Newell, John D; Sieren, Jered P; Saylor, Melissa; Vidal, Craig; Hogue, Shayna; De Stefano, Frank; Sieren, Alexa; Hoffman, Eric A; Sieren, Jessica C

    2017-09-01

    Quantitative computed tomography (CT) measures are increasingly being developed and used to characterize lung disease. With recent advances in CT technologies, we sought to evaluate the quantitative accuracy of lung imaging at low- and ultralow-radiation doses with the use of iterative reconstruction (IR), tube current modulation (TCM), and spectral shaping. We investigated the effect of five independent CT protocols reconstructed with IR on quantitative airway measures and global lung measures using an in vivo large animal model as a human subject surrogate. A control protocol (NIH-SPIROMICS + TCM) was chosen, along with five independent protocols investigating TCM, low- and ultralow-radiation dose, and spectral shaping. For all scans, quantitative global parenchymal measurements (mean, median and standard deviation of the parenchymal HU, along with measures of emphysema) and global airway measurements (number of segmented airways and pi10) were generated. In addition, selected individual airway measurements (minor and major inner diameter, wall thickness, inner and outer area, inner and outer perimeter, wall area fraction, and inner equivalent circle diameter) were evaluated. Comparisons were made between control and target protocols using difference and repeatability measures. Estimated CT volume dose index (CTDIvol) across all protocols ranged from 7.32 mGy to 0.32 mGy. Low- and ultralow-dose protocols required more manual editing and resolved fewer airway branches; yet, comparable pi10 whole lung measures were observed across all protocols. Similar trends in acquired parenchymal and airway measurements were observed across all protocols, with increased measurement differences using the ultralow-dose protocols. However, for small airways (1.9 ± 0.2 mm) and medium airways (5.7 ± 0.4 mm), the measurement differences across all protocols were comparable to the control protocol repeatability across breath holds. Diameters, wall thickness, wall area fraction

  3. Genomics protocols [Methods in molecular biology, v. 175

    National Research Council Canada - National Science Library

    Starkey, Michael P; Elaswarapu, Ramnath

    2001-01-01

    .... Drawing on emerging technologies in the fields of bioinformatics and proteomics, these protocols cover not only those traditionally recognized as genomics, but also early therapeutic approaches...

  4. 3D-CT vascular setting protocol using computer graphics for the evaluation of maxillofacial lesions

    Directory of Open Access Journals (Sweden)

    CAVALCANTI Marcelo de Gusmão Paraiso

    2001-01-01

    Full Text Available In this paper we present the aspect of a mandibular giant cell granuloma in spiral computed tomography-based three-dimensional (3D-CT) reconstructed images using computer graphics, and demonstrate the importance of the vascular protocol in permitting better diagnosis, visualization and determination of the dimensions of the lesion. We analyzed 21 patients with maxillofacial lesions of neoplastic and proliferative origins. Two oral and maxillofacial radiologists analyzed the images. The usefulness of interactive 3D images reconstructed by means of computer graphics, especially using a vascular setting protocol for qualitative and quantitative analyses for the diagnosis, determination of the extent of lesions, treatment planning and follow-up, was demonstrated. The technique is an important adjunct to the evaluation of lesions in relation to axial CT slices and 3D-CT bone images.

  5. 3D-CT vascular setting protocol using computer graphics for the evaluation of maxillofacial lesions.

    Science.gov (United States)

    Cavalcanti, M G; Ruprecht, A; Vannier, M W

    2001-01-01

    In this paper we present the aspect of a mandibular giant cell granuloma in spiral computed tomography-based three-dimensional (3D-CT) reconstructed images using computer graphics, and demonstrate the importance of the vascular protocol in permitting better diagnosis, visualization and determination of the dimensions of the lesion. We analyzed 21 patients with maxillofacial lesions of neoplastic and proliferative origins. Two oral and maxillofacial radiologists analyzed the images. The usefulness of interactive 3D images reconstructed by means of computer graphics, especially using a vascular setting protocol for qualitative and quantitative analyses for the diagnosis, determination of the extent of lesions, treatment planning and follow-up, was demonstrated. The technique is an important adjunct to the evaluation of lesions in relation to axial CT slices and 3D-CT bone images.

  6. Computational Methodologies for Developing Structure–Morphology–Performance Relationships in Organic Solar Cells: A Protocol Review

    KAUST Repository

    Do, Khanh

    2016-09-08

    We outline a step-by-step protocol that incorporates a number of theoretical and computational methodologies to evaluate the structural and electronic properties of pi-conjugated semiconducting materials in the condensed phase. Our focus is on methodologies appropriate for the characterization, at the molecular level, of the morphology in blend systems consisting of an electron donor and electron acceptor, of importance for understanding the performance properties of bulk-heterojunction organic solar cells. The protocol is formulated as an introductory manual for investigators who aim to study the bulk-heterojunction morphology in molecular detail, thereby facilitating the development of structure–morphology–property relationships when used in tandem with experimental results.

  7. Computer network time synchronization the network time protocol on earth and in space

    CERN Document Server

    Mills, David L

    2010-01-01

    Carefully coordinated, reliable, and accurate time synchronization is vital to a wide spectrum of fields-from air and ground traffic control, to buying and selling goods and services, to TV network programming. Ill-gotten time could even lead to the unimaginable and cause DNS caches to expire, leaving the entire Internet to implode on the root servers.Written by the original developer of the Network Time Protocol (NTP), Computer Network Time Synchronization: The Network Time Protocol on Earth and in Space, Second Edition addresses the technological infrastructure of time dissemination, distrib

  8. Electromagnetic field computation by network methods

    CERN Document Server

    Felsen, Leopold B; Russer, Peter

    2009-01-01

    This monograph proposes a systematic and rigorous treatment of electromagnetic field representations in complex structures. The book presents new strong models by combining important computational methods. This is the last book of the late Leopold Felsen.

  9. Methods in Molecular Biology Mouse Genetics: Methods and Protocols | Center for Cancer Research

    Science.gov (United States)

    Mouse Genetics: Methods and Protocols provides selected mouse genetic techniques and their application in modeling varieties of human diseases. The chapters are mainly focused on the generation of different transgenic mice to accomplish the manipulation of genes of interest, tracing cell lineages, and modeling human diseases.

  10. Methods in computed angiotomography of the brain

    International Nuclear Information System (INIS)

    Yamamoto, Yuji; Asari, Shoji; Sadamoto, Kazuhiko.

    1985-01-01

    The authors introduce methods in computed angiotomography of the brain. The setting of the scan planes and levels and the minimum dose bolus (MinDB) injection of contrast medium are described in detail. These methods are easily and safely employed with CT scanners that are already in widespread use. Computed angiotomography is expected to find clinical application in many institutions because of its diagnostic value in the screening of cerebrovascular lesions and in demonstrating the relationship between pathological lesions and cerebral vessels. (author)

  11. Methods and experimental techniques in computer engineering

    CERN Document Server

    Schiaffonati, Viola

    2014-01-01

    Computing and science reveal a synergic relationship. On the one hand, it is widely evident that computing plays an important role in the scientific endeavor. On the other hand, the role of scientific method in computing is getting increasingly important, especially in providing ways to experimentally evaluate the properties of complex computing systems. This book critically presents these issues from a unitary conceptual and methodological perspective by addressing specific case studies at the intersection between computing and science. The book originates from, and collects the experience of, a course for PhD students in Information Engineering held at the Politecnico di Milano. Following the structure of the course, the book features contributions from some researchers who are working at the intersection between computing and science.

  12. Computational techniques of the simplex method

    CERN Document Server

    Maros, István

    2003-01-01

    Computational Techniques of the Simplex Method is a systematic treatment focused on the computational issues of the simplex method. It provides a comprehensive coverage of the most important and successful algorithmic and implementation techniques of the simplex method. It is a unique source of essential, never discussed details of algorithmic elements and their implementation. On the basis of the book the reader will be able to create a highly advanced implementation of the simplex method which, in turn, can be used directly or as a building block in other solution algorithms.
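
    As a hedged illustration of the kind of problem the simplex method solves (not code from the book), SciPy's dual-simplex solver can be applied to a small linear program:

        from scipy.optimize import linprog

        # maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0
        # (linprog minimizes, so the objective is negated)
        res = linprog(c=[-3, -2], A_ub=[[1, 1], [1, 3]], b_ub=[4, 6],
                      bounds=[(0, None), (0, None)], method="highs-ds")
        print(res.x, -res.fun)   # optimal vertex (4, 0) and objective value 12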

  13. Computational and mathematical methods in brain atlasing.

    Science.gov (United States)

    Nowinski, Wieslaw L

    2017-12-01

    Brain atlases have a wide range of use from education to research to clinical applications. Mathematical methods as well as computational methods and tools play a major role in the process of brain atlas building and developing atlas-based applications. Computational methods and tools cover three areas: dedicated editors for brain model creation, brain navigators supporting multiple platforms, and atlas-assisted specific applications. Mathematical methods in atlas building and developing atlas-aided applications deal with problems in image segmentation, geometric body modelling, physical modelling, atlas-to-scan registration, visualisation, interaction and virtual reality. Here I overview computational and mathematical methods in atlas building and developing atlas-assisted applications, and share my contribution to and experience in this field.

  14. Numerical Methods for Stochastic Computations A Spectral Method Approach

    CERN Document Server

    Xiu, Dongbin

    2010-01-01

    The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods of high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth
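
    A minimal, hedged sketch of a one-dimensional gPC (Hermite chaos) expansion for a function of a standard normal input, with coefficients obtained by Gauss-Hermite quadrature (illustrative only; the book treats far more general constructions):

        import math
        import numpy as np
        from numpy.polynomial import hermite_e as He

        f = np.exp                                # quantity of interest f(X), with X ~ N(0, 1)
        order = 6
        nodes, weights = He.hermegauss(30)        # probabilists' Gauss-Hermite rule, weight exp(-x^2/2)
        norm = math.sqrt(2.0 * math.pi)           # normalization of the N(0, 1) density

        # gPC coefficients c_n = E[f(X) He_n(X)] / n!   (the He_n are orthogonal with E[He_n^2] = n!)
        coeffs = np.array([
            np.dot(weights, f(nodes) * He.hermeval(nodes, np.eye(order + 1)[n])) / (norm * math.factorial(n))
            for n in range(order + 1)
        ])

        x = 0.7
        print(He.hermeval(x, coeffs), np.exp(x))  # truncated expansion vs. exact value
        print(coeffs[0])                          # mean E[f(X)] = exp(0.5) ~ 1.6487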

  15. Empirical evaluation methods in computer vision

    CERN Document Server

    Christensen, Henrik I

    2002-01-01

    This book provides comprehensive coverage of methods for the empirical evaluation of computer vision techniques. The practical use of computer vision requires empirical evaluation to ensure that the overall system has a guaranteed performance. The book contains articles that cover the design of experiments for evaluation, range image segmentation, the evaluation of face recognition and diffusion methods, image matching using correlation methods, and the performance of medical image processing algorithms. Sample Chapter(s). Foreword (228 KB). Chapter 1: Introduction (505 KB). Contents: Automate

  16. A computational method for sharp interface advection

    DEFF Research Database (Denmark)

    Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje

    2016-01-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volu...

  17. RT-PCR protocols [Methods in molecular biology, v. 193

    National Research Council Canada - National Science Library

    O'Connell, Joseph

    2002-01-01

    .... Here the newcomer will find readily reproducible protocols for highly sensitive detection and quantification of gene expression, the in situ localization of gene expression in tissue, and the cloning...

  18. Computing discharge using the index velocity method

    Science.gov (United States)

    Levesque, Victor A.; Oberg, Kevin A.

    2012-01-01

    Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs) in 1997. Presently (2011), the index velocity method is being used to compute discharge records for approximately 470 gaging stations operated and maintained by the U.S. Geological Survey. The purpose of this report is to document and describe techniques for computing discharge records using the index velocity method. Computing discharge using the index velocity method differs from the traditional stage-discharge method by separating velocity and area into two ratings—the index velocity rating and the stage-area rating. The outputs from each of these ratings, mean channel velocity (V) and cross-sectional area (A), are then multiplied together to compute a discharge. For the index velocity method, V is a function of such parameters as streamwise velocity, stage, cross-stream velocity, and velocity head, and A is a function of stage and cross-section shape. The index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate when more than one specific discharge can be measured for a specific stage. After the ADVM is selected, installed, and configured, the stage-area rating and the index velocity rating must be developed. A standard cross section is identified and surveyed in order to develop the stage-area rating. The standard cross section should be surveyed every year for the first 3 years of operation and thereafter at a lesser frequency, depending on the susceptibility of the cross section to change. Periodic measurements of discharge are used to calibrate and validate the index rating for the range of conditions experienced at the gaging station. Data from discharge measurements, ADVMs, and stage sensors are compiled for index-rating analysis. Index ratings are developed by means of regression
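
    A minimal sketch of the two-rating computation described above, with hypothetical calibration data, a linear index rating, and an interpolated stage-area rating (operational ratings are developed from surveyed cross sections and repeated discharge measurements):

        import numpy as np

        # Index velocity rating: mean channel velocity V as a function of the ADVM index velocity
        v_index_cal = np.array([0.10, 0.25, 0.40, 0.60, 0.85])   # hypothetical ADVM index velocities (m/s)
        v_mean_cal  = np.array([0.12, 0.30, 0.47, 0.70, 0.98])   # mean velocities from discharge measurements (m/s)
        b, a = np.polyfit(v_index_cal, v_mean_cal, 1)             # V = a + b * V_index

        # Stage-area rating: cross-sectional area A from stage, for the surveyed standard cross section
        stage_cal = np.array([0.5, 1.0, 1.5, 2.0, 2.5])           # hypothetical stages (m)
        area_cal  = np.array([12., 26., 41., 57., 74.])           # surveyed areas (m^2)

        def discharge(stage, v_index):
            """Q = V * A, with V from the index rating and A interpolated from the stage-area rating."""
            v = a + b * v_index
            area = np.interp(stage, stage_cal, area_cal)
            return v * area

        print(discharge(stage=1.8, v_index=0.5))   # discharge in m^3/s for one unit value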

  19. Computational efficiency for the surface renewal method

    Science.gov (United States)

    Kelley, Jason; Higgins, Chad

    2018-04-01

    Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and they were tested for sensitivity to the length of the flux averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal processing techniques and algebraic simplifications that demonstrate simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. The increased speed of computation grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.
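
    A computationally heavy step in SR processing is evaluating structure functions of the high-frequency scalar trace at many lags; a hedged sketch of a vectorized version in NumPy (illustrative only, not the authors' algorithms):

        import numpy as np

        def structure_functions(ts, lag_samples, orders=(2, 3, 5)):
            """S^n(r) = mean((T(t) - T(t - r))^n) for a high-frequency scalar series at one lag."""
            d = ts[lag_samples:] - ts[:-lag_samples]
            return {n: np.mean(d**n) for n in orders}

        # Hypothetical 30-minute, 10 Hz temperature trace and a 0.5 s lag (5 samples)
        fs = 10.0
        temp = np.cumsum(np.random.default_rng(1).normal(0, 0.05, int(30 * 60 * fs))) + 20.0
        print(structure_functions(temp, lag_samples=int(0.5 * fs)))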

  20. Computational methods in molecular imaging technologies

    CERN Document Server

    Gunjan, Vinit Kumar; Venkatesh, C; Amarnath, M

    2017-01-01

    This book highlights the experimental investigations that have been carried out on magnetic resonance imaging and computed tomography (MRI & CT) images using state-of-the-art Computational Image processing techniques, and tabulates the statistical values wherever necessary. In a very simple and straightforward way, it explains how image processing methods are used to improve the quality of medical images and facilitate analysis. It offers a valuable resource for researchers, engineers, medical doctors and bioinformatics experts alike.

  1. A transport layer protocol for the future high speed grid computing: SCTP versus fast tcp multihoming

    International Nuclear Information System (INIS)

    Arshad, M.J.; Mian, M.S.

    2010-01-01

    TCP (Transmission Control Protocol) is designed for reliable data transfer on the global Internet today. One of its strong points is its use of a flow control algorithm that allows TCP to adjust its congestion window when network congestion occurs. A number of studies and investigations have confirmed that traditional TCP is not suitable for every type of application, for example, bulk data transfer over high-speed long-distance networks. TCP served well in the era of low-capacity, short-delay networks; however, for numerous reasons it cannot efficiently handle today's growing technologies (such as wide-area Grid computing and optical-fiber networks). This research work surveys the congestion control mechanisms of transport protocols and addresses the different issues involved in transferring huge volumes of data over future high-speed Grid computing and optical-fiber networks. This work also presents simulations comparing the performance of FAST TCP multihoming with SCTP (Stream Control Transmission Protocol) multihoming in high-speed networks. These simulation results show that FAST TCP multihoming achieves bandwidth aggregation efficiently and outperforms SCTP multihoming under similar network conditions. The survey and simulation results presented in this work reveal that multihoming support in FAST TCP provides many benefits, such as redundancy, load-sharing and policy-based routing, which largely improve the overall performance of a network and can meet the increasing demands of future high-speed network infrastructures (such as Grid computing). (author)
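
    As a generic, hedged illustration of what "adjusting the congestion window" means (not the paper's FAST TCP or SCTP models), a toy additive-increase/multiplicative-decrease window trace can be simulated as follows:

        def aimd_trace(rounds, loss_rounds, cwnd=1.0, ssthresh=64.0):
            """Toy congestion-window trace: slow start, additive increase, multiplicative decrease."""
            trace = []
            for r in range(rounds):
                if r in loss_rounds:              # congestion signal: halve the window
                    ssthresh = max(cwnd / 2.0, 1.0)
                    cwnd = ssthresh
                elif cwnd < ssthresh:             # slow start: exponential growth per RTT
                    cwnd = min(cwnd * 2.0, ssthresh)
                else:                             # congestion avoidance: +1 segment per RTT
                    cwnd += 1.0
                trace.append(cwnd)
            return trace

        print(aimd_trace(20, loss_rounds={12}))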

  2. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  3. Zonal methods and computational fluid dynamics

    International Nuclear Information System (INIS)

    Atta, E.H.

    1985-01-01

    Recent advances in developing numerical algorithms for solving fluid flow problems, and the continuing improvement in the speed and storage of large scale computers have made it feasible to compute the flow field about complex and realistic configurations. Current solution methods involve the use of a hierarchy of mathematical models ranging from the linearized potential equation to the Navier Stokes equations. Because of the increasing complexity of both the geometries and flowfields encountered in practical fluid flow simulation, there is a growing emphasis in computational fluid dynamics on the use of zonal methods. A zonal method is one that subdivides the total flow region into interconnected smaller regions or zones. The flow solutions in these zones are then patched together to establish the global flow field solution. Zonal methods are primarily used either to limit the complexity of the governing flow equations to a localized region or to alleviate the grid generation problems about geometrically complex and multicomponent configurations. This paper surveys the application of zonal methods for solving the flow field about two and three-dimensional configurations. Various factors affecting their accuracy and ease of implementation are also discussed. From the presented review it is concluded that zonal methods promise to be very effective for computing complex flowfields and configurations. Currently there are increasing efforts to improve their efficiency, versatility, and accuracy

  4. Tensor network method for reversible classical computation

    Science.gov (United States)

    Yang, Zhi-Cheng; Kourtis, Stefanos; Chamon, Claudio; Mucciolo, Eduardo R.; Ruckenstein, Andrei E.

    2018-03-01

    We develop a tensor network technique that can solve universal reversible classical computational problems, formulated as vertex models on a square lattice [Nat. Commun. 8, 15303 (2017), 10.1038/ncomms15303]. By encoding the truth table of each vertex constraint in a tensor, the total number of solutions compatible with partial inputs and outputs at the boundary can be represented as the full contraction of a tensor network. We introduce an iterative compression-decimation (ICD) scheme that performs this contraction efficiently. The ICD algorithm first propagates local constraints to longer ranges via repeated contraction-decomposition sweeps over all lattice bonds, thus achieving compression on a given length scale. It then decimates the lattice via coarse-graining tensor contractions. Repeated iterations of these two steps gradually collapse the tensor network and ultimately yield the exact tensor trace for large systems, without the need for manual control of tensor dimensions. Our protocol allows us to obtain the exact number of solutions for computations where a naive enumeration would take astronomically long times.
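
    As a hedged toy illustration of the counting idea (encode each gate's truth table as a tensor, clamp the output, and contract), the number of input assignments satisfying (a AND b) OR c = 1 can be obtained with a single tensor contraction; this is not the ICD algorithm, only the underlying representation:

        import numpy as np

        def gate_tensor(fn):
            """Truth table of a 2-input gate as a (2, 2, 2) tensor: T[a, b, out] = 1 iff out == fn(a, b)."""
            t = np.zeros((2, 2, 2), dtype=np.int64)
            for a in (0, 1):
                for b in (0, 1):
                    t[a, b, fn(a, b)] = 1
            return t

        AND = gate_tensor(lambda a, b: a & b)
        OR  = gate_tensor(lambda a, b: a | b)

        # Count assignments (a, b, c) with (a AND b) OR c == 1 by contracting the network;
        # summing over the open input indices enumerates all inputs implicitly.
        out = np.array([0, 1])                      # clamp the circuit output to 1
        count = np.einsum('abx,xcy,y->', AND, OR, out)
        print(count)                                # -> 5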

  5. Domain decomposition methods and parallel computing

    International Nuclear Information System (INIS)

    Meurant, G.

    1991-01-01

    In this paper, we show how to efficiently solve large linear systems on parallel computers. These linear systems arise from discretization of scientific computing problems described by systems of partial differential equations. We show how to get a discrete finite dimensional system from the continuous problem and the chosen conjugate gradient iterative algorithm is briefly described. Then, the different kinds of parallel architectures are reviewed and their advantages and deficiencies are emphasized. We sketch the problems found in programming the conjugate gradient method on parallel computers. For this algorithm to be efficient on parallel machines, domain decomposition techniques are introduced. We give results of numerical experiments showing that these techniques allow a good rate of convergence for the conjugate gradient algorithm as well as computational speeds in excess of a billion of floating point operations per second. (author). 5 refs., 11 figs., 2 tabs., 1 inset
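
    A minimal, serial sketch of the conjugate gradient iteration referred to above (no preconditioning or domain decomposition), applied to a small discretized-PDE-like system:

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
            """Minimal conjugate gradient for a symmetric positive definite matrix A (dense, serial)."""
            n = b.size
            max_iter = max_iter or n
            x = np.zeros(n)
            r = b - A @ x
            p = r.copy()
            rs_old = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs_old / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return x

        # Example: 1-D Laplacian, the kind of system arising from a discretized PDE
        n = 100
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        x = conjugate_gradient(A, b)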

  6. Computational and instrumental methods in EPR

    CERN Document Server

    Bender, Christopher J

    2006-01-01

    Computational and Instrumental Methods in EPR Prof. Bender, Fordham University Prof. Lawrence J. Berliner, University of Denver Electron magnetic resonance has been greatly facilitated by the introduction of advances in instrumentation and better computational tools, such as the increasingly widespread use of the density matrix formalism. This volume is devoted to both instrumentation and computation aspects of EPR, while addressing applications such as spin relaxation time measurements, the measurement of hyperfine interaction parameters, and the recovery of Mn(II) spin Hamiltonian parameters via spectral simulation. Key features: Microwave Amplitude Modulation Technique to Measure Spin-Lattice (T1) and Spin-Spin (T2) Relaxation Times Improvement in the Measurement of Spin-Lattice Relaxation Time in Electron Paramagnetic Resonance Quantitative Measurement of Magnetic Hyperfine Parameters and the Physical Organic Chemistry of Supramolecular Systems New Methods of Simulation of Mn(II) EPR Spectra: Single Cryst...

  7. Proceedings of computational methods in materials science

    International Nuclear Information System (INIS)

    Mark, J.E. Glicksman, M.E.; Marsh, S.P.

    1992-01-01

    The Symposium on which this volume is based was conceived as a timely expression of some of the fast-paced developments occurring throughout materials science and engineering. It focuses particularly on those involving modern computational methods applied to model and predict the response of materials under a diverse range of physico-chemical conditions. The current easy access of many materials scientists in industry, government laboratories, and academe to high-performance computers has opened many new vistas for predicting the behavior of complex materials under realistic conditions. Some have even argued that modern computational methods in materials science and engineering are literally redefining the bounds of our knowledge from which we predict structure-property relationships, perhaps forever changing the historically descriptive character of the science and much of the engineering

  8. Computational botany methods for automated species identification

    CERN Document Server

    Remagnino, Paolo; Wilkin, Paul; Cope, James; Kirkup, Don

    2017-01-01

    This book discusses innovative methods for mining information from images of plants, especially leaves, and highlights the diagnostic features that can be implemented in fully automatic systems for identifying plant species. Adopting a multidisciplinary approach, it explores the problem of plant species identification, covering both the concepts of taxonomy and morphology. It then provides an overview of morphometrics, including the historical background and the main steps in the morphometric analysis of leaves together with a number of applications. The core of the book focuses on novel diagnostic methods for plant species identification developed from a computer scientist’s perspective. It then concludes with a chapter on the characterization of botanists' visions, which highlights important cognitive aspects that can be implemented in a computer system to more accurately replicate the human expert’s fixation process. The book not only represents an authoritative guide to advanced computational tools fo...

  9. A Pattern Language for Designing Application-Level Communication Protocols and the Improvement of Computer Science Education through Cloud Computing

    OpenAIRE

    Lascano, Jorge Edison

    2017-01-01

    Networking protocols have been developed over time following layered architectures such as the Open Systems Interconnection model and the Internet model. These protocols are grouped in the Internet protocol suite. Most developers do not deal with low-level protocols; instead, they design application-level protocols on top of the low-level protocols. Although each application-level protocol is different, there is commonality among them and developers can apply lessons learned from one prot...

  10. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define the m...

  11. Applying Human Computation Methods to Information Science

    Science.gov (United States)

    Harris, Christopher Glenn

    2013-01-01

    Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…

  12. The asymptotic expansion method via symbolic computation

    OpenAIRE

    Navarro, Juan F.

    2012-01-01

    This paper describes an algorithm for implementing a perturbation method based on an asymptotic expansion of the solution to a second-order differential equation. We also introduce a new symbolic computation system which works with the so-called modified quasipolynomials, as well as an implementation of the algorithm on it.

  13. The Asymptotic Expansion Method via Symbolic Computation

    Directory of Open Access Journals (Sweden)

    Juan F. Navarro

    2012-01-01

    This paper describes an algorithm for implementing a perturbation method based on an asymptotic expansion of the solution to a second-order differential equation. We also introduce a new symbolic computation system which works with the so-called modified quasipolynomials, as well as an implementation of the algorithm on it.
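
    The following sketch illustrates the general idea of an asymptotic expansion obtained by symbolic computation, using SymPy rather than the authors' own quasipolynomial system (all symbols and the example equation are illustrative assumptions): a root of a perturbed algebraic equation is expanded in powers of a small parameter and the coefficients are solved order by order.

```python
import sympy as sp

eps = sp.symbols('epsilon')
x0, x1, x2 = sp.symbols('x0 x1 x2')

# Ansatz: a truncated asymptotic expansion of the root of x**2 + eps*x - 1 = 0.
x = x0 + eps * x1 + eps**2 * x2
equation = sp.expand(x**2 + eps * x - 1)

# Coefficients of eps**0, eps**1, eps**2, ... of the expanded equation.
orders = sp.Poly(equation, eps).all_coeffs()[::-1]

# Solve order by order in the small parameter eps.
sol = {}
for coeff, unknown in zip(orders[:3], (x0, x1, x2)):
    roots = sp.solve(sp.Eq(coeff.subs(sol), 0), unknown)
    sol[unknown] = max(roots) if unknown is x0 else roots[0]   # pick the branch near x0 = +1

print(sol)   # {x0: 1, x1: -1/2, x2: 1/8}, matching the series of (-eps + sqrt(eps**2 + 4))/2
```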

  14. Computationally efficient methods for digital control

    NARCIS (Netherlands)

    Guerreiro Tome Antunes, D.J.; Hespanha, J.P.; Silvestre, C.J.; Kataria, N.; Brewer, F.

    2008-01-01

    The problem of designing a digital controller is considered with the novelty of explicitly taking into account the computation cost of the controller implementation. A class of controller emulation methods inspired by numerical analysis is proposed. Through various examples it is shown that these

  15. LEACH-A: An Adaptive Method for Improving LEACH Protocol

    Directory of Open Access Journals (Sweden)

    Jianli ZHAO

    2014-01-01

    Energy has become one of the most important constraints on wireless sensor networks. Hence, many researchers in this field focus on how to design a routing protocol that prolongs the lifetime of the network. Classical hierarchical protocols such as LEACH and LEACH-C perform well in reducing energy consumption. However, a selection strategy based only on the largest residual energy or the shortest distance will still consume more energy. In this paper an adaptive routing protocol named “LEACH-A”, which uses an energy threshold E0, is proposed. If there are cluster nodes whose residual energy is greater than E0, the node with the largest residual energy is selected to communicate with the base station; when the energy of all cluster nodes is less than E0, the node nearest to the base station is selected to communicate with the base station. Simulations show that the improved protocol LEACH-A performs better than both LEACH and LEACH-C.
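
    A minimal sketch of the selection rule as paraphrased from the abstract (the node fields, coordinates, and threshold value are illustrative assumptions, not taken from the paper):

```python
import math

def select_communicating_node(cluster_heads, base_station, e0):
    """LEACH-A style rule: if any cluster head has residual energy above E0, pick the
    one with the largest residual energy; otherwise pick the one closest to the
    base station.  `cluster_heads` is a list of dicts with 'energy' and 'pos'."""
    above = [n for n in cluster_heads if n['energy'] > e0]
    if above:
        return max(above, key=lambda n: n['energy'])
    return min(cluster_heads, key=lambda n: math.dist(n['pos'], base_station))

# Illustrative network state (coordinates in metres, energy in joules).
heads = [
    {'id': 1, 'energy': 0.8, 'pos': (10.0, 40.0)},
    {'id': 2, 'energy': 0.3, 'pos': (60.0, 20.0)},
    {'id': 3, 'energy': 0.5, 'pos': (30.0, 70.0)},
]
base_station = (50.0, 50.0)

print(select_communicating_node(heads, base_station, e0=0.4)['id'])  # 1 (largest energy above E0)
print(select_communicating_node(heads, base_station, e0=0.9)['id'])  # 3 (closest to the base station)
```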

  16. Distributed project scheduling at NASA: Requirements for manual protocols and computer-based support

    Science.gov (United States)

    Richards, Stephen F.

    1992-01-01

    The increasing complexity of space operations and the inclusion of interorganizational and international groups in the planning and control of space missions lead to requirements for greater communication, coordination, and cooperation among mission schedulers. These schedulers must jointly allocate scarce shared resources among the various operational and mission oriented activities while adhering to all constraints. This scheduling environment is complicated by such factors as the presence of varying perspectives and conflicting objectives among the schedulers, the need for different schedulers to work in parallel, and limited communication among schedulers. Smooth interaction among schedulers requires the use of protocols that govern such issues as resource sharing, authority to update the schedule, and communication of updates. This paper addresses the development and characteristics of such protocols and their use in a distributed scheduling environment that incorporates computer-aided scheduling tools. An example problem is drawn from the domain of Space Shuttle mission planning.

  17. Advances of evolutionary computation methods and operators

    CERN Document Server

    Cuevas, Erik; Oliva Navarro, Diego Alberto

    2016-01-01

    The goal of this book is to present advances that discuss alternative Evolutionary Computation (EC) developments and non-conventional operators which have proved to be effective in the solution of several complex problems. The book has been structured so that each chapter can be read independently from the others. The book contains nine chapters with the following themes: 1) Introduction, 2) the Social Spider Optimization (SSO), 3) the States of Matter Search (SMS), 4) the collective animal behavior (CAB) algorithm, 5) the Allostatic Optimization (AO) method, 6) the Locust Search (LS) algorithm, 7) the Adaptive Population with Reduced Evaluations (APRE) method, 8) the multimodal CAB, 9) the constrained SSO method.

  18. Chapter 15: Commercial New Construction Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Keates, Steven [ADM Associates, Inc., Atlanta, GA (United States)

    2017-10-09

    This protocol is intended to describe the recommended method when evaluating the whole-building performance of new construction projects in the commercial sector. The protocol focuses on energy conservation measures (ECMs) or packages of measures where evaluators can analyze impacts using building simulation. These ECMs typically require the use of calibrated building simulations under Option D of the International Performance Measurement and Verification Protocol (IPMVP).

  19. Computational Methods in Stochastic Dynamics Volume 2

    CERN Document Server

    Stefanou, George; Papadopoulos, Vissarion

    2013-01-01

    The considerable influence of inherent uncertainties on structural behavior has led the engineering community to recognize the importance of a stochastic approach to structural problems. Issues related to uncertainty quantification and its influence on the reliability of the computational models are continuously gaining in significance. In particular, the problems of dynamic response analysis and reliability assessment of structures with uncertain system and excitation parameters have been the subject of continuous research over the last two decades as a result of the increasing availability of powerful computing resources and technology.   This book is a follow up of a previous book with the same subject (ISBN 978-90-481-9986-0) and focuses on advanced computational methods and software tools which can highly assist in tackling complex problems in stochastic dynamic/seismic analysis and design of structures. The selected chapters are authored by some of the most active scholars in their respective areas and...

  20. Genomics protocols [Methods in molecular biology, v. 175

    National Research Council Canada - National Science Library

    Starkey, Michael P; Elaswarapu, Ramnath

    2001-01-01

    ... to the larger community of researchers who have recognized the potential of genomics research and may themselves be beginning to explore the technologies involved. Some of the techniques described in Genomics Protocols are clearly not restricted to the genomics field; indeed, a prerequisite for many procedures in this discipline is that they require an extremely high throughput, beyond the scope of the average investigator. However, what we have endeavored here to achieve is both to compile a collection of...

  1. Computational methods for industrial radiation measurement applications

    International Nuclear Information System (INIS)

    Gardner, R.P.; Guo, P.; Ao, Q.

    1996-01-01

    Computational methods have been used with considerable success to complement radiation measurements in solving a wide range of industrial problems. The almost exponential growth of computer capability and applications in the last few years leads to a "black box" mentality for radiation measurement applications. If a black box is defined as any radiation measurement device that is capable of measuring the parameters of interest when a wide range of operating and sample conditions may occur, then the development of computational methods for industrial radiation measurement applications should now be focused on the black box approach and the deduction of properties of interest from the response with acceptable accuracy and reasonable efficiency. Nowadays, increasingly better understanding of radiation physical processes, more accurate and complete fundamental physical data, and more advanced modeling and software/hardware techniques have made it possible to make giant strides in that direction with new ideas implemented with computer software. The Center for Engineering Applications of Radioisotopes (CEAR) at North Carolina State University has been working for quite some time on a variety of projects in the area of radiation analyzers and gauges for accomplishing this, and they are discussed here with emphasis on current accomplishments.

  2. BLUES function method in computational physics

    Science.gov (United States)

    Indekeu, Joseph O.; Müller-Nedebock, Kristian K.

    2018-04-01

    We introduce a computational method in physics that goes ‘beyond linear use of equation superposition’ (BLUES). A BLUES function is defined as a solution of a nonlinear differential equation (DE) with a delta source that is at the same time a Green’s function for a related linear DE. For an arbitrary source, the BLUES function can be used to construct an exact solution to the nonlinear DE with a different, but related source. Alternatively, the BLUES function can be used to construct an approximate piecewise analytical solution to the nonlinear DE with an arbitrary source. For this alternative use the related linear DE need not be known. The method is illustrated in a few examples using analytical calculations and numerical computations. Areas for further applications are suggested.

  3. Spatial analysis statistics, visualization, and computational methods

    CERN Document Server

    Oyana, Tonny J

    2015-01-01

    An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis-containing hands-on problem-sets that can be worked out in MS Excel or ArcGIS-as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results Construct testable hypotheses that require inferential statistical analysis Process spatial data, extract explanatory variables, conduct statisti...

  4. Computer Animation Based on Particle Methods

    Directory of Open Access Journals (Sweden)

    Rafal Wcislo

    1999-01-01

    The paper presents the main issues of a computer animation of a set of elastic macroscopic objects based on the particle method. The main assumption of the generated animations is to achieve very realistic movements in a scene observed on the computer display. The objects (solid bodies) interact mechanically with each other. The movements and deformations of solids are calculated using the particle method. Phenomena connected with the behaviour of solids in the gravitational field, their deformations caused by collisions, and interactions with an optional liquid medium are simulated. The simulation of the liquid is performed using the cellular automata method. The paper presents both simulation schemes (the particle method and cellular automata rules) and the method of combining them in a single animation program. In order to speed up the execution of the program, a parallel version based on a network of workstations was developed. The paper describes the methods of parallelization and considers problems of load balancing, collision detection, process synchronization and distributed control of the animation.

  5. Computational methods of electron/photon transport

    International Nuclear Information System (INIS)

    Mack, J.M.

    1983-01-01

    A review of computational methods simulating the non-plasma transport of electrons and their attendant cascades is presented. Remarks are mainly restricted to linearized formalisms at electron energies above 1 keV. The effectiveness of various methods is discussed, including moments, point-kernel, invariant imbedding, discrete-ordinates, and Monte Carlo. Future research directions and the potential impact on various aspects of science and engineering are indicated.

  6. Mathematical optics classical, quantum, and computational methods

    CERN Document Server

    Lakshminarayanan, Vasudevan

    2012-01-01

    Going beyond standard introductory texts, Mathematical Optics: Classical, Quantum, and Computational Methods brings together many new mathematical techniques from optical science and engineering research. Profusely illustrated, the book makes the material accessible to students and newcomers to the field. Divided into six parts, the text presents state-of-the-art mathematical methods and applications in classical optics, quantum optics, and image processing. Part I describes the use of phase space concepts to characterize optical beams and the application of dynamic programming in optical wave

  7. Lung Ultrasonography in Patients With Idiopathic Pulmonary Fibrosis: Evaluation of a Simplified Protocol With High-Resolution Computed Tomographic Correlation.

    Science.gov (United States)

    Vassalou, Evangelia E; Raissaki, Maria; Magkanas, Eleftherios; Antoniou, Katerina M; Karantanas, Apostolos H

    2018-03-01

    To compare a simplified ultrasonographic (US) protocol in 2 patient positions with the same-positioned comprehensive US assessments and high-resolution computed tomographic (CT) findings in patients with idiopathic pulmonary fibrosis. Twenty-five consecutive patients with idiopathic pulmonary fibrosis were prospectively enrolled and examined in 2 sessions. During session 1, patients were examined with a US protocol including 56 lung intercostal spaces in supine/sitting (supine/sitting comprehensive protocol) and lateral decubitus (decubitus comprehensive protocol) positions. During session 2, patients were evaluated with a 16-intercostal space US protocol in sitting (sitting simplified protocol) and left/right decubitus (decubitus simplified protocol) positions. The 16 intercostal spaces were chosen according to the prevalence of idiopathic pulmonary fibrosis-related changes on high-resolution CT. The sum of B-lines counted in each intercostal space formed the US scores for all 4 US protocols: supine/sitting and decubitus comprehensive US scores and sitting and decubitus simplified US scores. High-resolution CT-related Warrick scores (J Rheumatol 1991; 18:1520-1528) were compared to US scores. The duration of each protocol was recorded. A significant correlation was found between all US scores and Warrick scores and between simplified and corresponding comprehensive scores (P idiopathic pulmonary fibrosis. The 16-intercostal space simplified protocol in the lateral decubitus position correlated better with high-resolution CT findings and was less time-consuming compared to the sitting position. © 2017 by the American Institute of Ultrasound in Medicine.

  8. Accuracy of a Computer-Aided Surgical Simulation (CASS) Protocol for Orthognathic Surgery: A Prospective Multicenter Study

    Science.gov (United States)

    Hsu, Sam Sheng-Pin; Gateno, Jaime; Bell, R. Bryan; Hirsch, David L.; Markiewicz, Michael R.; Teichgraeber, John F.; Zhou, Xiaobo; Xia, James J.

    2012-01-01

    Purpose: The purpose of this prospective multicenter study was to assess the accuracy of a computer-aided surgical simulation (CASS) protocol for orthognathic surgery. Materials and Methods: The accuracy of the CASS protocol was assessed by comparing planned and postoperative outcomes of 65 consecutive patients enrolled from 3 centers. Computer-generated surgical splints were used for all patients. For the genioplasty, one center utilized computer-generated chin templates to reposition the chin segment only for patients with asymmetry. Standard intraoperative measurements were utilized without the chin templates for the remaining patients. The primary outcome measurements were linear and angular differences for the maxilla, mandible and chin when the planned and postoperative models were registered at the cranium. The secondary outcome measurements were: the maxillary dental midline difference between the planned and postoperative positions; and linear and angular differences of the chin segment between the groups with and without the use of the template. The latter was measured when the planned and postoperative models were registered at the mandibular body. Statistical analyses were performed, and the accuracy was reported using root mean square deviation (RMSD) and Bland and Altman's method for assessing measurement agreement. Results: In the primary outcome measurements, there was no statistically significant difference among the 3 centers for the maxilla and mandible. The largest RMSD was 1.0 mm and 1.5° for the maxilla, and 1.1 mm and 1.8° for the mandible. For the chin, there was a statistically significant difference between the groups with and without the use of the chin template. The chin template group showed excellent accuracy, with the largest positional RMSD of 1.0 mm and the largest orientational RMSD of 2.2°. However, larger variances were observed in the group not using the chin template. This was significant in anteroposterior and superoinferior directions, as in

  9. Whatever works: a systematic user-centered training protocol to optimize brain-computer interfacing individually.

    Directory of Open Access Journals (Sweden)

    Elisabeth V C Friedrich

    This study implemented a systematic user-centered training protocol for a 4-class brain-computer interface (BCI). The goal was to optimize the BCI individually in order to achieve high performance within a few sessions for all users. Eight able-bodied volunteers, who were initially naïve to the use of a BCI, participated in 10 sessions over a period of about 5 weeks. In an initial screening session, users were asked to perform the following seven mental tasks while multi-channel EEG was recorded: mental rotation, word association, auditory imagery, mental subtraction, spatial navigation, motor imagery of the left hand and motor imagery of both feet. Out of these seven mental tasks, the best 4-class combination as well as the most reactive frequency band (between 8-30 Hz) was selected individually for online control. Classification was based on common spatial patterns and Fisher's linear discriminant analysis. The number and time of classifier updates varied individually. Selection speed was increased by reducing trial length. To minimize differences in brain activity between sessions with and without feedback, sham feedback was provided in the screening and calibration runs in which usually no real-time feedback is shown. Selected task combinations and frequency ranges differed between users. The tasks that were included in the 4-class combination most often were (1) motor imagery of the left hand, (2) one brain-teaser task (word association or mental subtraction), (3) the mental rotation task and (4) one more dynamic imagery task (auditory imagery, spatial navigation, or imagery of the feet). Participants achieved mean performances over sessions of 44-84% and peak performances in single sessions of 58-93% in this user-centered 4-class BCI protocol. This protocol is highly adjustable to individual users and thus could increase the percentage of users who can gain and maintain BCI control. A high priority for future work is to examine this protocol with severely

  10. A New Dual-purpose Quality Control Dosimetry Protocol for Diagnostic Reference-level Determination in Computed Tomography.

    Science.gov (United States)

    Sohrabi, Mehdi; Parsi, Masoumeh; Sina, Sedigheh

    2018-05-17

    A diagnostic reference level is an advisory dose level set by a regulatory authority in a country as an efficient criterion for protection of patients from unwanted medical exposure. In computed tomography, the direct dose measurement and data collection methods are commonly applied for determination of diagnostic reference levels. Recently, a new quality-control-based dose survey method was proposed by the authors to simplify the diagnostic reference-level determination using a retrospective quality control database usually available at a regulatory authority in a country. In line with such a development, a prospective dual-purpose quality control dosimetry protocol is proposed for determination of diagnostic reference levels in a country, which can be simply applied by quality control service providers. This new proposed method was applied to five computed tomography scanners in Shiraz, Iran, and diagnostic reference levels for head, abdomen/pelvis, sinus, chest, and lumbar spine examinations were determined. The results were compared to those obtained by the data collection and quality-control-based dose survey methods, carried out in parallel in this study, and were found to agree well within approximately 6%. This is highly acceptable for quality-control-based methods according to International Atomic Energy Agency tolerance levels (±20%).

  11. BrEPS: a flexible and automatic protocol to compute enzyme-specific sequence profiles for functional annotation

    Directory of Open Access Journals (Sweden)

    Schomburg D

    2010-12-01

    Background: Models for the simulation of metabolic networks require the accurate prediction of enzyme function. Based on a genomic sequence, enzymatic functions of gene products are today mainly predicted by sequence database searching and operon analysis. Other methods can support these techniques: We have developed an automatic method "BrEPS" that creates highly specific sequence patterns for the functional annotation of enzymes. Results: The enzymes in the UniprotKB are identified and their sequences compared against each other with BLAST. The enzymes are then clustered into a number of trees, where each tree node is associated with a set of EC-numbers. The enzyme sequences in the tree nodes are aligned with ClustalW. The conserved columns of the resulting multiple alignments are used to construct sequence patterns. In the last step, we verify the quality of the patterns by computing their specificity. Patterns with low specificity are omitted and recomputed further down in the tree. The final high-quality patterns can be used for functional annotation. We ran our protocol on a recent Swiss-Prot release and show statistics, as well as a comparison to PRIAM, a probabilistic method that is also specialized on the functional annotation of enzymes. We determine the amount of true positive annotations for five common microorganisms with data from BRENDA and AMENDA serving as standard of truth. BrEPS is almost on par with PRIAM, a fact which we discuss in the context of five manually investigated cases. Conclusions: Our protocol computes highly specific sequence patterns that can be used to support the functional annotation of enzymes. The main advantages of our method are that it is automatic and unsupervised, and quite fast once the patterns are evaluated. The results show that BrEPS can be a valuable addition to the reconstruction of metabolic networks.
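
    The toy sketch below is not the BrEPS pipeline (no BLAST clustering or ClustalW alignment is performed); it only illustrates the final pattern-construction step described above, deriving a simple sequence pattern from the conserved columns of a hand-made alignment. The alignment and sequences are invented for illustration.

```python
import re

def pattern_from_alignment(aligned_seqs):
    """Build a simple sequence pattern from the conserved columns of a multiple
    alignment: fully conserved, gap-free columns become literal residues, all
    other columns become wildcards.  A toy stand-in for pattern construction."""
    pattern = []
    for column in zip(*aligned_seqs):
        residues = set(column) - {'-'}
        if len(residues) == 1 and column.count('-') == 0:
            pattern.append(residues.pop())      # conserved residue, keep literal
        else:
            pattern.append('.')                 # variable or gapped column, wildcard
    return ''.join(pattern)

# Toy alignment of an (imaginary) enzyme active-site region.
alignment = [
    "GHSLGAVT",
    "GHSQGAIT",
    "GHSMGAVT",
]
pattern = pattern_from_alignment(alignment)
print(pattern)                                   # GHS.GA.T
print(bool(re.search(pattern, "MKGHSAGAVTQ")))   # True: the pattern matches a new sequence
```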

  12. A protocol for the commissioning and quality assurance of new planning computers

    International Nuclear Information System (INIS)

    Ratcliffe, A.J.; Aukett, R.J.; Bolton, S.C.; Bonnett, D.E.

    1995-01-01

    Any new radiotherapy planning system needs to be thoroughly tested. Besides checking the accuracy of the algorithm by comparing plans done on the system with measurements done in a phantom, it is desirable for the user to compare the new equipment with a tried and tested system before it is used clinically. To test our recently purchased planning systems, a protocol was developed for running a comparison between these and our existing planning computer, an IGE RTPLAN. A summary of the test protocol that was developed is as follows: (1) A series of plans is created on the old system, to include at least one plan of each common type. The series includes at least one plan with a bone inhomogeneity, and one with an air or lung inhomogeneity, and these plans are computed both with and without inhomogeneity correction. Point dose calculations are made for a number of positions on each plan, including the dose at the centre of the treatment volume. (2) Each of these plans is reproduced as accurately as possible on the new system using the original CT data and patient outlines. (3) The old and new plans, including those with and without inhomogeneity correction are overlaid and compared using the following criteria: (a) how well the volumes of interest coincide, (b) how accurately the positions of the points of interest are reproduced, (c) the doses at the points of interest, (d) the distances between the isodoses defining the dose plateau, (e) the maximum displacement between the corresponding pairs of isodoses in the dose gradient around the tumour. The protocol has been used to test two systems: the (newly developed) Siemens Axiom and the Helax TMS (running on a DEC Alpha). A summary of the results obtained will be presented. These were sufficient to show up several minor problems, particularly in the Axiom system

  13. Delamination detection using methods of computational intelligence

    Science.gov (United States)

    Ihesiulor, Obinna K.; Shankar, Krishna; Zhang, Zhifang; Ray, Tapabrata

    2012-11-01

    A reliable delamination prediction scheme is indispensable in order to prevent potential risks of catastrophic failures in composite structures. The existence of delaminations changes the vibration characteristics of composite laminates and hence such indicators can be used to quantify the health characteristics of laminates. An approach for online health monitoring of in-service composite laminates is presented in this paper that relies on methods based on computational intelligence. Typical changes in the observed vibration characteristics (i.e., changes in natural frequencies) are considered as inputs to identify the existence, location and magnitude of delaminations. The performance of the proposed approach is demonstrated using numerical models of composite laminates. Since this identification problem essentially involves the solution of an optimization problem, the use of finite element (FE) methods as the underlying tool for analysis turns out to be computationally expensive. A surrogate-assisted optimization approach is hence introduced to contain the computational time within affordable limits. An artificial neural network (ANN) model with Bayesian regularization is used as the underlying approximation scheme, while an improved rate of convergence is achieved using a memetic algorithm. However, building of ANN surrogate models usually requires large training datasets. K-means clustering is effectively employed to reduce the size of datasets. ANN is also used via inverse modeling to determine the position, size and location of delaminations using changes in measured natural frequencies. The results clearly highlight the efficiency and the robustness of the approach.

  14. Method of generating a computer readable model

    DEFF Research Database (Denmark)

    2008-01-01

    A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method further comprises determining a first connection element of the first construction element and a second connection element of the second construction element located in a predetermined proximity of each other; and retrieving connectivity information of the corresponding connection types of the first
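
    A minimal data-structure sketch of the idea described above, assuming nothing beyond the abstract: construction elements carry typed connection elements, and a connection is established when two connection elements are within a given proximity and their types are listed as compatible. All names, types and coordinates are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class ConnectionElement:
    kind: str                         # predetermined connection type, e.g. 'stud' or 'tube'
    position: tuple                   # coordinates of the connection point (x, y, z)

@dataclass
class ConstructionElement:
    name: str
    connections: list = field(default_factory=list)

# Connectivity information for pairs of connection types (an assumed table).
COMPATIBLE = {('stud', 'tube'), ('tube', 'stud')}

def find_connection(elem_a, elem_b, proximity=0.5):
    """Return the first pair of connection elements of two construction elements
    that lie within the given proximity and have compatible connection types."""
    for ca in elem_a.connections:
        for cb in elem_b.connections:
            dist = sum((p - q) ** 2 for p, q in zip(ca.position, cb.position)) ** 0.5
            if dist <= proximity and (ca.kind, cb.kind) in COMPATIBLE:
                return ca, cb
    return None

brick = ConstructionElement("brick", [ConnectionElement("stud", (0.0, 0.0, 1.0))])
plate = ConstructionElement("plate", [ConnectionElement("tube", (0.0, 0.0, 1.2))])
print(find_connection(brick, plate))  # the stud/tube pair, 0.2 apart and type-compatible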

  15. Secure Multi-party Computation Protocol for Defense Applications in Military Operations Using Virtual Cryptography

    Science.gov (United States)

    Pathak, Rohit; Joshi, Satyadhar

    With the advent of the 21st century, the whole world has been facing the common dilemma of terrorism. The suicide attacks on the US twin towers on 11 Sept. 2001, the train bombings in Madrid, Spain, on 11 Mar. 2004, the London bombings of 7 Jul. 2005 and the Mumbai attack of 26 Nov. 2008 were some of the most disturbing, destructive and evil acts by terrorists in the last decade, which have clearly shown their intent to go to any extent to accomplish their goals. Many terrorist organizations such as al Quaida, Harakat ul-Mujahidin, Hezbollah, Jaish-e-Mohammed, Lashkar-e-Toiba, etc. are carrying out training camps and terrorist operations which are accompanied with the latest technology and a high-tech arsenal. To counter such terrorism our military is in need of advanced defense technology. One of the major issues of concern is secure communication. It has to be made sure that communication between different military forces is secure so that critical information is not leaked to the adversary. Military forces need secure communication to shield their confidential data from terrorist forces. Leakage of the concerned data can prove hazardous, thus its preservation and security are of prime importance. There may be a need to perform computations that require data from many military forces, but in some cases the associated forces would not want to reveal their data to other forces. In such situations Secure Multi-party Computation finds its application. In this paper, we propose a new highly scalable Secure Multi-party Computation (SMC) protocol and algorithm for defense applications which can be used to perform computation on encrypted data. Every party encrypts their data in accordance with a particular scheme. This encrypted data is distributed among some created virtual parties. These virtual parties send their data to the TTP through an anonymizer layer. The TTP performs computation on the encrypted data and announces the result. As the data sent was encrypted, its actual value can’t be known by the TTP
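
    The sketch below is not the virtual-party/anonymizer/TTP protocol proposed in the paper; it shows a standard and much simpler secure multi-party computation building block, additive secret sharing, in which several parties jointly compute a sum without any single party (or aggregator) seeing an individual input. The values and party count are illustrative.

```python
import random

PRIME = 2**61 - 1   # arithmetic is done modulo a large prime

def share(secret, n_parties):
    """Split an integer secret into n additive shares that sum to it modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

# Three forces each hold a confidential value (e.g. troop counts).
secrets = [1200, 830, 2045]
n = len(secrets)

# Each force splits its value and hands share j to party j (a matrix of shares).
share_matrix = [share(s, n) for s in secrets]

# Each party locally adds the shares it received -- it learns nothing about any input.
partial_sums = [sum(share_matrix[i][j] for i in range(n)) % PRIME for j in range(n)]

# Combining the partial sums reveals only the total.
total = sum(partial_sums) % PRIME
print(total)   # 4075, while no single party ever saw another force's raw value
```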

  16. Efficient computation method of Jacobian matrix

    International Nuclear Information System (INIS)

    Sasaki, Shinobu

    1995-05-01

    As is well known, the elements of the Jacobian matrix are complex trigonometric functions of the joint angles, resulting in a matrix of staggering complexity when written out in full. This article shows how the difficulties of this subject are overcome by using a velocity representation. The main point is that its recursive algorithm and computer algebra technologies allow us to derive the analytical formulation with no human intervention. In particular, it is to be noted that, compared to previous results, the elements are extremely simplified through the effective use of frame transformations. Furthermore, in the case of a spherical wrist, it is shown that the present approach is computationally the most efficient. Due to such advantages, the proposed method is useful in studying kinematically peculiar properties such as singularity problems. (author)
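
    As a small illustration of deriving a manipulator Jacobian by computer algebra with no hand derivation (this is a generic SymPy example for a planar two-link arm, not the velocity-representation recursion of the report; all symbols are illustrative):

```python
import sympy as sp

# Joint angles and link lengths of a planar two-link arm (illustrative example).
q1, q2, l1, l2 = sp.symbols('q1 q2 l1 l2')

# Forward kinematics of the end-effector position.
px = l1 * sp.cos(q1) + l2 * sp.cos(q1 + q2)
py = l1 * sp.sin(q1) + l2 * sp.sin(q1 + q2)

# The Jacobian follows from symbolic differentiation -- no manual derivation needed.
J = sp.Matrix([px, py]).jacobian([q1, q2])
sp.pprint(sp.simplify(J))

# Singularities occur where the Jacobian loses rank, i.e. det(J) = 0.
print(sp.simplify(J.det()))      # l1*l2*sin(q2): singular when q2 = 0 or pi
```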

  17. Computational method for free surface hydrodynamics

    International Nuclear Information System (INIS)

    Hirt, C.W.; Nichols, B.D.

    1980-01-01

    There are numerous flow phenomena in pressure vessel and piping systems that involve the dynamics of free fluid surfaces. For example, fluid interfaces must be considered during the draining or filling of tanks, in the formation and collapse of vapor bubbles, and in seismically shaken vessels that are partially filled. To aid in the analysis of these types of flow phenomena, a new technique has been developed for the computation of complicated free-surface motions. This technique is based on the concept of a local average volume of fluid (VOF) and is embodied in a computer program for two-dimensional, transient fluid flow called SOLA-VOF. The basic approach used in the VOF technique is briefly described, and compared to other free-surface methods. Specific capabilities of the SOLA-VOF program are illustrated by generic examples of bubble growth and collapse, flows of immiscible fluid mixtures, and the confinement of spilled liquids

  18. Soft Computing Methods for Disulfide Connectivity Prediction.

    Science.gov (United States)

    Márquez-Chamorro, Alfonso E; Aguilar-Ruiz, Jesús S

    2015-01-01

    The problem of protein structure prediction (PSP) is one of the main challenges in structural bioinformatics. To tackle this problem, PSP can be divided into several subproblems. One of these subproblems is the prediction of disulfide bonds. The disulfide connectivity prediction problem consists in identifying which nonadjacent cysteines would be cross-linked from all possible candidates. Determining the disulfide bond connectivity between the cysteines of a protein is desirable as a previous step of the 3D PSP, as the protein conformational search space is highly reduced. The most representative soft computing approaches for the disulfide bonds connectivity prediction problem of the last decade are summarized in this paper. Certain aspects, such as the different methodologies based on soft computing approaches (artificial neural network or support vector machine) or features of the algorithms, are used for the classification of these methods.

  19. Regressive Imagery in Creative Problem-Solving: Comparing Verbal Protocols of Expert and Novice Visual Artists and Computer Programmers

    Science.gov (United States)

    Kozbelt, Aaron; Dexter, Scott; Dolese, Melissa; Meredith, Daniel; Ostrofsky, Justin

    2015-01-01

    We applied computer-based text analyses of regressive imagery to verbal protocols of individuals engaged in creative problem-solving in two domains: visual art (23 experts, 23 novices) and computer programming (14 experts, 14 novices). Percentages of words involving primary process and secondary process thought, plus emotion-related words, were…

  20. Computational methods for nuclear criticality safety analysis

    International Nuclear Information System (INIS)

    Maragni, M.G.

    1992-01-01

    Nuclear criticality safety analyses require the utilization of methods which have been tested and verified against benchmark results. In this work, criticality calculations based on the KENO-IV and MCNP codes are studied, aiming at the qualification of these methods at the IPEN-CNEN/SP and COPESP. The utilization of variance reduction techniques is important to reduce the computer execution time, and several of them are analysed. As a practical example of the above methods, a criticality safety analysis for the storage tubes for irradiated fuel elements from the IEA-R1 research reactor has been carried out. This analysis showed that the MCNP code is more adequate for problems with complex geometries, and the KENO-IV code shows conservative results when the generalized geometry option is not used. (author)

  1. Evolutionary Computing Methods for Spectral Retrieval

    Science.gov (United States)

    Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seugwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Geivanna

    2009-01-01

    A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.
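
    The following toy genetic algorithm illustrates the retrieval idea described above, fitting two parameters of a synthetic Gaussian absorption line so that a fitness function measuring the dissimilarity between "observed" and synthetic spectra is minimized. It is a minimal sketch under invented data and parameters, not the NASA software or its forward models.

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(0.0, 1.0, 200)

def synthetic_spectrum(depth, width):
    """Toy forward model: a flat continuum with a Gaussian absorption line at 0.5."""
    return 1.0 - depth * np.exp(-((wavelengths - 0.5) / width) ** 2)

# "Observed" spectrum generated from hidden parameters plus noise.
observed = synthetic_spectrum(0.3, 0.05) + rng.normal(0.0, 0.005, wavelengths.size)

def fitness(params):
    """Degree of dissimilarity between observed and synthetic spectra (lower is better)."""
    depth, width = params
    return np.mean((observed - synthetic_spectrum(depth, width)) ** 2)

# A minimal genetic algorithm: truncation selection plus Gaussian mutation (no crossover).
pop = rng.uniform([0.0, 0.01], [1.0, 0.2], size=(40, 2))
for generation in range(60):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[:10]]                 # keep the 10 fittest
    children = parents[rng.integers(0, 10, size=30)] + rng.normal(0.0, 0.01, size=(30, 2))
    children = np.clip(children, [0.0, 0.01], [1.0, 0.2])  # stay inside parameter bounds
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(p) for p in pop])]
print("retrieved depth, width:", best)   # should land close to the hidden values (0.3, 0.05)
```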

  2. Why standard brain-computer interface (BCI) training protocols should be changed: an experimental study

    Science.gov (United States)

    Jeunet, Camille; Jahanpour, Emilie; Lotte, Fabien

    2016-06-01

    Objective. While promising, electroencephalography-based brain-computer interfaces (BCIs) are barely used due to their lack of reliability: 15% to 30% of users are unable to control a BCI. Standard training protocols may be partly responsible as they do not satisfy recommendations from psychology. Our main objective was to determine in practice to what extent standard training protocols impact users’ motor imagery-based BCI (MI-BCI) control performance. Approach. We performed two experiments. The first consisted in evaluating the efficiency of a standard BCI training protocol for the acquisition of non-BCI related skills in a BCI-free context, which enabled us to rule out the possible impact of BCIs on the training outcome. Thus, participants (N = 54) were asked to perform simple motor tasks. The second experiment was aimed at measuring the correlations between motor tasks and MI-BCI performance. The ten best and ten worst performers of the first study were recruited for an MI-BCI experiment during which they had to learn to perform two MI tasks. We also assessed users’ spatial ability and pre-training μ rhythm amplitude, as both have been related to MI-BCI performance in the literature. Main results. Around 17% of the participants were unable to learn to perform the motor tasks, which is close to the BCI illiteracy rate. This suggests that standard training protocols are suboptimal for skill teaching. No correlation was found between motor tasks and MI-BCI performance. However, spatial ability played an important role in MI-BCI performance. In addition, once the spatial ability covariable had been controlled for, using an ANCOVA, it appeared that participants who faced difficulty during the first experiment improved during the second while the others did not. Significance. These studies suggest that (1) standard MI-BCI training protocols are suboptimal for skill teaching, (2) spatial ability is confirmed as impacting on MI-BCI performance, and (3) when faced

  3. A novel region-growing based semi-automatic segmentation protocol for three-dimensional condylar reconstruction using cone beam computed tomography (CBCT).

    Directory of Open Access Journals (Sweden)

    Tong Xi

    OBJECTIVE: To present and validate a semi-automatic segmentation protocol to enable an accurate 3D reconstruction of the mandibular condyles using cone beam computed tomography (CBCT). MATERIALS AND METHODS: Approval from the regional medical ethics review board was obtained for this study. Bilateral mandibular condyles in ten CBCT datasets of patients were segmented using the currently proposed semi-automatic segmentation protocol. This segmentation protocol combined 3D region-growing and local thresholding algorithms. The segmentation of a total of twenty condyles was performed by two observers. The Dice coefficient and distance map calculations were used to evaluate the accuracy and reproducibility of the segmented and 3D rendered condyles. RESULTS: The mean inter-observer Dice coefficient was 0.98 (range, 0.95-0.99). An average 90th percentile distance of 0.32 mm was found, indicating an excellent inter-observer similarity of the segmented and 3D rendered condyles. No systematic errors were observed in the currently proposed segmentation protocol. CONCLUSION: The novel semi-automated segmentation protocol is an accurate and reproducible tool to segment and render condyles in 3D. The implementation of this protocol in clinical practice allows the CBCT to be used as an imaging modality for the quantitative analysis of condylar morphology.
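
    The sketch below is not the clinical CBCT protocol; it only illustrates, on an invented 2D image, the two generic building blocks the abstract relies on: a seeded region-growing segmentation with an intensity window, and the Dice coefficient used to compare two segmentations.

```python
import numpy as np

def region_grow(image, seed, lower, upper):
    """Segment the connected region around `seed` whose pixel values fall in [lower, upper]."""
    mask = np.zeros(image.shape, dtype=bool)
    stack = [seed]
    while stack:
        y, x = stack.pop()
        if mask[y, x] or not (lower <= image[y, x] <= upper):
            continue
        mask[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):     # 4-connected neighbours
            ny, nx = y + dy, x + dx
            if 0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]:
                stack.append((ny, nx))
    return mask

def dice(a, b):
    """Dice coefficient between two binary masks (1.0 means identical segmentations)."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Synthetic "slice": a bright square object on a dark background.
image = np.zeros((50, 50))
image[15:35, 20:40] = 100.0

seg_observer1 = region_grow(image, seed=(25, 30), lower=50.0, upper=150.0)
seg_observer2 = image > 50.0                         # a second, threshold-only segmentation
print("Dice coefficient:", dice(seg_observer1, seg_observer2))   # 1.0 for this toy image
```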

  4. Geometric computations with interval and new robust methods applications in computer graphics, GIS and computational geometry

    CERN Document Server

    Ratschek, H

    2003-01-01

    This undergraduate and postgraduate text will familiarise readers with interval arithmetic and related tools to gain reliable and validated results and logically correct decisions for a variety of geometric computations, plus the means for alleviating the effects of the errors. It also considers computations on geometric point-sets, which are neither robust nor reliable when processed with standard methods. The authors provide two effective tools for obtaining correct results: (a) interval arithmetic, and (b) ESSA, the new powerful algorithm which improves many geometric computations and makes th
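
    A toy interval class (not ESSA, and without the outward rounding a real implementation needs) can illustrate the kind of validated result the book is about: when the input coordinates are only known to lie inside intervals, an orientation test either returns a guaranteed sign or reports that the data are too uncertain to decide. All numbers below are invented.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = (self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi)
        return Interval(min(products), max(products))

    def contains_zero(self):
        return self.lo <= 0.0 <= self.hi

# Sign of a 2x2 determinant (an orientation test) with uncertain coordinates:
# each coordinate is only known to lie inside a small interval.
ax, ay = Interval(0.999, 1.001), Interval(0.999, 1.001)
bx, by = Interval(1.999, 2.001), Interval(2.001, 2.003)

det = ax * by - ay * bx        # interval enclosure of the exact determinant
print(det)                      # the true value is guaranteed to lie in [det.lo, det.hi]
# The enclosure straddles zero here, so a validated algorithm refuses to guess the sign.
print("orientation decidable:", not det.contains_zero())
```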

  5. Methodics of computing the results of monitoring the exploratory gallery

    Directory of Open Access Journals (Sweden)

    Krúpa Víazoslav

    2000-09-01

    At the building site of the Višňové-Dubná skala motorway tunnel, priority is given to the driving of an exploration gallery that provides detailed geological, engineering-geological, hydrogeological and geotechnical research. This research is based on gathering information for the intended use of a full-profile driving machine to drive the motorway tunnel. In the part of the exploration gallery driven by the TBM method, information about the parameters of the driving process is gathered by a computer monitoring system mounted on the driving machine. This monitoring system is based on the industrial computer PC 104. It records 4 basic values of the driving process: the electromotor performance of the driving machine Voest-Alpine ATB 35HA, the speed of driving advance, the rotation speed of the disintegrating head of the TBM, and the total head pressure. The pressure force is evaluated from the pressure in the hydraulic cylinders of the machine. From these values, the strength of the rock mass, the angle of inner friction, etc. are mathematically calculated. These values characterize rock mass properties as well as their changes. To assess the effectiveness of the driving process, the specific energy and the working ability of the driving head are used. The article defines the method of computing the gathered monitoring information, prepared for the driving machine Voest-Alpine ATB 35H at the Institute of Geotechnics SAS. It describes the input forms (protocols) of the developed method created by an EXCEL program and shows selected samples of the graphical elaboration of the first monitoring results obtained from the exploratory gallery driving process in the Višňové-Dubná skala motorway tunnel.

  6. Multiparametric multidetector computed tomography scanning on suspicion of hyperacute ischemic stroke: validating a standardized protocol

    Directory of Open Access Journals (Sweden)

    Felipe Torres Pacheco

    2013-06-01

    Multidetector computed tomography (MDCT) scanning has enabled the early diagnosis of hyperacute brain ischemia. We aimed at validating a standardized protocol to read and report MDCT techniques in a series of adult patients. The inter-observer agreement among the trained examiners was tested, and their results were compared with a standard reading. No false positives were observed, and an almost perfect agreement (Kappa > 0.81) was documented when the CT angiography (CTA) and cerebral perfusion CT (CPCT) map data were added to the noncontrast CT (NCCT) analysis. The inter-observer agreement was higher for highly trained readers, corroborating the need for specific training to interpret these modern techniques. The authors recommend adding CTA and CPCT to the NCCT analysis in order to clarify the global analysis of structural and hemodynamic brain abnormalities. Our structured report is suitable as a script for the reproducible analysis of the MDCT of patients on suspicion of ischemic stroke.

  7. A computational method for sharp interface advection

    Science.gov (United States)

    Bredmose, Henrik; Jasak, Hrvoje

    2016-01-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face–interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM® extension and is published as open source. PMID:28018619

  8. A computational method for sharp interface advection.

    Science.gov (United States)

    Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje

    2016-11-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face-interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM ® extension and is published as open source.
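
    For contrast with the geometric face-flux idea described above, the sketch below advects a 1D volume-fraction field with a plain first-order upwind estimate of the fluid volume crossing each cell face. It is far simpler than isoAdvector (and smears the interface, which is exactly what geometric reconstruction avoids), but it shows the VOF bookkeeping of conserved, bounded volume transport; mesh size, velocity and time step are illustrative.

```python
import numpy as np

# 1D uniform mesh, constant velocity, periodic boundaries.
n_cells, dx, u, dt = 200, 1.0 / 200, 1.0, 0.4 / 200    # CFL = u*dt/dx = 0.4
alpha = np.zeros(n_cells)                               # volume fraction of fluid 1
alpha[40:80] = 1.0                                      # initial slab of fluid 1

def advect(alpha, steps):
    for _ in range(steps):
        # Upwind estimate of the fluid volume transported across each right face.
        flux = u * dt * alpha            # volume of fluid 1 leaving each cell to the right
        alpha = alpha - flux / dx + np.roll(flux, 1) / dx
    return alpha

alpha_new = advect(alpha.copy(), steps=250)
print("total volume before:", alpha.sum() * dx)
print("total volume after: ", alpha_new.sum() * dx)     # conserved exactly
print("min/max volume fraction:", alpha_new.min(), alpha_new.max())  # stays within [0, 1] for CFL < 1
```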

  9. Formal Analysis of SET and NSL Protocols Using the Interpretation Functions-Based Method

    Directory of Open Access Journals (Sweden)

    Hanane Houmani

    2012-01-01

    Most applications in the Internet such as e-banking and e-commerce use the SET and the NSL protocols to protect the communication channel between the client and the server. Thus, it is crucial to ensure that these protocols respect some security properties such as confidentiality, authentication, and integrity. In this paper, we analyze the SET and the NSL protocols with respect to the confidentiality (secrecy) property. To perform this analysis, we use the interpretation functions-based method. The main idea behind the interpretation functions-based technique is to give sufficient conditions that guarantee that a cryptographic protocol respects the secrecy property. The flexibility of the proposed conditions allows the verification of daily-life protocols such as SET and NSL. Also, this method could be used under different assumptions such as a variety of intruder abilities including algebraic properties of cryptographic primitives. The NSL protocol, for instance, is analyzed with and without the homomorphism property. We also show, using the SET protocol, the usefulness of this approach to correct weaknesses and problems discovered during the analysis.

  10. Computational electromagnetic methods for transcranial magnetic stimulation

    Science.gov (United States)

    Gomez, Luis J.

    Transcranial magnetic stimulation (TMS) is a noninvasive technique used both as a research tool for cognitive neuroscience and as an FDA-approved treatment for depression. During TMS, coils positioned near the scalp generate electric fields and activate targeted brain regions. In this thesis, several computational electromagnetics methods that improve the analysis, design, and uncertainty quantification of TMS systems were developed. Analysis: A new fast direct technique for solving the large and sparse linear systems of equations (LSEs) arising from the finite difference (FD) discretization of Maxwell's quasi-static equations was developed. Following a factorization step, the solver permits computation of TMS fields inside realistic brain models in seconds, allowing for patient-specific real-time usage during TMS. The solver is an alternative to iterative methods for solving FD LSEs, which often require run-times of minutes. A new integral equation (IE) method for analyzing TMS fields was developed. The human head is highly heterogeneous and characterized by high relative permittivities (~10^7). IE techniques for analyzing electromagnetic interactions with such media suffer from high-contrast and low-frequency breakdowns. A novel high-permittivity and low-frequency stable internally combined volume-surface IE method was developed. The method not only applies to the analysis of high-permittivity objects, but it is also the first IE tool that is stable when analyzing highly inhomogeneous negative-permittivity plasmas. Design: TMS applications call for electric fields to be sharply focused on regions that lie deep inside the brain. Unfortunately, fields generated by present-day Figure-8 coils stimulate relatively large regions near the brain surface. An optimization method for designing single-feed TMS coil-arrays capable of producing more localized and deeper stimulation was developed. Results show that the coil-arrays stimulate 2.4 cm into the head while stimulating 3

  11. Computational predictive methods for fracture and fatigue

    Science.gov (United States)

    Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.

    1994-09-01

    The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures ensure that damage developed during service remains below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specification MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulkheads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000 hour design service life and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion fatigue problems.

  12. Modules and methods for all photonic computing

    Science.gov (United States)

    Schultz, David R.; Ma, Chao Hung

    2001-01-01

    A method for all photonic computing, comprising the steps of: encoding a first optical/electro-optical element with a two dimensional mathematical function representing input data; illuminating the first optical/electro-optical element with a collimated beam of light; illuminating a second optical/electro-optical element with light from the first optical/electro-optical element, the second optical/electro-optical element having a characteristic response corresponding to an iterative algorithm useful for solving a partial differential equation; iteratively recirculating the signal through the second optical/electro-optical element with light from the second optical/electro-optical element for a predetermined number of iterations; and, after the predetermined number of iterations, optically and/or electro-optically collecting output data representing an iterative optical solution from the second optical/electro-optical element.

  13. Optical design teaching by computing graphic methods

    Science.gov (United States)

    Vazquez-Molini, D.; Muñoz-Luna, J.; Fernandez-Balbuena, A. A.; Garcia-Botella, A.; Belloni, P.; Alda, J.

    2012-10-01

    One of the key challenges in the teaching of Optics is that students need to know not only the mathematics of optical design but also, and more importantly, to grasp and understand optics in three-dimensional space. Having a clear image of the problem to solve is the first step towards solving it. Therefore, students must not only know the equation of the law of refraction but also understand how the main parameters of this law interact with one another; this should be a major goal of the teaching course. Optical graphic methods are a valuable tool in this respect, since they combine the advantage of visual information with the accuracy of a computer calculation.

  14. Genomics protocols [Methods in molecular biology, v. 175

    National Research Council Canada - National Science Library

    Starkey, Michael P; Elaswarapu, Ramnath

    2001-01-01

    ... exploiting the potential of gene therapy. Highlights include methods for the analysis of differential gene expression, SNP detection, comparative genomic hybridization, and the functional analysis of genes, as well as the use of bio...

  15. Estimation of the radiation exposure of a chest pain protocol with ECG-gating in dual-source computed tomography

    International Nuclear Information System (INIS)

    Ketelsen, Dominik; Luetkhoff, Marie H.; Thomas, Christoph; Werner, Matthias; Tsiflikas, Ilias; Reimann, Anja; Kopp, Andreas F.; Claussen, Claus D.; Heuschmid, Martin; Buchgeister, Markus; Burgstahler, Christof

    2009-01-01

    The aim of the study was to evaluate radiation exposure of a chest pain protocol with ECG-gated dual-source computed tomography (DSCT). An Alderson Rando phantom equipped with thermoluminescent dosimeters was used for dose measurements. Exposure was performed on a dual-source computed tomography system with a standard protocol for chest pain evaluation (120 kV, 320 mAs/rot) with different simulated heart rates (HRs). The dose of a standard chest CT examination (120 kV, 160 mAs) was also measured. Effective dose of the chest pain protocol was 19.3/21.9 mSv (male/female, HR 60), 17.9/20.4 mSv (male/female, HR 80) and 14.7/16.7 mSv (male/female, HR 100). Effective dose of a standard chest examination was 6.3 mSv (males) and 7.2 mSv (females). Radiation dose of the chest pain protocol increases significantly with a lower heart rate for both males (p = 0.040) and females (p = 0.044). The average radiation dose of a standard chest CT examination is about 36.5% that of a CT examination performed for chest pain. Using DSCT, the evaluated chest pain protocol revealed a higher radiation exposure compared with standard chest CT. Furthermore, HRs markedly influenced the dose exposure when using the ECG-gated chest pain protocol. (orig.)
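    The quoted 36.5% figure can be reproduced from the effective doses reported above, assuming it is the ratio of the mean standard-chest dose to the mean chest-pain-protocol dose over both sexes and all three heart rates; a few lines of Python arithmetic make the check explicit.

        # Effective doses (mSv) reported above for the ECG-gated chest pain protocol
        chest_pain = [19.3, 21.9, 17.9, 20.4, 14.7, 16.7]   # HR 60/80/100, male/female
        standard_chest = [6.3, 7.2]                          # male, female

        ratio = (sum(standard_chest) / len(standard_chest)) / (sum(chest_pain) / len(chest_pain))
        print(f"{100 * ratio:.1f} %")   # -> 36.5 %, matching the figure quoted above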

  16. RT-PCR Protocols - Methods in Molecular Biology

    Directory of Open Access Journals (Sweden)

    Manuela Monti

    2011-03-01

    Full Text Available “The first record I have of it, is when I made a computer file which I usually did whenever I had an idea, that would have been on the Monday when I got back, and I called it Chain Reaction.POL, meaning polymerase. That was the identifier for it and later I called the thing the Polymerase Chain Reaction, which a lot of people thought was a dumb name for it, but it stuck, and it became PCR”. With these words the Nobel prize winner, Kary Mullis, explains how he named the PCR: one of the most important techniques ever invented and currently used in molecular biology. This book, “RT-PCR Protocols”, covers a wide range of aspects important for setting up a PCR experiment, for both beginners and advanced users. In my opinion the book is very well structured in three different sections. The first one describes the different technologies now available, like competitive RT-PCR, nested RT-PCR or RT-PCR for cloning. An important part concerns the use of PCR in single-cell mouse embryos, stressing how important...

  17. Computed tomography shielding methods: a literature review.

    Science.gov (United States)

    Curtis, Jessica Ryann

    2010-01-01

    To investigate available shielding methods in an effort to further awareness and understanding of existing preventive measures related to patient exposure in computed tomography (CT) scanning. Searches were conducted to locate literature discussing the effectiveness of commercially available shields. Literature containing information regarding breast, gonad, eye and thyroid shielding was identified. Because of rapidly advancing technology, the selection of articles was limited to those published within the past 5 years. The selected studies were examined using the following topics as guidelines: the effectiveness of the shield (percentage of dose reduction), the shield's effect on image quality, arguments for or against its use (including practicality) and overall recommendation for its use in clinical practice. Only a limited number of studies have been performed on the use of shields for the eyes, thyroid and gonads, but the evidence shows an overall benefit to their use. Breast shielding has been the most studied shielding method, with consistent agreement throughout the literature on its effectiveness at reducing radiation dose. The effect of shielding on image quality was not remarkable in a majority of studies. Although it is noted that more studies need to be conducted regarding the impact on image quality, the currently published literature stresses the importance of shielding in reducing dose. Commercially available shields for the breast, thyroid, eyes and gonads should be implemented in clinical practice. Further research is needed to ascertain the prevalence of shielding in the clinical setting.

  18. Method-centered digital communities on protocols.io for fast-paced scientific innovation.

    Science.gov (United States)

    Kindler, Lori; Stoliartchouk, Alexei; Teytelman, Leonid; Hurwitz, Bonnie L

    2016-01-01

    The Internet has enabled online social interaction for scientists beyond physical meetings and conferences. Yet despite these innovations in communication, dissemination of methods is often relegated to just academic publishing. Further, these methods remain static, with subsequent advances published elsewhere and unlinked. For communities undergoing fast-paced innovation, researchers need new capabilities to share, obtain feedback, and publish methods at the forefront of scientific development. For example, a renaissance in virology is now underway given the new metagenomic methods to sequence viral DNA directly from an environment. Metagenomics makes it possible to "see" natural viral communities that could not be previously studied through culturing methods. Yet, the knowledge of specialized techniques for the production and analysis of viral metagenomes remains in a subset of labs.  This problem is common to any community using and developing emerging technologies and techniques. We developed new capabilities to create virtual communities in protocols.io, an open access platform, for disseminating protocols and knowledge at the forefront of scientific development. To demonstrate these capabilities, we present a virology community forum called VERVENet. These new features allow virology researchers to share protocols and their annotations and optimizations, connect with the broader virtual community to share knowledge, job postings, conference announcements through a common online forum, and discover the current literature through personalized recommendations to promote discussion of cutting edge research. Virtual communities in protocols.io enhance a researcher's ability to: discuss and share protocols, connect with fellow community members, and learn about new and innovative research in the field.  The web-based software for developing virtual communities is free to use on protocols.io. Data are available through public APIs at protocols.io.

  19. Scanning protocol of dual-source computed tomography for aortic dissection

    International Nuclear Information System (INIS)

    Zhai Mingchun; Wang Yongmei

    2013-01-01

    Objective: To find a dual-source CT scanning protocol which can obtain high image quality with low radiation dose for diagnosis of aortic dissection. Methods: A total of 120 patients with suspected aortic dissection were randomly and equally assigned into three groups. Patients in Group A underwent CTA with the prospectively electrocardiogram-gated high-pitch spiral mode (FLASH). Patients in Group B underwent CTA with the retrospectively electrocardiogram-gated spiral mode. Patients in Group C underwent CTA with the conventional, non-electrocardiogram-gated mode. The image quality, radiation dose, advantages and disadvantages of the three scan protocols were analyzed. Results: For image quality, seventeen, twenty-two and one patients in Group A were assigned grades 1, 2 and 3, respectively, and none grade 4; thirty-three and seven patients in Group B were assigned grades 1 and 2, respectively, and none grades 3 or 4; fourteen and twenty-six patients in Group C were assigned grades 3 and 4, respectively, and none grades 1 or 2. There was no significant difference in image quality between Groups A and B, and both were significantly higher than Group C. Mean effective radiation doses of Groups A, B and C were 7.7±0.4 mSv, 33.11±3.38 mSv, and 7.6±0.68 mSv, respectively. Group B was significantly higher than Groups A and C (P<0.05 for both), and there was no significant difference between Groups A and C (P=0.826). Conclusions: The prospectively electrocardiogram-gated high-pitch spiral mode can be the first-line protocol for evaluation of aortic dissection, achieving high image quality with low radiation dose. The conventional non-electrocardiogram-gated mode can be selectively used for Stanford type B aortic dissection. (authors)

  20. Computational methods in calculating superconducting current problems

    Science.gov (United States)

    Brown, David John, II

    Various computational problems in treating superconducting currents are examined. First, field inversion in spatial Fourier transform space is reviewed to obtain both one-dimensional transport currents flowing down a long thin tape, and a localized two-dimensional current. The problems associated with spatial high-frequency noise, created by finite resolution and experimental equipment, are presented, and resolved with a smooth Gaussian cutoff in spatial frequency space. Convergence of the Green's functions for the one-dimensional transport current densities is discussed, and particular attention is devoted to the negative effects of performing discrete Fourier transforms alone on fields asymptotically dropping like 1/r. Results of imaging simulated current densities are favorably compared to the original distributions after the resulting magnetic fields undergo the imaging procedure. The behavior of high-frequency spatial noise, and the behavior of the fields with a 1/r asymptote in the imaging procedure in our simulations is analyzed, and compared to the treatment of these phenomena in the published literature. Next, we examine calculation of Mathieu and spheroidal wave functions, solutions to the wave equation in elliptical cylindrical and oblate and prolate spheroidal coordinates, respectively. These functions are also solutions to Schrodinger's equations with certain potential wells, and are useful in solving time-varying superconducting problems. The Mathieu functions are Fourier expanded, and the spheroidal functions expanded in associated Legendre polynomials to convert the defining differential equations to recursion relations. The infinite number of linear recursion equations is converted to an infinite matrix, multiplied by a vector of expansion coefficients, thus becoming an eigenvalue problem. The eigenvalue problem is solved with root solvers, and the eigenvector problem is solved using a Jacobi-type iteration method, after preconditioning the
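    The smooth Gaussian cutoff in spatial-frequency space described above can be sketched in a few lines. The 1-D signal, noise level and cutoff frequency below are purely illustrative stand-ins for the measured magnetic-field data, not the author's actual inversion procedure.

        import numpy as np

        # Toy 1-D "measured field": a smooth profile plus high-frequency noise.
        n, dx = 512, 0.01
        x = np.arange(n) * dx
        rng = np.random.default_rng(0)
        signal = np.exp(-((x - 2.56) / 0.5) ** 2)
        field = signal + 0.05 * rng.standard_normal(n)

        # Smooth Gaussian cutoff in spatial-frequency space (a hard cutoff would ring).
        k = np.fft.fftfreq(n, d=dx)          # spatial frequencies
        k_cut = 5.0                          # illustrative cutoff frequency
        window = np.exp(-(k / k_cut) ** 2)   # Gaussian low-pass window
        filtered = np.fft.ifft(np.fft.fft(field) * window).real

        # RMS deviation from the clean profile drops after filtering.
        print(np.std(field - signal), np.std(filtered - signal))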

  1. A web-based computer-tailored smoking prevention programme for primary school children: intervention design and study protocol

    Science.gov (United States)

    2012-01-01

    Background Although the number of smokers has declined in the last decade, smoking is still a major health problem among youngsters and adolescents. For this reason, there is a need for effective smoking prevention programmes targeting primary school children. A web-based computer-tailored feedback programme may be an effective intervention to stimulate youngsters not to start smoking, and increase their knowledge about the adverse effects of smoking and their attitudes and self-efficacy regarding non-smoking. Methods & design This paper describes the development and evaluation protocol of a web-based out-of-school smoking prevention programme for primary school children (age 10-13 years) entitled ‘Fun without Smokes’. It is a transformation of a postal mailed intervention to a web-based intervention. Besides this transformation the effects of prompts will be examined. This web-based intervention will be evaluated in a 2-year cluster randomised controlled trial (c-RCT) with three study arms. An intervention and intervention + prompt condition will be evaluated for effects on smoking behaviour, compared with a no information control condition. Information about pupils’ smoking status and other factors related to smoking will be obtained using a web-based questionnaire. After completing the questionnaire pupils in both intervention conditions will receive three computer-tailored feedback letters in their personal e-mail box. Attitudes, social influences and self-efficacy expectations will be the content of these personalised feedback letters. Pupils in the intervention + prompt condition will - in addition to the personalised feedback letters - receive e-mail and SMS messages prompting them to revisit the ‘Fun without Smokes’ website. The main outcome measures will be ever smoking and the utilisation of the ‘Fun without Smokes’ website. Measurements will be carried out at baseline, 12 months and 24 months of follow-up. Discussion The present study

  2. A web-based computer-tailored smoking prevention programme for primary school children: intervention design and study protocol

    Directory of Open Access Journals (Sweden)

    Cremers Henricus-Paul

    2012-06-01

    Full Text Available Abstract Background Although the number of smokers has declined in the last decade, smoking is still a major health problem among youngsters and adolescents. For this reason, there is a need for effective smoking prevention programmes targeting primary school children. A web-based computer-tailored feedback programme may be an effective intervention to stimulate youngsters not to start smoking, and increase their knowledge about the adverse effects of smoking and their attitudes and self-efficacy regarding non-smoking. Methods & design This paper describes the development and evaluation protocol of a web-based out-of-school smoking prevention programme for primary school children (age 10-13 years) entitled ‘Fun without Smokes’. It is a transformation of a postal mailed intervention to a web-based intervention. Besides this transformation the effects of prompts will be examined. This web-based intervention will be evaluated in a 2-year cluster randomised controlled trial (c-RCT) with three study arms. An intervention and intervention + prompt condition will be evaluated for effects on smoking behaviour, compared with a no information control condition. Information about pupils’ smoking status and other factors related to smoking will be obtained using a web-based questionnaire. After completing the questionnaire pupils in both intervention conditions will receive three computer-tailored feedback letters in their personal e-mail box. Attitudes, social influences and self-efficacy expectations will be the content of these personalised feedback letters. Pupils in the intervention + prompt condition will - in addition to the personalised feedback letters - receive e-mail and SMS messages prompting them to revisit the ‘Fun without Smokes’ website. The main outcome measures will be ever smoking and the utilisation of the ‘Fun without Smokes’ website. Measurements will be carried out at baseline, 12 months and 24 months of follow-up.

  3. Investigation of optimal scanning protocol for X-ray computed tomography polymer gel dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Sellakumar, P. [Bangalore Institute of Oncology, 44-45/2, II Cross, RRMR Extension, Bangalore 560 027 (India)], E-mail: psellakumar@rediffmail.com; James Jebaseelan Samuel, E. [School of Science and Humanities, VIT University, Vellore 632 014 (India); Supe, Sanjay S. [Department of Radiation Physics, Kidwai Memorial Institute of Oncology, Hosur Road, Bangalore 560 027 (India)

    2007-11-15

    X-ray computed tomography is one of the potential tools for evaluating polymer gel dosimeters in three dimensions. The purpose of this study is to investigate the factors which affect image noise in X-ray CT polymer gel dosimetry. A cylindrical water-filled phantom was imaged with a single-slice Siemens Somatom Emotion CT scanner. Imaging parameters such as tube voltage, tube current, slice scan time, slice thickness and reconstruction algorithm were varied independently to study the dependence of noise on each parameter. The reduction of noise with the number of images averaged, and the spatial uniformity of the image, were also investigated. The normoxic polymer gel PAGAT was manufactured and irradiated using a Siemens Primus linear accelerator. The radiation-induced change in CT number was evaluated using the X-ray CT scanner. From this study it is clear that image noise is reduced with increasing tube voltage, tube current, slice scan time and slice thickness, and also with increasing the number of images averaged. However, to reduce the tube load and total scan time, it was concluded that a tube voltage of 130 kV, a tube current of 200 mA, a scan time of 1.5 s, and a slice thickness of 3 mm for high dose gradients and 5 mm for low dose gradients constitute the optimal scanning protocol for this scanner. The optimum number of images to be averaged was concluded to be 25 for X-ray CT polymer gel dosimetry. The choice of reconstruction algorithm was also critical. The study also shows that the CT number increases with imaging tube voltage, demonstrating the energy dependence of the polymer gel dosimeter. Hence, evaluation of polymer gel dosimeters with an X-ray CT scanner requires optimization of the scanning protocol to reduce image noise.
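    The reported reduction of noise with the number of averaged images follows the familiar 1/sqrt(N) behaviour of averaging independent noise. A minimal simulation sketch, with a uniform synthetic phantom and an arbitrary noise level standing in for real CT slices:

        import numpy as np

        rng = np.random.default_rng(1)
        true_slice = np.full((64, 64), 10.0)   # uniform phantom, arbitrary CT numbers
        sigma = 5.0                             # per-image noise level (illustrative)

        for n_avg in (1, 4, 9, 25):
            stack = true_slice + sigma * rng.standard_normal((n_avg, 64, 64))
            averaged = stack.mean(axis=0)
            # Measured noise falls roughly as sigma / sqrt(n_avg): ~5.0, 2.5, 1.7, 1.0
            print(n_avg, round(float(averaged.std()), 2))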

  4. Practical considerations for optimizing cardiac computed tomography protocols for comprehensive acquisition prior to transcatheter aortic valve replacement.

    Science.gov (United States)

    Khalique, Omar K; Pulerwitz, Todd C; Halliburton, Sandra S; Kodali, Susheel K; Hahn, Rebecca T; Nazif, Tamim M; Vahl, Torsten P; George, Isaac; Leon, Martin B; D'Souza, Belinda; Einstein, Andrew J

    2016-01-01

    Transcatheter aortic valve replacement (TAVR) is performed frequently in patients with severe, symptomatic aortic stenosis who are at high risk or inoperable for open surgical aortic valve replacement. Computed tomography angiography (CTA) has become the gold standard imaging modality for pre-TAVR cardiac anatomic and vascular access assessment. Traditionally, cardiac CTA has been most frequently used for assessment of coronary artery stenosis, and scanning protocols have generally been tailored for this purpose. Pre-TAVR CTA has different goals than coronary CTA and the high prevalence of chronic kidney disease in the TAVR patient population creates a particular need to optimize protocols for a reduction in iodinated contrast volume. This document reviews details which allow the physician to tailor CTA examinations to maximize image quality and minimize harm, while factoring in multiple patient and scanner variables which must be considered in customizing a pre-TAVR protocol. Copyright © 2016 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.

  5. Computational Studies of Protein Hydration Methods

    Science.gov (United States)

    Morozenko, Aleksandr

    It is widely appreciated that water plays a vital role in protein function. Long-range proton transfer inside proteins is usually carried out by the Grotthuss mechanism and requires a chain of hydrogen bonds composed of internal water molecules and amino acid residues of the protein. In other cases, water molecules can facilitate enzyme catalytic reactions by becoming a temporary proton donor/acceptor. Yet a reliable way of predicting water in the protein interior is still not available to the biophysics community. This thesis presents computational studies performed to gain insight into the problem of fast and accurate prediction of potential water sites inside internal cavities of a protein. Specifically, we focus on the task of attaining correspondence between results obtained from computational experiments and experimental data available from X-ray structures. An overview of existing methods of predicting water molecules in the interior of a protein, along with a discussion of the trustworthiness of these predictions, is a second major subject of this thesis. A description of the differences between water molecules in various media, particularly gas, liquid and the protein interior, and the theoretical aspects of designing an adequate model of water for the protein environment, are discussed in chapters 3 and 4. In chapter 5, we discuss recently developed methods for placing water molecules into internal cavities of a protein. We propose a new methodology based on the principle of docking water molecules to the protein body, which achieves a higher degree of agreement with the experimental data reported in protein crystal structures than other techniques available in biophysical software. The new methodology is tested on a set of high-resolution crystal structures of oligopeptide-binding protein (OppA) containing a large number of resolved internal water molecules and applied to bovine heart cytochrome c oxidase in the fully

  6. Developing an optimum protocol for thermoluminescence dosimetry with gr-200 chips using Taguchi method

    International Nuclear Information System (INIS)

    Sadeghi, Maryam; Faghihi, Reza; Sina, Sedigheh

    2017-01-01

    Thermoluminescence dosimetry (TLD) is a powerful technique with wide applications in personal, environmental and clinical dosimetry. The optimum annealing, storage and reading protocols are very effective in accuracy of TLD response. The purpose of this study is to obtain an optimum protocol for GR-200; LiF: Mg, Cu, P, by optimizing the effective parameters, to increase the reliability of the TLD response using Taguchi method. Taguchi method has been used in this study for optimization of annealing, storage and reading protocols of the TLDs. A number of 108 GR-200 chips were divided into 27 groups, each containing four chips. The TLDs were exposed to three different doses, and stored, annealed and read out by different procedures as suggested by Taguchi Method. By comparing the signal-to-noise ratios the optimum dosimetry procedure was obtained. According to the results, the optimum values for annealing temperature (°C), Annealing Time (s), Annealing to Exposure time (d), Exposure to Readout time (d), Pre-heat Temperature (°C), Pre-heat Time (s), Heating Rate (°C/s), Maximum Temperature of Readout (°C), readout time (s) and Storage Temperature (°C) are 240, 90, 1, 2, 50, 0, 15, 240, 13 and -20, respectively. Using the optimum protocol, an efficient glow curve with low residual signals can be achieved. Using optimum protocol obtained by Taguchi method, the dosimetry can be effectively performed with great accuracy. (authors)
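    Taguchi designs rank parameter combinations by a signal-to-noise ratio, as the abstract notes. The sketch below shows two standard textbook S/N variants; the chip readings are invented, and which variant is appropriate for TLD response accuracy is an assumption rather than something stated in the study.

        import numpy as np

        def sn_smaller_is_better(y):
            """S/N = -10 log10(mean(y^2)); used when the response should be minimal."""
            y = np.asarray(y, dtype=float)
            return -10.0 * np.log10(np.mean(y ** 2))

        def sn_nominal_is_best(y):
            """S/N = 10 log10(mean^2 / variance); used when the response should hit a target."""
            y = np.asarray(y, dtype=float)
            return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))

        # Invented readings of the four chips in two of the 27 Taguchi groups.
        group_a = [101.2, 99.8, 100.5, 100.1]   # tight spread -> higher S/N
        group_b = [95.0, 104.0, 98.5, 107.1]    # wide spread  -> lower S/N
        print(sn_nominal_is_best(group_a), sn_nominal_is_best(group_b))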

  7. A computational protocol for the study of circularly polarized phosphorescence and circular dichroism in spin-forbidden absorption

    DEFF Research Database (Denmark)

    Kaminski, Maciej; Cukras, Janusz; Pecul, Magdalena

    2015-01-01

    We present a computational methodology to calculate the intensity of circular dichroism (CD) in spin-forbidden absorption and of circularly polarized phosphorescence (CPP) signals, a manifestation of the optical activity of the triplet–singlet transitions in chiral compounds. The protocol is based

  8. In vivo cellular imaging using fluorescent proteins - Methods and Protocols

    Directory of Open Access Journals (Sweden)

    M. Monti

    2012-12-01

    Full Text Available The discovery and genetic engineering of fluorescent proteins has revolutionized cell biology. What was previously invisible in the cell often can be made visible with the use of fluorescent proteins. With these words, Robert M. Hoffman introduces In Vivo Cellular Imaging Using Fluorescent Proteins, an eighteen-chapter book dedicated to describing how fluorescent proteins have changed the way cellular processes are analyzed in vivo. Modern research aims at new and less invasive methods able to follow the behaviour of different cell types in different biological contexts: for example, how cancer cells migrate or how they respond to different therapies. In vivo systems can also help researchers to better understand animal embryonic development, as well as how fluorescent proteins may be used to monitor different processes in living organisms at the molecular and cellular level.

  9. Computational method and system for modeling, analyzing, and optimizing DNA amplification and synthesis

    Science.gov (United States)

    Vandersall, Jennifer A.; Gardner, Shea N.; Clague, David S.

    2010-05-04

    A computational method and computer-based system of modeling DNA synthesis for the design and interpretation of PCR amplification, parallel DNA synthesis, and microarray chip analysis. The method and system include modules that address the bioinformatics, kinetics, and thermodynamics of DNA amplification and synthesis. Specifically, the steps of DNA selection, as well as the kinetics and thermodynamics of DNA hybridization and extensions, are addressed, which enable the optimization of the processing and the prediction of the products as a function of DNA sequence, mixing protocol, time, temperature and concentration of species.
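    One of the thermodynamic quantities such a model needs is the primer melting temperature. The sketch below uses the simple Wallace (2 + 4) rule, a rough approximation suitable only for short oligos; a full system like the one described would rely on nearest-neighbour thermodynamics instead, and the primer sequences here are invented.

        def wallace_tm(primer: str) -> float:
            """Rough melting temperature (°C) of a short oligo:
            2 °C per A/T plus 4 °C per G/C (Wallace rule)."""
            p = primer.upper()
            at = p.count("A") + p.count("T")
            gc = p.count("G") + p.count("C")
            return 2.0 * at + 4.0 * gc

        # Toy primer pair for a hypothetical PCR design.
        forward = "ATGCGTACCTGA"
        reverse = "TTAGCCGGATCA"
        print(wallace_tm(forward), wallace_tm(reverse))   # 36.0, 36.0 -- well matched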

  10. Replication protocol analysis: a method for the study of real-world design thinking

    DEFF Research Database (Denmark)

    Galle, Per; Kovacs, L. B.

    1996-01-01

    Given the brief of an architectural competition on site planning, and the design awarded the first prize, the first author (trained as an architect but not a participant in the competition) produced a line of reasoning that might have led from brief to design. In the paper, such ‘design replication......’ is refined into a method called ‘replication protocol analysis’ (RPA), and discussed from a methodological perspective of design research. It is argued that for the study of real-world design thinking this method offers distinct advantages over traditional ‘design protocol analysis’, which seeks to capture...

  11. A Survey of Automatic Protocol Reverse Engineering Approaches, Methods, and Tools on the Inputs and Outputs View

    OpenAIRE

    Baraka D. Sija; Young-Hoon Goo; Kyu-Seok Shim; Huru Hasanova; Myung-Sup Kim

    2018-01-01

    A network protocol defines rules that control communications between two or more machines on the Internet, whereas Automatic Protocol Reverse Engineering (APRE) defines the way of extracting the structure of a network protocol without accessing its specifications. Enough knowledge on undocumented protocols is essential for security purposes, network policy implementation, and management of network resources. This paper reviews and analyzes a total of 39 approaches, methods, and tools towards ...

  12. Pilot studies for the North American Soil Geochemical Landscapes Project - Site selection, sampling protocols, analytical methods, and quality control protocols

    Science.gov (United States)

    Smith, D.B.; Woodruff, L.G.; O'Leary, R. M.; Cannon, W.F.; Garrett, R.G.; Kilburn, J.E.; Goldhaber, M.B.

    2009-01-01

    In 2004, the US Geological Survey (USGS) and the Geological Survey of Canada sampled and chemically analyzed soils along two transects across Canada and the USA in preparation for a planned soil geochemical survey of North America. This effort was a pilot study to test and refine sampling protocols, analytical methods, quality control protocols, and field logistics for the continental survey. A total of 220 sample sites were selected at approximately 40-km intervals along the two transects. The ideal sampling protocol at each site called for a sample from a depth of 0-5 cm and a composite of each of the O, A, and C horizons. Concentrations of Ca, Fe, K, Mg, Na, S, Ti, Ag, As, Ba, Be, Bi, Cd, Ce, Co, Cr, Cs, Cu, Ga, In, La, Li, Mn, Mo, Nb, Ni, P, Pb, Rb, Sb, Sc, Sn, Sr, Te, Th, Tl, U, V, W, Y, and Zn were determined by inductively coupled plasma-mass spectrometry and inductively coupled plasma-atomic emission spectrometry following a near-total digestion in a mixture of HCl, HNO3, HClO4, and HF. Separate methods were used for Hg, Se, total C, and carbonate-C on this same size fraction. Only Ag, In, and Te had a large percentage of concentrations below the detection limit. Quality control (QC) of the analyses was monitored at three levels: the laboratory performing the analysis, the USGS QC officer, and the principal investigator for the study. This level of review resulted in an average of one QC sample for every 20 field samples, which proved to be minimally adequate for such a large-scale survey. Additional QC samples should be added to monitor within-batch quality to the extent that no more than 10 samples are analyzed between QC samples. Only Cr (77%), Y (82%), and Sb (80%) fell outside the acceptable limits of accuracy (% recovery between 85 and 115%) because of likely residence in mineral phases resistant to the acid digestion. A separate sample of 0-5-cm material was collected at each site for determination of organic compounds. A subset of 73 of these samples was analyzed for a suite of
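    The 85-115% recovery criterion mentioned above can be applied with a few lines of code. The certified and measured values below are invented for illustration and are not the study's data.

        def percent_recovery(measured: float, certified: float) -> float:
            return 100.0 * measured / certified

        # Invented results for a standard reference material (mg/kg).
        certified = {"Cu": 30.0, "Cr": 80.0, "Zn": 105.0}
        measured  = {"Cu": 31.2, "Cr": 60.5, "Zn": 101.0}

        for element, ref in certified.items():
            rec = percent_recovery(measured[element], ref)
            flag = "OK" if 85.0 <= rec <= 115.0 else "OUTSIDE LIMITS"
            print(f"{element}: {rec:.0f} %  {flag}")   # Cr fails, as a resistant phase might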

  13. A limited, low-dose computed tomography protocol to examine the sacroiliac joints

    International Nuclear Information System (INIS)

    Friedman, L.; Silberberg, P.J.; Rainbow, A.; Butler, R.

    1993-01-01

    Limited, low-dose, three-scan computed tomography (CT) was shown to be as accurate as a complete CT series in examining the sacroiliac joints and is suggested as an effective alternative to plain radiography as the primary means to detect sacroiliitis. The advantages include the brevity of the examination, a 2-fold to 4-fold reduction in radiation exposure relative to conventional radiography and a 20-fold to 30-fold reduction relative to a full CT series. The technique was developed from studies of anatomic specimens in which the articular surfaces were covered with a film of barium to show clearly the synovial surfaces and allow the choice of the most appropriate levels of section. From the anteroposterior scout view the following levels were defined: at the first sacral foramen, between the first and second sacral foramina and at the third sacral foramen. In the superior section a quarter of the sacroiliac joint is synovial, whereas in the inferior section the entire joint is synovial. The three representative cuts and the anteroposterior scout view are displayed on a single 14 x 17 in. (36 x 43 cm) film. Comparative images at various current strengths showed that at lower currents than conventionally used no diagnostic information was lost, despite a slight increase in noise. The referring physicians at the authors' institution prefer this protocol to the imaging routine previously used. (author). 21 refs., 1 tab., 4 figs

  14. Standardization and Optimization of Computed Tomography Protocols to Achieve Low-Dose

    Science.gov (United States)

    Chin, Cynthia; Cody, Dianna D.; Gupta, Rajiv; Hess, Christopher P.; Kalra, Mannudeep K.; Kofler, James M.; Krishnam, Mayil S.; Einstein, Andrew J.

    2014-01-01

    The increase in radiation exposure due to CT scans has been of growing concern in recent years. CT scanners differ in their capabilities and various indications require unique protocols, but there remains room for standardization and optimization. In this paper we summarize approaches to reduce dose, as discussed in lectures comprising the first session of the 2013 UCSF Virtual Symposium on Radiation Safety in Computed Tomography. The experience of scanning at low dose in different body regions, for both diagnostic and interventional CT procedures, is addressed. An essential primary step is justifying the medical need for each scan. General guiding principles for reducing dose include tailoring a scan to a patient, minimizing scan length, use of tube current modulation and minimizing tube current, minimizing tube potential, iterative reconstruction, and periodic review of CT studies. Organized efforts for standardization have been spearheaded by professional societies such as the American Association of Physicists in Medicine. Finally, all team members should demonstrate an awareness of the importance of minimizing dose. PMID:24589403

  15. Computing and physical methods to calculate Pu

    International Nuclear Information System (INIS)

    Mohamed, Ashraf Elsayed Mohamed

    2013-01-01

    The main limitations arising from increased plutonium content are related to the coolant void effect: as the spectrum becomes faster, the neutron flux in the thermal region tends towards zero and is concentrated in the region from 10 keV to 1 MeV. Thus, all captures by 240 Pu and 242 Pu in the thermal and epithermal resonances disappear, and the 240 Pu and 242 Pu contributions to the void effect become positive. The higher the Pu content and the poorer the Pu quality, the larger the void effect. Regarding core control in nominal or transient conditions, Pu enrichment leads to a decrease in the effective delayed neutron fraction (beta eff.) and in the efficiency of soluble boron and control rods. The Doppler effect also tends to decrease when Pu replaces U, so that in case of transients the core could diverge again if the control is not effective enough. As for the voiding effect, plutonium degradation and the accumulation of 240 Pu and 242 Pu after multiple recycling lead to spectrum hardening and to a decrease in control. One solution would be to use enriched boron in the soluble boron and shutdown rods. In this paper, I discuss advanced computing and physical methods to calculate Pu inside nuclear reactors and gloveboxes, the different solutions that can be used to overcome the difficulties affecting safety parameters and reactor performance, and the consequences of plutonium management on the whole fuel cycle, such as raw material savings and the fraction of nuclear electric power involved in Pu management. This is examined through two types of scenario: one involving a low fraction of the nuclear park dedicated to plutonium management, the other involving a dilution of the plutonium across the whole nuclear park. (author)

  16. Computational methods in sequence and structure prediction

    Science.gov (United States)

    Lang, Caiyi

    This dissertation is organized into two parts. In the first part, we discuss three computational methods for cis-regulatory element recognition in three different gene regulatory networks, as follows: (a) Using a comprehensive "Phylogenetic Footprinting Comparison" method, we investigate the promoter sequence structures of three enzymes (PAL, CHS and DFR) that catalyze sequential steps in the pathway from phenylalanine to anthocyanins in plants. Our results show that a putative cis-regulatory element "AC(C/G)TAC(C)" exists upstream of these enzyme genes. We propose that this cis-regulatory element is responsible for the genetic regulation of these three enzymes, and that it might also be the binding site for the MYB-class transcription factor PAP1. (b) We investigate the role of the Arabidopsis gene glutamate receptor 1.1 (AtGLR1.1) in C and N metabolism by utilizing the microarray data we obtained from AtGLR1.1-deficient lines (antiAtGLR1.1). We focus our investigation on the putatively co-regulated transcript profile of 876 genes collected in antiAtGLR1.1 lines. By (a) scanning for the occurrence of several groups of known abscisic acid (ABA) related cis-regulatory elements in the upstream regions of the 876 Arabidopsis genes, and (b) exhaustively scanning for all possible 6-10 bp motif occurrences in the upstream regions of the same set of genes, we are able to make a quantitative estimate of the enrichment level of each cis-regulatory element candidate. We conclude that one specific cis-regulatory element group, the "ABRE" elements, is statistically highly enriched within the 876-gene group compared to its occurrence within the genome. (c) We introduce a new general-purpose algorithm, called "fuzzy REDUCE1", which we have developed for automated cis-regulatory element identification. In the second part, we discuss our newly devised protein design framework. With this framework we have developed
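    The motif-counting step behind such an enrichment estimate can be sketched in a few lines. The upstream sequences below are invented, the regular expression only loosely mimics the AC(C/G)TAC(C) element quoted above, and a real analysis would compare against genome-wide background frequencies with a proper statistical test rather than this toy contrast.

        import re

        def count_occurrences(motif_regex: str, sequences) -> int:
            """Count (possibly overlapping) motif hits across a set of upstream regions."""
            pattern = re.compile(f"(?=({motif_regex}))")
            return sum(len(pattern.findall(seq)) for seq in sequences)

        # Invented upstream regions; "AC[CG]TACC?" loosely mimics AC(C/G)TAC(C).
        coregulated = ["TTACGTACCGA", "GGACCTACTTA", "ACGTACCATGC"]
        background  = ["TTTTGGGGCCA", "ATATATACGCG", "GGGCCCAATTA"]

        hits_fg = count_occurrences("AC[CG]TACC?", coregulated)
        hits_bg = count_occurrences("AC[CG]TACC?", background)
        print(hits_fg, hits_bg)   # enrichment in the co-regulated set (3 vs 0 here)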

  17. Computational methods for corpus annotation and analysis

    CERN Document Server

    Lu, Xiaofei

    2014-01-01

    This book reviews computational tools for lexical, syntactic, semantic, pragmatic and discourse analysis, with instructions on how to obtain, install and use each tool. Covers studies using Natural Language Processing, and offers ideas for better integration.

  18. A Standard Mutual Authentication Protocol for Cloud Computing Based Health Care System.

    Science.gov (United States)

    Mohit, Prerna; Amin, Ruhul; Karati, Arijit; Biswas, G P; Khan, Muhammad Khurram

    2017-04-01

    Telecare Medical Information System (TMIS) provides a standard platform for patients to obtain necessary medical treatment from doctors via Internet communication. Security protection is important for the medical records (data) of patients because they contain very sensitive information. Besides, patient anonymity is another important property which must be protected. Most recently, Chiou et al. suggested an authentication protocol for TMIS utilizing the concept of a cloud environment. They claimed that their protocol preserves patient anonymity and is well protected. We reviewed their protocol and found that it fails to provide patient anonymity. Further, the same protocol is not protected against stolen mobile device attacks. In order to improve the security level and complexity, we design a lightweight authentication protocol for the same environment. Our security analysis ensures resilience against all possible security attacks. The performance of our protocol is relatively standard in comparison with the related previous research.
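    For readers unfamiliar with the term, the core idea of mutual authentication can be illustrated generically with a pre-shared key and HMAC challenge-response. This is emphatically not the authors' protocol: a real TMIS scheme additionally provides patient anonymity, session-key agreement and resistance to stolen-device attacks, all omitted here, and the two "sides" below run in one process purely for illustration.

        import hmac, hashlib, secrets

        SHARED_KEY = secrets.token_bytes(32)   # pre-shared between patient device and server

        def respond(key: bytes, challenge: bytes, role: bytes) -> bytes:
            # Binding the responder's role into the MAC prevents trivially
            # reflecting one side's response back at it.
            return hmac.new(key, role + challenge, hashlib.sha256).digest()

        # Server authenticates the device (in a real exchange the device computes
        # its proof independently; here both sides share one process) ...
        server_nonce = secrets.token_bytes(16)
        device_proof = respond(SHARED_KEY, server_nonce, b"device")
        assert hmac.compare_digest(device_proof, respond(SHARED_KEY, server_nonce, b"device"))

        # ... and the device authenticates the server, giving mutual authentication.
        device_nonce = secrets.token_bytes(16)
        server_proof = respond(SHARED_KEY, device_nonce, b"server")
        assert hmac.compare_digest(server_proof, respond(SHARED_KEY, device_nonce, b"server"))
        print("mutual authentication succeeded (toy example)")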

  19. Cloud computing methods and practical approaches

    CERN Document Server

    Mahmood, Zaigham

    2013-01-01

    This book presents both state-of-the-art research developments and practical guidance on approaches, technologies and frameworks for the emerging cloud paradigm. Topics and features: presents the state of the art in cloud technologies, infrastructures, and service delivery and deployment models; discusses relevant theoretical frameworks, practical approaches and suggested methodologies; offers guidance and best practices for the development of cloud-based services and infrastructures, and examines management aspects of cloud computing; reviews consumer perspectives on mobile cloud computing an

  20. Advanced Computational Methods in Bio-Mechanics.

    Science.gov (United States)

    Al Qahtani, Waleed M S; El-Anwar, Mohamed I

    2018-04-15

    A novel partnership between surgeons and machines, made possible by advances in computing and engineering technology, could overcome many of the limitations of traditional surgery. By extending surgeons' ability to plan and carry out surgical interventions more accurately and with less trauma, computer-integrated surgery (CIS) systems could help to improve clinical outcomes and the efficiency of healthcare delivery. CIS systems could have an impact on surgery similar to that long since realised in computer-integrated manufacturing. Mathematical modelling and computer simulation have proved tremendously successful in engineering. Computational mechanics has enabled technological developments in virtually every area of our lives. One of the greatest challenges for mechanists is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. Biomechanics has significant potential for applications in the orthopaedic industry and the performing arts, since the skills needed for these activities are visibly related to the human musculoskeletal and nervous systems. Although biomechanics is widely used nowadays in the orthopaedic industry to design orthopaedic implants for human joints, dental parts, external fixations and other medical devices, numerous research projects funded by billions of dollars are still under way to build a new future for sports and human healthcare in what is called the biomechanics era.

  1. Dynamic Auditing Protocol for Efficient and Secure Data Storage in Cloud Computing

    OpenAIRE

    J. Noorul Ameen; J. Jamal Mohamed; N. Nilofer Begam

    2014-01-01

    In cloud computing, data are stored on cloud servers and retrieved by users (data consumers). However, there are security challenges that call for independent auditing services to verify data integrity and safety in the cloud. To date, numerous methods have been developed for remote integrity checking, but most serve only static archive data and cannot be applied to auditing services where the data in the cloud are dynamic...
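    The basic idea behind remote integrity checking can be illustrated with a minimal hash-based sketch: the owner keeps per-block digests and later verifies blocks returned by the cloud server. This is only the underlying concept, not the paper's dynamic auditing protocol, which avoids retrieving whole blocks and supports updates; the block size and data are illustrative.

        import hashlib

        def block_digests(data: bytes, block_size: int = 4096):
            """Digest each fixed-size block; the owner stores the digests,
            the cloud server stores the data."""
            return [hashlib.sha256(data[i:i + block_size]).hexdigest()
                    for i in range(0, len(data), block_size)]

        def audit(server_blocks, stored_digests) -> bool:
            """Re-hash what the server returns and compare with the owner's digests."""
            if len(server_blocks) != len(stored_digests):
                return False
            return all(hashlib.sha256(blk).hexdigest() == d
                       for blk, d in zip(server_blocks, stored_digests))

        original = b"patient record " * 1000
        digests = block_digests(original)
        blocks = [original[i:i + 4096] for i in range(0, len(original), 4096)]

        print(audit(blocks, digests))        # True: honest server passes the audit
        blocks[1] = blocks[1][:-1] + b"X"    # tamper with one block
        print(audit(blocks, digests))        # False: tampering is detected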

  2. Chapter 2: Commercial and Industrial Lighting Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gowans, Dakers [Left Fork Energy, Harrison, NY (United States); Telarico, Chad [DNV GL, Mahwah, NJ (United States)

    2017-11-02

    The Commercial and Industrial Lighting Evaluation Protocol (the protocol) describes methods to account for gross energy savings resulting from the programmatic installation of efficient lighting equipment in large populations of commercial, industrial, and other nonresidential facilities. This protocol does not address savings resulting from changes in codes and standards, or from education and training activities. A separate Uniform Methods Project (UMP) protocol, Chapter 3: Commercial and Industrial Lighting Controls Evaluation Protocol, addresses methods for evaluating savings resulting from lighting control measures such as adding time clocks, tuning energy management system commands, and adding occupancy sensors.

  3. A Survey of Automatic Protocol Reverse Engineering Approaches, Methods, and Tools on the Inputs and Outputs View

    Directory of Open Access Journals (Sweden)

    Baraka D. Sija

    2018-01-01

    Full Text Available A network protocol defines rules that control communications between two or more machines on the Internet, whereas Automatic Protocol Reverse Engineering (APRE) defines the way of extracting the structure of a network protocol without accessing its specifications. Sufficient knowledge of undocumented protocols is essential for security purposes, network policy implementation, and management of network resources. This paper reviews and analyzes a total of 39 approaches, methods, and tools for Protocol Reverse Engineering (PRE) and classifies them into four divisions: approaches that reverse engineer protocol finite state machines, protocol formats, both protocol finite state machines and protocol formats, and approaches that focus directly on neither reverse engineering protocol formats nor protocol finite state machines. The efficiency of each approach's outputs, given its selected inputs, is analyzed in general, along with the appropriate format of reverse engineering inputs. Additionally, we present a discussion and an extended classification in terms of automated versus manual approaches, known versus novel categories of reverse engineered protocols, and a literature review of reverse engineered protocols in relation to the seven-layer OSI (Open Systems Interconnection) model.

  4. Exploring two methods of usability testing: concurrent versus retrospective think-aloud protocols

    NARCIS (Netherlands)

    van den Haak, M.J.; de Jong, Menno D.T.

    2003-01-01

    Think-aloud protocols are commonly used for the usability testing of instructional documents, Web sites and interfaces. This paper addresses the benefits and drawbacks of two think-aloud variations: the traditional concurrent think-aloud method and the less familiar retrospective think-aloud method.

  5. Exploring Two Methods of Usability Testing : Concurrent versus Retrospective Think-Aloud Protocols

    NARCIS (Netherlands)

    Van den Haak, Maaike J.; De Jong, Menno D. T.

    2003-01-01

    Think-aloud protocols are commonly used for the usability testing of instructional documents, web sites and interfaces. This paper addresses the benefits and drawbacks of two think-aloud variations: the traditional concurrent think-aloud method and the less familiar retrospective think-aloud method.

  6. Protocol for concomitant temporomandibular joint custom-fitted total joint reconstruction and orthognathic surgery utilizing computer-assisted surgical simulation.

    Science.gov (United States)

    Movahed, Reza; Teschke, Marcus; Wolford, Larry M

    2013-12-01

    Clinicians who address temporomandibular joint (TMJ) pathology and dentofacial deformities surgically can perform the surgery in 1 stage or 2 separate stages. The 2-stage approach requires the patient to undergo 2 separate operations and anesthesia, significantly prolonging the overall treatment. However, performing concomitant TMJ and orthognathic surgery (CTOS) in these cases requires careful treatment planning and surgical proficiency in the 2 surgical areas. This article presents a new treatment protocol for the application of computer-assisted surgical simulation in CTOS cases requiring reconstruction with patient-fitted total joint prostheses. The traditional and new CTOS protocols are described and compared. The new CTOS protocol helps decrease the preoperative workup time and increase the accuracy of model surgery. Copyright © 2013 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  7. Task Group on Computer/Communication Protocols for Bibliographic Data Exchange. Interim Report = Groupe de Travail sur les Protocoles de Communication/Ordinateurs pour l'Exchange de Donnees Bibliographiques. Rapport d'Etape. May 1983.

    Science.gov (United States)

    Canadian Network Papers, 1983

    1983-01-01

    This preliminary report describes the work to date of the Task Group on Computer/Communication protocols for Bibliographic Data Interchange, which was formed in 1980 to develop a set of protocol standards to facilitate communication between heterogeneous library and information systems within the framework of Open Systems Interconnection (OSI). A…

  8. New or improved computational methods and advanced reactor design

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki; Takeda, Toshikazu; Ushio, Tadashi

    1997-01-01

    Nuclear computational methods have been studied continuously to date as a fundamental technology supporting nuclear development. At present, research on computational methods based on new theories, and on calculation methods previously thought too difficult to put into practice, continues actively, spurred by the remarkable improvement in computer performance. In Japan, where many light water reactors are now in operation, new computational methods are being introduced for nuclear design, and considerable effort is devoted to further improving economics and safety. In this paper, some new research results on nuclear computational methods and their application to reactor nuclear design are described to introduce recent trends in reactor design: 1) Advancement of computational methods, 2) Reactor core design and management of light water reactors, and 3) Nuclear design of fast reactors. (G.K.)

  9. A computer method for spectral classification

    International Nuclear Information System (INIS)

    Appenzeller, I.; Zekl, H.

    1978-01-01

    The authors describe the start of an attempt to improve the accuracy of spectroscopic parallaxes by evaluating spectroscopic temperature and luminosity criteria, such as those of the MK classification, from spectrograms analyzed automatically by means of a suitable computer program. (Auth.)

  10. Computational structural biology: methods and applications

    National Research Council Canada - National Science Library

    Schwede, Torsten; Peitsch, Manuel Claude

    2008-01-01

    ... sequencing reinforced the observation that structural information is needed to understand the detailed function and mechanism of biological molecules such as enzyme reactions and molecular recognition events. Furthermore, structures are obviously key to the design of molecules with new or improved functions. In this context, computational structural biology...

  11. Positron emission tomography/computed tomography--imaging protocols, artifacts, and pitfalls.

    Science.gov (United States)

    Bockisch, Andreas; Beyer, Thomas; Antoch, Gerald; Freudenberg, Lutz S; Kühl, Hilmar; Debatin, Jörg F; Müller, Stefan P

    2004-01-01

    There has been a longstanding interest in fused images of anatomical information, such as that provided by computed tomography (CT) or magnetic resonance imaging (MRI) systems, with biological information obtainable by positron emission tomography (PET). The near-simultaneous data acquisition in a fixed combination of a PET and a CT scanner in a combined PET/CT imaging system minimizes spatial and temporal mismatches between the modalities by eliminating the need to move the patient in between exams. In addition, using the fast CT scan for PET attenuation correction, the duration of the examination is significantly reduced compared to standalone PET imaging with standard rod-transmission sources. The main source of artifacts arises from the use of the CT data for scatter and attenuation correction of the PET images. Today, CT reconstruction algorithms cannot properly account for the presence of metal implants, such as dental fillings or prostheses, resulting in streak artifacts, which are propagated into the PET image by the attenuation correction. The transformation of attenuation coefficients at X-ray energies to those at 511 keV works well for soft tissues, bone, and air, but again is insufficient for dense CT contrast agents, such as iodine or barium. Finally, mismatches, for example due to uncoordinated respiration, result in incorrect attenuation-corrected PET images. These artifacts, however, can be minimized or avoided prospectively by careful acquisition protocol considerations. If in doubt, the uncorrected images almost always allow discrimination between true and artificial findings. PET/CT has to be integrated into the diagnostic workflow to harvest the full potential of the new modality. In particular, the diagnostic power of both the CT and the PET within the combination must not be underestimated. By combining multiple diagnostic studies within a single examination, significant logistic advantages can be expected if the combined PET
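    CT-based attenuation correction commonly uses a bilinear transformation from CT numbers to 511 keV linear attenuation coefficients, which is one way to see why dense contrast agents and metal propagate errors into the PET image. The sketch below uses representative coefficients; the bone-like slope in particular is illustrative and depends on the scanner and kVp calibration, so this is not any vendor's actual conversion.

        MU_WATER_511 = 0.096   # cm^-1, linear attenuation of water at 511 keV

        def mu_511(hu: float, bone_slope: float = 5.6e-5) -> float:
            """Bilinear CT-number -> 511 keV attenuation map.
            At or below 0 HU (air/lung/soft tissue) a water-scaled line is used;
            above 0 HU a shallower, bone-like slope (illustrative value) avoids
            overestimating attenuation from dense material."""
            if hu <= 0:
                return max(0.0, MU_WATER_511 * (1.0 + hu / 1000.0))
            return MU_WATER_511 + bone_slope * hu

        for hu in (-1000, -500, 0, 400, 1200, 3000):   # air, lung, water, bone, metal/contrast
            print(hu, round(mu_511(hu), 4))
        # Very high CT numbers from metal or concentrated contrast still map to large mu,
        # which is why such voxels propagate artifacts into the corrected PET image.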

  12. Developing an Optimum Protocol for Thermoluminescence Dosimetry with GR-200 Chips using Taguchi Method.

    Science.gov (United States)

    Sadeghi, Maryam; Faghihi, Reza; Sina, Sedigheh

    2017-06-15

    Thermoluminescence dosimetry (TLD) is a powerful technique with wide applications in personal, environmental and clinical dosimetry. The optimum annealing, storage and reading protocols are very effective in accuracy of TLD response. The purpose of this study is to obtain an optimum protocol for GR-200; LiF: Mg, Cu, P, by optimizing the effective parameters, to increase the reliability of the TLD response using Taguchi method. Taguchi method has been used in this study for optimization of annealing, storage and reading protocols of the TLDs. A number of 108 GR-200 chips were divided into 27 groups, each containing four chips. The TLDs were exposed to three different doses, and stored, annealed and read out by different procedures as suggested by Taguchi Method. By comparing the signal-to-noise ratios the optimum dosimetry procedure was obtained. According to the results, the optimum values for annealing temperature (°C), Annealing Time (s), Annealing to Exposure time (d), Exposure to Readout time (d), Pre-heat Temperature (°C), Pre-heat Time (s), Heating Rate (°C/s), Maximum Temperature of Readout (°C), readout time (s) and Storage Temperature (°C) are 240, 90, 1, 2, 50, 0, 15, 240, 13 and -20, respectively. Using the optimum protocol, an efficient glow curve with low residual signals can be achieved. Using optimum protocol obtained by Taguchi method, the dosimetry can be effectively performed with great accuracy. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. New Computational Approaches for NMR-based Drug Design: A Protocol for Ligand Docking to Flexible Target Sites

    International Nuclear Information System (INIS)

    Gracia, Luis; Speidel, Joshua A.; Weinstein, Harel

    2006-01-01

    NMR-based drug design has met with some success in the last decade, as illustrated in numerous instances by Fesik's "ligand screening by NMR" approach. Ongoing efforts to generalize this success have led us to the development of a new paradigm in which quantitative computational approaches are being integrated with NMR-derived data and biological assays. The key component of this work is the inclusion of the intrinsic dynamic quality of NMR structures in theoretical models and its use in docking. A new computational protocol is introduced here, designed to dock small-molecule ligands to flexible proteins derived from NMR structures. The algorithm makes use of a combination of simulated annealing Monte Carlo simulations (SA/MC) and a mean field potential informed by the NMR data. The new protocol is illustrated in the context of an ongoing project aimed at developing new selective inhibitors for the PCAF bromodomains that interact with HIV Tat
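
    The docking protocol itself is not reproduced here, but its sampling engine, simulated annealing Monte Carlo, follows a standard Metropolis acceptance loop; the Python sketch below shows that generic loop on a toy one-dimensional energy function (the NMR-informed mean-field potential and the molecular degrees of freedom are left out).

    import math, random

    def anneal(energy, propose, x0, t_start=5.0, t_end=0.05, steps=5000):
        """Generic simulated-annealing Monte Carlo loop with geometric cooling."""
        x, e = x0, energy(x0)
        best_x, best_e = x, e
        for k in range(steps):
            t = t_start * (t_end / t_start) ** (k / (steps - 1))
            x_new = propose(x)
            e_new = energy(x_new)
            if e_new <= e or random.random() < math.exp(-(e_new - e) / t):
                x, e = x_new, e_new
                if e < best_e:
                    best_x, best_e = x, e
        return best_x, best_e

    # toy example: a rough 1-D "energy landscape" with its minimum near x = 2
    best_x, best_e = anneal(lambda x: (x - 2.0) ** 2 + 0.3 * math.sin(20.0 * x),
                            lambda x: x + random.uniform(-0.5, 0.5), x0=-5.0)
    print(round(best_x, 2), round(best_e, 3))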

  14. Soft computing methods for geoidal height transformation

    Science.gov (United States)

    Akyilmaz, O.; Özlüdemir, M. T.; Ayan, T.; Çelik, R. N.

    2009-07-01

    Soft computing techniques, such as fuzzy logic and artificial neural network (ANN) approaches, have enabled researchers to create precise models for use in many scientific and engineering applications. Applications that can be employed in geodetic studies include the estimation of earth rotation parameters and the determination of mean sea level changes. Another important field of geodesy in which these computing techniques can be applied is geoidal height transformation. We report here our use of a conventional polynomial model, the Adaptive Network-based Fuzzy (or in some publications, Adaptive Neuro-Fuzzy) Inference System (ANFIS), an ANN and a modified ANN approach to approximate geoid heights. These approximation models have been tested on a number of test points. The results obtained through the transformation processes from ellipsoidal heights into local levelling heights have also been compared.
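
    The "conventional polynomial model" baseline mentioned above can be sketched as an ordinary least-squares fit of a low-order surface to geoid undulations N = h - H at benchmark points; the coordinates and undulations below are synthetic, and the ANFIS/ANN variants are not reproduced.

    import numpy as np

    rng = np.random.default_rng(0)
    phi, lam = rng.uniform(0, 1, 50), rng.uniform(0, 1, 50)   # normalized coordinates
    N = 35.0 + 2.0 * phi - 1.5 * lam + 0.8 * phi * lam + rng.normal(0, 0.02, 50)

    # second-order polynomial surface fitted by least squares
    A = np.column_stack([np.ones_like(phi), phi, lam, phi * lam, phi**2, lam**2])
    coeffs, *_ = np.linalg.lstsq(A, N, rcond=None)

    def geoid_height(p, l):
        return np.array([1.0, p, l, p * l, p**2, l**2]) @ coeffs

    print(geoid_height(0.5, 0.5))   # interpolated undulation at a test point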

  15. Soft Computing Methods in Design of Superalloys

    Science.gov (United States)

    Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.

    1996-01-01

    Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K(sub a), generated from tests at NASA Lewis Research Center, is modelled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K(sub a) values.
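
    A toy sketch of the two-stage idea: a surrogate model predicting the attack parameter Ka from composition, then a genetic algorithm searching for a composition with low predicted Ka. The quadratic "model", the three-component composition vector and all numbers are stand-ins, not the NASA data or the trained neural network.

    import numpy as np

    rng = np.random.default_rng(1)

    def predicted_ka(x):                       # stand-in for the neural-network model
        return np.sum((x - np.array([0.6, 0.2, 0.1])) ** 2, axis=-1)

    pop = rng.uniform(0.0, 1.0, (40, 3))       # candidate compositions
    for _ in range(200):
        fitness = predicted_ka(pop)
        parents = pop[np.argsort(fitness)[:20]]                  # selection
        children = (parents[rng.integers(0, 20, 40)] +
                    parents[rng.integers(0, 20, 40)]) / 2.0      # blend crossover
        pop = np.clip(children + rng.normal(0.0, 0.02, children.shape), 0.0, 1.0)

    print(pop[np.argmin(predicted_ka(pop))])   # approaches [0.6, 0.2, 0.1]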

  16. Computer simulations suggest that acute correction of hyperglycaemia with an insulin bolus protocol might be useful in brain FDG PET

    International Nuclear Information System (INIS)

    Buchert, R.; Brenner, W.; Apostolova, I.; Mester, J.; Clausen, M.; Santer, R.; Silverman, D.H.S.

    2009-01-01

    FDG PET in hyperglycaemic subjects often suffers from limited statistical image quality, which may hamper visual and quantitative evaluation. In our study, the following insulin bolus protocol is proposed for acute correction of hyperglycaemia (> 7.0 mmol/l) in brain FDG PET. (i) Intravenous bolus injection of short-acting insulin, one I.E. for each 0.6 mmol/l blood glucose above 7.0. (ii) If 20 min after insulin administration plasma glucose is ≤ 7.0 mmol/l, proceed to (iii). If insulin has not taken sufficient effect, step back to (i) and compute the insulin dose with the updated blood glucose level. (iii) Wait a further 20 min before injection of FDG. (iv) Continuous supervision of the patient during the whole scanning procedure. The potential of this protocol for improvement of image quality in brain FDG PET in hyperglycaemic subjects was evaluated by computer simulations within the Sokoloff model. The predicted magnitude of the effect achievable by correction of hyperglycaemia was checked for plausibility by retrospective evaluation of the relation between blood glucose level and brain FDG uptake in 89 subjects in whom FDG PET had been performed for diagnosis of Alzheimer's disease. The computer simulations suggested that acute correction of hyperglycaemia according to the proposed bolus insulin protocol might increase the FDG uptake of the brain by up to 80%. The magnitude of this effect was confirmed by the patient data. The proposed management protocol for acute correction of hyperglycaemia with insulin has the potential to significantly improve the statistical quality of brain FDG PET images. This should be confirmed in a prospective study in patients. (orig.)
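
    The dose arithmetic in step (i) is simple enough to state as code; the sketch below computes the bolus (one I.E. per 0.6 mmol/l of blood glucose above 7.0 mmol/l) and would simply be re-run with the updated glucose value if the target is not reached after 20 min. Rounding to one decimal is an assumption, not part of the published protocol.

    def insulin_dose_ie(glucose_mmol_l, target=7.0, step=0.6):
        """Bolus of short-acting insulin: one I.E. per 0.6 mmol/l above 7.0 mmol/l."""
        excess = max(0.0, glucose_mmol_l - target)
        return round(excess / step, 1)

    for g in (6.5, 8.2, 10.0, 13.0):
        print(f"{g:5.1f} mmol/l -> {insulin_dose_ie(g):4.1f} I.E.")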

  17. Computer simulations suggest that acute correction of hyperglycaemia with an insulin bolus protocol might be useful in brain FDG PET

    Energy Technology Data Exchange (ETDEWEB)

    Buchert, R.; Brenner, W.; Apostolova, I.; Mester, J.; Clausen, M. [University Medical Center Hamburg-Eppendorf (Germany). Dept. of Nuclear Medicine; Santer, R. [University Medical Center Hamburg-Eppendorf (Germany). Center for Gynaecology, Obstetrics and Paediatrics; Silverman, D.H.S. [David Geffen School of Medicine at UCLA, Los Angeles, CA (United States). Dept. of Molecular and Medical Pharmacology

    2009-07-01

    FDG PET in hyperglycaemic subjects often suffers from limited statistical image quality, which may hamper visual and quantitative evaluation. In our study, the following insulin bolus protocol is proposed for acute correction of hyperglycaemia (> 7.0 mmol/l) in brain FDG PET. (i) Intravenous bolus injection of short-acting insulin, one I.E. for each 0.6 mmol/l blood glucose above 7.0. (ii) If 20 min after insulin administration plasma glucose is ≤ 7.0 mmol/l, proceed to (iii). If insulin has not taken sufficient effect, step back to (i) and compute the insulin dose with the updated blood glucose level. (iii) Wait a further 20 min before injection of FDG. (iv) Continuous supervision of the patient during the whole scanning procedure. The potential of this protocol for improvement of image quality in brain FDG PET in hyperglycaemic subjects was evaluated by computer simulations within the Sokoloff model. The predicted magnitude of the effect achievable by correction of hyperglycaemia was checked for plausibility by retrospective evaluation of the relation between blood glucose level and brain FDG uptake in 89 subjects in whom FDG PET had been performed for diagnosis of Alzheimer's disease. The computer simulations suggested that acute correction of hyperglycaemia according to the proposed bolus insulin protocol might increase the FDG uptake of the brain by up to 80%. The magnitude of this effect was confirmed by the patient data. The proposed management protocol for acute correction of hyperglycaemia with insulin has the potential to significantly improve the statistical quality of brain FDG PET images. This should be confirmed in a prospective study in patients. (orig.)

  18. Statistical methods and computing for big data

    Science.gov (United States)

    Wang, Chun; Chen, Ming-Hui; Schifano, Elizabeth; Wu, Jing

    2016-01-01

    Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the capacity of standard analytic tools. They present opportunities as well as challenges to statisticians. The role of computational statisticians in scientific discovery from big data analyses has been under-recognized even by peer statisticians. This article summarizes recent methodological and software developments in statistics that address the big data challenges. Methodologies are grouped into three classes: subsampling-based, divide and conquer, and online updating for stream data. As a new contribution, the online updating approach is extended to variable selection with commonly used criteria, and their performances are assessed in a simulation study with stream data. Software packages are summarized with a focus on open-source R and its packages, covering recent tools that help break the barriers of computer memory and computing power. Some of the tools are illustrated in a case study with a logistic regression for the chance of airline delay. PMID:27695593
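
    To make the online-updating idea concrete, the sketch below accumulates the sufficient statistics of a linear model (X'X and X'y) chunk by chunk, so the full data never have to reside in memory; this illustrates the general class of methods surveyed, not the authors' specific variable-selection extension.

    import numpy as np

    rng = np.random.default_rng(0)
    p = 3
    beta_true = np.array([1.0, -2.0, 0.5])
    xtx, xty = np.zeros((p, p)), np.zeros(p)

    for _ in range(100):                        # 100 chunks of streaming data
        X = rng.normal(size=(1000, p))
        y = X @ beta_true + rng.normal(scale=0.1, size=1000)
        xtx += X.T @ X                          # update sufficient statistics
        xty += X.T @ y

    print(np.linalg.solve(xtx, xty))            # close to [1.0, -2.0, 0.5]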

  19. Statistical methods and computing for big data.

    Science.gov (United States)

    Wang, Chun; Chen, Ming-Hui; Schifano, Elizabeth; Wu, Jing; Yan, Jun

    2016-01-01

    Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the capacity of standard analytic tools. They present opportunities as well as challenges to statisticians. The role of computational statisticians in scientific discovery from big data analyses has been under-recognized even by peer statisticians. This article summarizes recent methodological and software developments in statistics that address the big data challenges. Methodologies are grouped into three classes: subsampling-based, divide and conquer, and online updating for stream data. As a new contribution, the online updating approach is extended to variable selection with commonly used criteria, and their performances are assessed in a simulation study with stream data. Software packages are summarized with a focus on open-source R and its packages, covering recent tools that help break the barriers of computer memory and computing power. Some of the tools are illustrated in a case study with a logistic regression for the chance of airline delay.

  20. Advanced Computational Methods for Monte Carlo Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-12

    This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.

  1. Computational methods for two-phase flow and particle transport

    CERN Document Server

    Lee, Wen Ho

    2013-01-01

    This book describes mathematical formulations and computational methods for solving two-phase flow problems with a computer code that calculates thermal hydraulic problems related to light water and fast breeder reactors. The physical model also handles the particle and gas flow problems that arise from coal gasification and fluidized beds. The second part of this book deals with the computational methods for particle transport.

  2. Reference depth for geostrophic computation - A new method

    Digital Repository Service at National Institute of Oceanography (India)

    Varkey, M.J.; Sastry, J.S.

    Various methods are available for the determination of reference depth for geostrophic computation. A new method based on the vertical profiles of mean and variance of the differences of mean specific volume anomaly (delta x 10) for different layers...

  3. Lattice Boltzmann method fundamentals and engineering applications with computer codes

    CERN Document Server

    Mohamad, A A

    2014-01-01

    Introducing the Lattice Boltzmann Method in a readable manner, this book provides detailed examples with complete computer codes. It avoids the most complicated mathematics and physics without sacrificing the basic fundamentals of the method.

  4. An Augmented Fast Marching Method for Computing Skeletons and Centerlines

    NARCIS (Netherlands)

    Telea, Alexandru; Wijk, Jarke J. van

    2002-01-01

    We present a simple and robust method for computing skeletons for arbitrary planar objects and centerlines for 3D objects. We augment the Fast Marching Method (FMM) widely used in level set applications by computing the parameterized boundary location every pixel came from during the boundary

  5. Classical versus Computer Algebra Methods in Elementary Geometry

    Science.gov (United States)

    Pech, Pavel

    2005-01-01

    Computer algebra methods based on results of commutative algebra, such as Groebner bases of ideals and elimination of variables, make it possible to solve complex, elementary and non-elementary problems of geometry, which are difficult to solve using a classical approach. Computer algebra methods permit the proof of geometric theorems, automatic…

  6. Methods for teaching geometric modelling and computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Rotkov, S.I.; Faitel'son, Yu. Ts.

    1992-05-01

    This paper considers methods for teaching the methods and algorithms of geometric modelling and computer graphics to programmers, designers and users of CAD and computer-aided research systems. There is a bibliography that can be used to prepare lectures and practical classes. 37 refs., 1 tab.

  7. Replication protocol analysis: a method for the study of real-world design thinking

    DEFF Research Database (Denmark)

    Galle, Per; Kovacs, L. B.

    1996-01-01

    Given the brief of an architectural competition on site planning, and the design awarded the first prize, the first author (trained as an architect but not a participant in the competition) produced a line of reasoning that might have led from brief to design. In the paper, such ‘design replication’ is refined into a method called ‘replication protocol analysis’ (RPA), and discussed from a methodological perspective of design research. It is argued that for the study of real-world design thinking this method offers distinct advantages over traditional ‘design protocol analysis’, which seeks to capture the designer’s authentic line of reasoning. To illustrate how RPA can be used, the site planning case is briefly presented, and part of the replicated line of reasoning analysed. One result of the analysis is a glimpse of a ‘logic of design’; another is an insight which sheds new light on Darke’s classical…

  8. Computational Methods for Conformational Sampling of Biomolecules

    DEFF Research Database (Denmark)

    Bottaro, Sandro

    Proteins play a fundamental role in virtually every process within living organisms. For example, some proteins act as enzymes, catalyzing a wide range of reactions necessary for life, others mediate the cell interaction with the surrounding environment and still others have regulatory functions… First, we have developed a mathematical approach to a classic geometrical problem in protein simulations, and demonstrated its superiority compared to existing approaches. Secondly, we have constructed a more accurate implicit model of the aqueous environment, which is of fundamental importance in protein chemistry. This model is computationally much faster than models where water molecules are represented explicitly. Finally, in collaboration with the group of structural bioinformatics at the Department of Biology (KU), we have applied these techniques in the context of modeling of protein structure and flexibility from low…

  9. Computational Method for Atomistic-Continuum Homogenization

    National Research Council Canada - National Science Library

    Chung, Peter

    2002-01-01

    The homogenization method is used as a framework for developing a multiscale system of equations involving atoms at zero temperature at the small scale and continuum mechanics at the very large scale...

  10. Augmented Quadruple-Phase Contrast Media Administration and Triphasic Scan Protocol Increases Image Quality at Reduced Radiation Dose During Computed Tomography Urography.

    Science.gov (United States)

    Saade, Charbel; Mohamad, May; Kerek, Racha; Hamieh, Nadine; Alsheikh Deeb, Ibrahim; El-Achkar, Bassam; Tamim, Hani; Abdul Razzak, Farah; Haddad, Maurice; Abi-Ghanem, Alain S; El-Merhi, Fadi

    The aim of this article was to investigate the opacification of the renal vasculature and the urogenital system during computed tomography urography by using a quadruple-phase contrast media in a triphasic scan protocol. A total of 200 patients with possible urinary tract abnormalities were equally divided between 2 protocols. Protocol A used the conventional single bolus and quadruple-phase scan protocol (pre, arterial, venous, and delayed), retrospectively. Protocol B included a quadruple-phase contrast media injection with a triphasic scan protocol (pre, arterial and combined venous, and delayed), prospectively. Each protocol used 100 mL contrast and saline at a flow rate of 4.5 mL/s. Attenuation profiles and contrast-to-noise ratio of the renal arteries, veins, and urogenital tract were measured. Effective radiation dose calculation, data analysis by independent sample t test, receiver operating characteristic, and visual grading characteristic analyses were performed. In arterial circulation, only the inferior interlobular arteries in both protocols showed a statistical significance (P contrast-to-noise ratio than protocol A (protocol B: 22.68 ± 13.72; protocol A: 14.75 ± 5.76; P contrast media and triphasic scan protocol usage increases the image quality at a reduced radiation dose.

  11. Masonry fireplace emissions test method: Repeatability and sensitivity to fueling protocol.

    Science.gov (United States)

    Stern, C H; Jaasma, D R; Champion, M R

    1993-03-01

    A test method for masonry fireplaces has been evaluated during testing on six masonry fireplace configurations. The method determines carbon monoxide and particulate matter emission rates (g/h) and factors (g/kg) and does not require weighing of the appliance to determine the timing of fuel loading. The intralaboratory repeatability of the test method has been determined from multiple tests on the six fireplaces. For the tested fireplaces, the ratio of the highest to lowest measured PM rate averaged 1.17 and in no case was greater than 1.32. The data suggest that some of the variation is due to differences in fuel properties. The influence of fueling protocol on emissions has also been studied. A modified fueling protocol, tested in large and small fireplaces, reduced CO and PM emission factors by roughly 40% and reduced CO and PM rates from 0 to 30%. For both of these fireplaces, emission rates were less sensitive to fueling protocol than emission factors.

  12. Development of the protocol for purification of artemisinin based on combination of commercial and computationally designed adsorbents.

    Science.gov (United States)

    Piletska, Elena V; Karim, Kal; Cutler, Malcolm; Piletsky, Sergey A

    2013-01-01

    A polymeric adsorbent for extraction of the antimalarial drug artemisinin from Artemisia annua L. was computationally designed. This polymer demonstrated a high capacity for artemisinin (120 mg g⁻¹) and quantitative recovery (87%), and was found to be an effective material for purification of artemisinin from the complex plant matrix. The artemisinin quantification was conducted using an optimised HPLC-MS protocol, which was characterised by high precision and linearity in the concentration range between 0.05 and 2 μg mL⁻¹. Optimisation of the purification protocol also involved screening of commercial adsorbents for the removal of waxes and other interfering natural compounds, which inhibit the crystallisation of artemisinin. As a result of the two-step purification protocol, crystals of artemisinin were obtained and their purity was evaluated as 75%. By performing the second stage of purification twice, the purity of artemisinin can be further improved to 99%. The developed protocol produced high-purity artemisinin using only a few purification steps, which makes it suitable for a large-scale industrial manufacturing process. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Methods for CT automatic exposure control protocol translation between scanner platforms.

    Science.gov (United States)

    McKenney, Sarah E; Seibert, J Anthony; Lamba, Ramit; Boone, John M

    2014-03-01

    An imaging facility with a diverse fleet of CT scanners faces considerable challenges when propagating CT protocols with consistent image quality and patient dose across scanner makes and models. Although some protocol parameters can comfortably remain constant among scanners (eg, tube voltage, gantry rotation time), the automatic exposure control (AEC) parameter, which selects the overall mA level during tube current modulation, is difficult to match among scanners, especially from different CT manufacturers. Objective methods for converting tube current modulation protocols among CT scanners were developed. Three CT scanners were investigated, a GE LightSpeed 16 scanner, a GE VCT scanner, and a Siemens Definition AS+ scanner. Translation of the AEC parameters such as noise index and quality reference mAs across CT scanners was specifically investigated. A variable-diameter poly(methyl methacrylate) phantom was imaged on the 3 scanners using a range of AEC parameters for each scanner. The phantom consisted of 5 cylindrical sections with diameters of 13, 16, 20, 25, and 32 cm. The protocol translation scheme was based on matching either the volumetric CT dose index or image noise (in Hounsfield units) between two different CT scanners. A series of analytic fit functions, corresponding to different patient sizes (phantom diameters), were developed from the measured CT data. These functions relate the AEC metric of the reference scanner, the GE LightSpeed 16 in this case, to the AEC metric of a secondary scanner. When translating protocols between different models of CT scanners (from the GE LightSpeed 16 reference scanner to the GE VCT system), the translation functions were linear. However, a power-law function was necessary to convert the AEC functions of the GE LightSpeed 16 reference scanner to the Siemens Definition AS+ secondary scanner, because of differences in the AEC functionality designed by these two companies. Protocol translation on the basis of
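
    The translation step described above amounts to fitting a mapping, per phantom diameter, between the AEC metric measured on the reference scanner and the one measured on the secondary scanner; the sketch below fits both the linear and the power-law forms reported, using synthetic placeholder numbers rather than the measured data.

    import numpy as np
    from scipy.optimize import curve_fit

    ref_metric = np.array([5.0, 10.0, 15.0, 20.0, 25.0])       # e.g. noise index
    sec_metric = np.array([40.0, 95.0, 160.0, 240.0, 330.0])   # e.g. quality ref. mAs

    def linear(x, a, b):
        return a * x + b

    def power_law(x, a, b):
        return a * np.power(x, b)

    p_lin, _ = curve_fit(linear, ref_metric, sec_metric)
    p_pow, _ = curve_fit(power_law, ref_metric, sec_metric, p0=(1.0, 1.0))
    print("linear fit:", p_lin, " power-law fit:", p_pow)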

  14. Instrument design optimization with computational methods

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Michael H. [Old Dominion Univ., Norfolk, VA (United States)

    2017-08-01

    Using Finite Element Analysis to approximate the solution of differential equations, two different instruments in experimental Hall C at the Thomas Jefferson National Accelerator Facility are analyzed. The time dependence of density fluctuations from the liquid hydrogen (LH2) target used in the Qweak experiment (2011-2012) is studied with Computational Fluid Dynamics (CFD) and the simulation results compared to data from the experiment. The 2.5 kW liquid hydrogen target was the highest power LH2 target in the world and the first to be designed with CFD at Jefferson Lab. The first complete magnetic field simulation of the Super High Momentum Spectrometer (SHMS) is presented with a focus on primary electron beam deflection downstream of the target. The SHMS consists of a superconducting horizontal bending magnet (HB) and three superconducting quadrupole magnets. The HB allows particles scattered at an angle of 5.5 deg to the beam line to be steered into the quadrupole magnets which make up the optics of the spectrometer. Without mitigation, remnant fields from the SHMS may steer the unscattered beam outside of the acceptable envelope on the beam dump and limit beam operations at small scattering angles. A solution is proposed using optimal placement of a minimal amount of shielding iron around the beam line.

  15. Computer methods in physics 250 problems with guided solutions

    CERN Document Server

    Landau, Rubin H

    2018-01-01

    Our future scientists and professionals must be conversant in computational techniques. In order to facilitate integration of computer methods into existing physics courses, this textbook offers a large number of worked examples and problems with fully guided solutions in Python as well as other languages (Mathematica, Java, C, Fortran, and Maple). It’s also intended as a self-study guide for learning how to use computer methods in physics. The authors include an introductory chapter on numerical tools and indication of computational and physics difficulty level for each problem.

  16. Electromagnetic computation methods for lightning surge protection studies

    CERN Document Server

    Baba, Yoshihiro

    2016-01-01

    This book is the first to consolidate current research and to examine the theories of electromagnetic computation methods in relation to lightning surge protection. The authors introduce and compare existing electromagnetic computation methods such as the method of moments (MOM), the partial element equivalent circuit (PEEC), the finite element method (FEM), the transmission-line modeling (TLM) method, and the finite-difference time-domain (FDTD) method. The application of FDTD method to lightning protection studies is a topic that has matured through many practical applications in the past decade, and the authors explain the derivation of Maxwell's equations required by the FDTD, and modeling of various electrical components needed in computing lightning electromagnetic fields and surges with the FDTD method. The book describes the application of FDTD method to current and emerging problems of lightning surge protection of continuously more complex installations, particularly in critical infrastructures of e...

  17. Three-dimensional protein structure prediction: Methods and computational strategies.

    Science.gov (United States)

    Dorn, Márcio; E Silva, Mariel Barbachan; Buriol, Luciana S; Lamb, Luis C

    2014-10-12

    A long-standing problem in structural bioinformatics is to determine the three-dimensional (3-D) structure of a protein when only a sequence of amino acid residues is given. Many computational methodologies and algorithms have been proposed as a solution to the 3-D Protein Structure Prediction (3-D-PSP) problem. These methods can be divided into four main classes: (a) first principle methods without database information; (b) first principle methods with database information; (c) fold recognition and threading methods; and (d) comparative modeling methods and sequence alignment strategies. Deterministic computational techniques, optimization techniques, data mining and machine learning approaches are typically used in the construction of computational solutions for the PSP problem. Our main goal with this work is to review the methods and computational strategies that are currently used in 3-D protein prediction. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Comparison of Five Computational Methods for Computing Q Factors in Photonic Crystal Membrane Cavities

    DEFF Research Database (Denmark)

    Novitsky, Andrey; de Lasson, Jakob Rosenkrantz; Frandsen, Lars Hagedorn

    2017-01-01

    Five state-of-the-art computational methods are benchmarked by computing quality factors and resonance wavelengths in photonic crystal membrane L5 and L9 line defect cavities. The convergence of the methods with respect to resolution, degrees of freedom and number of modes is investigated. Specia...

  19. Computational Methods for Physicists Compendium for Students

    CERN Document Server

    Sirca, Simon

    2012-01-01

    This book helps advanced undergraduate, graduate and postdoctoral students in their daily work by offering them a compendium of numerical methods. The choice of methods pays significant attention to error estimates, stability and convergence issues, as well as to ways of optimizing program execution speed. Many examples are given throughout the chapters, and each chapter is followed by at least a handful of more comprehensive problems which may be dealt with, for example, on a weekly basis in a one- or two-semester course. In these end-of-chapter problems the physics background is pronounced, and the main text preceding them is intended as an introduction or as a later reference. Less stress is placed on the explanation of individual algorithms. The aim is to foster independent thinking and a healthy amount of scepticism and scrutiny in the reader, rather than blind reliance on readily available commercial tools.

  20. Measurement method of cardiac computed tomography (CT)

    International Nuclear Information System (INIS)

    Watanabe, Shigeru; Yamamoto, Hironori; Yumura, Yasuo; Yoshida, Hideo; Morooka, Nobuhiro

    1980-01-01

    The CT was carried out in 126 cases consisting of 31 normals, 17 cases of mitral stenosis (MS), 8 cases of mitral regurgitation (MR), 11 cases of aortic stenosis (AS), 9 cases of aortic regurgitation (AR), 20 cases of myocardial infarction (MI), 8 cases of atrial septal defect (ASD) and 22 hypertensives. The 20-second scans were performed every 1.5 cm from the 2nd intercostal space to the 5th or 6th intercostal space. The computed tomograms obtained were classified into 8 levels by cross-sectional anatomy; levels of (1) the aortic arch, (2) just beneath the aortic arch, (3) the pulmonary artery bifurcation, (4) the right atrial appendage or the upper right atrium, (5) the aortic root, (6) the upper left ventricle, (7) the mid left ventricle, and (8) the lower left ventricle. The diameter (anteroposterior and transverse) and cross-sectional area were measured for the ascending aorta (Ao), descending aorta (AoD), superior vena cava (SVC), inferior vena cava (IVC), pulmonary artery branch (PA), main pulmonary artery (mPA), left atrium (LA), right atrium (RA), and right ventricular outflow tract (RVOT) at each level where they were clearly distinguished. However, it was difficult to separate the cardiac wall from the cardiac cavity because there was little difference in X-ray attenuation coefficient between the myocardium and blood. Therefore, at the mid-ventricular level, the diameter and area of the total cardiac shadow were measured, and the corresponding cardiac-to-thoracic ratios were calculated. The normal ranges of these values are tabulated, and abnormal characteristics in cardiac disease are presented in comparison with the normal values. In MS, the diameter and area of the LA were significantly larger than normal. In MS and ASD, all structures of the right cardiac system were larger than normal, especially the RA and SVC in MS, and the PA and RVOT in ASD. The diameter and area of the aortic root were larger than normal, in decreasing order in AR, AS and HT. (author)

  1. Computational Biology Methods for Characterization of Pluripotent Cells.

    Science.gov (United States)

    Araúzo-Bravo, Marcos J

    2016-01-01

    Pluripotent cells are a powerful tool for regenerative medicine and drug discovery. Several techniques have been developed to induce pluripotency, or to extract pluripotent cells from different tissues and biological fluids. However, the characterization of pluripotency requires tedious, expensive, time-consuming, and not always reliable wet-lab experiments; thus, an easy, standard quality-control protocol of pluripotency assessment remains to be established. High-throughput techniques, in particular gene expression microarrays, can help here and have become a complementary approach for cellular characterization. Research has shown that comparing a cell's transcriptome with that of a reference Embryonic Stem Cell (ESC) is a good approach to assessing pluripotency. Under the premise that the best protocol is computer software source code, here I propose and explain, line by line, a software protocol coded in R-Bioconductor for pluripotency assessment based on the comparison of transcriptomics data of pluripotent cells with a reference ESC. I provide advice on experimental design, warnings about possible pitfalls, and guidance for interpreting the results.
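
    The chapter's protocol is written in R/Bioconductor; as a language-agnostic illustration of its core comparison, the Python sketch below correlates a sample's log-transformed expression profile with a reference ESC profile and applies an arbitrary threshold. The data, the choice of Spearman correlation and the 0.9 cut-off are assumptions for illustration only.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    esc_reference = rng.lognormal(mean=2.0, sigma=1.0, size=5000)   # expression values
    candidate = esc_reference * rng.lognormal(0.0, 0.3, 5000)       # similar profile

    rho, _ = spearmanr(np.log2(esc_reference + 1), np.log2(candidate + 1))
    print("Spearman rho vs ESC reference:", round(rho, 3), "->",
          "pluripotent-like" if rho > 0.9 else "inconclusive")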

  2. Three numerical methods for the computation of the electrostatic energy

    International Nuclear Information System (INIS)

    Poenaru, D.N.; Galeriu, D.

    1975-01-01

    The FORTRAN programs for computation of the electrostatic energy of a body with axial symmetry by the Lawrence, Hill-Wheeler and Beringer methods are presented in detail. The accuracy, time of computation and the required memory of these methods are tested at various deformations for two simple parametrisations: two overlapping identical spheres and a spheroid. On this basis the field of application of each method is recommended.
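
    In the same spirit of checking numerical schemes against simple parametrisations, the sketch below estimates the Coulomb self-energy of a uniformly charged sphere by Monte Carlo sampling and compares it with the analytic value (3/5)Q²/R (in units with k = Q = 1); the Lawrence, Hill-Wheeler and Beringer methods themselves are not reproduced.

    import numpy as np

    rng = np.random.default_rng(0)
    n, R = 800, 1.0

    # points distributed uniformly inside a sphere of radius R
    pts = rng.normal(size=(n, 3))
    pts *= (R * rng.uniform(0.0, 1.0, n) ** (1.0 / 3.0)
            / np.linalg.norm(pts, axis=1))[:, None]

    # E = (Q^2 / 2) * <1/r> over independent point pairs, here with Q = 1
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    mean_inv_r = np.mean(1.0 / d[np.triu_indices(n, k=1)])
    print("Monte Carlo:", 0.5 * mean_inv_r, " analytic:", 0.6 / R)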

  3. Optimization on the dose versus noise in the image on protocols for computed tomography of pediatric head

    International Nuclear Information System (INIS)

    Saint'Yves, Thalis L.A.; Travassos, Paulo Cesar B.; Goncalves, Elicardo A.S.; Mecca A, Fernando; Silveira, Thiago B.

    2010-01-01

    This article aims to establish optimized protocols for pediatric skull computed tomography on the Picker Q2000 scanner of the Instituto Nacional de Cancer, through analysis of dose versus image noise as the mAs and kVp values are varied. We used a water phantom to measure the noise, a pencil-type ionization chamber to measure the dose in air, and the Alderson Rando phantom to check image quality. We found values of mAs and kVp that reduce the skin dose of the original protocol by 35.9%, while maintaining image quality adequate for a safe diagnosis. (author)

  4. Influence of different luting protocols on shear bond strength of computer aided design/computer aided manufacturing resin nanoceramic material to dentin.

    Science.gov (United States)

    Poggio, Claudio; Pigozzo, Marco; Ceci, Matteo; Scribante, Andrea; Beltrami, Riccardo; Chiesa, Marco

    2016-01-01

    The purpose of this study was to evaluate the influence of three different luting protocols on shear bond strength of computer aided design/computer aided manufacturing (CAD/CAM) resin nanoceramic (RNC) material to dentin. In this in vitro study, 30 disks were milled from RNC blocks (Lava Ultimate/3M ESPE) with CAD/CAM technology. The disks were subsequently cemented to the exposed dentin of 30 recently extracted bovine permanent mandibular incisors. The specimens were randomly assigned into 3 groups of 10 teeth each. In Group 1, disks were cemented using a total-etch protocol (Scotchbond™ Universal Etchant phosphoric acid + Scotchbond Universal Adhesive + RelyX™ Ultimate conventional resin cement); in Group 2, disks were cemented using a self-etch protocol (Scotchbond Universal Adhesive + RelyX™ Ultimate conventional resin cement); in Group 3, disks were cemented using a self-adhesive protocol (RelyX™ Unicem 2 Automix self-adhesive resin cement). All cemented specimens were placed in a universal testing machine (Instron Universal Testing Machine 3343) and submitted to a shear bond strength test to check the strength of adhesion between the two substrates, dentin, and RNC disks. Specimens were stressed at a crosshead speed of 1 mm/min. Data were analyzed with analysis of variance and post-hoc Tukey's test at a level of significance of 0.05. Post-hoc Tukey testing showed that the highest shear strength values (P adhesives) showed better shear strength values compared to self-adhesive resin cements. Furthermore, conventional resin cements used together with a self-etch adhesive reported the highest values of adhesion.

  5. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...

  6. Testing and Validation of Computational Methods for Mass Spectrometry.

    Science.gov (United States)

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-04

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets (http://compms.org/RefData) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.

  7. Computer-assisted machine-to-human protocols for authentication of a RAM-based embedded system

    Science.gov (United States)

    Idrissa, Abdourhamane; Aubert, Alain; Fournel, Thierry

    2012-06-01

    Mobile readers used for optical identification of manufactured products can be tampered with in different ways: with a hardware Trojan or by powering up with fake configuration data. How can a human verifier authenticate the reader to be used for goods verification? In this paper, two cryptographic protocols are proposed to achieve the verification of a RAM-based system through a trusted auxiliary machine. Such a system is assumed to be composed of a RAM memory and a secure block (in practice an FPGA or a configurable microcontroller). The system is connected to an input/output interface and contains a Non Volatile Memory where the configuration data are stored. Here, except for the secure block, all the blocks are exposed to attacks. At the registration stage of the first protocol, the MAC of both the secret and the configuration data, denoted M0, is computed by the mobile device without saving it and then transmitted to the user in a secure environment. At the verification stage, the reader, which is challenged with nonces, sends MACs/HMACs of both the nonces and the MAC M0 (to be recomputed), keyed with the secret. These responses are verified by the user through a trusted auxiliary MAC computer unit. Here the verifier does not need to track a (long) list of challenge/response pairs. This makes the protocol tractable for a human verifier, as his or her participation in the authentication process is increased. In return, the secret has to be shared with the auxiliary unit. This constraint is relaxed in a second protocol directly derived from Fiat-Shamir's scheme.
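
    A minimal sketch of the challenge-response idea in the first protocol, using Python's standard hmac module: at registration a MAC M0 over the configuration data is computed with the secret key; at verification the reader answers a nonce challenge with an HMAC, keyed with the secret, over the nonce and the recomputed M0. Key handling, transport and the trusted auxiliary unit are omitted, and all names and parameters are illustrative.

    import hmac, hashlib, os

    SECRET = b"device-secret"                     # shared with the auxiliary unit
    CONFIG = b"configuration-bitstream-bytes"     # contents of the non-volatile memory

    def registration_mac(secret, config):
        return hmac.new(secret, config, hashlib.sha256).digest()          # M0

    def challenge_response(secret, m0, nonce):
        return hmac.new(secret, nonce + m0, hashlib.sha256).hexdigest()

    m0 = registration_mac(SECRET, CONFIG)         # computed once, in a secure environment
    nonce = os.urandom(16)                        # challenge chosen by the verifier
    answer = challenge_response(SECRET, m0, nonce)       # computed by the reader
    expected = challenge_response(SECRET, m0, nonce)     # recomputed by the auxiliary unit
    print("reader authenticated:", hmac.compare_digest(answer, expected))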

  8. Developing a multimodal biometric authentication system using soft computing methods.

    Science.gov (United States)

    Malcangi, Mario

    2015-01-01

    Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize its applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision.

  9. Computational Simulations and the Scientific Method

    Science.gov (United States)

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.

  10. Computer systems and methods for visualizing data

    Science.gov (United States)

    Stolte, Chris; Hanrahan, Patrick

    2013-01-29

    A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.
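
    A minimal sketch of the querying/aggregation step behind such a plot: the measure is aggregated at two successive levels of a dimension hierarchy, one per visual component. The column names and data are illustrative only, and the actual rendering of the visual plot is not shown.

    import pandas as pd

    df = pd.DataFrame({
        "region": ["East", "East", "West", "West"],
        "city":   ["Boston", "NYC", "LA", "Seattle"],
        "sales":  [120, 340, 210, 180],          # the measure
    })

    level1 = df.groupby("region")["sales"].sum()            # first hierarchy level
    level2 = df.groupby(["region", "city"])["sales"].sum()  # second hierarchy level
    print(level1, level2, sep="\n")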

  11. Control rod computer code IAMCOS: general theory and numerical methods

    International Nuclear Information System (INIS)

    West, G.

    1982-11-01

    IAMCOS is a computer code for the description of the mechanical and thermal behavior of cylindrical control rods for fast breeders. This code version was applied, tested and modified from 1979 to 1981. This report describes the basic model (02 version), theoretical definitions and computation methods [fr

  12. Computation of saddle-type slow manifolds using iterative methods

    DEFF Research Database (Denmark)

    Kristiansen, Kristian Uldall

    2015-01-01

    This paper presents an alternative approach for the computation of trajectory segments on slow manifolds of saddle type. This approach is based on iterative methods rather than collocation-type methods. Compared to collocation methods, which require mesh refinements to ensure uniform convergence with respect to the small parameter, appropriate estimates are directly attainable using the method of this paper. The method is applied to several examples, including a model for a pair of neurons coupled by reciprocal inhibition with two slow and two fast variables, and the computation of homoclinic connections in the Fitz…

  13. An improved method for preparing Agrobacterium cells that simplifies the Arabidopsis transformation protocol

    Directory of Open Access Journals (Sweden)

    Ülker Bekir

    2006-10-01

    Full Text Available Abstract Background The Agrobacterium vacuum (Bechtold et al 1993) and floral-dip (Clough and Bent 1998) are very efficient methods for generating transgenic Arabidopsis plants. These methods allow plant transformation without the need for tissue culture. Large volumes of bacterial cultures grown in liquid media are necessary for both of these transformation methods. This limits the number of transformations that can be done at a given time due to the need for expensive large shakers and limited space on them. Additionally, the bacterial colonies derived from solid media necessary for starting these liquid cultures often fail to grow in such large volumes. Therefore the optimum stage of plant material for transformation is often missed and new plant material needs to be grown. Results To avoid problems associated with large bacterial liquid cultures, we investigated whether bacteria grown on plates are also suitable for plant transformation. We demonstrate here that bacteria grown on plates can be used with similar efficiency for transforming plants even after one week of storage at 4°C. This makes it much easier to synchronize Agrobacterium and plants for transformation. DNA gel blot analysis was carried out on the T1 plants surviving the herbicide selection and demonstrated that the surviving plants are indeed transgenic. Conclusion The simplified method works as efficiently as the previously reported protocols and significantly reduces the workload, cost and time. Additionally, the protocol reduces the risk of large scale contaminations involving GMOs. Most importantly, many more independent transformations per day can be performed using this modified protocol.

  14. Discrete linear canonical transform computation by adaptive method.

    Science.gov (United States)

    Zhang, Feng; Tao, Ran; Wang, Yue

    2013-07-29

    The linear canonical transform (LCT) describes the effect of quadratic phase systems on a wavefield and generalizes many optical transforms. In this paper, the computation method for the discrete LCT using the adaptive least-mean-square (LMS) algorithm is presented. The computation approaches of the block-based discrete LCT and the stream-based discrete LCT using the LMS algorithm are derived, and the implementation structures of these approaches by the adaptive filter system are considered. The proposed computation approaches have the inherent parallel structures which make them suitable for efficient VLSI implementations, and are robust to the propagation of possible errors in the computation process.
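
    As background for readers unfamiliar with the LMS algorithm named above, the sketch below shows the generic LMS weight update identifying a short unknown filter; the block-based and stream-based discrete-LCT structures described in the paper are not reproduced.

    import numpy as np

    rng = np.random.default_rng(0)
    h_true = np.array([0.5, -0.3, 0.2])       # unknown system to identify
    w = np.zeros(3)                           # adaptive filter weights
    mu = 0.05                                 # step size

    x_hist = np.zeros(3)
    for _ in range(5000):
        x_hist = np.roll(x_hist, 1)
        x_hist[0] = rng.normal()              # new input sample
        d = h_true @ x_hist                   # desired signal
        e = d - w @ x_hist                    # error
        w += mu * e * x_hist                  # LMS update
    print(w)                                  # converges towards h_true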

  15. Platform-independent method for computer aided schematic drawings

    Science.gov (United States)

    Vell, Jeffrey L [Slingerlands, NY; Siganporia, Darius M [Clifton Park, NY; Levy, Arthur J [Fort Lauderdale, FL

    2012-02-14

    A CAD/CAM method is disclosed for a computer system to capture and interchange schematic drawing and associated design information. The schematic drawing and design information are stored in an extensible, platform-independent format.

  16. Simulating elastic light scattering using high performance computing methods

    NARCIS (Netherlands)

    Hoekstra, A.G.; Sloot, P.M.A.; Verbraeck, A.; Kerckhoffs, E.J.H.

    1993-01-01

    The Coupled Dipole method, as originally formulated by Purcell and Pennypacker, is a very powerful method to simulate the Elastic Light Scattering from arbitrary particles. This method, which is a particle simulation model for Computational Electromagnetics, has one major drawback: if the size of the

  17. Computational and experimental methods for enclosed natural convection

    International Nuclear Information System (INIS)

    Larson, D.W.; Gartling, D.K.; Schimmel, W.P. Jr.

    1977-10-01

    Two computational procedures and one optical experimental procedure for studying enclosed natural convection are described. The finite-difference and finite-element numerical methods are developed and several sample problems are solved. Results obtained from the two computational approaches are compared. A temperature-visualization scheme using laser holographic interferometry is described, and results from this experimental procedure are compared with results from both numerical methods

  18. Method and computer program product for maintenance and modernization backlogging

    Science.gov (United States)

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
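
    The relation stated above is a plain sum, shown here as a one-line function evaluated over a couple of hypothetical time periods (the numbers are illustrative only).

    def future_facility_condition(maintenance_cost, modernization_factor, backlog_factor):
        """Future facility condition for one time period, as stated in the abstract."""
        return maintenance_cost + modernization_factor + backlog_factor

    periods = {"FY2024": (1.2e6, 3.5e5, 2.0e5), "FY2025": (1.3e6, 3.0e5, 2.6e5)}
    for name, (maint, modern, backlog) in periods.items():
        print(name, future_facility_condition(maint, modern, backlog))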

  19. Computer Anti-forensics Methods and their Impact on Computer Forensic Investigation

    OpenAIRE

    Pajek, Przemyslaw; Pimenidis, Elias

    2009-01-01

    Electronic crime is very difficult to investigate and prosecute, mainly due to the fact that investigators have to build their cases based on artefacts left on computer systems. Nowadays, computer criminals are aware of computer forensics methods and techniques and try to use countermeasure techniques to efficiently impede the investigation processes. In many cases investigation with such countermeasure techniques in place appears to be too expensive, or too time consuming t...

  20. Fibonacci’s Computation Methods vs Modern Algorithms

    Directory of Open Access Journals (Sweden)

    Ernesto Burattini

    2013-12-01

    Full Text Available In this paper we discuss some computational procedures given by Leonardo Pisano Fibonacci in his famous Liber Abaci book, and we propose their translation into a modern computer language (C++). Among others, we describe the method of “cross” multiplication, we evaluate its computational complexity in algorithmic terms, and we show the output of a C++ code that describes the development of the method applied to the product of two integers. In a similar way we show the operations performed on fractions introduced by Fibonacci. Thanks to the possibility of reproducing Fibonacci’s different computational procedures on a computer, it was possible to identify some calculation errors present in the different versions of the original text.
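
    The "cross" multiplication Fibonacci describes works digit by digit: the k-th output digit collects all products of digit pairs whose place values sum to k, plus the running carry. A compact Python rendering of that column-wise scheme is sketched below (the published translation is in C++, so this is an illustration of the method, not the authors' code).

    def cross_multiply(a_digits, b_digits):
        """Multiply two numbers given as little-endian digit lists, column by column."""
        n, m = len(a_digits), len(b_digits)
        result, carry = [], 0
        for k in range(n + m - 1):
            s = carry + sum(a_digits[i] * b_digits[k - i]
                            for i in range(max(0, k - m + 1), min(n, k + 1)))
            result.append(s % 10)
            carry = s // 10
        while carry:
            result.append(carry % 10)
            carry //= 10
        return result

    # 127 x 46 = 5842; digits are little-endian, so the expected output is [2, 4, 8, 5]
    print(cross_multiply([7, 2, 1], [6, 4]))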

  1. SmartShadow models and methods for pervasive computing

    CERN Document Server

    Wu, Zhaohui

    2013-01-01

    SmartShadow: Models and Methods for Pervasive Computing offers a new perspective on pervasive computing with SmartShadow, which is designed to model a user as a personality "shadow" and to model pervasive computing environments as user-centric dynamic virtual personal spaces. Just like human beings' shadows in the physical world, it follows people wherever they go, providing them with pervasive services. The model, methods, and software infrastructure for SmartShadow are presented and an application for smart cars is also introduced. The book can serve as a valuable reference work for resea

  2. Where Words Fail, Music Speaks: A Mixed Method Study of an Evidence-Based Music Protocol.

    Science.gov (United States)

    Daniels, Ruby A; Torres, David; Reeser, Cathy

    2016-01-01

    Despite numerous studies documenting the benefits of music, hospice social workers are often unfamiliar with evidence-based music practices that may improve end of life care. This mixed method study tested an intervention to teach hospice social workers and chaplains (N = 10) an evidence-based music protocol. Participants used the evidence-based practice (EBP) for 30 days, recording 226 journal entries that described observations of 84 patients and their families. There was a significant increase in EBP knowledge (35%). Prompting behavioral and emotional responses, music was described frequently as a catalyst that facilitated deeper dialogue between patients, families, social workers, and chaplains.

  3. SU-F-I-43: A Software-Based Statistical Method to Compute Low Contrast Detectability in Computed Tomography Images

    Energy Technology Data Exchange (ETDEWEB)

    Chacko, M; Aldoohan, S [University of Oklahoma Health Sciences Center, Oklahoma City, OK (United States)

    2016-06-15

    Purpose: The low contrast detectability (LCD) of a CT scanner is its ability to detect and display faint lesions. LCD is currently quantified using vendor-specific methods and phantoms, typically by subjectively observing the smallest object visible at a contrast level above the phantom background. However, this approach does not yield clinically applicable values for LCD. The current study proposes a statistical LCD metric using software tools not only to assess scanner performance, but also to quantify the key factors affecting LCD. This approach was developed using uniform QC phantoms, and its applicability was then extended under simulated clinical conditions. Methods: MATLAB software was developed to compute LCD using a uniform image of a QC phantom. For a given virtual object size, the software randomly samples the image within a selected area, and uses statistical analysis based on Student’s t-distribution to compute the LCD as the minimal Hounsfield units that can be distinguished from the background at the 95% confidence level. Its validity was assessed by comparison with the behavior of a known QC phantom under various scan protocols and a tissue-mimicking phantom. The contributions of beam quality and scattered radiation upon the computed LCD were quantified by using various external beam-hardening filters and phantom lengths. Results: As expected, the LCD was inversely related to object size under all scan conditions. The type of image reconstruction kernel filter and tissue/organ type strongly influenced the background noise characteristics and therefore the computed LCD for the associated image. Conclusion: The proposed metric and its associated software tools are vendor-independent and can be used to analyze the LCD performance of any scanner. Furthermore, the method employed can be used in conjunction with the relationships established in this study between LCD and tissue type to extend these concepts to patients’ clinical CT
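
    One plausible formulation of the statistic described in the Methods section is sketched below: the smallest HU difference between an n-pixel virtual object and an m-pixel background sample that is significant at the 95% level, given the noise measured in a uniform ROI. The authors' tool is written in MATLAB; this Python version, the one-sided test and the synthetic noise level are assumptions for illustration.

    import numpy as np
    from scipy.stats import t

    def lcd_hu(background_pixels, object_npix, alpha=0.05):
        bg = np.asarray(background_pixels, dtype=float)
        m, sigma = bg.size, bg.std(ddof=1)
        return t.ppf(1 - alpha, m - 1) * sigma * np.sqrt(1.0 / object_npix + 1.0 / m)

    rng = np.random.default_rng(0)
    roi = rng.normal(0.0, 5.0, 2000)          # uniform phantom ROI with ~5 HU noise
    for npix in (10, 100, 1000):              # larger virtual objects -> lower LCD
        print(npix, "px ->", round(lcd_hu(roi, npix), 2), "HU")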

  4. Chapter 16: Retrocommissioning Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Tiessen, Alex [Posterity Group, Derwood, MD (United States)

    2017-10-09

    Retrocommissioning (RCx) is a systematic process for optimizing energy performance in existing buildings. It specifically focuses on improving the control of energy-using equipment (e.g., heating, ventilation, and air conditioning [HVAC] equipment and lighting) and typically does not involve equipment replacement. Field results have shown proper RCx can achieve energy savings ranging from 5 percent to 20 percent, with a typical payback of two years or less (Thorne 2003). The method presented in this protocol provides direction regarding: (1) how to account for each measure's specific characteristics and (2) how to choose the most appropriate savings verification approach.

  5. Computer science handbook. Vol. 13.3. Environmental computer science. Computer science methods for environmental protection and environmental research

    International Nuclear Information System (INIS)

    Page, B.; Hilty, L.M.

    1994-01-01

    Environmental computer science is a new subdiscipline of applied computer science, which makes use of methods and techniques of information processing in environmental protection. Thanks to the inter-disciplinary nature of environmental problems, computer science acts as a mediator between numerous disciplines and institutions in this sector. The handbook reflects the broad spectrum of state-of-the-art environmental computer science. The following important subjects are dealt with: Environmental databases and information systems, environmental monitoring, modelling and simulation, visualization of environmental data and knowledge-based systems in the environmental sector. (orig.) [de

  6. Development and Usability Testing of a Computer-Tailored Decision Support Tool for Lung Cancer Screening: Study Protocol.

    Science.gov (United States)

    Carter-Harris, Lisa; Comer, Robert Skipworth; Goyal, Anurag; Vode, Emilee Christine; Hanna, Nasser; Ceppa, DuyKhanh; Rawl, Susan M

    2017-11-16

    Awareness of lung cancer screening remains low in the screening-eligible population, and when patients visit their clinician never having heard of lung cancer screening, engaging in shared decision making to arrive at an informed decision can be a challenge. Therefore, methods to effectively support both patients and clinicians to engage in these important discussions are essential. To facilitate shared decision making about lung cancer screening, effective methods to prepare patients to have these important discussions with their clinician are needed. Our objective is to develop a computer-tailored decision support tool that meets the certification criteria of the International Patient Decision Aid Standards instrument version 4.0 that will support shared decision making in lung cancer screening decisions. Using a 3-phase process, we will develop and test a prototype of a computer-tailored decision support tool in a sample of lung cancer screening-eligible individuals. In phase I, we assembled a community advisory board comprising 10 screening-eligible individuals to develop the prototype. In phase II, we recruited a sample of 13 screening-eligible individuals to test the prototype for usability, acceptability, and satisfaction. In phase III, we are conducting a pilot randomized controlled trial (RCT) with 60 screening-eligible participants who have never been screened for lung cancer. Outcomes tested include lung cancer and screening knowledge, lung cancer screening health beliefs (perceived risk, perceived benefits, perceived barriers, and self-efficacy), perception of being prepared to engage in a patient-clinician discussion about lung cancer screening, occurrence of a patient-clinician discussion about lung cancer screening, and stage of adoption for lung cancer screening. Phases I and II are complete. Phase III is underway. As of July 15, 2017, 60 participants have been enrolled into the study, and have completed the baseline survey, intervention, and first

  7. Balancing nurses' workload in hospital wards: study protocol of developing a method to manage workload.

    Science.gov (United States)

    van den Oetelaar, W F J M; van Stel, H F; van Rhenen, W; Stellato, R K; Grolman, W

    2016-11-10

    Hospitals pursue different goals at the same time: excellent service to their patients, good quality care, operational excellence, retaining employees. This requires a good balance between patient needs and nursing staff. One way to ensure a proper fit between patient needs and nursing staff is to work with a workload management method. In our view, a nursing workload management method needs to have the following characteristics: easy to interpret; limited additional registration; applicable to different types of hospital wards; supported by nurses; covers all activities of nurses and suitable for prospective planning of nursing staff. At present, no such method is available. The research follows several steps to arrive at a workload management method for staff nurses. First, a list of patient characteristics relevant to care time will be composed by performing a Delphi study among staff nurses. Next, a time study of nurses' activities will be carried out. The two can be combined to estimate care time per patient group and to estimate the time nurses spend on non-patient-related activities. These two estimates can be combined and compared with the available nursing resources: this gives an estimate of nurses' workload. The research will take place in an academic hospital in the Netherlands. Six surgical wards will be included, with a capacity of 15-30 beds. The study protocol was submitted to the Medical Ethical Review Board of the University Medical Center (UMC) Utrecht and received a positive advice, protocol number 14-165/C. This method will be developed in close cooperation with staff nurses and ward management. The strong involvement of the end users will contribute to a broader support of the results. The method we will develop may also be useful for planning purposes; this is a strong advantage compared with existing methods, which tend to focus on retrospective analysis. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence
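
    A toy numerical illustration of the arithmetic the protocol describes, in Python rather than the study's own tooling; every figure below (care minutes per patient group, census, shift length) is hypothetical.

```python
# Toy illustration (all numbers hypothetical) of the workload arithmetic:
# care time per patient group plus non-patient-related time, compared with
# the nursing hours available on the ward during a shift.
care_minutes_per_patient = {"low": 45, "medium": 90, "high": 180}   # per shift
census = {"low": 10, "medium": 8, "high": 4}                        # patients present
non_patient_minutes_per_nurse = 60                                  # admin, handover, breaks
nurses_on_shift = 6
shift_minutes = 8 * 60

demand = sum(care_minutes_per_patient[g] * n for g, n in census.items())
demand += non_patient_minutes_per_nurse * nurses_on_shift
capacity = nurses_on_shift * shift_minutes

workload_ratio = demand / capacity   # > 1 means demand exceeds available staff time
print(f"workload = {workload_ratio:.2f}")
```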

  8. Methods for the evaluation of hospital cooperation activities (Systematic review protocol)

    Directory of Open Access Journals (Sweden)

    Rotter Thomas

    2012-02-01

    Full Text Available Abstract Background Hospital partnerships, mergers and cooperatives are arrangements frequently seen as a means of improving health service delivery. Many of the assumptions used in planning hospital cooperatives are not stated clearly and are often based on limited or poor scientific evidence. Methods This is a protocol for a systematic review, following the Cochrane EPOC methodology. The review aims to document, catalogue and synthesize the existing literature on the reported methods for the evaluation of hospital cooperation activities as well as methods of hospital cooperation. We will search the Database of Abstracts of Reviews of Effectiveness, the Effective Practice and Organisation of Care Register, the Cochrane Central Register of Controlled Trials and bibliographic databases including PubMed (via NLM), Web of Science, NHS EED, Business Source Premier (via EBSCO) and Global Health for publications that report on methods for evaluating hospital cooperatives, strategic partnerships, mergers, alliances, networks and related activities and methods used for such partnerships. The method proposed by the Cochrane EPOC group regarding randomized study designs, controlled clinical trials, controlled before and after studies, and interrupted time series will be followed. In addition, we will also include cohort, case-control studies, and relevant non-comparative publications such as case reports. We will categorize and analyze the review findings according to the study design employed, the study quality (low versus high quality studies) and the method reported in the primary studies. We will present the results of studies in tabular form. Discussion Overall, the systematic review aims to identify, assess and synthesize the evidence to underpin hospital cooperation activities as defined in this protocol. As a result, the review will provide an evidence base for partnerships, alliances or other fields of cooperation in a hospital setting. PROSPERO

  9. A computational simulation of long-term synaptic potentiation inducing protocol processes with model of CA3 hippocampal microcircuit.

    Science.gov (United States)

    Świetlik, D; Białowąs, J; Kusiak, A; Cichońska, D

    2018-01-01

    An experimental study of a computational model of the CA3 region presents cognitive and behavioural functions of the hippocampus. The main property of the CA3 region is plastic recurrent connectivity, where the connections allow it to behave as an auto-associative memory. The computer simulations showed that the CA3 model performs efficient long-term synaptic potentiation (LTP) induction and a high rate of sub-millisecond coincidence detection. The average frequency of the CA3 pyramidal cell model was substantially higher in simulations with the LTP induction protocol than without it. The entropy of pyramidal cells with LTP seemed to be significantly higher than without the LTP induction protocol (p = 0.0001). There was a depression of entropy caused by an increase of the forgetting coefficient in pyramidal cell simulations without LTP (R = -0.88, p = 0.0008), whereas such a correlation did not appear in the LTP simulation (p = 0.4458). Our biologically inspired model of the CA3 hippocampal formation microcircuit helps in understanding neurophysiological data. (Folia Morphol 2018; 77, 2: 210-220).

  10. Assessing the Efficacy of an App-Based Method of Family Planning: The Dot Study Protocol.

    Science.gov (United States)

    Simmons, Rebecca G; Shattuck, Dominick C; Jennings, Victoria H

    2017-01-18

    assess pregnancy status over time. This paper outlines the protocol for this efficacy trial, following the Standard Protocol Items: Recommendations for Intervention Trials checklist, to provide an overview of the rationale, methodology, and analysis plan. Participants will be asked to provide daily sexual history data and periodically answer surveys administered through a call center or directly on their phone. Funding for the study was provided in 2013 under the United States Agency for International Development Fertility Awareness for Community Transformation project. Recruitment for the study will begin in January of 2017. The study is expected to last approximately 18 months, depending on recruitment. Findings on the study's primary outcomes are expected to be finalized by September 2018. Reproducibility and transparency, important aspects of all research, are particularly critical in developing new approaches to research design. This protocol outlines the first study to prospectively test both the efficacy (correct use) and effectiveness (actual use) of a pregnancy prevention app. This protocol and the processes it describes reflect the dynamic integration of mobile technologies, a call center, and Health Insurance Portability and Accountability Act-compliant study procedures. Future fertility app studies can build on our approaches to develop methodologies that can contribute to the evidence base around app-based methods of contraception. ClinicalTrials.gov NCT02833922; https://clinicaltrials.gov/ct2/show/NCT02833922 (Archived by WebCite at http://www.webcitation.org/6nDkr0e76). ©Rebecca G Simmons, Dominick C Shattuck, Victoria H Jennings. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 18.01.2017.

  11. Computed tomography-based lung nodule volumetry - do optimized reconstructions of routine protocols achieve similar accuracy, reproducibility and interobserver variability to that of special volumetry protocols?

    International Nuclear Information System (INIS)

    Bolte, H.; Riedel, C.; Knoess, N.; Hoffmann, B.; Heller, M.; Biederer, J.; Freitag, S.

    2007-01-01

    Purpose: The aim of this in vitro and ex vivo CT study was to investigate whether the use of a routine thorax protocol (RTP) with optimized reconstruction parameters can provide accuracy, reproducibility and interobserver variability of volumetric analyses comparable to that of a special volumetry protocol (SVP). Materials and Methods: To assess accuracy, 3 polyurethane (PU) spheres (35 HU; diameters: 4, 6 and 10 mm) were examined with a recommended SVP using a multislice CT (collimation 16 x 0.75 mm, pitch 1.25, 20 mAs, slice thickness 1 mm, increment 0.7 mm, medium kernel) and an optimized RTP (collimation 16 x 1.5 mm, pitch 1.25, 100 mAs, reconstructed slice thickness 2 mm, increment 0.4 mm, sharp kernel). For the assessment of intrascan and interscan reproducibility and interobserver variability, 20 artificial small pulmonary nodules were placed in a dedicated ex vivo chest phantom and examined with identical scan protocols. The artificial lesions consisted of a fat-wax-Lipiodol® mixture. Phantoms and ex vivo lesions were afterwards examined using commercial volumetry software. To describe accuracy, the relative deviations from the true volumes of the PU phantoms were calculated. For intrascan and interscan reproducibility and interobserver variability, the 95% normal range (95% NR) of relative deviations between two measurements was calculated. Results: The achieved relative deviations for the 4, 6 and 10 mm PU phantoms were -14.3%, -12.7% and -6.8% for the SVP and 4.5%, -0.6% and -2.6%, respectively, for the optimized RTP. The SVP showed a 95% NR of 0 to 1.5% for intrascan and a 95% NR of -10.8 to 2.9% for interscan reproducibility. The 95% NR for interobserver variability was -4.3 to 3.3%. The optimized RTP achieved a 95% NR of -3.1 to 4.3% for intrascan reproducibility and a 95% NR of -7.0 to 3.5% for interscan reproducibility. The 95% NR for interobserver variability was -0.4 to 6.8%. (orig.)
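
    The two summary statistics used in this record can be reproduced with a short sketch (made-up volumes, not the study data); the 95% normal range is taken here as mean ± 1.96 SD of the paired relative differences, which is an assumption about the exact definition used.

```python
# Sketch of the two summary statistics with illustrative numbers only.
# Accuracy: relative deviation of a measured volume from the true volume.
# Reproducibility: 95% normal range of relative differences between paired
# measurements, assumed here to be mean +/- 1.96 SD.
import numpy as np

def relative_deviation(measured, true):
    return 100.0 * (measured - true) / true

def normal_range_95(first, second):
    rel_diff = 100.0 * (np.asarray(second) - np.asarray(first)) / np.asarray(first)
    m, s = rel_diff.mean(), rel_diff.std(ddof=1)
    return m - 1.96 * s, m + 1.96 * s

true_vol = 4.0 / 3.0 * np.pi * (5.0 ** 3)        # 10 mm sphere, volume in mm^3
print(relative_deviation(490.0, true_vol))        # roughly -6.4 %

scan1 = np.array([100.0, 210.0, 55.0, 310.0])     # hypothetical nodule volumes
scan2 = np.array([ 97.0, 215.0, 57.0, 305.0])
print(normal_range_95(scan1, scan2))
```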

  12. Optimizing diffusion of an online computer tailored lifestyle program: a study protocol

    Directory of Open Access Journals (Sweden)

    Schulz Daniela N

    2011-06-01

    Full Text Available Abstract Background Although the Internet is a promising medium to offer lifestyle interventions to large numbers of people at relatively low cost and effort, actual exposure rates of these interventions fail to meet the high expectations. Since the public health impact of interventions is determined by intervention efficacy and the level of exposure to the intervention, it is imperative to put effort into optimal dissemination. The present project attempts to optimize the dissemination process of a new online computer-tailored generic lifestyle program by carefully studying the adoption process and developing a strategy to achieve sustained use of the program. Methods/Design A prospective study will be conducted to yield relevant information concerning the adoption process by studying the level of adoption of the program, determinants involved in adoption and characteristics of adopters and non-adopters as well as satisfied and unsatisfied users. Furthermore, a randomized controlled trial will be conducted to test the effectiveness of a proactive strategy using periodic e-mail prompts in optimizing sustained use of the new program. Discussion Closely mapping the adoption process will provide insight into the characteristics of adopters and non-adopters and of satisfied and unsatisfied users. This insight can be used to further optimize the program by making it more suitable for a wider range of users, or to develop adjusted interventions to attract subgroups of users that are not reached or satisfied with the initial intervention. Furthermore, by studying the effect of a proactive strategy using periodic prompts compared to a reactive strategy to stimulate sustained use of the intervention and, possibly, behaviour change, specific recommendations on the use and the application of prompts in online lifestyle interventions can be developed. Trial registration Dutch Trial Register NTR1786 and Medical Ethics Committee of Maastricht University and the University Hospital

  13. Computational methods for protein identification from mass spectrometry data.

    Directory of Open Access Journals (Sweden)

    Leo McHugh

    2008-02-01

    Full Text Available Protein identification using mass spectrometry is an indispensable computational tool in the life sciences. A dramatic increase in the use of proteomic strategies to understand the biology of living systems generates an ongoing need for more effective, efficient, and accurate computational methods for protein identification. A wide range of computational methods, each with various implementations, are available to complement different proteomic approaches. A solid knowledge of the range of algorithms available and, more critically, the accuracy and effectiveness of these techniques is essential to ensure as many of the proteins as possible, within any particular experiment, are correctly identified. Here, we undertake a systematic review of the currently available methods and algorithms for interpreting, managing, and analyzing biological data associated with protein identification. We summarize the advances in computational solutions as they have responded to corresponding advances in mass spectrometry hardware. The evolution of scoring algorithms and metrics for automated protein identification are also discussed with a focus on the relative performance of different techniques. We also consider the relative advantages and limitations of different techniques in particular biological contexts. Finally, we present our perspective on future developments in the area of computational protein identification by considering the most recent literature on new and promising approaches to the problem as well as identifying areas yet to be explored and the potential application of methods from other areas of computational biology.

  14. Big data mining analysis method based on cloud computing

    Science.gov (United States)

    Cai, Qing Qiu; Cui, Hong Gang; Tang, Hao

    2017-08-01

    In the era of information explosion, the extremely large scale, discrete, and non-(semi-)structured nature of big data has gone far beyond what traditional data management approaches can handle. With the arrival of the cloud computing era, cloud computing provides a new technical way to analyze massive data mining workloads and can effectively solve the problem that traditional data mining methods cannot adapt to massive data. This paper introduces the meaning and characteristics of cloud computing, analyzes the advantages of using cloud computing technology to realize data mining, designs a mining algorithm for association rules based on the MapReduce parallel processing architecture, and carries out experimental verification. The parallel association rule mining algorithm based on a cloud computing platform can greatly improve the execution speed of data mining.
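
    As a hedged illustration of the MapReduce idea referenced above (not the paper's implementation), the following toy script emulates the map, shuffle and reduce stages for the support-counting core of association-rule mining on a single machine.

```python
# Toy single-process emulation of MapReduce-style support counting for
# candidate item pairs, the core step of parallel association-rule mining.
from collections import defaultdict
from itertools import combinations

transactions = [
    {"milk", "bread", "butter"},
    {"beer", "bread"},
    {"milk", "bread", "beer", "butter"},
    {"milk", "butter"},
]

# Map: each transaction emits (sorted item pair, 1).
mapped = [(pair, 1) for t in transactions
          for pair in combinations(sorted(t), 2)]

# Shuffle: group emitted values by key.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce: sum counts and keep pairs that meet a minimum support.
min_support = 2
frequent = {k: sum(v) for k, v in groups.items() if sum(v) >= min_support}
print(frequent)   # e.g. ('butter', 'milk') appears in 3 transactions
```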

  15. A Krylov Subspace Method for Unstructured Mesh SN Transport Computation

    International Nuclear Information System (INIS)

    Yoo, Han Jong; Cho, Nam Zin; Kim, Jong Woon; Hong, Ser Gi; Lee, Young Ouk

    2010-01-01

    Hong et al. have developed the computer code MUST (Multi-group Unstructured geometry SN Transport) for neutral particle transport calculations in three-dimensional unstructured geometry. In this code, the discrete ordinates transport equation is solved using the discontinuous finite element method (DFEM) or the subcell balance methods with linear discontinuous expansion. In this paper, the conventional source iteration in the MUST code is replaced by a Krylov subspace method to reduce computing time, and the numerical test results are given.
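
    A hedged, generic analogue of the change described above (not the MUST code): both a source (Richardson) iteration and a Krylov solver are applied to the same fixed-point system (I − cT)φ = q, where T is a stand-in for the transport sweep operator and all matrices are synthetic.

```python
# Toy analogue of replacing source iteration with a Krylov subspace solver:
# both solve the same fixed-point problem (I - c*T) phi = q, where T stands
# in for the scattering/transport sweep operator.
import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(1)
n, c = 200, 0.9
T = rng.random((n, n)); T /= T.sum(axis=1, keepdims=True)   # row-stochastic stand-in
q = rng.random(n)
A = np.eye(n) - c * T

# Source (Richardson) iteration: phi <- c*T*phi + q
phi = np.zeros(n)
for k in range(500):
    new = c * T @ phi + q
    if np.linalg.norm(new - phi) < 1e-10:
        break
    phi = new

# Krylov subspace (GMRES) solution of the same system
phi_krylov, info = gmres(A, q, atol=1e-12)
print(k, info, np.linalg.norm(phi - phi_krylov))   # same answer, far fewer iterations
```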

  16. Computational methods for high-energy source shielding

    International Nuclear Information System (INIS)

    Armstrong, T.W.; Cloth, P.; Filges, D.

    1983-01-01

    The computational methods for high-energy radiation transport related to shielding of the SNQ-spallation source are outlined. The basic approach is to couple radiation-transport computer codes which use Monte Carlo methods and discrete ordinates methods. A code system is suggested that incorporates state-of-the-art radiation-transport techniques. The stepwise verification of that system is briefly summarized. The complexity of the resulting code system suggests a more straightforward code specially tailored for thick shield calculations. A short guide line to future development of such a Monte Carlo code is given

  17. A simple method for estimating the effective dose in dental CT. Conversion factors and calculation for a clinical low-dose protocol

    International Nuclear Information System (INIS)

    Homolka, P.; Kudler, H.; Nowotny, R.; Gahleitner, A.; Wien Univ.

    2001-01-01

    An easily applicable method to estimate effective dose from dental computed tomography, including in its definition the high radiosensitivity of the salivary glands, is presented. Effective doses were calculated for a markedly dose-reduced dental CT protocol as well as for standard settings. Data are compared with effective doses from the literature obtained with other modalities frequently used in dental care. Methods: Conversion factors based on the weighted Computed Tomography Dose Index were derived from published data to calculate effective dose values for various CT exposure settings. Results: The conversion factors determined can be used for clinically used kVp settings and prefiltrations. With reduced tube current, an effective dose of 22 μSv can be achieved for a CT examination of the maxilla, which compares to values typically obtained with panoramic radiography (26 μSv). A CT scan of the mandible gives 123 μSv, comparable to a full mouth survey with intraoral films (150 μSv). Conclusion: For standard CT scan protocols of the mandible, effective doses exceed 600 μSv. Hence, low-dose protocols for dental CT should be considered whenever feasible, especially for paediatric patients. If hard tissue diagnosis is performed, the potential for dose reduction is significant despite the higher image noise levels, as readability is still adequate. (orig.) [de]
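
    The conversion-factor idea can be sketched in a few lines; the factor and CTDIw values below are hypothetical placeholders, not the values derived in the paper.

```python
# Illustration only: effective dose estimated as a conversion factor applied
# to the weighted CTDI, as the method above describes. Factor and CTDIw
# numbers are hypothetical placeholders, not the paper's data.
def effective_dose_uSv(ctdi_w_mGy, conversion_factor_uSv_per_mGy):
    return ctdi_w_mGy * conversion_factor_uSv_per_mGy

# Hypothetical low-dose vs. standard dental CT settings for the maxilla:
for label, ctdi_w, k in [("low-dose", 2.0, 11.0), ("standard", 12.0, 11.0)]:
    print(label, effective_dose_uSv(ctdi_w, k), "uSv")
```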

  18. Monte Carlo methods of PageRank computation

    NARCIS (Netherlands)

    Litvak, Nelli

    2004-01-01

    We describe and analyze an on-line Monte Carlo method of PageRank computation. The PageRank is estimated based on the results of a large number of short independent simulation runs initiated from each page that contains outgoing hyperlinks. The method does not require any storage of the hyperlink
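
    One common Monte Carlo PageRank estimator, sketched below, runs short walks from every page, terminates each step with probability 1 − c, and estimates PageRank from visit counts; this is an illustrative variant and not necessarily the exact estimator analyzed in the cited work.

```python
# Monte Carlo PageRank via short random walks: start m walks from every page,
# stop each walk at every step with probability 1-c, and estimate PageRank
# from the total number of visits to each page.
import random
from collections import Counter

def mc_pagerank(graph, c=0.85, walks_per_page=200, rng=random.Random(0)):
    visits = Counter()
    pages = list(graph)
    for start in pages:
        for _ in range(walks_per_page):
            node = start
            while True:
                visits[node] += 1
                if rng.random() > c or not graph[node]:   # stop, or dangling node
                    break
                node = rng.choice(graph[node])
    total = sum(visits.values())
    return {p: visits[p] / total for p in pages}

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(mc_pagerank(web))   # "c" should come out largest
```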

  19. Geometric optical transfer function and its computation method

    International Nuclear Information System (INIS)

    Wang Qi

    1992-01-01

    The geometric optical transfer function formula is derived after expounding some easily overlooked details, and the computation method is given using the Bessel function of order zero, numerical integration and spline interpolation. The method has the advantage of ensuring accuracy and saving calculation effort.
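
    For a rotationally symmetric point spread function the geometric OTF reduces to a zero-order Hankel transform, which can be evaluated with the Bessel function of order zero and numerical integration as the abstract indicates; the Gaussian blur spot below is an illustrative choice, not the paper's example.

```python
# Sketch: for a rotationally symmetric geometric PSF(r), the OTF at spatial
# frequency f is the zero-order Hankel transform
#   OTF(f) = 2*pi * integral_0^inf PSF(r) * J0(2*pi*f*r) * r dr,
# evaluated here by numerical integration for a Gaussian blur spot.
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

sigma = 0.01  # mm, width of the geometric blur spot (illustrative)

def psf(r):
    return np.exp(-r**2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)  # normalized

def otf(f_cyc_per_mm):
    integrand = lambda r: psf(r) * j0(2 * np.pi * f_cyc_per_mm * r) * r
    value, _ = quad(integrand, 0.0, np.inf)
    return 2 * np.pi * value

for f in (0.0, 10.0, 20.0, 40.0):
    print(f, round(otf(f), 4))   # OTF(0) = 1 and falls off with frequency
```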

  20. Efficient Numerical Methods for Stochastic Differential Equations in Computational Finance

    KAUST Repository

    Happola, Juho

    2017-09-19

    Stochastic Differential Equations (SDE) offer a rich framework to model the probabilistic evolution of the state of a system. Numerical approximation methods are typically needed in evaluating relevant Quantities of Interest arising from such models. In this dissertation, we present novel effective methods for evaluating Quantities of Interest relevant to computational finance when the state of the system is described by an SDE.
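
    A generic sketch of the kind of computation this work addresses (not the dissertation's methods): Euler–Maruyama time stepping for a geometric Brownian motion and a plain Monte Carlo estimate of a discounted payoff as the quantity of interest; all parameter values are illustrative.

```python
# Euler-Maruyama time stepping for dS = r*S dt + sigma*S dW, followed by a
# plain Monte Carlo estimate of a discounted payoff as the quantity of interest.
import numpy as np

rng = np.random.default_rng(42)
r, sigma, S0, T, K = 0.03, 0.2, 100.0, 1.0, 105.0
n_steps, n_paths = 250, 100_000
dt = T / n_steps

S = np.full(n_paths, S0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    S += r * S * dt + sigma * S * dW          # Euler-Maruyama update

payoff = np.maximum(S - K, 0.0)
price = np.exp(-r * T) * payoff.mean()
stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
print(f"QoI estimate: {price:.3f} +/- {1.96 * stderr:.3f}")
```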

  1. Fully consistent CFD methods for incompressible flow computations

    DEFF Research Database (Denmark)

    Kolmogorov, Dmitry; Shen, Wen Zhong; Sørensen, Niels N.

    2014-01-01

    Nowadays collocated grid based CFD methods are one of the most efficient tools for computations of the flows past wind turbines. To ensure the robustness of the methods they require special attention to the well-known problem of pressure-velocity coupling. Many commercial codes to ensure the pressure

  2. Efficient Numerical Methods for Stochastic Differential Equations in Computational Finance

    KAUST Repository

    Happola, Juho

    2017-01-01

    Stochastic Differential Equations (SDE) offer a rich framework to model the probabilistic evolution of the state of a system. Numerical approximation methods are typically needed in evaluating relevant Quantities of Interest arising from such models. In this dissertation, we present novel effective methods for evaluating Quantities of Interest relevant to computational finance when the state of the system is described by an SDE.

  3. Computational methods for structural load and resistance modeling

    Science.gov (United States)

    Thacker, B. H.; Millwater, H. R.; Harren, S. V.

    1991-01-01

    An automated capability for computing structural reliability considering uncertainties in both load and resistance variables is presented. The computations are carried out using an automated Advanced Mean Value iteration algorithm (AMV +) with performance functions involving load and resistance variables obtained by both explicit and implicit methods. A complete description of the procedures used is given as well as several illustrative examples, verified by Monte Carlo Analysis. In particular, the computational methods described in the paper are shown to be quite accurate and efficient for a material nonlinear structure considering material damage as a function of several primitive random variables. The results show clearly the effectiveness of the algorithms for computing the reliability of large-scale structural systems with a maximum number of resolutions.

  4. Computational mathematics models, methods, and analysis with Matlab and MPI

    CERN Document Server

    White, Robert E

    2004-01-01

    Computational Mathematics: Models, Methods, and Analysis with MATLAB and MPI explores and illustrates this process. Each section of the first six chapters is motivated by a specific application. The author applies a model, selects a numerical method, implements computer simulations, and assesses the ensuing results. These chapters include an abundance of MATLAB code. By studying the code instead of using it as a "black box, " you take the first step toward more sophisticated numerical modeling. The last four chapters focus on multiprocessing algorithms implemented using message passing interface (MPI). These chapters include Fortran 9x codes that illustrate the basic MPI subroutines and revisit the applications of the previous chapters from a parallel implementation perspective. All of the codes are available for download from www4.ncsu.edu./~white.This book is not just about math, not just about computing, and not just about applications, but about all three--in other words, computational science. Whether us...

  5. Advanced scientific computational methods and their applications to nuclear technologies. (4) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (4)

    International Nuclear Information System (INIS)

    Sekimura, Naoto; Okita, Taira

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of weft connecting each realm of nuclear engineering and then an introductory course of advanced scientific computational methods and their applications to nuclear technologies were prepared in serial form. This is the fourth issue showing the overview of scientific computational methods with the introduction of continuum simulation methods and their applications. Simulation methods on physical radiation effects on materials are reviewed based on the process such as binary collision approximation, molecular dynamics, kinematic Monte Carlo method, reaction rate method and dislocation dynamics. (T. Tanaka)

  6. Class of reconstructed discontinuous Galerkin methods in computational fluid dynamics

    International Nuclear Information System (INIS)

    Luo, Hong; Xia, Yidong; Nourgaliev, Robert

    2011-01-01

    A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases of the RDG methods, and thus allow for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction is aimed to augment the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, although the least-squares reconstructed DG method provides the best performance in terms of accuracy, efficiency, and robustness. (author)

  7. Data analysis through interactive computer animation method (DATICAM)

    International Nuclear Information System (INIS)

    Curtis, J.N.; Schwieder, D.H.

    1983-01-01

    DATICAM is an interactive computer animation method designed to aid in the analysis of nuclear research data. DATICAM was developed at the Idaho National Engineering Laboratory (INEL) by EG and G Idaho, Inc. INEL analysts use DATICAM to produce computer codes that are better able to predict the behavior of nuclear power reactors. In addition to increased code accuracy, DATICAM has saved manpower and computer costs. DATICAM has been generalized to assist in the data analysis of virtually any data-producing dynamic process

  8. Multigrid methods for the computation of propagators in gauge fields

    International Nuclear Information System (INIS)

    Kalkreuter, T.

    1992-11-01

    In the present work generalizations of multigrid methods for propagators in gauge fields are investigated. We discuss proper averaging operations for bosons and for staggered fermions. An efficient algorithm for computing C numerically is presented. The averaging kernels C can be used not only in deterministic multigrid computations, but also in multigrid Monte Carlo simulations, and for the definition of block spins and blocked gauge fields in Monte Carlo renormalization group studies of gauge theories. Actual numerical computations of kernels and propagators are performed in compact four-dimensional SU(2) gauge fields. (orig./HSI)

  9. Computational Protocols for Prediction of Solute NMR Relative Chemical Shifts. A Case Study of L-Tryptophan in Aqueous Solution

    DEFF Research Database (Denmark)

    Eriksen, Janus J.; Olsen, Jógvan Magnus H.; Aidas, Kestutis

    2011-01-01

    In this study, we have applied two different spanning protocols for obtaining the molecular conformations of L-tryptophan in aqueous solution, namely a molecular dynamics simulation and a molecular mechanics conformational search with subsequent geometry re-optimization of the stable conformers using a quantum mechanically based method. These spanning protocols represent standard ways of obtaining a set of conformations on which NMR calculations may be performed. The results stemming from the solute–solvent configurations extracted from the MD simulation at 300 K are found to be inferior to the results stemming from the conformations extracted from the MM conformational search in terms of replicating an experimental reference as well as in achieving the correct sequence of the NMR relative chemical shifts of L-tryptophan in aqueous solution. We find this to be due to missing conformations ...

  10. Water demand forecasting: review of soft computing methods.

    Science.gov (United States)

    Ghalehkhondabi, Iman; Ardjmand, Ehsan; Young, William A; Weckman, Gary R

    2017-07-01

    Demand forecasting plays a vital role in resource management for governments and private companies. Considering the scarcity of water and its inherent constraints, demand management and forecasting in this domain are critically important. Several soft computing techniques have been developed over the last few decades for water demand forecasting. This study focuses on soft computing methods of water consumption forecasting published between 2005 and 2015. These methods include artificial neural networks (ANNs), fuzzy and neuro-fuzzy models, support vector machines, metaheuristics, and system dynamics. Furthermore, it was discussed that although ANNs have been superior in many short-term forecasting cases, it is still very difficult to pick a single method as the overall best. According to the literature, various methods and their hybrids are applied to water demand forecasting. However, it seems soft computing has a lot more to contribute to water demand forecasting. These contribution areas include, but are not limited to, various ANN architectures, unsupervised methods, deep learning, various metaheuristics, and ensemble methods. Moreover, it is found that soft computing methods are mainly used for short-term demand forecasting.

  11. Computer assisted strain-gauge plethysmography is a practical method of excluding deep venous thrombosis

    International Nuclear Information System (INIS)

    Goddard, A.J.P.; Chakraverty, S.; Wright, J.

    2001-01-01

    AIM: To evaluate a computed strain-gauge plethysmograph (CSGP) as a screening tool to exclude above knee deep venous thrombosis (DVT). METHODS: The first phase took place in the Radiology department. One hundred and forty-nine patients had both Doppler ultrasound and CSGP performed. Discordant results were resolved by venography where possible. The second phase took place in an acute medical admissions ward using a modified protocol. A further 173 patients had both studies performed. The results were collated and analysed. RESULTS: Phase 1. The predictive value of a negative CSGP study was 98%. There were two false-negative CSGP results (false-negative rate 5%), including one equivocal CSGP study which had deep venous thrombosis on ultrasound examination. Two patients thought to have thrombus on ultrasound proved not to have acute thrombus on venography. Phase 2. The negative predictive value of CSGP using a modified protocol was 97%. There were two definite and one possible false-negative studies (false-negative rate 4-7%). CONCLUSION: Computed strain-gauge plethysmography can provide a simple, cheap and effective method of excluding lower limb DVT. However, its use should be rigorously assessed in each hospital in which it is used. Goddard, A.J.P., Chakraverty, S. and Wright, J. (2001)

  12. Short-term electric load forecasting using computational intelligence methods

    OpenAIRE

    Jurado, Sergio; Peralta, J.; Nebot, Àngela; Mugica, Francisco; Cortez, Paulo

    2013-01-01

    Accurate time series forecasting is a key issue to support individual and organizational decision making. In this paper, we introduce several methods for short-term electric load forecasting. All the presented methods stem from computational intelligence techniques: Random Forest, Nonlinear Autoregressive Neural Networks, Evolutionary Support Vector Machines and Fuzzy Inductive Reasoning. The performance of the suggested methods is experimentally justified with several experiments carried out...

  13. A stochastic method for computing hadronic matrix elements

    Energy Technology Data Exchange (ETDEWEB)

    Alexandrou, Constantia [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; The Cyprus Institute, Nicosia (Cyprus). Computational-based Science and Technology Research Center; Dinter, Simon; Drach, Vincent [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Jansen, Karl [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Hadjiyiannakou, Kyriakos [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Renner, Dru B. [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Collaboration: European Twisted Mass Collaboration

    2013-02-15

    We present a stochastic method for the calculation of baryon three-point functions that is more versatile compared to the typically used sequential method. We analyze the scaling of the error of the stochastically evaluated three-point function with the lattice volume and find a favorable signal-to-noise ratio suggesting that our stochastic method can be used efficiently at large volumes to compute hadronic matrix elements.

  14. Computational modeling of local hemodynamics phenomena: methods, tools and clinical applications

    International Nuclear Information System (INIS)

    Ponzini, R.; Rizzo, G.; Vergara, C.; Veneziani, A.; Morbiducci, U.; Montevecchi, F.M.; Redaelli, A.

    2009-01-01

    Local hemodynamics plays a key role in the onset of vessel wall pathophysiology, with peculiar blood flow structures (i.e. spatial velocity profiles, vortices, re-circulating zones, helical patterns and so on) characterizing the behavior of specific vascular districts. Thanks to evolving technologies in computer science, mathematical modeling and hardware performance, the study of local hemodynamics can today also make use of a virtual environment to perform hypothesis testing, product development, protocol design and methods validation that just a couple of decades ago would not have been thinkable. Computational fluid dynamics (CFD) appears to be more than a complementary partner to in vitro modeling and a possible substitute for animal models, furnishing a privileged environment for cheap, fast and reproducible data generation.

  15. The Direct Lighting Computation in Global Illumination Methods

    Science.gov (United States)

    Wang, Changyaw Allen

    1994-01-01

    Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem into an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection; Monte Carlo sampling methods; and light source simplification. Results include a new sample generation method, a framework for the prediction of the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment, which for the first time makes ray tracing feasible for highly complex environments.
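
    The direct lighting term can be estimated with a very small Monte Carlo sampler over the emitter, as sketched below; the geometry, radiance and sample counts are illustrative, and occlusion and BRDF details are ignored, so this is not the thesis' sampler.

```python
# Monte Carlo estimate of direct illumination at a surface point from a square
# area light: uniformly sample points y on the emitter and average
# L * cos(theta_x) * cos(theta_y) / |x - y|^2, scaled by the light's area.
import numpy as np

rng = np.random.default_rng(7)
L_e = 10.0                                   # emitted radiance
light_origin = np.array([-0.5, -0.5, 2.0])   # 1x1 light in the plane z = 2
light_u = np.array([1.0, 0.0, 0.0])
light_v = np.array([0.0, 1.0, 0.0])
light_normal = np.array([0.0, 0.0, -1.0])    # emits downwards
area = 1.0

x = np.array([0.0, 0.0, 0.0])                # shaded point on the floor
n_x = np.array([0.0, 0.0, 1.0])              # its surface normal

def estimate_direct(n_samples):
    u, v = rng.random(n_samples), rng.random(n_samples)
    y = light_origin + np.outer(u, light_u) + np.outer(v, light_v)
    d = y - x
    dist2 = np.einsum("ij,ij->i", d, d)
    w = d / np.sqrt(dist2)[:, None]          # unit directions toward the light
    cos_x = np.clip(w @ n_x, 0.0, None)
    cos_y = np.clip(-(w @ light_normal), 0.0, None)
    return area * np.mean(L_e * cos_x * cos_y / dist2)

for n in (16, 256, 4096):
    print(n, round(estimate_direct(n), 4))    # estimates settle as samples increase
```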

  16. Numerical methods design, analysis, and computer implementation of algorithms

    CERN Document Server

    Greenbaum, Anne

    2012-01-01

    Numerical Methods provides a clear and concise exploration of standard numerical analysis topics, as well as nontraditional ones, including mathematical modeling, Monte Carlo methods, Markov chains, and fractals. Filled with appealing examples that will motivate students, the textbook considers modern application areas, such as information retrieval and animation, and classical topics from physics and engineering. Exercises use MATLAB and promote understanding of computational results. The book gives instructors the flexibility to emphasize different aspects--design, analysis, or computer implementation--of numerical algorithms, depending on the background and interests of students. Designed for upper-division undergraduates in mathematics or computer science classes, the textbook assumes that students have prior knowledge of linear algebra and calculus, although these topics are reviewed in the text. Short discussions of the history of numerical methods are interspersed throughout the chapters. The book a...

  17. Integrating computational methods to retrofit enzymes to synthetic pathways.

    Science.gov (United States)

    Brunk, Elizabeth; Neri, Marilisa; Tavernelli, Ivano; Hatzimanikatis, Vassily; Rothlisberger, Ursula

    2012-02-01

    Microbial production of desired compounds provides an efficient framework for the development of renewable energy resources. To be competitive to traditional chemistry, one requirement is to utilize the full capacity of the microorganism to produce target compounds with high yields and turnover rates. We use integrated computational methods to generate and quantify the performance of novel biosynthetic routes that contain highly optimized catalysts. Engineering a novel reaction pathway entails addressing feasibility on multiple levels, which involves handling the complexity of large-scale biochemical networks while respecting the critical chemical phenomena at the atomistic scale. To pursue this multi-layer challenge, our strategy merges knowledge-based metabolic engineering methods with computational chemistry methods. By bridging multiple disciplines, we provide an integral computational framework that could accelerate the discovery and implementation of novel biosynthetic production routes. Using this approach, we have identified and optimized a novel biosynthetic route for the production of 3HP from pyruvate. Copyright © 2011 Wiley Periodicals, Inc.

  18. The Experiment Method for Manufacturing Grid Development on Single Computer

    Institute of Scientific and Technical Information of China (English)

    XIAO Youan; ZHOU Zude

    2006-01-01

    In this paper, an experiment method for Manufacturing Grid application system development in a single personal computer environment is proposed. The characteristic of the proposed method is that it constructs a full prototype Manufacturing Grid application system hosted on a single personal computer using virtual machine technology. Firstly, it builds all the Manufacturing Grid physical resource nodes on an abstraction layer of a single personal computer with virtual machine technology. Secondly, all the virtual Manufacturing Grid resource nodes are connected with a virtual network and the application software is deployed on each Manufacturing Grid node. Then, we obtain a prototype Manufacturing Grid application system running on a single personal computer, and experiments can be carried out on this foundation. Compared with known experiment methods for Manufacturing Grid application system development, the proposed method retains their advantages, such as low cost, simple operation, and the ability to obtain reliable experimental results easily. The Manufacturing Grid application system constructed with the proposed method has high scalability, stability and reliability. It can be migrated to the real application environment rapidly.

  19. Algorithm for planning a double-jaw orthognathic surgery using a computer-aided surgical simulation (CASS) protocol. Part 1: planning sequence

    Science.gov (United States)

    Xia, J. J.; Gateno, J.; Teichgraeber, J. F.; Yuan, P.; Chen, K.-C.; Li, J.; Zhang, X.; Tang, Z.; Alfi, D. M.

    2015-01-01

    The success of craniomaxillofacial (CMF) surgery depends not only on the surgical techniques, but also on an accurate surgical plan. The adoption of computer-aided surgical simulation (CASS) has created a paradigm shift in surgical planning. However, planning an orthognathic operation using CASS differs fundamentally from planning using traditional methods. With this in mind, the Surgical Planning Laboratory of Houston Methodist Research Institute has developed a CASS protocol designed specifically for orthognathic surgery. The purpose of this article is to present an algorithm using virtual tools for planning a double-jaw orthognathic operation. This paper will serve as an operation manual for surgeons wanting to incorporate CASS into their clinical practice. PMID:26573562

  20. Computational simulation in architectural and environmental acoustics methods and applications of wave-based computation

    CERN Document Server

    Sakamoto, Shinichi; Otsuru, Toru

    2014-01-01

    This book reviews a variety of methods for wave-based acoustic simulation and recent applications to architectural and environmental acoustic problems. Following an introduction providing an overview of computational simulation of sound environment, the book is in two parts: four chapters on methods and four chapters on applications. The first part explains the fundamentals and advanced techniques for three popular methods, namely, the finite-difference time-domain method, the finite element method, and the boundary element method, as well as alternative time-domain methods. The second part demonstrates various applications to room acoustics simulation, noise propagation simulation, acoustic property simulation for building components, and auralization. This book is a valuable reference that covers the state of the art in computational simulation for architectural and environmental acoustics.  

  1. Hamiltonian lattice field theory: Computer calculations using variational methods

    International Nuclear Information System (INIS)

    Zako, R.L.

    1991-01-01

    I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems
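
    A simple quantum-mechanical analogue of the Rayleigh–Ritz step (not the author's lattice code): the Hamiltonian of an anharmonic oscillator is diagonalized in a truncated harmonic-oscillator (Fock) basis, and the truncated eigenvalues bound the exact energies from above, improving as the basis grows.

```python
# Rayleigh-Ritz in a truncated Fock basis for H = p^2/2 + x^2/2 + lam*x^4.
# Truncated eigenvalues are variational upper bounds that decrease toward the
# exact energies as the basis is enlarged.
import numpy as np

def ground_energy(lam, n_basis):
    n_big = n_basis + 4                       # pad so <m|x^4|n> is exact in the kept block
    k = np.arange(n_big - 1)
    X = np.zeros((n_big, n_big))
    X[k, k + 1] = np.sqrt((k + 1) / 2.0)      # x = (a + a^dagger)/sqrt(2) in the Fock basis
    X += X.T
    X4 = np.linalg.matrix_power(X, 4)[:n_basis, :n_basis]
    H = np.diag(np.arange(n_basis) + 0.5) + lam * X4
    return np.linalg.eigvalsh(H)[0]

for n_basis in (5, 10, 20, 40):
    print(n_basis, ground_energy(lam=0.1, n_basis=n_basis))
# The printed energies decrease monotonically toward the exact ground-state energy.
```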

  2. Hamiltonian lattice field theory: Computer calculations using variational methods

    International Nuclear Information System (INIS)

    Zako, R.L.

    1991-01-01

    A variational method is developed for systematic numerical computation of physical quantities-bound state energies and scattering amplitudes-in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. An algorithm is presented for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. It is shown how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. It is shown how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. The author discusses the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, the author does not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. The method is applied to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. The author describes a computer implementation of the method and present numerical results for simple quantum mechanical systems

  3. Application of statistical method for FBR plant transient computation

    International Nuclear Information System (INIS)

    Kikuchi, Norihiro; Mochizuki, Hiroyasu

    2014-01-01

    Highlights: • A statistical method with a large trial number up to 10,000 is applied to the plant system analysis. • A turbine trip test conducted at the “Monju” reactor is selected as a plant transient. • A reduction method of trial numbers is discussed. • The result with reduced trial number can express the base regions of the computed distribution. -- Abstract: It is obvious that design tolerances, errors included in operation, and statistical errors in empirical correlations affect the transient behavior. The purpose of the present study is to apply the above-mentioned statistical errors to a plant system computation in order to evaluate the statistical distribution contained in the transient evolution. The selected computation case is the turbine trip test conducted at 40% electric power of the prototype fast reactor “Monju”. All of the heat transport systems of “Monju” are modeled with the NETFLOW++ system code, which has been validated using the plant transient tests of the experimental fast reactor Joyo and of “Monju”. The effects of parameters on upper plenum temperature are confirmed by sensitivity analyses, and dominant parameters are chosen. The statistical errors are applied to each computation deck by using a pseudorandom number and the Monte-Carlo method. The dSFMT (double-precision SIMD-oriented Fast Mersenne Twister), a further developed version of the Mersenne Twister (MT), is adopted as the pseudorandom number generator. In the present study, uniform random numbers are generated by dSFMT, and these random numbers are transformed to the normal distribution by the Box–Muller method. Ten thousand different computations are performed at once. In every computation case, the steady calculation is performed for 12,000 s, and the transient calculation is performed for 4000 s. For the purpose of the present statistical computation, it is important that the base regions of the distribution functions should be calculated precisely. A large number of
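
    The sampling step described above can be sketched as follows (illustrative only, not the NETFLOW++ driver): uniform deviates are converted to standard normal deviates with the Box–Muller transform and used to perturb a nominal input parameter for each Monte Carlo trial.

```python
# Box-Muller transform: turn uniform deviates into standard normal deviates,
# then perturb a nominal input parameter for each Monte Carlo trial.
import numpy as np

rng = np.random.default_rng(2024)

def box_muller(n):
    u1, u2 = rng.random(n), rng.random(n)
    radius = np.sqrt(-2.0 * np.log(u1))
    return radius * np.cos(2.0 * np.pi * u2)   # the sine branch gives a second sample

nominal_flow, rel_sigma = 100.0, 0.02          # hypothetical parameter and 2% uncertainty
n_trials = 10_000
perturbed = nominal_flow * (1.0 + rel_sigma * box_muller(n_trials))
print(perturbed.mean(), perturbed.std())       # roughly 100 and 2
```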

  4. Multi-centred mixed-methods PEPFAR HIV care & support public health evaluation: study protocol

    Directory of Open Access Journals (Sweden)

    Fayers Peter

    2010-09-01

    Full Text Available Abstract Background A public health response is essential to meet the multidimensional needs of patients and families affected by HIV disease in sub-Saharan Africa. In order to appraise current provision of HIV care and support in East Africa, and to provide evidence-based direction to future care programming, a Public Health Evaluation was commissioned by the PEPFAR programme of the US Government. Methods/Design This paper describes the two-phase international mixed-methods study protocol utilising longitudinal outcome measurement, surveys, patient and family qualitative interviews and focus groups, staff qualitative interviews, health economics and document analysis. Aim 1: To describe the nature and scope of HIV care and support in two African countries, including the types of facilities available, clients seen, and availability of specific components of care [Study Phase 1]. Aim 2: To determine patient health outcomes over time and principal cost drivers [Study Phase 2]. The study objectives are as follows. (1) To undertake a cross-sectional survey of service configuration and activity by sampling 10% of the facilities being funded by PEPFAR to provide HIV care and support in Kenya and Uganda (Phase 1) in order to describe care currently provided, including pharmacy drug reviews to determine availability and supply of essential drugs in HIV management. (2) To conduct patient focus group discussions at each of these (Phase 1) to determine care received. (3) To undertake a longitudinal prospective study of 1200 patients who are newly diagnosed with HIV or patients with HIV who present with a new problem attending PEPFAR care and support services. Data collection includes self-reported quality of life, core palliative outcomes and components of care received (Phase 2). (4) To conduct qualitative interviews with staff, patients and carers in order to explore and understand service issues and care provision in more depth (Phase 2). (5) To undertake document

  5. Electron beam treatment planning: A review of dose computation methods

    International Nuclear Information System (INIS)

    Mohan, R.; Riley, R.; Laughlin, J.S.

    1983-01-01

    Various methods of dose computations are reviewed. The equivalent path length methods used to account for body curvature and internal structure are not adequate because they ignore the lateral diffusion of electrons. The Monte Carlo method for the broad field three-dimensional situation in treatment planning is impractical because of the enormous computer time required. The pencil beam technique may represent a suitable compromise. The behavior of a pencil beam may be described by the multiple scattering theory or, alternatively, generated using the Monte Carlo method. Although nearly two orders of magnitude slower than the equivalent path length technique, the pencil beam method improves accuracy sufficiently to justify its use. It applies very well when accounting for the effect of surface irregularities; the formulation for handling inhomogeneous internal structure is yet to be developed

  6. Denver screening protocol for blunt cerebrovascular injury reduces the use of multi-detector computed tomography angiography.

    Science.gov (United States)

    Beliaev, Andrei M; Barber, P Alan; Marshall, Roger J; Civil, Ian

    2014-06-01

    Blunt cerebrovascular injury (BCVI) occurs in 0.2-2.7% of blunt trauma patients and has up to 30% mortality. Conventional screening does not recognize up to 20% of BCVI patients. To improve diagnosis of BCVI, both an expanded battery of screening criteria and a multi-detector computed tomography angiography (CTA) have been suggested. The aim of this study is to investigate whether the use of CTA restricted to the Denver protocol screen-positive patients would reduce the unnecessary use of CTA as a pre-emptive screening tool. This is a registry-based study of blunt trauma patients admitted to Auckland City Hospital from 1998 to 2012. The diagnosis of BCVI was confirmed or excluded with CTA, magnetic resonance angiography and, if these imaging were non-conclusive, four-vessel digital subtraction angiography. Thirty (61%) BCVI and 19 (39%) non-BCVI patients met eligibility criteria. The Denver protocol applied to our cohort of patients had a sensitivity of 97% (95% confidence interval (CI): 83-100%) and a specificity of 42% (95% CI: 20-67%). With a prevalence of BCVI in blunt trauma patients of 0.2% and 2.7%, post-test odds of a screen-positive test were 0.03 (95% CI: 0.002-0.005) and 0.046 (95% CI: 0.314-0.068), respectively. Application of the CTA to the Denver protocol screen-positive trauma patients can decrease the use of CTA as a pre-emptive screening tool by 95-97% and reduces its hazards. © 2013 Royal Australasian College of Surgeons.
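
    The screening arithmetic behind figures of this kind follows the standard likelihood-ratio form, sketched below; this is an illustration of the calculation, not a re-analysis of the study data.

```python
# Post-test odds after a positive screen = pre-test odds x positive likelihood
# ratio, with LR+ = sensitivity / (1 - specificity).
def post_test_odds(prevalence, sensitivity, specificity):
    pre_test_odds = prevalence / (1.0 - prevalence)
    lr_positive = sensitivity / (1.0 - specificity)
    return pre_test_odds * lr_positive

for prevalence in (0.002, 0.027):
    odds = post_test_odds(prevalence, sensitivity=0.97, specificity=0.42)
    prob = odds / (1.0 + odds)
    print(f"prevalence {prevalence:.3f}: post-test odds {odds:.3f}, probability {prob:.3f}")
```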

  7. A software defined RTU multi-protocol automatic adaptation data transmission method

    Science.gov (United States)

    Jin, Huiying; Xu, Xingwu; Wang, Zhanfeng; Ma, Weijun; Li, Sheng; Su, Yong; Pan, Yunpeng

    2018-02-01

    The remote terminal unit (RTU) is the core device of monitoring systems in hydrology and water resources. Different devices often use different communication protocols in the application layer, which makes information analysis and communication networking difficult. Therefore, we introduced the idea of software-defined hardware, abstracted the common features of the mainstream RTU application-layer communication protocols, and proposed a unified common protocol model. The various application-layer communication protocol algorithms are then modularized according to this model. The executable codes of these algorithms are labeled by virtual functions and stored in the flash chips of the embedded CPU to form the protocol stack. According to the configuration commands used to initialize the RTU communication system, the various RTU application-layer communication protocols can be dynamically assembled and loaded, enabling efficient transport of sensor data from the RTU to the central station while the data acquisition protocol of the sensors and the various external communication terminals remain unchanged.
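
    The dynamic-loading idea can be sketched with a small registry of protocol handlers selected by a configuration command; the interfaces and protocol names below are hypothetical, not those of the authors' firmware.

```python
# Hypothetical sketch: application-layer protocol handlers behind a common
# interface, registered by name and selected dynamically from a configuration
# command, so the same RTU can serve several protocols.
import struct
from typing import Callable, Dict

PROTOCOLS: Dict[str, Callable[[dict], bytes]] = {}

def register(name: str):
    """Decorator registering a protocol encoder under a configuration name."""
    def wrap(encoder: Callable[[dict], bytes]):
        PROTOCOLS[name] = encoder
        return encoder
    return wrap

@register("hydro-ascii")
def encode_ascii(sample: dict) -> bytes:
    # Hypothetical ASCII frame: station id and water level in metres.
    return f"STA={sample['station']};LEVEL={sample['level_m']:.2f}\r\n".encode()

@register("hydro-binary")
def encode_binary(sample: dict) -> bytes:
    # Hypothetical little-endian binary frame: uint16 station id + float level.
    return struct.pack("<Hf", sample["station"], sample["level_m"])

def transmit(config_protocol: str, sample: dict) -> bytes:
    # The configuration command decides which handler is assembled and loaded.
    return PROTOCOLS[config_protocol](sample)

print(transmit("hydro-ascii", {"station": 42, "level_m": 3.175}))
print(transmit("hydro-binary", {"station": 42, "level_m": 3.175}))
```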

  8. DelPhi web server v2: incorporating atomic-style geometrical figures into the computational protocol.

    Science.gov (United States)

    Smith, Nicholas; Witham, Shawn; Sarkar, Subhra; Zhang, Jie; Li, Lin; Li, Chuan; Alexov, Emil

    2012-06-15

    A new edition of the DelPhi web server, DelPhi web server v2, is released to include atomic presentation of geometrical figures. These geometrical objects can be used to model nano-sized objects together with real biological macromolecules. The position and size of each object can be manipulated by the user in real time until the desired results are achieved. The server fixes structural defects, adds hydrogen atoms and calculates electrostatic energies and the corresponding electrostatic potential and ionic distributions. The web server follows a client-server architecture built on PHP and HTML and utilizes the DelPhi software. The computation is carried out on a supercomputer cluster and results are returned to the user via the HTTP protocol, including the ability to visualize the structure and the corresponding electrostatic potential via a Jmol implementation. The DelPhi web server is available from http://compbio.clemson.edu/delphi_webserver.

  9. SU-F-R-40: Robustness Test of Computed Tomography Textures of Lung Tissues to Varying Scanning Protocols Using a Realistic Phantom Environment

    International Nuclear Information System (INIS)

    Lee, S; Markel, D; Hegyi, G; El Naqa, I

    2016-01-01

    Purpose: The reliability of computed tomography (CT) textures is an important element of radiomics analysis. This study investigates the dependency of lung CT textures on different breathing phases and changes in CT image acquisition protocols in a realistic phantom setting. Methods: We investigated 11 CT texture features for radiation-induced lung disease from 3 categories (first-order, grey level co-occurrence matrix (GLCM), and Laws' filter). A biomechanical swine lung phantom was scanned at two breathing phases (inhale/exhale) and two scanning protocols set for PET/CT and diagnostic CT scanning. Lung volumes acquired from the CT images were divided into 2-dimensional sub-regions with a grid spacing of 31 mm. The distributions of the evaluated texture features from these sub-regions were compared between the two scanning protocols and two breathing phases. The significance of each factor's effect on feature values was tested at the 95% significance level using an analysis of covariance (ANCOVA) model with interaction terms included. Robustness of a feature to a scanning factor was defined as non-significant dependence on that factor. Results: Three GLCM textures (variance, sum entropy, difference entropy) were robust to breathing changes. Two GLCM (variance, sum entropy) and 3 Laws' filter textures (S5L5, E5L5, W5L5) were robust to scanner changes. Moreover, the two GLCM textures (variance, sum entropy) were consistent across all 4 scanning conditions. First-order features, especially Hounsfield unit intensity features, presented the most drastic variation, up to 39%. Conclusion: Amongst the studied features, GLCM and Laws' filter texture features were more robust than first-order features. However, the majority of the features were modified by either breathing phase or scanner changes, suggesting a need for calibration when retrospectively comparing scans obtained at different conditions. Further investigation is necessary to identify the sensitivity of individual image
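
    For readers who want to reproduce texture features of this kind, the following is a minimal NumPy sketch of a grey-level co-occurrence matrix and the sum-entropy feature; the single offset, 16-level quantisation and random test region are simplifying assumptions, not the settings used in this study.

        import numpy as np

        # Minimal GLCM and sum-entropy feature for a 2-D region of interest.
        # Assumes non-negative intensities; one offset, 16 grey levels (toy settings).

        def glcm(image, levels=16, dx=1, dy=0):
            """Symmetric, normalised GLCM for one pixel offset (dx, dy)."""
            q = np.floor(image * levels / (image.max() + 1e-9)).astype(int)
            q = np.clip(q, 0, levels - 1)
            P = np.zeros((levels, levels))
            rows, cols = q.shape
            for i in range(rows - dy):
                for j in range(cols - dx):
                    P[q[i, j], q[i + dy, j + dx]] += 1
            P = P + P.T                        # count each pair in both directions
            return P / P.sum()

        def sum_entropy(P):
            """Entropy of the distribution of grey-level sums i + j (Haralick feature)."""
            levels = P.shape[0]
            p_sum = np.zeros(2 * levels - 1)
            for i in range(levels):
                for j in range(levels):
                    p_sum[i + j] += P[i, j]
            p_sum = p_sum[p_sum > 0]
            return float(-np.sum(p_sum * np.log2(p_sum)))

        roi = np.random.rand(32, 32)           # stand-in for one lung CT sub-region
        print("sum entropy:", round(sum_entropy(glcm(roi)), 3))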

  10. SU-F-R-40: Robustness Test of Computed Tomography Textures of Lung Tissues to Varying Scanning Protocols Using a Realistic Phantom Environment

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S; Markel, D; Hegyi, G [Medical Physics Unit, McGill University, Montreal, Quebec (Canada); El Naqa, I [University of Michigan, Ann Arbor, MI (United States)

    2016-06-15

    Purpose: The reliability of computed tomography (CT) textures is an important element of radiomics analysis. This study investigates the dependency of lung CT textures on different breathing phases and changes in CT image acquisition protocols in a realistic phantom setting. Methods: We investigated 11 CT texture features for radiation-induced lung disease from 3 categories (first-order, grey level co-occurrence matrix (GLCM), and Laws' filter). A biomechanical swine lung phantom was scanned at two breathing phases (inhale/exhale) and two scanning protocols set for PET/CT and diagnostic CT scanning. Lung volumes acquired from the CT images were divided into 2-dimensional sub-regions with a grid spacing of 31 mm. The distributions of the evaluated texture features from these sub-regions were compared between the two scanning protocols and two breathing phases. The significance of each factor's effect on feature values was tested at the 95% significance level using an analysis of covariance (ANCOVA) model with interaction terms included. Robustness of a feature to a scanning factor was defined as non-significant dependence on that factor. Results: Three GLCM textures (variance, sum entropy, difference entropy) were robust to breathing changes. Two GLCM (variance, sum entropy) and 3 Laws' filter textures (S5L5, E5L5, W5L5) were robust to scanner changes. Moreover, the two GLCM textures (variance, sum entropy) were consistent across all 4 scanning conditions. First-order features, especially Hounsfield unit intensity features, presented the most drastic variation, up to 39%. Conclusion: Amongst the studied features, GLCM and Laws' filter texture features were more robust than first-order features. However, the majority of the features were modified by either breathing phase or scanner changes, suggesting a need for calibration when retrospectively comparing scans obtained at different conditions. Further investigation is necessary to identify the sensitivity of individual image

  11. Computational methods for three-dimensional microscopy reconstruction

    CERN Document Server

    Frank, Joachim

    2014-01-01

    Approaches to the recovery of three-dimensional information on a biological object, which are often formulated or implemented initially in an intuitive way, are concisely described here based on physical models of the object and the image-formation process. Both three-dimensional electron microscopy and X-ray tomography can be captured in the same mathematical framework, leading to closely-related computational approaches, but the methodologies differ in detail and hence pose different challenges. The editors of this volume, Gabor T. Herman and Joachim Frank, are experts in the respective methodologies and present research at the forefront of biological imaging and structural biology.   Computational Methods for Three-Dimensional Microscopy Reconstruction will serve as a useful resource for scholars interested in the development of computational methods for structural biology and cell biology, particularly in the area of 3D imaging and modeling.

  12. Flaws in current human training protocols for spontaneous Brain-Computer Interfaces: lessons learned from instructional design

    Directory of Open Access Journals (Sweden)

    Fabien eLotte

    2013-09-01

    Full Text Available While recent research on Brain-Computer Interfaces (BCI) has highlighted their potential for many applications, they remain barely used outside laboratories. The main reason is their lack of robustness. Indeed, with current BCI, mental state recognition is usually slow and often incorrect. Spontaneous BCI (i.e., mental imagery-based BCI) often rely on mutual learning efforts by the user and the machine, with BCI users learning to produce stable EEG patterns (spontaneous BCI control being widely acknowledged as a skill) while the computer learns to automatically recognize these EEG patterns using signal processing. Most research so far has focused on signal processing, largely neglecting the human in the loop. However, how well the user masters the BCI skill is also a key element explaining BCI robustness. Indeed, if the user is not able to produce stable and distinct EEG patterns, then no signal processing algorithm would be able to recognize them. Unfortunately, despite the importance of BCI training protocols, they have been scarcely studied so far, and used mostly unchanged for years. In this paper, we advocate that current human training approaches for spontaneous BCI are most likely inappropriate. In particular, we study the instructional design literature in order to identify the key requirements and guidelines for a successful training procedure that promotes good and efficient skill learning. This literature study highlights that current spontaneous BCI user training procedures satisfy very few of these requirements and hence are likely to be suboptimal. We therefore identify the flaws in BCI training protocols according to instructional design principles, at several levels: in the instructions provided to the user, in the tasks he/she has to perform, and in the feedback provided. For each level, we propose new research directions that are theoretically expected to address some of these flaws and to help users learn the BCI skill more efficiently.

  13. Computations of finite temperature QCD with the pseudofermion method

    International Nuclear Information System (INIS)

    Fucito, F.; Solomon, S.

    1985-01-01

    The authors discuss the phase diagram of finite temperature QCD as obtained by including the effects of dynamical quarks via the pseudofermion method. They compare their results with those obtained by other groups and comment on the current state of the art for this kind of computation.

  14. Multiscale methods in computational fluid and solid mechanics

    NARCIS (Netherlands)

    Borst, de R.; Hulshoff, S.J.; Lenz, S.; Munts, E.A.; Brummelen, van E.H.; Wall, W.; Wesseling, P.; Onate, E.; Periaux, J.

    2006-01-01

    First, an attempt is made towards gaining a more systematic understanding of recent progress in multiscale modelling in computational solid and fluid mechanics. Subsequently, the discussion is focused on variational multiscale methods for the compressible and incompressible Navier-Stokes equations.

  15. Health risk behaviours amongst school adolescents: protocol for a mixed methods study

    Directory of Open Access Journals (Sweden)

    Youness El Achhab

    2016-11-01

    Full Text Available Abstract Background Determining risky behaviours of adolescents provides valuable information for designing appropriate intervention programmes for advancing adolescent health. However, these behaviours are not fully addressed by researchers in a comprehensive approach. We report the protocol of a mixed methods study designed to investigate the health risk behaviours of Moroccan adolescents with the goal of identifying suitable strategies to address their health concerns. Methods We used a sequential two-phase explanatory mixed methods study design. The approach begins with the collection of quantitative data, followed by the collection of qualitative data to explain and enrich the quantitative findings. In the first phase, the global school-based student health survey (GSHS) was administered to 800 students who were between 14 and 19 years of age. The second phase engaged adolescents, parents and teachers in focus groups and assessed education documents to explore the level of coverage of health education in the middle school programme. To obtain opinions about strategies to reduce Moroccan adolescents' health risk behaviours, a nominal group technique will be used. Discussion The findings of this mixed methods sequential explanatory study provide insights into the risk behaviours that need to be considered if intervention programmes and preventive strategies are to be designed to promote adolescents' health in Moroccan schools.

  16. Advanced scientific computational methods and their applications of nuclear technologies. (1) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (1)

    International Nuclear Information System (INIS)

    Oka, Yoshiaki; Okuda, Hiroshi

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies has therefore been prepared in serial form. This first issue gives an overview and an introduction to continuum simulation methods. The finite element method is also reviewed as one of their applications. (T. Tanaka)

  17. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs. The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  18. Recent Development in Rigorous Computational Methods in Dynamical Systems

    OpenAIRE

    Arai, Zin; Kokubu, Hiroshi; Pilarczyk, Paweł

    2009-01-01

    We highlight selected results of recent development in the area of rigorous computations which use interval arithmetic to analyse dynamical systems. We describe general ideas and selected details of different approaches and provide specific sample applications to illustrate the effectiveness of these methods. The emphasis is put on a topological approach, which combined with rigorous calculations provides a broad range of new methods that yield mathematically rel...
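
    To give a flavour of the interval-arithmetic rigour the authors build on (this is not their software), a tiny hand-rolled interval class can enclose the image of an interval under a map such as the logistic map; real rigorous packages additionally control floating-point rounding, which is omitted here.

        # Toy interval arithmetic: enclose iterates of the logistic map f(x) = r*x*(1-x)
        # starting from an interval of initial conditions. Rounding control omitted.

        class Interval:
            def __init__(self, lo, hi):
                self.lo, self.hi = min(lo, hi), max(lo, hi)

            def __mul__(self, other):
                other = other if isinstance(other, Interval) else Interval(other, other)
                prods = [self.lo * other.lo, self.lo * other.hi,
                         self.hi * other.lo, self.hi * other.hi]
                return Interval(min(prods), max(prods))

            __rmul__ = __mul__

            def __rsub__(self, scalar):          # supports the expression 1 - x
                return Interval(scalar - self.hi, scalar - self.lo)

            def __repr__(self):
                return f"[{self.lo:.6f}, {self.hi:.6f}]"

        def logistic(x, r=3.5):
            return r * x * (1 - x)

        x = Interval(0.40, 0.41)
        for _ in range(3):
            x = logistic(x)                      # guaranteed (if conservative) enclosure
            print(x)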

  19. Method and system for environmentally adaptive fault tolerant computing

    Science.gov (United States)

    Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.
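
    A purely hypothetical rendering of the claimed steps, namely measure the environment, apply the processor's sensitivity model and decide whether to change the fault-tolerance mode, might look like the following sketch; thresholds, sensitivity values and mode names are invented.

        # Hypothetical sketch of environmentally adaptive fault tolerance:
        # thresholds, the sensitivity model and mode names are invented for illustration.

        def choose_fault_tolerance_mode(radiation_level, sensitivity):
            """Return a redundancy mode based on the measured environment and the
            on-board processor's sensitivity to that environment."""
            expected_upset_rate = radiation_level * sensitivity   # upsets per hour
            if expected_upset_rate > 1e-2:
                return "triple-modular-redundancy"
            if expected_upset_rate > 1e-4:
                return "duplex-with-checkpointing"
            return "simplex"

        current_mode = "simplex"
        measured = 5e-3        # e.g. particle flux reported by an on-board sensor
        sensitivity = 2.0      # upsets per hour per unit flux (device-specific, assumed)
        new_mode = choose_fault_tolerance_mode(measured, sensitivity)
        if new_mode != current_mode:
            print(f"reconfiguring fault tolerance: {current_mode} -> {new_mode}")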

  20. Method-centered digital communities on protocols.io for fast-paced scientific innovation [version 2; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Lori Kindler

    2017-06-01

    Full Text Available The Internet has enabled online social interaction for scientists beyond physical meetings and conferences. Yet despite these innovations in communication, dissemination of methods is often relegated to just academic publishing. Further, these methods remain static, with subsequent advances published elsewhere and unlinked. For communities undergoing fast-paced innovation, researchers need new capabilities to share, obtain feedback, and publish methods at the forefront of scientific development. For example, a renaissance in virology is now underway given the new metagenomic methods to sequence viral DNA directly from an environment. Metagenomics makes it possible to "see" natural viral communities that could not be previously studied through culturing methods. Yet, the knowledge of specialized techniques for the production and analysis of viral metagenomes remains in a subset of labs.  This problem is common to any community using and developing emerging technologies and techniques. We developed new capabilities to create virtual communities in protocols.io, an open access platform, for disseminating protocols and knowledge at the forefront of scientific development. To demonstrate these capabilities, we present a virology community forum called VERVENet. These new features allow virology researchers to share protocols and their annotations and optimizations, connect with the broader virtual community to share knowledge, job postings, conference announcements through a common online forum, and discover the current literature through personalized recommendations to promote discussion of cutting edge research. Virtual communities in protocols.io enhance a researcher's ability to: discuss and share protocols, connect with fellow community members, and learn about new and innovative research in the field.  The web-based software for developing virtual communities is free to use on protocols.io. Data are available through public APIs at protocols.io.

  1. Cryptographic Protocols:

    DEFF Research Database (Denmark)

    Geisler, Martin Joakim Bittel

    cryptography was thus concerned with message confidentiality and integrity. Modern cryptography covers a much wider range of subjects, including the area of secure multiparty computation, which will be the main topic of this dissertation. Our first contribution is a new protocol for secure comparison, presented in Chapter 2. Comparisons play a key role in many systems such as online auctions and benchmarks — it is not unreasonable to say that when parties come together for a multiparty computation, it is because they want to make decisions that depend on private information. Decisions depend on comparisons. We have implemented the comparison protocol in Java, and benchmarks show that it is highly competitive and practical. The biggest contribution of this dissertation is a general framework for secure multiparty computation: instead of making new ad hoc implementations for each protocol, we want a single and extensible framework ...
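
    To give a flavour of the secure multiparty computation setting the dissertation works in, the toy below shows generic additive secret sharing over a prime field (it is not the comparison protocol or the framework described in the thesis).

        import random

        # Toy additive secret sharing over Z_p: each party holds one share of a private
        # value, and the sum of two private values can be reconstructed without any
        # single party learning another party's input. Illustrative only.

        P = 2**61 - 1                    # a public prime modulus

        def share(secret, n_parties=3):
            shares = [random.randrange(P) for _ in range(n_parties - 1)]
            shares.append((secret - sum(shares)) % P)
            return shares

        def reconstruct(shares):
            return sum(shares) % P

        alice_shares = share(42)
        bob_shares = share(17)
        # each party locally adds its own shares of the two inputs
        sum_shares = [(a + b) % P for a, b in zip(alice_shares, bob_shares)]
        print(reconstruct(sum_shares))   # 59, revealed only when all shares are combined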

  2. Numerical evaluation of methods for computing tomographic projections

    International Nuclear Information System (INIS)

    Zhuang, W.; Gopal, S.S.; Hebert, T.J.

    1994-01-01

    Methods for computing forward/back projections of 2-D images can be viewed as numerical integration techniques. The accuracy of any ray-driven projection method can be improved by increasing the number of ray-paths that are traced per projection bin. The accuracy of pixel-driven projection methods can be increased by dividing each pixel into a number of smaller sub-pixels and projecting each sub-pixel. The authors compared four competing methods of computing forward/back projections: bilinear interpolation, ray-tracing, pixel-driven projection based upon sub-pixels, and pixel-driven projection based upon circular, rather than square, pixels. This latter method is equivalent to a fast, bi-nonlinear interpolation. These methods and the choice of the number of ray-paths per projection bin or the number of sub-pixels per pixel present a trade-off between computational speed and accuracy. To solve the problem of assessing backprojection accuracy, the analytical inverse Fourier transform of the ramp filtered forward projection of the Shepp and Logan head phantom is derived.
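
    A minimal ray-driven forward projector in the spirit of the methods compared here is sketched below: one ray per detector bin, with samples along the ray interpolated bilinearly; increasing the number of samples or rays per bin trades speed for accuracy, as the abstract notes. The phantom and sampling choices are arbitrary.

        import numpy as np

        def forward_project(image, angle_deg, n_bins=None, n_samples=256):
            """Ray-driven parallel-beam projection of a 2-D image at one angle,
            with bilinear interpolation along each ray (illustrative sketch)."""
            n = image.shape[0]
            n_bins = n_bins or n
            theta = np.deg2rad(angle_deg)
            d = np.cos(theta), np.sin(theta)          # ray direction
            perp = -np.sin(theta), np.cos(theta)      # detector axis
            centre = (n - 1) / 2.0
            ts = np.linspace(-centre, centre, n_samples)
            proj = np.zeros(n_bins)
            for b in range(n_bins):
                offset = b - (n_bins - 1) / 2.0
                xs = centre + offset * perp[0] + ts * d[0]   # sample points on this ray
                ys = centre + offset * perp[1] + ts * d[1]
                inside = (xs >= 0) & (xs <= n - 1) & (ys >= 0) & (ys <= n - 1)
                x0 = np.floor(xs[inside]).astype(int)
                y0 = np.floor(ys[inside]).astype(int)
                x1, y1 = np.minimum(x0 + 1, n - 1), np.minimum(y0 + 1, n - 1)
                fx, fy = xs[inside] - x0, ys[inside] - y0
                vals = (image[y0, x0] * (1 - fx) * (1 - fy) + image[y0, x1] * fx * (1 - fy)
                        + image[y1, x0] * (1 - fx) * fy + image[y1, x1] * fx * fy)
                proj[b] = vals.sum() * (ts[1] - ts[0])       # approximate line integral
            return proj

        phantom = np.zeros((64, 64)); phantom[20:44, 20:44] = 1.0
        print(forward_project(phantom, 30.0)[:8])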

  3. High-integrity software, computation and the scientific method

    International Nuclear Information System (INIS)

    Hatton, L.

    2012-01-01

    Computation rightly occupies a central role in modern science. Datasets are enormous and the processing implications of some algorithms are equally staggering. With the continuing difficulties in quantifying the results of complex computations, it is of increasing importance to understand its role in the essentially Popperian scientific method. In this paper, some of the problems with computation, for example the long-term unquantifiable presence of undiscovered defects, problems with programming languages and process issues, will be explored with numerous examples. One of the aims of the paper is to understand the implications of trying to produce high-integrity software and the limitations which still exist. Unfortunately, computer science itself suffers from an inability to be suitably critical of its practices and has operated in a largely measurement-free vacuum since its earliest days. Within computer science itself this has not been so damaging, in that it simply leads to unconstrained creativity and a rapid turnover of new technologies. In the applied sciences, however, which have to depend on computational results, such unquantifiability significantly undermines trust. It is time this particular demon was put to rest. (author)

  4. Low-dose X-ray computed tomography image reconstruction with a combined low-mAs and sparse-view protocol.

    Science.gov (United States)

    Gao, Yang; Bian, Zhaoying; Huang, Jing; Zhang, Yunwan; Niu, Shanzhou; Feng, Qianjin; Chen, Wufan; Liang, Zhengrong; Ma, Jianhua

    2014-06-16

    To realize low-dose imaging in X-ray computed tomography (CT) examinations, lowering milliampere-seconds (low-mAs) or reducing the required number of projection views (sparse-view) per rotation around the body has been widely studied as an easy and effective approach. In this study, we focus on low-dose CT image reconstruction from sinograms acquired with a combined low-mAs and sparse-view protocol and propose a two-step image reconstruction strategy. Specifically, to suppress significant statistical noise in the noisy and insufficient sinograms, an adaptive sinogram restoration (ASR) method is first proposed with consideration of the statistical property of sinogram data, and then, to further acquire a high-quality image, a total variation based projection onto convex sets (TV-POCS) method is adopted with a slight modification. For simplicity, the present reconstruction strategy is termed "ASR-TV-POCS." To evaluate the present ASR-TV-POCS method, both qualitative and quantitative studies were performed on a physical phantom. Experimental results have demonstrated that the present ASR-TV-POCS method can achieve promising gains over other existing methods in terms of noise reduction, contrast-to-noise ratio, and edge detail preservation.
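
    A toy version of the TV-POCS idea on a tiny linear system is sketched below, alternating ART-style projections onto the data constraints with a few steps of total-variation descent; the random system matrix, step sizes and iteration counts are placeholders and this is not the paper's ASR-TV-POCS implementation.

        import numpy as np

        def tv_gradient(x):
            """Gradient of a smoothed total-variation term on a 2-D image (toy)."""
            eps = 1e-8
            dx = np.diff(x, axis=1, append=x[:, -1:])
            dy = np.diff(x, axis=0, append=x[-1:, :])
            mag = np.sqrt(dx**2 + dy**2 + eps)
            gx, gy = dx / mag, dy / mag
            div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
            return -div

        def tv_pocs(A, b, shape, n_iter=50, tv_steps=5, alpha=0.02):
            """Alternate POCS data-consistency steps with TV descent (toy sketch)."""
            x = np.zeros(int(np.prod(shape)))
            for _ in range(n_iter):
                for i in range(A.shape[0]):          # project onto each hyperplane a_i.x = b_i
                    a = A[i]
                    x += (b[i] - a @ x) / (a @ a) * a
                x = np.clip(x, 0, None)              # non-negativity constraint
                img = x.reshape(shape)
                for _ in range(tv_steps):            # total-variation regularisation steps
                    img -= alpha * tv_gradient(img)
                x = img.ravel()
            return x.reshape(shape)

        rng = np.random.default_rng(0)
        true_img = np.zeros((8, 8)); true_img[2:6, 2:6] = 1.0
        A = rng.normal(size=(40, 64))                # stand-in for a sparse-view projector
        b = A @ true_img.ravel() + 0.01 * rng.normal(size=40)
        print(np.round(tv_pocs(A, b, (8, 8)), 2))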

  5. Computational biology in the cloud: methods and new insights from computing at scale.

    Science.gov (United States)

    Kasson, Peter M

    2013-01-01

    The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and to provide easy reproducibility by making the datasets and computational methods easily available.

  6. Chapter 3: Commercial and Industrial Lighting Controls Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Carlson, Stephen [DNV GL, Madison, WI (United States)

    2017-10-04

    This Commercial and Industrial Lighting Controls Evaluation Protocol (the protocol) describes methods to account for energy savings resulting from programmatic installation of lighting control equipment in large populations of commercial, industrial, government, institutional, and other nonresidential facilities. This protocol does not address savings resulting from changes in codes and standards, or from education and training activities. When lighting controls are installed in conjunction with a lighting retrofit project, the lighting control savings must be calculated parametrically with the lighting retrofit project so savings are not double counted.

  7. Computational Methods for Modeling Aptamers and Designing Riboswitches

    Directory of Open Access Journals (Sweden)

    Sha Gong

    2017-11-01

    Full Text Available Riboswitches, which are located within certain noncoding RNA regions, perform functions as genetic "switches", regulating when and where genes are expressed in response to certain ligands. Understanding the numerous functions of riboswitches requires computational models to predict structures and structural changes of the aptamer domains. Although aptamers often form a complex structure, computational approaches, such as RNAComposer and Rosetta, have already been applied to model the tertiary (three-dimensional, 3D) structure of several aptamers. As structural changes in aptamers must be achieved within a certain time window for effective regulation, kinetics is another key point for understanding aptamer function in riboswitch-mediated gene regulation. The coarse-grained self-organized polymer (SOP) model using Langevin dynamics simulation has been successfully developed to investigate the folding kinetics of aptamers, while their co-transcriptional folding kinetics can be modeled by the helix-based computational method and the BarMap approach. Based on the known aptamers, the web server Riboswitch Calculator and other theoretical methods provide a new tool to design synthetic riboswitches. This review presents an overview of these computational methods for modeling the structure and kinetics of riboswitch aptamers and for designing riboswitches.
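
    The folding-kinetics machinery mentioned here rests on Langevin dynamics of a coarse-grained chain; a stripped-down overdamped integrator for a bead-spring polymer is sketched below. The force field and parameters are arbitrary toys, far simpler than the SOP model.

        import numpy as np

        # Overdamped (Brownian) Langevin dynamics for a bead-spring chain:
        #   dx = (F/gamma) dt + sqrt(2 kT dt / gamma) * N(0, 1).
        # A toy stand-in for coarse-grained models such as SOP; parameters are arbitrary.

        def harmonic_bond_forces(x, k=100.0, r0=1.0):
            forces = np.zeros_like(x)
            bond = x[1:] - x[:-1]
            dist = np.linalg.norm(bond, axis=1, keepdims=True)
            f = -k * (dist - r0) * bond / dist     # force on bead i+1 from bond i
            forces[1:] += f
            forces[:-1] -= f
            return forces

        def brownian_step(x, rng, dt=1e-4, gamma=1.0, kT=1.0):
            noise = rng.normal(size=x.shape) * np.sqrt(2 * kT * dt / gamma)
            return x + harmonic_bond_forces(x) * dt / gamma + noise

        rng = np.random.default_rng(0)
        chain = np.zeros((20, 3))
        chain[:, 0] = np.arange(20.0)              # 20 beads on the x-axis, spacing 1
        for _ in range(1000):
            chain = brownian_step(chain, rng)
        print("end-to-end distance:", np.linalg.norm(chain[-1] - chain[0]))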

  8. Computational electrodynamics the finite-difference time-domain method

    CERN Document Server

    Taflove, Allen

    2005-01-01

    This extensively revised and expanded third edition of the Artech House bestseller, Computational Electrodynamics: The Finite-Difference Time-Domain Method, offers engineers the most up-to-date and definitive resource on this critical method for solving Maxwell's equations. The method helps practitioners design antennas, wireless communications devices, high-speed digital and microwave circuits, and integrated optical devices with unsurpassed efficiency. There has been considerable advancement in FDTD computational technology over the past few years, and the third edition brings professionals the very latest details with entirely new chapters on important techniques, major updates on key topics, and new discussions on emerging areas such as nanophotonics. What's more, to supplement the third edition, the authors have created a Web site with solutions to problems, downloadable graphics and videos, and updates, making this new edition the ideal textbook on the subject as well.
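
    The core of the FDTD method is a leapfrog update of interleaved electric and magnetic fields on a Yee grid; a bare-bones 1-D vacuum example in normalised units is sketched below (hard sinusoidal source, reflecting boundaries, no absorbing boundary condition).

        import numpy as np

        # 1-D FDTD (Yee scheme) in vacuum with normalised units, Courant number 0.5.
        # Minimal sketch: hard sinusoidal source, reflecting boundaries, no ABC/PML.

        nx, nt = 400, 600
        courant = 0.5
        ez = np.zeros(nx)          # electric field at integer grid points
        hy = np.zeros(nx - 1)      # magnetic field at half-integer points

        for n in range(nt):
            hy += courant * np.diff(ez)              # H update (leapfrog half step)
            ez[1:-1] += courant * np.diff(hy)        # E update
            ez[50] = np.sin(2 * np.pi * 0.02 * n)    # hard source at cell 50

        print("peak |Ez| on the grid:", np.abs(ez).max())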

  9. A Computationally Efficient Method for Polyphonic Pitch Estimation

    Directory of Open Access Journals (Sweden)

    Ruohua Zhou

    2009-01-01

    Full Text Available This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimate is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Incorrect estimates are then removed according to spectral irregularity and knowledge of the harmonic structures of the musical notes played on commonly used instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate the high performance and computational efficiency of the approach.

  10. Evolutionary Computation Methods and their applications in Statistics

    Directory of Open Access Journals (Sweden)

    Francesco Battaglia

    2013-05-01

    Full Text Available A brief discussion of the genesis of evolutionary computation methods, their relationship to artificial intelligence, and the contribution of genetics and Darwin's theory of natural evolution is provided. Then, the main evolutionary computation methods are illustrated: evolution strategies, genetic algorithms, estimation of distribution algorithms, differential evolution, and a brief description of some evolutionary behavior methods such as ant colony and particle swarm optimization. We also discuss the role of the genetic algorithm in random generation from multivariate probability distributions, rather than as a function optimizer. Finally, some relevant applications of genetic algorithms to statistical problems are reviewed: selection of variables in regression, time series model building, outlier identification, cluster analysis, and design of experiments.
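
    One of the statistical applications listed, variable selection in regression, can be sketched with a very small genetic algorithm: a binary chromosome encodes which predictors enter the model and the fitness is a negative AIC-like score. Everything below (data, operators, parameters) is a simplified illustration.

        import numpy as np

        rng = np.random.default_rng(1)

        # Simulated data: 10 candidate predictors, only the first 3 actually matter.
        n, p = 200, 10
        X = rng.normal(size=(n, p))
        y = X[:, 0] + 0.5 * X[:, 1] - 0.7 * X[:, 2] + rng.normal(scale=0.5, size=n)

        def fitness(mask):
            """Negative AIC-like score of an OLS fit on the selected columns."""
            if mask.sum() == 0:
                return -np.inf
            Xs = X[:, mask.astype(bool)]
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = np.sum((y - Xs @ beta) ** 2)
            return -(n * np.log(rss / n) + 2 * mask.sum())

        pop = rng.integers(0, 2, size=(30, p))               # random initial population
        for generation in range(40):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)][-10:]          # truncation selection
            children = []
            while len(children) < len(pop):
                a, b = parents[rng.integers(10, size=2)]
                cut = rng.integers(1, p)
                child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
                flip = rng.random(p) < 0.05                  # bit-flip mutation
                children.append(np.where(flip, 1 - child, child))
            pop = np.array(children)

        best = pop[np.argmax([fitness(ind) for ind in pop])]
        print("selected predictors:", np.flatnonzero(best))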

  11. Variational-moment method for computing magnetohydrodynamic equilibria

    International Nuclear Information System (INIS)

    Lao, L.L.

    1983-08-01

    A fast yet accurate method to compute magnetohydrodynamic equilibria is provided by the variational-moment method, which is similar to the classical Rayleigh-Ritz-Galerkin approximation. The equilibrium solution sought is decomposed into a spectral representation. The partial differential equations describing the equilibrium are then recast into their equivalent variational form and systematically reduced to an optimum finite set of coupled ordinary differential equations. An appropriate spectral decomposition can make the series representing the solution converge rapidly and hence substantially reduces the amount of computational time involved. The moment method was developed first to compute fixed-boundary inverse equilibria in axisymmetric toroidal geometry, and was demonstrated to be both efficient and accurate. The method has since been generalized to calculate free-boundary axisymmetric equilibria, to include toroidal plasma rotation and pressure anisotropy, and to treat three-dimensional toroidal geometry. In all these formulations, the flux surfaces are assumed to be smooth and nested so that the solutions can be decomposed in Fourier series in inverse coordinates. These recent developments and the advantages and limitations of the moment method are reviewed. The use of alternate coordinates for decomposition is discussed.

  12. Computer-aided methods of determining thyristor thermal transients

    International Nuclear Information System (INIS)

    Lu, E.; Bronner, G.

    1988-08-01

    An accurate tracing of the thyristor thermal response is investigated. This paper offers several alternatives for thermal modeling and analysis by using an electrical circuit analog: topological method, convolution integral method, etc. These methods are adaptable to numerical solutions and well suited to the use of the digital computer. The thermal analysis of thyristors was performed for the 1000 MVA converter system at the Princeton Plasma Physics Laboratory. Transient thermal impedance curves for individual thyristors in a given cooling arrangement were known from measurements and from manufacturer's data. The analysis pertains to almost any loading case, and the results are obtained in a numerical or a graphical format. 6 refs., 9 figs
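
    The convolution-integral approach mentioned here amounts to superposing the transient thermal impedance response of successive power steps; a small numerical sketch follows, where the Zth curve is a made-up single-exponential Foster term rather than measured thyristor data.

        import numpy as np

        # Junction temperature rise by superposition of power steps:
        #   dT(t) = sum_k (P_k - P_{k-1}) * Zth(t - t_k)
        # Zth here is a single-exponential Foster term with invented parameters.

        def zth(t, r_th=0.05, tau=0.2):                 # K/W, s
            return np.where(t >= 0, r_th * (1 - np.exp(-np.maximum(t, 0) / tau)), 0.0)

        t = np.arange(0.0, 2.0, 0.01)                   # s
        power = np.where((t >= 0.2) & (t < 1.0), 800.0, 0.0)   # W, a 0.8 s load pulse

        steps = np.diff(power, prepend=0.0)             # power step at each sample
        dT = np.zeros_like(t)
        for k, dP in enumerate(steps):
            if dP != 0.0:
                dT += dP * zth(t - t[k])

        print("peak junction temperature rise: %.1f K" % dT.max())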

  13. Computational methods for coupling microstructural and micromechanical materials response simulations

    Energy Technology Data Exchange (ETDEWEB)

    HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.; FANG,HUEI ELIOT; RINTOUL,MARK DANIEL; VEDULA,VENKATA R.; GLASS,S. JILL; KNOROVSKY,GERALD A.; NEILSEN,MICHAEL K.; WELLMAN,GERALD W.; SULSKY,DEBORAH; SHEN,YU-LIN; SCHREYER,H. BUCK

    2000-04-01

    Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

  14. Fast calculation method for computer-generated cylindrical holograms.

    Science.gov (United States)

    Yamaguchi, Takeshi; Fujii, Tomohiko; Yoshikawa, Hiroshi

    2008-07-01

    Since a general flat hologram has a limited viewable area, we usually cannot see the other side of a reconstructed object. There are some holograms that can solve this problem. A cylindrical hologram is well known to be viewable over 360 deg. Most cylindrical holograms are optical holograms, but there are few reports of computer-generated cylindrical holograms. The lack of computer-generated cylindrical holograms is because the spatial resolution of output devices is not high enough; therefore, we have to make a large hologram or use a small object to fulfill the sampling theorem. In addition, in calculating the large fringe, the calculation amount increases in proportion to the hologram size. Therefore, we propose what we believe to be a new method for fast fringe calculation. Then, we print these fringes with our prototype fringe printer. As a result, we obtain a good reconstructed image from a computer-generated cylindrical hologram.

  15. Computational methods in metabolic engineering for strain design.

    Science.gov (United States)

    Long, Matthew R; Ong, Wai Kit; Reed, Jennifer L

    2015-08-01

    Metabolic engineering uses genetic approaches to control microbial metabolism to produce desired compounds. Computational tools can identify new biological routes to chemicals and the changes needed in host metabolism to improve chemical production. Recent computational efforts have focused on exploring what compounds can be made biologically using native, heterologous, and/or enzymes with broad specificity. Additionally, computational methods have been developed to suggest different types of genetic modifications (e.g. gene deletion/addition or up/down regulation), as well as suggest strategies meeting different criteria (e.g. high yield, high productivity, or substrate co-utilization). Strategies to improve the runtime performances have also been developed, which allow for more complex metabolic engineering strategies to be identified. Future incorporation of kinetic considerations will further improve strain design algorithms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Method of Computer-aided Instruction in Situation Control Systems

    Directory of Open Access Journals (Sweden)

    Anatoliy O. Kargin

    2013-01-01

    Full Text Available The article considers the problem of computer-aided instruction in a context-chain-motivated situation control system for complex technical system behavior. The conceptual and formal models of situation control with practical instruction are considered. Acquisition of new behavior knowledge is presented as structural changes in system memory in the form of a situational agent set. The model and method of computer-aided instruction represent a formalization based on the nondistinct theories of physiologists and cognitive psychologists. The formal instruction model describes situation and reaction formation and their dependence on different parameters affecting learning, such as the reinforcement value and the time between the stimulus, the action and the reinforcement. The change of the contextual link between situational elements during use is also formalized. Examples and results of computer instruction experiments with the "LEGO MINDSTORMS NXT" robot device, equipped with ultrasonic distance, touch and light sensors, are presented.

  17. Positron emission tomography-computed tomography protocol considerations for head and neck cancer imaging.

    Science.gov (United States)

    Escott, Edward J

    2008-08-01

    Positron emission tomographic-computed tomographic (PET-CT) imaging of patients with primary head and neck cancers has become an established approach for staging and restaging, as well as radiation therapy planning. The inherent co-registration of PET and CT images made possible by the integrated PET-CT scanner is particularly valuable in head and neck cancer imaging due to the complex and closely situated anatomy in this part of the body, the varied sources of physiologic and benign 2-deoxy-2-[F-18]fluoro-D-glucose (FDG) tracer uptake that occurs in the neck, and the varied and complex posttreatment appearance of the neck. Careful optimization of both the CT and the PET portion of the examination is essential to ensure the most accurate and clinically valuable interpretation of these examinations.

  18. Costing 'healthy' food baskets in Australia - a systematic review of food price and affordability monitoring tools, protocols and methods.

    Science.gov (United States)

    Lewis, Meron; Lee, Amanda

    2016-11-01

    To undertake a systematic review to determine similarities and differences in metrics and results between recently and/or currently used tools, protocols and methods for monitoring Australian healthy food prices and affordability. Electronic databases of peer-reviewed literature and online grey literature were systematically searched using the PRISMA approach for articles and reports relating to healthy food and diet price assessment tools, protocols, methods and results that utilised retail pricing. National, state, regional and local areas of Australia from 1995 to 2015. Assessment tools, protocols and methods to measure the price of 'healthy' foods and diets. The search identified fifty-nine discrete surveys of 'healthy' food pricing incorporating six major food pricing tools (those used in multiple areas and time periods) and five minor food pricing tools (those used in a single survey area or time period). Analysis demonstrated methodological differences regarding: included foods; reference households; use of availability and/or quality measures; household income sources; store sampling methods; data collection protocols; analysis methods; and results. 'Healthy' food price assessment methods used in Australia lack comparability across all metrics and most do not fully align with a 'healthy' diet as recommended by the current Australian Dietary Guidelines. None have been applied nationally. Assessment of the price, price differential and affordability of healthy (recommended) and current (unhealthy) diets would provide more robust and meaningful data to inform health and fiscal policy in Australia. The INFORMAS 'optimal' approach provides a potential framework for development of these methods.

  19. A new fault detection method for computer networks

    International Nuclear Information System (INIS)

    Lu, Lu; Xu, Zhengguo; Wang, Wenhai; Sun, Youxian

    2013-01-01

    Over the past few years, fault detection for computer networks has attracted extensive attention for its importance in network management. Most existing fault detection methods are based on active probing techniques, which can detect the occurrence of faults quickly and precisely, but these methods suffer from traffic overhead, especially in large-scale networks. To relieve the traffic overhead induced by active-probing-based methods, this paper proposes a new fault detection method whose key idea is to divide the detection process into multiple stages. During each stage, only a small region of the network is detected using a small set of probes. Meanwhile, it also ensures that the entire network can be covered after multiple detection stages. This method guarantees that the traffic used by probes during each detection stage is sufficiently small so that the network can operate without severe disturbance from the probes. Several simulation results verify the effectiveness of the proposed method.
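
    The staged idea, probing only a small region per stage while still covering the whole network over a cycle of stages, can be sketched as a simple round-robin partition of the nodes; the topology, probe function and fault below are invented for illustration.

        import itertools

        # Sketch: split the node set into small regions and probe one region per stage,
        # so per-stage probe traffic stays low while all nodes are covered over a cycle.
        # The probe() function and node names are placeholders.

        def make_stages(nodes, region_size):
            it = iter(nodes)
            return [chunk for chunk in
                    iter(lambda: list(itertools.islice(it, region_size)), [])]

        def probe(node):
            """Placeholder active probe; a real system would send network probes."""
            return node != "r7"          # pretend router r7 is faulty

        nodes = [f"r{i}" for i in range(1, 13)]
        stages = make_stages(nodes, region_size=3)       # 4 stages of 3 nodes each

        for stage_no, region in enumerate(stages, start=1):
            failed = [n for n in region if not probe(n)]
            print(f"stage {stage_no}: probed {region}, suspected faults: {failed}")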

  20. Early Interventions Following the Death of a Parent: Protocol of a Mixed Methods Systematic Review.

    Science.gov (United States)

    Pereira, Mariana; Johnsen, Iren; Hauken, May Aa; Kristensen, Pål; Dyregrov, Atle

    2017-06-29

    Previous meta-analyses examined the effectiveness of interventions for bereaved children showing small to moderate effect sizes. However, no mixed methods systematic review was conducted on bereavement interventions following the loss of a parent focusing on the time since death in regard to the prevention of grief complications. The overall purpose of the review is to provide a rigorous synthesis of early intervention after parental death in childhood. Specifically, the aims are twofold: (1) to determine the rationales, contents, timeframes, and outcomes of early bereavement care interventions for children and/or their parents and (2) to assess the quality of current early intervention studies. Quantitative, qualitative, and mixed methods intervention studies that start intervention with parentally bereaved children (and/or their parents) up to 6 months postloss will be included in the review. The search strategy was based on the Population, Interventions, Comparator, Outcomes, and Study Designs (PICOS) approach, and it was devised together with a university librarian. The literature searches will be carried out in the Medical Literature Analysis and Retrieval System Online (MEDLINE), PsycINFO, Excerpta Medica Database (EMBASE), and Cumulative Index to Nursing and Allied Health Literature (CINAHL). The Mixed Methods Appraisal Tool will be used to appraise the quality of eligible studies. All data will be narratively synthesized following the Guidance on the Conduct of Narrative Synthesis in Systematic Reviews. The systematic review is ongoing and the data search has started. The review is expected to be completed by the end of 2017. Findings will be submitted to leading journals for publication. In accordance with the current diagnostic criteria for prolonged grief as well as the users' perspectives literature, this systematic review outlines a possible sensitive period for early intervention following the death of a parent. The hereby presented protocol ensures

  1. Practical methods to improve the development of computational software

    International Nuclear Information System (INIS)

    Osborne, A. G.; Harding, D. W.; Deinert, M. R.

    2013-01-01

    The use of computation has become ubiquitous in science and engineering. As the complexity of computer codes has increased, so has the need for robust methods to minimize errors. Past work has shown that the number of functional errors is related to the number of commands that a code executes. Since the late 1960s, major participants in the field of computation have encouraged the development of best practices for programming to help reduce coder-induced error, and this has led to the emergence of 'software engineering' as a field of study. Best practices for coding and software production have now evolved and become common in the development of commercial software. These same techniques, however, are largely absent from the development of computational codes by research groups. Many of the best practice techniques from the professional software community would be easy for research groups in nuclear science and engineering to adopt. This paper outlines the history of software engineering, as well as issues in modern scientific computation, and recommends practices that should be adopted by individual scientific programmers and university research groups. (authors)

  2. Computing homography with RANSAC algorithm: a novel method of registration

    Science.gov (United States)

    Li, Xiaowei; Liu, Yue; Wang, Yongtian; Yan, Dayuan

    2005-02-01

    An AR (Augmented Reality) system can integrate computer-generated objects with image sequences of real-world scenes in either an off-line or a real-time way. Registration, or camera pose estimation, is one of the key techniques that determine its performance. Registration methods can be classified as model-based and move-matching. The former approach can produce relatively accurate registration results, but it requires a precise model of the scene, which is hard to obtain. The latter approach carries out registration by computing the ego-motion of the camera. Because it does not require prior knowledge of the scene, its registration results sometimes turn out to be less accurate. When the model is as simple as a plane, a mixed method can be introduced to take advantage of the virtues of the two methods mentioned above. Although unexpected objects often occlude this plane in an AR system, one can still try to detect corresponding points with a contract-expand method, although this introduces erroneous correspondences. Computing the homography with the RANSAC algorithm is used to overcome such shortcomings. Using the homography robustly estimated by RANSAC, the camera projection matrix can be recovered and thus registration is accomplished even when the markers are lost in the scene.
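
    With OpenCV installed, the RANSAC-based homography estimation described here reduces to a single call; the sketch below assumes point correspondences have already been found (here they are synthetic, generated from a known homography with a few deliberate outliers).

        import numpy as np
        import cv2

        # Estimate a plane-induced homography robustly with RANSAC (OpenCV).
        # Correspondences are synthetic: points mapped by a known H plus some outliers.

        H_true = np.array([[1.1, 0.02, 5.0],
                           [0.01, 0.95, -3.0],
                           [1e-4, 2e-4, 1.0]])

        src = np.random.rand(60, 2) * 300
        src_h = np.hstack([src, np.ones((60, 1))])
        dst = src_h @ H_true.T
        dst = dst[:, :2] / dst[:, 2:3]
        dst[:10] += np.random.rand(10, 2) * 80          # corrupt 10 matches (outliers)

        H_est, inlier_mask = cv2.findHomography(src.astype(np.float32),
                                                dst.astype(np.float32),
                                                cv2.RANSAC, ransacReprojThreshold=3.0)
        print("estimated H (scaled):\n", H_est / H_est[2, 2])
        print("inliers kept:", int(inlier_mask.sum()), "of", len(src))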

  3. Pair Programming as a Modern Method of Teaching Computer Science

    Directory of Open Access Journals (Sweden)

    Irena Nančovska Šerbec

    2008-10-01

    Full Text Available At the Faculty of Education, University of Ljubljana, we educate future computer science teachers. Besides didactical, pedagogical, mathematical and other interdisciplinary knowledge, students gain programming knowledge and skills that are crucial for computer science teachers. For all courses, the main emphasis is the acquisition of professional competences related to the teaching profession and the programming profile. The latter are selected according to the well-known document, the ACM Computing Curricula. The professional knowledge is therefore associated and combined with teaching knowledge and skills. In the paper we present how to achieve programming-related competences by using different didactical models (semiotic ladder, cognitive objectives taxonomy, problem solving) and the modern teaching method of "pair programming". Pair programming differs from standard methods (individual work, seminars, projects, etc.). It belongs to extreme programming as a discipline of software development and is known to have positive effects on teaching a first programming language. We have experimentally observed pair programming in the introductory programming course. The paper presents and analyzes the results of using this method: the aspects of satisfaction during programming and the level of gained knowledge. The results are in general positive and demonstrate the promise of this teaching method.

  4. Applications of meshless methods for damage computations with finite strains

    International Nuclear Information System (INIS)

    Pan Xiaofei; Yuan Huang

    2009-01-01

    Material defects such as cavities have great effects on the damage process in ductile materials. Computations based on finite element methods (FEMs) often suffer from instability due to material failure as well as large distortions. To improve computational efficiency and robustness, the element-free Galerkin (EFG) method is applied with the micro-mechanical constitutive damage model proposed by Gurson and modified by Tvergaard and Needleman (the GTN damage model). The EFG algorithm is implemented in the general-purpose finite element code ABAQUS via the user interface UEL. With the help of the EFG method, damage processes in uniaxial tension specimens and notched specimens are analyzed and verified with experimental data. Computational results reveal that damage which takes place in the interior of specimens extends to the exterior and causes fracture of the specimens; damage evolution is fast relative to the whole tension process. The EFG method provides a more stable and robust numerical solution in comparison with the FEM analysis.

  5. Improved computation method in residual life estimation of structural components

    Directory of Open Access Journals (Sweden)

    Maksimović Stevan M.

    2013-01-01

    Full Text Available This work considers numerical computation methods and procedures for predicting fatigue crack growth in cracked, notched structural components. The computation method is based on fatigue life prediction using the strain energy density approach. Based on the strain energy density (SED) theory, a fatigue crack growth model is developed to predict the lifetime of fatigue crack growth for single or mixed mode cracks. The model is based on an equation expressed in terms of low-cycle fatigue parameters. Attention is focused on crack growth analysis of structural components under variable amplitude loads. Crack growth is largely influenced by the effect of the plastic zone at the front of the crack. To obtain an efficient computational model, the plasticity-induced crack closure phenomenon is considered during fatigue crack growth. The use of the strain energy density method is efficient for fatigue crack growth prediction under cyclic loading in damaged structural components. The strain energy density method is convenient for engineering applications since it does not require any additional determination of fatigue crack propagation parameters (those would otherwise need to be determined separately for the crack propagation phase); low-cycle fatigue parameters are used instead. Accurate determination of fatigue crack closure has been a complex task for years. The influence of this phenomenon can be considered by means of experimental and numerical methods; both of these approaches are considered here. Finite element analysis (FEA) has been shown to be a powerful and useful tool [1,6] to analyze crack growth and crack closure effects. Computation results are compared with available experimental results. [Projekat Ministarstva nauke Republike Srbije, br. OI 174001]
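
    The paper's SED-based model is not reproduced here, but as a generic illustration of how a crack-growth law is integrated cycle by cycle to a residual life, a Paris-law sketch with invented constants follows.

        import numpy as np

        # Generic fatigue-life integration sketch (Paris law, da/dN = C * (dK)^m),
        # NOT the paper's strain-energy-density model; constants and geometry invented.

        C, m = 1e-11, 3.0            # Paris constants (m/cycle, MPa*sqrt(m) units)
        Y = 1.12                     # geometry factor, assumed constant
        delta_sigma = 120.0          # stress range, MPa
        a, a_final = 1e-3, 20e-3     # initial and allowable crack length, m

        cycles = 0
        while a < a_final:
            delta_K = Y * delta_sigma * np.sqrt(np.pi * a)
            da_dN = C * delta_K**m
            a += da_dN * 1000        # integrate in blocks of 1000 cycles
            cycles += 1000

        print(f"estimated residual life: ~{cycles:,} cycles")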

  6. Gene probes : principles and protocols [Methods in molecular biology, v. 179]

    National Research Council Canada - National Science Library

    Rapley, Ralph; Aquino de Muro, Marilena

    2002-01-01

    ... of labeled DNA has allowed genes to be mapped to single chromosomes and in many cases to a single chromosome band, promoting significant advances in human genome mapping. Gene Probes: Principles and Protocols presents the principles for gene probe design, labeling, detection, target format, and hybridization conditions together with detailed protocols, accom...

  7. Comparison between evaluating methods about the protocols of different dose distributions in radiotherapy

    International Nuclear Information System (INIS)

    Ju Yongjian; Chen Meihua; Sun Fuyin; Zhang Liang'an; Lei Chengzhi

    2004-01-01

    Objective: To study how the relationship between tumor control probability (TCP) or equivalent uniform dose (EUD) and the degree of dose heterogeneity changes with the biological parameter values of the tumor. Methods: According to the definitions of TCP and EUD, calculation equations were derived. The dose distributions in the tumor were assumed to be Gaussian. The volume of the tumor was divided into several voxels, and the absorbed doses of these voxels were simulated by Monte Carlo methods. Then, for different values of the radiosensitivity (α) and the potential doubling time of the clonogens (Tp), the relationships between TCP or EUD and the standard deviation of the dose (Sd) were evaluated. Results: The TCP-Sd curves were influenced by the α and Tp values, but the EUD-Sd curves showed little variation. Conclusion: When radiotherapy protocols with different dose distributions are compared, it is better to use TCP if the biological parameter values of the tumor are known exactly; otherwise, EUD is preferred.
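
    For concreteness, the two figures of merit compared in this study can be computed from a set of sampled voxel doses as in the sketch below, using a Poisson TCP without the repopulation (Tp) term and the generalised EUD; all parameter values are arbitrary examples, not those of the study.

        import numpy as np

        # Toy TCP and EUD from per-voxel doses drawn from a Gaussian around the mean,
        # mirroring the Monte Carlo voxel sampling described above. The Poisson TCP
        # omits the repopulation (Tp) term; all parameter values are illustrative.

        rng = np.random.default_rng(0)

        def tcp(doses, alpha=0.3, clonogens_per_voxel=1e5):
            """Poisson TCP: product over voxels of exp(-N_i * exp(-alpha * D_i))."""
            return float(np.exp(-np.sum(clonogens_per_voxel * np.exp(-alpha * doses))))

        def eud(doses, a=-10.0):
            """Generalised EUD; a < 0 for tumours so that cold spots dominate."""
            return float(np.mean(doses ** a) ** (1.0 / a))

        mean_dose, n_voxels = 60.0, 1000                 # Gy
        for sd in (0.0, 2.0, 5.0):                       # increasing dose heterogeneity
            doses = (rng.normal(mean_dose, sd, n_voxels) if sd > 0
                     else np.full(n_voxels, mean_dose))
            print(f"sd={sd:4.1f} Gy  TCP={tcp(doses):.3f}  EUD={eud(doses):.1f} Gy")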

  8. Chapter 6: Residential Lighting Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dimetrosky, Scott [Apex Analytics, LLC, Boulder, CO (United States); Parkinson, Katie [Apex Analytics, LLC, Boulder, CO (United States); Lieb, Noah [Apex Analytics, LLC, Boulder, CO (United States)

    2017-10-19

    Given new regulations, increased complexity in the market, and the general shift from CFLs to LEDs, this evaluation protocol was updated in 2017 to shift the focus of the protocols toward LEDs and away from CFLs and to resolve evaluation uncertainties affecting residential lighting incentive programs.

  9. Quality control in cone-beam computed tomography (CBCT) EFOMP-ESTRO-IAEA protocol (summary report).

    Science.gov (United States)

    de Las Heras Gala, Hugo; Torresin, Alberto; Dasu, Alexandru; Rampado, Osvaldo; Delis, Harry; Hernández Girón, Irene; Theodorakou, Chrysoula; Andersson, Jonas; Holroyd, John; Nilsson, Mats; Edyvean, Sue; Gershan, Vesna; Hadid-Beurrier, Lama; Hoog, Christopher; Delpon, Gregory; Sancho Kolster, Ismael; Peterlin, Primož; Garayoa Roca, Julia; Caprile, Paola; Zervides, Costas

    2017-07-01

    The aim of the guideline presented in this article is to unify the test parameters for image quality evaluation and radiation output in all types of cone-beam computed tomography (CBCT) systems. The applications of CBCT span dental and interventional radiology, guided surgery and radiotherapy. The chosen tests provide the means to objectively evaluate the performance and monitor the constancy of the imaging chain. Experience from all involved associations has been collected to achieve a consensus that is rigorous and helpful for practice. The guideline recommends assessing image quality in terms of uniformity, geometrical precision, voxel density values (or Hounsfield units where available), noise, low-contrast resolution and spatial resolution measurements. These tests usually require the use of a phantom and evaluation software. Radiation output can be determined with a kerma-area product meter attached to the tube case. Alternatively, a solid-state dosimeter attached to the flat panel and a simple geometric relationship can be used to calculate the dose to the isocentre. Summary tables including action levels and recommended frequencies for each test, as well as relevant references, are provided. If the radiation output or image quality deviates from expected values, or exceeds documented action levels for a given system, a more in-depth system analysis (using conventional tests) and corrective maintenance work may be required. Copyright © 2017. Published by Elsevier Ltd.
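
    As an example of how two of these image-quality metrics can be scripted from a uniform-phantom slice, the sketch below computes noise and uniformity from a central and four peripheral regions of interest; the ROI geometry and the simulated slice are illustrative, not the protocol's prescriptions.

        import numpy as np

        # Uniformity and noise from a (simulated) uniform-phantom CBCT slice:
        # compare mean voxel values in a central ROI and four peripheral ROIs,
        # and take the central standard deviation as the noise figure.
        # ROI size/placement and the values below are illustrative only.

        def roi_stats(img, cy, cx, half=10):
            roi = img[cy - half:cy + half, cx - half:cx + half]
            return roi.mean(), roi.std()

        rng = np.random.default_rng(3)
        slice_ = rng.normal(0.0, 20.0, (256, 256))      # water-like phantom plus noise
        c = 128
        centre_mean, noise = roi_stats(slice_, c, c)
        peripheral = [roi_stats(slice_, y, x)[0]
                      for y, x in ((40, c), (216, c), (c, 40), (c, 216))]
        uniformity = max(abs(m - centre_mean) for m in peripheral)

        print(f"noise (SD, centre ROI): {noise:.1f}")
        print(f"uniformity (max |peripheral mean - centre mean|): {uniformity:.1f}")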

  10. An introduction to computer simulation methods applications to physical systems

    CERN Document Server

    Gould, Harvey; Christian, Wolfgang

    2007-01-01

    Now in its third edition, this book teaches physical concepts using computer simulations. The text incorporates object-oriented programming techniques and encourages readers to develop good programming habits in the context of doing physics. Designed for readers at all levels, An Introduction to Computer Simulation Methods uses Java, currently the most popular programming language. Introduction, Tools for Doing Simulations, Simulating Particle Motion, Oscillatory Systems, Few-Body Problems: The Motion of the Planets, The Chaotic Motion of Dynamical Systems, Random Processes, The Dynamics of Many Particle Systems, Normal Modes and Waves, Electrodynamics, Numerical and Monte Carlo Methods, Percolation, Fractals and Kinetic Growth Models, Complex Systems, Monte Carlo Simulations of Thermal Systems, Quantum Systems, Visualization and Rigid Body Dynamics, Seeing in Special and General Relativity, Epilogue: The Unity of Physics For all readers interested in developing programming habits in the context of doing phy...

  11. NATO Advanced Study Institute on Methods in Computational Molecular Physics

    CERN Document Server

    Diercksen, Geerd

    1992-01-01

    This volume records the lectures given at a NATO Advanced Study Institute on Methods in Computational Molecular Physics held in Bad Windsheim, Germany, from 22nd July until 2nd August, 1991. This NATO Advanced Study Institute sought to bridge the quite considerable gap which exists between the presentation of molecular electronic structure theory found in contemporary monographs such as, for example, McWeeny's Methods of Molecular Quantum Mechanics (Academic Press, London, 1989) or Wilson's Electron correlation in molecules (Clarendon Press, Oxford, 1984) and the realization of the sophisticated computational algorithms required for their practical application. It sought to underline the relation between the electronic structure problem and the study of nuclear motion. Software for performing molecular electronic structure calculations is now being applied in an increasingly wide range of fields in both the academic and the commercial sectors. Numerous applications are reported in areas as diverse as catalysi...

  12. An Adaptive Reordered Method for Computing PageRank

    Directory of Open Access Journals (Sweden)

    Yi-Ming Bu

    2013-01-01

    Full Text Available We propose an adaptive reordered method to deal with the PageRank problem. It has been shown that one can reorder the hyperlink matrix of the PageRank problem to calculate a reduced system and get the full PageRank vector through forward substitutions. This method can provide a speedup for calculating the PageRank vector. We observe that in the existing reordered method, the cost of the recursive reordering procedure could offset the computational reduction brought by minimizing the dimension of the linear system. With this observation, we introduce an adaptive reordered method to accelerate the total calculation, in which we terminate the reordering procedure appropriately instead of reordering to the end. Numerical experiments show the effectiveness of this adaptive reordered method.
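
    To make the reordering idea above concrete, the sketch below implements a one-level version of the dangling-node reduction that the record builds on: nodes with no out-links are permuted to the end, only the non-dangling block is solved as a linear system, and the remaining entries follow from a single substitution step. It is a minimal NumPy illustration, not the authors' adaptive recursive algorithm; the damping factor and example graph are assumptions.

```python
import numpy as np

def pagerank_reordered(A, alpha=0.85):
    """One-level reordered PageRank sketch.

    A is a dense 0/1 adjacency matrix (A[i, j] = 1 for a link i -> j).
    Dangling nodes are permuted to the end, the reduced linear system is
    solved for the non-dangling block, and the dangling block follows by
    one substitution step.
    """
    n = A.shape[0]
    out_deg = A.sum(axis=1)
    dangling = out_deg == 0
    order = np.concatenate([np.where(~dangling)[0], np.where(dangling)[0]])
    k = int((~dangling).sum())                    # number of non-dangling nodes

    H = np.zeros_like(A, dtype=float)             # row-normalised hyperlink matrix
    H[~dangling] = A[~dangling] / out_deg[~dangling, None]
    H = H[np.ix_(order, order)]

    v = np.full(n, 1.0 / n)                       # uniform personalisation vector
    H11, H12 = H[:k, :k], H[:k, k:]

    # Solve x1^T (I - alpha*H11) = v1^T, then x2^T = v2^T + alpha * x1^T H12.
    x1 = np.linalg.solve((np.eye(k) - alpha * H11).T, v[:k])
    x2 = v[k:] + alpha * (x1 @ H12)
    x = np.concatenate([x1, x2])
    x /= x.sum()                                  # normalise to a probability vector

    pi = np.empty(n)
    pi[order] = x                                 # undo the permutation
    return pi

if __name__ == "__main__":
    A = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 0],                   # node 2 is dangling
                  [1, 0, 1, 0]])
    print(pagerank_reordered(A))
```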

  13. Experiences using DAKOTA stochastic expansion methods in computational simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Templeton, Jeremy Alan; Ruthruff, Joseph R.

    2012-01-01

    Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experimental data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report presents results on the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels that may be needed for the methodologies to achieve convergence.

  14. Interval sampling methods and measurement error: a computer simulation.

    Science.gov (United States)

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
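
    A compact version of such a simulation is sketched below: events of fixed duration are scattered over an observation period and scored with momentary time sampling, partial-interval and whole-interval recording, so the bias of each estimate relative to the true duration can be inspected. The session length, interval size and event parameters are arbitrary choices for illustration, not those used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_session(obs_len=600.0, interval=10.0, n_events=20, event_dur=5.0):
    """Return (true duration proportion, MTS, PIR, WIR estimates) for one
    simulated observation session with randomly placed events."""
    starts = rng.uniform(0.0, obs_len - event_dur, n_events)
    ends = starts + event_dur

    def covered(t0, t1):
        # Total time within [t0, t1) occupied by the union of all events.
        segs = sorted((max(s, t0), min(e, t1)) for s, e in zip(starts, ends)
                      if min(e, t1) > max(s, t0))
        total, cur = 0.0, t0
        for a, b in segs:
            a = max(a, cur)
            if b > a:
                total += b - a
                cur = b
        return total

    edges = np.arange(0.0, obs_len, interval)
    # Momentary time sampling: approximated by a very short window at each interval end.
    mts = np.mean([covered(e + interval - 1e-6, e + interval) > 0 for e in edges])
    # Partial-interval recording: any occurrence within the interval scores it.
    pir = np.mean([covered(e, e + interval) > 0 for e in edges])
    # Whole-interval recording: the interval must be covered completely.
    wir = np.mean([covered(e, e + interval) >= interval - 1e-6 for e in edges])
    return covered(0.0, obs_len) / obs_len, mts, pir, wir

runs = np.array([simulate_session() for _ in range(100)])
print("true, MTS, PIR, WIR means:", runs.mean(axis=0).round(3))
print("standard deviations:      ", runs.std(axis=0).round(3))
```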

  15. Advanced soft computing diagnosis method for tumour grading.

    Science.gov (United States)

    Papageorgiou, E I; Spyridonos, P P; Stylios, C D; Ravazoula, P; Groumpos, P P; Nikiforidis, G N

    2006-01-01

    To develop an advanced diagnostic method for urinary bladder tumour grading. A novel soft computing modelling methodology based on the augmentation of fuzzy cognitive maps (FCMs) with the unsupervised active Hebbian learning (AHL) algorithm is applied. One hundred and twenty-eight cases of urinary bladder cancer were retrieved from the archives of the Department of Histopathology, University Hospital of Patras, Greece. All tumours had been characterized according to the classical World Health Organization (WHO) grading system. To design the FCM model for tumour grading, three expert histopathologists defined the main histopathological features (concepts) and their impact on grade characterization. The resulting FCM model consisted of nine concepts. Eight concepts represented the main histopathological features for tumour grading. The ninth concept represented the tumour grade. To increase the classification ability of the FCM model, the AHL algorithm was applied to adjust the weights of the FCM. The proposed FCM grading model achieved a classification accuracy of 72.5%, 74.42% and 95.55% for tumours of grades I, II and III, respectively. An advanced computerized method to support tumour grade diagnosis decisions was proposed and developed. The novelty of the method is based on employing the soft computing method of FCMs to represent specialized knowledge on histopathology and on augmenting the ability of FCMs using an unsupervised learning algorithm, the AHL. The proposed method performs with reasonably high accuracy compared to other existing methods and at the same time meets the physicians' requirements for transparency and explicability.
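
    The inference step of a fuzzy cognitive map of the kind described above can be written in a few lines: concept activations are repeatedly propagated through the weight matrix and squashed by a sigmoid until they settle. The toy weight matrix and activations below are invented for illustration, and the active Hebbian learning step that adapts the weights is omitted.

```python
import numpy as np

def fcm_infer(weights, activation, n_iter=25, lam=1.0):
    """Iterate a fuzzy cognitive map: weights[j, i] is the causal influence
    of concept j on concept i, and the sigmoid keeps activations in [0, 1]."""
    a = np.asarray(activation, dtype=float)
    for _ in range(n_iter):
        a = 1.0 / (1.0 + np.exp(-lam * (a + a @ weights)))
    return a

# Toy 4-concept map: three histology-like "feature" concepts feeding one
# output concept (the grade); all weights are illustrative, not the paper's.
W = np.array([
    [0.0, 0.0, 0.0,  0.6],
    [0.0, 0.0, 0.0,  0.4],
    [0.0, 0.0, 0.0, -0.3],
    [0.0, 0.0, 0.0,  0.0],
])
state = fcm_infer(W, [0.8, 0.6, 0.1, 0.0])
print("final activations:", state.round(3), "-> grade concept:", round(float(state[-1]), 3))
```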

  16. Shortened screening method for phosphorus fractionation in sediments: A complementary approach to the standards, measurements and testing harmonised protocol

    International Nuclear Information System (INIS)

    Pardo, Patricia; Rauret, Gemma; Lopez-Sanchez, Jose Fermin

    2004-01-01

    The SMT protocol, a sediment phosphorus fractionation method harmonised and validated in the frame of the standards, measurements and testing (SMT) programme (European Commission), establishes five fractions of phosphorus according to their extractability. The determination of the extracted phosphate is carried out spectrophotometrically. This protocol has been applied to 11 sediments of different origin and characteristics, and the phosphorus extracted in each fraction was determined not only by UV-Vis spectrophotometry but also by inductively coupled plasma-atomic emission spectrometry. The use of these two determination techniques allowed the differentiation between phosphorus that was present in the extracts as soluble reactive phosphorus and as total phosphorus. From the comparison of the data obtained with both determination techniques, a shortened screening method for a quick evaluation of the magnitude and importance of the fractions given by the SMT protocol is proposed and validated using two certified reference materials.

  17. Evaluation of Extraction Protocols for Simultaneous Polar and Non-Polar Yeast Metabolite Analysis Using Multivariate Projection Methods

    Directory of Open Access Journals (Sweden)

    Nicolas P. Tambellini

    2013-07-01

    Full Text Available Metabolomic and lipidomic approaches aim to measure metabolites or lipids in the cell. Metabolite extraction is a key step in obtaining useful and reliable data for successful metabolite studies. Significant efforts have been made to identify the optimal extraction protocol for various platforms and biological systems, for both polar and non-polar metabolites. Here we report an approach utilizing chemoinformatics for the systematic comparison of protocols to extract both classes from a single sample of the model yeast organism Saccharomyces cerevisiae. Three chloroform/methanol/water partitioning based extraction protocols found in the literature were evaluated for their effectiveness at reproducibly extracting both polar and non-polar metabolites. Fatty acid methyl esters and methoxyamine/trimethylsilyl derivatized aqueous compounds were analyzed by gas chromatography mass spectrometry to evaluate non-polar and polar metabolite analysis, respectively. The comparative breadth and amount of recovered metabolites were evaluated using multivariate projection methods. This approach identified an optimal protocol, which recovered 64 identified polar metabolites from 105 ion hits and 12 fatty acids, and will potentially attenuate the error and variation associated with combining metabolite profiles from different samples for untargeted analysis with both polar and non-polar analytes. It also confirmed the value of using multivariate projection methods to compare established extraction protocols.
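
    As a rough illustration of the multivariate projection step, the sketch below builds a small synthetic intensity table for three hypothetical extraction protocols and projects it onto its first two principal components with an SVD-based PCA; tighter within-protocol clusters would indicate more reproducible extraction. All data, dimensions and scaling choices here are assumptions, not the study's.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Autoscale the columns and project samples onto the first principal
    components computed via SVD."""
    Xc = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(1)
# Hypothetical table: 3 protocols x 4 replicates x 50 metabolite features.
protocol_means = rng.normal(0.0, 1.0, size=(3, 50))
X = np.vstack([m + rng.normal(0.0, 0.3, size=(4, 50)) for m in protocol_means])
labels = np.repeat(["A", "B", "C"], 4)

scores = pca_scores(X)
for lab in "ABC":
    grp = scores[labels == lab]
    print(f"protocol {lab}: centroid {grp.mean(axis=0).round(2)}, "
          f"within-group spread {grp.std(axis=0).round(2)}")
```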

  18. Splitting method for computing coupled hydrodynamic and structural response

    International Nuclear Information System (INIS)

    Ash, J.E.

    1977-01-01

    A numerical method is developed for application to unsteady fluid dynamics problems, in particular to the mechanics following a sudden release of high energy. Solution of the initial compressible flow phase provides input to a power-series method for the incompressible fluid motions. The system is split into spatial and time domains leading to the convergent computation of a sequence of elliptic equations. Two sample problems are solved, the first involving an underwater explosion and the second the response of a nuclear reactor containment shell structure to a hypothetical core accident. The solutions are correlated with experimental data

  19. Complex Data Modeling and Computationally Intensive Statistical Methods

    CERN Document Server

    Mantovan, Pietro

    2010-01-01

    Recent years have seen the advent and development of many devices able to record and store an ever-increasing amount of complex and high-dimensional data: 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real-time financial data, system control datasets. The analysis of this data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast-growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statistici

  20. Computational methods for planning and evaluating geothermal energy projects

    International Nuclear Information System (INIS)

    Goumas, M.G.; Lygerou, V.A.; Papayannakis, L.E.

    1999-01-01

    In planning, designing and evaluating a geothermal energy project, a number of technical, economic, social and environmental parameters should be considered. The use of computational methods provides a rigorous analysis improving the decision-making process. This article demonstrates the application of decision-making methods developed in operational research for the optimum exploitation of geothermal resources. Two characteristic problems are considered: (1) the economic evaluation of a geothermal energy project under uncertain conditions using a stochastic analysis approach and (2) the evaluation of alternative exploitation schemes for optimum development of a low enthalpy geothermal field using a multicriteria decision-making procedure. (Author)

  1. Automated uncertainty analysis methods in the FRAP computer codes

    International Nuclear Information System (INIS)

    Peck, S.O.

    1980-01-01

    A user-oriented, automated uncertainty analysis capability has been incorporated in the Fuel Rod Analysis Program (FRAP) computer codes. The FRAP codes have been developed for the analysis of Light Water Reactor fuel rod behavior during steady state (FRAPCON) and transient (FRAP-T) conditions as part of the United States Nuclear Regulatory Commission's Water Reactor Safety Research Program. The objective of uncertainty analysis of these codes is to obtain estimates of the uncertainty in computed outputs of the codes as a function of known uncertainties in input variables. This paper presents the methods used to generate an uncertainty analysis of a large computer code, discusses the assumptions that are made, and shows techniques for testing them. An uncertainty analysis of FRAP-T calculated fuel rod behavior during a hypothetical loss-of-coolant transient is presented as an example and carried through the discussion to illustrate the various concepts.
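
    The basic idea of propagating known input uncertainties to code outputs can be illustrated with plain Monte Carlo sampling around a stand-in model, as sketched below; the actual FRAP uncertainty machinery, input distributions and response quantities are not reproduced here, and every number in the example is an assumption.

```python
import numpy as np

rng = np.random.default_rng(42)

def surrogate_fuel_model(power, gap_conductance, conductivity):
    """Stand-in for a code run: returns a made-up 'peak temperature'.
    A real study would execute the fuel-behavior code here instead."""
    return 500.0 + 0.8 * power / gap_conductance + 120.0 / conductivity

n = 10_000
# Assumed input uncertainty distributions (purely illustrative).
power = rng.normal(20.0, 1.0, n)
gap = rng.lognormal(mean=np.log(0.6), sigma=0.2, size=n)
cond = rng.normal(3.0, 0.15, n)

temps = surrogate_fuel_model(power, gap, cond)
print(f"output mean = {temps.mean():.1f}, std = {temps.std():.1f}")
print("95 % interval:", np.percentile(temps, [2.5, 97.5]).round(1))
```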

  2. Comparison of different methods for shielding design in computed tomography

    International Nuclear Information System (INIS)

    Ciraj-Bjelac, O.; Arandjic, D.; Kosutic, D.

    2011-01-01

    The purpose of this work is to compare different methods for shielding calculation in computed tomography (CT). The BIR-IPEM (British Institute of Radiology and Institute of Physics and Engineering in Medicine) and NCRP (National Council on Radiation Protection) methods were used for shielding thickness calculation. Scattered dose levels and calculated barrier thicknesses were also compared with those obtained by scatter dose measurements in the vicinity of a dedicated CT unit. The minimal requirement for protective barriers based on the BIR-IPEM method ranged between 1.1 and 1.4 mm of lead, demonstrating underestimation of up to 20 % and overestimation of up to 30 % when compared with thicknesses based on measured dose levels. For the NCRP method, calculated thicknesses were 33 % higher (27-42 %). BIR-IPEM methodology-based results were comparable with values based on scattered dose measurements, while results obtained using the NCRP methodology demonstrated an overestimation of the minimal required barrier thickness. (authors)

  3. The Influence of Collaborative Reflection and Think-Aloud Protocols on Pre-Service Teachers' Reflection: A Mixed Methods Approach

    Science.gov (United States)

    Epler, Cory M.; Drape, Tiffany A.; Broyles, Thomas W.; Rudd, Rick D.

    2013-01-01

    The purpose of this mixed methods study was to determine if there are differences in pre-service teachers' depth of reflection when using a written self-reflection form, a written self-reflection form and a think-aloud protocol, and collaborative reflection. Twenty-six pre-service teachers were randomly assigned to fourteen teaching teams. The…

  4. Multiscale methods in turbulent combustion: strategies and computational challenges

    International Nuclear Information System (INIS)

    Echekki, Tarek

    2009-01-01

    A principal challenge in modeling turbulent combustion flows is associated with their complex, multiscale nature. Traditional paradigms in the modeling of these flows have attempted to address this nature through different strategies, including exploiting the separation of turbulence and combustion scales and a reduced description of the composition space. The resulting moment-based methods often yield reasonable predictions of flow and reactive scalars' statistics under certain conditions. However, these methods must constantly evolve to address combustion at different regimes, modes or with dominant chemistries. In recent years, alternative multiscale strategies have emerged, which although in part inspired by the traditional approaches, also draw upon basic tools from computational science, applied mathematics and the increasing availability of powerful computational resources. This review presents a general overview of different strategies adopted for multiscale solutions of turbulent combustion flows. Within these strategies, some specific models are discussed or outlined to illustrate their capabilities and underlying assumptions. These strategies may be classified under four different classes, including (i) closure models for atomistic processes, (ii) multigrid and multiresolution strategies, (iii) flame-embedding strategies and (iv) hybrid large-eddy simulation-low-dimensional strategies. A combination of these strategies and models can potentially represent a robust alternative strategy to moment-based models; but a significant challenge remains in the development of computational frameworks for these approaches as well as their underlying theories. (topical review)

  5. Mathematical modellings and computational methods for structural analysis of LMFBR's

    International Nuclear Information System (INIS)

    Liu, W.K.; Lam, D.

    1983-01-01

    In this paper, two aspects of nuclear reactor problems are discussed: modelling techniques and computational methods for large-scale linear and nonlinear analyses of LMFBRs. For nonlinear fluid-structure interaction problems with large deformation, an arbitrary Lagrangian-Eulerian description is applicable. For certain linear fluid-structure interaction problems, the structural response spectrum can be found via an 'added mass' approach. In a sense, the fluid inertia is accounted for by a mass matrix added to the structural mass. The fluid/structural modes of certain fluid-structure problems can be uncoupled to get the reduced added mass. The advantage of this approach is that it can account for the many repeated structures of a nuclear reactor. In regard to nonlinear dynamic problems, the coupled nonlinear fluid-structure equations usually have to be solved by direct time integration. The computation can be very expensive and time consuming for nonlinear problems. Thus, it is desirable to optimize the accuracy and computational effort by using an implicit-explicit mixed time integration method. (orig.)

  6. Gene probes: principles and protocols [Methods in molecular biology, v. 179]

    National Research Council Canada - National Science Library

    Rapley, Ralph; Aquino de Muro, Marilena

    2002-01-01

    "Senior scientists Marilena Aquino de Muro and Ralph Rapley have brought together an outstanding collection of time-tested protocols for designing and using genes probes in a wide variety of applications...

  7. Study protocol: a randomized controlled trial of a computer-based depression and substance abuse intervention for people attending residential substance abuse treatment

    Directory of Open Access Journals (Sweden)

    Kelly Peter J

    2012-02-01

    Full Text Available Abstract Background A large proportion of people attending residential alcohol and other substance abuse treatment have a co-occurring mental illness. Empirical evidence suggests that it is important to treat both the substance abuse problem and co-occurring mental illness concurrently and in an integrated fashion. However, the majority of residential alcohol and other substance abuse services do not address mental illness in a systematic way. It is likely that computer delivered interventions could improve the ability of substance abuse services to address co-occurring mental illness. This protocol describes a study in which we will assess the effectiveness of adding a computer delivered depression and substance abuse intervention for people who are attending residential alcohol and other substance abuse treatment. Methods/Design Participants will be recruited from residential rehabilitation programs operated by the Australian Salvation Army. All participants who satisfy the diagnostic criteria for an alcohol or other substance dependence disorder will be asked to participate in the study. After completion of a baseline assessment, participants will be randomly assigned to either a computer delivered substance abuse and depression intervention (treatment condition) or to a computer-delivered typing tutorial (active control condition). All participants will continue to complete The Salvation Army residential program, a predominantly 12-step based treatment facility. Randomisation will be stratified by gender (Male, Female), length of time the participant has been in the program at the commencement of the study (4 weeks or less, 4 weeks or more), and use of anti-depressant medication (currently prescribed medication, not prescribed medication). Participants in both conditions will complete computer sessions twice per week, over a five-week period. Research staff blind to treatment allocation will complete the assessments at baseline, and then 3, 6, 9
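
    For readers unfamiliar with stratified allocation, the sketch below shows one common way to implement it: participants are grouped by the three stratification factors and assigned to arms within permuted blocks so the arms stay balanced inside every stratum. The field names, block size of two and seed are illustrative assumptions; the trial's actual randomisation procedure may differ.

```python
import random

def stratified_allocation(participants, seed=7):
    """Assign participants to 'intervention' or 'control' within each stratum
    defined by gender, time in program, and anti-depressant use, using
    permuted blocks of two so arms stay balanced inside every stratum."""
    rng = random.Random(seed)
    strata, allocation = {}, {}
    for p in participants:
        key = (p["gender"], p["weeks_in_program_ge_4"], p["on_antidepressants"])
        strata.setdefault(key, []).append(p["id"])
    for ids in strata.values():
        rng.shuffle(ids)
        for i in range(0, len(ids), 2):
            block = ["intervention", "control"]
            rng.shuffle(block)
            for pid, arm in zip(ids[i:i + 2], block):
                allocation[pid] = arm
    return allocation

# Hypothetical participant records (field names are invented for this sketch).
sample = [
    {"id": 1, "gender": "F", "weeks_in_program_ge_4": True,  "on_antidepressants": False},
    {"id": 2, "gender": "F", "weeks_in_program_ge_4": True,  "on_antidepressants": False},
    {"id": 3, "gender": "M", "weeks_in_program_ge_4": False, "on_antidepressants": True},
    {"id": 4, "gender": "M", "weeks_in_program_ge_4": False, "on_antidepressants": True},
]
print(stratified_allocation(sample))
```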

  8. Optimising social information by game theory and ant colony method to enhance routing protocol in opportunistic networks

    Directory of Open Access Journals (Sweden)

    Chander Prabha

    2016-09-01

    Full Text Available Data loss and disconnection of nodes are frequent in opportunistic networks. Social information plays an important role in reducing data loss because it depends on the connectivity of nodes. The appropriate selection of the next hop based on social information is critical for improving the performance of routing in opportunistic networks. The frequent disconnection problem is overcome by optimising the social information with the Ant Colony Optimization method, which depends on the topology of the opportunistic network. The proposed protocol is examined thoroughly via analysis and simulation in order to assess its performance in comparison with other social-based routing protocols in opportunistic networks under various parameter settings.
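
    A minimal sketch of the pheromone-plus-social-utility idea is given below: each candidate neighbour is weighted by pheromone (reinforced after successful deliveries, evaporated otherwise) and by a social heuristic score, and the next hop is drawn in proportion to that weight. The class, parameter values and the way the social score is generated are all illustrative assumptions rather than the protocol proposed in the article.

```python
import random

class AntRouting:
    """Toy ant-colony next-hop selector for one node in an opportunistic
    network.  pheromone[n] is reinforced when a message forwarded through
    neighbour n is later acknowledged; social[n] is a heuristic score such
    as contact frequency (here just random numbers for illustration)."""

    def __init__(self, neighbours, alpha=1.0, beta=2.0, rho=0.1):
        self.alpha, self.beta, self.rho = alpha, beta, rho
        self.pheromone = {n: 1.0 for n in neighbours}
        self.social = {n: random.random() for n in neighbours}

    def choose_next_hop(self):
        # Selection probability is proportional to pheromone^alpha * social^beta.
        weights = {n: (self.pheromone[n] ** self.alpha) * (self.social[n] ** self.beta)
                   for n in self.pheromone}
        r, acc = random.uniform(0.0, sum(weights.values())), 0.0
        for n, w in weights.items():
            acc += w
            if r <= acc:
                return n
        return n  # numerical fallback

    def reinforce(self, neighbour, delivered):
        # Evaporate everywhere, deposit on the used link only if delivery succeeded.
        for k in self.pheromone:
            self.pheromone[k] *= (1.0 - self.rho)
        if delivered:
            self.pheromone[neighbour] += 1.0

router = AntRouting(["B", "C", "D"])
hop = router.choose_next_hop()
router.reinforce(hop, delivered=True)
print(hop, {k: round(v, 2) for k, v in router.pheromone.items()})
```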

  9. Comparative analysis of five DNA isolation protocols and three drying methods for leaves samples of Nectandra megapotamica (Spreng.) Mez.

    Directory of Open Access Journals (Sweden)

    Leonardo Severo da Costa

    2016-06-01

    Full Text Available The aim of the study was to establish a DNA isolation protocol for Nectandra megapotamica (Spreng.) Mez. able to obtain samples of high yield and quality for use in genomic analysis. A commercial kit and four classical methods of DNA extraction were tested, including three cetyltrimethylammonium bromide (CTAB)-based and one sodium dodecyl sulfate (SDS)-based method. Three drying methods for leaf samples were also evaluated, including drying at room temperature (RT), in an oven at 40ºC (S40), and in a microwave oven (FMO). The DNA solutions obtained from the different types of leaf samples using the five protocols were assessed in terms of cost, execution time, and quality and yield of extracted DNA. The commercial kit did not extract DNA with sufficient quantity or quality for successful PCR reactions. Among the classical methods, only the protocols of Dellaporta and of Khanuja yielded DNA extractions for all three types of foliar samples that resulted in successful PCR reactions and subsequent enzyme restriction assays. Based on the evaluated variables, the most appropriate DNA extraction method for Nectandra megapotamica (Spreng.) Mez. was that of Dellaporta, regardless of the method used to dry the samples. The selected method has a relatively low cost and total execution time. Moreover, the quality and quantity of DNA extracted using this method was sufficient for DNA sequence amplification using PCR reactions and to obtain restriction fragments.

  10. A numerical method to compute interior transmission eigenvalues

    International Nuclear Information System (INIS)

    Kleefeld, Andreas

    2013-01-01

    In this paper the numerical calculation of eigenvalues of the interior transmission problem arising in acoustic scattering for constant contrast in three dimensions is considered. From the computational point of view existing methods are very expensive, and are only able to show the existence of such transmission eigenvalues. Furthermore, they have trouble finding them if two or more eigenvalues are situated closely together. We present a new method based on complex-valued contour integrals and the boundary integral equation method which is able to calculate highly accurate transmission eigenvalues. So far, this is the first paper providing such accurate values for various surfaces different from a sphere in three dimensions. Additionally, the computational cost is even lower than those of existing methods. Furthermore, the algorithm is capable of finding complex-valued eigenvalues for which no numerical results have been reported yet. Until now, the proof of existence of such eigenvalues is still open. Finally, highly accurate eigenvalues of the interior Dirichlet problem are provided and might serve as test cases to check newly derived Faber–Krahn type inequalities for larger transmission eigenvalues that are not yet available. (paper)
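
    The record does not give its algorithm in detail, but a generic contour-integral eigensolver of the Beyn type conveys the main idea: quadrature of the resolvent around a closed contour compresses the problem onto the eigenvalues enclosed by it, which are then recovered from a small linear eigenproblem. The sketch below applies it to a simple linear test matrix rather than to the boundary integral operators of the interior transmission problem; contour, probe size and tolerance are assumptions.

```python
import numpy as np

def beyn_eigenvalues(T, n, center=0.0, radius=1.0, n_quad=64, n_probe=8, tol=1e-8):
    """Beyn-style contour integral eigensolver (one moment pair).  T(z) must
    return the n x n problem matrix; all eigenvalues of T(z)v = 0 lying
    inside the circle |z - center| < radius are returned."""
    rng = np.random.default_rng(0)
    V = rng.standard_normal((n, n_probe)) + 1j * rng.standard_normal((n, n_probe))
    A0 = np.zeros((n, n_probe), dtype=complex)
    A1 = np.zeros((n, n_probe), dtype=complex)
    for j in range(n_quad):
        theta = 2.0 * np.pi * j / n_quad
        z = center + radius * np.exp(1j * theta)
        X = np.linalg.solve(T(z), V)               # resolvent applied to probes
        A0 += radius * np.exp(1j * theta) / n_quad * X
        A1 += z * radius * np.exp(1j * theta) / n_quad * X
    U, s, Wh = np.linalg.svd(A0, full_matrices=False)
    k = int(np.sum(s > tol * s[0]))                # numerical rank = eigenvalue count
    U, s, Wh = U[:, :k], s[:k], Wh[:k, :]
    B = U.conj().T @ A1 @ Wh.conj().T / s          # small k x k linearisation
    return np.linalg.eigvals(B)

# Sanity check on a linear problem T(z) = A - z I with a known spectrum.
A = np.diag([0.3, 0.5 + 0.2j, 2.5, -3.0])
eigs = beyn_eigenvalues(lambda z: A - z * np.eye(4), n=4, radius=1.0)
print(np.sort_complex(eigs))                       # approximately 0.3 and 0.5+0.2j
```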

  11. Laboratory Sequence in Computational Methods for Introductory Chemistry

    Science.gov (United States)

    Cody, Jason A.; Wiser, Dawn C.

    2003-07-01

    A four-exercise laboratory sequence for introductory chemistry integrating hands-on, student-centered experience with computer modeling has been designed and implemented. The progression builds from exploration of molecular shapes to intermolecular forces and the impact of those forces on chemical separations made with gas chromatography and distillation. The sequence ends with an exploration of molecular orbitals. The students use the computers as a tool; they build the molecules, submit the calculations, and interpret the results. Because of the construction of the sequence and its placement spanning the semester break, good laboratory notebook practices are reinforced and the continuity of course content and methods between semesters is emphasized. The inclusion of these techniques in the first year of chemistry has had a positive impact on student perceptions and student learning.

  12. An analytical method for computing atomic contact areas in biomolecules.

    Science.gov (United States)

    Mach, Paul; Koehl, Patrice

    2013-01-15

    We propose a new analytical method for detecting and computing contacts between atoms in biomolecules. It is based on the alpha shape theory and proceeds in three steps. First, we compute the weighted Delaunay triangulation of the union of spheres representing the molecule. In the second step, the Delaunay complex is filtered to derive the dual complex. Finally, contacts between spheres are collected. In this approach, two atoms i and j are defined to be in contact if their centers are connected by an edge in the dual complex. The contact areas between atom i and its neighbors are computed based on the caps formed by these neighbors on the surface of i; the total area of all these caps is partitioned according to their spherical Laguerre Voronoi diagram on the surface of i. This method is analytical and its implementation in a new program BallContact is fast and robust. We have used BallContact to study contacts in a database of 1551 high resolution protein structures. We show that with this new definition of atomic contacts, we generate realistic representations of the environments of atoms and residues within a protein. In particular, we establish the importance of nonpolar contact areas that complement the information represented by the accessible surface areas. This new method bears similarity to the tessellation methods used to quantify atomic volumes and contacts, with the advantage that it does not require the presence of explicit solvent molecules if the surface of the protein is to be considered. © 2012 Wiley Periodicals, Inc.
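
    One elementary building block of such contact-area calculations is the spherical cap that a neighbouring sphere cuts on the surface of atom i; the full method then partitions overlapping caps with a Laguerre (power) diagram, which is beyond this sketch. The radii and coordinates below are illustrative, and only the single pairwise cap area is computed.

```python
import numpy as np

def cap_contact_area(ci, ri, cj, rj):
    """Area of the cap cut on sphere i (centre ci, radius ri) by the radical
    plane shared with sphere j; this is the surface of i buried inside j."""
    d = float(np.linalg.norm(np.asarray(cj, float) - np.asarray(ci, float)))
    if d >= ri + rj:
        return 0.0                               # spheres do not intersect
    if d + ri <= rj:
        return 4.0 * np.pi * ri * ri             # sphere i completely inside j
    if d + rj <= ri:
        return 0.0                               # j inside i, surface of i untouched
    x = (d * d + ri * ri - rj * rj) / (2.0 * d)  # centre-to-plane distance
    h = ri - x                                   # cap height on sphere i
    return 2.0 * np.pi * ri * h

# Two overlapping "atoms" with van der Waals-like radii (values illustrative).
print(round(cap_contact_area([0.0, 0.0, 0.0], 1.7, [0.0, 0.0, 2.5], 1.52), 3))
```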

  13. Chapter 14: Chiller Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Tiessen, Alex [Posterity Group, Ottawa, ON (Canada)

    2017-10-06

    This protocol defines a chiller measure as a project that directly impacts equipment within the boundary of a chiller plant. A chiller plant encompasses a chiller - or multiple chillers - and associated auxiliary equipment. This protocol primarily covers electric-driven chillers and chiller plants. It does not include thermal energy storage and absorption chillers fired by natural gas or steam, although a similar methodology may be applicable to these chilled water system components.

  14. An Accurate liver segmentation method using parallel computing algorithm

    International Nuclear Information System (INIS)

    Elbasher, Eiman Mohammed Khalied

    2014-12-01

    Computed Tomography (CT or CAT scan) is a noninvasive diagnostic imaging procedure that uses a combination of X-rays and computer technology to produce horizontal, or axial, images (often called slices) of the body. A CT scan shows detailed images of any part of the body, including the bones, muscles, fat and organs. CT scans are more detailed than standard X-rays. CT scans may be done with or without "contrast". Contrast refers to a substance taken by mouth and/or injected into an intravenous (IV) line that causes the particular organ or tissue under study to be seen more clearly. CT scans of the liver and biliary tract are used in the diagnosis of many diseases of the abdominal structures, particularly when another type of examination, such as X-rays, physical examination, or ultrasound, is not conclusive. Unfortunately, the presence of noise and artifacts in the edges and fine details of CT images limits the contrast resolution and makes the diagnostic procedure more difficult. This experimental study was conducted at the College of Medical Radiological Science, Sudan University of Science and Technology and Fidel Specialist Hospital. The study sample included 50 patients. The main objective of this research was to study an accurate liver segmentation method using a parallel computing algorithm, and to segment the liver and adjacent organs using image processing techniques. The main segmentation technique used in this study was the watershed transform. The scope of image processing and analysis applied to medical applications is to improve the quality of the acquired image and extract quantitative information from medical image data in an efficient and accurate way. The results of this technique agreed with the results of Jarritt et al. (2010), Kratchwil et al. (2010), Jover et al. (2011), Yomamoto et al. (1996), Cai et al. (1999), and Saudha and Jayashree (2010), who used different segmentation filtering based on methods of enhancing the computed tomography images. Another
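
    The core of a marker-controlled watershed segmentation, which this kind of pipeline builds on, can be sketched in a few lines with scikit-image: smooth and threshold the slice, take the distance transform of the foreground, use its local maxima as markers and flood the inverted distance map. The synthetic slice, filter sizes and footprint below are assumptions; real CT data would also need Hounsfield-unit windowing and liver-specific marker selection.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def watershed_segment(slice_2d):
    """Marker-controlled watershed on a single (synthetic) CT-like slice."""
    smoothed = gaussian(slice_2d, sigma=2)            # suppress noise first
    mask = smoothed > threshold_otsu(smoothed)        # rough foreground mask
    distance = ndi.distance_transform_edt(mask)
    # Local maxima of the distance map become the region markers.
    peaks = peak_local_max(distance, footprint=np.ones((15, 15)), labels=mask)
    markers = np.zeros_like(distance, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)

# Synthetic slice: two bright blobs on a noisy background.
yy, xx = np.mgrid[0:128, 0:128]
img = (np.exp(-((yy - 40) ** 2 + (xx - 40) ** 2) / 400.0)
       + np.exp(-((yy - 85) ** 2 + (xx - 90) ** 2) / 600.0)
       + 0.05 * np.random.default_rng(0).standard_normal((128, 128)))
labels = watershed_segment(img)
print("regions found:", labels.max())
```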

  15. A discrete ordinate response matrix method for massively parallel computers

    International Nuclear Information System (INIS)

    Hanebutte, U.R.; Lewis, E.E.

    1991-01-01

    A discrete ordinate response matrix method is formulated for the solution of neutron transport problems on massively parallel computers. The response matrix formulation eliminates iteration on the scattering source. The nodal matrices which result from the diamond-differenced equations are utilized in a factored form which minimizes memory requirements and significantly reduces the required number of operations. The algorithm utilizes massive parallelism by assigning each spatial node to a processor. The algorithm is accelerated effectively by a synthetic method in which the low-order diffusion equations are also solved by massively parallel red/black iterations. The method has been implemented on a 16k Connection Machine-2, and S8 and S16 solutions have been obtained for fixed-source benchmark problems in X-Y geometry.

  16. Review methods for image segmentation from computed tomography images

    International Nuclear Information System (INIS)

    Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik; Mahmud, Rozi

    2014-01-01

    Image segmentation is a challenging process in terms of achieving accuracy, automation and robustness, especially in medical images. Many segmentation methods can be applied to medical images, but not all of them are suitable. For medical purposes, the aims of image segmentation are to study the anatomical structure, identify the region of interest, measure tissue volume to track tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of methods for segmentation using Computed Tomography (CT) images. CT images have their own characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring of the image and visual noise. The details of the methods, their strengths and the problems encountered with them will be defined and explained. It is necessary to know the suitable segmentation method in order to obtain accurate segmentation. This paper can serve as a guide for researchers in choosing a suitable segmentation method, especially for segmenting images from CT scans.

  17. A computer method for simulating the decay of radon daughters

    International Nuclear Information System (INIS)

    Hartley, B.M.

    1988-01-01

    The analytical equations representing the decay of a series of radioactive atoms through a number of daughter products are well known. These equations are for an idealized case in which the expectation value of the number of atoms which decay in a certain time can be represented by a smooth curve. The real curve of the total number of disintegrations from a radioactive species consists of a series of Heaviside step functions, with the steps occurring at the time of the disintegration. The disintegration of radioactive atoms is said to be random but this random behaviour is such that a single species forms an ensemble of which the times of disintegration give a geometric distribution. Numbers which have a geometric distribution can be generated by computer and can be used to simulate the decay of one or more radioactive species. A computer method is described for simulating such decay of radioactive atoms and this method is applied specifically to the decay of the short half life daughters of radon 222 and the emission of alpha particles from polonium 218 and polonium 214. Repeating the simulation of the decay a number of times provides a method for investigating the statistical uncertainty inherent in methods for measurement of exposure to radon daughters. This statistical uncertainty is difficult to investigate analytically since the time of decay of an atom of polonium 218 is not independent of the time of decay of subsequent polonium 214. The method is currently being used to investigate the statistical uncertainties of a number of commonly used methods for the counting of alpha particles from radon daughters and the calculations of exposure
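
    The essence of such a simulation can be sketched as below: each initial polonium-218 atom is followed through the short-lived chain by drawing a random lifetime at every step, and the alpha emissions falling inside a counting window are tallied over many repeated runs to expose the counting statistics. Exponential lifetimes are used here as the continuous-time analogue of the geometric description above, and the half-lives, atom numbers and counting window are approximate, illustrative values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Approximate half-lives of the short-lived Rn-222 daughters, in seconds.
HALF_LIFE = {"Po218": 3.05 * 60, "Pb214": 26.8 * 60,
             "Bi214": 19.7 * 60, "Po214": 164e-6}
CHAIN = ["Po218", "Pb214", "Bi214", "Po214"]
ALPHA_EMITTERS = {"Po218", "Po214"}

def alpha_counts(n_atoms=1000, window=(120.0, 420.0), n_runs=100):
    """Follow each initial Po-218 atom through the chain by drawing an
    exponential lifetime at every step and count alpha emissions that fall
    inside the counting window; repeat to estimate the count variability."""
    tau = {nuc: HALF_LIFE[nuc] / np.log(2.0) for nuc in CHAIN}  # mean lives
    counts = []
    for _ in range(n_runs):
        c = 0
        for _ in range(n_atoms):
            t = 0.0
            for nuc in CHAIN:
                t += rng.exponential(tau[nuc])
                if nuc in ALPHA_EMITTERS and window[0] <= t < window[1]:
                    c += 1
        counts.append(c)
    counts = np.asarray(counts)
    return counts.mean(), counts.std()

mean, std = alpha_counts()
print(f"alpha counts in window: {mean:.1f} +/- {std:.1f}")
```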

  18. Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.

    Science.gov (United States)

    Battiti, Roberto

    1990-01-01

    This thesis presents new algorithms for low and intermediate level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion with respect to the homogeneous scheme. In some cases introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real-time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium grain distributed memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from

  19. A new computational method for reactive power market clearing

    International Nuclear Information System (INIS)

    Zhang, T.; Elkasrawy, A.; Venkatesh, B.

    2009-01-01

    After deregulation of electricity markets, ancillary services such as reactive power supply are priced separately. However, unlike real power supply, procedures for costing and pricing reactive power supply are still evolving, and spot markets for reactive power do not exist as of now. Further, traditional formulations proposed for clearing reactive power markets use a non-linear mixed integer programming formulation that is difficult to solve. This paper proposes a new reactive power supply market clearing scheme. The novelty of this formulation lies in the pricing scheme that rewards transformers for tap shifting while participating in this market. The proposed model is a challenging non-linear mixed integer program. A significant portion of the manuscript is devoted to the development of a new successive mixed integer linear programming (MILP) technique to solve this formulation. The successive MILP method is computationally robust and fast. The IEEE 6-bus and 300-bus systems are used to test the proposed method. These tests serve to demonstrate the computational speed and rigor of the proposed method. (author)

  20. Empirical method for simulation of water tables by digital computers

    International Nuclear Information System (INIS)

    Carnahan, C.L.; Fenske, P.R.

    1975-09-01

    An empirical method is described for computing a matrix of water-table elevations from a matrix of topographic elevations and a set of observed water-elevation control points which may be distributed randomly over the area of interest. The method is applicable to regions, such as the Great Basin, where the water table can be assumed to conform to a subdued image of overlying topography. A first approximation to the water table is computed by smoothing a matrix of topographic elevations and adjusting each node of the smoothed matrix according to a linear regression between observed water elevations and smoothed topographic elevations. Each observed control point is assumed to exert a radially decreasing influence on the first approximation surface. The first approximation is then adjusted further to conform to observed water-table elevations near control points. Outside the domain of control, the first approximation is assumed to represent the most probable configuration of the water table. The method has been applied to the Nevada Test Site and the Hot Creek Valley areas in Nevada
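
    A toy version of the two-stage procedure is sketched below: the topographic grid is smoothed, a linear regression between observed water levels and the smoothed topography at the control points supplies the first approximation, and a radially decaying residual correction then pulls the surface onto the control points. Grid, smoothing window, influence radius and control data are all invented for illustration.

```python
import numpy as np
from scipy import ndimage as ndi

def water_table(topo, control_ij, control_z, smooth_px=15, radius_px=40):
    """First-approximation water table: smooth the topography, regress the
    observed water elevations on the smoothed topography, then pull the
    surface toward each control point with a radially decaying correction."""
    smoothed = ndi.uniform_filter(topo.astype(float), size=smooth_px)
    zs = np.array([smoothed[i, j] for i, j in control_ij])
    slope, intercept = np.polyfit(zs, control_z, 1)    # linear regression
    wt = intercept + slope * smoothed

    # Residual correction: influence falls off linearly to zero at radius_px.
    ii, jj = np.indices(topo.shape)
    num = np.zeros_like(wt)
    den = np.zeros_like(wt)
    for (i, j), z in zip(control_ij, control_z):
        dist = np.hypot(ii - i, jj - j)
        w = np.clip(1.0 - dist / radius_px, 0.0, None)
        num += w * (z - wt[i, j])
        den += w
    return wt + np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)

# Synthetic topography with three observed water levels (all illustrative).
topo = 1000 + 50 * np.sin(np.linspace(0, 3, 100))[:, None] * np.ones((100, 100))
controls = [(20, 20), (50, 70), (80, 30)]
levels = [995.0, 1010.0, 1002.0]
print(water_table(topo, controls, levels).round(1)[::40, ::40])
```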

  1. A Novel Automated Method for Analyzing Cylindrical Computed Tomography Data

    Science.gov (United States)

    Roth, D. J.; Burke, E. R.; Rauser, R. W.; Martin, R. E.

    2011-01-01

    A novel software method is presented that is applicable for analyzing cylindrical and partially cylindrical objects inspected using computed tomography. This method involves unwrapping and re-slicing data so that the CT data from the cylindrical object can be viewed as a series of 2-D sheets in the vertical direction in addition to the volume rendering and normal plane views provided by traditional CT software. The method is based on interior and exterior surface edge detection and, under proper conditions, is fully automated and requires no input from the user except the correct voxel dimension from the CT scan. The software is available from NASA in 32- and 64-bit versions that can be applied to gigabyte-sized data sets, processing data either in random access memory or primarily on the computer hard drive. Please inquire with the presenting author if further interested. This software differentiates itself from other possible re-slicing software solutions through its complete automation and advanced processing and analysis capabilities.
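
    The unwrap-and-re-slice idea can be illustrated with a simple polar resampling of one axial slice, as sketched below: pixels are looked up along rays of constant radius so the cylinder wall flattens into a band, and stacking such bands over the axial direction yields the sheet views described above. The synthetic annulus, interpolation order and sampling densities are assumptions; the NASA software additionally detects the interior and exterior surfaces automatically.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def unwrap_slice(slice_2d, center, n_theta=720, n_r=None):
    """Resample one axial slice onto a (radius, angle) grid so that a
    cylindrical wall appears as a flat band."""
    cy, cx = center
    if n_r is None:
        n_r = int(min(slice_2d.shape) // 2)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    radii = np.arange(n_r, dtype=float)
    R, T = np.meshgrid(radii, thetas, indexing="ij")
    rows = cy + R * np.sin(T)
    cols = cx + R * np.cos(T)
    return map_coordinates(slice_2d, [rows, cols], order=1, mode="nearest")

# Synthetic slice: an annulus ("pipe wall") with a small notch as a defect.
yy, xx = np.mgrid[0:256, 0:256]
r = np.hypot(yy - 128, xx - 128)
img = ((r > 80) & (r < 95)).astype(float)
img[40:45, 120:126] = 0.0
unwrapped = unwrap_slice(img, center=(128, 128))
print(unwrapped.shape)      # (n_r, n_theta): radius down, angle across
```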

  2. Computer codes and methods for simulating accelerator driven systems

    International Nuclear Information System (INIS)

    Sartori, E.; Byung Chan Na

    2003-01-01

    A large set of computer codes and associated data libraries have been developed by nuclear research and industry over the past half century. A large number of them are in the public domain and can be obtained under agreed conditions from different Information Centres. The areas covered comprise: basic nuclear data and models, reactor spectra and cell calculations, static and dynamic reactor analysis, criticality, radiation shielding, dosimetry and material damage, fuel behaviour, safety and hazard analysis, heat conduction and fluid flow in reactor systems, spent fuel and waste management (handling, transportation, and storage), economics of fuel cycles, impact on the environment of nuclear activities etc. These codes and models have been developed mostly for critical systems used for research or power generation and other technological applications. Many of them have not been designed for accelerator driven systems (ADS), but with competent use, they can be used for studying such systems or can form the basis for adapting existing methods to the specific needs of ADSs. The present paper describes the types of methods, codes and associated data available and their role in the applications. It provides Web addresses for facilitating searches for such tools. Some indications are given of the effect of inappropriate or 'blind' use of existing tools for ADS. Reference is made to available experimental data that can be used for validating the use of these methods. Finally, some international activities linked to the different computational aspects are described briefly. (author)

  3. Security analysis of the decoy method with the Bennett–Brassard 1984 protocol for finite key lengths

    International Nuclear Information System (INIS)

    Hayashi, Masahito; Nakayama, Ryota

    2014-01-01

    This paper provides a formula for the sacrifice bit-length for privacy amplification with the Bennett–Brassard 1984 protocol for finite key lengths, when we employ the decoy method. Using the formula, we can guarantee the security parameter for a realizable quantum key distribution system. The key generation rates with finite key lengths are numerically evaluated. The proposed method improves the existing key generation rate even in the asymptotic setting. (paper)

  4. Description of a method for computing fluid-structure interaction

    International Nuclear Information System (INIS)

    Gantenbein, F.

    1982-02-01

    A general formulation allowing computation of structure vibrations in a dense fluid is described. It is based on fluid modelling by fluid finite elements. Two variables are associated with each fluid node: the pressure p and a variable π defined by p = d²π/dt². Coupling between structure and fluid is introduced by surface elements. This method is easy to introduce in a general finite element code. Validation was obtained by analytical calculations and tests. It is widely used for vibrational and seismic studies of pipes and internals of nuclear reactors; some applications are presented.

  5. Computer Aided Flowsheet Design using Group Contribution Methods

    DEFF Research Database (Denmark)

    Bommareddy, Susilpa; Eden, Mario R.; Gani, Rafiqul

    2011-01-01

    In this paper, a systematic group contribution based framework is presented for synthesis of process flowsheets from a given set of input and output specifications. Analogous to the group contribution methods developed for molecular design, the framework employs process groups to represent...... information of each flowsheet to minimize the computational load and information storage. The design variables for the selected flowsheet(s) are identified through a reverse simulation approach and are used as initial estimates for rigorous simulation to verify the feasibility and performance of the design....

  6. COMPUTER-IMPLEMENTED METHOD OF PERFORMING A SEARCH USING SIGNATURES

    DEFF Research Database (Denmark)

    2017-01-01

    A computer-implemented method of processing a query vector and a data vector, comprising: generating a set of masks and a first set of multiple signatures and a second set of multiple signatures by applying the set of masks to the query vector and the data vector, respectively, and generating candidate pairs, of a first signature and a second signature, by identifying matches of a first signature and a second signature. The set of masks comprises a configuration of the elements that is a Hadamard code; a permutation of a Hadamard code; or a code that deviates from a Hadamard code...
  7. Method and apparatus for managing transactions with connected computers

    Science.gov (United States)

    Goldsmith, Steven Y.; Phillips, Laurence R.; Spires, Shannon V.

    2003-01-01

    The present invention provides a method and apparatus that make use of existing computer and communication resources and that reduce the errors and delays common to complex transactions such as international shipping. The present invention comprises an agent-based collaborative work environment that assists geographically distributed commercial and government users in the management of complex transactions such as the transshipment of goods across the U.S.-Mexico border. Software agents can mediate the creation, validation and secure sharing of shipment information and regulatory documentation over the Internet, using the World-Wide Web to interface with human users.

  8. Numerical methods and computers used in elastohydrodynamic lubrication

    Science.gov (United States)

    Hamrock, B. J.; Tripp, J. H.

    1982-01-01

    Some of the methods of obtaining approximate numerical solutions to boundary value problems that arise in elastohydrodynamic lubrication are reviewed. The highlights of four general approaches (direct, inverse, quasi-inverse, and Newton-Raphson) are sketched. Advantages and disadvantages of these approaches are presented along with a flow chart showing some of the details of each. The basic question of numerical stability of the elastohydrodynamic lubrication solutions, especially in the pressure spike region, is considered. Computers used to solve this important class of lubrication problems are briefly described, with emphasis on supercomputers.

  9. A hybrid method for the parallel computation of Green's functions

    International Nuclear Information System (INIS)

    Petersen, Dan Erik; Li Song; Stokbro, Kurt; Sorensen, Hans Henrik B.; Hansen, Per Christian; Skelboe, Stig; Darve, Eric

    2009-01-01

    Quantum transport models for nanodevices using the non-equilibrium Green's function method require the repeated calculation of the block tridiagonal part of the Green's and lesser Green's function matrices. This problem is related to the calculation of the inverse of a sparse matrix. Because of the large number of times this calculation needs to be performed, this is computationally very expensive even on supercomputers. The classical approach is based on recurrence formulas which cannot be efficiently parallelized. This practically prevents the solution of large problems with hundreds of thousands of atoms. We propose new recurrences for a general class of sparse matrices to calculate Green's and lesser Green's function matrices which extend formulas derived by Takahashi and others. We show that these recurrences may lead to a dramatically reduced computational cost because they only require computing a small number of entries of the inverse matrix. Then, we propose a parallelization strategy for block tridiagonal matrices which involves a combination of Schur complement calculations and cyclic reduction. It achieves good scalability even on problems of modest size.
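
    For orientation, the classical serial recurrence that such parallel schemes start from is sketched below: a forward sweep builds left-connected Green's functions block by block, and a backward sweep turns them into the diagonal blocks of the full inverse. This is the textbook recursive Green's function algorithm, shown here for a generic block tridiagonal matrix and checked against a dense inverse; it is not the parallel Schur-complement/cyclic-reduction scheme proposed in the paper.

```python
import numpy as np

def rgf_diagonal_blocks(diag, upper, lower):
    """Return the diagonal blocks of inv(A) for a block tridiagonal A given
    as lists of diagonal, upper and lower blocks (classical serial RGF)."""
    n = len(diag)
    gL = [None] * n                         # left-connected Green's functions
    gL[0] = np.linalg.inv(diag[0])
    for i in range(1, n):                   # forward sweep
        gL[i] = np.linalg.inv(diag[i] - lower[i - 1] @ gL[i - 1] @ upper[i - 1])
    G = [None] * n
    G[-1] = gL[-1]
    for i in range(n - 2, -1, -1):          # backward sweep
        G[i] = gL[i] + gL[i] @ upper[i] @ G[i + 1] @ lower[i] @ gL[i]
    return G

# Check against a dense inverse on a small random block tridiagonal matrix.
rng = np.random.default_rng(0)
b, nblk = 3, 4                              # block size, number of blocks
diag = [rng.standard_normal((b, b)) + 5 * np.eye(b) for _ in range(nblk)]
upper = [rng.standard_normal((b, b)) for _ in range(nblk - 1)]
lower = [rng.standard_normal((b, b)) for _ in range(nblk - 1)]
A = np.zeros((b * nblk, b * nblk))
for i in range(nblk):
    A[i*b:(i+1)*b, i*b:(i+1)*b] = diag[i]
    if i < nblk - 1:
        A[i*b:(i+1)*b, (i+1)*b:(i+2)*b] = upper[i]
        A[(i+1)*b:(i+2)*b, i*b:(i+1)*b] = lower[i]
G = rgf_diagonal_blocks(diag, upper, lower)
print(np.allclose(G[1], np.linalg.inv(A)[b:2*b, b:2*b]))   # expected: True
```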

  10. Strategies for research engagement of clinicians in allied health (STRETCH): a mixed methods research protocol.

    Science.gov (United States)

    Mickan, Sharon; Wenke, Rachel; Weir, Kelly; Bialocerkowski, Andrea; Noble, Christy

    2017-09-11

    Allied health professionals (AHPs) report positive attitudes to using research evidence in clinical practice, yet often lack time, confidence and skills to use, participate in and conduct research. A range of multifaceted strategies including education, mentoring and guidance have been implemented to increase AHPs' use of and participation in research. Emerging evidence suggests that knowledge brokering activities have the potential to support research engagement, but it is not clear which knowledge brokering strategies are most effective and in what contexts they work best to support and maintain clinicians' research engagement. This protocol describes an exploratory concurrent mixed methods study that is designed to understand how allied health research fellows use knowledge brokering strategies within tailored evidence-based interventions, to facilitate research engagement by allied health clinicians. Simultaneously, a realist approach will guide a systematic process evaluation of the research fellows' pattern of use of knowledge brokering strategies within each case study to build a programme theory explaining which knowledge brokering strategies work best, in what contexts and why. Learning and behavioural theories will inform this critical explanation. An explanation of how locally tailored evidence-based interventions improve AHPs use of, participation in and leadership of research projects will be summarised and shared with all participating clinicians and within each case study. It is expected that local recommendations will be developed and shared with medical and nursing professionals in and beyond the health service, to facilitate building research capacity in a systematic and effective way. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  11. Use of a mobile social networking intervention for weight management: a mixed-methods study protocol.

    Science.gov (United States)

    Laranjo, Liliana; Lau, Annie Y S; Martin, Paige; Tong, Huong Ly; Coiera, Enrico

    2017-07-12

    Obesity and physical inactivity are major societal challenges and significant contributors to the global burden of disease and healthcare costs. Information and communication technologies are increasingly being used in interventions to promote behaviour change in diet and physical activity. In particular, social networking platforms seem promising for the delivery of weight control interventions. We intend to pilot test an intervention involving the use of a social networking mobile application and tracking devices (Fitbit Flex 2 and Fitbit Aria scale) to promote the social comparison of weight and physical activity, in order to evaluate whether mechanisms of social influence lead to changes in those outcomes over the course of the study. Mixed-methods study involving semi-structured interviews and a pre-post quasi-experimental pilot with one arm, where healthy participants in different body mass index (BMI) categories, aged between 19 and 35 years old, will be subjected to a social networking intervention over a 6-month period. The primary outcome is the average difference in weight before and after the intervention. Secondary outcomes include BMI, number of steps per day, engagement with the intervention, social support and system usability. Semi-structured interviews will assess participants' expectations and perceptions regarding the intervention. Ethics approval was granted by Macquarie University's Human Research Ethics Committee for Medical Sciences on 3 November 2016 (ethics reference number 5201600716). The social network will be moderated by a researcher with clinical expertise, who will monitor and respond to concerns raised by participants. Monitoring will involve daily observation of measures collected by the fitness tracker and the wireless scale, as well as continuous supervision of forum interactions and posts. Additionally, a protocol is in place to monitor for participant misbehaviour and direct participants-in-need to appropriate sources of help

  12. Multigrid Methods for the Computation of Propagators in Gauge Fields

    Science.gov (United States)

    Kalkreuter, Thomas

    Multigrid methods were invented for the solution of discretized partial differential equations in order to overcome the slowness of traditional algorithms by updates on various length scales. In the present work generalizations of multigrid methods for propagators in gauge fields are investigated. Gauge fields are incorporated in algorithms in a covariant way. The kernel C of the restriction operator which averages from one grid to the next coarser grid is defined by projection on the ground-state of a local Hamiltonian. The idea behind this definition is that the appropriate notion of smoothness depends on the dynamics. The ground-state projection choice of C can be used in arbitrary dimension and for arbitrary gauge group. We discuss proper averaging operations for bosons and for staggered fermions. The kernels C can also be used in multigrid Monte Carlo simulations, and for the definition of block spins and blocked gauge fields in Monte Carlo renormalization group studies. Actual numerical computations are performed in four-dimensional SU(2) gauge fields. We prove that our proposals for block spins are “good”, using renormalization group arguments. A central result is that the multigrid method works in arbitrarily disordered gauge fields, in principle. It is proved that computations of propagators in gauge fields without critical slowing down are possible when one uses an ideal interpolation kernel. Unfortunately, the idealized algorithm is not practical, but it was important to answer questions of principle. Practical methods are able to outperform the conjugate gradient algorithm in case of bosons. The case of staggered fermions is harder. Multigrid methods give considerable speed-ups compared to conventional relaxation algorithms, but on lattices up to 18^4 conjugate gradient is superior.

  13. Fluid history computation methods for reactor safeguards problems using MNODE computer program

    International Nuclear Information System (INIS)

    Huang, Y.S.; Savery, C.W.

    1976-10-01

    A method for predicting the pressure-temperature histories of air, liquid water, and vapor flowing in a zoned containment as a result of a high-energy pipe rupture is described. The computer code, MNODE, has been developed for 12 connected control volumes and 24 inertia flow paths. Predictions by the code are compared with the results of an analytical gas dynamics problem, semiscale blowdown experiments, full-scale MARVIKEN test results, and Battelle-Frankfurt model PWR containment test data. The MNODE solutions to NRC/AEC subcompartment benchmark problems are also compared with results predicted by other computer codes such as RELAP-3, FLASH-2, and CONTEMPT-PS. The analytical consideration is consistent with Section 6.2.1.2 of the Standard Format (Rev. 2) issued by the U.S. Nuclear Regulatory Commission in September 1975.

  14. Methods for culturing retinal pigment epithelial cells: a review of current protocols and future recommendations

    Directory of Open Access Journals (Sweden)

    Aaron H Fronk

    2016-07-01

    Full Text Available The retinal pigment epithelium is an important part of the vertebrate eye, particularly in studying the causes and possible treatment of age-related macular degeneration. The retinal pigment epithelium is difficult to access in vivo due to its location at the back of the eye, making experimentation with age-related macular degeneration treatments problematic. An alternative to in vivo experimentation is cultivating the retinal pigment epithelium in vitro, a practice that has been going on since the 1970s, providing a wide range of retinal pigment epithelial culture protocols, each producing cells and tissue of varying degrees of similarity to natural retinal pigment epithelium. The purpose of this review is to provide researchers with a ready list of retinal pigment epithelial protocols, their effects on cultured tissue, and their specific possible applications. Protocols using human and animal retinal pigment epithelium cells, derived from tissue or cell lines, are discussed, and recommendations for future researchers included.

  15. Oligomerization of G protein-coupled receptors: computational methods.

    Science.gov (United States)

    Selent, J; Kaczor, A A

    2011-01-01

    Recent research has unveiled the complexity of mechanisms involved in G protein-coupled receptor (GPCR) functioning, in which receptor dimerization/oligomerization may play an important role. Although the first high-resolution X-ray structure for a likely functional chemokine receptor dimer has been deposited in the Protein Data Bank, the interactions and mechanisms of dimer formation are not yet fully understood. In this respect, computational methods play a key role in predicting accurate GPCR complexes. This review outlines computational approaches focusing on sequence- and structure-based methodologies and discusses their advantages and limitations. Sequence-based approaches that search for possible protein-protein interfaces in GPCR complexes have been applied with success in several studies, but did not always yield consistent results. Structure-based methodologies are a potent complement to sequence-based approaches. For instance, protein-protein docking is a valuable method, especially when guided by experimental constraints. Some disadvantages, such as limited receptor flexibility and non-consideration of the membrane environment, have to be taken into account. Molecular dynamics simulation can overcome these drawbacks, giving a detailed description of conformational changes in a native-like membrane. Successful prediction of GPCR complexes using computational approaches combined with experimental efforts may help to understand the role of dimeric/oligomeric GPCR complexes in fine-tuning receptor signaling. Moreover, since such GPCR complexes have attracted interest as potential drug targets for diverse diseases, unveiling the molecular determinants of dimerization/oligomerization can provide important implications for drug discovery.

  16. Computing thermal Wigner densities with the phase integration method

    International Nuclear Information System (INIS)

    Beutier, J.; Borgis, D.; Vuilleumier, R.; Bonella, S.

    2014-01-01

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems

  17. Computing thermal Wigner densities with the phase integration method.

    Science.gov (United States)

    Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  18. Computational methods for ab initio detection of microRNAs

    Directory of Open Access Journals (Sweden)

    Malik Yousef

    2012-10-01

    MicroRNAs are small RNA sequences of 18-24 nucleotides in length, which serve as templates to drive post-transcriptional gene silencing. The canonical microRNA pathway starts with transcription from DNA and is followed by processing via the Microprocessor complex, yielding a hairpin structure, which is then exported into the cytosol, where it is processed by Dicer and then incorporated into the RNA-induced silencing complex. All of these biogenesis steps add to the overall specificity of miRNA production and effect. Unfortunately, their modes of action are just beginning to be elucidated, and therefore computational prediction algorithms cannot model the process but are usually forced to employ machine learning approaches. This work focuses on ab initio prediction methods throughout; homology-based miRNA detection methods are therefore not discussed. Current ab initio prediction algorithms, their ties to data mining, and their prediction accuracy are detailed.

  19. Data graphing methods, articles of manufacture, and computing devices

    Energy Technology Data Exchange (ETDEWEB)

    Wong, Pak Chung; Mackey, Patrick S.; Cook, Kristin A.; Foote, Harlan P.; Whiting, Mark A.

    2016-12-13

    Data graphing methods, articles of manufacture, and computing devices are described. In one aspect, a method includes accessing a data set, displaying a graphical representation including data of the data set which is arranged according to a first of different hierarchical levels, wherein the first hierarchical level represents the data at a first of a plurality of different resolutions which respectively correspond to respective ones of the hierarchical levels, selecting a portion of the graphical representation wherein the data of the portion is arranged according to the first hierarchical level at the first resolution, modifying the graphical representation by arranging the data of the portion according to a second of the hierarchical levels at a second of the resolutions, and after the modifying, displaying the graphical representation wherein the data of the portion is arranged according to the second hierarchical level at the second resolution.

  20. A finite element solution method for quadrics parallel computer

    International Nuclear Information System (INIS)

    Zucchini, A.

    1996-08-01

    A distributed preconditioned conjugate gradient method for finite element analysis has been developed and implemented on a parallel SIMD Quadrics computer. The main characteristic of the method is that it does not require any actual assembly of all element equations into a global system. The physical domain of the problem is partitioned into cells of n_p finite elements, and each cell element is assigned to a different node of an n_p-processor machine. Element stiffness matrices are stored in the data memory of the assigned processing node and the solution process is executed completely in parallel at the element level. Inter-element, and therefore inter-processor, communications are required once per iteration to perform local sums of vector quantities between neighbouring elements. A prototype implementation has been tested on an 8-node Quadrics machine in a simple 2D benchmark problem.
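
    The key idea, computing matrix-vector products element by element without ever assembling a global stiffness matrix, can be sketched in a few lines. The serial Python example below is a hedged illustration only: it uses 1D linear bar elements and an unpreconditioned CG loop, whereas the paper describes a preconditioned, SIMD-parallel implementation on the Quadrics machine; the toy problem, names, and boundary handling are assumptions.

```python
import numpy as np

# 1D bar of unit length discretized with n_el linear elements; global system K u = f.
n_el = 8
n_nodes = n_el + 1
h = 1.0 / n_el

# Local stiffness matrix of a linear 1D element (identical for every element here).
k_local = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
connectivity = [(e, e + 1) for e in range(n_el)]   # element -> global node numbers

def matvec(u):
    """K @ u computed element by element, without assembling K."""
    y = np.zeros_like(u)
    for (i, j) in connectivity:        # on a parallel machine, one element per processor
        y_local = k_local @ np.array([u[i], u[j]])
        y[i] += y_local[0]             # these sums are the inter-processor communication
        y[j] += y_local[1]
    return y

def solve_cg(f, fixed, tol=1e-10, max_iter=200):
    """Unpreconditioned CG on the element-level operator; Dirichlet nodes held at zero."""
    u = np.zeros_like(f)
    r = f - matvec(u)
    r[fixed] = 0.0
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        Ap[fixed] = 0.0
        alpha = rs / (p @ Ap)
        u += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return u

# Usage: uniform unit load, u(0) = 0 fixed; compare the free-end value with the exact one.
f = np.full(n_nodes, h)
f[0] = f[-1] = h / 2.0
u = solve_cg(f, fixed=[0])
print(u[-1])   # exact tip displacement for -u'' = 1, u(0)=0, u'(1)=0 is 0.5
```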

  1. A novel dual energy method for enhanced quantitative computed tomography

    Science.gov (United States)

    Emami, A.; Ghadiri, H.; Rahmim, A.; Ay, M. R.

    2018-01-01

    Accurate assessment of bone mineral density (BMD) is critically important in clinical practice, and conveniently enabled via quantitative computed tomography (QCT). Meanwhile, dual-energy QCT (DEQCT) enables enhanced detection of small changes in BMD relative to single-energy QCT (SEQCT). In the present study, we aimed to investigate the accuracy of QCT methods, with particular emphasis on a new dual-energy approach, in comparison to single-energy and conventional dual-energy techniques. We used a sinogram-based analytical CT simulator to model the complete chain of CT data acquisitions, and assessed performance of SEQCT and different DEQCT techniques in quantification of BMD. We demonstrate a 120% reduction in error when using a proposed dual-energy Simultaneous Equation by Constrained Least-squares method, enabling more accurate bone mineral measurements.
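
    For readers unfamiliar with dual-energy decomposition, the following sketch poses a generic two-material decomposition as a constrained (non-negative) least-squares problem. It is only a schematic stand-in for the constrained least-squares idea; the calibration matrix, noise level, and material values are made up and do not reproduce the authors' Simultaneous Equation by Constrained Least-squares method.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Calibration: CT numbers (HU) per unit density of bone mineral and soft tissue at the
# low- and high-energy acquisitions. All values are invented for illustration.
A = np.array([[1400.0, 60.0],   # low-kVp response to [bone_mineral, soft_tissue], HU per g/cm^3
              [ 800.0, 55.0]])  # high-kVp response

def decompose(hu_low, hu_high):
    """Solve A @ [rho_bone, rho_soft] = measurements with non-negativity constraints."""
    b = np.array([hu_low, hu_high])
    result = lsq_linear(A, b, bounds=(0.0, np.inf))
    return result.x              # densities in g/cm^3

# Usage: a voxel with 0.3 g/cm^3 bone mineral in 1.0 g/cm^3 soft tissue, plus noise.
rng = np.random.default_rng(3)
true = np.array([0.3, 1.0])
measured = A @ true + rng.normal(0.0, 2.0, size=2)
print(decompose(*measured))      # should recover roughly [0.3, 1.0]
```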

  2. Comparison of four computational methods for computing Q factors and resonance wavelengths in photonic crystal membrane cavities

    DEFF Research Database (Denmark)

    de Lasson, Jakob Rosenkrantz; Frandsen, Lars Hagedorn; Burger, Sven

    2016-01-01

    We benchmark four state-of-the-art computational methods by computing quality factors and resonance wavelengths in photonic crystal membrane L5 and L9 line defect cavities. The convergence of the methods with respect to resolution, degrees of freedom and number of modes is investigated. Special attention is paid to the influence of the size of the computational domain. Convergence is not obtained for some of the methods, indicating that some are more suitable than others for analyzing line defect cavities.

  3. Chapter 22: Compressed Air Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Benton, Nathanael [Nexant, Inc., San Francisco, CA (United States); Burns, Patrick [Nexant, Inc., San Francisco, CA (United States)

    2017-10-18

    Compressed-air systems are used widely throughout industry for many operations, including pneumatic tools, packaging and automation equipment, conveyors, and other industrial process operations. Compressed-air systems are defined as a group of subsystems composed of air compressors, air treatment equipment, controls, piping, pneumatic tools, pneumatically powered machinery, and process applications using compressed air. A compressed-air system has three primary functional subsystems: supply, distribution, and demand. Air compressors are the primary energy consumers in a compressed-air system and are the primary focus of this protocol. The two compressed-air energy efficiency measures specifically addressed in this protocol are: High-efficiency/variable speed drive (VSD) compressor replacing modulating, load/unload, or constant-speed compressor; and Compressed-air leak survey and repairs. This protocol provides direction on how to reliably verify savings from these two measures using a consistent approach for each.

  4. Computer prediction of subsurface radionuclide transport: an adaptive numerical method

    International Nuclear Information System (INIS)

    Neuman, S.P.

    1983-01-01

    Radionuclide transport in the subsurface is often modeled with the aid of the advection-dispersion equation. A review of existing computer methods for the solution of this equation shows that there is need for improvement. To answer this need, a new adaptive numerical method is proposed based on an Eulerian-Lagrangian formulation. The method is based on a decomposition of the concentration field into two parts, one advective and one dispersive, in a rigorous manner that does not leave room for ambiguity. The advective component of steep concentration fronts is tracked forward with the aid of moving particles clustered around each front. Away from such fronts the advection problem is handled by an efficient modified method of characteristics called single-step reverse particle tracking. When a front dissipates with time, its forward tracking stops automatically and the corresponding cloud of particles is eliminated. The dispersion problem is solved by an unconventional Lagrangian finite element formulation on a fixed grid which involves only symmetric and diagonal matrices. Preliminary tests against analytical solutions of one- and two-dimensional dispersion in a uniform steady state velocity field suggest that the proposed adaptive method can handle the entire range of Peclet numbers from 0 to infinity, with Courant numbers well in excess of 1.
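
    The "single-step reverse particle tracking" component can be illustrated with a minimal semi-Lagrangian advection step: each grid node is traced backwards along the velocity over one time step and the old concentration field is interpolated at the departure point. The Python sketch below is a hedged 1D illustration with a uniform velocity; it omits the forward-tracked fronts and the Lagrangian finite element dispersion step described in the record.

```python
import numpy as np

def reverse_particle_tracking_step(c, x, v, dt):
    """One advection step: trace each node back by v*dt and interpolate the old field."""
    x_departure = x - v * dt             # single-step backward trajectory
    return np.interp(x_departure, x, c)  # linear interpolation of the concentration

# Usage: advect a Gaussian concentration pulse in a uniform steady velocity field.
nx, L, v, dt = 200, 10.0, 1.0, 0.25      # note: Courant number v*dt/dx ~ 5, well above 1
x = np.linspace(0.0, L, nx)
c = np.exp(-((x - 2.0) ** 2) / 0.5)

for _ in range(20):                      # total travel distance = 20 * v * dt = 5
    c = reverse_particle_tracking_step(c, x, v, dt)

print(x[np.argmax(c)])                   # the peak should now sit near x = 7
```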

  5. Parallel computation of multigroup reactivity coefficient using iterative method

    Science.gov (United States)

    Susmikanti, Mike; Dewayatna, Winter

    2013-09-01

    One of the research activities supporting the commercial radioisotope production program is safety research on the irradiation of FPM (Fission Product Molybdenum) targets. FPM targets are stainless-steel tubes containing high-enriched uranium, and the tubes are irradiated to obtain fission products, which are widely used in the form of kits in nuclear medicine. Irradiating FPM tubes in the reactor core can interfere with core performance; one such disturbance comes from changes in flux or reactivity. A method is therefore needed for calculating safety margins for the configuration changes that occur over the life of the reactor, and making the code faster is essential. The neutron safety margin for the research reactor can be re-evaluated without modifying the reactivity calculation itself, which is an advantage of the perturbation method. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions for several uranium contents. This model is computationally demanding, and several parallel iterative algorithms have been developed for the resulting large sparse matrix systems. The Red-Black Gauss-Seidel iteration and a parallel power iteration method can be used to solve the multigroup diffusion equation system and to calculate the criticality and reactivity coefficients. In this research, a code for reactivity calculation with parallel processing was developed as part of the safety analysis; the calculation can be performed more quickly and efficiently by exploiting parallel processing on a multicore computer. The code was applied to safety-limit calculations for irradiated FPM targets with increasing uranium content.
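
    As a hedged illustration of the power-iteration part of such a calculation, the sketch below solves a toy one-group, one-dimensional diffusion k-eigenvalue problem. The inner solve is done directly here; in a parallel code of the kind described in the record it would be replaced by Red-Black Gauss-Seidel sweeps. All cross sections, dimensions, and names are invented for illustration.

```python
import numpy as np

def power_iteration_keff(M, F, tol=1e-8, max_iter=500):
    """Solve M*phi = (1/k)*F*phi for the fundamental mode by power (source) iteration."""
    phi = np.ones(M.shape[0])
    k = 1.0
    for _ in range(max_iter):
        source = F @ phi / k
        phi_new = np.linalg.solve(M, source)   # in practice: parallel Gauss-Seidel sweeps
        k_new = k * np.sum(F @ phi_new) / np.sum(F @ phi)
        if abs(k_new - k) < tol:
            return k_new, phi_new / np.linalg.norm(phi_new)
        k, phi = k_new, phi_new
    return k, phi / np.linalg.norm(phi)

# Usage: toy one-group, one-dimensional diffusion problem (slab, zero-flux boundaries).
n, h = 50, 1.0
D, sigma_a, nu_sigma_f = 1.0, 0.02, 0.025
M = (np.diag(np.full(n, 2 * D / h**2 + sigma_a))
     - np.diag(np.full(n - 1, D / h**2), 1)
     - np.diag(np.full(n - 1, D / h**2), -1))   # leakage + absorption
F = np.diag(np.full(n, nu_sigma_f))             # fission production
k_eff, flux = power_iteration_keff(M, F)
print(round(k_eff, 4))

# A reactivity change between two configurations can then be estimated from two k_eff
# values, e.g. rho = (k_perturbed - k_reference) / (k_perturbed * k_reference).
```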

  6. Chapter 18: Variable Frequency Drive Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Romberger, Jeff [SBW Consulting, Inc., Bellevue, WA (United States)

    2017-06-21

    An adjustable-speed drive (ASD) includes all devices that vary the speed of a rotating load, including those that vary the motor speed and linkage devices that allow constant motor speed while varying the load speed. The Variable Frequency Drive Evaluation Protocol presented here addresses evaluation issues for variable-frequency drives (VFDs) installed on commercial and industrial motor-driven centrifugal fans and pumps for which torque varies with speed. Constant torque load applications, such as those for positive displacement pumps, are not covered by this protocol.

  7. Impact of reduced-radiation dual-energy protocols using 320-detector row computed tomography for analyzing urinary calculus components: initial in vitro evaluation.

    Science.gov (United States)

    Cai, Xiangran; Zhou, Qingchun; Yu, Juan; Xian, Zhaohui; Feng, Youzhen; Yang, Wencai; Mo, Xukai

    2014-10-01

    To evaluate the impact of reduced-radiation dual-energy (DE) protocols using 320-detector row computed tomography on the differentiation of urinary calculus components. A total of 58 urinary calculi were placed into the same phantom and underwent DE scanning with 320-detector row computed tomography. Each calculus was scanned 4 times with the DE protocols using 135 kV and 80 kV tube voltage and different tube current combinations, including 100 mA and 570 mA (group A), 50 mA and 290 mA (group B), 30 mA and 170 mA (group C), and 10 mA and 60 mA (group D). The acquisition data of all 4 groups were then analyzed by stone DE analysis software, and the results were compared with x-ray diffraction analysis. Noise, contrast-to-noise ratio, and radiation dose were compared. Calculi were correctly identified in 56 of 58 stones (96.6%) using group A and B protocols. However, only 35 stones (60.3%) and 16 stones (27.6%) were correctly diagnosed using group C and D protocols, respectively. Mean noise increased significantly and mean contrast-to-noise ratio decreased significantly from groups A to D. The reduced-current DE protocol of group B still allowed accurate calculus component analysis while reducing patient radiation exposure to 1.81 mSv. Further reduction of tube currents may compromise diagnostic accuracy.

  8. Overall evaluability of low dose protocol for computed tomography angiography of thoracic aorta using 80 kV and iterative reconstruction algorithm using different concentration contrast media.

    Science.gov (United States)

    Annoni, Andrea Daniele; Mancini, Maria E; Andreini, Daniele; Formenti, Alberto; Mushtaq, Saima; Nobili, Enrica; Guglielmo, Marco; Baggiano, Andrea; Conte, Edoardo; Pepi, Mauro

    2017-10-01

    Multidetector Computed Tomography Angiography (MDCTA) is presently the imaging modality of choice for aortic disease. However, the effective radiation dose and the risk related to the use of contrast agents associated with MDCTA are issues of concern. The aim of this study was to assess the image quality of a low dose ECG-gated MDCTA of the thoracic aorta using different concentration contrast media without a tailored injection protocol. Two hundred patients were randomised into four different scan protocols: Group A (Iodixanol 320 and 80 kVp tube voltage), Group B (Iodixanol 320 and 100 kVp tube voltage), Group C (Iomeprol 400 and 80 kVp tube voltage) and Group D (Iomeprol 400 and 100 kVp tube voltage). Image quality, noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and effective dose (ED) were compared among groups. There were no significant differences in image noise, SNR and CNR between groups with the same tube voltage. Significant differences in SNR and CNR were found between groups with 80 kV and groups using 100 kV, but without differences in terms of image quality. ED was significantly lower in the groups with 80 kV. Multidetector Computed Tomography Angiography protocols using 80 kV and low concentration contrast media are feasible without the need for tailored injection protocols.
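
    For reference, SNR and CNR of the kind compared in this study are usually computed from region-of-interest statistics along the lines of the short sketch below; the ROI definitions, the choice of background noise, and the Hounsfield-unit values are illustrative assumptions, not the study's measurement protocol.

```python
import numpy as np

def snr_cnr(roi_vessel, roi_background):
    """SNR and CNR from two regions of interest (values in Hounsfield units)."""
    signal = np.mean(roi_vessel)
    background = np.mean(roi_background)
    noise = np.std(roi_background)       # image noise taken as the background SD
    return signal / noise, (signal - background) / noise

# Usage with made-up ROI samples (enhanced aortic lumen vs. paraspinal muscle).
rng = np.random.default_rng(0)
aorta = rng.normal(350.0, 25.0, 500)     # HU
muscle = rng.normal(50.0, 25.0, 500)     # HU
print(snr_cnr(aorta, muscle))
```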

  9. Clinical and cost effectiveness of computer treatment for aphasia post stroke (Big CACTUS): study protocol for a randomised controlled trial.

    Science.gov (United States)

    Palmer, Rebecca; Cooper, Cindy; Enderby, Pam; Brady, Marian; Julious, Steven; Bowen, Audrey; Latimer, Nicholas

    2015-01-27

    Aphasia affects the ability to speak, comprehend spoken language, read and write. One third of stroke survivors experience aphasia. Evidence suggests that aphasia can continue to improve after the first few months with intensive speech and language therapy, which is frequently beyond what resources allow. The development of computer software for language practice provides an opportunity for self-managed therapy. This pragmatic randomised controlled trial will investigate the clinical and cost effectiveness of a computerised approach to long-term aphasia therapy post stroke. A total of 285 adults with aphasia at least four months post stroke will be randomly allocated to either usual care, computerised intervention in addition to usual care or attention and activity control in addition to usual care. Those in the intervention group will receive six months of self-managed word finding practice on their home computer with monthly face-to-face support from a volunteer/assistant. Those in the attention control group will receive puzzle activities, supplemented by monthly telephone calls. Study delivery will be coordinated by 20 speech and language therapy departments across the United Kingdom. Outcome measures will be made at baseline, six, nine and 12 months after randomisation by blinded speech and language therapist assessors. Primary outcomes are the change in number of words (of personal relevance) named correctly at six months and improvement in functional conversation. Primary outcomes will be analysed using a Hochberg testing procedure. Significance will be declared if differences in both word retrieval and functional conversation at six months are significant at the 5% level, or if either comparison is significant at 2.5%. A cost utility analysis will be undertaken from the NHS and personal social service perspective. Differences between costs and quality-adjusted life years in the three groups will be described and the incremental cost effectiveness ratio
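
    The stated decision rule for the two co-primary outcomes (both significant at the 5% level, or either at 2.5%) corresponds to a Hochberg step-up procedure for two hypotheses. A minimal sketch, with a hypothetical function name and p-values, is given below.

```python
def hochberg_two_coprimary(p_word_retrieval, p_conversation, alpha=0.05):
    """Hochberg step-up for two co-primary outcomes.

    Both outcomes are declared significant if both p-values are below alpha;
    otherwise only the smaller one is, provided it falls below alpha / 2.
    """
    p_sorted = sorted([("word_retrieval", p_word_retrieval),
                       ("conversation", p_conversation)], key=lambda t: t[1])
    (name_small, p_small), (name_large, p_large) = p_sorted
    if p_large < alpha:                   # larger p-value tested at alpha
        return {name_small: True, name_large: True}
    if p_small < alpha / 2:               # smaller p-value tested at alpha / 2
        return {name_small: True, name_large: False}
    return {name_small: False, name_large: False}

# Usage with hypothetical p-values.
print(hochberg_two_coprimary(0.012, 0.060))   # only the smaller comparison significant
print(hochberg_two_coprimary(0.030, 0.045))   # both significant at the 5% level
```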

  10. Protocol: A simple phenol-based method for 96-well extraction of high quality RNA from Arabidopsis

    Directory of Open Access Journals (Sweden)

    Coustham Vincent

    2011-03-01

    Background: Many experiments in modern plant molecular biology require the processing of large numbers of samples for a variety of applications, from mutant screens to the analysis of natural variants. A severe bottleneck to many such analyses is the acquisition of good yields of high quality RNA suitable for use in sensitive downstream applications such as real time quantitative reverse-transcription polymerase chain reaction (real time qRT-PCR). Although several commercial kits are available for high-throughput RNA extraction in 96-well format, only one non-kit method has been described in the literature, using the commercial reagent TRIZOL. Results: We describe an unusual phenomenon when using TRIZOL reagent with young Arabidopsis seedlings. This prompted us to develop a high-throughput RNA extraction protocol (HTP96) adapted from a well established phenol:chloroform-LiCl method (P:C-L) that is cheap, reliable and requires no specialist equipment. With this protocol 192 high quality RNA samples can be prepared in 96-well format in three hours (less than 1 minute per sample) with less than 1% loss of samples. We demonstrate that the RNA derived from this protocol is of high quality and suitable for use in real time qRT-PCR assays. Conclusion: The development of the HTP96 protocol has vastly increased our sample throughput, allowing us to fully exploit the large sample capacity of modern real time qRT-PCR thermocyclers, now commonplace in many labs, and develop an effective high-throughput gene expression platform. We propose that the HTP96 protocol will significantly benefit any plant scientist with the task of obtaining hundreds of high quality RNA extractions.

  11. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    International Nuclear Information System (INIS)

    McGowan, S E; Albertini, F; Lomax, A J; Thomas, S J

    2015-01-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. Plan robustness of 16 skull base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was determined by calculating the error-bar dose distribution (ebDD) for all the plans and by defining some metrics used to define protocols aiding the plan assessment. Additionally, an example of how to clinically use the defined robustness database is given whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve the plan robustness was analysed. Using the ebDD it was found that range errors had a smaller effect on dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to the range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed for the identification of a specific patient that may have benefited from a treatment of greater individuality. A new beam arrangement showed to be preferential when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to determine plans that, although delivering a dosimetrically adequate dose distribution, have resulted in sub-optimal robustness to these uncertainties. For these cases the use of different beam start conditions may improve the plan robustness to set-up and range uncertainties. (paper)
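
    A minimal sketch of how an error-bar dose distribution and an error-bar volume histogram can be assembled from a set of perturbed dose calculations is given below; the arrays, scenario count, and structure mask are made-up illustrations and not the authors' clinical implementation.

```python
import numpy as np

def error_bar_dose(dose_scenarios):
    """Voxel-wise half-spread (max - min)/2 over nominal plus perturbed dose grids."""
    stack = np.stack(dose_scenarios)           # shape: (n_scenarios, n_voxels)
    return 0.5 * (stack.max(axis=0) - stack.min(axis=0))

def error_bar_volume_histogram(eb_dose, structure_mask, bins=50):
    """Fraction of a structure's voxels whose dose uncertainty exceeds each threshold."""
    values = eb_dose[structure_mask]
    thresholds = np.linspace(0.0, values.max(), bins)
    volume_fraction = np.array([(values >= t).mean() for t in thresholds])
    return thresholds, volume_fraction

# Usage with made-up data: a nominal plan plus range and set-up error scenarios.
rng = np.random.default_rng(1)
nominal = rng.uniform(0.0, 2.0, size=10_000)               # Gy per fraction, flattened grid
scenarios = [nominal] + [nominal + rng.normal(0.0, 0.05, nominal.size) for _ in range(8)]
ebdd = error_bar_dose(scenarios)
brainstem = rng.random(nominal.size) < 0.05                # illustrative structure mask
t, vf = error_bar_volume_histogram(ebdd, brainstem)
print(ebdd.mean(), vf[0])                                  # vf[0] == 1.0 by construction
```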

  12. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    Science.gov (United States)

    McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.

    2015-04-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. Plan robustness of 16 skull base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was determined by calculating the error-bar dose distribution (ebDD) for all the plans and by defining some metrics used to define protocols aiding the plan assessment. Additionally, an example of how to clinically use the defined robustness database is given whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve the plan robustness was analysed. Using the ebDD it was found that range errors had a smaller effect on dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to the range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed for the identification of a specific patient that may have benefited from a treatment of greater individuality. A new beam arrangement showed to be preferential when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to determine plans that, although delivering a dosimetrically adequate dose distribution, have resulted in sub-optimal robustness to these uncertainties. For these cases the use of different beam start conditions may improve the plan robustness to set-up and range uncertainties.

  13. Dementia and Imagination: a mixed-methods protocol for arts and science research.

    Science.gov (United States)

    Windle, Gill; Newman, Andrew; Burholt, Vanessa; Woods, Bob; O'Brien, Dave; Baber, Michael; Hounsome, Barry; Parkinson, Clive; Tischler, Victoria

    2016-11-02

    Dementia and Imagination is a multidisciplinary research collaboration bringing together arts and science to address current evidence limitations around the benefits of visual art activities in dementia care. The research questions ask: Can art improve quality of life and well-being? If it does make a difference, how does it do this, and why? Does it have wider social and community benefits? This mixed-methods study recruits participants from residential care homes, National Health Service (NHS) wards and communities in England and Wales. A visual art intervention is developed and delivered as one 2-hour weekly group session for 3 months in care and community settings to N=100 people living with dementia. Quantitative and qualitative data are collected at 3 time points to examine the impact on their quality of life, and the perceptions of those who care for them (N=100 family and professional carers). Repeated-measures systematic observations of well-being are obtained during the intervention (intervention vs control condition). The health economics component conducts a social return on investment evaluation of the intervention. Qualitative data are collected at 3 time points (n=35 carers/staff and n=35 people living with dementia) to explore changes in social connectedness. Self-reported outcomes of the intervention delivery are obtained (n=100). Focus groups with intervention participants (n=40) explore perceptions of impact. Social network analysis of quantitative and qualitative data from arts and healthcare professionals (N=100) examines changes in perceptions and practice. The study is approved by North Wales Research Ethics Committee-West. A range of activities will share the research findings, including international and national academic conferences, quarterly newsletters and the project website. Public engagement projects will target a broad range of stakeholders. Policy and practice summaries will be developed. The visual art intervention protocol will

  14. Evaluation of a focussed protocol for hand-held echocardiography and computer-assisted auscultation in detecting latent rheumatic heart disease in scholars.

    Science.gov (United States)

    Zühlke, Liesl J; Engel, Mark E; Nkepu, Simpiwe; Mayosi, Bongani M

    2016-08-01

    Echocardiography is the diagnostic test of choice for latent rheumatic heart disease. The utility of echocardiography for large-scale screening is limited by high cost, complex diagnostic protocols, and the time to acquire multiple images. We evaluated the performance of a brief hand-held echocardiography protocol and computer-assisted auscultation in detecting latent rheumatic heart disease with or without pathological murmur. A total of 27 asymptomatic patients with latent rheumatic heart disease based on the World Heart Federation criteria and 66 healthy controls were examined by standard cardiac auscultation to detect pathological murmur. Hand-held echocardiography using a focussed protocol that utilises one view - that is, the parasternal long-axis view - and one measurement - that is, mitral regurgitant jet - and computer-assisted auscultation utilising an automated decision tool were performed on all patients. The sensitivity and specificity of computer-assisted auscultation in latent rheumatic heart disease were 4% (95% CI 1.0-20.4%) and 93.7% (95% CI 84.5-98.3%), respectively. The sensitivity and specificity of the focussed hand-held echocardiography protocol for definite rheumatic heart disease were 92.3% (95% CI 63.9-99.8%) and 100%, respectively. The test reliability of hand-held echocardiography was 98.7% for definite and 94.7% for borderline disease, and the adjusted diagnostic odds ratios were 1041 and 263.9 for definite and borderline disease, respectively. Computer-assisted auscultation has extremely low sensitivity but high specificity for pathological murmur in latent rheumatic heart disease. Focussed hand-held echocardiography has fair sensitivity but high specificity and diagnostic utility for definite or borderline rheumatic heart disease in asymptomatic patients.
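
    Sensitivity, specificity, and the diagnostic odds ratio reported here are derived from a 2x2 table; the sketch below shows the standard calculation. The 0.5 continuity correction is one common way to handle an empty cell and is not necessarily the adjustment used in this study, and the counts are illustrative rather than the study data.

```python
def diagnostic_performance(tp, fp, fn, tn):
    """Sensitivity, specificity and diagnostic odds ratio from a 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # Haldane-Anscombe correction avoids division by zero when any cell is empty.
    if 0 in (tp, fp, fn, tn):
        tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))
    dor = (tp * tn) / (fp * fn)
    return sensitivity, specificity, dor

# Usage with illustrative counts (screen-positive cases vs. healthy controls).
sens, spec, dor = diagnostic_performance(tp=45, fp=5, fn=5, tn=95)
print(round(sens, 3), round(spec, 3), round(dor, 1))
```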

  15. Application of Computational Methods in Planaria Research: A Current Update

    Directory of Open Access Journals (Sweden)

    Ghosh Shyamasree

    2017-07-01

    Planaria is a member of the phylum Platyhelminthes, the flatworms. Planarians possess the unique ability to regenerate from adult stem cells, or neoblasts, and are important model organisms for regeneration and developmental studies. Although research is being actively carried out globally through conventional methods to understand regeneration from neoblasts and the developmental biology, neurobiology and immunology of Planaria, many thought-provoking questions remain concerning stem cell plasticity and the uniqueness of the regenerative potential of planarians amongst other members of the phylum Platyhelminthes. The complexity of receptors and signalling mechanisms, the immune system network, the biology of repair, and responses to injury are yet to be understood in Planaria. Genomic and transcriptomic studies have generated a vast repository of data, but their availability and analysis remain challenging. Data mining, computational approaches to gene curation, bioinformatics tools for the analysis of transcriptomic data, database design, and the application of algorithms to decipher morphological changes produced by RNA interference (RNAi) approaches and to interpret regeneration experiments are a new venture in Planaria research that is helping researchers across the globe understand the biology. We highlight the application of Hidden Markov models (HMMs) in the design of computational tools and their use in decoding the complex biology of Planaria.

  16. Scalable optical quantum computer

    Energy Technology Data Exchange (ETDEWEB)

    Manykin, E A; Mel'nichenko, E V [Institute for Superconductivity and Solid-State Physics, Russian Research Centre 'Kurchatov Institute', Moscow (Russian Federation)

    2014-12-31

    A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr³⁺, regularly located in the lattice of the orthosilicate (Y₂SiO₅) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

  17. Scalable optical quantum computer

    International Nuclear Information System (INIS)

    Manykin, E A; Mel'nichenko, E V

    2014-01-01

    A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr³⁺, regularly located in the lattice of the orthosilicate (Y₂SiO₅) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

  18. Software Defects, Scientific Computation and the Scientific Method

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    Computation has rapidly grown in the last 50 years so that in many scientific areas it is the dominant partner in the practice of science. Unfortunately, unlike the experimental sciences, it does not adhere well to the principles of the scientific method as espoused by, for example, the philosopher Karl Popper. Such principles are built around the notions of deniability and reproducibility. Although much research effort has been spent on measuring the density of software defects, much less has been spent on the more difficult problem of measuring their effect on the output of a program. This talk explores these issues with numerous examples suggesting how this situation might be improved to match the demands of modern science. Finally it develops a theoretical model based on an amalgam of statistical mechanics and Hartley/Shannon information theory which suggests that software systems have strong implementation independent behaviour and supports the widely observed phenomenon that defects clust...

  19. Computation of Hemagglutinin Free Energy Difference by the Confinement Method

    Science.gov (United States)

    2017-01-01

    Hemagglutinin (HA) mediates membrane fusion, a crucial step during influenza virus cell entry. How many HAs are needed for this process is still subject to debate. To aid in this discussion, the confinement free energy method was used to calculate the conformational free energy difference between the extended intermediate and postfusion state of HA. Special care was taken to comply with the general guidelines for free energy calculations, thereby obtaining convergence and demonstrating reliability of the results. The energy that one HA trimer contributes to fusion was found to be 34.2 ± 3.4 kBT, similar to the known contributions from other fusion proteins. Although computationally expensive, the technique used is a promising tool for the further energetic characterization of fusion protein mechanisms. Knowledge of the energetic contributions per protein, and of conserved residues that are crucial for fusion, aids in the development of fusion inhibitors for antiviral drugs. PMID:29151344

  20. Conference on Boundary and Interior Layers : Computational and Asymptotic Methods

    CERN Document Server

    Stynes, Martin; Zhang, Zhimin

    2017-01-01

    This volume collects papers associated with lectures that were presented at the BAIL 2016 conference, which was held from 14 to 19 August 2016 at Beijing Computational Science Research Center and Tsinghua University in Beijing, China. It showcases the variety and quality of current research into numerical and asymptotic methods for theoretical and practical problems whose solutions involve layer phenomena. The BAIL (Boundary And Interior Layers) conferences, held usually in even-numbered years, bring together mathematicians and engineers/physicists whose research involves layer phenomena, with the aim of promoting interaction between these often-separate disciplines. These layers appear as solutions of singularly perturbed differential equations of various types, and are common in physical problems, most notably in fluid dynamics. This book is of interest for current researchers from mathematics, engineering and physics whose work involves the accurate approximation of solutions of singularly perturbed diffe...

  1. Computational Methods for Sensitivity and Uncertainty Analysis in Criticality Safety

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Childs, R.L.; Rearden, B.T.

    1999-01-01

    Interest in the sensitivity methods that were developed and widely used in the 1970s (the FORSS methodology at ORNL among others) has increased recently as a result of potential use in the area of criticality safety data validation procedures to define computational bias, uncertainties and area(s) of applicability. Functional forms of the resulting sensitivity coefficients can be used as formal parameters in the determination of applicability of benchmark experiments to their corresponding industrial application areas. In order for these techniques to be generally useful to the criticality safety practitioner, the procedures governing their use had to be updated and simplified. This paper will describe the resulting sensitivity analysis tools that have been generated for potential use by the criticality safety community
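
    For orientation, the relative sensitivity coefficients referred to here are conventionally defined as the fractional change in k-effective per fractional change in a cross section; a standard form (given as a general reminder, not quoted from the paper) is:

```latex
% Relative sensitivity of k_eff to a cross section \sigma_x (energy-integrated form):
S_{k,\sigma_x} \;=\; \frac{\sigma_x}{k}\,\frac{\partial k}{\partial \sigma_x}
\;\approx\; \frac{\Delta k / k}{\Delta \sigma_x / \sigma_x},
% so a 1% increase in \sigma_x changes k by roughly S_{k,\sigma_x} percent.
```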

  2. Statistical physics and computational methods for evolutionary game theory

    CERN Document Server

    Javarone, Marco Alberto

    2018-01-01

    This book presents an introduction to Evolutionary Game Theory (EGT) which is an emerging field in the area of complex systems attracting the attention of researchers from disparate scientific communities. EGT allows one to represent and study several complex phenomena, such as the emergence of cooperation in social systems, the role of conformity in shaping the equilibrium of a population, and the dynamics in biological and ecological systems. Since EGT models belong to the area of complex systems, statistical physics constitutes a fundamental ingredient for investigating their behavior. At the same time, the complexity of some EGT models, such as those realized by means of agent-based methods, often require the implementation of numerical simulations. Therefore, beyond providing an introduction to EGT, this book gives a brief overview of the main statistical physics tools (such as phase transitions and the Ising model) and computational strategies for simulating evolutionary games (such as Monte Carlo algor...
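
    As one example of the Monte Carlo strategies mentioned for simulating evolutionary games, the hedged sketch below runs pairwise-imitation dynamics with the Fermi update rule for a two-strategy prisoner's dilemma in a well-mixed population; the payoff matrix, selection noise K, population size, and step count are arbitrary illustrative choices, not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(42)

# Prisoner's dilemma payoffs: rows/cols are (cooperate, defect) for (self, opponent).
payoff = np.array([[1.0, 0.0],
                   [1.5, 0.1]])
N, K, steps = 500, 0.5, 20_000          # population size, selection noise, MC steps
strategies = rng.integers(0, 2, N)      # 0 = cooperator, 1 = defector

def average_payoff(idx):
    """Mean payoff of player idx against the rest of a well-mixed population."""
    others = np.delete(strategies, idx)
    return payoff[strategies[idx], others].mean()

for _ in range(steps):
    a, b = rng.choice(N, size=2, replace=False)
    pa, pb = average_payoff(a), average_payoff(b)
    # Fermi rule: a imitates b with a probability that grows with the payoff difference.
    if rng.random() < 1.0 / (1.0 + np.exp(-(pb - pa) / K)):
        strategies[a] = strategies[b]

print("fraction of cooperators:", (strategies == 0).mean())
```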

  3. Activation method for measuring the neutron spectra parameters. Computer software

    International Nuclear Information System (INIS)

    Efimov, B.V.; Ionov, V.S.; Konyaev, S.I.; Marin, S.V.

    2005-01-01

    A mathematical statement of the problem of determining the spectral characteristics of neutron fields with the unified activation detectors (UKD) developed at RRC KI is presented. The method proposed by the authors for processing the results of activation measurements and calculating the parameters used to estimate the characteristics of neutron spectra is discussed. Features of the processing of experimental data obtained in activation measurements with UKD are considered. UKD activation detectors contain several specially selected isotopes which, upon irradiation, give peaks of activity across the common spectrum scale of activity. Computational processing of the measurement results is applied to determine spectrum parameters for nuclear reactor installations with thermal and near-thermal power spectra of neutrons. An example of the processing of measurement data obtained at the RRC KI research reactor F-1 is given [ru]

  4. Cameras for Public Health Surveillance: A Methods Protocol for Crowdsourced Annotation of Point-of-Sale Photographs.

    Science.gov (United States)

    Ilakkuvan, Vinu; Tacelosky, Michael; Ivey, Keith C; Pearson, Jennifer L; Cantrell, Jennifer; Vallone, Donna M; Abrams, David B; Kirchner, Thomas R

    2014-04-09

    Photographs are an effective way to collect detailed and objective information about the environment, particularly for public health surveillance. However, accurately and reliably annotating (ie, extracting information from) photographs remains difficult, a critical bottleneck inhibiting the use of photographs for systematic surveillance. The advent of distributed human computation (ie, crowdsourcing) platforms represents a veritable breakthrough, making it possible for the first time to accurately, quickly, and repeatedly annotate photos at relatively low cost. This paper describes a methods protocol, using photographs from point-of-sale surveillance studies in the field of tobacco control to demonstrate the development and testing of custom-built tools that can greatly enhance the quality of crowdsourced annotation. Enhancing the quality of crowdsourced photo annotation requires a number of approaches and tools. The crowdsourced photo annotation process is greatly simplified by decomposing the overall process into smaller tasks, which improves accuracy and speed and enables adaptive processing, in which irrelevant data is filtered out and more difficult targets receive increased scrutiny. Additionally, zoom tools enable users to see details within photographs and crop tools highlight where within an image a specific object of interest is found, generating a set of photographs that answer specific questions. Beyond such tools, optimizing the number of raters (ie, crowd size) for accuracy and reliability is an important facet of crowdsourced photo annotation. This can be determined in a systematic manner based on the difficulty of the task and the desired level of accuracy, using receiver operating characteristic (ROC) analyses. Usability tests of the zoom and crop tool suggest that these tools significantly improve annotation accuracy. The tests asked raters to extract data from photographs, not for the purposes of assessing the quality of that data, but rather to
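
    One simple way to reason about the "optimal number of raters" question, under the strong assumption of independent raters with a common accuracy on a binary task, is a binomial majority-vote model. The paper itself uses ROC analyses, so the sketch below is only a complementary back-of-the-envelope illustration; the function names and accuracy values are assumptions.

```python
from math import comb

def majority_vote_accuracy(p_rater, n_raters):
    """P(majority of n independent raters is correct), given per-rater accuracy p_rater.

    Assumes an odd number of raters and a binary annotation task.
    """
    k_needed = n_raters // 2 + 1
    return sum(comb(n_raters, k) * p_rater**k * (1 - p_rater)**(n_raters - k)
               for k in range(k_needed, n_raters + 1))

def smallest_crowd(p_rater, target, max_raters=51):
    """Smallest odd crowd size whose majority vote reaches the target accuracy."""
    for n in range(1, max_raters + 1, 2):
        if majority_vote_accuracy(p_rater, n) >= target:
            return n
    return None

# Usage: harder tasks (lower per-rater accuracy) need larger crowds to reach 95% accuracy.
for p in (0.65, 0.75, 0.85):
    print(p, smallest_crowd(p, target=0.95))
```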

  5. A computed microtomography method for understanding epiphyseal growth plate fusion

    Science.gov (United States)

    Staines, Katherine A.; Madi, Kamel; Javaheri, Behzad; Lee, Peter D.; Pitsillides, Andrew A.

    2017-12-01

    The epiphyseal growth plate is a developmental region responsible for linear bone growth, in which chondrocytes undertake a tightly regulated series of biological processes. Concomitant with the cessation of growth and sexual maturation, the human growth plate undergoes progressive narrowing, and ultimately disappears. Despite the crucial role of this growth plate fusion ‘bridging’ event, the precise mechanisms by which it is governed are complex and yet to be established. Progress is likely hindered by the current methods for growth plate visualisation; these are invasive and largely rely on histological procedures. Here we describe our non-invasive method utilising synchrotron x-ray computed microtomography for the examination of growth plate bridging, which ultimately leads to its closure coincident with termination of further longitudinal bone growth. We then apply this method to a dataset obtained from a benchtop microcomputed tomography scanner to highlight its potential for wide usage. Furthermore, we conduct finite element modelling at the micron-scale to reveal the effects of growth plate bridging on local tissue mechanics. Employment of these 3D analyses of growth plate bone bridging is likely to advance our understanding of the physiological mechanisms that control growth plate fusion.

  6. Methods and computer codes for probabilistic sensitivity and uncertainty analysis

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1985-01-01

    This paper describes the methods and application experience with two computer codes that are now available from the National Energy Software Center at Argonne National Laboratory. The purpose of the SCREEN code is to identify a group of the most important input variables of a code that has many (tens to hundreds of) input variables with uncertainties, and to do this without relying on judgment or exhaustive sensitivity studies. The purpose of the PROSA-2 code is to propagate uncertainties and calculate the distributions of interesting output variable(s) of a safety analysis code using response surface techniques, based on the same runs used for screening. Several applications are discussed, but the codes are generic, not tailored to any specific safety application code. They are compatible in terms of input/output requirements but also independent of each other, e.g., PROSA-2 can be used without first using SCREEN if a set of important input variables has first been selected by other methods. Also, although SCREEN can select cases to be run (by random sampling), a user can select cases by other methods if he so prefers, and still use the rest of SCREEN for identifying important input variables.

  7. Emerging Computational Methods for the Rational Discovery of Allosteric Drugs.

    Science.gov (United States)

    Wagner, Jeffrey R; Lee, Christopher T; Durrant, Jacob D; Malmstrom, Robert D; Feher, Victoria A; Amaro, Rommie E

    2016-06-08

    Allosteric drug development holds promise for delivering medicines that are more selective and less toxic than those that target orthosteric sites. To date, the discovery of allosteric binding sites and lead compounds has been mostly serendipitous, achieved through high-throughput screening. Over the past decade, structural data has become more readily available for larger protein systems and more membrane protein classes (e.g., GPCRs and ion channels), which are common allosteric drug targets. In parallel, improved simulation methods now provide better atomistic understanding of the protein dynamics and cooperative motions that are critical to allosteric mechanisms. As a result of these advances, the field of predictive allosteric drug development is now on the cusp of a new era of rational structure-based computational methods. Here, we review algorithms that predict allosteric sites based on sequence data and molecular dynamics simulations, describe tools that assess the druggability of these pockets, and discuss how Markov state models and topology analyses provide insight into the relationship between protein dynamics and allosteric drug binding. In each section, we first provide an overview of the various method classes before describing relevant algorithms and software packages.

  8. Computation of rectangular source integral by rational parameter polynomial method

    International Nuclear Information System (INIS)

    Prabha, Hem

    2001-01-01

    Hubbell et al. (J. Res. Nat. Bureau Standards 64C (1960) 121) have obtained a series expansion for the calculation of the radiation field generated by a plane isotropic rectangular source (plaque), in which the leading term is the integral H(a,b). In this paper another integral I(a,b), which is related to the integral H(a,b), has been solved by the rational parameter polynomial method. From I(a,b), we compute H(a,b). Using this method, the integral I(a,b) is expressed in the form of a polynomial of a rational parameter. Generally, a function f(x) is expressed in terms of x; in this method it is expressed in terms of x/(1+x). In this way, the accuracy of the expression is good over a wide range of x as compared to the earlier approach. The results for I(a,b) and H(a,b) are given for a sixth degree polynomial and are found to be in good agreement with the results obtained by numerically integrating the integral. Accuracy could be increased either by increasing the degree of the polynomial or by dividing the range of integration. The results of H(a,b) and I(a,b) are given for values of b and a up to 2.0 and 20.0, respectively.
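
    The central trick, expanding in the rational parameter x/(1+x) rather than in x, can be illustrated numerically. The sketch below fits degree-6 least-squares polynomials in both variables to an arbitrary function that flattens at large x (arctan, chosen purely for illustration, not the paper's integral I(a,b)) and compares the maximum errors over a wide range.

```python
import numpy as np

# Target function to approximate over a wide range of x (illustrative choice only).
f = lambda x: np.arctan(x)

x = np.linspace(0.0, 20.0, 400)
t = x / (1.0 + x)                      # rational parameter, mapped into [0, 1)

# Degree-6 least-squares polynomials in x and in t = x/(1+x).
poly_x = np.polynomial.Polynomial.fit(x, f(x), 6)
poly_t = np.polynomial.Polynomial.fit(t, f(x), 6)

err_x = np.max(np.abs(poly_x(x) - f(x)))
err_t = np.max(np.abs(poly_t(t) - f(x)))
print(f"max error, polynomial in x:        {err_x:.2e}")
print(f"max error, polynomial in x/(1+x):  {err_t:.2e}")
```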

  9. Health care access for rural youth on equal terms? A mixed methods study protocol in northern Sweden.

    Science.gov (United States)

    Goicolea, Isabel; Carson, Dean; San Sebastian, Miguel; Christianson, Monica; Wiklund, Maria; Hurtig, Anna-Karin

    2018-01-11

    The purpose of this paper is to propose a protocol for researching the impact of rural youth health service strategies on health care access. There has been no published comprehensive assessment of the effectiveness of youth health strategies in rural areas, and there is no clearly articulated model of how such assessments might be conducted. The protocol described here aims to gather information to: i) assess rural youth access to health care according to their needs, ii) identify and understand the strategies developed in rural areas to promote youth access to health care, and iii) propose actions for further improvement. The protocol is described with particular reference to research being undertaken in the four northernmost counties of Sweden, which contain a widely dispersed and diverse youth population. The protocol proposes qualitative and quantitative methodologies sequentially in four phases. First, to map youth access to health care according to their health care needs, including assessing horizontal equity (equal use of health care for equivalent health needs) and vertical equity (people with greater health needs should receive more health care than those with lesser needs). Second, a multiple case study design investigates strategies developed across the region (youth clinics, internet applications, public health programs) to improve youth access to health care. Third, qualitative comparative analysis of the 24 rural municipalities in the region identifies the best combination of conditions leading to high youth access to health care. Fourth, a concept mapping study involving rural stakeholders, care providers and youth provides recommended actions to improve rural youth access to health care. The implementation of this research protocol will contribute to 1) generating knowledge that could contribute to strengthening rural youth access to health care, as well as to 2) advancing the application of mixed methods to explore access to health care.

  10. The AgMIP Coordinated Climate-Crop Modeling Project (C3MP): Methods and Protocols

    Science.gov (United States)

    Shukla, Sonali P.; Ruane, Alexander Clark

    2014-01-01

    Climate change is expected to alter a multitude of factors important to agricultural systems, including pests, diseases, weeds, extreme climate events, water resources, soil degradation, and socio-economic pressures. Changes to carbon dioxide concentration ([CO2]), temperature, and water (CTW) will be the primary drivers of change in crop growth and agricultural systems. Therefore, establishing the CTW-change sensitivity of crop yields is an urgent research need and warrants diverse methods of investigation. Crop models provide a biophysical, process-based tool to investigate crop responses across varying environmental conditions and farm management techniques, and have been applied in climate impact assessment by using a variety of methods (White et al., 2011, and references therein). However, there is a significant amount of divergence between various crop models' responses to CTW changes (Rotter et al., 2011). While the application of a site-based crop model is relatively simple, the coordination of such agricultural impact assessments on larger scales requires consistent and timely contributions from a large number of crop modelers, each time a new global climate model (GCM) scenario or downscaling technique is created. A coordinated, global effort to rapidly examine CTW sensitivity across multiple crops, crop models, and sites is needed to aid model development and enhance the assessment of climate impacts (Deser et al., 2012). To fulfill this need, the Coordinated Climate-Crop Modeling Project (C3MP) (Ruane et al., 2014) was initiated within the Agricultural Model Intercomparison and Improvement Project (AgMIP; Rosenzweig et al., 2013). The submitted results from C3MP Phase 1 (February 15, 2013-December 31, 2013) are currently being analyzed. This chapter serves to present and update the C3MP protocols, discuss the initial participation and general findings, comment on needed adjustments, and describe continued and future development. AgMIP aims to improve

  11. Comparing calibration methods of electron beams using plane-parallel chambers with absorbed-dose to water based protocols

    International Nuclear Information System (INIS)

    Stewart, K.J.; Seuntjens, J.P.

    2002-01-01

    Recent absorbed-dose-based protocols allow for two methods of calibrating electron beams using plane-parallel chambers, one using the 60Co-based N_D,w calibration factor of the plane-parallel chamber, and the other relying on cross-calibration of the plane-parallel chamber in a high-energy electron beam against a cylindrical chamber which has a 60Co N_D,w factor. The second method is recommended as it avoids problems associated with the P_wall correction factors at 60Co for plane-parallel chambers, which are used in the determination of the beam quality conversion factors. In this article we investigate the consistency of these two methods for the PTW Roos, Scanditronics NACP02, and PTW Markus chambers. We processed our data using both the AAPM TG-51 and the IAEA TRS-398 protocols. Wall correction factors in 60Co beams and absorbed-dose beam quality conversion factors for 20 MeV electrons were derived for these chambers by cross-calibration against a cylindrical ionization chamber. Systematic differences of up to 1.6% were found between our values of P_wall and those from the Monte Carlo calculations underlying AAPM TG-51, and up to 0.6% when comparing with the IAEA TRS-398 protocol. The differences in P_wall translate directly into differences in the beam quality conversion factors in the respective protocols. The relatively large spread in the experimental data of P_wall, and consequently the absorbed-dose beam quality conversion factor, confirms the importance of the cross-calibration technique when using plane-parallel chambers for calibrating clinical electron beams. We confirmed that for well-guarded plane-parallel chambers, the fluence perturbation correction factor at d_max is not significantly different from the value at d_ref. For the PTW Markus chamber the variation in the latter factor is consistent with published fits relating it to the average energy at depth.
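
    For context, the cross-calibration route discussed here is conventionally written (in TRS-398-style notation, given as a general reminder rather than a quotation from this article) as:

```latex
% Cross-calibration of a plane-parallel (pp) chamber against a cylindrical (cyl) chamber
% in a high-energy electron beam of quality Q_cross:
N_{D,w,Q_{\mathrm{cross}}}^{\,pp}
  = \frac{M_{Q_{\mathrm{cross}}}^{\,cyl}\, N_{D,w,{}^{60}\mathrm{Co}}^{\,cyl}\,
          k_{Q_{\mathrm{cross}},{}^{60}\mathrm{Co}}^{\,cyl}}
         {M_{Q_{\mathrm{cross}}}^{\,pp}},
\qquad
D_{w,Q} = M_{Q}^{\,pp}\, N_{D,w,Q_{\mathrm{cross}}}^{\,pp}\, k_{Q,Q_{\mathrm{cross}}}^{\,pp}.
```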

  12. Thermal/optical methods for elemental carbon quantification in soils and urban dusts: equivalence of different analysis protocols.

    Directory of Open Access Journals (Sweden)

    Yongming Han

    Full Text Available Quantifying elemental carbon (EC) content in geological samples is challenging due to interferences of crustal, salt, and organic material. Thermal/optical analysis, combined with acid pretreatment, represents a feasible approach. However, the consistency of various thermal/optical analysis protocols for this type of samples has never been examined. In this study, urban street dust and soil samples from Baoji, China were pretreated with acids and analyzed with four thermal/optical protocols to investigate how analytical conditions and optical correction affect EC measurement. The EC values measured with reflectance correction (ECR) were found always higher and less sensitive to temperature program than the EC values measured with transmittance correction (ECT). A high-temperature method with extended heating times (STN120) showed the highest ECT/ECR ratio (0.86) while a low-temperature protocol (IMPROVE-550), with heating time adjusted for sample loading, showed the lowest (0.53). STN ECT was higher than IMPROVE ECT, in contrast to results from aerosol samples. A higher peak inert-mode temperature and extended heating times can elevate ECT/ECR ratios for pretreated geological samples by promoting pyrolyzed organic carbon (PyOC) removal over EC under trace levels of oxygen. Considering that PyOC within filter increases ECR while decreases ECT from the actual EC levels, simultaneous ECR and ECT measurements would constrain the range of EC loading and provide information on method performance. Further testing with standard reference materials of common environmental matrices supports the findings. Char and soot fractions of EC can be further separated using the IMPROVE protocol. The char/soot ratio was lower in street dusts (2.2 on average) than in soils (5.2 on average), most likely reflecting motor vehicle emissions. The soot concentrations agreed with EC from CTO-375, a pure thermal method.

  13. Thermal/optical methods for elemental carbon quantification in soils and urban dusts: equivalence of different analysis protocols.

    Science.gov (United States)

    Han, Yongming; Chen, Antony; Cao, Junji; Fung, Kochy; Ho, Fai; Yan, Beizhan; Zhan, Changlin; Liu, Suixin; Wei, Chong; An, Zhisheng

    2013-01-01

    Quantifying elemental carbon (EC) content in geological samples is challenging due to interferences of crustal, salt, and organic material. Thermal/optical analysis, combined with acid pretreatment, represents a feasible approach. However, the consistency of various thermal/optical analysis protocols for this type of samples has never been examined. In this study, urban street dust and soil samples from Baoji, China were pretreated with acids and analyzed with four thermal/optical protocols to investigate how analytical conditions and optical correction affect EC measurement. The EC values measured with reflectance correction (ECR) were found always higher and less sensitive to temperature program than the EC values measured with transmittance correction (ECT). A high-temperature method with extended heating times (STN120) showed the highest ECT/ECR ratio (0.86) while a low-temperature protocol (IMPROVE-550), with heating time adjusted for sample loading, showed the lowest (0.53). STN ECT was higher than IMPROVE ECT, in contrast to results from aerosol samples. A higher peak inert-mode temperature and extended heating times can elevate ECT/ECR ratios for pretreated geological samples by promoting pyrolyzed organic carbon (PyOC) removal over EC under trace levels of oxygen. Considering that PyOC within filter increases ECR while decreases ECT from the actual EC levels, simultaneous ECR and ECT measurements would constrain the range of EC loading and provide information on method performance. Further testing with standard reference materials of common environmental matrices supports the findings. Char and soot fractions of EC can be further separated using the IMPROVE protocol. The char/soot ratio was lower in street dusts (2.2 on average) than in soils (5.2 on average), most likely reflecting motor vehicle emissions. The soot concentrations agreed with EC from CTO-375, a pure thermal method.

  14. Study protocol for the Cities Changing Diabetes programme: a global mixed-methods approach.

    Science.gov (United States)

    Napier, A David; Nolan, John J; Bagger, Malene; Hesseldal, Louise; Volkmann, Anna-Maria

    2017-11-08

    Urban living has been shown to affect health in various ways. As the world is becoming more urbanised and almost two-thirds of people with diabetes now live in cities, research into the relationship between urban living, health and diabetes is key to improving the lives of many. The majority of people with diabetes have type 2 diabetes, a subset linked to overweight and obesity, decreased physical activity and unhealthy diets. Diabetes has significant consequences for those living with the condition as well as their families, relationships and wider society. Although care and management are improving, complications remain common, and diabetes is among the leading causes of vision loss, amputation, neuropathy and renal and cardiovascular disease worldwide. We present a research protocol for exploring the drivers of type 2 diabetes and its complications in urban settings through the Cities Changing Diabetes (CCD) partnership programme. A global study protocol is implemented in eight collaborating CCD partner cities. In each city, academic institutions, municipal representatives and local stakeholders collaborate to set research priorities and plan implementation of findings. Local academic teams execute the study following the global study protocol presented here. A quantitative Rule of Halves analysis obtains measures of the magnitude of the diabetes burden, the diagnosis rates in each city and the outcomes of care. A qualitative Diabetes Vulnerability Assessment explores the urban context in vulnerability to type 2 diabetes and identifies social factors and cultural determinants relevant to health, well-being and diabetes. The protocol steers the collection of primary and secondary data across the study sites. Research ethics board approval has been sought and obtained in each site. Findings from each of the local studies as well as the result from combined multisite (global) analyses will be reported in a series of core scientific journal papers. © Article author
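
    The quantitative Rule of Halves analysis mentioned in the protocol is, at its core, a simple cascade calculation. The sketch below is a minimal illustration; the city population, prevalence, and "half" proportions are invented placeholders rather than CCD data, and a real analysis would estimate each proportion from local sources.

```python
# Minimal Rule-of-Halves cascade sketch with hypothetical inputs.
# The classic rule assumes roughly half of the people are "lost" at each step;
# in a real analysis each proportion is estimated from local data.

adult_population = 5_000_000        # hypothetical city adult population
prevalence = 0.10                   # hypothetical type 2 diabetes prevalence

steps = [
    ("have diabetes", prevalence),
    ("are diagnosed", 0.5),
    ("receive care", 0.5),
    ("achieve treatment targets", 0.5),
    ("achieve desired outcomes", 0.5),
]

count = adult_population
for label, proportion in steps:
    count *= proportion
    print(f"{label:30s}: {count:,.0f}")
```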

  15. New method development in prehistoric stone tool research: evaluating use duration and data analysis protocols.

    Science.gov (United States)

    Evans, Adrian A; Macdonald, Danielle A; Giusca, Claudiu L; Leach, Richard K

    2014-10-01

    Lithic microwear is a research field of prehistoric stone tool (lithic) analysis that has been developed with the aim to identify how stone tools were used. It has been shown that laser scanning confocal microscopy has the potential to be a useful quantitative tool in the study of prehistoric stone tool function. In this paper, two important lines of inquiry are investigated: (1) whether the texture of worn surfaces is constant under varying durations of tool use, and (2) the development of rapid objective data analysis protocols. This study reports on the attempt to further develop these areas of study and results in a better understanding of the complexities underlying the development of flexible analytical algorithms for surface analysis. The results show that when sampling is optimised, surface texture may be linked to contact material type, independent of use duration. Further research is needed to validate this finding and test an expanded range of contact materials. The use of automated analytical protocols has shown promise but is only reliable if sampling location and scale are defined. Results suggest that the sampling protocol reports on the degree of worn surface invasiveness, complicating the ability to investigate duration related textural characterisation. Copyright © 2014. Published by Elsevier Ltd.
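
    As a rough illustration of why sampling location and scale matter for quantitative surface-texture work of this kind, the sketch below computes a simple areal texture parameter (the RMS height, Sq) of a synthetic surface at two sampling window sizes; the surface and window sizes are invented, and this is not the authors' analysis protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "worn surface": smooth waviness plus fine-scale roughness (purely illustrative).
n = 512
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
surface = 0.5 * np.sin(6 * np.pi * x) + 0.05 * rng.standard_normal((n, n))

def sq(patch: np.ndarray) -> float:
    """RMS height (Sq) of a levelled patch."""
    z = patch - patch.mean()
    return float(np.sqrt(np.mean(z ** 2)))

# The same surface characterised at two sampling scales gives different answers,
# which is why a sampling protocol has to fix both location and scale.
for window in (32, 256):
    patch = surface[:window, :window]
    print(f"window {window:3d} px: Sq = {sq(patch):.4f}")
```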

  16. Efficient Computational Research Protocol to Survey Free Energy Surface for Solution Chemical Reaction in the QM/MM Framework: The FEG-ER Methodology and Its Application to Isomerization Reaction of Glycine in Aqueous Solution.

    Science.gov (United States)

    Takenaka, Norio; Kitamura, Yukichi; Nagaoka, Masataka

    2016-03-03

    In solution chemical reaction, we often need to consider a multidimensional free energy (FE) surface (FES) which is analogous to a Born-Oppenheimer potential energy surface. To survey the FES, an efficient computational research protocol is proposed within the QM/MM framework; (i) we first obtain some stable states (or transition states) involved by optimizing their structures on the FES, in a stepwise fashion, finally using the free energy gradient (FEG) method, and then (ii) we directly obtain the FE differences among any arbitrary states on the FES, efficiently by employing the QM/MM method with energy representation (ER), i.e., the QM/MM-ER method. To validate the calculation accuracy and efficiency, we applied the above FEG-ER methodology to a typical isomerization reaction of glycine in aqueous solution, and reproduced quite satisfactorily the experimental value of the reaction FE. Further, it was found that the structural relaxation of the solute in the QM/MM force field is not negligible to estimate correctly the FES. We believe that the present research protocol should become prevailing as one computational strategy and will play promising and important roles in solution chemistry toward solution reaction ergodography.
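
    A bare-bones sketch of the free energy gradient idea is given below: at each optimization step the force on the solute is averaged over an ensemble of solvent configurations and used as the (negative) free energy gradient. Everything here is schematic; sample_solvent_configurations and solute_force are hypothetical placeholders for the QM/MM machinery, and the real FEG-ER protocol additionally involves constrained sampling and the energy-representation free energy evaluation.

```python
import numpy as np

def sample_solvent_configurations(solute_coords, n_samples):
    """Hypothetical placeholder: run QM/MM MD with the solute fixed and return
    a list of solvent snapshots equilibrated around solute_coords."""
    raise NotImplementedError

def solute_force(solute_coords, solvent_snapshot):
    """Hypothetical placeholder: QM/MM force on the solute atoms for one
    solvent configuration (shape: (n_atoms, 3))."""
    raise NotImplementedError

def feg_optimize(solute_coords, n_steps=50, n_samples=200, step_size=0.01):
    """Steepest-descent optimization on the free energy surface: the free energy
    gradient is approximated by the ensemble-averaged force on the solute."""
    coords = np.array(solute_coords, dtype=float)
    for _ in range(n_steps):
        snapshots = sample_solvent_configurations(coords, n_samples)
        mean_force = np.mean([solute_force(coords, s) for s in snapshots], axis=0)
        coords += step_size * mean_force        # move downhill on the FES (force = -gradient)
        if np.linalg.norm(mean_force) < 1e-4:   # crude convergence test
            break
    return coords
```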

  17. A fast iterative method for computing particle beams penetrating matter

    International Nuclear Information System (INIS)

    Boergers, C.

    1997-01-01

    Beams of microscopic particles penetrating matter are important in several fields. The application motivating our parameter choices in this paper is electron beam cancer therapy. Mathematically, a steady particle beam penetrating matter, or a configuration of several such beams, is modeled by a boundary value problem for a Boltzmann equation. Grid-based discretization of this problem leads to a system of algebraic equations. This system is typically very large because of the large number of independent variables in the Boltzmann equation (six if time independence is the only dimension-reducing assumption). If grid-based methods are to be practical at all, it is therefore necessary to develop fast solvers for the discretized problems. This is the subject of the present paper. For two-dimensional, mono-energetic, linear particle beam problems, we describe an iterative domain decomposition algorithm based on overlapping decompositions of the set of particle directions and computationally demonstrate its rapid, grid independent convergence. There appears to be no fundamental obstacle to generalizing the method to three-dimensional, energy dependent problems. 34 refs., 15 figs., 6 tabs

  18. Global Seabed Materials and Habitats Mapped: The Computational Methods

    Science.gov (United States)

    Jenkins, C. J.

    2016-02-01

    What the seabed is made of has proven difficult to map on the scale of whole ocean-basins. Direct sampling and observation can be augmented with proxy-parameter methods such as acoustics. Both avenues are essential to obtain enough detail and coverage, and also to validate the mapping methods. We focus on direct observations such as samplings, photo and video, probes, diver and sub reports, and surveyed features. These are often in word-descriptive form: over 85% of the records for site materials are in this form, whether as sample/view descriptions or classifications, or described parameters such as consolidation, color, odor, structures and components. Descriptions are absolutely necessary for unusual materials and for processes - in other words, for research. This project, dbSEABED, not only has the largest collection of seafloor materials data worldwide, but it uses advanced computational mathematics to obtain the best possible coverages and detail. Included in those techniques are linguistic text analysis (e.g., Natural Language Processing, NLP), fuzzy set theory (FST), and machine learning (ML, e.g., Random Forest). These techniques allow efficient and accurate import of huge datasets, thereby optimizing the data that exists. They merge quantitative and qualitative types of data for rich parameter sets, and extrapolate where the data are sparse for best map production. The dbSEABED data resources are now very widely used worldwide in oceanographic research, environmental management, the geosciences, engineering and survey.
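
    To give a flavour of how word-based seabed descriptions can be turned into quantitative values, the toy sketch below assigns fuzzy (gravel, sand, mud) memberships to descriptive terms and averages them over a description. The vocabulary and membership values are invented for illustration and are not dbSEABED's actual dictionaries or algorithms.

```python
# Toy fuzzy-set mapping of descriptive seabed terms to (gravel, sand, mud) memberships.
# The vocabulary and membership values below are invented for illustration only.

FUZZY_TERMS = {
    "gravelly": (0.8, 0.2, 0.0),
    "sandy":    (0.1, 0.8, 0.1),
    "sand":     (0.0, 1.0, 0.0),
    "muddy":    (0.0, 0.2, 0.8),
    "mud":      (0.0, 0.0, 1.0),
    "silty":    (0.0, 0.3, 0.7),
}

def description_to_membership(description: str):
    """Average the fuzzy memberships of the recognised terms in a free-text description."""
    words = description.lower().replace(",", " ").split()
    hits = [FUZZY_TERMS[w] for w in words if w in FUZZY_TERMS]
    if not hits:
        return None                      # nothing recognised; leave the record unclassified
    n = len(hits)
    return tuple(sum(component) / n for component in zip(*hits))

print(description_to_membership("muddy sand with silty patches"))   # -> (0.0, 0.5, 0.5)
```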

  19. Semi-coarsening multigrid methods for parallel computing

    Energy Technology Data Exchange (ETDEWEB)

    Jones, J.E.

    1996-12-31

    Standard multigrid methods are not well suited for problems with anisotropic coefficients which can occur, for example, on grids that are stretched to resolve a boundary layer. There are several different modifications of the standard multigrid algorithm that yield efficient methods for anisotropic problems. In the paper, we investigate the parallel performance of these multigrid algorithms. Multigrid algorithms which work well for anisotropic problems are based on line relaxation and/or semi-coarsening. In semi-coarsening multigrid algorithms a grid is coarsened in only one of the coordinate directions unlike standard or full-coarsening multigrid algorithms where a grid is coarsened in each of the coordinate directions. When both semi-coarsening and line relaxation are used, the resulting multigrid algorithm is robust and automatic in that it requires no knowledge of the nature of the anisotropy. This is the basic multigrid algorithm whose parallel performance we investigate in the paper. The algorithm is currently being implemented on an IBM SP2 and its performance is being analyzed. In addition to looking at the parallel performance of the basic semi-coarsening algorithm, we present algorithmic modifications with potentially better parallel efficiency. One modification reduces the amount of computational work done in relaxation at the expense of using multiple coarse grids. This modification is also being implemented with the aim of comparing its performance to that of the basic semi-coarsening algorithm.
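
    The difference between full coarsening and semi-coarsening is easy to show on a 2D grid: semi-coarsening keeps one coordinate direction at full resolution. The sketch below applies 1D full-weighting restriction in the x-direction only; it is a schematic illustration, not the algorithm or implementation from the paper.

```python
import numpy as np

def semicoarsen_x(u: np.ndarray) -> np.ndarray:
    """Restrict a 2D grid function to a grid coarsened in the x-direction only,
    using 1D full weighting (0.25, 0.5, 0.25) along x and no change along y."""
    ny, nx = u.shape
    coarse = 0.25 * u[:, 0:nx-2:2] + 0.5 * u[:, 1:nx-1:2] + 0.25 * u[:, 2:nx:2]
    return coarse

fine = np.random.default_rng(0).random((9, 9))   # (ny, nx) with nx = 2*k + 1
coarse = semicoarsen_x(fine)
print(fine.shape, "->", coarse.shape)            # (9, 9) -> (9, 4): y keeps full resolution
```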

  20. Particular application of methods of AdaBoost and LBP to the problems of computer vision

    OpenAIRE

    Волошин, Микола Володимирович

    2012-01-01

    The application of the AdaBoost method and the local binary pattern (LBP) method to different areas of computer vision, such as person identification and computer iridology, is considered in the article. The goal of the research is to develop error-correcting methods and systems for computer vision applications, and computer iridology in particular. The article also considers the choice of colour spaces, which are used for filtering and for pre-processing of images. Method of AdaB...
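
    For readers unfamiliar with LBP, the basic 8-neighbour operator takes only a few lines; the sketch below computes the classic 3x3 LBP code for each interior pixel of a grayscale image. It is a schematic version, without the uniform-pattern or rotation-invariant refinements, and is not taken from the article.

```python
import numpy as np

def lbp_3x3(image: np.ndarray) -> np.ndarray:
    """Classic 8-neighbour local binary pattern codes for the interior pixels of a 2D image."""
    img = image.astype(float)
    center = img[1:-1, 1:-1]
    # Neighbour offsets visited clockwise, starting from the top-left neighbour.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(center.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes += (neighbour >= center).astype(int) << bit   # set the bit if neighbour >= centre
    return codes

demo = np.array([[10, 20, 30],
                 [40, 25, 10],
                 [ 5, 60, 70]])
print(lbp_3x3(demo))   # one 8-bit LBP code for the single interior pixel
```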

  1. Non-unitary probabilistic quantum computing circuit and method

    Science.gov (United States)

    Williams, Colin P. (Inventor); Gingrich, Robert M. (Inventor)

    2009-01-01

    A quantum circuit performing quantum computation in a quantum computer. A chosen transformation of an initial n-qubit state is probabilistically obtained. The circuit comprises a unitary quantum operator obtained from a non-unitary quantum operator, operating on an n-qubit state and an ancilla state. When operation on the ancilla state provides a success condition, computation is stopped. When operation on the ancilla state provides a failure condition, computation is performed again on the ancilla state and the n-qubit state obtained in the previous computation, until a success condition is obtained.
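
    A small numerical sketch of the underlying idea (embedding a non-unitary operator into a unitary on a larger space and post-selecting on an ancilla measurement) is given below. It uses a standard Halmos-style dilation purely as an illustration and is not the circuit construction from the patent; the operator M is a made-up example.

```python
import numpy as np
from scipy.linalg import sqrtm

def dilate(M: np.ndarray) -> np.ndarray:
    """Embed a contraction M (largest singular value < 1) into a unitary on twice the
    dimension: U = [[M, sqrt(I - M M+)], [sqrt(I - M+ M), -M+]] (a Halmos-style dilation)."""
    n = M.shape[0]
    I = np.eye(n)
    top = np.hstack([M, sqrtm(I - M @ M.conj().T)])
    bottom = np.hstack([sqrtm(I - M.conj().T @ M), -M.conj().T])
    return np.vstack([top, bottom])

# Hypothetical non-unitary single-qubit operator, rescaled slightly below norm 1
# so the dilation stays numerically well conditioned.
M = np.array([[1.0, 0.3], [0.0, 0.5]], dtype=complex)
M = 0.9 * M / np.linalg.norm(M, 2)

U = dilate(M)
print("U is unitary:", np.allclose(U.conj().T @ U, np.eye(4)))

# Apply U to the system state joined with an ancilla prepared in |0>.  Measuring the
# ancilla in |0> ("success") leaves the system in M|psi>/||M|psi>|| with probability
# ||M|psi>||^2; on "failure" the residual state is fed back and the step is repeated,
# as described in the abstract.
psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2.0)
state = U @ np.concatenate([psi, np.zeros(2, dtype=complex)])
success_amplitudes = state[:2]                      # ancilla-|0> branch
print("success probability:", float(np.vdot(success_amplitudes, success_amplitudes).real))
```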

  2. Nuclear power reactor analysis, methods, algorithms and computer programs

    International Nuclear Information System (INIS)

    Matausek, M.V

    1981-01-01

    Full text: For a developing country buying its first nuclear power plants from a foreign supplier, disregarding the type and scope of the contract, there is a certain number of activities which have to be performed by local staff and domestic organizations. This particularly applies to the choice of the nuclear fuel cycle strategy and the choice of the type and size of the reactors, to bid parameters specification, bid evaluation and final safety analysis report evaluation, as well as to in-core fuel management activities. In the Nuclear Engineering Department of the Boris Kidric Institute of Nuclear Sciences (NET IBK) continual work is going on related to the following topics: cross section and resonance integral calculations, spectrum calculations, generation of group constants, lattice and cell problems, criticality and global power distribution search, fuel burnup analysis, in-core fuel management procedures, cost analysis and power plant economics, safety and accident analysis, shielding problems and environmental impact studies, etc. The present paper gives the details of the methods developed and the results achieved, with particular emphasis on the NET IBK computer program package for the needs of planning, construction and operation of nuclear power plants. The main problems encountered so far were related to a small working team, lack of large and powerful computers, absence of reliable basic nuclear data and shortage of experimental and empirical results for testing theoretical models. Some of these difficulties have been overcome thanks to bilateral and multilateral cooperation with developed countries, mostly through the IAEA. It is the author's opinion, however, that mutual cooperation of developing countries, having similar problems and similar goals, could lead to significant results. Some activities of this kind are suggested and discussed. (author)

  3. Variability in usual care mechanical ventilation for pediatric acute lung injury: the potential benefit of a lung protective computer protocol.

    Science.gov (United States)

    Khemani, Robinder G; Sward, Katherine; Morris, Alan; Dean, J Michael; Newth, Christopher J L

    2011-11-01

    Although pediatric intensivists claim to embrace lung protective ventilation for acute lung injury (ALI), ventilator management is variable. We describe ventilator changes clinicians made for children with hypoxemic respiratory failure, and evaluate the potential acceptability of a pediatric ventilation protocol. This was a retrospective cohort study performed in a tertiary care pediatric intensive care unit (PICU). The study period was from January 2000 to July 2007. We included mechanically ventilated children with a PaO2/FiO2 (P/F) ratio less than 300. We assessed variability in ventilator management by evaluating actual changes to ventilator settings after an arterial blood gas (ABG). We evaluated the potential acceptability of a pediatric mechanical ventilation protocol we adapted from National Institutes of Health/National Heart, Lung, and Blood Institute (NIH/NHLBI) Acute Respiratory Distress Syndrome (ARDS) Network protocols by comparing actual practice changes in ventilator settings to changes that would have been recommended by the protocol. A total of 2,719 ABGs from 402 patients were associated with 6,017 ventilator settings. Clinicians infrequently decreased FiO2, even when the PaO2 was high (>68 mmHg). The protocol would have recommended more positive end expiratory pressure (PEEP) than was used in actual practice 42% of the time in the mid PaO2 range (55-68 mmHg) and 67% of the time in the low PaO2 range (below 55 mmHg). Ventilator rate (VR) was also infrequently changed when the protocol would have recommended a change, even when the pH was greater than 7.45 with PIP at least 35 cmH2O. There may be lost opportunities to minimize potentially injurious ventilator settings for children with ALI. A reproducible pediatric mechanical ventilation protocol could prompt clinicians to make ventilator changes that are consistent with lung protective ventilation.

  4. A Lightweight Protocol for Secure Video Streaming.

    Science.gov (United States)

    Venčkauskas, Algimantas; Morkevicius, Nerijus; Bagdonas, Kazimieras; Damaševičius, Robertas; Maskeliūnas, Rytis

    2018-05-14

    The Internet of Things (IoT) introduces many new challenges which cannot be solved using traditional cloud and host computing models. A new architecture known as fog computing is emerging to address these technological and security gaps. Traditional security paradigms focused on providing perimeter-based protections and client/server point to point protocols (e.g., Transport Layer Security (TLS)) are no longer the best choices for addressing new security challenges in fog computing end devices, where energy and computational resources are limited. In this paper, we present a lightweight secure streaming protocol for the fog computing "Fog Node-End Device" layer. This protocol is lightweight, connectionless, supports broadcast and multicast operations, and is able to provide data source authentication, data integrity, and confidentiality. The protocol is based on simple and energy efficient cryptographic methods, such as Hash Message Authentication Codes (HMAC) and symmetrical ciphers, and uses modified User Datagram Protocol (UDP) packets to embed authentication data into streaming data. Data redundancy could be added to improve reliability in lossy networks. The experimental results summarized in this paper confirm that the proposed method efficiently uses energy and computational resources and at the same time provides security properties on par with the Datagram TLS (DTLS) standard.
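
    The core idea of appending a symmetric authentication tag to each datagram can be shown with the Python standard library alone. The sketch below is a simplified illustration of HMAC-based source authentication and integrity for a UDP-style packet, not the protocol defined in the paper: no encryption, key management, or replay protection is shown, and the key is a placeholder.

```python
import hmac
import hashlib
import struct

KEY = b"pre-shared-secret-key"     # hypothetical pre-shared symmetric key
TAG_LEN = 16                       # truncated HMAC-SHA256 tag appended to each datagram

def build_packet(seq: int, payload: bytes) -> bytes:
    """Prepend a 4-byte sequence number and append a truncated HMAC over header + payload."""
    body = struct.pack("!I", seq) + payload
    tag = hmac.new(KEY, body, hashlib.sha256).digest()[:TAG_LEN]
    return body + tag

def verify_packet(packet: bytes):
    """Return (seq, payload) if the authentication tag verifies, otherwise None."""
    body, tag = packet[:-TAG_LEN], packet[-TAG_LEN:]
    expected = hmac.new(KEY, body, hashlib.sha256).digest()[:TAG_LEN]
    if not hmac.compare_digest(tag, expected):
        return None
    (seq,) = struct.unpack("!I", body[:4])
    return seq, body[4:]

pkt = build_packet(42, b"one video frame fragment")
print(verify_packet(pkt))                   # authentic packet -> (42, b'one video frame fragment')

tampered = bytearray(pkt)
tampered[5] ^= 0x01                         # flip one payload bit "in transit"
print(verify_packet(bytes(tampered)))       # tampered packet -> None
```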

  5. Justification of computational methods to ensure information management systems

    Directory of Open Access Journals (Sweden)

    E. D. Chertov

    2016-01-01

    Full Text Available Summary. Due to the diversity and complexity of the organizational management tasks of a large enterprise, the construction of an information management system requires the establishment of interconnected complexes of means that implement, in the most efficient way, the collection, transfer, accumulation and processing of the information needed by managers of different ranks in the governance process. The main trends in the construction of integrated logistics management information systems can be considered to be: the creation of integrated data processing systems by centralizing the storage and processing of data arrays; the organization of computer systems that realize time-sharing; the aggregate-block principle of integrated logistics; and the use of a wide range of peripheral devices with unification of the information and hardware communication. The main attention is paid to the systems study of the complex of technical support, in particular the definition of quality criteria for the operation of the technical complex, the development of methods for analyzing the information base of management information systems, the definition of requirements for technical means, and methods for the structural synthesis of the major subsystems of integrated logistics. Thus, the aim is to study, on the basis of a systematic approach, the integrated logistics management information system and to develop a number of methods of analysis and synthesis of complex logistics that are suitable for use in the practice of engineering systems design. The objective function of the complex logistics management information system is to gather, transmit and process specified amounts of information in the regulated time intervals with the required degree of accuracy while minimizing the reduced costs for the establishment and operation of the technical complex. Achieving this objective function requires a certain organization of the interaction of information

  6. Reshaping of computational system for dosimetry in neutron and photons radiotherapy based in stochastic methods - SISCODES

    International Nuclear Information System (INIS)

    Trindade, Bruno Machado

    2011-02-01

    This work presents the remodeling of the Computer System for Dosimetry of Neutrons and Photons in Radiotherapy Based on Stochastic Methods (SISCODES). The initial description and status, the alterations and expansions (proposed and concluded), and the latest system development status are shown. SISCODES is a system that allows the execution of 3D computational planning in radiation therapy, based on the MCNP5 nuclear particle transport code. SISCODES provides tools to build a patient's voxel model, to define a treatment plan, to simulate this plan, and to view the results of the simulation. SISCODES implements a database of tissues, sources and nuclear data, and an interface to access them. The graphical SISCODES modules were rewritten or implemented using the C++ language and the GTKmm library. Studies of dose deviations were performed by simulating a homogeneous water phantom, used as an analogue of the human body in radiotherapy planning, and a heterogeneous voxel phantom, pointing out possible dose miscalculations. The Soft-RT and PROPLAN computer codes that interface with SISCODES are described. A set of voxel models created in SISCODES is presented with their respective sizes and resolutions. To demonstrate the use of SISCODES, examples of radiation therapy and dosimetry simulations for the prostate and heart are shown. Three protocols were simulated on the heart voxel model: a Sm-153 filled balloon and a P-32 stent, to prevent angioplasty restenosis; and Tl-201 myocardial perfusion, for imaging. Teletherapy with 6 MV and 15 MV beams and brachytherapy with I-125 seeds were simulated for the prostate. The results of these simulations are shown as isodose curves and dose-volume histograms. SISCODES has shown itself to be a useful tool for research into new radiation therapy treatments and, in the future, may also be useful in medical practice. At the end, future improvements are proposed. I hope this work can contribute to the development of more effective radiation therapy

  7. 26 CFR 1.167(b)-0 - Methods of computing depreciation.

    Science.gov (United States)

    2010-04-01

    Section 1.167(b)-0 Methods of computing depreciation. (a) In general. Any reasonable and consistently applied method of computing depreciation may be used or continued in use under section 167. Regardless of the...
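
    For concreteness, the two depreciation computations most commonly associated with section 167 (straight line and declining balance) look like this in code; the asset figures are hypothetical, and the sketch is an illustration of the arithmetic only, not tax guidance or the regulation's text.

```python
def straight_line(cost: float, salvage: float, life_years: int) -> list:
    """Equal annual deductions spreading (cost - salvage) over the useful life."""
    annual = (cost - salvage) / life_years
    return [annual] * life_years

def declining_balance(cost: float, life_years: int, rate_multiple: float = 2.0) -> list:
    """A fixed percentage applied each year to the remaining (undepreciated) basis."""
    rate = rate_multiple / life_years
    basis, schedule = cost, []
    for _ in range(life_years):
        deduction = basis * rate
        schedule.append(deduction)
        basis -= deduction
    return schedule

# Hypothetical asset: $10,000 cost, $1,000 salvage value, 5-year useful life.
print(straight_line(10_000, 1_000, 5))   # [1800.0, 1800.0, 1800.0, 1800.0, 1800.0]
print(declining_balance(10_000, 5))      # 200% declining balance: [4000.0, 2400.0, 1440.0, ...]
```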

  8. Use of digital computers for correction of gamma method and neutron-gamma method indications

    International Nuclear Information System (INIS)

    Lakhnyuk, V.M.

    1978-01-01

    A program for the NAIRI-S computer is described which is intended for accounting for and eliminating the effect of by-processes when interpreting gamma and neutron-gamma logging indications. With slight corrections, the program can also be used as a mathematical basis for logging diagram standardization by the method of multidimensional regression analysis and for the estimation of rock reservoir properties

  9. Comparison of radiation doses using weight-based protocol and dose modulation techniques for patients undergoing biphasic abdominal computed tomography examinations

    Directory of Open Access Journals (Sweden)

    Livingstone Roshan

    2009-01-01

    Full Text Available Computed tomography (CT) of the abdomen contributes a substantial amount of man-made radiation dose to patients and use of this modality is on the increase. This study intends to compare radiation dose and image quality using dose modulation techniques and weight-based protocol exposure parameters for biphasic abdominal CT. Using a six-slice CT scanner, a prospective study of 426 patients who underwent abdominal CT examinations was performed. Constant tube potentials of 90 kV and 120 kV were used for the arterial and portal venous phases, respectively. The tube current-time product for the weight-based protocol was optimized according to the patient's body weight; this was automatically selected in the dose modulation techniques. The effective dose using the weight-based protocol, angular dose modulation and z-axis dose modulation was 11.3 mSv, 9.5 mSv and 8.2 mSv respectively for patients with body weight ranging from 40 to 60 kg. For patients with body weights ranging from 60 to 80 kg, the effective doses were 13.2 mSv, 11.2 mSv and 10.6 mSv respectively. The use of dose modulation techniques resulted in a reduction of 16 to 28% in radiation dose with acceptable diagnostic accuracy in comparison to the use of weight-based protocol settings.

  10. Immunochemical protocols

    National Research Council Canada - National Science Library

    Pound, John D

    1998-01-01

    ... easy and important refinements often are not published. This much anticipated 2nd edition of Immunochemical Protocols therefore aims to provide a user-friendly up-to-date handbook of reliable techniques selected to suit the needs of molecular biologists. It covers the full breadth of the relevant established immunochemical methods, from protein blotting and immunoa...

  11. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    Science.gov (United States)

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify a technique for forming representations of modeling methodology in computer science lessons. The necessity of studying computer modeling follows from current trends toward strengthening the general educational and worldview functions of computer science, which call for additional research of the…

  12. Recent advances in computational structural reliability analysis methods

    Science.gov (United States)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-10-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
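
    As a deliberately simple illustration of computing a failure probability, the sketch below uses crude Monte Carlo sampling of a resistance-minus-load limit state. The distributions and parameters are invented, and the advanced reliability methods discussed in the paper (e.g., fast probability integration) are not shown.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Hypothetical limit state g = R - S: failure when the load effect S exceeds the resistance R.
R = rng.normal(loc=300.0, scale=30.0, size=n)   # resistance (e.g., member capacity)
S = rng.normal(loc=200.0, scale=40.0, size=n)   # load effect (e.g., applied stress resultant)

g = R - S
pf = np.mean(g < 0.0)                            # Monte Carlo estimate of the failure probability
se = np.sqrt(pf * (1.0 - pf) / n)                # standard error of the estimate

print(f"estimated P_f = {pf:.2e} +/- {se:.1e}")  # analytic value here is about 2.3e-2
```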

  13. Computational Methods for Physical Model Information Management: Opening the Aperture

    International Nuclear Information System (INIS)

    Moser, F.; Kirgoeze, R.; Gagne, D.; Calle, D.; Murray, J.; Crowley, J.

    2015-01-01

    The volume, velocity and diversity of data available to analysts are growing exponentially, increasing the demands on analysts to stay abreast of developments in their areas of investigation. In parallel to the growth in data, technologies have been developed to efficiently process, store, and effectively extract information suitable for the development of a knowledge base capable of supporting inferential (decision logic) reasoning over semantic spaces. These technologies and methodologies, in effect, allow for automated discovery and mapping of information to specific steps in the Physical Model (the standard Safeguards reference for the Nuclear Fuel Cycle). This paper will describe and demonstrate an integrated service under development at the IAEA that utilizes machine learning techniques, computational natural language models, Bayesian methods and semantic/ontological reasoning capabilities to process large volumes of (streaming) information and associate relevant, discovered information to the appropriate process step in the Physical Model. The paper will detail how this capability will consume open source and controlled information sources and be integrated with other capabilities within the analysis environment, and provide the basis for a semantic knowledge base suitable for hosting future mission focused applications. (author)

  14. THE METHOD OF DESIGNING ASSISTED ON COMPUTER OF THE

    Directory of Open Access Journals (Sweden)

    LUCA Cornelia

    2015-05-01

    Full Text Available The basis of footwear sole design is the shoe last. Shoe lasts have irregular shapes, with various curves which cannot be represented by a simple mathematical function. In order to design footwear soles it is necessary to take some base contours from the shoe last. These contours are obtained with high precision in a 3D CAD system. The paper presents a computer-assisted method for designing footwear soles. The shoe last is copied using a 3D digitizer: the spatial shape of the shoe last is mounted on the data-acquisition peripheral, which automatically follows the shoe last's surface. The wire-frame network obtained through digitizing is interpolated numerically with interpolation functions in order to obtain the spatial numerical shape of the shoe last. The 3D design of the sole is then carried out on the numerical shape of the shoe last in the following steps: construction of the sole's surface, realization of the lateral surface of the sole's shape, and generation of the linking surface between the lateral and planar sides of the sole and of the sole's margin; the sole design also includes the skid-proof area. The main advantages of the design method are its precision, the 3D visualization of the sole, and the support it gives for deciding whether to accept a new sole pattern.

  15. A method of paralleling computer calculation for two-dimensional kinetic plasma model

    International Nuclear Information System (INIS)

    Brazhnik, V.A.; Demchenko, V.V.; Dem'yanov, V.G.; D'yakov, V.E.; Ol'shanskij, V.V.; Panchenko, V.I.

    1987-01-01

    A method for parallel computation, and the OSIRIS program complex which implements it and is designed for numerical plasma simulation by the macroparticle method, are described. The calculation can be carried out either on one BESM-6 computer or simultaneously on two, which is provided by a package of interacting programs running on each computer. Program interaction within each computer is based on the event techniques implemented in OS DISPAK. Parallel calculation on two BESM-6 computers speeds up the computation by a factor of 1.5

  16. From human monocytes to genome-wide binding sites--a protocol for small amounts of blood: monocyte isolation/ChIP-protocol/library amplification/genome wide computational data analysis.

    Directory of Open Access Journals (Sweden)

    Sebastian Weiterer

    Full Text Available Chromatin immunoprecipitation in combination with genome-wide analysis via high-throughput sequencing is the state-of-the-art method to gain a genome-wide representation of histone modification or transcription factor binding profiles. However, chromatin immunoprecipitation analysis in the context of human experimental samples is limited, especially in the case of blood cells. The typically extremely low yields of precipitated DNA are usually not compatible with library amplification for next generation sequencing. We developed a highly reproducible protocol that provides a guideline from the first step of isolating monocytes from a blood sample through to analysing the distribution of histone modifications in a genome-wide manner. The protocol describes the whole workflow, from isolating monocytes from human blood samples, followed by a high-sensitivity and small-scale chromatin immunoprecipitation assay, to guidance for generating libraries compatible with next generation sequencing from small amounts of immunoprecipitated DNA.

  17. Detection of furcation involvement using periapical radiography and 2 cone-beam computed tomography imaging protocols with and without a metallic post: An animal study

    Energy Technology Data Exchange (ETDEWEB)

    Salineiro, Fernanda Cristina Sales; Gialain, Ivan Onone; Kobayashi-Velasco, Solange; Pannuti, Claudio Mendes; Cavalcanti, Marcelo Gusmao Paraiso [Dept. of Stomatology, School of Dentistry, University of Sao Paulo, Sao Paulo (Brazil)

    2017-03-15

    The purpose of this study was to assess the accuracy, sensitivity, and specificity of the diagnosis of incipient furcation involvement with periapical radiography (PR) and 2 cone-beam computed tomography (CBCT) imaging protocols, and to test metal artifact interference. Mandibular second molars in 10 macerated pig mandibles were divided into those that showed no furcation involvement and those with lesions in the furcation area. Exams using PR and 2 different CBCT imaging protocols were performed with and without a metallic post. Each image was analyzed twice by 2 observers who rated the absence or presence of furcation involvement according to a 5-point scale. Receiver operating characteristic (ROC) curves were used to evaluate the accuracy, sensitivity, and specificity of the observations. The accuracy of the CBCT imaging protocols ranged from 67.5% to 82.5% in the images obtained with a metallic post and from 72.5% to 80% in those without a metallic post. The accuracy of PR ranged from 37.5% to 55% in the images with a metallic post and from 42.5% to 62.5% in those without a metallic post. The area under the ROC curve values for the CBCT imaging protocols ranged from 0.813 to 0.802, and for PR ranged from 0.503 to 0.448. Both CBCT imaging protocols showed higher accuracy, sensitivity, and specificity than PR in the detection of incipient furcation involvement. Based on these results, CBCT may be considered a reliable tool for detecting incipient furcation involvement following a clinical periodontal exam, even in the presence of a metallic post.

  18. Overview of Computer Simulation Modeling Approaches and Methods

    Science.gov (United States)

    Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett

    2005-01-01

    The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...

  19. Computational Fluid Dynamics Methods and Their Applications in Medical Science

    Directory of Open Access Journals (Sweden)

    Kowalewski Wojciech

    2016-12-01

    Full Text Available As defined by the National Institutes of Health: “Biomedical engineering integrates physical, chemical, mathematical, and computational sciences and engineering principles to study biology, medicine, behavior, and health”. Many issues in this area are closely related to fluid dynamics. This paper provides an overview of the basic concepts concerning Computational Fluid Dynamics and its applications in medicine.

  20. Multiprofessional electronic protocol in ophthalmology with emphasis on strabismus

    OpenAIRE

    RIBEIRO, CHRISTIE GRAF; MOREIRA, ANA TEREZA RAMOS; PINTO, JOSÉ SIMÃO DE PAULA; MALAFAIA, OSVALDO

    2016-01-01

    ABSTRACT Objective: to create and validate an electronic database in ophthalmology focused on strabismus, to computerize this database in the form of a systematic data collection software named Electronic Protocol, and to incorporate this protocol into the Integrated System of Electronic Protocols (SINPE(c)). Methods: this is a descriptive study, with the methodology divided into three phases: (1) development of a theoretical ophthalmologic database with emphasis on strabismus; (2) compute...

  1. The Effect of Health Information Technology on Health Care Provider Communication: A Mixed-Method Protocol.

    Science.gov (United States)

    Manojlovich, Milisa; Adler-Milstein, Julia; Harrod, Molly; Sales, Anne; Hofer, Timothy P; Saint, Sanjay; Krein, Sarah L

    2015-06-11

    Communication failures between physicians and nurses are one of the most common causes of adverse events for hospitalized patients, as well as a major root cause of all sentinel events. Communication technology (ie, the electronic medical record, computerized provider order entry, email, and pagers), which is a component of health information technology (HIT), may help reduce some communication failures but increase others because of an inadequate understanding of how communication technology is used. Increasing use of health information and communication technologies is likely to affect communication between nurses and physicians. The purpose of this study is to describe, in detail, how health information and communication technologies facilitate or hinder communication between nurses and physicians with the ultimate goal of identifying how we can optimize the use of these technologies to support effective communication. Effective communication is the process of developing shared understanding between communicators by establishing, testing, and maintaining relationships. Our theoretical model, based in communication and sociology theories, describes how health information and communication technologies affect communication through communication practices (ie, use of rich media; the location and availability of computers) and work relationships (ie, hierarchies and team stability). Therefore we seek to (1) identify the range of health information and communication technologies used in a national sample of medical-surgical acute care units, (2) describe communication practices and work relationships that may be influenced by health information and communication technologies in these same settings, and (3) explore how differences in health information and communication technologies, communication practices, and work relationships between physicians and nurses influence communication. This 4-year study uses a sequential mixed-methods design, beginning with a

  2. [Sampling and measurement methods of the protocol design of the China Nine-Province Survey for blindness, visual impairment and cataract surgery].

    Science.gov (United States)

    Zhao, Jia-liang; Wang, Yu; Gao, Xue-cheng; Ellwein, Leon B; Liu, Hu

    2011-09-01

    To design the protocol of the China nine-province survey for blindness, visual impairment and cataract surgery, in order to evaluate the prevalence and main causes of blindness and visual impairment, and the prevalence and outcomes of cataract surgery. The protocol design began after the task of the national survey for blindness, visual impairment and cataract surgery was accepted from the Department of Medicine, Ministry of Health, China, in November 2005. The protocols of the Beijing Shunyi Eye Study in 1996 and the Guangdong Doumen County Eye Study in 1997, both supported by the World Health Organization, were taken as the basis for the protocol design. Relevant experts were invited to discuss and review the draft protocol. An international advisory committee was established to examine and approve the draft protocol. Finally, the survey protocol was checked and approved by the Department of Medicine, Ministry of Health, China, and the Prevention Program of Blindness and Deafness, WHO. The survey protocol was designed according to the characteristics and the scale of the survey. The contents of the protocol included determination of the target population and survey sites, calculation of the sample size, design of the random sampling, composition and organization of the survey teams, determination of the examinees, the flowchart of the field work, survey items and methods, diagnostic criteria for blindness and moderate and severe visual impairment, measures of quality control, and methods of data management. The designed protocol became the standard and practical protocol for the survey to evaluate the prevalence and main causes of blindness and visual impairment, and the prevalence and outcomes of cataract surgery.
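
    Since the protocol includes a sample-size calculation, a standard prevalence-survey formula (with a design effect for cluster sampling and an allowance for non-response) is sketched below; all input values are hypothetical and are not the figures used in the nine-province survey.

```python
import math

def survey_sample_size(p: float, d: float, deff: float = 1.0,
                       confidence_z: float = 1.96, response_rate: float = 1.0) -> int:
    """Sample size for estimating a prevalence p with absolute precision d,
    inflated by a design effect and an expected response rate."""
    n = confidence_z ** 2 * p * (1.0 - p) / d ** 2
    return math.ceil(n * deff / response_rate)

# Hypothetical inputs: 2% expected blindness prevalence, +/-0.5% absolute precision,
# design effect 2 for cluster sampling, 90% expected response.
print(survey_sample_size(p=0.02, d=0.005, deff=2.0, response_rate=0.90))
```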

  3. Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture

    Science.gov (United States)

    Sanfilippo, Antonio P [Richland, WA; Tratz, Stephen C [Richland, WA; Gregory, Michelle L [Richland, WA; Chappell, Alan R [Seattle, WA; Whitney, Paul D [Richland, WA; Posse, Christian [Seattle, WA; Baddeley, Robert L [Richland, WA; Hohimer, Ryan E [West Richland, WA

    2011-10-11

    Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture are described according to some aspects. In one aspect, a word disambiguation method includes accessing textual content to be disambiguated, wherein the textual content comprises a plurality of words individually comprising a plurality of word senses, for an individual word of the textual content, identifying one of the word senses of the word as indicative of the meaning of the word in the textual content, for the individual word, selecting one of a plurality of event classes of a lexical database ontology using the identified word sense of the individual word, and for the individual word, associating the selected one of the event classes with the textual content to provide disambiguation of a meaning of the individual word in the textual content.
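
    A toy version of gloss-overlap disambiguation (in the spirit of the classic Lesk algorithm, not the patented method itself) shows how a word sense can be selected from context; the miniature two-sense inventory below is invented for illustration.

```python
# Toy gloss-overlap word sense disambiguation (Lesk-style), illustrative only.
# The miniature sense inventory below is invented; it is not a real lexical database.

SENSES = {
    "bank": {
        "financial_institution": "institution that accepts deposits and lends money",
        "river_side": "sloping land beside a body of water such as a river",
    }
}

def disambiguate(word: str, context: str) -> str:
    """Pick the sense whose gloss shares the most words with the context sentence."""
    context_words = set(context.lower().split())
    best_sense, best_overlap = None, -1
    for sense, gloss in SENSES[word].items():
        overlap = len(context_words & set(gloss.split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(disambiguate("bank", "she sat on the bank of the river and watched the water"))
# -> 'river_side' (its gloss shares 'of', 'river', 'water' with the context)
```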

  4. Advanced scientific computational methods and their applications to nuclear technologies. (3) Introduction of continuum simulation methods and their applications (3)

    International Nuclear Information System (INIS)

    Satake, Shin-ichi; Kunugi, Tomoaki

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies was therefore prepared in serial form. This is the third issue, presenting the introduction of continuum simulation methods and their applications. Spectral methods and multi-interface calculation methods in fluid dynamics are reviewed. (T. Tanaka)
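
    As a minimal example of the spectral methods mentioned above, the sketch below differentiates a periodic function with the FFT and compares the result with the exact derivative; it is a schematic illustration, not material from the lecture series.

```python
import numpy as np

# Spectral (Fourier) differentiation of a smooth periodic function on [0, 2*pi).
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(3.0 * x)

k = np.fft.fftfreq(n, d=x[1] - x[0]) * 2.0 * np.pi   # angular wavenumbers
du = np.fft.ifft(1j * k * np.fft.fft(u)).real        # spectral derivative

exact = 3.0 * np.cos(3.0 * x)
print("max error:", np.max(np.abs(du - exact)))      # ~1e-13: spectral accuracy
```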

  5. A Simple Method for Dynamic Scheduling in a Heterogeneous Computing System

    OpenAIRE

    Žumer, Viljem; Brest, Janez

    2002-01-01

    A simple method for dynamic scheduling on a heterogeneous computing system is proposed in this paper. It was implemented to minimize the parallel program execution time. The proposed method decomposes the program workload into computationally homogeneous subtasks, which may be of different sizes, depending on the current load of each machine in the heterogeneous computing system.
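
    The core of the described approach, splitting a workload into subtasks whose sizes follow each machine's currently available speed, can be sketched in a few lines; the machine names and speed figures are hypothetical, and the paper's actual dispatch logic is not reproduced.

```python
def split_workload(total_items: int, effective_speeds: dict) -> dict:
    """Assign each machine a chunk of work proportional to its effective (load-adjusted) speed."""
    total_speed = sum(effective_speeds.values())
    shares = {m: int(total_items * s / total_speed) for m, s in effective_speeds.items()}
    # Hand any leftover items (lost to rounding down) to the fastest machine.
    leftover = total_items - sum(shares.values())
    fastest = max(effective_speeds, key=effective_speeds.get)
    shares[fastest] += leftover
    return shares

# Hypothetical heterogeneous system: speeds already discounted by each machine's current load.
print(split_workload(10_000, {"node-a": 3.0, "node-b": 1.5, "node-c": 0.5}))
# -> {'node-a': 6000, 'node-b': 3000, 'node-c': 1000}
```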

  6. Computer classes and games in virtual reality environment to reduce loneliness among students of an elderly reference center: Study protocol for a randomised cross-over design.

    Science.gov (United States)

    Antunes, Thaiany Pedrozo Campos; Oliveira, Acary Souza Bulle de; Crocetta, Tania Brusque; Antão, Jennifer Yohanna Ferreira de Lima; Barbosa, Renata Thais de Almeida; Guarnieri, Regiani; Massetti, Thais; Monteiro, Carlos Bandeira de Mello; Abreu, Luiz Carlos de

    2017-03-01

    Physical and mental changes associated with aging commonly lead to a decrease in communication capacity, reducing social interactions and increasing loneliness. Computer classes for older adults make significant contributions to social and cognitive aspects of aging. Games in a virtual reality (VR) environment stimulate the practice of communicative and cognitive skills and might also bring benefits to older adults. Furthermore, it might help to initiate their contact to the modern technology. The purpose of this study protocol is to evaluate the effects of practicing VR games during computer classes on the level of loneliness of students of an elderly reference center. This study will be a prospective longitudinal study with a randomised cross-over design, with subjects aged 50 years and older, of both genders, spontaneously enrolled in computer classes for beginners. Data collection will be done in 3 moments: moment 0 (T0) - at baseline; moment 1 (T1) - after 8 typical computer classes; and moment 2 (T2) - after 8 computer classes which include 15 minutes for practicing games in VR environment. A characterization questionnaire, the short version of the Short Social and Emotional Loneliness Scale for Adults (SELSA-S) and 3 games with VR (Random, MoviLetrando, and Reaction Time) will be used. For the intervention phase 4 other games will be used: Coincident Timing, Motor Skill Analyser, Labyrinth, and Fitts. The statistical analysis will compare the evolution in loneliness perception, performance, and reaction time during the practice of the games between the 3 moments of data collection. Performance and reaction time during the practice of the games will also be correlated to the loneliness perception. The protocol is approved by the host institution's ethics committee under the number 52305215.3.0000.0082. Results will be disseminated via peer-reviewed journal articles and conferences. This clinical trial is registered at ClinicalTrials.gov identifier: NCT

  7. Modelling of elementary computer operations using the intellect method

    Energy Technology Data Exchange (ETDEWEB)

    Shabanov-kushnarenko, Yu P

    1982-01-01

    The formal apparatus of intellect theory is used to describe functions of machine intelligence. A mathematical description of some simple computer operations is proposed, as well as their machine realisation as switching networks. 5 references.

  8. Testing of toxicity based methods to develop site specific clean up objectives - phase 1: Toxicity protocol screening and applicability

    International Nuclear Information System (INIS)

    Hamilton, H.; Kerr, D.; Thorne, W.; Taylor, B.; Zadnik, M.; Goudey, S.; Birkholz, D.

    1994-03-01

    A study was conducted to develop a cost-effective and practical protocol for using bio-assay based toxicity assessment methods for remediation of decommissioned oil and gas production and processing facilities. The objective was to generate site-specific remediation criteria for contaminated sites. Most companies have used the chemical-specific approach which, however, did not meet the ultimate land use goal of agricultural production. The toxicity assessment method described in this study dealt with potential impairment to agricultural crop production and natural ecosystems. Human health concerns were not specifically addressed. It was suggested that chemical-specific methods should be used when human health concerns exist. Results showed that toxicity tests will more directly identify ecological stress caused by site contamination than chemical-specific remediation criteria, which can be unnecessarily protective. 11 refs., 7 tabs., 6 figs

  9. Pair Programming as a Modern Method of Teaching Computer Science

    OpenAIRE

    Irena Nančovska Šerbec; Branko Kaučič; Jože Rugelj

    2008-01-01

    At the Faculty of Education, University of Ljubljana, we educate future computer science teachers. Besides didactical, pedagogical, mathematical and other interdisciplinary knowledge, students gain programming knowledge and skills that are crucial for computer science teachers. For all courses, the main emphasis is the acquisition of professional competences related to the teaching profession and the programming profile. The latter are selected according to the well-known document, the ACM C...

  10. Quality-of-service sensitivity to bio-inspired/evolutionary computational methods for intrusion detection in wireless ad hoc multimedia sensor networks

    Science.gov (United States)

    Hortos, William S.

    2012-06-01

    In the author's previous work, a cross-layer protocol approach to wireless sensor network (WSN) intrusion detection and identification is created with multiple bio-inspired/evolutionary computational methods applied to the functions of the protocol layers, a single method to each layer, to improve the intrusion-detection performance of the protocol over that of one method applied to only a single layer's functions. The WSN cross-layer protocol design embeds GAs, anti-phase synchronization, ACO, and a trust model based on quantized data reputation at the physical, MAC, network, and application layer, respectively. The construct neglects to assess the net effect of the combined bioinspired methods on the quality-of-service (QoS) performance for "normal" data streams, that is, streams without intrusions. Analytic expressions of throughput, delay, and jitter, coupled with simulation results for WSNs free of intrusion attacks, are the basis for sensitivity analyses of QoS metrics for normal traffic to the bio-inspired methods.

  11. Recent Advances in Computational Methods for Nuclear Magnetic Resonance Data Processing

    KAUST Repository

    Gao, Xin

    2013-01-01

    research attention from specialists in bioinformatics and computational biology. In this paper, we review recent advances in computational methods for NMR protein structure determination. We summarize the advantages of and bottlenecks in the existing

  12. Computational and statistical methods for high-throughput analysis of post-translational modifications of proteins

    DEFF Research Database (Denmark)

    Schwämmle, Veit; Braga, Thiago Verano; Roepstorff, Peter

    2015-01-01

    The investigation of post-translational modifications (PTMs) represents one of the main research focuses for the study of protein function and cell signaling. Mass spectrometry instrumentation with increasing sensitivity, improved protocols for PTM enrichment and recently established pipelines for high-throughput experiments allow large-scale identification and quantification of several PTM types. This review addresses the concurrently emerging challenges for the computational analysis of the resulting data and presents PTM-centered approaches for spectra identification, statistical analysis...

  13. High performance computing and quantum trajectory method in CPU and GPU systems

    International Nuclear Information System (INIS)

    Wiśniewska, Joanna; Sawerwain, Marek; Leoński, Wiesław

    2015-01-01

    Nowadays, dynamic progress in computational techniques allows for the development of various methods which offer significant speed-up of computations, especially those related to problems of quantum optics and quantum computing. In this work, we propose computational solutions which re-implement the quantum trajectory method (QTM) algorithm in modern parallel computation environments in which multi-core CPUs and modern many-core GPUs can be used. In consequence, new computational routines are developed in a more effective way than those applied in other commonly used packages, such as the Quantum Optics Toolbox (QOT) for Matlab or QuTiP for Python
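
    A minimal single-trajectory version of the quantum trajectory (Monte Carlo wave function) method, for a two-level atom decaying at rate gamma, is sketched below; it is a first-order, illustration-only implementation and not the CPU/GPU routines described in the work.

```python
import numpy as np

# One quantum trajectory for a two-level atom decaying at rate gamma (basis: [|g>, |e>]).
gamma, dt, steps = 1.0, 1e-3, 5000
sm = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)      # lowering operator |g><e|
H = np.zeros((2, 2), dtype=complex)                          # no coherent drive in this toy model
H_eff = H - 0.5j * gamma * (sm.conj().T @ sm)                # non-Hermitian effective Hamiltonian

rng = np.random.default_rng(7)
psi = np.array([0.0, 1.0], dtype=complex)                    # start in the excited state
r = rng.random()                                             # threshold for the next quantum jump

excited = []
for _ in range(steps):
    psi = psi - 1j * dt * (H_eff @ psi)                      # first-order non-unitary step
    norm2 = np.vdot(psi, psi).real
    if norm2 < r:                                            # jump: a photon is emitted
        psi = sm @ psi
        psi /= np.linalg.norm(psi)
        r = rng.random()
    excited.append(abs(psi[1]) ** 2 / np.vdot(psi, psi).real)

print("excited-state population at t =", steps * dt, ":", excited[-1])
```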

  14. Cochrane Qualitative and Implementation Methods Group guidance series-paper 2: methods for question formulation, searching, and protocol development for qualitative evidence synthesis.

    Science.gov (United States)

    Harris, Janet L; Booth, Andrew; Cargo, Margaret; Hannes, Karin; Harden, Angela; Flemming, Kate; Garside, Ruth; Pantoja, Tomas; Thomas, James; Noyes, Jane

    2018-05-01

    This paper updates previous Cochrane guidance on question formulation, searching, and protocol development, reflecting recent developments in methods for conducting qualitative evidence syntheses to inform Cochrane intervention reviews. Examples are used to illustrate how decisions about boundaries for a review are formed via an iterative process of constructing lines of inquiry and mapping the available information to ascertain whether evidence exists to answer questions related to effectiveness, implementation, feasibility, appropriateness, economic evidence, and equity. The process of question formulation allows reviewers to situate the topic in relation to how it informs and explains effectiveness, using the criterion of meaningfulness, appropriateness, feasibility, and implementation. Questions related to complex questions and interventions can be structured by drawing on an increasingly wide range of question frameworks. Logic models and theoretical frameworks are useful tools for conceptually mapping the literature to illustrate the complexity of the phenomenon of interest. Furthermore, protocol development may require iterative question formulation and searching. Consequently, the final protocol may function as a guide rather than a prescriptive route map, particularly in qualitative reviews that ask more exploratory and open-ended questions. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. A computer program for uncertainty analysis integrating regression and Bayesian methods

    Science.gov (United States)

    Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary

    2014-01-01

    This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
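
    UCODE_2014's DREAM-based sampler is not shown in this record. As a generic illustration of how MCMC samples yield Bayesian credible intervals, the sketch below runs a plain random-walk Metropolis sampler on a toy one-parameter problem and reads the 95% interval from the sample quantiles; all names and numbers are illustrative and are not part of UCODE_2014 or the JUPITER API.

```python
# Generic sketch (not UCODE_2014/DREAM): random-walk Metropolis sampling of a
# one-parameter posterior, followed by a 95% credible interval from quantiles.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=50)          # synthetic observations

def log_posterior(theta):
    # flat prior plus Gaussian likelihood with known unit variance
    return -0.5 * np.sum((data - theta) ** 2)

theta, samples = 0.0, []
for _ in range(20000):
    proposal = theta + rng.normal(0.0, 0.5)   # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal                      # accept
    samples.append(theta)

burned = np.array(samples[5000:])             # discard burn-in
lo, hi = np.percentile(burned, [2.5, 97.5])
print(f"95% credible interval: [{lo:.3f}, {hi:.3f}]")
```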

  16. Computational methods in several fields of radiation dosimetry

    International Nuclear Information System (INIS)

    Paretzke, Herwig G.

    2010-01-01

    Full text: Radiation dosimetry has to cope with a wide spectrum of applications and requirements in time and size. The ubiquitous presence of various radiation fields or radionuclides in the human home, working, urban or agricultural environment can lead to various dosimetric tasks starting from radioecology, retrospective and predictive dosimetry, personal dosimetry, up to measurements of radionuclide concentrations in environmental and food products and, finally, in persons and their excreta. In all these fields, measurements and computational models for the interpretation or understanding of observations are employed explicitly or implicitly. In this lecture, some examples of our own computational models will be given from the various dosimetric fields, including a) Radioecology (e.g. with the code systems based on ECOSYS, which was developed far before the Chernobyl reactor accident, and tested thoroughly afterwards), b) Internal dosimetry (improved metabolism models based on our own data), c) External dosimetry (with the new ICRU-ICRP-Voxelphantom developed by our lab), d) Radiation therapy (with GEANT IV as applied to mixed reactor radiation incident on individualized voxel phantoms), e) Some aspects of nanodosimetric track structure computations (not dealt with in the other presentation of this author). Finally, some general remarks will be made on the high explicit or implicit importance of computational models in radiation protection and other research fields dealing with large systems, as well as on good scientific practices which should generally be followed when developing and applying such computational models.

  17. ‘Fast cast’ and ‘needle Tenotomy’ protocols with the Ponseti method to improve clubfoot management in Bangladesh

    Directory of Open Access Journals (Sweden)

    Angela Evans

    2017-11-01

    Full Text Available Abstract Background The management of congenital talipes equino varus (clubfoot) deformity has been transformed in the last 20 years as surgical correction has been replaced by the non-surgical Ponseti method. The Ponseti method consists of corrective serial casting followed by maintenance bracing and has been repeatedly demonstrated to give the best results; it is regarded as the ‘gold standard’ treatment for paediatric clubfoot. Methods To develop the study protocol, Level 2 evidence was used to modify the corrective casting phase of the Ponseti method in children aged up to 12 months. Using Level 4 evidence, the percutaneous Achilles tenotomy (PAT) was performed using a 19-gauge needle instead of a scalpel blade, a technique found to reduce bleeding and scarring. Results A total of 123 children participated in this study; 88 male, 35 female. Both feet were affected in 67 cases, left only in 22 cases, right only in 34 cases. Typical clubfeet were found in 112/123 cases, six atypical, five syndromic. The average age at first cast was 51 days (13–240 days). The average number of casts applied was five (2–10 casts). The average number of days between the first cast and brace was 37.8 days (10–122 days), including 21 days in a post-PAT cast. Hence, the average time in corrective casts was 17 days. Parents preferred the reduced casting time, and were less concerned about unseen skin wounds. PAT was performed in 103/123 cases, using the needle technique. All post tenotomy casts were in situ for three weeks. Minor complications occurred in seven cases: four had skin lesions and three had a disrupted casting phase. At another site, 452 PAT were performed using the needle technique. Conclusions The ‘fast cast’ Ponseti casting protocol was successfully used in infants aged less than 8 months. Extended manual manipulation of two minutes was the essential modification. Parents preferred the faster treatment phase, and ability to closer observe

  18. Survey of computed tomography doses in head and chest protocols; Levantamento de doses em tomografia computadorizada em protocolos de cranio e torax

    Energy Technology Data Exchange (ETDEWEB)

    Souza, Giordana Salvi de; Silva, Ana Maria Marques da, E-mail: giordana.souza@acad.pucrs.br [Pontificia Universidade Catolica do Rio Grande do Sul (PUC-RS), Porto Alegre, RS (Brazil). Faculdade de Fisica. Nucleo de Pesquisa em Imagens Medicas

    2016-07-01

    Computed tomography is a clinical tool for the diagnosis of patients. However, the patient is subjected to a complex dose distribution. The aim of this study was to survey dose indicators in head and chest CT protocols, in terms of Dose-Length Product (DLP) and effective dose for adult and pediatric patients, comparing them with diagnostic reference levels in the literature. Patients were divided into age groups and the following image acquisition parameters were collected: age, kV, mAs, Volumetric Computed Tomography Dose Index (CTDIvol) and DLP. The effective dose was found by multiplying DLP by correction factors. The results were obtained from the third quartile and showed the importance of determining kV and mAs values for each patient depending on the studied region, age and thickness. (author)
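
    The study's own dose data are not reproduced here. The arithmetic it describes, effective dose estimated as DLP multiplied by a region-specific conversion coefficient, is sketched below; the k-factors and the example DLP are illustrative placeholders, not values from the paper.

```python
# Effective dose from DLP via region-specific conversion coefficients.
# The k-factors below are illustrative placeholders, not the study's values.
K_FACTORS_MSV_PER_MGY_CM = {
    ("head", "adult"): 0.0021,
    ("chest", "adult"): 0.014,
}

def effective_dose_msv(dlp_mgy_cm: float, region: str, age_group: str) -> float:
    """Return the effective dose estimate E = DLP * k, in mSv."""
    return dlp_mgy_cm * K_FACTORS_MSV_PER_MGY_CM[(region, age_group)]

print(effective_dose_msv(450.0, "chest", "adult"))   # 6.3 mSv for a 450 mGy*cm DLP
```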

  19. A fast computing method to distinguish the hyperbolic trajectory of an non-autonomous system

    Science.gov (United States)

    Jia, Meng; Fan, Yang-Yu; Tian, Wei-Jian

    2011-03-01

    Attempting to find a fast method for computing the DHT (distinguished hyperbolic trajectory), this study first proves that the errors of the stable DHT can be ignored in the normal direction when they are computed as the trajectories extend. This conclusion means that the stable flow with perturbation will approach the real trajectory as it extends over time. Based on this theory and combined with the improved DHT computing method, this paper reports a new fast DHT computing method, which increases the DHT computing speed without decreasing its accuracy. Project supported by the National Natural Science Foundation of China (Grant No. 60872159).

  20. A fast computing method to distinguish the hyperbolic trajectory of an non-autonomous system

    International Nuclear Information System (INIS)

    Jia Meng; Fan Yang-Yu; Tian Wei-Jian

    2011-01-01

    Attempting to find a fast method for computing the DHT (distinguished hyperbolic trajectory), this study first proves that the errors of the stable DHT can be ignored in the normal direction when they are computed as the trajectories extend. This conclusion means that the stable flow with perturbation will approach the real trajectory as it extends over time. Based on this theory and combined with the improved DHT computing method, this paper reports a new fast DHT computing method, which increases the DHT computing speed without decreasing its accuracy. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)

  1. Decomposition and Cross-Product-Based Method for Computing the Dynamic Equation of Robots

    Directory of Open Access Journals (Sweden)

    Ching-Long Shih

    2012-08-01

    Full Text Available This paper aims to demonstrate a clear relationship between Lagrange equations and Newton-Euler equations regarding computational methods for robot dynamics, from which we derive a systematic method for using either symbolic or on-line numerical computations. Based on the decomposition approach and cross-product operation, a computing method for robot dynamics can be easily developed. The advantages of this computing framework are that it can be used for both symbolic and on-line numeric computation purposes, and it can also be applied to biped systems, as well as some simple closed-chain robot systems.
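
    The paper's decomposition and cross-product framework is not reproduced here. As a much smaller illustration of deriving a robot dynamic equation symbolically, the sketch below forms the Euler-Lagrange equation for a single massless link with a point mass at its tip (a one-degree-of-freedom pendulum); the symbols are generic and not the paper's notation.

```python
# Symbolic derivation of the dynamic equation of a one-link arm with SymPy
# (a minimal stand-in for symbolic robot-dynamics computation, not the
# paper's decomposition/cross-product method).
import sympy as sp

q, qd, qdd = sp.symbols('q qd qdd')            # joint angle, velocity, acceleration
m, l, g = sp.symbols('m l g', positive=True)   # mass, link length, gravity

T = sp.Rational(1, 2) * m * l**2 * qd**2       # kinetic energy
V = -m * g * l * sp.cos(q)                     # potential energy
L = T - V                                      # Lagrangian

# Euler-Lagrange: d/dt(dL/dqd) - dL/dq, with d/dt expanded via the chain rule.
dL_dqd = sp.diff(L, qd)
ddt_dL_dqd = sp.diff(dL_dqd, q) * qd + sp.diff(dL_dqd, qd) * qdd
eom = sp.simplify(ddt_dL_dqd - sp.diff(L, q))

print(eom)   # m*l**2*qdd + m*g*l*sin(q), the familiar pendulum equation
```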

  2. Recent Advances in Computational Methods for Nuclear Magnetic Resonance Data Processing

    KAUST Repository

    Gao, Xin

    2013-01-11

    Although three-dimensional protein structure determination using nuclear magnetic resonance (NMR) spectroscopy is a computationally costly and tedious process that would benefit from advanced computational techniques, it has not garnered much research attention from specialists in bioinformatics and computational biology. In this paper, we review recent advances in computational methods for NMR protein structure determination. We summarize the advantages of and bottlenecks in the existing methods and outline some open problems in the field. We also discuss current trends in NMR technology development and suggest directions for research on future computational methods for NMR.

  3. Simplified method of computation for fatigue crack growth

    International Nuclear Information System (INIS)

    Stahlberg, R.

    1978-01-01

    A procedure is described for drastically reducing the computation time in calculating crack growth for variable-amplitude fatigue loading when the loading sequence is periodic. By the proposed procedure, the crack growth, r, per loading is approximated as a smooth function and its reciprocal is integrated, rather than summing crack growth cycle by cycle. The saving in computation time results because only a few pointwise values of r must be computed to generate an accurate interpolation function for numerical integration. Further time savings can be achieved by selecting the stress intensity coefficient (stress intensity divided by load) as the argument of r. Once r has been obtained as a function of stress intensity coefficient for a given material, environment, and loading sequence, it applies to any configuration of cracked structure. (orig.)
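
    The report itself is not included in this record, so the sketch below only illustrates the integration idea it describes: evaluate the growth per loading period r at a few crack lengths, interpolate, and integrate 1/r instead of summing cycle by cycle. The Paris-law constants, geometry factor, and crack-length range are hypothetical.

```python
# Sketch of the "integrate 1/r" idea for periodic variable-amplitude loading.
# All material constants and the geometry factor are hypothetical.
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import interp1d

C, m_exp, delta_sigma = 1.0e-11, 3.0, 120.0      # hypothetical Paris-law data

def growth_per_block(a):
    """Crack growth per loading block at crack length a (simple centre crack)."""
    delta_K = delta_sigma * np.sqrt(np.pi * a)
    return C * delta_K ** m_exp

# Only a few pointwise values of r are needed to build the interpolant.
a_pts = np.linspace(0.001, 0.02, 8)
inv_r = interp1d(a_pts, 1.0 / growth_per_block(a_pts), kind="cubic")

blocks_to_failure, _ = quad(inv_r, 0.001, 0.02)  # integrate 1/r over crack length
print(f"approx. {blocks_to_failure:.3g} loading blocks to grow from 1 mm to 20 mm")
```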

  4. A hybrid method for the parallel computation of Green's functions

    DEFF Research Database (Denmark)

    Petersen, Dan Erik; Li, Song; Stokbro, Kurt

    2009-01-01

    of the large number of times this calculation needs to be performed, this is computationally very expensive even on supercomputers. The classical approach is based on recurrence formulas which cannot be efficiently parallelized. This practically prevents the solution of large problems with hundreds...... of thousands of atoms. We propose new recurrences for a general class of sparse matrices to calculate Green's and lesser Green's function matrices which extend formulas derived by Takahashi and others. We show that these recurrences may lead to a dramatically reduced computational cost because they only...... require computing a small number of entries of the inverse matrix. Then, we propose a parallelization strategy for block tridiagonal matrices which involves a combination of Schur complement calculations and cyclic reduction. It achieves good scalability even on problems of modest size....
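
    The paper's generalized recurrences and their Schur-complement/cyclic-reduction parallelization are not reproduced here. For orientation, the sketch below implements the classical serial recurrence for the diagonal blocks of the inverse of a block-tridiagonal matrix (the kind of calculation the paper accelerates) and checks it against a dense inverse on a toy problem.

```python
# Serial sketch of the classical block-tridiagonal recurrence for the diagonal
# blocks of a matrix inverse (a Green's-function-style calculation); not the
# paper's generalized, parallel algorithm.
import numpy as np

def diag_blocks_of_inverse(D, B, C):
    """D[i] = A[i,i]; B[i] = A[i,i+1]; C[i] = A[i+1,i] (lists of square blocks)."""
    n = len(D)
    sigma = [np.zeros_like(D[0]) for _ in range(n)]   # left-connected self-energy
    tau = [np.zeros_like(D[0]) for _ in range(n)]     # right-connected self-energy
    for i in range(1, n):
        sigma[i] = C[i - 1] @ np.linalg.inv(D[i - 1] - sigma[i - 1]) @ B[i - 1]
    for i in range(n - 2, -1, -1):
        tau[i] = B[i] @ np.linalg.inv(D[i + 1] - tau[i + 1]) @ C[i]
    return [np.linalg.inv(D[i] - sigma[i] - tau[i]) for i in range(n)]

rng = np.random.default_rng(1)
n, b = 4, 3                                           # 4 diagonal blocks of size 3x3
D = [rng.normal(size=(b, b)) + 5 * np.eye(b) for _ in range(n)]
B = [rng.normal(size=(b, b)) for _ in range(n - 1)]
C = [rng.normal(size=(b, b)) for _ in range(n - 1)]
G_diag = diag_blocks_of_inverse(D, B, C)

# Cross-check one diagonal block against a dense inverse of the assembled matrix.
A = np.zeros((n * b, n * b))
for i in range(n):
    A[i*b:(i+1)*b, i*b:(i+1)*b] = D[i]
for i in range(n - 1):
    A[i*b:(i+1)*b, (i+1)*b:(i+2)*b] = B[i]
    A[(i+1)*b:(i+2)*b, i*b:(i+1)*b] = C[i]
print(np.allclose(G_diag[2], np.linalg.inv(A)[2*b:3*b, 2*b:3*b]))   # True
```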

  5. Comparison of four classification methods for brain-computer interface

    Czech Academy of Sciences Publication Activity Database

    Frolov, A.; Húsek, Dušan; Bobrov, P.

    2011-01-01

    Roč. 21, č. 2 (2011), s. 101-115 ISSN 1210-0552 R&D Projects: GA MŠk(CZ) 1M0567; GA ČR GA201/05/0079; GA ČR GAP202/10/0262 Institutional research plan: CEZ:AV0Z10300504 Keywords : brain computer interface * motor imagery * visual imagery * EEG pattern classification * Bayesian classification * Common Spatial Patterns * Common Tensor Discriminant Analysis Subject RIV: IN - Informatics, Computer Science Impact factor: 0.646, year: 2011

  6. The Extrapolation-Accelerated Multilevel Aggregation Method in PageRank Computation

    Directory of Open Access Journals (Sweden)

    Bing-Yuan Pu

    2013-01-01

    Full Text Available An accelerated multilevel aggregation method is presented for calculating the stationary probability vector of an irreducible stochastic matrix in PageRank computation, with a vector extrapolation method serving as the accelerator. We show how to periodically combine the extrapolation method with the multilevel aggregation method on the finest level to speed up the PageRank computation. Detailed numerical results are given to illustrate the behavior of this method, and comparisons with typical methods are also made.
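
    The multilevel aggregation component is not shown in this record. The sketch below only illustrates the extrapolation side of the idea: a standard PageRank power iteration with a periodic component-wise Aitken delta-squared step; the damping factor, extrapolation period, and example matrix are arbitrary choices.

```python
# PageRank power iteration with periodic Aitken delta-squared extrapolation;
# a generic acceleration illustration, not the paper's multilevel scheme.
import numpy as np

def pagerank_extrapolated(P, alpha=0.85, tol=1e-10, extrap_every=10):
    """P must be column-stochastic (dense here for simplicity)."""
    n = P.shape[0]
    v = np.full(n, 1.0 / n)                 # teleportation vector
    x, history = v.copy(), []
    for k in range(1, 1000):
        x_new = alpha * (P @ x) + (1.0 - alpha) * v
        history.append(x_new)
        if k % extrap_every == 0 and len(history) >= 3:
            x0, x1, x2 = history[-3], history[-2], history[-1]
            denom = x2 - 2.0 * x1 + x0
            ok = np.abs(denom) > 1e-14
            x_new = x2.copy()
            x_new[ok] = x2[ok] - (x2[ok] - x1[ok]) ** 2 / denom[ok]
            x_new = np.abs(x_new) / np.abs(x_new).sum()   # keep it a distribution
        if np.linalg.norm(x_new - x, 1) < tol:
            return x_new
        x = x_new
    return x

P = np.array([[0.0, 0.0, 1.0, 0.5],          # tiny 4-page link matrix
              [1/3, 0.0, 0.0, 0.0],
              [1/3, 0.5, 0.0, 0.5],
              [1/3, 0.5, 0.0, 0.0]])
print(pagerank_extrapolated(P).round(4))
```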

  7. A secure RFID mutual authentication protocol for healthcare environments using elliptic curve cryptography.

    Science.gov (United States)

    Jin, Chunhua; Xu, Chunxiang; Zhang, Xiaojun; Zhao, Jining

    2015-03-01

    Radio Frequency Identification (RFID) is an automatic identification technology which can be widely used in healthcare environments to locate and track staff, equipment and patients. However, potential security and privacy problems in RFID systems remain a challenge. In this paper, we design a mutual authentication protocol for RFID based on elliptic curve cryptography (ECC). We use a pre-computing method within the tag's communication so that our protocol achieves better efficiency. In terms of security, our protocol can achieve confidentiality, unforgeability, mutual authentication, tag anonymity, availability and forward security. Our protocol also overcomes the weaknesses of existing protocols. Therefore, our protocol is suitable for healthcare environments.
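
    The published protocol and its pre-computation optimization are not reproduced in this record. The sketch below, which assumes a recent release of the Python `cryptography` package, shows a generic ECC-based mutual challenge-response (static-static ECDH to derive a shared key, then HMACs over fresh nonces to prove possession); it is illustrative only and omits the tag-anonymity and forward-security mechanisms the paper claims.

```python
# Illustrative ECC-based mutual challenge-response (not the published RFID
# protocol): both sides derive a shared key via ECDH and prove possession of
# it by answering the other's nonce with an HMAC tag.
import os
from cryptography.hazmat.primitives import hashes, hmac
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

tag_priv = ec.generate_private_key(ec.SECP256R1())      # RFID tag key pair
reader_priv = ec.generate_private_key(ec.SECP256R1())   # reader key pair

def session_key(own_priv, peer_pub):
    shared = own_priv.exchange(ec.ECDH(), peer_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"rfid-auth-demo").derive(shared)

def mac(key, *parts):
    h = hmac.HMAC(key, hashes.SHA256())
    for p in parts:
        h.update(p)
    return h.finalize()

k_tag = session_key(tag_priv, reader_priv.public_key())
k_reader = session_key(reader_priv, tag_priv.public_key())

n_reader, n_tag = os.urandom(16), os.urandom(16)        # fresh challenges
tag_response = mac(k_tag, n_reader, b"tag")             # tag answers the reader
reader_response = mac(k_reader, n_tag, b"reader")       # reader answers the tag

assert tag_response == mac(k_reader, n_reader, b"tag")      # reader verifies tag
assert reader_response == mac(k_tag, n_tag, b"reader")      # tag verifies reader
print("mutual authentication demo succeeded")
```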

  8. AN ENHANCED METHOD FOR EXTENDING COMPUTATION AND RESOURCES BY MINIMIZING SERVICE DELAY IN EDGE CLOUD COMPUTING

    OpenAIRE

    B. Bavishna; M. Agalya; G. Kavitha

    2018-01-01

    A great deal of research has been done in the field of cloud computing. A variety of algorithms has been proposed for its effective performance. The role of virtualization is significant, and performance depends on VM migration and allocation. Much energy is consumed in the cloud; therefore, numerous algorithms are required for saving energy and enhancing efficiency. In the proposed work, a green algorithm has been considered with ...

  9. Chapter 10: Peak Demand and Time-Differentiated Energy Savings Cross-Cutting Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Stern, Frank [Navigant, Boulder, CO (United States); Spencer, Justin [Navigant, Boulder, CO (United States)

    2017-10-03

    Savings from electric energy efficiency measures and programs are often expressed in terms of annual energy and presented as kilowatt-hours per year (kWh/year). However, for a full assessment of the value of these savings, it is usually necessary to consider the measure or program's impact on peak demand as well as time-differentiated energy savings. This cross-cutting protocol describes methods for estimating the peak demand and time-differentiated energy impacts of measures implemented through energy efficiency programs.
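
    The protocol's own estimation methods and worked examples are not included in this record. As a minimal arithmetic illustration with made-up numbers, the sketch below derives peak-coincident demand savings from an hourly (8,760-value) savings profile by averaging over a hypothetical utility peak window.

```python
# Toy illustration (made-up numbers, not the protocol's worked examples):
# peak-coincident demand savings from an hourly savings profile, taken as the
# average kW saved during a defined summer afternoon peak window.
import numpy as np

rng = np.random.default_rng(7)
hourly_kwh_savings = rng.uniform(0.0, 1.5, size=8760)   # hypothetical load shape

hours = np.arange(8760)
day_of_year, hour_of_day = hours // 24, hours % 24
# Hypothetical peak window: June-August, 14:00-18:00 (weekday screening omitted).
peak_mask = ((day_of_year >= 151) & (day_of_year <= 242) &
             (hour_of_day >= 14) & (hour_of_day < 18))

annual_kwh = hourly_kwh_savings.sum()
peak_kw = hourly_kwh_savings[peak_mask].mean()   # kWh in one hour equals average kW
print(f"annual savings: {annual_kwh:.0f} kWh; peak demand savings: {peak_kw:.2f} kW")
```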

  10. Computational methods to dissect cis-regulatory transcriptional ...

    Indian Academy of Sciences (India)

    The formation of diverse cell types from an invariant set of genes is governed by biochemical and molecular processes that regulate gene activity. A complete understanding of the regulatory mechanisms of gene expression is the major function of genomics. Computational genomics is a rapidly emerging area for ...

  11. Computer methods in designing tourist equipment for people with disabilities

    Science.gov (United States)

    Zuzda, Jolanta GraŻyna; Borkowski, Piotr; Popławska, Justyna; Latosiewicz, Robert; Moska, Eleonora

    2017-11-01

    Modern technologies enable disabled people to enjoy physical activity every day. Many new devices are individually matched and created for people who enjoy active tourism, giving them wider opportunities for active pastimes. The process of creating this type of device, at every stage from initial design through assessment to validation, is assisted by various types of computer support software.

  12. New design methods for computer aided architectural design methodology teaching

    NARCIS (Netherlands)

    Achten, H.H.

    2003-01-01

    Architects and architectural students are exploring new ways of design using Computer Aided Architectural Design software. This exploration is seldom backed up from a design methodological viewpoint. In this paper, a design methodological framework for reflection on innovative design processes by

  13. Computational methods for more fuel-efficient ship

    NARCIS (Netherlands)

    Koren, B.

    2008-01-01

    The flow of water around a ship powered by a combustion engine is a key factor in the ship's fuel consumption. The simulation of flow patterns around ship hulls is therefore an important aspect of ship design. While lengthy computations are required for such simulations, research by Jeroen Wackers

  14. New Methods of Mobile Computing: From Smartphones to Smart Education

    Science.gov (United States)

    Sykes, Edward R.

    2014-01-01

    Every aspect of our daily lives has been touched by the ubiquitous nature of mobile devices. We have experienced an exponential growth of mobile computing--a trend that seems to have no limit. This paper provides a report on the findings of a recent offering of an iPhone Application Development course at Sheridan College, Ontario, Canada. It…

  15. An affective music player: Methods and models for physiological computing

    NARCIS (Netherlands)

    Janssen, J.H.; Westerink, J.H.D.M.; van den Broek, Egon

    2009-01-01

    Affective computing is embraced by many to create more intelligent systems and smart environments. In this thesis, a specific affective application is envisioned: an affective physiological music player (APMP), which should be able to direct its user's mood. In a first study, the relationship

  16. Computer Facilitated Mathematical Methods in Chemical Engineering--Similarity Solution

    Science.gov (United States)

    Subramanian, Venkat R.

    2006-01-01

    High-performance computers coupled with highly efficient numerical schemes and user-friendly software packages have helped instructors to teach numerical solutions and analysis of various nonlinear models more efficiently in the classroom. One of the main objectives of a model is to provide insight about the system of interest. Analytical…

  17. All for One: Integrating Budgetary Methods by Computer.

    Science.gov (United States)

    Herman, Jerry J.

    1994-01-01

    With the advent of high speed and sophisticated computer programs, all budgetary systems can be combined in one fiscal management information system. Defines and provides examples for the four budgeting systems: (1) function/object; (2) planning, programming, budgeting system; (3) zero-based budgeting; and (4) site-based budgeting. (MLF)

  18. Method for quantitative assessment of nuclear safety computer codes

    International Nuclear Information System (INIS)

    Dearien, J.A.; Davis, C.B.; Matthews, L.J.

    1979-01-01

    A procedure has been developed for the quantitative assessment of nuclear safety computer codes and tested by comparison of RELAP4/MOD6 predictions with results from two Semiscale tests. This paper describes the developed procedure, the application of the procedure to the Semiscale tests, and the results obtained from the comparison

  19. A Parameter Estimation Method for Dynamic Computational Cognitive Models

    NARCIS (Netherlands)

    Thilakarathne, D.J.

    2015-01-01

    A dynamic computational cognitive model can be used to explore a selected complex cognitive phenomenon by providing some features or patterns over time. More specifically, it can be used to simulate, analyse and explain the behaviour of such a cognitive phenomenon. It generates output data in the

  20. Computed radiography imaging plates and associated methods of manufacture

    Science.gov (United States)

    Henry, Nathaniel F.; Moses, Alex K.

    2015-08-18

    Computed radiography imaging plates incorporating an intensifying material that is coupled to or intermixed with the phosphor layer, allowing electrons and/or low energy x-rays to impart their energy on the phosphor layer, while decreasing internal scattering and increasing resolution. The radiation needed to perform radiography can also be reduced as a result.

  1. Verifying a computational method for predicting extreme ground motion

    Science.gov (United States)

    Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, Brad T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.

    2011-01-01

    In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.

  2. A comparison of methods for the assessment of postural load and duration of computer use

    NARCIS (Netherlands)

    Heinrich, J.; Blatter, B.M.; Bongers, P.M.

    2004-01-01

    Aim: To compare two different methods for assessment of postural load and duration of computer use in office workers. Methods: The study population consisted of 87 computer workers. Questionnaire data about exposure were compared with exposures measured by a standardised or objective method. Measuring

  3. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    Science.gov (United States)

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. The method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems, and applications of the method are also included. (HM)
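
    The article's teaching program is not available in this record. A minimal modern sketch of the same idea, minimizing the ideal-gas Gibbs free energy subject to an element balance with SciPy, is shown below for the N2O4/NO2 system at 298 K and 1 bar; the standard formation energies are textbook values used only for illustration.

```python
# Chemical equilibrium by free-energy minimization (SciPy sketch, not the
# article's program): N2O4 <-> 2 NO2 at 298 K, 1 bar, starting from 1 mol N2O4.
import numpy as np
from scipy.optimize import minimize

R, T = 8.314, 298.15                       # J/(mol K), K
mu0 = np.array([97.9e3, 51.3e3])           # standard Gibbs energies [N2O4, NO2], J/mol
n_N = np.array([2, 1])                     # nitrogen atoms per molecule

def gibbs(n):
    n = np.clip(n, 1e-12, None)            # keep the logarithms finite
    return np.sum(n * (mu0 + R * T * np.log(n / n.sum())))

# One element balance suffices here: oxygen is fixed by nitrogen for these species.
res = minimize(gibbs, x0=np.array([0.5, 0.5]),
               bounds=[(1e-10, None)] * 2,
               constraints=[{"type": "eq", "fun": lambda n: n_N @ n - 2.0}])
print(res.x)   # approximately [0.81, 0.38] mol of N2O4 and NO2
```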

  4. High-Bandwidth Tactical-Network Data Analysis in a High-Performance-Computing (HPC) Environment: Transport Protocol (Transmission Control Protocol/User Datagram Protocol [TCP/UDP]) Analysis

    Science.gov (United States)

    2015-09-01

    Subject terms: tactical networks, data reduction, high-performance computing, data analysis, big data

  5. Availability and performance of image/video-based vital signs monitoring methods: a systematic review protocol

    Directory of Open Access Journals (Sweden)

    Mirae Harford

    2017-10-01

    Full Text Available Abstract Background For many vital signs, monitoring methods require contact with the patient and/or are invasive in nature. There is increasing interest in developing still and video image-guided monitoring methods that are non-contact and non-invasive. We will undertake a systematic review of still and video image-based monitoring methods. Methods We will perform searches in multiple databases which include MEDLINE, Embase, CINAHL, Cochrane library, IEEE Xplore and ACM Digital Library. We will use OpenGrey and Google searches to access unpublished or commercial data. We will not use language or publication date restrictions. The primary goal is to summarise current image-based vital signs monitoring methods, limited to heart rate, respiratory rate, oxygen saturations and blood pressure. Of particular interest will be the effectiveness of image-based methods compared to reference devices. Other outcomes of interest include the quality of the method comparison studies with respect to published reporting guidelines, any limitations of non-contact non-invasive technology and application in different populations. Discussion To the best of our knowledge, this is the first systematic review of image-based non-contact methods of vital signs monitoring. Synthesis of currently available technology will facilitate future research in this highly topical area. Systematic review registration PROSPERO CRD42016029167

  6. 3D ultrasound computer tomography: Hardware setup, reconstruction methods and first clinical results

    Science.gov (United States)

    Gemmeke, Hartmut; Hopp, Torsten; Zapf, Michael; Kaiser, Clemens; Ruiter, Nicole V.

    2017-11-01

    A promising candidate for improved imaging of breast cancer is ultrasound computer tomography (USCT). Current experimental USCT systems are still focused in the elevation dimension, resulting in a large slice thickness, limited depth of field, loss of out-of-plane reflections, and a large number of movement steps to acquire a stack of images. A 3D USCT emitting and receiving spherical wave fronts overcomes these limitations. We built an optimized 3D USCT, realizing for the first time the full benefits of a 3D system. The point spread function could be shown to be nearly isotropic in 3D, to have very low spatial variability and to fit the predicted values. The contrast of the phantom images is very satisfactory in spite of imaging with a sparse aperture. The resolution and imaged details of the reflectivity reconstruction are comparable to a 3 T MRI volume. The simultaneously obtained transmission tomography results are important for the achieved resolution. The KIT 3D USCT was then tested in a pilot study on ten patients. The primary goals of the pilot study were to test the USCT device, the data acquisition protocols, the image reconstruction methods and the image fusion techniques in a clinical environment. The study was conducted successfully; the data acquisition could be carried out for all patients with an average imaging time of six minutes per breast. The reconstructions provide promising images. Overlaid volumes of the modalities show qualitative and quantitative information at a glance. This paper gives a summary of the involved techniques, methods, and first results.

  7. Availability and performance of image/video-based vital signs monitoring methods: a systematic review protocol.

    Science.gov (United States)

    Harford, Mirae; Catherall, Jacqueline; Gerry, Stephen; Young, Duncan; Watkinson, Peter

    2017-10-25

    For many vital signs, monitoring methods require contact with the patient and/or are invasive in nature. There is increasing interest in developing still and video image-guided monitoring methods that are non-contact and non-invasive. We will undertake a systematic review of still and video image-based monitoring methods. We will perform searches in multiple databases which include MEDLINE, Embase, CINAHL, Cochrane library, IEEE Xplore and ACM Digital Library. We will use OpenGrey and Google searches to access unpublished or commercial data. We will not use language or publication date restrictions. The primary goal is to summarise current image-based vital signs monitoring methods, limited to heart rate, respiratory rate, oxygen saturations and blood pressure. Of particular interest will be the effectiveness of image-based methods compared to reference devices. Other outcomes of interest include the quality of the method comparison studies with respect to published reporting guidelines, any limitations of non-contact non-invasive technology and application in different populations. To the best of our knowledge, this is the first systematic review of image-based non-contact methods of vital signs monitoring. Synthesis of currently available technology will facilitate future research in this highly topical area. PROSPERO CRD42016029167.

  8. Mobile Internet Protocol Analysis

    National Research Council Canada - National Science Library

    Brachfeld, Lawrence

    1999-01-01

    ...) and User Datagram Protocol (UDP). Mobile IP allows mobile computers to send and receive packets addressed with their home network IP address, regardless of the IP address of their current point of attachment on the Internet...

  9. Assessing the Efficacy of an App-Based Method of Family Planning: The Dot Study Protocol

    OpenAIRE

    Simmons, Rebecca G; Shattuck, Dominick C; Jennings, Victoria H

    2017-01-01

    Background Some 222 million women worldwide have unmet needs for contraception; they want to avoid pregnancy, but are not using a contraceptive method, primarily because of concerns about side effects associated with most available methods. Expanding contraceptive options, particularly fertility awareness options that provide women with information about which days during their menstrual cycles they are likely to become pregnant if they have unprotected intercourse, has the potential to reduce ...

  10. Exploration of barriers and facilitators to publishing local public health findings: A mixed methods protocol

    OpenAIRE

    Smith, Selina A.; Webb, Nancy C.; Blumenthal, Daniel S.; Willcox, Bobbie; Ballance, Darra; Kinard, Faith; Gates, Madison L.

    2016-01-01

    Background Worldwide, the US accounts for a large proportion of journals related to public health. Although the American Public Health Association (APHA) includes 54 affiliated regional and state associations, little is known about their capacity to support public health scholarship. The aim of this study is to assess barriers and facilitators to operation of state journals for the dissemination of local public health research and practices. Methods A mixed methods approach will be used to co...

  11. Inferring biological functions of guanylyl cyclases with computational methods

    KAUST Repository

    Alquraishi, May Majed; Meier, Stuart Kurt

    2013-01-01

    A number of studies have shown that functionally related genes are often co-expressed and that computational based co-expression analysis can be used to accurately identify functional relationships between genes and by inference, their encoded proteins. Here we describe how a computational based co-expression analysis can be used to link the function of a specific gene of interest to a defined cellular response. Using a worked example we demonstrate how this methodology is used to link the function of the Arabidopsis Wall-Associated Kinase-Like 10 gene, which encodes a functional guanylyl cyclase, to host responses to pathogens. © Springer Science+Business Media New York 2013.

  12. Inferring biological functions of guanylyl cyclases with computational methods

    KAUST Repository

    Alquraishi, May Majed

    2013-09-03

    A number of studies have shown that functionally related genes are often co-expressed and that computational based co-expression analysis can be used to accurately identify functional relationships between genes and by inference, their encoded proteins. Here we describe how a computational based co-expression analysis can be used to link the function of a specific gene of interest to a defined cellular response. Using a worked example we demonstrate how this methodology is used to link the function of the Arabidopsis Wall-Associated Kinase-Like 10 gene, which encodes a functional guanylyl cyclase, to host responses to pathogens. © Springer Science+Business Media New York 2013.

  13. Computational Nuclear Physics and Post Hartree-Fock Methods

    Energy Technology Data Exchange (ETDEWEB)

    Lietz, Justin [Michigan State University; Sam, Novario [Michigan State University; Hjorth-Jensen, M. [University of Oslo, Norway; Hagen, Gaute [ORNL; Jansen, Gustav R. [ORNL

    2017-05-01

    We present a computational approach to infinite nuclear matter employing Hartree-Fock theory, many-body perturbation theory and coupled cluster theory. These lectures are closely linked with those of chapters 9, 10 and 11 and serve as input for the correlation functions employed in Monte Carlo calculations in chapter 9, the in-medium similarity renormalization group theory of dense fermionic systems of chapter 10 and the Green's function approach in chapter 11. We provide extensive code examples and benchmark calculations, thereby allowing the interested reader to start writing her/his own codes. We start with an object-oriented serial code and end with discussions on strategies for porting the code to present and planned high-performance computing facilities.
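
    The chapter's nuclear-matter codes are not reproduced in this record. The sketch below only illustrates the self-consistency loop at the heart of Hartree-Fock-type methods on a schematic four-level toy model (a density-dependent mean field iterated to a fixed point); the Hamiltonian, coupling, and energy expression are toy choices, not the chapter's.

```python
# Schematic self-consistent mean-field loop (toy model, not the chapter's
# nuclear-matter code): diagonalize a density-dependent field, rebuild the
# density from the occupied orbitals, and repeat until it stops changing.
import numpy as np

h0 = np.diag([0.0, 1.0, 2.0, 3.0])         # toy single-particle energies
g = 0.25                                   # toy coupling strength
n_occ = 2                                  # number of occupied orbitals

rho = np.diag([1.0, 1.0, 0.0, 0.0])        # initial guess for the density matrix
for iteration in range(200):
    fock = h0 + g * rho                    # schematic density-dependent mean field
    eps, c = np.linalg.eigh(fock)
    rho_new = c[:, :n_occ] @ c[:, :n_occ].T    # density from occupied eigenvectors
    if np.linalg.norm(rho_new - rho) < 1e-10:
        break
    rho = 0.5 * (rho + rho_new)            # damped update for stable convergence

energy = np.trace((h0 + 0.5 * g * rho) @ rho)  # schematic total energy
print(iteration, eps[:n_occ], energy)
```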

  14. Multi-Level iterative methods in computational plasma physics

    International Nuclear Information System (INIS)

    Knoll, D.A.; Barnes, D.C.; Brackbill, J.U.; Chacon, L.; Lapenta, G.

    1999-01-01

    Plasma physics phenomena occur on a wide range of spatial scales and on a wide range of time scales. When attempting to model plasma physics problems numerically, the authors are inevitably faced with the need for both fine spatial resolution (fine grids) and implicit time integration methods. Fine grids can tax the efficiency of iterative methods and large time steps can challenge the robustness of iterative methods. To meet these challenges, they are developing a hybrid approach where multigrid methods are used as preconditioners to Krylov subspace based iterative methods such as conjugate gradients or GMRES. For nonlinear problems they apply multigrid preconditioning to a matrix-free Newton-GMRES method. Results are presented for application of these multilevel iterative methods to the field solves in implicit moment method PIC, multidimensional nonlinear Fokker-Planck problems, and their initial efforts in particle MHD.
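
    The multigrid-preconditioned solvers themselves are not shown in this record. The sketch below illustrates only the matrix-free idea behind a Newton-GMRES (Jacobian-free Newton-Krylov) iteration, using SciPy on a toy one-dimensional nonlinear problem with no preconditioner; the residual function and tolerances are arbitrary.

```python
# Minimal Jacobian-free Newton-GMRES sketch (toy problem, no multigrid
# preconditioner): the Jacobian-vector product is approximated by a finite
# difference, so the Jacobian matrix is never formed or stored.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def residual(u):
    # toy discretized nonlinear problem: -u'' + u**3 = 1 with u = 0 at both ends
    r = np.empty_like(u)
    r[0], r[-1] = u[0], u[-1]
    r[1:-1] = -(u[2:] - 2.0 * u[1:-1] + u[:-2]) + u[1:-1] ** 3 - 1.0
    return r

u = np.zeros(50)
for newton_it in range(20):
    F = residual(u)
    if np.linalg.norm(F) < 1e-10:
        break
    eps = 1e-7
    J = LinearOperator((u.size, u.size),
                       matvec=lambda v: (residual(u + eps * v) - F) / eps)
    du, info = gmres(J, -F, atol=1e-8)     # inner Krylov solve, matrix-free
    u = u + du

print(newton_it, np.linalg.norm(residual(u)))
```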

  15. Methods for the development of large computer codes under LTSS

    International Nuclear Information System (INIS)

    Sicilian, J.M.

    1977-06-01

    TRAC is a large computer code being developed by Group Q-6 for the analysis of the transient thermal hydraulic behavior of light-water nuclear reactors. A system designed to assist the development of TRAC is described. The system consists of a central HYDRA dataset, R6LIB, containing files used in the development of TRAC, and a file maintenance program, HORSE, which facilitates the use of this dataset

  16. Easy computer assisted teaching method for undergraduate surgery

    OpenAIRE

    Agrawal, Vijay P

    2015-01-01

    Use of computers to aid or support the education or training of people has become commonplace in medical education. Recent studies have shown that it can improve learning outcomes in diagnostic abilities, clinical skills and knowledge across different learner levels from undergraduate medical education to continuing medical education. It also enhances the educational process by increasing access to learning materials, standardising the educational process, providing opportunities for asynchron...

  17. The cell method a purely algebraic computational method in physics and engineering

    CERN Document Server

    Ferretti, Elena

    2014-01-01

    The Cell Method (CM) is a computational tool that maintains critical multidimensional attributes of physical phenomena in analysis. This information is neglected in the differential formulations of the classical approaches of finite element, boundary element, finite volume, and finite difference analysis, often leading to numerical instabilities and spurious results. This book highlights the central theoretical concepts of the CM that preserve a more accurate and precise representation of the geometric and topological features of variables for practical problem solving. Important applications occur in fields such as electromagnetics, electrodynamics, solid mechanics and fluids. CM addresses non-locality in continuum mechanics, an especially important circumstance in modeling heterogeneous materials. Professional engineers and scientists, as well as graduate students, are offered: A general overview of physics and its mathematical descriptions; Guidance on how to build direct, discrete formulations; Coverag...

  18. Stable numerical method in computation of stellar evolution

    International Nuclear Information System (INIS)

    Sugimoto, Daiichiro; Eriguchi, Yoshiharu; Nomoto, Ken-ichi.

    1982-01-01

    To compute the stellar structure and evolution in different stages, such as (1) red-giant stars in which the density and density gradient change over quite wide ranges, (2) rapid evolution with neutrino loss or unstable nuclear flashes, (3) hydrodynamical stages of star formation or supernova explosion, (4) transition phases from quasi-static to dynamical evolutions, (5) mass-accreting or losing stars in binary-star systems, and (6) evolution of a stellar core whose mass is increasing by shell burning or decreasing by penetration of the convective envelope into the core, we face "multi-timescale problems" which can be treated neither by a simple-minded explicit scheme nor by an implicit one. This problem has been resolved by three prescriptions: one by introducing a hybrid scheme suitable for the multi-timescale problems of quasi-static evolution with heat transport, another by also introducing a hybrid scheme suitable for the multi-timescale problems of hydrodynamic evolution, and the other by introducing the Eulerian or, in other words, the mass-fraction coordinate for evolution with changing mass. When all of them are combined in a single computer code, we can stably compute any phase of stellar evolution, including transition phases, as long as the star is spherically symmetric. (author)

  19. Unconventional methods of imaging: computational microscopy and compact implementations

    Science.gov (United States)

    McLeod, Euan; Ozcan, Aydogan

    2016-07-01

    In the past two decades or so, there has been a renaissance of optical microscopy research and development. Much work has been done in an effort to improve the resolution and sensitivity of microscopes, while at the same time to introduce new imaging modalities, and make existing imaging systems more efficient and more accessible. In this review, we look at two particular aspects of this renaissance: computational imaging techniques and compact imaging platforms. In many cases, these aspects go hand-in-hand because the use of computational techniques can simplify the demands placed on optical hardware in obtaining a desired imaging performance. In the first main section, we cover lens-based computational imaging, in particular, light-field microscopy, structured illumination, synthetic aperture, Fourier ptychography, and compressive imaging. In the second main section, we review lensfree holographic on-chip imaging, including how images are reconstructed, phase recovery techniques, and integration with smart substrates for more advanced imaging tasks. In the third main section we describe how these and other microscopy modalities have been implemented in compact and field-portable devices, often based around smartphones. Finally, we conclude with some comments about opportunities and demand for better results, and where we believe the field is heading.

  20. Analysis of multigrid methods on massively parallel computers: Architectural implications

    Science.gov (United States)

    Matheson, Lesley R.; Tarjan, Robert E.

    1993-01-01

    We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10^6 and 10^9, respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests an efficient implementation requires the machine to support the efficient transmission of long messages (up to 1000 words), or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.