WorldWideScience

Sample records for protocols computational methods

  1. A Distributed Snapshot Protocol for Efficient Artificial Intelligence Computation in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    JongBeom Lim

    2018-01-01

    Many artificial intelligence applications require a huge amount of computing resources. As a result, cloud computing adoption rates are increasing in the artificial intelligence field. To support the demand for artificial intelligence applications and guarantee the service level agreement, cloud computing should provide not only computing resources but also fundamental mechanisms for efficient computing. In this regard, a snapshot protocol has been used to create a consistent snapshot of the global state in cloud computing environments. However, existing snapshot protocols are not optimized for artificial intelligence applications, where large-scale iterative computation is the norm. In this paper, we present a distributed snapshot protocol for efficient artificial intelligence computation in cloud computing environments. The proposed snapshot protocol is based on a distributed algorithm that runs multiple interconnected nodes in a scalable fashion. Our snapshot protocol is able to deal with artificial intelligence applications in which a large number of computing nodes are running. We show that our distributed snapshot protocol guarantees correctness, safety, and liveness.
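
    The record does not reproduce the algorithm itself. As background, a minimal single-process sketch of the classic marker-based (Chandy-Lamport style) snapshot that such protocols build on is shown below; the `Node` class and its methods are illustrative inventions, not the paper's implementation, and the per-channel marker bookkeeping a full implementation needs is deliberately omitted.

```python
# Minimal single-process simulation of a marker-based (Chandy-Lamport style)
# snapshot. Names are illustrative; per-channel marker tracking is omitted.
from collections import deque

class Node:
    def __init__(self, name, state=0):
        self.name, self.state = name, state
        self.inbox = deque()                # FIFO channel into this node
        self.recorded_state = None          # local state frozen at snapshot time
        self.channel_log = []               # messages caught in flight
        self.snapshotting = False

    def start_snapshot(self, peers):
        self.recorded_state = self.state    # 1. record own state
        self.snapshotting = True
        for p in peers:                     # 2. send a marker on every channel
            p.inbox.append(("MARKER", self.name))

    def handle(self, msg, peers):
        kind, payload = msg
        if kind == "MARKER":
            if not self.snapshotting:       # first marker seen: join the snapshot
                self.start_snapshot(peers)
        else:
            self.state += payload           # ordinary application message
            if self.snapshotting:           # in flight during the snapshot:
                self.channel_log.append(payload)  # belongs to channel state

a, b = Node("a", 10), Node("b", 20)
b.inbox.append(("APP", 5))                  # message in flight from a to b
a.start_snapshot([b])
while b.inbox:
    b.handle(b.inbox.popleft(), [a])
print(b.recorded_state, b.channel_log)      # 25 [] : APP arrived before marker
```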

  2. Blind quantum computation protocol in which Alice only makes measurements

    Science.gov (United States)

    Morimae, Tomoyuki; Fujii, Keisuke

    2013-05-01

    Blind quantum computation is a new secure quantum computing protocol which enables Alice (who does not have sufficient quantum technology) to delegate her quantum computation to Bob (who has a full-fledged quantum computer) in such a way that Bob cannot learn anything about Alice's input, output, and algorithm. In previous protocols, Alice needs a device which generates quantum states, such as single-photon states. Here we propose another type of blind computing protocol where Alice performs only measurements, such as polarization measurements with a threshold detector. In several experimental setups, such as optical systems, the measurement of a state is much easier than the generation of a single-qubit state. Therefore our protocols ease Alice's burden. Furthermore, the security of our protocol is based on the no-signaling principle, which is more fundamental than quantum physics. Finally, our protocols are device independent in the sense that Alice does not need to trust her measurement device in order to guarantee the security.

  3. Private quantum computation: an introduction to blind quantum computing and related protocols

    Science.gov (United States)

    Fitzsimons, Joseph F.

    2017-06-01

    Quantum technologies hold the promise of not only faster algorithmic processing of data, via quantum computation, but also of more secure communications, in the form of quantum cryptography. In recent years, a number of protocols have emerged which seek to marry these concepts for the purpose of securing computation rather than communication. These protocols address the task of securely delegating quantum computation to an untrusted device while maintaining the privacy, and in some instances the integrity, of the computation. We present a review of the progress to date in this emerging area.

  4. Caltech computer scientists develop FAST protocol to speed up Internet

    CERN Multimedia

    2003-01-01

    "Caltech computer scientists have developed a new data transfer protocol for the Internet fast enough to download a full-length DVD movie in less than five seconds. The protocol is called FAST, standing for Fast Active queue management Scalable Transmission Control Protocol" (1 page).

  5. A Novel UDT-Based Transfer Speed-Up Protocol for Fog Computing

    Directory of Open Access Journals (Sweden)

    Zhijie Han

    2018-01-01

    Fog computing is a distributed computing model that forms the middle layer between the cloud data center and the IoT device/sensor. It provides computing, network, and storage devices so that cloud-based services can be closer to IoT devices and sensors. Cloud computing requires a lot of bandwidth, and the bandwidth of the wireless network is limited. In contrast, the amount of bandwidth required for fog computing is much less. In this paper, we improve a new protocol, the Peer-Assistant UDT-Based Data Transfer Protocol (PaUDT), applied to IoT-Cloud computing. Furthermore, we compare the efficiency of the congestion control algorithm of UDT with that of Adobe's Secure Real-Time Media Flow Protocol (RTMFP), which is based entirely on UDP at the transport layer. Finally, we build an evaluation model of UDT performance in terms of RTT and bit error ratio. The theoretical analysis and experimental results show that UDT performs well in IoT-Cloud computing.

  6. Adaptive security protocol selection for mobile computing

    NARCIS (Netherlands)

    Pontes Soares Rocha, B.; Costa, D.N.O.; Moreira, R.A.; Rezende, C.G.; Loureiro, A.A.F.; Boukerche, A.

    2010-01-01

    The mobile computing paradigm has introduced new problems for application developers. Challenges include heterogeneity of hardware, software, and communication protocols, variability of resource limitations and varying wireless channel quality. In this scenario, security becomes a major concern for...

  7. Minimal computational-space implementation of multiround quantum protocols

    International Nuclear Information System (INIS)

    Bisio, Alessandro; D'Ariano, Giacomo Mauro; Perinotti, Paolo; Chiribella, Giulio

    2011-01-01

    A single-party strategy in a multiround quantum protocol can be implemented by sequential networks of quantum operations connected by internal memories. Here, we provide an efficient realization in terms of computational-space resources.

  8. Computationally Developed Sham Stimulation Protocol for Multichannel Desynchronizing Stimulation

    Directory of Open Access Journals (Sweden)

    Magteld Zeitler

    2018-05-01

    A characteristic pattern of abnormal brain activity is abnormally strong neuronal synchronization, as found in several brain disorders, such as tinnitus, Parkinson's disease, and epilepsy. As observed in several diseases, different therapeutic interventions may induce a placebo effect that may be strong and hinder reliable clinical evaluations. Hence, to distinguish between specific, neuromodulation-induced effects and unspecific, placebo effects, it is important to mimic the therapeutic procedure as precisely as possible, thereby providing controls that actually lack specific effects. Coordinated Reset (CR) stimulation has been developed to specifically counteract abnormally strong synchronization by desynchronization. CR is a spatio-temporally patterned multichannel stimulation which reduces the extent of coincident neuronal activity and aims at an anti-kindling, i.e., an unlearning of both synaptic connectivity and neuronal synchrony. Apart from acute desynchronizing effects, CR may cause sustained, long-lasting desynchronizing effects, as already demonstrated in pre-clinical and clinical proof of concept studies. In this computational study, we set out to computationally develop a sham stimulation protocol for multichannel desynchronizing stimulation. To this end, we compare acute effects and long-lasting effects of six different spatio-temporally patterned stimulation protocols, including three variants of CR, using a no-stimulation condition as additional control. This is to provide an inventory of different stimulation algorithms with similar fundamental stimulation parameters (e.g., mean stimulation rates) but qualitatively different acute and/or long-lasting effects. Stimulation protocols sharing basic parameters, but nevertheless inducing completely different or even no acute effects and/or after-effects, might serve as controls to validate the specific effects of particular desynchronizing protocols such as CR. In particular, based on...
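
    For intuition, the sketch below generates the kind of spatio-temporally patterned sequence described above: in each stimulation cycle, every channel fires once, in a freshly shuffled order. The channel count, cycle length, and function name are illustrative assumptions, not parameters from the study.

```python
# Toy generator for a CR-like stimulation sequence: one shuffled pass over
# all channels per cycle. All parameter values are illustrative.
import random

def cr_sequence(n_channels=4, n_cycles=3, cycle_ms=66.0):
    events = []                            # (time_ms, channel) stimulus onsets
    slot = cycle_ms / n_channels           # one time slot per channel per cycle
    for cycle in range(n_cycles):
        order = random.sample(range(n_channels), n_channels)  # fresh shuffle
        for k, ch in enumerate(order):
            events.append((cycle * cycle_ms + k * slot, ch))
    return events

for t, ch in cr_sequence():
    print(f"{t:6.1f} ms -> channel {ch}")
```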

  9. Analysis and Verification of a Key Agreement Protocol over Cloud Computing Using Scyther Tool

    OpenAIRE

    Hazem A Elbaz

    2015-01-01

    Most cloud computing authentication mechanisms use public key infrastructure (PKI). Hierarchical Identity Based Cryptography (HIBC) has several advantages that align well with the demands of cloud computing. The main objectives of cloud computing authentication protocols are security and efficiency. In this paper, we clarify the Hierarchical Identity Based Authentication Key Agreement (HIB-AKA) protocol, providing a lightweight key management approach for cloud computing users. Then, we...

  10. Computer network time synchronization the network time protocol

    CERN Document Server

    Mills, David L

    2006-01-01

    What started with the sundial has, thus far, been refined to a level of precision based on atomic resonance: Time. Our obsession with time is evident in this continued scaling down to nanosecond resolution and beyond. But this obsession is not without warrant. Precision and time synchronization are critical in many applications, such as air traffic control and stock trading, and pose complex and important challenges in modern information networks. Penned by David L. Mills, the original developer of the Network Time Protocol (NTP), Computer Network Time Synchronization: The Network Time Protocol...
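
    As a concrete taste of what NTP computes, the standard clock offset and round-trip delay estimates from a single client-server exchange (as specified in the NTP documentation, e.g., RFC 5905) can be written as below; the timestamp values in the example are made up.

```python
# Clock offset and round-trip delay from the four NTP timestamps
# (t1: client send, t2: server receive, t3: server send, t4: client receive).
def ntp_offset_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # estimated client clock error
    delay = (t4 - t1) - (t3 - t2)            # round-trip network delay
    return offset, delay

# Example: a client clock running 0.05 s behind the server.
print(ntp_offset_delay(10.000, 10.070, 10.071, 10.041))  # ~(0.05, 0.04)
```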

  11. Protocol independent transmission method in software defined optical network

    Science.gov (United States)

    Liu, Yuze; Li, Hui; Hou, Yanfang; Qiu, Yajun; Ji, Yuefeng

    2016-10-01

    With the development of big data and cloud computing technology, the traditional software-defined network is facing new challenges (i.e., ubiquitous accessibility, higher bandwidth, more flexible management, and greater security). Using a proprietary protocol or encoding format is a way to improve information security. However, a flow carried by a proprietary protocol or code cannot go through the traditional IP network. In addition, ultra-high-definition video transmission service has once again become a hot spot. Traditionally, in the IP network, the Serial Digital Interface (SDI) signal must be compressed. This approach offers additional advantages but also brings some disadvantages, such as signal degradation and high latency. To some extent, HD-SDI can also be regarded as a proprietary protocol, which needs transparent transmission, such as over an optical channel. However, traditional optical networks cannot support flexible traffic. In response to the aforementioned challenges for future networks, one immediate solution would be to use NFV technology to abstract the network infrastructure and provide an all-optical switching topology graph for the SDN control plane. This paper proposes a new service-based software-defined optical network architecture, including an infrastructure layer, a virtualization layer, a service abstract layer and an application layer. We then dwell on the corresponding service providing method in order to implement protocol-independent transport. Finally, we experimentally demonstrate that the proposed service providing method can be applied to transmit the HD-SDI signal in the software-defined optical network.

  12. IMPROVED COMPUTATIONAL NEUTRONICS METHODS AND VALIDATION PROTOCOLS FOR THE ADVANCED TEST REACTOR

    Energy Technology Data Exchange (ETDEWEB)

    David W. Nigg; Joseph W. Nielsen; Benjamin M. Chase; Ronnie K. Murray; Kevin A. Steuhm

    2012-04-01

    The Idaho National Laboratory (INL) is in the process of modernizing the various reactor physics modeling and simulation tools used to support operation and safety assurance of the Advanced Test Reactor (ATR). Key accomplishments so far have encompassed both computational as well as experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (HELIOS, KENO6/SCALE, NEWT/SCALE, ATTILA, and an extended implementation of MCNP5) has been installed at the INL. Corresponding models of the ATR and ATRC are now operational with all five codes, demonstrating the basic feasibility of the new code packages for their intended purpose. Of particular importance, a set of as-run core depletion HELIOS calculations for all ATR cycles since August 2009 was successfully completed during 2011. This demonstration supported a decision late in the year to proceed with the phased incorporation of the HELIOS methodology into the ATR fuel cycle management process beginning in 2012. On the experimental side of the project, new hardware was fabricated, measurement protocols were finalized, and the first four of six planned physics code validation experiments based on neutron activation spectrometry were conducted at the ATRC facility. Data analysis for the first three experiments, focused on characterization of the neutron spectrum in one of the ATR flux traps, has been completed. The six experiments will ultimately form the basis for a flexible, easily-repeatable ATR physics code validation protocol that is consistent with applicable ASTM standards.

  13. Language, Semantics, and Methods for Security Protocols

    DEFF Research Database (Denmark)

    Crazzolara, Federico

    The last few years have seen the emergence of successful intensional, event-based, formal approaches to reasoning about security protocols. The methods are concerned with reasoning about the events that a security protocol can perform, and make use of a causal dependency that exists between ... events. Methods like strand spaces and the inductive method of Paulson have been designed to support an intensional, event-based, style of reasoning. These methods have successfully tackled a number of protocols, though in an ad hoc fashion. They make an informal spring from a protocol to its ... -nets. They have persistent conditions and, as we show in this thesis, unfold under reasonable assumptions to a more basic kind of nets. We relate SPL-nets to strand spaces and inductive rules, as well as trace languages and event structures, so unifying a range of approaches, as well as providing conditions under ...

  14. Privacy-Preserving Data Aggregation Protocol for Fog Computing-Assisted Vehicle-to-Infrastructure Scenario

    Directory of Open Access Journals (Sweden)

    Yanan Chen

    2018-01-01

    Vehicle-to-infrastructure (V2I) communication enables moving vehicles to upload real-time data about the road surface situation to the Internet via fixed roadside units (RSUs). Owing to the resource restrictions of mobile vehicles, the fog computation-enhanced V2I communication scenario has received increasing attention recently. However, how to aggregate the sensed data from vehicles securely and efficiently still remains open in the V2I communication scenario. In this paper, a lightweight and anonymous aggregation protocol is proposed for the fog computing-based V2I communication scenario. With the proposed protocol, the data collected by the vehicles can be efficiently obtained by the RSU in a privacy-preserving manner. Particularly, we first suggest a certificateless aggregate signcryption (CL-A-SC) scheme and prove its security in the random oracle model. The suggested CL-A-SC scheme, which is of independent interest, can achieve the merits of certificateless cryptography and signcryption schemes simultaneously. Then we put forward the anonymous aggregation protocol for the V2I communication scenario as an extension of the suggested CL-A-SC scheme. Security analysis demonstrates that the proposed aggregation protocol achieves the desirable security properties. The performance comparison shows that the proposed protocol significantly reduces computation and communication overhead compared with the up-to-date protocols in this field.

  15. An Empirical Study and some Improvements of the MiniMac Protocol for Secure Computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Lauritsen, Rasmus; Toft, Tomas

    2014-01-01

    Recent developments in Multi-party Computation (MPC) have resulted in very efficient protocols for dishonest majority in the preprocessing model. In particular, two very promising protocols for Boolean circuits have been proposed by Nielsen et al. (nicknamed TinyOT) and by Damgård and Zakarias ... We suggest a modification of MiniMac that achieves increased parallelism at no extra communication cost. This gives an asymptotic improvement of the original protocol as well as an 8-fold speed-up of our implementation. We compare the resulting protocol to TinyOT for the case of secure computation in parallel ... of a large number of AES encryptions and find that it performs better than results reported so far on TinyOT, on the same hardware.

  16. Fast protocol for radiochromic film dosimetry using a cloud computing web application.

    Science.gov (United States)

    Calvo-Ortega, Juan-Francisco; Pozo, Miquel; Moragues, Sandra; Casals, Joan

    2017-07-01

    To investigate the feasibility of a fast protocol for radiochromic film dosimetry to verify intensity-modulated radiotherapy (IMRT) plans. EBT3 film dosimetry was conducted in this study using the triple-channel method implemented in the cloud computing application (Radiochromic.com). We described a fast protocol for radiochromic film dosimetry to obtain measurement results within 1h. Ten IMRT plans were delivered to evaluate the feasibility of the fast protocol. The dose distribution of the verification film was derived at 15, 30, 45min using the fast protocol and also at 24h after completing the irradiation. The four dose maps obtained per plan were compared using global and local gamma index (5%/3mm) with the calculated one by the treatment planning system. Gamma passing rates obtained for 15, 30 and 45min post-exposure were compared with those obtained after 24h. Small differences with respect to the 24h protocol were found in the gamma passing rates obtained for films digitized at 15min (global: 99.6%±0.9% vs. 99.7%±0.5%; local: 96.3%±3.4% vs. 96.3%±3.8%), at 30min (global: 99.5%±0.9% vs. 99.7%±0.5%; local: 96.5%±3.2% vs. 96.3%±3.8%) and at 45min (global: 99.2%±1.5% vs. 99.7%±0.5%; local: 96.1%±3.8% vs. 96.3%±3.8%). The fast protocol permits dosimetric results within 1h when IMRT plans are verified, with similar results as those reported by the standard 24h protocol. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
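
    The comparison metric here is the gamma index. The study's calculations were done in Radiochromic.com, so the brute-force sketch below is only a generic illustration of a global gamma criterion (dose difference as a fraction of the reference maximum, distance-to-agreement in mm); the array contents and grid spacing are invented.

```python
# Brute-force global gamma analysis on two co-registered 2D dose planes.
import numpy as np

def gamma_pass_rate(ref, eval_, spacing_mm, dd=0.05, dta_mm=3.0):
    """dd: dose criterion as a fraction of the reference maximum (5%);
    dta_mm: distance-to-agreement in mm (3 mm). O(N^2), sketch only."""
    ys, xs = np.indices(ref.shape)
    pos = np.stack([ys * spacing_mm, xs * spacing_mm], axis=-1).reshape(-1, 2)
    dose = eval_.ravel()
    dmax = ref.max()
    gammas = np.empty(ref.size)
    for i, (p, d_ref) in enumerate(zip(pos, ref.ravel())):
        dist2 = ((pos - p) ** 2).sum(axis=1) / dta_mm ** 2
        dose2 = ((dose - d_ref) / (dd * dmax)) ** 2
        gammas[i] = np.sqrt((dist2 + dose2).min())  # best match over all points
    return 100.0 * (gammas <= 1.0).mean()           # passing rate in percent

ref = 2.0 * np.random.rand(20, 20) + 0.1            # reference dose plane
ev = ref * (1.0 + 0.02 * np.random.randn(20, 20))   # measured plane, 2% noise
print(gamma_pass_rate(ref, ev, spacing_mm=1.0))     # typically close to 100
```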

  17. A Lightweight Protocol for Secure Video Streaming.

    Science.gov (United States)

    Venčkauskas, Algimantas; Morkevicius, Nerijus; Bagdonas, Kazimieras; Damaševičius, Robertas; Maskeliūnas, Rytis

    2018-05-14

    The Internet of Things (IoT) introduces many new challenges which cannot be solved using traditional cloud and host computing models. A new architecture known as fog computing is emerging to address these technological and security gaps. Traditional security paradigms focused on providing perimeter-based protections and client/server point-to-point protocols (e.g., Transport Layer Security (TLS)) are no longer the best choices for addressing new security challenges in fog computing end devices, where energy and computational resources are limited. In this paper, we present a lightweight secure streaming protocol for the fog computing "Fog Node-End Device" layer. This protocol is lightweight, connectionless, supports broadcast and multicast operations, and is able to provide data source authentication, data integrity, and confidentiality. The protocol is based on simple and energy-efficient cryptographic methods, such as Hash Message Authentication Codes (HMAC) and symmetric ciphers, and uses modified User Datagram Protocol (UDP) packets to embed authentication data into streaming data. Data redundancy could be added to improve reliability in lossy networks. The experimental results summarized in this paper confirm that the proposed method efficiently uses energy and computational resources and at the same time provides security properties on par with the Datagram TLS (DTLS) standard.
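
    A minimal sketch of the core mechanism described above, embedding an HMAC tag into a UDP-style datagram, assuming an invented field layout (sequence number, length, SHA-256 tag) rather than the paper's actual packet format:

```python
# Sketch of HMAC-authenticated streaming datagrams (layout is assumed).
import hmac, hashlib, struct, os

KEY = os.urandom(32)                        # pre-shared symmetric key

def make_packet(seq: int, payload: bytes) -> bytes:
    header = struct.pack("!IQ", seq, len(payload))      # 12-byte header
    tag = hmac.new(KEY, header + payload, hashlib.sha256).digest()
    return header + payload + tag                        # tag appended

def verify_packet(packet: bytes):
    header, rest = packet[:12], packet[12:]
    payload, tag = rest[:-32], rest[-32:]
    expected = hmac.new(KEY, header + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):           # constant-time check
        raise ValueError("authentication failed")
    seq, _length = struct.unpack("!IQ", header)
    return seq, payload

pkt = make_packet(1, b"frame-0001")
print(verify_packet(pkt))                   # (1, b'frame-0001')
```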

  18. Immunocytochemical methods and protocols

    National Research Council Canada - National Science Library

    Javois, Lorette C

    1999-01-01

    ... monoclonal antibodies to study cell differentiation during embryonic development. For a select few disciplines volumes have been published focusing on the specific application of immunocytochemical techniques to that discipline. What distinguished Immunocytochemical Methods and Protocols from earlier books when it was first published four years ago was i...

  19. Readjustment of abdominal computed tomography protocols in a university hospital: impact on radiation dose

    Energy Technology Data Exchange (ETDEWEB)

    Romano, Ricardo Francisco Tavares; Salvadori, Priscila Silveira; Torres, Lucas Rios; Bretas, Elisa Almeida Sathler; Bekhor, Daniel; Medeiros, Regina Bitelli; D'Ippolito, Giuseppe, E-mail: ricardo.romano@unifesp.br [Universidade Federal de Sao Paulo (EPM/UNIFESP), Sao Paulo, SP (Brazil). Escola Paulista de Medicina; Caldana, Rogerio Pedreschi [Fleury Medicina e Saude, Sao Paulo, SP (Brazil)]

    2015-09-15

    Objective: To assess the reduction of estimated radiation dose in abdominal computed tomography following the implementation of new scan protocols on the basis of clinical suspicion and of adjusted images acquisition parameters. Materials and Methods: Retrospective and prospective review of reports on radiation dose from abdominal CT scans performed three months before (group A - 551 studies) and three months after (group B - 788 studies) implementation of new scan protocols proposed as a function of clinical indications. Also, the images acquisition parameters were adjusted to reduce the radiation dose at each scan phase. The groups were compared for mean number of acquisition phases, mean CTDIvol per phase, mean DLP per phase, and mean DLP per scan. Results: A significant reduction was observed for group B as regards all the analyzed aspects, as follows: 33.9%, 25.0%, 27.0% and 52.5%, respectively for number of acquisition phases, CTDIvol per phase, DLP per phase and DLP per scan (p < 0.001). Conclusion: The rational use of abdominal computed tomography scan phases based on the clinical suspicion in conjunction with the adjusted images acquisition parameters allows for a 50% reduction in the radiation dose from abdominal computed tomography scans. (author)

  20. Antibody engineering: methods and protocols

    National Research Council Canada - National Science Library

    Chames, Patrick

    2012-01-01

    "Antibody Engineering: Methods and Protocols, Second Edition was compiled to give complete and easy access to a variety of antibody engineering techniques, starting from the creation of antibody repertoires and efficient...

  1. A Protocol for Provably Secure Authentication of a Tiny Entity to a High Performance Computing One

    Directory of Open Access Journals (Sweden)

    Siniša Tomović

    2016-01-01

    The problem of developing authentication protocols dedicated to a specific scenario, where an entity with limited computational capabilities has to prove its identity to a computationally powerful Verifier, is addressed. An authentication protocol suitable for the considered scenario, which jointly employs the learning parity with noise (LPN) problem and a paradigm of random selection, is proposed. It is shown that the proposed protocol is secure against active attacking scenarios and so-called GRS man-in-the-middle (MIM) attacking scenarios. In comparison with related, previously reported authentication protocols, the proposed one reduces implementation complexity while providing at least the same level of cryptographic security.
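
    For background, the LPN ingredient can be illustrated with a toy HB-style round: the prover answers random challenges with noisy inner products of a shared secret, and the verifier accepts if the observed error rate stays below a threshold. The parameters below are far smaller than cryptographic sizes, and the paper's random-selection paradigm is not modeled.

```python
# Toy HB-style authentication based on the LPN problem (illustrative sizes).
import random

N_BITS, ROUNDS, NOISE, THRESHOLD = 32, 128, 0.125, 0.25
secret = [random.randint(0, 1) for _ in range(N_BITS)]   # shared secret s

def prover_response(challenge, s=secret):
    z = sum(a * b for a, b in zip(challenge, s)) % 2     # <a, s> over GF(2)
    if random.random() < NOISE:                          # Bernoulli noise bit
        z ^= 1
    return z

def authenticate():
    errors = 0
    for _ in range(ROUNDS):
        a = [random.randint(0, 1) for _ in range(N_BITS)]
        z = prover_response(a)
        errors += z ^ (sum(x * y for x, y in zip(a, secret)) % 2)
    # A prover without s would answer randomly (error rate ~0.5) and fail.
    return errors / ROUNDS <= THRESHOLD

print(authenticate())   # True with overwhelming probability
```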

  2. [Multidisciplinary protocol for computed tomography imaging and angiographic embolization of splenic injury due to trauma: assessment of pre-protocol and post-protocol outcomes].

    Science.gov (United States)

    Koo, M; Sabaté, A; Magalló, P; García, M A; Domínguez, J; de Lama, M E; López, S

    2011-11-01

    To assess conservative treatment of splenic injury due to trauma, following a protocol for computed tomography (CT) and angiographic embolization. To quantify the predictive value of CT for detecting bleeding and need for embolization. The care protocol developed by the multidisciplinary team consisted of angiography with embolization of lesions revealed by contrast extravasation under CT as well as embolization of grade III-V injuries observed, or grade I-II injuries causing hemodynamic instability and/or need for blood transfusion. We collected data on demographic variables, injury severity score (ISS), angiographic findings, and injuries revealed by CT. Pre-protocol and post-protocol outcomes were compared. The sensitivity and specificity of CT findings were calculated for all patients who required angiographic embolization. Forty-four and 30 angiographies were performed in the pre- and post-protocol periods, respectively. The mean (SD) ISSs in the two periods were 25 (11) and 26 (12), respectively. A total of 24 (54%) embolizations were performed in the pre-protocol period and 28 (98%) after implementation of the protocol. Two and 7 embolizations involved the spleen in the 2 periods, respectively; abdominal laparotomies numbered 32 and 25, respectively, and 10 (31%) vs 4 (16%) splenectomies were performed. The specificity and sensitivity values for contrast extravasation found on CT and followed by embolization were 77.7% and 79.5%. The implementation of this multidisciplinary protocol using CT imaging and angiographic embolization led to a decrease in the number of splenectomies. The protocol allows us to take a more conservative treatment approach.

  3. Evaluation of condyle defects using different reconstruction protocols of cone-beam computed tomography

    International Nuclear Information System (INIS)

    Bastos, Luana Costa; Campos, Paulo Sergio Flores; Ramos-Perez, Flavia Maria de Moraes; Pontual, Andrea dos Anjos; Almeida, Solange Maria

    2013-01-01

    This study was conducted to investigate how well cone-beam computed tomography (CBCT) can detect simulated cavitary defects in condyles, and to test the influence of the reconstruction protocols. Defects were created with spherical diamond burs (numbers 1013, 1016, 3017) in the superior and/or posterior surfaces of twenty condyles. The condyles were scanned, and cross-sectional reconstructions were performed with nine different protocols, based on slice thickness (0.2, 0.6, 1.0 mm) and on the filters (original image, Sharpen Mild, S9) used. Two observers evaluated the defects, determining their presence and location. Statistical analysis was carried out using the simple kappa coefficient and McNemar's test to check inter- and intra-rater reliability. The chi-square test was used to compare rater accuracy. Analysis of variance (Tukey's test) assessed the effect of the protocols used. Kappa values for inter- and intra-rater reliability demonstrate almost perfect agreement. The proportion of correct answers was significantly higher than that of errors for cavitary defects on both condyle surfaces (p < 0.01). Only in identifying the defects located on the posterior surface was it possible to observe the influence of the protocol with 1.0 mm slice thickness and no filter, which showed a significantly lower value. Based on the results of the current study, the technique used was valid for identifying the existence of cavities in the condyle surface. However, the protocol of a 1.0 mm-thick slice and no filter proved to be the worst method for identifying the defects on the posterior surface. (author)
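
    The inter-rater agreement reported here (simple kappa) reduces to a short computation; the sketch below implements Cohen's kappa for two raters' binary defect calls, with made-up ratings.

```python
# Cohen's kappa for two raters' categorical calls (e.g., defect present = 1).
def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n   # observed agreement
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) # chance agreement
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # invented ratings, observer A
b = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]   # invented ratings, observer B
print(round(cohens_kappa(a, b), 2))  # 0.78
```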

  4. Evaluation of condyle defects using different reconstruction protocols of cone-beam computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Bastos, Luana Costa; Campos, Paulo Sergio Flores, E-mail: bastosluana@ymail.com [Universidade Federal da Bahia (UFBA), Salvador, BA (Brazil). Fac. de Odontologia. Dept. de Radiologia Oral e Maxilofacial; Ramos-Perez, Flavia Maria de Moraes [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Fac. de Odontologia. Dept. de Clinica e Odontologia Preventiva; Pontual, Andrea dos Anjos [Universidade Federal de Pernambuco (UFPE), Camaragibe, PE (Brazil). Fac. de Odontologia. Dept. de Radiologia Oral; Almeida, Solange Maria [Universidade Estadual de Campinas (UNICAMP), Piracicaba, SP (Brazil). Fac. de Odontologia. Dept. de Radiologia Oral

    2013-11-15

    This study was conducted to investigate how well cone-beam computed tomography (CBCT) can detect simulated cavitary defects in condyles, and to test the influence of the reconstruction protocols. Defects were created with spherical diamond burs (numbers 1013, 1016, 3017) in the superior and/or posterior surfaces of twenty condyles. The condyles were scanned, and cross-sectional reconstructions were performed with nine different protocols, based on slice thickness (0.2, 0.6, 1.0 mm) and on the filters (original image, Sharpen Mild, S9) used. Two observers evaluated the defects, determining their presence and location. Statistical analysis was carried out using the simple kappa coefficient and McNemar's test to check inter- and intra-rater reliability. The chi-square test was used to compare rater accuracy. Analysis of variance (Tukey's test) assessed the effect of the protocols used. Kappa values for inter- and intra-rater reliability demonstrate almost perfect agreement. The proportion of correct answers was significantly higher than that of errors for cavitary defects on both condyle surfaces (p < 0.01). Only in identifying the defects located on the posterior surface was it possible to observe the influence of the protocol with 1.0 mm slice thickness and no filter, which showed a significantly lower value. Based on the results of the current study, the technique used was valid for identifying the existence of cavities in the condyle surface. However, the protocol of a 1.0 mm-thick slice and no filter proved to be the worst method for identifying the defects on the posterior surface. (author)

  5. Dosimetric evaluation of cone beam computed tomography scanning protocols

    International Nuclear Information System (INIS)

    Soares, Maria Rosangela

    2015-01-01

    Cone beam computed tomography (CBCT) scanning protocols were evaluated. CBCT was introduced in dental radiology at the end of the 1990s and quickly became a fundamental examination for various procedures. Its main characteristic, distinguishing it from medical CT, is the beam shape. This study aimed to calculate the absorbed dose in eight tissues/organs of the head and neck, and to estimate the effective dose, in 13 protocols and two techniques (stitched FOV and single FOV) of 5 cone beam CT units from different manufacturers. For that purpose, a female anthropomorphic phantom representing a standard woman was used, in which thermoluminescent dosimeters were inserted at several points representing organs/tissues with the weighting values presented in the standard ICRP 103. The results were evaluated by comparing the dose according to the purpose of the tomographic image. Among the results, there is a difference of up to 325% in the effective dose between protocols with the same imaging goal. Regarding the image acquisition technique, the stitched FOV technique resulted in an effective dose 5.3 times greater than the single FOV technique for protocols with the same imaging goal. In the individual contribution, the salivary glands are responsible for 31% of the effective dose in CBCT exams. The remaining tissues also make a significant contribution, 36%. The results drew attention to the need to estimate the effective dose for the different units and protocols on the market, besides knowing the radiation parameters and the equipment engineering used to obtain the image. (author)

  6. Computational method and system for modeling, analyzing, and optimizing DNA amplification and synthesis

    Science.gov (United States)

    Vandersall, Jennifer A.; Gardner, Shea N.; Clague, David S.

    2010-05-04

    A computational method and computer-based system of modeling DNA synthesis for the design and interpretation of PCR amplification, parallel DNA synthesis, and microarray chip analysis. The method and system include modules that address the bioinformatics, kinetics, and thermodynamics of DNA amplification and synthesis. Specifically, the steps of DNA selection, as well as the kinetics and thermodynamics of DNA hybridization and extensions, are addressed, which enable the optimization of the processing and the prediction of the products as a function of DNA sequence, mixing protocol, time, temperature and concentration of species.

  7. A model based security testing method for protocol implementation.

    Science.gov (United States)

    Fu, Yu Long; Xin, Xiao Long

    2014-01-01

    The security of protocol implementations is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them to generate suitable test cases to verify the security of a protocol implementation.

  8. Is computer-assisted instruction more effective than other educational methods in achieving ECG competence among medical students and residents? Protocol for a systematic review and meta-analysis.

    Science.gov (United States)

    Viljoen, Charle André; Scott Millar, Rob; Engel, Mark E; Shelton, Mary; Burch, Vanessa

    2017-12-26

    Although ECG interpretation is an essential skill in clinical medicine, medical students and residents often lack ECG competence. Novel teaching methods are increasingly being implemented and investigated to improve ECG training. Computer-assisted instruction is one such method under investigation; however, its efficacy in achieving better ECG competence among medical students and residents remains uncertain. This article describes the protocol for a systematic review and meta-analysis that will compare the effectiveness of computer-assisted instruction with other teaching methods used for the ECG training of medical students and residents. Only studies with a comparative research design will be considered. Articles will be searched for in electronic databases (PubMed, Scopus, Web of Science, Academic Search Premier, CINAHL, PsycINFO, Education Resources Information Center, Africa-Wide Information and Teacher Reference Center). In addition, we will review citation indexes and conduct a grey literature search. Data extraction will be done on articles that meet the predefined eligibility criteria. A descriptive analysis of the different teaching modalities will be provided and their educational impact will be assessed in terms of effect size and the modified version of the Kirkpatrick framework for the evaluation of educational interventions. This systematic review aims to provide evidence as to whether computer-assisted instruction is an effective teaching modality for ECG training. It is hoped that the information garnered from this systematic review will assist in future curricular development and improve ECG training. As this research is a systematic review of published literature, ethical approval is not required. The results will be reported according to the Preferred Reporting Items for Systematic Review and Meta-Analysis statement and will be submitted to a peer-reviewed journal. The protocol and systematic review will be included in a PhD dissertation. CRD

  9. Dose optimization for multislice computed tomography protocols of the midface

    International Nuclear Information System (INIS)

    Lorenzen, M.; Wedegaertner, U.; Weber, C.; Adam, G.; Lorenzen, J.; Lockemann, U.

    2005-01-01

    Purpose: To optimize multislice computed tomography (MSCT) protocols of the midface for dose reduction and adequate image quality. Materials and methods: MSCT (Somatom Volume Zoom, Siemens) of the midface was performed on 3 cadavers within 24 hours of death with successive reduction of the tube current, applying 150, 100, 70 and 30 mAs at 120 kV as well as 40 and 21 mAs at 80 kV. At 120 kV, a pitch of 0.875 and collimation of 4 x 1 mm were used, and at 80 kV, a pitch of 0.7 and collimation of 2 x 0.5 mm. Images were reconstructed in transverse and coronal orientation. Qualitative image analysis was performed separately by two radiologists using a five-point scale (1 = excellent; 5 = poor) applying the following parameters: image quality, demarcation and sharpness of lamellar bone, overall image quality, and image noise (1 = minor; 5 = strong). The effective body dose [mSv] and organ dose [mSv] of the ocular lens (using the dosimetry system "WINdose") were calculated, and the interobserver agreement (kappa coefficient) was determined. Results: For the evaluation of the lamellar bone, adequate sharpness, demarcation and image quality were demonstrated at 120 kV/30 mAs, and for the overall image quality and noise, 120 kV/40 mAs was acceptable. With regard to image quality, the effective body dose could be reduced from 1.89 mSv to 0.34 mSv and the organ dose of the ocular lens from 27.2 mSv to 4.8 mSv. Interobserver agreement was moderate (kappa = 0.39). Conclusion: Adequate image quality was achieved for MSCT protocols of the midface with 30 mAs at 120 kV, resulting in a dose reduction of 70% in comparison to standard protocols. (orig.)

  10. Reduction of cancer risk by optimization of Computed Tomography head protocols: far eastern Cuban experience

    International Nuclear Information System (INIS)

    Miller Clemente, R.; Adame Brooks, D.; Lores Guevara, M.; Perez Diaz, M.; Arias Garlobo, M. L.; Ortega Rodriguez, O.; Nepite Haber, R.; Grinnan Hernandez, O.; Guillama Llosas, A.

    2015-01-01

    The cancer risk estimation constitutes one way for the evaluation of the public health, regarding computed tomography (CT) exposures. Starting from the hypothesis that the optimization of CT protocols would reduce significantly the added cancer risk, the purpose of this research was the application of optimization strategies regarding head CT protocols, in order to reduce the factors affecting the risk of induced cancer. The applied systemic approach included technological and human components, represented by quantitative physical factors. the volumetric kerma indexes, compared with respect to standard, optimized and reference values, were evaluated with multiple means comparison method. The added cancer risk resulted from the application of the methodology for biological effects evaluation, at low doses with low Linear Energy Transfer. Human observers in all scenarios evaluated the image quality. the reduced dose was significantly lower than for standard head protocols and reference levels, where: (1) for pediatric patients, by using an Automatic Exposure Control system, a reduction of 31% compared with standard protocol and ages range of 10-14, and (2) adults, using a Bilateral Filter for images obtained at low doses of 62% from those of standard head protocol. The risk reduction was higher than 25%. The systemic approach used allows the effective identification of factors involved on cancer risk related with exposures to CT. The combination of dose modulation and image restoration with Bilateral Filter, provide a significantly reduction of cancer risk, with acceptable diagnostic image quality. (Author)

  11. Protocol Analysis as a Method for Analyzing Conversational Data.

    Science.gov (United States)

    Aleman, Carlos G.; Vangelisti, Anita L.

    Protocol analysis, a technique that uses people's verbal reports about their cognitions as they engage in an assigned task, has been used in a number of applications to provide insight into how people mentally plan, assess, and carry out those assignments. Using a system of networked computers where actors communicate with each other over…

  12. Protocol design and implementation using formal methods

    NARCIS (Netherlands)

    van Sinderen, Marten J.; Ferreira Pires, Luis; Pires, L.F.; Vissers, C.A.

    1992-01-01

    This paper reports on a number of formal methods that support correct protocol design and implementation. These methods are placed in the framework of a design methodology for distributed systems that was studied and developed within the ESPRIT II Lotosphere project (2304). The paper focuses on...

  13. 3D-CT vascular setting protocol using computer graphics for the evaluation of maxillofacial lesions

    Directory of Open Access Journals (Sweden)

    CAVALCANTI Marcelo de Gusmão Paraiso

    2001-01-01

    In this paper we present the aspect of a mandibular giant cell granuloma in spiral computed tomography-based three-dimensional (3D-CT) reconstructed images using computer graphics, and demonstrate the importance of the vascular protocol in permitting better diagnosis, visualization and determination of the dimensions of the lesion. We analyzed 21 patients with maxillofacial lesions of neoplastic and proliferative origins. Two oral and maxillofacial radiologists analyzed the images. The usefulness of interactive 3D images reconstructed by means of computer graphics, especially using a vascular setting protocol for qualitative and quantitative analyses for the diagnosis, determination of the extent of lesions, treatment planning and follow-up, was demonstrated. The technique is an important adjunct to the evaluation of lesions in relation to axial CT slices and 3D-CT bone images.

  14. 3D-CT vascular setting protocol using computer graphics for the evaluation of maxillofacial lesions.

    Science.gov (United States)

    Cavalcanti, M G; Ruprecht, A; Vannier, M W

    2001-01-01

    In this paper we present the aspect of a mandibular giant cell granuloma in spiral computed tomography-based three-dimensional (3D-CT) reconstructed images using computer graphics, and demonstrate the importance of the vascular protocol in permitting better diagnosis, visualization and determination of the dimensions of the lesion. We analyzed 21 patients with maxillofacial lesions of neoplastic and proliferative origins. Two oral and maxillofacial radiologists analyzed the images. The usefulness of interactive 3D images reconstructed by means of computer graphics, especially using a vascular setting protocol for qualitative and quantitative analyses for the diagnosis, determination of the extent of lesions, treatment planning and follow-up, was demonstrated. The technique is an important adjunct to the evaluation of lesions in relation to axial CT slices and 3D-CT bone images.

  15. A Lightweight Buyer-Seller Watermarking Protocol

    Directory of Open Access Journals (Sweden)

    Yongdong Wu

    2008-01-01

    The buyer-seller watermarking protocol enables a seller to successfully identify a traitor from a pirated copy, while preventing the seller from framing an innocent buyer. Based on finite field theory and the homomorphic property of public key cryptosystems such as RSA, several buyer-seller watermarking protocols (N. Memon and P. W. Wong (2001); C.-L. Lei et al. (2004)) have been proposed previously. However, those protocols require not only large computational power but also substantial network bandwidth. In this paper, we introduce a new buyer-seller protocol that overcomes those weaknesses by managing the watermarks. Compared with the earlier protocols, ours is n times faster in terms of computation, where n is the number of watermark elements, while incurring only O(1/lN) times the communication overhead, given the finite field parameter lN. In addition, the quality of the watermarked image generated with our method is better, using the same watermark strength.
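
    The homomorphic property the record refers to is easy to demonstrate with textbook RSA: ciphertexts multiply into the ciphertext of the product, which is what lets a buyer's watermark be embedded under encryption. The sketch below uses toy primes purely for illustration; real protocols use large moduli and considerably more machinery.

```python
# Multiplicative homomorphism of textbook RSA:
# Enc(m1) * Enc(m2) mod n == Enc(m1 * m2 mod n). Toy parameters only.
p, q, e = 61, 53, 17
n = p * q                       # 3233
phi = (p - 1) * (q - 1)         # 3120
d = pow(e, -1, phi)             # private exponent (Python 3.8+)

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

m1, m2 = 42, 7
assert dec(enc(m1) * enc(m2) % n) == (m1 * m2) % n
print("homomorphic product:", dec(enc(m1) * enc(m2) % n))   # 294
```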

  16. Introduction to basic immunological methods : Generalities, Principles, Protocols and Variants of basic protocols

    International Nuclear Information System (INIS)

    Mejri, Naceur

    2013-01-01

    This manuscript is dedicated to students of biological sciences. It provides the information necessary to perform the practical work most commonly used in immunology. During my doctoral and post-doctoral periods, a panoply of methods was employed in diverse subjects in my research. The technical means used in my investigations were diverse enough that I could extract a set of techniques that covers most of the basic immunological methods. Each chapter of this manuscript contains a fairly complete description of an immunological method. In each topic, the basic protocol and its variants are preceded by background information provided in paragraphs concerning the principle and generalities. The emphasis is placed on describing the situations in which each method and its variants were used. These basic immunological methods are useful for students and even researchers studying the immune system of humans, mice and other species. Different subjects include not only detailed protocols but also photos and/or schemas used as support to illustrate theoretical or practical knowledge. I hope that students will find this manual interesting and easy to use, and that it contains the information necessary to acquire skills in immunological practice. (Author)

  17. Tracing Method with Intra and Inter Protocols Correlation

    Directory of Open Access Journals (Sweden)

    Marin Mangri

    2009-05-01

    Full Text Available MEGACO or H.248 is a protocol enabling acentralized Softswitch (or MGC to control MGsbetween Voice over Packet (VoP networks andtraditional ones. To analyze much deeper the realimplementations it is useful to use a tracing systemwith intra and inter protocols correlation. For thisreason in the case of MEGACO-H.248 it is necessaryto find the appropriate method of correlation with allprotocols involved. Starting from Rel4 a separation ofCP (Control Plane and UP (User Plane managementwithin the networks appears. MEGACO protocol playsan important role in the migration to the new releasesor from monolithic platform to a network withdistributed components.

  18. Computer network time synchronization the network time protocol on earth and in space

    CERN Document Server

    Mills, David L

    2010-01-01

    Carefully coordinated, reliable, and accurate time synchronization is vital to a wide spectrum of fields, from air and ground traffic control, to buying and selling goods and services, to TV network programming. Ill-gotten time could even lead to the unimaginable and cause DNS caches to expire, leaving the entire Internet to implode on the root servers. Written by the original developer of the Network Time Protocol (NTP), Computer Network Time Synchronization: The Network Time Protocol on Earth and in Space, Second Edition addresses the technological infrastructure of time dissemination, distrib...

  19. DNA arrays : methods and protocols [Methods in molecular biology, v. 170

    National Research Council Canada - National Science Library

    Rampal, Jang B

    2001-01-01

    "In DNA Arrays: Methods and Protocols, Jang Rampal and a authoritative panel of researchers, engineers, and technologists explain in detail how to design and construct DNA microarrays, as well as how to...

  20. BrEPS: a flexible and automatic protocol to compute enzyme-specific sequence profiles for functional annotation

    Directory of Open Access Journals (Sweden)

    Schomburg D

    2010-12-01

    Background: Models for the simulation of metabolic networks require the accurate prediction of enzyme function. Based on a genomic sequence, enzymatic functions of gene products are today mainly predicted by sequence database searching and operon analysis. Other methods can support these techniques: we have developed an automatic method, "BrEPS", that creates highly specific sequence patterns for the functional annotation of enzymes. Results: The enzymes in the UniProtKB are identified and their sequences compared against each other with BLAST. The enzymes are then clustered into a number of trees, where each tree node is associated with a set of EC numbers. The enzyme sequences in the tree nodes are aligned with ClustalW. The conserved columns of the resulting multiple alignments are used to construct sequence patterns. In the last step, we verify the quality of the patterns by computing their specificity. Patterns with low specificity are omitted and recomputed further down in the tree. The final high-quality patterns can be used for functional annotation. We ran our protocol on a recent Swiss-Prot release and show statistics, as well as a comparison to PRIAM, a probabilistic method that is also specialized in the functional annotation of enzymes. We determine the amount of true positive annotations for five common microorganisms, with data from BRENDA and AMENDA serving as the standard of truth. BrEPS is almost on par with PRIAM, a fact which we discuss in the context of five manually investigated cases. Conclusions: Our protocol computes highly specific sequence patterns that can be used to support the functional annotation of enzymes. The main advantages of our method are that it is automatic and unsupervised, and quite fast once the patterns are evaluated. The results show that BrEPS can be a valuable addition to the reconstruction of metabolic networks.
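
    The pattern-construction step described above (conserved columns of a multiple alignment become a sequence pattern) can be illustrated on a toy alignment. The alignment, the all-or-nothing conservation rule, and the regex output format are assumptions for illustration, not BrEPS internals.

```python
# Turn the conserved columns of a toy multiple alignment into a regex pattern.
import re

alignment = ["MKTAYIAKQR",
             "MKTAHIAKQR",
             "MKSAYIGKQR",
             "MKTAYIAKHR"]

pattern = ""
for column in zip(*alignment):              # walk the alignment column-wise
    residues = set(column)
    pattern += (column[0] if len(residues) == 1        # fully conserved column
                else "[%s]" % "".join(sorted(residues)))  # variable column

print(pattern)                                    # MK[ST]A[HY]I[AG]K[HQ]R
print(bool(re.fullmatch(pattern, "MKSAHIGKQR")))  # True: matches the profile
```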

  1. Generating Protocol Software from CPN Models Annotated with Pragmatics

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge; Kristensen, Lars M.; Kindler, Ekkart

    2013-01-01

    ... and verify protocol software, but limited work exists on using CPN models of protocols as a basis for automated code generation. The contribution of this paper is a method for generating protocol software from a class of CPN models annotated with code generation pragmatics. Our code generation method ... consists of three main steps: automatically adding so-called derived pragmatics to the CPN model, computing an abstract template tree, which associates pragmatics with code templates, and applying the templates to generate code which can then be compiled. We illustrate our method using a unidirectional ...

  2. Fast computation of the characteristics method on vector computers

    International Nuclear Information System (INIS)

    Kugo, Teruhiko

    2001-11-01

    Fast computation of the characteristics method to solve the neutron transport equation in a heterogeneous geometry has been studied. Two vector computation algorithms, an odd-even sweep (OES) method and an independent sequential sweep (ISS) method, have been developed, and their efficiency for a typical fuel assembly calculation has been investigated. For both methods, a vector computation is 15 times faster than a scalar computation. Comparing the OES and ISS methods, the following are found: 1) there is only a small difference in computation speed, 2) the ISS method shows faster convergence, and 3) the ISS method saves about 80% of the computer memory size compared with the OES method. It is, therefore, concluded that the ISS method is superior to the OES method as a vectorization method. In the vector computation, a table-look-up method to reduce the computation time of an exponential function saves only 20% of the whole computation time. Both the coarse mesh rebalance method and the Aitken acceleration method are effective as acceleration methods for the characteristics method; a combination of them saves 70-80% of outer iterations compared with a free iteration. (author)
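
    The table-look-up idea mentioned for the exponential function can be sketched as follows: precompute exp(-x) on a grid once, then interpolate in the inner sweep instead of calling exp(). The range and table size below are illustrative, not the paper's values.

```python
# Table look-up with linear interpolation as a cheap substitute for exp(-x),
# as used in transport sweeps over optical thickness. Illustrative parameters.
import numpy as np

TABLE_X = np.linspace(0.0, 10.0, 1001)      # optical-thickness grid
TABLE_Y = np.exp(-TABLE_X)                  # precomputed attenuation factors

def fast_exp_neg(x):
    """Approximate exp(-x) for x in [0, 10] by table interpolation."""
    return np.interp(x, TABLE_X, TABLE_Y)

x = np.random.uniform(0, 10, 5)
print(np.max(np.abs(fast_exp_neg(x) - np.exp(-x))))  # ~1e-5 interpolation error
```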

  3. Essential numerical computer methods

    CERN Document Server

    Johnson, Michael L

    2010-01-01

    The use of computers and computational methods has become ubiquitous in biological and biomedical research. During the last 2 decades most basic algorithms have not changed, but what has changed is the huge increase in computer speed and ease of use, along with the corresponding orders-of-magnitude decrease in cost. A general perception exists that the only applications of computers and computer methods in biological and biomedical research are either basic statistical analysis or the searching of DNA sequence databases. While these are important applications, they only scratch the surface of the current and potential applications of computers and computer methods in biomedical research. The various chapters within this volume include a wide variety of applications that extend far beyond this limited perception. As part of the Reliable Lab Solutions series, Essential Numerical Computer Methods brings together chapters from volumes 210, 240, 321, 383, 384, 454, and 467 of Methods in Enzymology. These chapters provide ...

  4. A novel region-growing based semi-automatic segmentation protocol for three-dimensional condylar reconstruction using cone beam computed tomography (CBCT.

    Directory of Open Access Journals (Sweden)

    Tong Xi

    OBJECTIVE: To present and validate a semi-automatic segmentation protocol to enable an accurate 3D reconstruction of the mandibular condyles using cone beam computed tomography (CBCT). MATERIALS AND METHODS: Approval from the regional medical ethics review board was obtained for this study. Bilateral mandibular condyles in ten CBCT datasets of patients were segmented using the currently proposed semi-automatic segmentation protocol. This segmentation protocol combined 3D region-growing and local thresholding algorithms. The segmentation of a total of twenty condyles was performed by two observers. The Dice coefficient and distance map calculations were used to evaluate the accuracy and reproducibility of the segmented and 3D rendered condyles. RESULTS: The mean inter-observer Dice coefficient was 0.98 (range [0.95-0.99]). An average 90th percentile distance of 0.32 mm was found, indicating an excellent inter-observer similarity of the segmented and 3D rendered condyles. No systematic errors were observed in the currently proposed segmentation protocol. CONCLUSION: The novel semi-automated segmentation protocol is an accurate and reproducible tool to segment and render condyles in 3D. The implementation of this protocol in clinical practice allows the CBCT to be used as an imaging modality for the quantitative analysis of condylar morphology.
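
    A minimal version of the two ingredients the protocol combines, region growing plus an intensity threshold, might look like the sketch below; the tolerance, 6-connectivity, and synthetic volume are illustrative assumptions, not the published parameters.

```python
# Minimal 3D region growing from a seed voxel with an intensity tolerance.
import numpy as np
from collections import deque

def region_grow(volume, seed, tol=100):
    """Grow a 6-connected region around `seed`, accepting voxels whose
    intensity is within `tol` of the seed intensity."""
    seg = np.zeros(volume.shape, dtype=bool)
    seed_val = int(volume[seed])
    queue = deque([seed])
    seg[seed] = True
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not seg[nz, ny, nx]
                    and abs(int(volume[nz, ny, nx]) - seed_val) <= tol):
                seg[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return seg

vol = np.random.randint(0, 1000, (30, 30, 30))
vol[10:20, 10:20, 10:20] = 500                 # homogeneous "condyle" block
mask = region_grow(vol, (15, 15, 15), tol=50)
print(mask.sum())                              # ~1000 voxels (block + stragglers)
```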

  5. Survey of computed tomography doses in head and chest protocols

    International Nuclear Information System (INIS)

    Souza, Giordana Salvi de; Silva, Ana Maria Marques da

    2016-01-01

    Computed tomography is a clinical tool for patient diagnosis. However, the patient is subjected to a complex dose distribution. The aim of this study was to survey dose indicators in head and chest CT protocols, in terms of the Dose-Length Product (DLP) and effective dose, for adult and pediatric patients, comparing them with diagnostic reference levels in the literature. Patients were divided into age groups and the following image acquisition parameters were collected: age, kV, mAs, Volumetric Computed Tomography Dose Index (CTDIvol) and DLP. The effective dose was found by multiplying the DLP by correction factors. The results were obtained from the third quartile and showed the importance of determining kV and mAs values for each patient depending on the studied region, age and thickness. (author)
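
    The "correction factors" mentioned here are the usual region-specific k coefficients in E = k x DLP. A minimal sketch, assuming the commonly cited adult values for head and chest rather than coefficients taken from this paper:

```python
# Effective dose estimated as E = k * DLP. The k values below (mSv per
# mGy*cm) are the commonly cited adult coefficients, used here only as
# indicative assumptions.
K_FACTORS = {"head": 0.0021, "chest": 0.014}

def effective_dose(region: str, dlp_mgy_cm: float) -> float:
    return K_FACTORS[region] * dlp_mgy_cm

print(effective_dose("head", 900))    # 1.89 mSv
print(effective_dose("chest", 400))   # 5.6 mSv
```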

  6. Quality-of-service sensitivity to bio-inspired/evolutionary computational methods for intrusion detection in wireless ad hoc multimedia sensor networks

    Science.gov (United States)

    Hortos, William S.

    2012-06-01

    In the author's previous work, a cross-layer protocol approach to wireless sensor network (WSN) intrusion detection and identification is created with multiple bio-inspired/evolutionary computational methods applied to the functions of the protocol layers, a single method to each layer, to improve the intrusion-detection performance of the protocol over that of one method applied to only a single layer's functions. The WSN cross-layer protocol design embeds genetic algorithms (GAs), anti-phase synchronization, ant colony optimization (ACO), and a trust model based on quantized data reputation at the physical, MAC, network, and application layer, respectively. The construct neglects to assess the net effect of the combined bio-inspired methods on the quality-of-service (QoS) performance for "normal" data streams, that is, streams without intrusions. Analytic expressions of throughput, delay, and jitter, coupled with simulation results for WSNs free of intrusion attacks, are the basis for sensitivity analyses of QoS metrics for normal traffic to the bio-inspired methods.

  7. Scalable optical quantum computer

    International Nuclear Information System (INIS)

    Manykin, E A; Mel'nichenko, E V

    2014-01-01

    A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr³⁺, regularly located in the lattice of the orthosilicate (Y₂SiO₅) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

  8. Accuracy of a Computer-Aided Surgical Simulation (CASS) Protocol for Orthognathic Surgery: A Prospective Multicenter Study

    Science.gov (United States)

    Hsu, Sam Sheng-Pin; Gateno, Jaime; Bell, R. Bryan; Hirsch, David L.; Markiewicz, Michael R.; Teichgraeber, John F.; Zhou, Xiaobo; Xia, James J.

    2012-01-01

    Purpose The purpose of this prospective multicenter study was to assess the accuracy of a computer-aided surgical simulation (CASS) protocol for orthognathic surgery. Materials and Methods The accuracy of the CASS protocol was assessed by comparing planned and postoperative outcomes of 65 consecutive patients enrolled from 3 centers. Computer-generated surgical splints were used for all patients. For the genioplasty, one center utilized computer-generated chin templates to reposition the chin segment only for patients with asymmetry. Standard intraoperative measurements were utilized without the chin templates for the remaining patients. The primary outcome measurements were linear and angular differences for the maxilla, mandible and chin when the planned and postoperative models were registered at the cranium. The secondary outcome measurements were: maxillary dental midline difference between the planned and postoperative positions; and linear and angular differences of the chin segment between the groups with and without the use of the template. The latter was measured when the planned and postoperative models were registered at the mandibular body. Statistical analyses were performed, and the accuracy was reported using root mean square deviation (RMSD) and Bland and Altman's method for assessing measurement agreement. Results In the primary outcome measurements, there was no statistically significant difference among the 3 centers for the maxilla and mandible. The largest RMSD was 1.0 mm and 1.5° for the maxilla, and 1.1 mm and 1.8° for the mandible. For the chin, there was a statistically significant difference between the groups with and without the use of the chin template. The chin template group showed excellent accuracy, with a largest positional RMSD of 1.0 mm and a largest orientational RMSD of 2.2°. However, larger variances were observed in the group not using the chin template. This was significant in anteroposterior and superoinferior directions, as in

  9. A New Dual-purpose Quality Control Dosimetry Protocol for Diagnostic Reference-level Determination in Computed Tomography.

    Science.gov (United States)

    Sohrabi, Mehdi; Parsi, Masoumeh; Sina, Sedigheh

    2018-05-17

    A diagnostic reference level is an advisory dose level set by a regulatory authority in a country as an efficient criterion for protection of patients from unwanted medical exposure. In computed tomography, the direct dose measurement and data collection methods are commonly applied for determination of diagnostic reference levels. Recently, a new quality-control-based dose survey method was proposed by the authors to simplify the diagnostic reference-level determination using a retrospective quality control database usually available at a regulatory authority in a country. In line with such a development, a prospective dual-purpose quality control dosimetry protocol is proposed for determination of diagnostic reference levels in a country, which can be simply applied by quality control service providers. This new proposed method was applied to five computed tomography scanners in Shiraz, Iran, and diagnostic reference levels for head, abdomen/pelvis, sinus, chest, and lumbar spine examinations were determined. The results were compared to those obtained by the data collection and quality-control-based dose survey methods, carried out in parallel in this study, and were found to agree well within approximately 6%. This is highly acceptable for quality-control-based methods according to International Atomic Energy Agency tolerance levels (±20%).

  10. Efficient secure two-party protocols

    CERN Document Server

    Hazay, Carmit

    2010-01-01

    The authors present a comprehensive study of efficient protocols and techniques for secure two-party computation -- both general constructions that can be used to securely compute any functionality, and protocols for specific problems of interest. The book focuses on techniques for constructing efficient protocols and proving them secure. In addition, the authors study different definitional paradigms and compare the efficiency of protocols achieved under these different definitions. The book opens with a general introduction to secure computation and then presents definitions of security for a

  11. Computational Methodologies for Developing Structure–Morphology–Performance Relationships in Organic Solar Cells: A Protocol Review

    KAUST Repository

    Do, Khanh; Ravva, Mahesh Kumar; Wang, Tonghui; Bredas, Jean-Luc

    2016-01-01

    We outline a step-by-step protocol that incorporates a number of theoretical and computational methodologies to evaluate the structural and electronic properties of pi-conjugated semiconducting materials in the condensed phase. Our focus

  12. Method-centered digital communities on protocols.io for fast-paced scientific innovation.

    Science.gov (United States)

    Kindler, Lori; Stoliartchouk, Alexei; Teytelman, Leonid; Hurwitz, Bonnie L

    2016-01-01

    The Internet has enabled online social interaction for scientists beyond physical meetings and conferences. Yet despite these innovations in communication, dissemination of methods is often relegated to just academic publishing. Further, these methods remain static, with subsequent advances published elsewhere and unlinked. For communities undergoing fast-paced innovation, researchers need new capabilities to share, obtain feedback, and publish methods at the forefront of scientific development. For example, a renaissance in virology is now underway given the new metagenomic methods to sequence viral DNA directly from an environment. Metagenomics makes it possible to "see" natural viral communities that could not be previously studied through culturing methods. Yet, the knowledge of specialized techniques for the production and analysis of viral metagenomes remains in a subset of labs.  This problem is common to any community using and developing emerging technologies and techniques. We developed new capabilities to create virtual communities in protocols.io, an open access platform, for disseminating protocols and knowledge at the forefront of scientific development. To demonstrate these capabilities, we present a virology community forum called VERVENet. These new features allow virology researchers to share protocols and their annotations and optimizations, connect with the broader virtual community to share knowledge, job postings, conference announcements through a common online forum, and discover the current literature through personalized recommendations to promote discussion of cutting edge research. Virtual communities in protocols.io enhance a researcher's ability to: discuss and share protocols, connect with fellow community members, and learn about new and innovative research in the field.  The web-based software for developing virtual communities is free to use on protocols.io. Data are available through public APIs at protocols.io.

  13. Service-Oriented Synthesis of Distributed and Concurrent Protocol Specifications

    Directory of Open Access Journals (Sweden)

    Jehad Al Dallal

    2008-01-01

    Full Text Available Several methods have been proposed for synthesizing computer communication protocol specifications from service specifications. Some protocol synthesis methods based on the finite state machine (FSM model assume that primitives in the service specifications cannot be executed simultaneously. Others either handle only controlled primitive concurrency or have tight restrictions on the applicable FSM topologies. As a result, these synthesis methods are not applicable to an interesting variety of inherently concurrent applications, such as the Internet and mobile communication systems. This paper proposes a concurrent-based protocol synthesis method that eliminates the restrictions imposed by the earlier methods. The proposed method uses a synthesis method to obtain a sequential protocol specification (P-SPEC from a given service specification (S-SPEC. The resulting P-SPEC is then remodeled to consider the concurrency behavior specified in the S-SPEC, while guaranteeing that P-SPEC provides the specified service.

  14. Chapter 2: Commercial and Industrial Lighting Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gowans, Dakers [Left Fork Energy, Harrison, NY (United States); Telarico, Chad [DNV GL, Mahwah, NJ (United States)

    2017-11-02

    The Commercial and Industrial Lighting Evaluation Protocol (the protocol) describes methods to account for gross energy savings resulting from the programmatic installation of efficient lighting equipment in large populations of commercial, industrial, and other nonresidential facilities. This protocol does not address savings resulting from changes in codes and standards, or from education and training activities. A separate Uniform Methods Project (UMP) protocol, Chapter 3: Commercial and Industrial Lighting Controls Evaluation Protocol, addresses methods for evaluating savings resulting from lighting control measures such as adding time clocks, tuning energy management system commands, and adding occupancy sensors.

  15. Scalable optical quantum computer

    Energy Technology Data Exchange (ETDEWEB)

    Manykin, E A; Mel'nichenko, E V [Institute for Superconductivity and Solid-State Physics, Russian Research Centre 'Kurchatov Institute', Moscow (Russian Federation)

    2014-12-31

    A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr³⁺, regularly located in the lattice of the orthosilicate (Y₂SiO₅) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

  16. Multiprofessional electronic protocol in ophthalmology with emphasis on strabismus

    OpenAIRE

    RIBEIRO, CHRISTIE GRAF; MOREIRA, ANA TEREZA RAMOS; PINTO, JOSÉ SIMÃO DE PAULA; MALAFAIA, OSVALDO

    2016-01-01

    ABSTRACT Objective: to create and validate an electronic database in ophthalmology focused on strabismus, to computerize this database in the form of systematic data collection software named Electronic Protocol, and to incorporate this protocol into the Integrated System of Electronic Protocols (SINPE©). Methods: this is a descriptive study, with the methodology divided into three phases: (1) development of a theoretical ophthalmologic database with emphasis on strabismus; (2) compute...

  17. Cryptographic Protocols:

    DEFF Research Database (Denmark)

    Geisler, Martin Joakim Bittel

    cryptography was thus concerned with message confidentiality and integrity. Modern cryptography cover a much wider range of subjects including the area of secure multiparty computation, which will be the main topic of this dissertation. Our first contribution is a new protocol for secure comparison, presented...... implemented the comparison protocol in Java and benchmarks show that is it highly competitive and practical. The biggest contribution of this dissertation is a general framework for secure multiparty computation. Instead of making new ad hoc implementations for each protocol, we want a single and extensible...... in Chapter 2. Comparisons play a key role in many systems such as online auctions and benchmarks — it is not unreasonable to say that when parties come together for a multiparty computation, it is because they want to make decisions that depend on private information. Decisions depend on comparisons. We have...

  18. Computer-assisted machine-to-human protocols for authentication of a RAM-based embedded system

    Science.gov (United States)

    Idrissa, Abdourhamane; Aubert, Alain; Fournel, Thierry

    2012-06-01

    Mobile readers used for optical identification of manufactured products can be tampered with in different ways: with a hardware Trojan, or by powering up with fake configuration data. How can a human verifier authenticate the reader before handling it for goods verification? In this paper, two cryptographic protocols are proposed to achieve the verification of a RAM-based system through a trusted auxiliary machine. Such a system is assumed to be composed of a RAM memory and a secure block (in practice an FPGA or a configurable microcontroller). The system is connected to an input/output interface and contains a Non-Volatile Memory where the configuration data are stored. Here, except for the secure block, all the blocks are exposed to attacks. At the registration stage of the first protocol, the MAC of both the secret and the configuration data, denoted M0, is computed by the mobile device without saving it, then transmitted to the user in a secure environment. At the verification stage, the reader, which is challenged with nonces, sends MACs/HMACs of both the nonces and the MAC M0 (to be recomputed), keyed with the secret. These responses are verified by the user through a trusted auxiliary MAC computer unit. Here the verifier does not need to track a (long) list of challenge/response pairs. This makes the protocol tractable for a human verifier, as its participation in the authentication process is increased. In return, the secret has to be shared with the auxiliary unit. This constraint is relaxed in a second protocol directly derived from Fiat-Shamir's scheme.
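    The challenge-response step of the first protocol can be sketched with standard HMAC primitives. A minimal illustration, assuming SHA-256 as the underlying MAC and an illustrative message layout (the paper does not fix either):

```python
import hmac, hashlib, secrets

def mac(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

secret = secrets.token_bytes(32)        # shared with the auxiliary MAC unit
config = b"NVM configuration image"     # illustrative configuration data

# Registration stage: M0 is computed once and given to the user securely.
m0 = mac(secret, config)

# Verification stage: the user challenges the reader with a fresh nonce.
nonce = secrets.token_bytes(16)
reader_response = mac(secret, nonce + mac(secret, config))  # reader recomputes M0
expected = mac(secret, nonce + m0)      # trusted auxiliary MAC computer unit
print(hmac.compare_digest(reader_response, expected))       # True if untampered
```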

  19. Computational modeling of local hemodynamics phenomena: methods, tools and clinical applications

    International Nuclear Information System (INIS)

    Ponzini, R.; Rizzo, G.; Vergara, C.; Veneziani, A.; Morbiducci, U.; Montevecchi, F.M.; Redaelli, A.

    2009-01-01

    Local hemodynamics plays a key role in the onset of vessel wall pathophysiology, with peculiar blood flow structures (i.e. spatial velocity profiles, vortices, re-circulating zones, helical patterns and so on) characterizing the behavior of specific vascular districts. Thanks to evolving computer science technologies, mathematical modeling and hardware performance, the study of local hemodynamics can today also draw on a virtual environment to perform hypothesis testing, product development, protocol design and methods validation that just a couple of decades ago would not have been thinkable. Computational fluid dynamics (CFD) appears to be more than a complementary partner to in vitro modeling and a possible substitute for animal models, furnishing a privileged environment for cheap, fast and reproducible data generation.

  20. Influence of different luting protocols on shear bond strength of computer aided design/computer aided manufacturing resin nanoceramic material to dentin.

    Science.gov (United States)

    Poggio, Claudio; Pigozzo, Marco; Ceci, Matteo; Scribante, Andrea; Beltrami, Riccardo; Chiesa, Marco

    2016-01-01

    The purpose of this study was to evaluate the influence of three different luting protocols on the shear bond strength of computer aided design/computer aided manufacturing (CAD/CAM) resin nanoceramic (RNC) material to dentin. In this in vitro study, 30 disks were milled from RNC blocks (Lava Ultimate/3M ESPE) with CAD/CAM technology. The disks were subsequently cemented to the exposed dentin of 30 recently extracted bovine permanent mandibular incisors. The specimens were randomly assigned into 3 groups of 10 teeth each. In Group 1, disks were cemented using a total-etch protocol (Scotchbond™ Universal Etchant phosphoric acid + Scotchbond Universal Adhesive + RelyX™ Ultimate conventional resin cement); in Group 2, disks were cemented using a self-etch protocol (Scotchbond Universal Adhesive + RelyX™ Ultimate conventional resin cement); in Group 3, disks were cemented using a self-adhesive protocol (RelyX™ Unicem 2 Automix self-adhesive resin cement). All cemented specimens were placed in a universal testing machine (Instron Universal Testing Machine 3343) and submitted to a shear bond strength test to check the strength of adhesion between the two substrates, dentin and RNC disks. Specimens were stressed at a crosshead speed of 1 mm/min. Data were analyzed with analysis of variance and post-hoc Tukey's test at a significance level of 0.05. Post-hoc Tukey testing showed that the protocols using conventional resin cement with adhesives achieved the highest shear strength values, performing better than the self-adhesive resin cement. Furthermore, conventional resin cement used together with a self-etch adhesive reported the highest values of adhesion.

  1. Estimation of the radiation exposure of a chest pain protocol with ECG-gating in dual-source computed tomography

    International Nuclear Information System (INIS)

    Ketelsen, Dominik; Luetkhoff, Marie H.; Thomas, Christoph; Werner, Matthias; Tsiflikas, Ilias; Reimann, Anja; Kopp, Andreas F.; Claussen, Claus D.; Heuschmid, Martin; Buchgeister, Markus; Burgstahler, Christof

    2009-01-01

    The aim of the study was to evaluate radiation exposure of a chest pain protocol with ECG-gated dual-source computed tomography (DSCT). An Alderson Rando phantom equipped with thermoluminescent dosimeters was used for dose measurements. Exposure was performed on a dual-source computed tomography system with a standard protocol for chest pain evaluation (120 kV, 320 mAs/rot) with different simulated heart rates (HRs). The dose of a standard chest CT examination (120 kV, 160 mAs) was also measured. Effective dose of the chest pain protocol was 19.3/21.9 mSv (male/female, HR 60), 17.9/20.4 mSv (male/female, HR 80) and 14.7/16.7 mSv (male/female, HR 100). Effective dose of a standard chest examination was 6.3 mSv (males) and 7.2 mSv (females). Radiation dose of the chest pain protocol increases significantly with a lower heart rate for both males (p = 0.040) and females (p = 0.044). The average radiation dose of a standard chest CT examination is about 36.5% that of a CT examination performed for chest pain. Using DSCT, the evaluated chest pain protocol revealed a higher radiation exposure compared with standard chest CT. Furthermore, HRs markedly influenced the dose exposure when using the ECG-gated chest pain protocol. (orig.)
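    The 36.5% figure can be checked directly from the reported effective doses by averaging over sexes and heart rates:

```python
chest_pain = [19.3, 21.9, 17.9, 20.4, 14.7, 16.7]  # mSv, male/female at HR 60/80/100
standard = [6.3, 7.2]                               # mSv, male/female standard chest CT
ratio = (sum(standard) / len(standard)) / (sum(chest_pain) / len(chest_pain))
print(f"{ratio:.1%}")  # -> 36.5%
```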

  2. Practical considerations for optimizing cardiac computed tomography protocols for comprehensive acquisition prior to transcatheter aortic valve replacement.

    Science.gov (United States)

    Khalique, Omar K; Pulerwitz, Todd C; Halliburton, Sandra S; Kodali, Susheel K; Hahn, Rebecca T; Nazif, Tamim M; Vahl, Torsten P; George, Isaac; Leon, Martin B; D'Souza, Belinda; Einstein, Andrew J

    2016-01-01

    Transcatheter aortic valve replacement (TAVR) is performed frequently in patients with severe, symptomatic aortic stenosis who are at high risk or inoperable for open surgical aortic valve replacement. Computed tomography angiography (CTA) has become the gold standard imaging modality for pre-TAVR cardiac anatomic and vascular access assessment. Traditionally, cardiac CTA has been most frequently used for assessment of coronary artery stenosis, and scanning protocols have generally been tailored for this purpose. Pre-TAVR CTA has different goals than coronary CTA and the high prevalence of chronic kidney disease in the TAVR patient population creates a particular need to optimize protocols for a reduction in iodinated contrast volume. This document reviews details which allow the physician to tailor CTA examinations to maximize image quality and minimize harm, while factoring in multiple patient and scanner variables which must be considered in customizing a pre-TAVR protocol.

  3. A simple method for estimating the effective dose in dental CT. Conversion factors and calculation for a clinical low-dose protocol

    International Nuclear Information System (INIS)

    Homolka, P.; Kudler, H.; Nowotny, R.; Gahleitner, A.; Wien Univ.

    2001-01-01

    An easily applicable method to estimate effective dose from dental computed tomography, including in its definition the high radio-sensitivity of the salivary glands, is presented. Effective doses were calculated for a markedly dose-reduced dental CT protocol as well as for standard settings. Data are compared with effective doses from the literature obtained with other modalities frequently used in dental care. Methods: Conversion factors based on the weighted Computed Tomography Dose Index were derived from published data to calculate effective dose values for various CT exposure settings. Results: The conversion factors determined can be used for clinically used kVp settings and prefiltrations. With reduced tube current, an effective dose for a CT examination of the maxilla of 22 μSv can be achieved, which compares to values typically obtained with panoramic radiography (26 μSv). A CT scan of the mandible, respectively, gives 123 μSv, comparable to a full mouth survey with intraoral films (150 μSv). Conclusion: For standard CT scan protocols of the mandible, effective doses exceed 600 μSv. Hence, low dose protocols for dental CT should be considered whenever feasible, especially for paediatric patients. If hard tissue diagnosis is performed, the potential for dose reduction is significant despite the higher image noise levels, as readability is still adequate. (orig.)

  4. Breaking Megrelishvili protocol using matrix diagonalization

    Science.gov (United States)

    Arzaki, Muhammad; Triantoro Murdiansyah, Danang; Adi Prabowo, Satrio

    2018-03-01

    In this article we conduct a theoretical security analysis of the Megrelishvili protocol, a linear algebra-based key agreement between two participants. We study the computational complexity of the Megrelishvili vector-matrix problem (MVMP) as a mathematical problem that strongly relates to the security of the Megrelishvili protocol. In particular, we investigate the asymptotic upper bounds for the running time and memory requirement of the MVMP when it involves a diagonalizable public matrix. Specifically, we devise a diagonalization method for solving the MVMP that is asymptotically faster than all previously existing algorithms. We also find an important counterintuitive result: the utilization of a primitive matrix in the Megrelishvili protocol makes the protocol more vulnerable to attacks.
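    The heart of the diagonalization approach is that powers of a diagonalizable matrix reduce to powers of its eigenvalues. The sketch below illustrates only this structural idea over the reals with NumPy; the actual MVMP is posed over a finite field, where the analogous step leads to discrete logarithms in the eigenvalue domain, so this is not the attack itself:

```python
import numpy as np

def vec_matrix_power(v: np.ndarray, M: np.ndarray, k: int) -> np.ndarray:
    """Compute v @ M^k via diagonalization M = P diag(d) P^-1,
    so M^k = P diag(d**k) P^-1 (assumes M is diagonalizable)."""
    d, P = np.linalg.eig(M)
    return (v @ P) * d**k @ np.linalg.inv(P)

M = np.array([[2.0, 1.0],
              [0.0, 3.0]])
v = np.array([1.0, 1.0])
print(vec_matrix_power(v, M, 5))         # eigendecompose once, then O(n) per power
print(v @ np.linalg.matrix_power(M, 5))  # brute-force check: [32., 454.]
```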

  5. Formal Analysis of SET and NSL Protocols Using the Interpretation Functions-Based Method

    Directory of Open Access Journals (Sweden)

    Hanane Houmani

    2012-01-01

    Full Text Available Most applications on the Internet, such as e-banking and e-commerce, use the SET and the NSL protocols to protect the communication channel between the client and the server. It is therefore crucial to ensure that these protocols respect security properties such as confidentiality, authentication, and integrity. In this paper, we analyze the SET and the NSL protocols with respect to the confidentiality (secrecy) property. To perform this analysis, we use the interpretation functions-based method. The main idea behind the interpretation functions-based technique is to give sufficient conditions that guarantee that a cryptographic protocol respects the secrecy property. The flexibility of the proposed conditions allows the verification of daily-life protocols such as SET and NSL. The method can also be used under different assumptions, such as a variety of intruder abilities, including algebraic properties of cryptographic primitives. The NSL protocol, for instance, is analyzed with and without the homomorphism property. We also show, using the SET protocol, the usefulness of this approach to correct weaknesses and problems discovered during the analysis.

  6. A protocol for the commissioning and quality assurance of new planning computers

    International Nuclear Information System (INIS)

    Ratcliffe, A.J.; Aukett, R.J.; Bolton, S.C.; Bonnett, D.E.

    1995-01-01

    Any new radiotherapy planning system needs to be thoroughly tested. Besides checking the accuracy of the algorithm by comparing plans done on the system with measurements done in a phantom, it is desirable for the user to compare the new equipment with a tried and tested system before it is used clinically. To test our recently purchased planning systems, a protocol was developed for running a comparison between these and our existing planning computer, an IGE RTPLAN. A summary of the test protocol that was developed is as follows: (1) A series of plans is created on the old system, to include at least one plan of each common type. The series includes at least one plan with a bone inhomogeneity, and one with an air or lung inhomogeneity, and these plans are computed both with and without inhomogeneity correction. Point dose calculations are made for a number of positions on each plan, including the dose at the centre of the treatment volume. (2) Each of these plans is reproduced as accurately as possible on the new system using the original CT data and patient outlines. (3) The old and new plans, including those with and without inhomogeneity correction are overlaid and compared using the following criteria: (a) how well the volumes of interest coincide, (b) how accurately the positions of the points of interest are reproduced, (c) the doses at the points of interest, (d) the distances between the isodoses defining the dose plateau, (e) the maximum displacement between the corresponding pairs of isodoses in the dose gradient around the tumour. The protocol has been used to test two systems: the (newly developed) Siemens Axiom and the Helax TMS (running on a DEC Alpha). A summary of the results obtained will be presented. These were sufficient to show up several minor problems, particularly in the Axiom system

  7. Computer assisted strain-gauge plethysmography is a practical method of excluding deep venous thrombosis

    International Nuclear Information System (INIS)

    Goddard, A.J.P.; Chakraverty, S.; Wright, J.

    2001-01-01

    AIM: To evaluate a computed strain-gauge plethysmograph (CSGP) as a screening tool to exclude above-knee deep venous thrombosis (DVT). METHODS: The first phase took place in the Radiology department. One hundred and forty-nine patients had both Doppler ultrasound and CSGP performed. Discordant results were resolved by venography where possible. The second phase took place in an acute medical admissions ward using a modified protocol. A further 173 patients had both studies performed. The results were collated and analysed. RESULTS: Phase 1. The predictive value of a negative CSGP study was 98%. There were two false-negative CSGP results (false-negative rate 5%), including one equivocal CSGP study which had deep venous thrombosis on ultrasound examination. Two patients thought to have thrombus on ultrasound proved not to have acute thrombus on venography. Phase 2. The negative predictive value of CSGP using a modified protocol was 97%. There were two definite and one possible false-negative studies (false-negative rate 4-7%). CONCLUSION: Computed strain-gauge plethysmography can provide a simple, cheap and effective method of excluding lower limb DVT. However, its use should be rigorously assessed in each hospital in which it is used.

  8. A transport layer protocol for future high-speed grid computing: SCTP versus FAST TCP multihoming

    International Nuclear Information System (INIS)

    Arshad, M.J.; Mian, M.S.

    2010-01-01

    TCP (Transmission Control Protocol) is designed for reliable data transfer on the global Internet today. One of its strong points is its use of a congestion control algorithm that allows TCP to adjust its congestion window when network congestion occurs. A number of studies and investigations have confirmed that traditional TCP is not suitable for every type of application, for example, bulk data transfer over high speed long distance networks. TCP served well in the era of low-capacity, short-delay networks; however, for numerous reasons it cannot efficiently handle today's growing technologies (such as wide area Grid computing and optical-fiber networks). This research work surveys the congestion control mechanisms of transport protocols and addresses the different issues involved in transferring huge volumes of data over future high speed Grid computing and optical-fiber networks. This work also presents simulations to compare the performance of FAST TCP multihoming with SCTP (Stream Control Transmission Protocol) multihoming in high speed networks. These simulation results show that FAST TCP multihoming achieves bandwidth aggregation efficiently and outperforms SCTP multihoming under similar network conditions. The survey and simulation results presented in this work reveal that multihoming support in FAST TCP provides many benefits, such as redundancy, load-sharing and policy-based routing, which largely improve overall network performance and can meet the increasing demands of future high-speed network infrastructures (such as Grid computing). (author)
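    FAST TCP itself is delay-based, adjusting its window from estimated queueing delay rather than loss, but the classic loss-driven window adjustment referred to above is easy to illustrate. A toy sketch of additive-increase/multiplicative-decrease with slow start (an illustration of the mechanism, not either protocol's actual algorithm):

```python
def aimd(events, cwnd=1.0, ssthresh=64.0):
    """Toy congestion-window trace.
    events: iterable of 'ack' (one per RTT of success) or 'loss'."""
    trace = []
    for e in events:
        if e == "loss":
            ssthresh = max(cwnd / 2.0, 1.0)  # multiplicative decrease
            cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd *= 2.0                      # slow start: double per RTT
        else:
            cwnd += 1.0                      # congestion avoidance: +1 per RTT
        trace.append(cwnd)
    return trace

print(aimd(["ack"] * 8 + ["loss"] + ["ack"] * 4))
```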

  9. Masonry fireplace emissions test method: Repeatability and sensitivity to fueling protocol.

    Science.gov (United States)

    Stern, C H; Jaasma, D R; Champion, M R

    1993-03-01

    A test method for masonry fireplaces has been evaluated during testing on six masonry fireplace configurations. The method determines carbon monoxide and particulate matter emission rates (g/h) and factors (g/kg) and does not require weighing of the appliance to determine the timing of fuel loading. The intralaboratory repeatability of the test method has been determined from multiple tests on the six fireplaces. For the tested fireplaces, the ratio of the highest to lowest measured PM rate averaged 1.17 and in no case was greater than 1.32. The data suggest that some of the variation is due to differences in fuel properties. The influence of fueling protocol on emissions has also been studied. A modified fueling protocol, tested in large and small fireplaces, reduced CO and PM emission factors by roughly 40% and reduced CO and PM rates from 0 to 30%. For both of these fireplaces, emission rates were less sensitive to fueling protocol than emission factors.

  10. Efficient Secure Multiparty Subset Computation

    Directory of Open Access Journals (Sweden)

    Sufang Zhou

    2017-01-01

    Full Text Available The secure subset problem is important in secure multiparty computation, which is a vital field in cryptography. Most of the existing protocols for this problem can only keep the elements of one set private, while leaking the elements of the other set. In other words, they cannot solve the secure subset problem perfectly. While a few studies have addressed actual secure subsets, these protocols were mainly based on oblivious polynomial evaluation, with inefficient computation. In this study, we first design an efficient secure subset protocol for sets whose elements are drawn from a known set, based on a new encoding method and a homomorphic encryption scheme. If the elements of the sets are taken from a large domain, this protocol is inefficient. Using a Bloom filter and a homomorphic encryption scheme, we further present an efficient protocol with computational complexity linear in the cardinality of the large set, which is practical for inputs consisting of large amounts of data. However, the second protocol may yield a false positive. This probability can be rapidly decreased by re-executing the protocol with different hash functions. Furthermore, we present experimental performance analyses of these protocols.
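    The Bloom filter underlying the second protocol provides compact set membership with a tunable false-positive rate, which re-execution with different hash functions drives down. A bare sketch of the filter alone, without the homomorphic encryption layer and with illustrative parameters:

```python
import hashlib

class BloomFilter:
    def __init__(self, size_bits=1024, num_hashes=4):
        self.m = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: bytes):
        # Derive k hash positions by salting SHA-256 with the hash index.
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

bf = BloomFilter()
for x in [b"alice", b"bob"]:
    bf.add(x)
print(b"alice" in bf, b"carol" in bf)  # True, (almost certainly) False
```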

  11. Regressive Imagery in Creative Problem-Solving: Comparing Verbal Protocols of Expert and Novice Visual Artists and Computer Programmers

    Science.gov (United States)

    Kozbelt, Aaron; Dexter, Scott; Dolese, Melissa; Meredith, Daniel; Ostrofsky, Justin

    2015-01-01

    We applied computer-based text analyses of regressive imagery to verbal protocols of individuals engaged in creative problem-solving in two domains: visual art (23 experts, 23 novices) and computer programming (14 experts, 14 novices). Percentages of words involving primary process and secondary process thought, plus emotion-related words, were…

  12. Blind quantum computation with identity authentication

    Science.gov (United States)

    Li, Qin; Li, Zhulin; Chan, Wai Hong; Zhang, Shengyu; Liu, Chengdong

    2018-04-01

    Blind quantum computation (BQC) allows a client with relatively few quantum resources or poor quantum technologies to delegate his computational problem to a quantum server such that the client's input, output, and algorithm are kept private. However, all existing BQC protocols focus on correctness verification of quantum computation but neglect authentication of participants' identity, which may lead to man-in-the-middle attacks or denial-of-service attacks. In this work, we use quantum identification to counter these two kinds of attack for BQC, in what we call QI-BQC. We propose two QI-BQC protocols based on a typical single-server BQC protocol and a double-server BQC protocol. The two protocols can ensure both data integrity and mutual identification between participants with the help of a third trusted party (TTP). In addition, an unjammable public channel between a client and a server, which was indispensable in previous BQC protocols, is unnecessary, although such a channel is required between the TTP and each participant at some instant. Furthermore, the method to achieve identity verification in the presented protocols is general and can be applied to other similar BQC protocols.

  13. A Simple XML Producer-Consumer Protocol

    Science.gov (United States)

    Smith, Warren; Gunter, Dan; Quesnel, Darcy; Biegel, Bryan (Technical Monitor)

    2001-01-01

    There are many different projects from government, academia, and industry that provide services for delivering events in distributed environments. The problem with these event services is that they are not general enough to support all uses and they speak different protocols so that they cannot interoperate. We require such interoperability when we, for example, wish to analyze the performance of an application in a distributed environment. Such an analysis might require performance information from the application, computer systems, networks, and scientific instruments. In this work we propose and evaluate a standard XML-based protocol for the transmission of events in distributed systems. One recent trend in government and academic research is the development and deployment of computational grids. Computational grids are large-scale distributed systems that typically consist of high-performance compute, storage, and networking resources. Examples of such computational grids are the DOE Science Grid, the NASA Information Power Grid (IPG), and the NSF Partnerships for Advanced Computing Infrastructure (PACIs). The major effort to deploy these grids is in the area of developing the software services to allow users to execute applications on these large and diverse sets of resources. These services include security, execution of remote applications, managing remote data, access to information about resources and services, and so on. There are several toolkits for providing these services such as Globus, Legion, and Condor. As part of these efforts to develop computational grids, the Global Grid Forum is working to standardize the protocols and APIs used by various grid services. This standardization will allow interoperability between the client and server software of the toolkits that are providing the grid services. The goal of the Performance Working Group of the Grid Forum is to standardize protocols and representations related to the storage and distribution of
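    As a sketch of what such a standardized XML event might look like, the snippet below serializes a hypothetical performance event; the element and attribute names are invented for illustration and are not the Grid Forum's actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical event layout; names and values are invented for illustration.
event = ET.Element("event", name="cpu.load", source="node42.example.org")
ET.SubElement(event, "timestamp").text = "2001-06-01T12:00:00Z"
ET.SubElement(event, "value").text = "0.87"
print(ET.tostring(event, encoding="unicode"))
# <event name="cpu.load" source="node42.example.org">
#   <timestamp>2001-06-01T12:00:00Z</timestamp><value>0.87</value></event>
```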

  14. Task Group on Computer/Communication Protocols for Bibliographic Data Exchange. Interim Report = Groupe de Travail sur les Protocoles de Communication/Ordinateurs pour l'Exchange de Donnees Bibliographiques. Rapport d'Etape. May 1983.

    Science.gov (United States)

    Canadian Network Papers, 1983

    1983-01-01

    This preliminary report describes the work to date of the Task Group on Computer/Communication protocols for Bibliographic Data Interchange, which was formed in 1980 to develop a set of protocol standards to facilitate communication between heterogeneous library and information systems within the framework of Open Systems Interconnection (OSI). A…

  15. Quantitative methods for studying design protocols

    CERN Document Server

    Kan, Jeff WT

    2017-01-01

    This book is aimed at researchers and students who would like to engage in and deepen their understanding of design cognition research. The book presents new approaches for analyzing design thinking and proposes methods of measuring design processes. These methods seek to quantify design issues and design processes that are defined based on notions from the Function-Behavior-Structure (FBS) design ontology and from linkography. A linkograph is a network of linked design moves or segments. FBS ontology concepts have been used in both design theory and design thinking research and have yielded numerous results. Linkography is one of the most influential and elegant design cognition research methods. In this book Kan and Gero provide novel and state-of-the-art methods of analyzing design protocols that offer insights into design cognition by integrating segmentation with linkography by assigning FBS-based codes to design moves or segments and treating links as FBS transformation processes. They propose and test ...

  16. Computational Methodologies for Developing Structure–Morphology–Performance Relationships in Organic Solar Cells: A Protocol Review

    KAUST Repository

    Do, Khanh

    2016-09-08

    We outline a step-by-step protocol that incorporates a number of theoretical and computational methodologies to evaluate the structural and electronic properties of pi-conjugated semiconducting materials in the condensed phase. Our focus is on methodologies appropriate for the characterization, at the molecular level, of the morphology in blend systems consisting of an electron donor and electron acceptor, of importance for understanding the performance properties of bulk-heterojunction organic solar cells. The protocol is formulated as an introductory manual for investigators who aim to study bulk-heterojunction morphology in molecular detail, thereby facilitating the development of structure–morphology–property relationships when used in tandem with experimental results.

  17. Validation of internal dosimetry protocols based on stochastic method

    International Nuclear Information System (INIS)

    Mendes, Bruno M.; Fonseca, Telma C.F.; Almeida, Iassudara G.; Trindade, Bruno M.; Campos, Tarcisio P.R.

    2015-01-01

    Computational phantoms adapted to Monte Carlo codes have been applied successfully in radiation dosimetry fields. The NRI research group has been developing Internal Dosimetry Protocols (IDPs), addressing distinct methodologies, software and computational human simulators, to perform internal dosimetry, especially for new radiopharmaceuticals. Validation of the IDPs is critical to ensure the reliability of simulation results. Intercomparison of data from the literature with those produced by our IDPs is a suitable method for validation. The aim of this study was to validate the IDPs following such an intercomparison procedure. The Golem phantom was reconfigured to run on MCNP5. The specific absorbed fractions (SAF) for photons at 30, 100 and 1000 keV were simulated based on the IDPs and compared with reference values (RV) published by Zankl and Petoussi-Henss, 1998. The average difference between the SAFs obtained in the IDP simulations and the RV was 2.3%. The largest SAF differences were found in situations involving low energy photons at 30 keV. The adrenals and thyroid, i.e. the lowest-mass organs, had the highest SAF discrepancies from the RV, at 7.2% and 3.8%, respectively. The statistical differences of the SAFs from our IDPs relative to the reference values were considered acceptable at 30, 100 and 1000 keV. We believe that the main reason for the discrepancies found in lower-mass organs was our source definition methodology. Improvements in the source spatial distribution within the voxels may provide outputs more consistent with reference values for lower-mass organs. (author)

  18. Validation of internal dosimetry protocols based on stochastic method

    Energy Technology Data Exchange (ETDEWEB)

    Mendes, Bruno M.; Fonseca, Telma C.F., E-mail: bmm@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil); Almeida, Iassudara G.; Trindade, Bruno M.; Campos, Tarcisio P.R., E-mail: tprcampos@yahoo.com.br [Universidade Federal de Minas Gerais (DEN/UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear

    2015-07-01

    Computational phantoms adapted to Monte Carlo codes have been applied successfully in radiation dosimetry fields. The NRI research group has been developing Internal Dosimetry Protocols (IDPs), addressing distinct methodologies, software and computational human simulators, to perform internal dosimetry, especially for new radiopharmaceuticals. Validation of the IDPs is critical to ensure the reliability of simulation results. Intercomparison of data from the literature with those produced by our IDPs is a suitable method for validation. The aim of this study was to validate the IDPs following such an intercomparison procedure. The Golem phantom was reconfigured to run on MCNP5. The specific absorbed fractions (SAF) for photons at 30, 100 and 1000 keV were simulated based on the IDPs and compared with reference values (RV) published by Zankl and Petoussi-Henss, 1998. The average difference between the SAFs obtained in the IDP simulations and the RV was 2.3%. The largest SAF differences were found in situations involving low energy photons at 30 keV. The adrenals and thyroid, i.e. the lowest-mass organs, had the highest SAF discrepancies from the RV, at 7.2% and 3.8%, respectively. The statistical differences of the SAFs from our IDPs relative to the reference values were considered acceptable at 30, 100 and 1000 keV. We believe that the main reason for the discrepancies found in lower-mass organs was our source definition methodology. Improvements in the source spatial distribution within the voxels may provide outputs more consistent with reference values for lower-mass organs. (author)

  19. Comparison of low- and ultralow-dose computed tomography protocols for quantitative lung and airway assessment.

    Science.gov (United States)

    Hammond, Emily; Sloan, Chelsea; Newell, John D; Sieren, Jered P; Saylor, Melissa; Vidal, Craig; Hogue, Shayna; De Stefano, Frank; Sieren, Alexa; Hoffman, Eric A; Sieren, Jessica C

    2017-09-01

    Quantitative computed tomography (CT) measures are increasingly being developed and used to characterize lung disease. With recent advances in CT technologies, we sought to evaluate the quantitative accuracy of lung imaging at low- and ultralow-radiation doses with the use of iterative reconstruction (IR), tube current modulation (TCM), and spectral shaping. We investigated the effect of five independent CT protocols reconstructed with IR on quantitative airway measures and global lung measures using an in vivo large animal model as a human subject surrogate. A control protocol was chosen (NIH-SPIROMICS + TCM) and five independent protocols investigating TCM, low- and ultralow-radiation dose, and spectral shaping. For all scans, quantitative global parenchymal measurements (mean, median and standard deviation of the parenchymal HU, along with measures of emphysema) and global airway measurements (number of segmented airways and pi10) were generated. In addition, selected individual airway measurements (minor and major inner diameter, wall thickness, inner and outer area, inner and outer perimeter, wall area fraction, and inner equivalent circle diameter) were evaluated. Comparisons were made between control and target protocols using difference and repeatability measures. Estimated CT volume dose index (CTDIvol) across all protocols ranged from 7.32 mGy to 0.32 mGy. Low- and ultralow-dose protocols required more manual editing and resolved fewer airway branches; yet, comparable pi10 whole lung measures were observed across all protocols. Similar trends in acquired parenchymal and airway measurements were observed across all protocols, with increased measurement differences using the ultralow-dose protocols. However, for small airways (1.9 ± 0.2 mm) and medium airways (5.7 ± 0.4 mm), the measurement differences across all protocols were comparable to the control protocol repeatability across breath holds. Diameters, wall thickness, wall area fraction

  20. Security Protocol Review Method Analyzer (SPRMAN)

    OpenAIRE

    Navaz, A. S. Syed; Narayanan, H. Iyyappa; Vinoth, R.

    2013-01-01

    This system is built using J2EE (JSP, Servlet) and HTML as the front end, with Oracle 9i as the back end. SPRMAN has been developed for the client British Telecom (BT) UK, a telecom company. BT's requirement is that it provides network security related products to its IT customers, such as Virtusa, Wipro, and HCL. This product is framed out by a set of protocols, and these protocols are associated with sets of components. By grouping all these protocols and components together, the product is...

  1. Quantum Communication Attacks on Classical Cryptographic Protocols

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre

    In the literature on cryptographic protocols, it has been studied several times what happens if a classical protocol is attacked by a quantum adversary. Usually, this is taken to mean that the adversary runs a quantum algorithm, but communicates classically with the honest players. In several cases, one can show that the protocol remains secure even under such an attack. However, there are also cases where the honest players are quantum as well, even if the protocol uses classical communication. For instance, this is the case when classical multiparty computation is used as a “subroutine” in quantum multiparty computation. Furthermore, in the future, players in a protocol may employ quantum computing simply to improve efficiency of their local computation, even if the communication is supposed to be classical. In such cases, it no longer seems clear that a quantum adversary must be limited...

  2. Quantum Communication Attacks on Classical Cryptographic Protocols

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre

    In the literature on cryptographic protocols, it has been studied several times what happens if a classical protocol is attacked by a quantum adversary. Usually, this is taken to mean that the adversary runs a quantum algorithm, but communicates classically with the honest players. In several cases, one can show that the protocol remains secure even under such an attack. However, there are also cases where the honest players are quantum as well, even if the protocol uses classical communication. For instance, this is the case when classical multiparty computation is used as a “subroutine” in quantum multiparty computation. Furthermore, in the future, players in a protocol may employ quantum computing simply to improve efficiency of their local computation, even if the communication is supposed to be classical. In such cases, it no longer seems clear that a quantum adversary must be limited...

  3. Methods in Molecular Biology Mouse Genetics: Methods and Protocols | Center for Cancer Research

    Science.gov (United States)

    Mouse Genetics: Methods and Protocols provides selected mouse genetic techniques and their application in modeling varieties of human diseases. The chapters are mainly focused on the generation of different transgenic mice to accomplish the manipulation of genes of interest, tracing cell lineages, and modeling human diseases.

  4. A computational protocol for the study of circularly polarized phosphorescence and circular dichroism in spin-forbidden absorption

    DEFF Research Database (Denmark)

    Kaminski, Maciej; Cukras, Janusz; Pecul, Magdalena

    2015-01-01

    We present a computational methodology to calculate the intensity of circular dichroism (CD) in spin-forbidden absorption and of circularly polarized phosphorescence (CPP) signals, a manifestation of the optical activity of the triplet–singlet transitions in chiral compounds. The protocol is based...

  5. Computer simulations suggest that acute correction of hyperglycaemia with an insulin bolus protocol might be useful in brain FDG PET

    Energy Technology Data Exchange (ETDEWEB)

    Buchert, R.; Brenner, W.; Apostolova, I.; Mester, J.; Clausen, M. [University Medical Center Hamburg-Eppendorf (Germany). Dept. of Nuclear Medicine; Santer, R. [University Medical Center Hamburg-Eppendorf (Germany). Center for Gynaecology, Obstetrics and Paediatrics; Silverman, D.H.S. [David Geffen School of Medicine at UCLA, Los Angeles, CA (United States). Dept. of Molecular and Medical Pharmacology

    2009-07-01

    FDG PET in hyperglycaemic subjects often suffers from limited statistical image quality, which may hamper visual and quantitative evaluation. In our study the following insulin bolus protocol is proposed for acute correction of hyperglycaemia (> 7.0 mmol/l) in brain FDG PET. (i) Intravenous bolus injection of short-acting insulin, one I.E. for each 0.6 mmol/l blood glucose above 7.0. (ii) If 20 min after insulin administration plasma glucose is ≤ 7.0 mmol/l, proceed to (iii). If insulin has not taken sufficient effect step back to (i). Compute insulin dose with the updated blood glucose level. (iii) Wait further 20 min before injection of FDG. (iv) Continuous supervision of the patient during the whole scanning procedure. The potential of this protocol for improvement of image quality in brain FDG PET in hyperglycaemic subjects was evaluated by computer simulations within the Sokoloff model. A plausibility check of the prediction of the computer simulations on the magnitude of the effect that might be achieved by correction of hyperglycaemia was performed by retrospective evaluation of the relation between blood glucose level and brain FDG uptake in 89 subjects in whom FDG PET had been performed for diagnosis of Alzheimer's disease. The computer simulations suggested that acute correction of hyperglycaemia according to the proposed bolus insulin protocol might increase the FDG uptake of the brain by up to 80%. The magnitude of this effect was confirmed by the patient data. The proposed management protocol for acute correction of hyperglycaemia with insulin has the potential to significantly improve the statistical quality of brain FDG PET images. This should be confirmed in a prospective study in patients. (orig.)

  6. Computer simulations suggest that acute correction of hyperglycaemia with an insulin bolus protocol might be useful in brain FDG PET

    International Nuclear Information System (INIS)

    Buchert, R.; Brenner, W.; Apostolova, I.; Mester, J.; Clausen, M.; Santer, R.; Silverman, D.H.S.

    2009-01-01

    FDG PET in hyperglycaemic subjects often suffers from limited statistical image quality, which may hamper visual and quantitative evaluation. In our study the following insulin bolus protocol is proposed for acute correction of hyperglycaemia (> 7.0 mmol/l) in brain FDG PET. (i) Intravenous bolus injection of short-acting insulin, one I.E. for each 0.6 mmol/l blood glucose above 7.0. (ii) If 20 min after insulin administration plasma glucose is ≤ 7.0 mmol/l, proceed to (iii). If insulin has not taken sufficient effect step back to (i). Compute insulin dose with the updated blood glucose level. (iii) Wait further 20 min before injection of FDG. (iv) Continuous supervision of the patient during the whole scanning procedure. The potential of this protocol for improvement of image quality in brain FDG PET in hyperglycaemic subjects was evaluated by computer simulations within the Sokoloff model. A plausibility check of the prediction of the computer simulations on the magnitude of the effect that might be achieved by correction of hyperglycaemia was performed by retrospective evaluation of the relation between blood glucose level and brain FDG uptake in 89 subjects in whom FDG PET had been performed for diagnosis of Alzheimer's disease. The computer simulations suggested that acute correction of hyperglycaemia according to the proposed bolus insulin protocol might increase the FDG uptake of the brain by up to 80%. The magnitude of this effect was confirmed by the patient data. The proposed management protocol for acute correction of hyperglycaemia with insulin has the potential to significantly improve the statistical quality of brain FDG PET images. This should be confirmed in a prospective study in patients. (orig.)
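    The dosing rule in step (i) is a simple linear formula; the sketch below shows only that arithmetic, as an illustration of the protocol's computation rather than clinical guidance:

```python
def insulin_bolus_ie(glucose_mmol_l: float, threshold: float = 7.0,
                     step: float = 0.6) -> int:
    """One I.E. of short-acting insulin per 0.6 mmol/l of blood
    glucose above 7.0 mmol/l; zero at or below the threshold."""
    if glucose_mmol_l <= threshold:
        return 0
    return round((glucose_mmol_l - threshold) / step)

# Step (ii) recomputes with the updated glucose if the target is not reached:
print(insulin_bolus_ie(10.0))  # (10.0 - 7.0) / 0.6 = 5 I.E.
print(insulin_bolus_ie(8.2))   # (8.2 - 7.0) / 0.6 = 2 I.E.
```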

  7. Distributed project scheduling at NASA: Requirements for manual protocols and computer-based support

    Science.gov (United States)

    Richards, Stephen F.

    1992-01-01

    The increasing complexity of space operations and the inclusion of interorganizational and international groups in the planning and control of space missions lead to requirements for greater communication, coordination, and cooperation among mission schedulers. These schedulers must jointly allocate scarce shared resources among the various operational and mission oriented activities while adhering to all constraints. This scheduling environment is complicated by such factors as the presence of varying perspectives and conflicting objectives among the schedulers, the need for different schedulers to work in parallel, and limited communication among schedulers. Smooth interaction among schedulers requires the use of protocols that govern such issues as resource sharing, authority to update the schedule, and communication of updates. This paper addresses the development and characteristics of such protocols and their use in a distributed scheduling environment that incorporates computer-aided scheduling tools. An example problem is drawn from the domain of Space Shuttle mission planning.

  8. Computational Biology Methods for Characterization of Pluripotent Cells.

    Science.gov (United States)

    Araúzo-Bravo, Marcos J

    2016-01-01

    Pluripotent cells are a powerful tool for regenerative medicine and drug discovery. Several techniques have been developed to induce pluripotency, or to extract pluripotent cells from different tissues and biological fluids. However, the characterization of pluripotency requires tedious, expensive, time-consuming, and not always reliable wet-lab experiments; thus, an easy, standard quality-control protocol of pluripotency assessment remains to be established. High-throughput techniques can help here, in particular gene expression microarrays, which have become a complementary technique for cellular characterization. Research has shown that transcriptomics comparison with a reference Embryonic Stem Cell (ESC) is a good approach to assess pluripotency. Under the premise that the best protocol is computer software source code, here I propose and explain, line by line, a software protocol coded in R/Bioconductor for pluripotency assessment based on the comparison of transcriptomics data of pluripotent cells with an ESC reference.
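    The underlying comparison is essentially a similarity score between transcriptome vectors. The published protocol is written in R/Bioconductor; the sketch below restates only the core idea in Python, using a hypothetical Pearson-correlation score over a matched, ordered gene set:

```python
import numpy as np

def pluripotency_score(sample_expr: np.ndarray, esc_reference: np.ndarray) -> float:
    """Pearson correlation between a sample's expression profile and an
    Embryonic Stem Cell reference profile over the same ordered gene set."""
    return float(np.corrcoef(sample_expr, esc_reference)[0, 1])

rng = np.random.default_rng(0)
esc = rng.lognormal(size=2000)                 # mock ESC reference expression
candidate = esc * rng.normal(1.0, 0.1, 2000)   # mock ESC-like candidate sample
print(f"similarity to ESC reference: {pluripotency_score(candidate, esc):.2f}")
```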

  9. The impact of different cone beam computed tomography and multi-slice computed tomography scan parameters on virtual three-dimensional model accuracy using a highly precise ex vivo evaluation method.

    Science.gov (United States)

    Matta, Ragai-Edward; von Wilmowsky, Cornelius; Neuhuber, Winfried; Lell, Michael; Neukam, Friedrich W; Adler, Werner; Wichmann, Manfred; Bergauer, Bastian

    2016-05-01

    Multi-slice computed tomography (MSCT) and cone beam computed tomography (CBCT) are indispensable imaging techniques in advanced medicine. The possibility of creating virtual and corporal three-dimensional (3D) models enables detailed planning in craniofacial and oral surgery. The objective of this study was to evaluate the impact of different scan protocols for CBCT and MSCT on virtual 3D model accuracy using a software-based evaluation method that excludes human measurement errors. MSCT and CBCT scans with different manufacturers' predefined scan protocols were obtained from a human lower jaw and were superimposed with a master model generated by an optical scan of an industrial noncontact scanner. To determine the accuracy, the mean and standard deviations were calculated, and t-tests were used for comparisons between the different settings. Averaged over 10 repeated X-ray scans per method and 19 measurement points per scan (n = 190), it was found that the MSCT scan protocol 140 kV delivered the most accurate virtual 3D model, with a mean deviation of 0.106 mm compared to the master model. Only the CBCT scans with 0.2-voxel resolution delivered a similarly accurate 3D model (mean deviation 0.119 mm). Within the limitations of this study, it was demonstrated that the accuracy of a 3D model of the lower jaw depends on the protocol used for MSCT and CBCT scans.
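
    The reported statistics follow from per-point deviations between each virtual model and the optical master model. A minimal sketch with synthetic deviations (the real per-point values are not given in the abstract):

        import numpy as np
        from scipy import stats

        # Hypothetical per-point absolute deviations (mm) from the optical
        # master model: 10 scans x 19 points = 190 values per protocol.
        rng = np.random.default_rng(1)
        msct_140kv = np.abs(rng.normal(0.106, 0.03, size=190))
        cbct_02vox = np.abs(rng.normal(0.119, 0.03, size=190))

        print(f"MSCT 140 kV mean deviation: {msct_140kv.mean():.3f} mm "
              f"(SD {msct_140kv.std(ddof=1):.3f})")
        t, p = stats.ttest_ind(msct_140kv, cbct_02vox)  # compare the protocols
        print(f"t = {t:.2f}, p = {p:.3f}")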

  10. Simple proof of the unconditional security of the Bennett 1992 quantum key distribution protocol

    International Nuclear Information System (INIS)

    Zhang Quan; Tang Chaojing

    2002-01-01

    It is generally accepted that quantum key distribution (QKD) could supply legitimate users with unconditional security during their communication. Considerable experimental progress has been made in quantum cryptography. However, security against an eavesdropper with extra-powerful computational ability (one who, for example, has access to a quantum computer and can execute any eavesdropping measurement allowed by the laws of physics) has not been widely studied and rigorously proved for most QKD protocols. Quite recently, Shor and Preskill proved concisely the unconditional security of the Bennett-Brassard 1984 (BB84) protocol. Their method is highly valued for its clarity of concept and concision of form. To take advantage of the Shor-Preskill technique in proving the unconditional security of the B92 QKD protocol, we introduce in this paper a transformation that translates the Bennett 1992 (B92) protocol into the BB84 protocol. By proving that the transformation leaks no more information to the eavesdropper, we prove the unconditional security of the B92 protocol. We also settle the problem posed by Lo of how to prove the unconditional security of the B92 protocol with the Shor-Preskill method.

  11. New Computational Approaches for NMR-based Drug Design: A Protocol for Ligand Docking to Flexible Target Sites

    International Nuclear Information System (INIS)

    Gracia, Luis; Speidel, Joshua A.; Weinstein, Harel

    2006-01-01

    NMR-based drug design has met with some success in the last decade, as illustrated in numerous instances by Fesik's 'ligand screening by NMR' approach. Ongoing efforts to generalize this success have led us to the development of a new paradigm in which quantitative computational approaches are integrated with NMR-derived data and biological assays. The key component of this work is the inclusion of the intrinsic dynamic quality of NMR structures in theoretical models and its use in docking. A new computational protocol is introduced here, designed to dock small molecule ligands to flexible proteins derived from NMR structures. The algorithm makes use of a combination of simulated annealing Monte Carlo (SA/MC) simulations and a mean field potential informed by the NMR data. The new protocol is illustrated in the context of an ongoing project aimed at developing new selective inhibitors for the PCAF bromodomains that interact with HIV Tat.

  12. SU-F-I-43: A Software-Based Statistical Method to Compute Low Contrast Detectability in Computed Tomography Images

    Energy Technology Data Exchange (ETDEWEB)

    Chacko, M; Aldoohan, S [University of Oklahoma Health Sciences Center, Oklahoma City, OK (United States)

    2016-06-15

    Purpose: The low contrast detectability (LCD) of a CT scanner is its ability to detect and display faint lesions. The current approach to quantifying LCD uses vendor-specific methods and phantoms, typically by subjectively observing the smallest object visible at a contrast level above the phantom background. However, this approach does not yield clinically applicable values for LCD. The current study proposes a statistical LCD metric using software tools not only to assess scanner performance but also to quantify the key factors affecting LCD. This approach was developed using uniform QC phantoms, and its applicability was then extended under simulated clinical conditions. Methods: MATLAB software was developed to compute LCD using a uniform image of a QC phantom. For a given virtual object size, the software randomly samples the image within a selected area, and uses statistical analysis based on Student's t-distribution to compute the LCD as the minimal Hounsfield units (HU) that can be distinguished from the background at the 95% confidence level. Its validity was assessed by comparison with the behavior of a known QC phantom under various scan protocols and a tissue-mimicking phantom. The contributions of beam quality and scattered radiation to the computed LCD were quantified by using various external beam-hardening filters and phantom lengths. Results: As expected, the LCD was inversely related to object size under all scan conditions. The type of image reconstruction kernel filter and tissue/organ type strongly influenced the background noise characteristics and therefore the computed LCD for the associated image. Conclusion: The proposed metric and its associated software tools are vendor-independent and can be used to analyze the LCD performance of any scanner. Furthermore, the method employed can be used in conjunction with the relationships established in this study between LCD and tissue type to extend these concepts to patients' clinical CT.
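
    One plausible reading of the described statistic, sketched below: sample many virtual-object-sized regions of a uniform phantom image and take the contrast that exceeds the background fluctuation at the 95% confidence level, using Student's t-distribution (the ROI sampling details and exact test form are assumptions, and the sketch is in Python rather than MATLAB):

        import numpy as np
        from scipy import stats

        def lcd_hu(image, object_px, n_samples=500, confidence=0.95, seed=0):
            """Contrast (HU) an object of size object_px must exceed to be
            distinguished from background fluctuations at this confidence."""
            rng = np.random.default_rng(seed)
            h, w = image.shape
            means = []
            for _ in range(n_samples):
                r = rng.integers(0, h - object_px)
                c = rng.integers(0, w - object_px)
                means.append(image[r:r + object_px, c:c + object_px].mean())
            means = np.asarray(means)
            t_crit = stats.t.ppf(0.5 + confidence / 2.0, len(means) - 1)
            return float(t_crit * means.std(ddof=1))

        uniform_phantom = np.random.default_rng(2).normal(0.0, 10.0, (512, 512))
        for size in (2, 4, 8):  # LCD falls as the virtual object size grows
            print(size, round(lcd_hu(uniform_phantom, size), 2))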

  13. Chapter 15: Commercial New Construction Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Keates, Steven [ADM Associates, Inc., Atlanta, GA (United States)

    2017-10-09

    This protocol is intended to describe the recommended method when evaluating the whole-building performance of new construction projects in the commercial sector. The protocol focuses on energy conservation measures (ECMs) or packages of measures where evaluators can analyze impacts using building simulation. These ECMs typically require the use of calibrated building simulations under Option D of the International Performance Measurement and Verification Protocol (IPMVP).

  14. Transgenic mouse - Methods and protocols, 2nd edition

    Directory of Open Access Journals (Sweden)

    Carlo Alberto Redi

    2011-09-01

    Full Text Available Marten H. Hofner (from the Dept. of Pathology of the Groningen University) and Jan M. van Deursen (from the Mayo College of Medicine at Rochester, MN, USA) provided us with the valuable second edition of Transgenic Mouse: in fact, even though we are in the -omics era and already equipped with state-of-the-art techniques in whatever field, we still need gene(s) functional analysis data to understand common and complex diseases. Transgenesis is still an irreplaceable method, and protocols to perform it well are more than welcome. Here, how to get genetically modified mice (the quintessential model of so many human diseases, considering how much of the human genome is conserved in the mouse and the great blocks of genic synteny existing between the two genomes) is analysed in depth and presented in clearly detailed, step-by-step protocols.

  15. Evaluation of a focussed protocol for hand-held echocardiography and computer-assisted auscultation in detecting latent rheumatic heart disease in scholars.

    Science.gov (United States)

    Zühlke, Liesl J; Engel, Mark E; Nkepu, Simpiwe; Mayosi, Bongani M

    2016-08-01

    Introduction Echocardiography is the diagnostic test of choice for latent rheumatic heart disease. The utility of echocardiography for large-scale screening is limited by high cost, complex diagnostic protocols, and time to acquire multiple images. We evaluated the performance of a brief hand-held echocardiography protocol and computer-assisted auscultation in detecting latent rheumatic heart disease with or without pathological murmur. A total of 27 asymptomatic patients with latent rheumatic heart disease based on the World Heart Federation criteria and 66 healthy controls were examined by standard cardiac auscultation to detect pathological murmur. Hand-held echocardiography using a focussed protocol that utilises one view - that is, the parasternal long-axis view - and one measurement - that is, mitral regurgitant jet - and a computer-assisted auscultation utilising an automated decision tool were performed on all patients. The sensitivity and specificity of computer-assisted auscultation in latent rheumatic heart disease were 4% (95% CI 1.0-20.4%) and 93.7% (95% CI 84.5-98.3%), respectively. The sensitivity and specificity of the focussed hand-held echocardiography protocol for definite rheumatic heart disease were 92.3% (95% CI 63.9-99.8%) and 100%, respectively. The test reliability of hand-held echocardiography was 98.7% for definite and 94.7% for borderline disease, and the adjusted diagnostic odds ratios were 1041 and 263.9 for definite and borderline disease, respectively. Computer-assisted auscultation has extremely low sensitivity but high specificity for pathological murmur in latent rheumatic heart disease. Focussed hand-held echocardiography has fair sensitivity but high specificity and diagnostic utility for definite or borderline rheumatic heart disease in asymptomatic patients.
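
    The reported percentages are standard two-by-two-table statistics. A minimal sketch, with counts chosen to be consistent with the reported figures (e.g. 12 of 13 definite cases detected, all 66 controls negative; these counts are inferred, not quoted):

        import math

        def sens_spec(tp, fn, tn, fp, z=1.96):
            """Sensitivity and specificity with normal-approximation 95% CIs
            (the approximation degenerates when a proportion equals 0 or 1)."""
            def prop_ci(k, n):
                p = k / n
                half = z * math.sqrt(p * (1 - p) / n)
                return (round(p, 3),
                        round(max(0.0, p - half), 3),
                        round(min(1.0, p + half), 3))
            return {"sensitivity": prop_ci(tp, tp + fn),
                    "specificity": prop_ci(tn, tn + fp)}

        print(sens_spec(tp=12, fn=1, tn=66, fp=0))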

  16. Numerical methods in matrix computations

    CERN Document Server

    Björck, Åke

    2015-01-01

    Matrix algorithms are at the core of scientific computing and are indispensable tools in most applications in engineering. This book offers a comprehensive and up-to-date treatment of modern methods in matrix computation. It uses a unified approach to direct and iterative methods for linear systems, least squares and eigenvalue problems. A thorough analysis of the stability, accuracy, and complexity of the treated methods is given. Numerical Methods in Matrix Computations is suitable for use in courses on scientific computing and applied technical areas at advanced undergraduate and graduate level. A large bibliography is provided, which includes both historical and review papers as well as recent research papers. This makes the book useful also as a reference and guide to further study and research work. Åke Björck is a professor emeritus at the Department of Mathematics, Linköping University. He is a Fellow of the Society of Industrial and Applied Mathematics.

  17. A secure RFID mutual authentication protocol for healthcare environments using elliptic curve cryptography.

    Science.gov (United States)

    Jin, Chunhua; Xu, Chunxiang; Zhang, Xiaojun; Zhao, Jining

    2015-03-01

    Radio Frequency Identification (RFID) is an automatic identification technology that can be widely used in healthcare environments to locate and track staff, equipment and patients. However, potential security and privacy problems in RFID systems remain a challenge. In this paper, we design a mutual authentication protocol for RFID based on elliptic curve cryptography (ECC). We use a pre-computation method on the tag side, so our protocol achieves better efficiency. In terms of security, our protocol can achieve confidentiality, unforgeability, mutual authentication, tag anonymity, availability and forward security. Our protocol also overcomes weaknesses in the existing protocols. Therefore, our protocol is suitable for healthcare environments.
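
    The abstract does not give the curve or the exact pre-computation scheme; the sketch below illustrates the general idea on a toy curve: a table of doublings of the base point is computed offline (e.g. at tag personalisation), so the online scalar multiplication needs only point additions. All parameters are illustrative and insecure:

        # Toy short-Weierstrass curve y^2 = x^3 + 2x + 3 over F_97.
        P_MOD, A = 97, 2
        G = (0, 10)  # base point: 10^2 = 100 = 3 = 0^3 + 2*0 + 3 (mod 97)

        def ec_add(p, q):
            if p is None: return q
            if q is None: return p
            (x1, y1), (x2, y2) = p, q
            if x1 == x2 and (y1 + y2) % P_MOD == 0:
                return None  # point at infinity
            if p == q:  # pow(x, -1, m) needs Python 3.8+
                lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
            else:
                lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
            x3 = (lam * lam - x1 - x2) % P_MOD
            return (x3, (lam * (x1 - x3) - y1) % P_MOD)

        # Offline pre-computation of 2^i * G for i = 0..7 (scalars < 256).
        TABLE = [G]
        for _ in range(7):
            TABLE.append(ec_add(TABLE[-1], TABLE[-1]))

        def scalar_mult(k):
            """k*G using the pre-computed table: only additions online."""
            acc = None
            for i, point in enumerate(TABLE):
                if (k >> i) & 1:
                    acc = ec_add(acc, point)
            return acc

        print(scalar_mult(5))  # 5*G = 4*G + G = (88, 56)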

  18. Augmented Quadruple-Phase Contrast Media Administration and Triphasic Scan Protocol Increases Image Quality at Reduced Radiation Dose During Computed Tomography Urography.

    Science.gov (United States)

    Saade, Charbel; Mohamad, May; Kerek, Racha; Hamieh, Nadine; Alsheikh Deeb, Ibrahim; El-Achkar, Bassam; Tamim, Hani; Abdul Razzak, Farah; Haddad, Maurice; Abi-Ghanem, Alain S; El-Merhi, Fadi

    The aim of this article was to investigate the opacification of the renal vasculature and the urogenital system during computed tomography urography using quadruple-phase contrast media administration in a triphasic scan protocol. A total of 200 patients with possible urinary tract abnormalities were equally divided between 2 protocols. Protocol A used the conventional single bolus and a quadruple-phase scan protocol (pre, arterial, venous, and delayed), retrospectively. Protocol B included a quadruple-phase contrast media injection with a triphasic scan protocol (pre, arterial and combined venous, and delayed), prospectively. Each protocol used 100 mL of contrast and saline at a flow rate of 4.5 mL/s. Attenuation profiles and contrast-to-noise ratios of the renal arteries, veins, and urogenital tract were measured. Effective radiation dose calculation, data analysis by independent-sample t test, receiver operating characteristic, and visual grading characteristic analyses were performed. In the arterial circulation, only the inferior interlobular arteries showed a statistically significant difference between the two protocols (P < 0.05), and protocol B showed a higher contrast-to-noise ratio than protocol A (protocol B: 22.68 ± 13.72; protocol A: 14.75 ± 5.76; P < 0.05). Augmented quadruple-phase contrast media administration with a triphasic scan protocol increases image quality at a reduced radiation dose.
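
    Contrast-to-noise ratio as used in such studies is a simple region-of-interest statistic: the attenuation difference between the target vessel and the background, divided by the background noise. A minimal sketch with mock ROI samples (all numbers hypothetical):

        import numpy as np

        def cnr(roi_vessel, roi_background):
            """Contrast-to-noise ratio: mean attenuation difference between
            vessel and background ROIs, normalised by background noise (HU)."""
            roi_vessel, roi_background = np.asarray(roi_vessel), np.asarray(roi_background)
            return (roi_vessel.mean() - roi_background.mean()) / roi_background.std(ddof=1)

        rng = np.random.default_rng(3)
        vessel = rng.normal(350.0, 20.0, 400)      # mock opacified renal artery ROI
        background = rng.normal(50.0, 15.0, 400)   # mock adjacent soft-tissue ROI
        print(round(cnr(vessel, background), 1))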

  19. Development of the protocol for purification of artemisinin based on combination of commercial and computationally designed adsorbents.

    Science.gov (United States)

    Piletska, Elena V; Karim, Kal; Cutler, Malcolm; Piletsky, Sergey A

    2013-01-01

    A polymeric adsorbent for extraction of the antimalarial drug artemisinin from Artemisia annua L. was computationally designed. This polymer demonstrated a high capacity for artemisinin (120 mg g⁻¹) and quantitative recovery (87%), and was found to be an effective material for purification of artemisinin from a complex plant matrix. The artemisinin quantification was conducted using an optimised HPLC-MS protocol, which was characterised by high precision and linearity in the concentration range between 0.05 and 2 μg mL⁻¹. Optimisation of the purification protocol also involved screening of commercial adsorbents for the removal of waxes and other interfering natural compounds, which inhibit the crystallisation of artemisinin. As a result of the two-step purification protocol, crystals of artemisinin were obtained, with a purity evaluated at 75%. By performing the second stage of purification twice, the purity of artemisinin can be further improved to 99%. The developed protocol produced high-purity artemisinin using only a few purification steps, which makes it suitable for a large-scale industrial manufacturing process.

  20. Lung Ultrasonography in Patients With Idiopathic Pulmonary Fibrosis: Evaluation of a Simplified Protocol With High-Resolution Computed Tomographic Correlation.

    Science.gov (United States)

    Vassalou, Evangelia E; Raissaki, Maria; Magkanas, Eleftherios; Antoniou, Katerina M; Karantanas, Apostolos H

    2018-03-01

    To compare a simplified ultrasonographic (US) protocol in 2 patient positions with the same-positioned comprehensive US assessments and high-resolution computed tomographic (CT) findings in patients with idiopathic pulmonary fibrosis. Twenty-five consecutive patients with idiopathic pulmonary fibrosis were prospectively enrolled and examined in 2 sessions. During session 1, patients were examined with a US protocol including 56 lung intercostal spaces in supine/sitting (supine/sitting comprehensive protocol) and lateral decubitus (decubitus comprehensive protocol) positions. During session 2, patients were evaluated with a 16-intercostal space US protocol in sitting (sitting simplified protocol) and left/right decubitus (decubitus simplified protocol) positions. The 16 intercostal spaces were chosen according to the prevalence of idiopathic pulmonary fibrosis-related changes on high-resolution CT. The sum of B-lines counted in each intercostal space formed the US scores for all 4 US protocols: supine/sitting and decubitus comprehensive US scores and sitting and decubitus simplified US scores. High-resolution CT-related Warrick scores (J Rheumatol 1991; 18:1520-1528) were compared to US scores. The duration of each protocol was recorded. A significant correlation was found between all US scores and Warrick scores and between the simplified and corresponding comprehensive scores in patients with idiopathic pulmonary fibrosis. The 16-intercostal space simplified protocol in the lateral decubitus position correlated better with high-resolution CT findings and was less time-consuming compared to the sitting position.

  1. Secure Multi-party Computation Protocol for Defense Applications in Military Operations Using Virtual Cryptography

    Science.gov (United States)

    Pathak, Rohit; Joshi, Satyadhar

    Since the start of the 21st century, the whole world has faced the common dilemma of terrorism. The suicide attacks on the US twin towers on 11 September 2001, the train bombings in Madrid, Spain on 11 March 2004, the London bombings of 7 July 2005 and the Mumbai attack of 26 November 2008 were some of the most disturbing and destructive terrorist acts of the last decade, clearly showing that terrorists will go to any extent to accomplish their goals. Many terrorist organizations, such as al Qaeda, Harakat ul-Mujahidin, Hezbollah, Jaish-e-Mohammed and Lashkar-e-Toiba, run training camps and terrorist operations equipped with the latest technology and a high-tech arsenal. To counter such terrorism, our military needs advanced defense technology, and one of the major issues of concern is secure communication. Communication between different military forces must be secure so that critical information is not leaked to the adversary; leakage of such data can prove hazardous, so its preservation and security are of prime importance. There may be a need to perform computations that require data from many military forces, but in some cases the associated forces would not want to reveal their data to the other forces. Secure multi-party computation finds its application in such situations. In this paper, we propose a new, highly scalable Secure Multi-party Computation (SMC) protocol and algorithm for defense applications which can be used to perform computation on encrypted data. Every party encrypts its data in accordance with a particular scheme. This encrypted data is distributed among a number of created virtual parties, which send their data to a trusted third party (TTP) through an anonymizer layer. The TTP performs the computation on the encrypted data and announces the result. As the data sent was encrypted, its actual value cannot be known by the TTP.
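
    The abstract does not specify the encryption scheme; additive secret sharing is a common stand-in that captures the virtual-party idea, in which no single forwarded share reveals a force's input. A minimal sketch under that assumption:

        import secrets

        Q = 2**61 - 1  # arithmetic modulus; shares are uniform in [0, Q)

        def make_virtual_shares(value, n_virtual=3):
            """Split one force's secret into random additive shares; each
            virtual party forwards one share, so no single share leaks it."""
            shares = [secrets.randbelow(Q) for _ in range(n_virtual - 1)]
            shares.append((value - sum(shares)) % Q)
            return shares

        # Three forces with secret inputs; the aggregating TTP only ever
        # sees individual shares and computes the modular sum.
        secrets_per_force = [1200, 860, 450]
        all_shares = [s for v in secrets_per_force for s in make_virtual_shares(v)]
        print(sum(all_shares) % Q)  # 2510: the total, without any single input revealed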

  2. Developing an optimum protocol for thermoluminescence dosimetry with gr-200 chips using Taguchi method

    International Nuclear Information System (INIS)

    Sadeghi, Maryam; Faghihi, Reza; Sina, Sedigheh

    2017-01-01

    Thermoluminescence dosimetry (TLD) is a powerful technique with wide applications in personal, environmental and clinical dosimetry. The annealing, storage and reading protocols strongly affect the accuracy of the TLD response. The purpose of this study is to obtain an optimum protocol for GR-200 (LiF: Mg, Cu, P) by optimizing the effective parameters, to increase the reliability of the TLD response, using the Taguchi method. The Taguchi method has been used in this study for optimization of the annealing, storage and reading protocols of the TLDs. A total of 108 GR-200 chips were divided into 27 groups, each containing four chips. The TLDs were exposed to three different doses, and stored, annealed and read out by different procedures as suggested by the Taguchi method. By comparing the signal-to-noise ratios, the optimum dosimetry procedure was obtained. According to the results, the optimum values for annealing temperature (°C), annealing time (s), annealing-to-exposure time (d), exposure-to-readout time (d), pre-heat temperature (°C), pre-heat time (s), heating rate (°C/s), maximum readout temperature (°C), readout time (s) and storage temperature (°C) are 240, 90, 1, 2, 50, 0, 15, 240, 13 and -20, respectively. Using the optimum protocol, an efficient glow curve with low residual signals can be achieved, and the dosimetry can be effectively performed with great accuracy. (authors)
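
    The abstract does not state which Taguchi signal-to-noise form was used; the sketch below applies the common larger-is-better definition to two hypothetical groups of four chip readouts each:

        import numpy as np

        def sn_larger_is_better(readings):
            """Taguchi larger-is-better signal-to-noise ratio (dB) for one
            group of repeated TLD readings."""
            y = np.asarray(readings, dtype=float)
            return -10.0 * np.log10(np.mean(1.0 / y**2))

        # Hypothetical readouts (nC) of the four chips in two of the 27 groups.
        group_a = np.array([41.8, 42.1, 41.5, 42.3])
        group_b = np.array([39.0, 44.5, 37.2, 45.1])  # similar mean, noisier
        print(round(sn_larger_is_better(group_a), 2))  # higher S/N wins
        print(round(sn_larger_is_better(group_b), 2))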

  3. Cryptographic protocol security analysis based on bounded constructing algorithm

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    An efficient approach to analyzing cryptographic protocols is to develop automatic analysis tools based on formal methods. However, the approach suffers from high computational complexity, because protocol participants are arbitrary, message structures are complex, and executions are concurrent. We propose an efficient automatic verification algorithm for analyzing cryptographic protocols based on the recently proposed Cryptographic Protocol Algebra (CPA) model, in which algebraic techniques are used to simplify the description of cryptographic protocols and their executions. Redundant states generated during analysis are much reduced by introducing a new algebraic technique called the Universal Polynomial Equation, and the algorithm can be used to verify the correctness of protocols in an infinite state space. We have implemented an efficient automatic analysis tool for cryptographic protocols, called ACT-SPA, based on this algorithm, and used the tool to check more than 20 cryptographic protocols. The analysis results show that this tool is more efficient, and an attack instance not previously reported was found by using this tool.

  4. A new method for improving security in MANETs AODV Protocol

    Directory of Open Access Journals (Sweden)

    Zahra Alishahi

    2012-10-01

    Full Text Available In a mobile ad hoc network (MANET), secure communication is a challenging task due to fundamental characteristics such as limited infrastructure, wireless links, distributed cooperation, dynamic topology, lack of association, constrained resources and the physical vulnerability of nodes. In a MANET, attacks can be broadly classified in two categories: routing attacks and data forwarding attacks. Any action not following the rules of the routing protocol belongs to the routing attacks, whose main objective is to disrupt the normal functioning of the network by advertising false routing updates. Data forwarding attacks, on the other hand, include actions such as modifying or dropping data packets, which do not disrupt the routing protocol. In this paper, we address the "Packet Drop Attack", a serious threat to operational mobile ad hoc networks: not forwarding or dropping other nodes' packets prevents any kind of communication from being established in the network. Addressing packet dropping therefore takes high priority if mobile ad hoc networks are to emerge and operate successfully. In this paper, we propose a method to secure the ad hoc on-demand distance vector (AODV) routing protocol. The proposed method provides security for routing packets where a malicious node acts as a black hole and drops packets. In this method, the collaboration of a group of nodes is used to make accurate decisions; validating received RREPs allows the source to select a trusted path to its destination. The simulation results show that the proposed mechanism is able to detect any number of attackers.

  5. Methodics of computing the results of monitoring the exploratory gallery

    Directory of Open Access Journals (Sweden)

    Krúpa Víazoslav

    2000-09-01

    Full Text Available At the building site of the Višňové-Dubná skala motorway tunnel, priority is given to driving the exploration gallery that provides detailed geological, engineering-geological, hydrogeological and geotechnical survey data. This research is based on gathering information for the intended use of a full-profile driving machine to bore the motorway tunnel. From the part of the exploration gallery driven by the TBM method, comprehensive information about the parameters of the driving process is gathered by a computer monitoring system mounted on the driving machine. This monitoring system is based on the industrial computer PC 104. It records four basic values of the driving process: the electromotor performance of the driving machine Voest-Alpine ATB 35HA, the speed of driving advance, the rotation speed of the disintegrating head of the TBM, and the total head pressure. The pressure force is evaluated from the pressure in the hydraulic cylinders of the machine. From these values, the strength of the rock mass, the angle of internal friction, etc. are mathematically calculated; these values characterize the rock mass properties as well as their changes. To quantify the effectiveness of the driving process, the specific energy and the working ability of the driving head are used. The article defines the methodology for computing the gathered monitoring information, prepared for the driving machine Voest-Alpine ATB 35H at the Institute of Geotechnics SAS. It describes the input forms (protocols) of the developed method, created in an EXCEL program, and shows selected samples of the graphical elaboration of the first monitoring results obtained from the exploratory gallery driving process in the Višňové-Dubná skala motorway tunnel.
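
    Specific energy is typically computed as cutterhead power divided by the volume of rock disintegrated per unit time. A minimal sketch, with all machine figures hypothetical (the abstract gives none):

        import math

        def specific_energy(power_kw, advance_m_per_h, bore_diameter_m):
            """Specific energy of excavation in MJ/m^3: cutterhead power
            divided by the volume of rock excavated per unit time."""
            face_area_m2 = math.pi * (bore_diameter_m / 2.0) ** 2
            volume_rate_m3_s = face_area_m2 * (advance_m_per_h / 3600.0)
            return (power_kw * 1e3) / volume_rate_m3_s / 1e6

        # Hypothetical figures: 200 kW cutterhead power, 1.2 m/h advance,
        # 3.5 m bore diameter.
        print(round(specific_energy(200.0, 1.2, 3.5), 1))  # ~62.4 MJ/m^3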

  6. Protocol for concomitant temporomandibular joint custom-fitted total joint reconstruction and orthognathic surgery utilizing computer-assisted surgical simulation.

    Science.gov (United States)

    Movahed, Reza; Teschke, Marcus; Wolford, Larry M

    2013-12-01

    Clinicians who address temporomandibular joint (TMJ) pathology and dentofacial deformities surgically can perform the surgery in 1 stage or 2 separate stages. The 2-stage approach requires the patient to undergo 2 separate operations and anesthesia, significantly prolonging the overall treatment. However, performing concomitant TMJ and orthognathic surgery (CTOS) in these cases requires careful treatment planning and surgical proficiency in the 2 surgical areas. This article presents a new treatment protocol for the application of computer-assisted surgical simulation in CTOS cases requiring reconstruction with patient-fitted total joint prostheses. The traditional and new CTOS protocols are described and compared. The new CTOS protocol helps decrease the preoperative workup time and increase the accuracy of model surgery.

  7. Optimization on the dose versus noise in the image on protocols for computed tomography of pediatric head

    International Nuclear Information System (INIS)

    Saint'Yves, Thalis L.A.; Travassos, Paulo Cesar B.; Goncalves, Elicardo A.S.; Mecca A, Fernando; Silveira, Thiago B.

    2010-01-01

    This article aims to establish optimized protocols for pediatric head computed tomography on the Picker Q2000 scanner of the Instituto Nacional de Cancer, through the analysis of dose versus image noise as the mAs and kVp values are varied. We used a water phantom to measure the noise, a pencil-type ionization chamber to measure the dose in air, and the Alderson Rando phantom to check the image quality. We found values of mAs and kVp that reduce the skin dose of the original protocol by 35.9%, maintaining the same image quality for a safe diagnosis. (author)
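
    The trade-off being optimized follows the standard CT quantum-noise model, in which image noise scales as 1/sqrt(mAs) while dose scales roughly linearly with mAs at fixed kVp. A sketch under that model, with hypothetical numbers:

        import math

        def scaled_noise(noise_ref, mas_ref, mas_new):
            """Quantum-noise scaling in CT: sigma is proportional to
            1/sqrt(mAs), so halving mAs raises noise by a factor sqrt(2)."""
            return noise_ref * math.sqrt(mas_ref / mas_new)

        def dose_change_pct(mas_ref, mas_new):
            """At fixed kVp, dose scales approximately linearly with mAs."""
            return (1.0 - mas_new / mas_ref) * 100.0

        # Hypothetical: dropping from 250 to 160 mAs at fixed kVp.
        print(scaled_noise(4.0, 250, 160))   # noise rises 25% (4.0 -> 5.0 HU)
        print(dose_change_pct(250, 160))     # ~36% dose reduction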

  8. Computational methods in drug discovery

    Directory of Open Access Journals (Sweden)

    Sumudu P. Leelananda

    2016-12-01

    Full Text Available The process of drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power, has made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein-ligand docking, pharmacophore modeling and QSAR techniques are reviewed.

  9. Reference-based digital concept to restore partially edentulous patients following an immediate loading protocol: a pilot study

    NARCIS (Netherlands)

    Tahmaseb, A.; de Clerck, R.; Eckert, S.; Wismeijer, D.

    2011-01-01

    PURPOSE: To describe the use of a computer-aided three-dimensional planning protocol in combination with previously placed reference elements and computer-aided design/computer-assisted manufacture (CAD/CAM) technology to restore the partially edentulous patient. MATERIALS AND METHODS: Mini-implants

  10. Computing Nash equilibria through computational intelligence methods

    Science.gov (United States)

    Pavlidis, N. G.; Parsopoulos, K. E.; Vrahatis, M. N.

    2005-03-01

    Nash equilibrium constitutes a central solution concept in game theory. The task of detecting the Nash equilibria of a finite strategic game remains a challenging problem to date. This paper investigates the effectiveness of three computational intelligence techniques, namely covariance matrix adaptation evolution strategies, particle swarm optimization, and differential evolution, to compute Nash equilibria of finite strategic games, as global minima of a real-valued, nonnegative function. An issue of particular interest is to detect more than one Nash equilibrium of a game. The performance of the considered computational intelligence methods on this problem is investigated using multistart and deflection.
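
    As one concrete reading of this formulation, the sketch below minimizes a standard nonnegative function, whose zeros are exactly the Nash equilibria, using SciPy's differential evolution; the game (matching pennies) and the parameterization are illustrative choices, not the paper's:

        import numpy as np
        from scipy.optimize import differential_evolution

        # Matching pennies: A is the row player's payoff matrix, B = -A.
        A = np.array([[1.0, -1.0], [-1.0, 1.0]])
        B = -A

        def nash_objective(v):
            """Nonnegative function whose global minima (value 0) are the
            Nash equilibria; v = (q, r) are first-strategy probabilities."""
            x = np.array([v[0], 1.0 - v[0]])   # row player's mixed strategy
            y = np.array([v[1], 1.0 - v[1]])   # column player's mixed strategy
            u_row, u_col = x @ A @ y, x @ B @ y
            gain_row = np.maximum(A @ y - u_row, 0.0)   # unilateral improvements
            gain_col = np.maximum(B.T @ x - u_col, 0.0)
            return np.sum(gain_row**2) + np.sum(gain_col**2)

        result = differential_evolution(nash_objective, bounds=[(0, 1), (0, 1)],
                                        seed=0, tol=1e-12)
        print(result.x)   # ~[0.5, 0.5]: the unique mixed equilibrium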

  11. What is the best contrast injection protocol for 64-row multi-detector cardiac computed tomography?

    International Nuclear Information System (INIS)

    Lu Jinguo; Lv Bin; Chen Xiongbiao; Tang Xiang; Jiang Shiliang; Dai Ruping

    2010-01-01

    Objective: To determine the optimal contrast injection protocol for 64-MDCT coronary angiography. Materials and methods: One hundred and fifty consecutive patients were scheduled to undergo retrospectively electrocardiographically gated 64-MDCT. Patients were assigned, 30 to each group, to one of five contrast protocols: group 1, a uniphasic protocol (contrast injection without saline flush); group 2, a biphasic protocol (contrast injection with saline flush); groups 3A, 3B and 3C, triphasic protocols (contrast media + different saline-diluted contrast media + saline flush). Image quality scores and artifacts were compared and evaluated on both transaxial and three-dimensional coronary artery images for each contrast protocol. Results: Among the triphasic protocol groups, group 3A (a 30%:70% contrast media-saline mixture used in the second phase) used the least contrast media and had the lowest frequency of streak artifacts, but there were no significant differences in coronary artery attenuation, image quality, or visualization of right and left heart structures. Among the uniphasic protocol group (group 1), the biphasic protocol group (group 2) and the triphasic protocol subgroup (group 3A), there were no significant differences in image quality scores of the coronary arteries (P = 0.18); the uniphasic protocol group had the highest frequency of streak artifacts (20 cases) (P < 0.05) and used the most contrast media (67.0 ± 5.3 ml); the biphasic protocol group used the least contrast media (59.9 ± 4.9 ml) (P < 0.05) and had the highest attenuation of the left main coronary artery and right coronary artery (P < 0.01), but had the fewest cases of clear visualization of right heart structures (6 cases); the triphasic protocol group (group 3A) had the most cases of clear visualization of right heart structures (29 cases) among the three groups (P < 0.05). Conclusion: The biphasic protocol is superior to the traditional uniphasic protocols in using the least total contrast media, having the least…

  12. What is the best contrast injection protocol for 64-row multi-detector cardiac computed tomography?

    Energy Technology Data Exchange (ETDEWEB)

    Lu Jinguo [Department of Radiology, Cardiovascular Institute and Fuwai Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, 167 Beilishi Road, Beijing (China); Lv Bin, E-mail: blu@vip.sina.co [Department of Radiology, Cardiovascular Institute and Fuwai Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, 167 Beilishi Road, Beijing (China); Chen Xiongbiao; Tang Xiang; Jiang Shiliang; Dai Ruping [Department of Radiology, Cardiovascular Institute and Fuwai Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, 167 Beilishi Road, Beijing (China)

    2010-08-15

    Objective: To determine the optimal contrast injection protocol for 64-MDCT coronary angiography. Materials and methods: One hundred and fifty consecutive patients were scheduled to undergo retrospectively electrocardiographically gated 64-MDCT. Patients were assigned, 30 to each group, to one of five contrast protocols: group 1, a uniphasic protocol (contrast injection without saline flush); group 2, a biphasic protocol (contrast injection with saline flush); groups 3A, 3B and 3C, triphasic protocols (contrast media + different saline-diluted contrast media + saline flush). Image quality scores and artifacts were compared and evaluated on both transaxial and three-dimensional coronary artery images for each contrast protocol. Results: Among the triphasic protocol groups, group 3A (a 30%:70% contrast media-saline mixture used in the second phase) used the least contrast media and had the lowest frequency of streak artifacts, but there were no significant differences in coronary artery attenuation, image quality, or visualization of right and left heart structures. Among the uniphasic protocol group (group 1), the biphasic protocol group (group 2) and the triphasic protocol subgroup (group 3A), there were no significant differences in image quality scores of the coronary arteries (P = 0.18); the uniphasic protocol group had the highest frequency of streak artifacts (20 cases) (P < 0.05) and used the most contrast media (67.0 ± 5.3 ml); the biphasic protocol group used the least contrast media (59.9 ± 4.9 ml) (P < 0.05) and had the highest attenuation of the left main coronary artery and right coronary artery (P < 0.01), but had the fewest cases of clear visualization of right heart structures (6 cases); the triphasic protocol group (group 3A) had the most cases of clear visualization of right heart structures (29 cases) among the three groups (P < 0.05). Conclusion: The biphasic protocol is superior to the traditional uniphasic protocols in using the least total contrast media, having the least…

  13. Developing an Optimum Protocol for Thermoluminescence Dosimetry with GR-200 Chips using Taguchi Method.

    Science.gov (United States)

    Sadeghi, Maryam; Faghihi, Reza; Sina, Sedigheh

    2017-06-15

    Thermoluminescence dosimetry (TLD) is a powerful technique with wide applications in personal, environmental and clinical dosimetry. The annealing, storage and reading protocols strongly affect the accuracy of the TLD response. The purpose of this study is to obtain an optimum protocol for GR-200 (LiF: Mg, Cu, P) by optimizing the effective parameters, to increase the reliability of the TLD response, using the Taguchi method. The Taguchi method has been used in this study for optimization of the annealing, storage and reading protocols of the TLDs. A total of 108 GR-200 chips were divided into 27 groups, each containing four chips. The TLDs were exposed to three different doses, and stored, annealed and read out by different procedures as suggested by the Taguchi method. By comparing the signal-to-noise ratios, the optimum dosimetry procedure was obtained. According to the results, the optimum values for annealing temperature (°C), annealing time (s), annealing-to-exposure time (d), exposure-to-readout time (d), pre-heat temperature (°C), pre-heat time (s), heating rate (°C/s), maximum readout temperature (°C), readout time (s) and storage temperature (°C) are 240, 90, 1, 2, 50, 0, 15, 240, 13 and -20, respectively. Using the optimum protocol, an efficient glow curve with low residual signals can be achieved, and the dosimetry can be effectively performed with great accuracy.

  14. Data transmission protocol for Pi-of-the-Sky cameras

    Science.gov (United States)

    Uzycki, J.; Kasprowicz, G.; Mankiewicz, M.; Nawrocki, K.; Sitek, P.; Sokolowski, M.; Sulej, R.; Tlaczala, W.

    2006-10-01

    The large amount of data collected by automatic astronomical cameras has to be transferred to fast computers in a reliable way. The method chosen should ensure data streaming in both directions, but in a nonsymmetrical way. The Ethernet interface is a very good choice because of its popularity and proven performance; however, it requires a TCP/IP stack implementation in devices like cameras for full compliance with existing network and operating systems. This paper describes the NUDP protocol, which was made as a supplement to the standard UDP protocol and can be used as a simple network protocol. NUDP does not need a TCP protocol implementation and makes it possible to run the Ethernet network with simple devices based on microcontroller and/or FPGA chips. The data transmission idea was created especially for the "Pi of the Sky" project.
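
    The NUDP frame format is not given in the abstract; the sketch below shows the general shape of such a protocol, a plain UDP datagram carrying an explicit sequence number, with all field choices hypothetical:

        import socket, struct

        HOST, PORT = "127.0.0.1", 9000

        def send_frame(sock, seq, payload):
            # 4-byte big-endian sequence number + raw payload
            sock.sendto(struct.pack("!I", seq) + payload, (HOST, PORT))

        def recv_frame(sock):
            data, addr = sock.recvfrom(65535)
            seq = struct.unpack("!I", data[:4])[0]
            return seq, data[4:], addr

        server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        server.bind((HOST, PORT))
        client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        send_frame(client, seq=1, payload=b"EXPOSE 60s")
        print(recv_frame(server))  # (1, b'EXPOSE 60s', (...))
        client.close(); server.close()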

  15. A web-based computer-tailored smoking prevention programme for primary school children: intervention design and study protocol

    Science.gov (United States)

    2012-01-01

    Background Although the number of smokers has declined in the last decade, smoking is still a major health problem among youngsters and adolescents. For this reason, there is a need for effective smoking prevention programmes targeting primary school children. A web-based computer-tailored feedback programme may be an effective intervention to stimulate youngsters not to start smoking, and increase their knowledge about the adverse effects of smoking and their attitudes and self-efficacy regarding non-smoking. Methods & design This paper describes the development and evaluation protocol of a web-based out-of-school smoking prevention programme for primary school children (age 10-13 years) entitled ‘Fun without Smokes’. It is a transformation of a postal mailed intervention to a web-based intervention. Besides this transformation the effects of prompts will be examined. This web-based intervention will be evaluated in a 2-year cluster randomised controlled trial (c-RCT) with three study arms. An intervention and intervention + prompt condition will be evaluated for effects on smoking behaviour, compared with a no information control condition. Information about pupils’ smoking status and other factors related to smoking will be obtained using a web-based questionnaire. After completing the questionnaire pupils in both intervention conditions will receive three computer-tailored feedback letters in their personal e-mail box. Attitudes, social influences and self-efficacy expectations will be the content of these personalised feedback letters. Pupils in the intervention + prompt condition will - in addition to the personalised feedback letters - receive e-mail and SMS messages prompting them to revisit the ‘Fun without Smokes’ website. The main outcome measures will be ever smoking and the utilisation of the ‘Fun without Smokes’ website. Measurements will be carried out at baseline, 12 months and 24 months of follow-up. Discussion The present study

  16. A web-based computer-tailored smoking prevention programme for primary school children: intervention design and study protocol

    Directory of Open Access Journals (Sweden)

    Cremers Henricus-Paul

    2012-06-01

    Full Text Available Abstract Background Although the number of smokers has declined in the last decade, smoking is still a major health problem among youngsters and adolescents. For this reason, there is a need for effective smoking prevention programmes targeting primary school children. A web-based computer-tailored feedback programme may be an effective intervention to stimulate youngsters not to start smoking, and increase their knowledge about the adverse effects of smoking and their attitudes and self-efficacy regarding non-smoking. Methods & design This paper describes the development and evaluation protocol of a web-based out-of-school smoking prevention programme for primary school children (age 10-13 years) entitled ‘Fun without Smokes’. It is a transformation of a postal mailed intervention to a web-based intervention. Besides this transformation the effects of prompts will be examined. This web-based intervention will be evaluated in a 2-year cluster randomised controlled trial (c-RCT) with three study arms. An intervention and intervention + prompt condition will be evaluated for effects on smoking behaviour, compared with a no information control condition. Information about pupils’ smoking status and other factors related to smoking will be obtained using a web-based questionnaire. After completing the questionnaire pupils in both intervention conditions will receive three computer-tailored feedback letters in their personal e-mail box. Attitudes, social influences and self-efficacy expectations will be the content of these personalised feedback letters. Pupils in the intervention + prompt condition will - in addition to the personalised feedback letters - receive e-mail and SMS messages prompting them to revisit the ‘Fun without Smokes’ website. The main outcome measures will be ever smoking and the utilisation of the ‘Fun without Smokes’ website. Measurements will be carried out at baseline, 12 months and 24 months of follow-up.

  17. Low-dose X-ray computed tomography image reconstruction with a combined low-mAs and sparse-view protocol.

    Science.gov (United States)

    Gao, Yang; Bian, Zhaoying; Huang, Jing; Zhang, Yunwan; Niu, Shanzhou; Feng, Qianjin; Chen, Wufan; Liang, Zhengrong; Ma, Jianhua

    2014-06-16

    To realize low-dose imaging in X-ray computed tomography (CT) examination, lowering milliampere-seconds (low-mAs) or reducing the required number of projection views (sparse-view) per rotation around the body has been widely studied as an easy and effective approach. In this study, we focus on low-dose CT image reconstruction from sinograms acquired with a combined low-mAs and sparse-view protocol and propose a two-step image reconstruction strategy. Specifically, to suppress significant statistical noise in the noisy and insufficient sinograms, an adaptive sinogram restoration (ASR) method is first proposed with consideration of the statistical property of sinogram data; then, to further obtain a high-quality image, a total variation based projection onto convex sets (TV-POCS) method is adopted with a slight modification. For simplicity, the present reconstruction strategy is termed "ASR-TV-POCS." To evaluate the present ASR-TV-POCS method, both qualitative and quantitative studies were performed on a physical phantom. Experimental results have demonstrated that the present ASR-TV-POCS method can achieve promising gains over other existing methods in terms of noise reduction, contrast-to-noise ratio, and edge detail preservation.
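
    The abstract names the pipeline's components but not their implementation. Below is a toy stand-in for the TV-minimization step only: gradient descent on a smoothed total-variation term with a nonnegativity projection after each step, omitting the sinogram restoration and the data-consistency projections of the real ASR-TV-POCS method:

        import numpy as np

        def tv_pocs_denoise(img, n_iter=100, step=0.2, eps=1e-8):
            """Gradient descent on smoothed total variation, followed by a
            projection onto the nonnegative set after each step."""
            x = img.astype(float).copy()
            for _ in range(n_iter):
                dx = np.diff(x, axis=0, append=x[-1:, :])
                dy = np.diff(x, axis=1, append=x[:, -1:])
                mag = np.sqrt(dx**2 + dy**2 + eps)
                # divergence of the normalised gradient field = -TV subgradient
                div = (dx / mag - np.roll(dx / mag, 1, axis=0)
                       + dy / mag - np.roll(dy / mag, 1, axis=1))
                x += step * div
                x = np.maximum(x, 0.0)   # POCS-style projection: nonnegativity
            return x

        rng = np.random.default_rng(4)
        phantom = np.zeros((64, 64)); phantom[16:48, 16:48] = 100.0
        noisy = phantom + rng.normal(0, 20, phantom.shape)
        den = tv_pocs_denoise(noisy)
        # noise in a flat background patch shrinks after denoising
        print(round(noisy[:8, :8].std(), 1), round(den[:8, :8].std(), 1))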

  18. Numerical Methods for Stochastic Computations A Spectral Method Approach

    CERN Document Server

    Xiu, Dongbin

    2010-01-01

    The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods to high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory, and describes the basic theory of gPC methods.

  19. New protocol for construction of eyeglasses-supported provisional nasal prosthesis using CAD/CAM techniques.

    Science.gov (United States)

    Ciocca, Leonardo; Fantini, Massimiliano; De Crescenzio, Francesca; Persiani, Franco; Scotti, Roberto

    2010-01-01

    A new protocol for making an immediate provisional eyeglasses-supported nasal prosthesis is presented that uses laser scanning, computer-aided design/computer-aided manufacturing procedures, and rapid prototyping techniques, reducing time and costs while increasing the quality of the final product. With this protocol, the eyeglasses were digitized, and the relative position of the nasal prosthesis was planned and evaluated in a virtual environment without any try-in appointment. This innovative method saves time, reduces costs, and restores the patient's aesthetic appearance after disfigurement caused by ablation of the nasal pyramid better than conventional restoration methods. Moreover, the digital model of the designed nasal epithesis can be used to develop a definitive prosthesis anchored to osseointegrated craniofacial implants.

  20. Application of Quantum Process Calculus to Higher Dimensional Quantum Protocols

    Directory of Open Access Journals (Sweden)

    Simon J. Gay

    2014-07-01

    Full Text Available We describe the use of quantum process calculus to describe and analyze quantum communication protocols, following the successful field of formal methods from classical computer science. We have extended the quantum process calculus to describe d-dimensional quantum systems, which has not been done before. We summarise the necessary theory in the generalisation of quantum gates and Bell states and use the theory to apply the quantum process calculus CQP to quantum protocols, namely qudit teleportation and superdense coding.

  1. Replication protocol analysis: a method for the study of real-world design thinking

    DEFF Research Database (Denmark)

    Galle, Per; Kovacs, L. B.

    1996-01-01

    Given the brief of an architectural competition on site planning, and the design awarded the first prize, the first author (trained as an architect but not a participant in the competition) produced a line of reasoning that might have led from brief to design. In the paper, such ‘design replication’ is refined into a method called ‘replication protocol analysis’ (RPA) and discussed from a methodological perspective of design research. It is argued that for the study of real-world design thinking this method offers distinct advantages over traditional ‘design protocol analysis’, which seeks to capture designers’ thinking as it occurs.

  2. Computational methods for fluid dynamics

    CERN Document Server

    Ferziger, Joel H

    2002-01-01

    In its 3rd revised and extended edition the book offers an overview of the techniques used to solve problems in fluid mechanics on computers and describes in detail those most often used in practice. Included are advanced methods in computational fluid dynamics, like direct and large-eddy simulation of turbulence, multigrid methods, parallel computing, moving grids, structured, block-structured and unstructured boundary-fitted grids, and free surface flows. The 3rd edition contains a new section dealing with grid quality and an extended description of discretization methods. The book shows common roots and basic principles for many different methods. It also contains a great deal of practical advice for code developers and users; it is designed to be equally useful to beginners and experts. The issues of numerical accuracy, estimation and reduction of numerical errors are dealt with in detail, with many examples. A full-feature, user-friendly demo version of a commercial CFD software package has been added.

  3. Methods for computing color anaglyphs

    Science.gov (United States)

    McAllister, David F.; Zhou, Ya; Sullivan, Sophia

    2010-02-01

    A new computation technique is presented for calculating pixel colors in anaglyph images. The method depends upon knowing the RGB spectral distributions of the display device and the transmission functions of the filters in the viewing glasses. It requires the solution of a nonlinear least-squares program for each pixel in a stereo pair and is based on minimizing color distances in the CIEL*a*b* uniform color space. The method is compared with several techniques for computing anaglyphs including approximation in CIE space using the Euclidean and Uniform metrics, the Photoshop method and its variants, and a method proposed by Peter Wimmer. We also discuss the methods of desaturation and gamma correction for reducing retinal rivalry.
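
    A simplified per-pixel version of this optimization can be written as a bounded linear least-squares problem if the CIE L*a*b* distance is replaced by a Euclidean RGB distance; the filter matrices below are hypothetical diagonal transmissions, not measured ones:

        import numpy as np
        from scipy.optimize import lsq_linear

        # Hypothetical 3x3 transmission matrices for red and cyan lenses
        # (rows: how much of each display primary reaches each eye).
        F_LEFT = np.diag([0.9, 0.05, 0.05])    # red lens mostly passes R
        F_RIGHT = np.diag([0.05, 0.85, 0.85])  # cyan lens mostly passes G, B

        def anaglyph_pixel(left_rgb, right_rgb):
            """Anaglyph colour minimising the summed squared error between
            what each eye sees through its filter and the intended view."""
            A = np.vstack([F_LEFT, F_RIGHT])             # 6x3 stacked system
            b = np.concatenate([F_LEFT @ left_rgb, F_RIGHT @ right_rgb])
            return lsq_linear(A, b, bounds=(0.0, 1.0)).x  # displayable colours

        print(anaglyph_pixel(np.array([0.8, 0.2, 0.1]),
                             np.array([0.3, 0.6, 0.5])))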

  4. Algorithm for planning a double-jaw orthognathic surgery using a computer-aided surgical simulation (CASS) protocol. Part 1: planning sequence

    Science.gov (United States)

    Xia, J. J.; Gateno, J.; Teichgraeber, J. F.; Yuan, P.; Chen, K.-C.; Li, J.; Zhang, X.; Tang, Z.; Alfi, D. M.

    2015-01-01

    The success of craniomaxillofacial (CMF) surgery depends not only on the surgical techniques, but also on an accurate surgical plan. The adoption of computer-aided surgical simulation (CASS) has created a paradigm shift in surgical planning. However, planning an orthognathic operation using CASS differs fundamentally from planning using traditional methods. With this in mind, the Surgical Planning Laboratory of Houston Methodist Research Institute has developed a CASS protocol designed specifically for orthognathic surgery. The purpose of this article is to present an algorithm using virtual tools for planning a double-jaw orthognathic operation. This paper will serve as an operation manual for surgeons wanting to incorporate CASS into their clinical practice. PMID:26573562

  5. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Directory of Open Access Journals (Sweden)

    Shane Ó Conchúir

    Full Text Available The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  6. Why standard brain-computer interface (BCI) training protocols should be changed: an experimental study

    Science.gov (United States)

    Jeunet, Camille; Jahanpour, Emilie; Lotte, Fabien

    2016-06-01

    Objective. While promising, electroencephalography-based brain-computer interfaces (BCIs) are barely used due to their lack of reliability: 15% to 30% of users are unable to control a BCI. Standard training protocols may be partly responsible, as they do not satisfy recommendations from psychology. Our main objective was to determine in practice to what extent standard training protocols impact users' motor imagery based BCI (MI-BCI) control performance. Approach. We performed two experiments. The first consisted of evaluating the efficiency of a standard BCI training protocol for the acquisition of non-BCI related skills in a BCI-free context, which enabled us to rule out the possible impact of BCIs on the training outcome. Thus, participants (N = 54) were asked to perform simple motor tasks. The second experiment was aimed at measuring the correlations between motor tasks and MI-BCI performance. The ten best and ten worst performers of the first study were recruited for an MI-BCI experiment during which they had to learn to perform two MI tasks. We also assessed users' spatial ability and pre-training μ rhythm amplitude, as both have been related to MI-BCI performance in the literature. Main results. Around 17% of the participants were unable to learn to perform the motor tasks, which is close to the BCI illiteracy rate. This suggests that standard training protocols are suboptimal for skill teaching. No correlation was found between motor tasks and MI-BCI performance. However, spatial ability played an important role in MI-BCI performance. In addition, once the spatial ability covariable had been controlled for, using an ANCOVA, it appeared that participants who faced difficulty during the first experiment improved during the second while the others did not. Significance. These studies suggest that (1) standard MI-BCI training protocols are suboptimal for skill teaching, (2) spatial ability is confirmed as impacting on MI-BCI performance, and (3) users who face difficulty at first may nonetheless improve with further training.

  7. Numerical computer methods part D

    CERN Document Server

    Johnson, Michael L

    2004-01-01

    The aim of this volume is to brief researchers on the importance of data analysis in enzymology and on the modern methods that have developed concomitantly with computer hardware, and to help them validate their computer programs with real and synthetic data to ascertain that the results produced are what they expect. Selected Contents: Prediction of protein structure; modeling and studying proteins with molecular dynamics; statistical error in isothermal titration calorimetry; analysis of circular dichroism data; model comparison methods.

  8. Advanced computational electromagnetic methods and applications

    CERN Document Server

    Li, Wenxing; Elsherbeni, Atef; Rahmat-Samii, Yahya

    2015-01-01

    This new resource covers the latest developments in computational electromagnetic methods, with emphasis on cutting-edge applications. This book is designed to extend existing literature to the latest development in computational electromagnetic methods, which are of interest to readers in both academic and industrial areas. The topics include advanced techniques in MoM, FEM and FDTD, spectral domain method, GPU and Phi hardware acceleration, metamaterials, frequency and time domain integral equations, and statistics methods in bio-electromagnetics.

  9. A computational simulation of long-term synaptic potentiation inducing protocol processes with model of CA3 hippocampal microcircuit.

    Science.gov (United States)

    Świetlik, D; Białowąs, J; Kusiak, A; Cichońska, D

    2018-01-01

    An experimental study of a computational model of the CA3 region presents cognitive and behavioural functions of the hippocampus. The main property of the CA3 region is plastic recurrent connectivity, where the connections allow it to behave as an auto-associative memory. The computer simulations showed that the CA3 model performs efficient long-term synaptic potentiation (LTP) induction and a high rate of sub-millisecond coincidence detection. The average frequency of the CA3 pyramidal cell model was substantially higher in simulations with the LTP induction protocol than without it. The entropy of pyramidal cells with LTP was significantly higher than without the LTP induction protocol (p = 0.0001). There was a depression of entropy, caused by an increase of the forgetting coefficient in pyramidal cell simulations without LTP (R = -0.88, p = 0.0008), whereas no such correlation appeared in the LTP simulation (p = 0.4458). Our biologically inspired model of the CA3 hippocampal microcircuit helps in understanding neurophysiological data. (Folia Morphol 2018; 77, 2: 210-220).

  10. Method-centered digital communities on protocols.io for fast-paced scientific innovation [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Lori Kindler

    2017-06-01

    Full Text Available The Internet has enabled online social interaction for scientists beyond physical meetings and conferences. Yet despite these innovations in communication, dissemination of methods is often relegated to just academic publishing. Further, these methods remain static, with subsequent advances published elsewhere and unlinked. For communities undergoing fast-paced innovation, researchers need new capabilities to share, obtain feedback, and publish methods at the forefront of scientific development. For example, a renaissance in virology is now underway given the new metagenomic methods to sequence viral DNA directly from an environment. Metagenomics makes it possible to "see" natural viral communities that could not be previously studied through culturing methods. Yet, the knowledge of specialized techniques for the production and analysis of viral metagenomes remains in a subset of labs.  This problem is common to any community using and developing emerging technologies and techniques. We developed new capabilities to create virtual communities in protocols.io, an open access platform, for disseminating protocols and knowledge at the forefront of scientific development. To demonstrate these capabilities, we present a virology community forum called VERVENet. These new features allow virology researchers to share protocols and their annotations and optimizations, connect with the broader virtual community to share knowledge, job postings, conference announcements through a common online forum, and discover the current literature through personalized recommendations to promote discussion of cutting edge research. Virtual communities in protocols.io enhance a researcher's ability to: discuss and share protocols, connect with fellow community members, and learn about new and innovative research in the field.  The web-based software for developing virtual communities is free to use on protocols.io. Data are available through public APIs at protocols.io.

  11. A Survey of Automatic Protocol Reverse Engineering Approaches, Methods, and Tools on the Inputs and Outputs View

    OpenAIRE

    Baraka D. Sija; Young-Hoon Goo; Kyu-Seok Shim; Huru Hasanova; Myung-Sup Kim

    2018-01-01

    A network protocol defines rules that control communications between two or more machines on the Internet, whereas Automatic Protocol Reverse Engineering (APRE) defines the way of extracting the structure of a network protocol without accessing its specifications. Enough knowledge on undocumented protocols is essential for security purposes, network policy implementation, and management of network resources. This paper reviews and analyzes a total of 39 approaches, methods, and tools towards ...

  12. Playing With Population Protocols

    Directory of Open Access Journals (Sweden)

    Xavier Koegler

    2009-06-01

    Full Text Available Population protocols have been introduced as a model of sensor networks consisting of very limited mobile agents with no control over their own movement: A collection of anonymous agents, modeled by finite automata, interact in pairs according to some rules. Predicates on the initial configurations that can be computed by such protocols have been characterized under several hypotheses. We discuss here whether and when the rules of interactions between agents can be seen as a game from game theory. We do so by discussing several basic protocols.
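
    As an illustration of the model, the sketch below (hypothetical Python, not from the paper) simulates a one-bit population protocol: anonymous agents hold states in {0, 1}, a uniform random scheduler picks interacting pairs, and the OR rule stabilizes the population to all-ones exactly when the predicate "at least one input is 1" holds.

        import random

        def interact(u, v):
            # Transition rule delta(u, v) for the one-bit OR protocol:
            # both agents adopt the logical OR of their states.
            w = u | v
            return w, w

        def run_protocol(inputs, steps=10_000, seed=0):
            # Simulate random pairwise interactions under a uniform scheduler.
            rng = random.Random(seed)
            agents = list(inputs)
            n = len(agents)
            for _ in range(steps):
                i, j = rng.sample(range(n), 2)  # pick two distinct agents
                agents[i], agents[j] = interact(agents[i], agents[j])
            return agents

        # The predicate "at least one input is 1" stabilizes to all-ones.
        print(run_protocol([0, 0, 1, 0, 0]))  # -> [1, 1, 1, 1, 1]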

  13. Computational Methods in Plasma Physics

    CERN Document Server

    Jardin, Stephen

    2010-01-01

    Assuming no prior knowledge of plasma physics or numerical methods, Computational Methods in Plasma Physics covers the computational mathematics and techniques needed to simulate magnetically confined plasmas in modern magnetic fusion experiments and future magnetic fusion reactors. Largely self-contained, the text presents the basic concepts necessary for the numerical solution of partial differential equations. Along with discussing numerical stability and accuracy, the author explores many of the algorithms used today in enough depth so that readers can analyze their stability, efficiency,

  14. A Pattern Language for Designing Application-Level Communication Protocols and the Improvement of Computer Science Education through Cloud Computing

    OpenAIRE

    Lascano, Jorge Edison

    2017-01-01

    Networking protocols have been developed throughout time following layered architectures such as the Open Systems Interconnection model and the Internet model. These protocols are grouped in the Internet protocol suite. Most developers do not deal with low-level protocols; instead, they design application-level protocols on top of the low-level ones. Although each application-level protocol is different, there is commonality among them and developers can apply lessons learned from one prot...

  15. A Survey of Automatic Protocol Reverse Engineering Approaches, Methods, and Tools on the Inputs and Outputs View

    Directory of Open Access Journals (Sweden)

    Baraka D. Sija

    2018-01-01

    Full Text Available A network protocol defines rules that control communications between two or more machines on the Internet, whereas Automatic Protocol Reverse Engineering (APRE) defines the way of extracting the structure of a network protocol without accessing its specifications. Enough knowledge on undocumented protocols is essential for security purposes, network policy implementation, and management of network resources. This paper reviews and analyzes a total of 39 approaches, methods, and tools towards Protocol Reverse Engineering (PRE) and classifies them into four divisions: approaches that reverse engineer protocol finite state machines, protocol formats, or both protocol finite state machines and protocol formats, through to approaches that focus directly on neither reverse engineering protocol formats nor protocol finite state machines. The efficiency of all approaches’ outputs based on their selected inputs is analyzed in general, along with appropriate reverse engineering input formats. Additionally, we present discussion and extended classification in terms of automated to manual approaches, known and novel categories of reverse engineered protocols, and a literature of reverse engineered protocols in relation to the seven-layer OSI (Open Systems Interconnection) model.

  16. Efficient Computational Research Protocol to Survey Free Energy Surface for Solution Chemical Reaction in the QM/MM Framework: The FEG-ER Methodology and Its Application to Isomerization Reaction of Glycine in Aqueous Solution.

    Science.gov (United States)

    Takenaka, Norio; Kitamura, Yukichi; Nagaoka, Masataka

    2016-03-03

    In solution chemical reaction, we often need to consider a multidimensional free energy (FE) surface (FES), which is analogous to a Born-Oppenheimer potential energy surface. To survey the FES, an efficient computational research protocol is proposed within the QM/MM framework: (i) we first obtain the stable states (or transition states) involved, by optimizing their structures on the FES in a stepwise fashion, finally using the free energy gradient (FEG) method; and then (ii) we directly obtain the FE differences among any arbitrary states on the FES, efficiently by employing the QM/MM method with energy representation (ER), i.e., the QM/MM-ER method. To validate the calculation accuracy and efficiency, we applied the above FEG-ER methodology to a typical isomerization reaction of glycine in aqueous solution, and reproduced quite satisfactorily the experimental value of the reaction FE. Further, it was found that the structural relaxation of the solute in the QM/MM force field is not negligible for estimating the FES correctly. We believe that the present research protocol should become prevalent as a computational strategy and will play a promising and important role in solution chemistry toward solution reaction ergodography.

  17. Computational efficiency for the surface renewal method

    Science.gov (United States)

    Kelley, Jason; Higgins, Chad

    2018-04-01

    Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and were tested for sensitivity to the length of the flux averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal processing techniques and algebraic simplifications that demonstrate simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. The increased speed of computation grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.
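
    The abstract does not reproduce the authors' algorithms, but the flavor of the computational bottleneck can be sketched: many SR implementations repeatedly evaluate high-order structure functions of a 10 Hz scalar series over a range of lag timescales, a step that vectorizes naturally. The following Python is an illustrative sketch with synthetic data, not the paper's code; function and variable names are invented.

        import numpy as np

        def structure_functions(series, max_lag, orders=(2, 3, 5)):
            # S_n(r) = mean of (T(t) - T(t - r))**n over the record,
            # computed for every lag r = 1..max_lag samples.
            series = np.asarray(series, dtype=float)
            out = {n: np.empty(max_lag) for n in orders}
            for r in range(1, max_lag + 1):
                d = series[r:] - series[:-r]  # all lag-r differences at once
                for n in orders:
                    out[n][r - 1] = np.mean(d ** n)
            return out

        rng = np.random.default_rng(1)
        temperature = np.cumsum(rng.normal(0.0, 0.05, 18_000))  # stand-in for 30 min at 10 Hz
        S = structure_functions(temperature, max_lag=100)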

  18. Study on Cloud Security Based on Trust Spanning Tree Protocol

    Science.gov (United States)

    Lai, Yingxu; Liu, Zenghui; Pan, Qiuyue; Liu, Jing

    2015-09-01

    Attacks executed on the Spanning Tree Protocol (STP) expose the weakness of link layer protocols and put the higher layers in jeopardy. Although the problems have been studied for many years and various solutions have been proposed, many security issues remain. To enhance the security and credibility of layer-2 networks, we propose a trust-based spanning tree protocol aiming at achieving higher credibility of LAN switches with a simple and lightweight authentication mechanism. If correctly implemented in each trusted switch, the authentication of trust-based STP can guarantee the credibility of the topology information announced to the other switches in the LAN. To verify the enforcement of the trusted protocol, we present a new trust evaluation method for the STP using a specification-based state model. We implement a prototype of trust-based STP to investigate its practicality. Experiments show that the trusted protocol can achieve its security goals and effectively avoid STP attacks with low computation overhead and good convergence performance.

  19. Estimation of the Thurstonian model for the 2-AC protocol

    DEFF Research Database (Denmark)

    Christensen, Rune Haubo Bojesen; Lee, Hye-Seong; Brockhoff, Per B.

    2012-01-01

    The 2-AC protocol is a 2-AFC protocol with a “no-difference” option and is technically identical to the paired preference test with a “no-preference” option. The Thurstonian model for the 2-AC protocol is parameterized by δ and a decision parameter τ, the estimates of which can be obtained by fairly simple well-known methods. In this paper we describe how standard errors of the parameters can be obtained and how exact power computations can be performed. We also show how the Thurstonian model for the 2-AC protocol is closely related to a statistical model known as a cumulative probit model. This relationship makes it possible to extract estimates and standard errors of δ and τ from general statistical software, and furthermore, it makes it possible to combine standard regression modelling with the Thurstonian model for the 2-AC protocol. A model for replicated 2-AC data is proposed using cumulative...
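
    To make the "fairly simple well-known methods" concrete, here is a hedged sketch of moment-type estimation for one common parameterization of the model (latent difference normally distributed with mean δ and standard deviation √2; answers split at thresholds ±τ). Sign conventions differ between sources, so treat the formulas as illustrative rather than as the paper's exact parameterization.

        from math import sqrt
        from scipy.stats import norm

        def thurstone_2ac(n_a, n_nd, n_b):
            # Invert P(A) = Phi((-tau - delta)/sqrt(2)) and
            # P(A or no-difference) = Phi((tau - delta)/sqrt(2)).
            n = n_a + n_nd + n_b
            z1 = norm.ppf(n_a / n)
            z2 = norm.ppf((n_a + n_nd) / n)
            delta = -(z1 + z2) / sqrt(2)
            tau = (z2 - z1) / sqrt(2)
            return delta, tau

        # 100 judgements: 10 "A", 30 "no difference", 60 "B"
        print(thurstone_2ac(10, 30, 60))  # delta ~ 1.09, tau ~ 0.73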

  20. Performance comparison of secure comparison protocols

    NARCIS (Netherlands)

    Kerschbaum, F.; Biswas, D.; Hoogh, de S.J.A.

    2009-01-01

    Secure multiparty computation (SMC) has gained tremendous importance with the growth of the Internet and e-commerce, where mutually untrusted parties need to jointly compute a function of their private inputs. However, SMC protocols usually have very high computational complexities, rendering them

  1. Computational techniques of the simplex method

    CERN Document Server

    Maros, István

    2003-01-01

    Computational Techniques of the Simplex Method is a systematic treatment focused on the computational issues of the simplex method. It provides a comprehensive coverage of the most important and successful algorithmic and implementation techniques of the simplex method. It is a unique source of essential, never discussed details of algorithmic elements and their implementation. On the basis of the book the reader will be able to create a highly advanced implementation of the simplex method which, in turn, can be used directly or as a building block in other solution algorithms.
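
    For readers who want to experiment alongside the book, a small linear program can be handed to an off-the-shelf simplex-family solver; the sketch below uses SciPy's HiGHS dual-simplex method (a choice made for this example, not a tool discussed in the book).

        from scipy.optimize import linprog

        # maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0;
        # linprog minimizes, so the objective is negated.
        res = linprog(c=[-3, -2],
                      A_ub=[[1, 1], [1, 3]],
                      b_ub=[4, 6],
                      bounds=[(0, None), (0, None)],
                      method="highs-ds")  # HiGHS dual simplex
        print(res.x, -res.fun)  # optimal vertex [4, 0], objective 12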

  2. Cloud Computing as Evolution of Distributed Computing – A Case Study for SlapOS Distributed Cloud Computing Platform

    Directory of Open Access Journals (Sweden)

    George SUCIU

    2013-01-01

    Full Text Available The cloud computing paradigm has been defined from several points of view, the main two directions being either as an evolution of the grid and distributed computing paradigm, or, on the contrary, as a disruptive revolution in the classical paradigms of operating systems, network layers and web applications. This paper presents a distributed cloud computing platform called SlapOS, which unifies technologies and communication protocols into a new technology model for offering any application as a service. Both cloud and distributed computing can be efficient methods for optimizing resources that are aggregated from a grid of standard PCs hosted in homes, offices and small data centers. The paper fills a gap in the existing distributed computing literature by providing a distributed cloud computing model which can be applied for deploying various applications.

  3. Computational methods for reversed-field equilibrium

    International Nuclear Information System (INIS)

    Boyd, J.K.; Auerbach, S.P.; Willmann, P.A.; Berk, H.L.; McNamara, B.

    1980-01-01

    Investigating the temporal evolution of reversed-field equilibrium caused by transport processes requires the solution of the Grad-Shafranov equation and computation of field-line-averaged quantities. The technique for field-line averaging and the computation of the Grad-Shafranov equation are presented. Application of Green's function to specify the Grad-Shafranov equation boundary condition is discussed. Hill's vortex formulas used to verify certain computations are detailed. Use of computer software to implement computational methods is described

  4. Advanced scientific computational methods and their applications to nuclear technologies. (4) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (4)

    International Nuclear Information System (INIS)

    Sekimura, Naoto; Okita, Taira

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of weft connecting each realm of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies was prepared in serial form. This is the fourth issue, showing the overview of scientific computational methods with the introduction of continuum simulation methods and their applications. Simulation methods for physical radiation effects on materials are reviewed, covering such processes as binary collision approximation, molecular dynamics, the kinetic Monte Carlo method, the reaction rate method and dislocation dynamics. (T. Tanaka)

  5. Zonal methods and computational fluid dynamics

    International Nuclear Information System (INIS)

    Atta, E.H.

    1985-01-01

    Recent advances in developing numerical algorithms for solving fluid flow problems, and the continuing improvement in the speed and storage of large-scale computers, have made it feasible to compute the flow field about complex and realistic configurations. Current solution methods involve the use of a hierarchy of mathematical models ranging from the linearized potential equation to the Navier-Stokes equations. Because of the increasing complexity of both the geometries and flowfields encountered in practical fluid flow simulation, there is a growing emphasis in computational fluid dynamics on the use of zonal methods. A zonal method is one that subdivides the total flow region into interconnected smaller regions or zones. The flow solutions in these zones are then patched together to establish the global flow field solution. Zonal methods are primarily used either to limit the complexity of the governing flow equations to a localized region or to alleviate the grid generation problems about geometrically complex and multicomponent configurations. This paper surveys the application of zonal methods for solving the flow field about two- and three-dimensional configurations. Various factors affecting their accuracy and ease of implementation are also discussed. From the presented review it is concluded that zonal methods promise to be very effective for computing complex flowfields and configurations. Currently there are increasing efforts to improve their efficiency, versatility, and accuracy.

  6. Impact of reduced-radiation dual-energy protocols using 320-detector row computed tomography for analyzing urinary calculus components: initial in vitro evaluation.

    Science.gov (United States)

    Cai, Xiangran; Zhou, Qingchun; Yu, Juan; Xian, Zhaohui; Feng, Youzhen; Yang, Wencai; Mo, Xukai

    2014-10-01

    To evaluate the impact of reduced-radiation dual-energy (DE) protocols using 320-detector row computed tomography on the differentiation of urinary calculus components. A total of 58 urinary calculi were placed into the same phantom and underwent DE scanning with 320-detector row computed tomography. Each calculus was scanned 4 times with the DE protocols using 135 kV and 80 kV tube voltage and different tube current combinations, including 100 mA and 570 mA (group A), 50 mA and 290 mA (group B), 30 mA and 170 mA (group C), and 10 mA and 60 mA (group D). The acquisition data of all 4 groups were then analyzed by stone DE analysis software, and the results were compared with x-ray diffraction analysis. Noise, contrast-to-noise ratio, and radiation dose were compared. Calculi were correctly identified in 56 of 58 stones (96.6%) using group A and B protocols. However, only 35 stones (60.3%) and 16 stones (27.6%) were correctly diagnosed using group C and D protocols, respectively. Mean noise increased significantly and mean contrast-to-noise ratio decreased significantly from groups A to D. The reduced-radiation group B protocol thus allowed accurate calculus component analysis while reducing patient radiation exposure to 1.81 mSv. Further reduction of tube currents may compromise diagnostic accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Computational and mathematical methods in brain atlasing.

    Science.gov (United States)

    Nowinski, Wieslaw L

    2017-12-01

    Brain atlases have a wide range of use from education to research to clinical applications. Mathematical methods as well as computational methods and tools play a major role in the process of brain atlas building and developing atlas-based applications. Computational methods and tools cover three areas: dedicated editors for brain model creation, brain navigators supporting multiple platforms, and atlas-assisted specific applications. Mathematical methods in atlas building and developing atlas-aided applications deal with problems in image segmentation, geometric body modelling, physical modelling, atlas-to-scan registration, visualisation, interaction and virtual reality. Here I overview computational and mathematical methods in atlas building and developing atlas-assisted applications, and share my contribution to and experience in this field.

  8. Computational methods in power system analysis

    CERN Document Server

    Idema, Reijer

    2014-01-01

    This book treats state-of-the-art computational methods for power flow studies and contingency analysis. In the first part the authors present the relevant computational methods and mathematical concepts. In the second part, power flow and contingency analysis are treated. Furthermore, traditional methods to solve such problems are compared to modern solvers, developed using the knowledge of the first part of the book. Finally, these solvers are analyzed both theoretically and experimentally, clearly showing the benefits of the modern approach.

  9. Empirical evaluation methods in computer vision

    CERN Document Server

    Christensen, Henrik I

    2002-01-01

    This book provides comprehensive coverage of methods for the empirical evaluation of computer vision techniques. The practical use of computer vision requires empirical evaluation to ensure that the overall system has a guaranteed performance. The book contains articles that cover the design of experiments for evaluation, range image segmentation, the evaluation of face recognition and diffusion methods, image matching using correlation methods, and the performance of medical image processing algorithms.

  10. Tensor network method for reversible classical computation

    Science.gov (United States)

    Yang, Zhi-Cheng; Kourtis, Stefanos; Chamon, Claudio; Mucciolo, Eduardo R.; Ruckenstein, Andrei E.

    2018-03-01

    We develop a tensor network technique that can solve universal reversible classical computational problems, formulated as vertex models on a square lattice [Nat. Commun. 8, 15303 (2017), 10.1038/ncomms15303]. By encoding the truth table of each vertex constraint in a tensor, the total number of solutions compatible with partial inputs and outputs at the boundary can be represented as the full contraction of a tensor network. We introduce an iterative compression-decimation (ICD) scheme that performs this contraction efficiently. The ICD algorithm first propagates local constraints to longer ranges via repeated contraction-decomposition sweeps over all lattice bonds, thus achieving compression on a given length scale. It then decimates the lattice via coarse-graining tensor contractions. Repeated iterations of these two steps gradually collapse the tensor network and ultimately yield the exact tensor trace for large systems, without the need for manual control of tensor dimensions. Our protocol allows us to obtain the exact number of solutions for computations where a naive enumeration would take astronomically long times.
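
    The ICD sweeps are beyond a short example, but the core encoding idea (write each vertex constraint's truth table as a tensor, then contract the network to count compatible assignments) fits in a few lines of illustrative NumPy:

        import numpy as np

        # Truth table of a 2-in/1-out AND vertex as a (2, 2, 2) tensor:
        # T[x, y, z] = 1 iff z = x AND y.
        T = np.zeros((2, 2, 2))
        for x in (0, 1):
            for y in (0, 1):
                T[x, y, x & y] = 1.0

        free = np.ones(2)            # sum over an unconstrained boundary leg
        one = np.array([0.0, 1.0])   # clamp a boundary leg to the value 1

        # Number of inputs (x, y) compatible with output z = 1:
        print(np.einsum("xyz,x,y,z->", T, free, free, one))  # 1.0

        # Chain two ANDs, w = (x AND y) AND u, and clamp w = 0:
        print(np.einsum("xyz,zuw,x,y,u,w->", T, T, free, free, free, 1 - one))  # 7.0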

  11. Automated Verification of Quantum Protocols using MCMAS

    Directory of Open Access Journals (Sweden)

    F. Belardinelli

    2012-07-01

    Full Text Available We present a methodology for the automated verification of quantum protocols using MCMAS, a symbolic model checker for multi-agent systems. The method is based on the logical framework developed by D'Hondt and Panangaden for investigating epistemic and temporal properties, built on the model for Distributed Measurement-based Quantum Computation (DMC), an extension of the Measurement Calculus to distributed quantum systems. We describe the translation map from DMC to interpreted systems, the typical formalism for reasoning about time and knowledge in multi-agent systems. Then, we introduce dmc2ispl, a compiler into the input language of the MCMAS model checker. We demonstrate the technique by verifying the Quantum Teleportation Protocol, and discuss the performance of the tool.

  12. Methods for CT automatic exposure control protocol translation between scanner platforms.

    Science.gov (United States)

    McKenney, Sarah E; Seibert, J Anthony; Lamba, Ramit; Boone, John M

    2014-03-01

    An imaging facility with a diverse fleet of CT scanners faces considerable challenges when propagating CT protocols with consistent image quality and patient dose across scanner makes and models. Although some protocol parameters can comfortably remain constant among scanners (eg, tube voltage, gantry rotation time), the automatic exposure control (AEC) parameter, which selects the overall mA level during tube current modulation, is difficult to match among scanners, especially from different CT manufacturers. Objective methods for converting tube current modulation protocols among CT scanners were developed. Three CT scanners were investigated, a GE LightSpeed 16 scanner, a GE VCT scanner, and a Siemens Definition AS+ scanner. Translation of the AEC parameters such as noise index and quality reference mAs across CT scanners was specifically investigated. A variable-diameter poly(methyl methacrylate) phantom was imaged on the 3 scanners using a range of AEC parameters for each scanner. The phantom consisted of 5 cylindrical sections with diameters of 13, 16, 20, 25, and 32 cm. The protocol translation scheme was based on matching either the volumetric CT dose index or image noise (in Hounsfield units) between two different CT scanners. A series of analytic fit functions, corresponding to different patient sizes (phantom diameters), were developed from the measured CT data. These functions relate the AEC metric of the reference scanner, the GE LightSpeed 16 in this case, to the AEC metric of a secondary scanner. When translating protocols between different models of CT scanners (from the GE LightSpeed 16 reference scanner to the GE VCT system), the translation functions were linear. However, a power-law function was necessary to convert the AEC functions of the GE LightSpeed 16 reference scanner to the Siemens Definition AS+ secondary scanner, because of differences in the AEC functionality designed by these two companies. Protocol translation on the basis of
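
    A minimal sketch of the rating/curve-fitting step described above, with invented numbers standing in for matched AEC settings measured on the phantom (the linear form for the same-vendor case and the power law for the cross-vendor case):

        import numpy as np

        # Hypothetical paired settings giving matched dose/noise on one
        # phantom section: reference-scanner noise index vs. secondary-
        # scanner quality reference mAs. Illustrative values only.
        noise_index = np.array([8.0, 10.0, 12.0, 14.0, 16.0])
        quality_ref_mas = np.array([310.0, 205.0, 148.0, 112.0, 89.0])

        # Linear translation: y = a * x + b
        a, b = np.polyfit(noise_index, quality_ref_mas, 1)

        # Power-law translation: y = c * x**k, fitted as a line in log-log space
        k, log_c = np.polyfit(np.log(noise_index), np.log(quality_ref_mas), 1)
        c = np.exp(log_c)

        print(f"linear:    y = {a:.1f} x + {b:.1f}")
        print(f"power law: y = {c:.1f} x^{k:.2f}")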

  13. Chapter 3: Commercial and Industrial Lighting Controls Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Carlson, Stephen [DNV GL, Madison, WI (United States)

    2017-10-04

    This Commercial and Industrial Lighting Controls Evaluation Protocol (the protocol) describes methods to account for energy savings resulting from programmatic installation of lighting control equipment in large populations of commercial, industrial, government, institutional, and other nonresidential facilities. This protocol does not address savings resulting from changes in codes and standards, or from education and training activities. When lighting controls are installed in conjunction with a lighting retrofit project, the lighting control savings must be calculated parametrically with the lighting retrofit project so savings are not double counted.

  14. Authentication Protocol using Quantum Superposition States

    Energy Technology Data Exchange (ETDEWEB)

    Kanamori, Yoshito [University of Alaska; Yoo, Seong-Moo [University of Alabama, Huntsville; Gregory, Don A. [University of Alabama, Huntsville; Sheldon, Frederick T [ORNL

    2009-01-01

    When it became known that quantum computers could break the RSA (named for its creators - Rivest, Shamir, and Adleman) encryption algorithm in polynomial time, quantum cryptography began to be actively studied. Other classical cryptographic algorithms are only secure when malicious users do not have sufficient computational power to break security within a practical amount of time. Recently, many quantum authentication protocols sharing quantum entangled particles between communicators have been proposed, providing unconditional security. An issue caused by sharing quantum entangled particles is that it may not be simple to apply these protocols to authenticate a specific user in a group of many users. An authentication protocol using quantum superposition states instead of quantum entangled particles is proposed. The random number shared between a sender and a receiver can be used for classical encryption after the authentication has succeeded. The proposed protocol can be implemented with the current technologies we introduce in this paper.

  15. Exploring two methods of usability testing: concurrent versus retrospective think-aloud protocols

    NARCIS (Netherlands)

    van den Haak, M.J.; de Jong, Menno D.T.

    2003-01-01

    Think-aloud protocols are commonly used for the usability testing of instructional documents, Web sites and interfaces. This paper addresses the benefits and drawbacks of two think-aloud variations: the traditional concurrent think-aloud method and the less familiar retrospective think-aloud

  16. Exploring Two Methods of Usability Testing : Concurrent versus Retrospective Think-Aloud Protocols

    NARCIS (Netherlands)

    Van den Haak, Maaike J.; De Jong, Menno D. T.

    2003-01-01

    Think-aloud protocols are commonly used for the usability testing of instructional documents, web sites and interfaces. This paper addresses the benefits and drawbacks of two think-aloud variations: the traditional concurrent think-aloud method and the less familiar retrospective think-aloud

  17. A Forward-secure Grouping-proof Protocol for Multiple RFID Tags

    Directory of Open Access Journals (Sweden)

    Liu Ya-li

    2012-09-01

    Full Text Available Designing secure and robust grouping-proof protocols based on RFID characteristics has become a hotspot in research on security in the Internet of Things (IOT). Recently proposed grouping-proof protocols have security and/or privacy omissions, and these schemes introduce order-dependence by relaying messages among tags through an RFID reader. In consequence, aiming at enhancing robustness, improving scalability, reducing the computation costs on resource-constrained devices, and meanwhile combining Computational Intelligence (CI) with Secure Multi-party Communication (SMC), a Forward-Secure Grouping-Proof Protocol (FSGP) for multiple RFID tags based on Shamir's secret sharing is proposed. In comparison with previous grouping-proof protocols, FSGP has the characteristics of forward security and order-independence, addressing the scalability issue by avoiding relaying messages. Our protocol provides security enhancement and performance improvement, and meanwhile controls the computation cost, balancing both security and low-cost requirements for RFID tags.
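
    FSGP's message flow is not reproduced here, but the threshold secret-sharing primitive it builds on can be sketched. The following toy Shamir scheme over a prime field is illustrative only (parameters and function names are invented, and it omits everything RFID-specific):

        import random

        P = 2_147_483_647  # prime modulus (2**31 - 1); shares live in GF(P)

        def make_shares(secret, t, n, rng=random):
            # Split `secret` into n shares; any t of them reconstruct it.
            coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
            def f(x):
                y = 0
                for c in reversed(coeffs):  # Horner evaluation of the polynomial
                    y = (y * x + c) % P
                return y
            return [(x, f(x)) for x in range(1, n + 1)]

        def reconstruct(shares):
            # Lagrange interpolation at x = 0 over GF(P).
            secret = 0
            for i, (xi, yi) in enumerate(shares):
                num = den = 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = num * (-xj) % P
                        den = den * (xi - xj) % P
                secret = (secret + yi * num * pow(den, -1, P)) % P
            return secret

        shares = make_shares(secret=123456789, t=3, n=5)
        print(reconstruct(shares[:3]))  # any 3 of the 5 shares suffice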

  18. BioBlocks: Programming Protocols in Biology Made Easier.

    Science.gov (United States)

    Gupta, Vishal; Irimia, Jesús; Pau, Iván; Rodríguez-Patón, Alfonso

    2017-07-21

    The methods to execute biological experiments are evolving. Affordable fluid handling robots and on-demand biology enterprises are making automating entire experiments a reality. Automation offers the benefit of high-throughput experimentation, rapid prototyping, and improved reproducibility of results. However, learning to automate and codify experiments is a difficult task as it requires programming expertise. Here, we present a web-based visual development environment called BioBlocks for describing experimental protocols in biology. It is based on Google's Blockly and Scratch, and requires little or no experience in computer programming to automate the execution of experiments. The experiments can be specified, saved, modified, and shared between multiple users in an easy manner. BioBlocks is open-source and can be customized to execute protocols on local robotic platforms or remotely, that is, in the cloud. It aims to serve as a de facto open standard for programming protocols in Biology.

  19. Computational methods in earthquake engineering

    CERN Document Server

    Plevris, Vagelis; Lagaros, Nikos

    2017-01-01

    This is the third book in a series on Computational Methods in Earthquake Engineering. The purpose of this volume is to bring together the scientific communities of Computational Mechanics and Structural Dynamics, offering a wide coverage of timely issues on contemporary Earthquake Engineering. This volume will facilitate the exchange of ideas in topics of mutual interest and can serve as a platform for establishing links between research groups with complementary activities. The computational aspects are emphasized in order to address difficult engineering problems of great social and economic importance. .

  20. Mouse cell culture - Methods and protocols

    Directory of Open Access Journals (Sweden)

    CarloAlberto Redi

    2010-12-01

    Full Text Available The mouse is, beyond any doubt, the experimental animal par excellence for many colleagues within the scientific community, notably for those working in mammalian biology (in a broad sense, from basic genetics to the modeling of human diseases), going back at least to Robert Hooke's 1664 experiments on the properties of air. Not surprising, then, that mouse cell culture is a well-established field of research in itself and that there are several handbooks devoted to this discipline. Here, Andrew Ward and David Tosh provide a necessary update of the protocols currently needed. In fact, nearly half of the book is devoted to stem cell culture protocols, mainly embryonic, for a list of several organs (kidney, lung, oesophagus and intestine, pancreas and liver, to mention some).

  1. Quantum protocols within Spekkens' toy model

    Science.gov (United States)

    Disilvestro, Leonardo; Markham, Damian

    2017-05-01

    Quantum mechanics is known to provide significant improvements in information processing tasks when compared to classical models. These advantages range from computational speedups to security improvements. A key question is where these advantages come from. The toy model developed by Spekkens [R. W. Spekkens, Phys. Rev. A 75, 032110 (2007), 10.1103/PhysRevA.75.032110] mimics many of the features of quantum mechanics, such as entanglement and no cloning, regarded as being important in this regard, despite being a local hidden variable theory. In this work, we study several protocols within Spekkens' toy model where we see it can also mimic the advantages and limitations shown in the quantum case. We first provide explicit proofs for the impossibility of toy bit commitment and the existence of a toy error correction protocol and consequent k -threshold secret sharing. Then, defining a toy computational model based on the quantum one-way computer, we prove the existence of blind and verified protocols. Importantly, these two last quantum protocols are known to achieve a better-than-classical security. Our results suggest that such quantum improvements need not arise from any Bell-type nonlocality or contextuality, but rather as a consequence of steering correlations.

  2. Computing discharge using the index velocity method

    Science.gov (United States)

    Levesque, Victor A.; Oberg, Kevin A.

    2012-01-01

    Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs) in 1997. Presently (2011), the index velocity method is being used to compute discharge records for approximately 470 gaging stations operated and maintained by the U.S. Geological Survey. The purpose of this report is to document and describe techniques for computing discharge records using the index velocity method. Computing discharge using the index velocity method differs from the traditional stage-discharge method by separating velocity and area into two ratings—the index velocity rating and the stage-area rating. The outputs from each of these ratings, mean channel velocity (V) and cross-sectional area (A), are then multiplied together to compute a discharge. For the index velocity method, V is a function of such parameters as streamwise velocity, stage, cross-stream velocity, and velocity head, and A is a function of stage and cross-section shape. The index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate when more than one specific discharge can be measured for a specific stage. After the ADVM is selected, installed, and configured, the stage-area rating and the index velocity rating must be developed. A standard cross section is identified and surveyed in order to develop the stage-area rating. The standard cross section should be surveyed every year for the first 3 years of operation and thereafter at a lesser frequency, depending on the susceptibility of the cross section to change. Periodic measurements of discharge are used to calibrate and validate the index rating for the range of conditions experienced at the gaging station. Data from discharge measurements, ADVMs, and stage sensors are compiled for index-rating analysis. Index ratings are developed by means of regression
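
    A schematic of the two-rating computation (the coefficients and stage-area table below are invented placeholders; at a real station both ratings are fitted from discharge measurements and a surveyed standard cross section, as the report describes):

        import numpy as np

        def discharge(v_index, stage, index_coef, area_table):
            # Q = V * A: V from a linear index rating V = b0 + b1 * v_index,
            # A from a stage-area rating, here a simple interpolation table.
            b0, b1 = index_coef
            v_mean = b0 + b1 * np.asarray(v_index)
            stages, areas = area_table
            area = np.interp(stage, stages, areas)
            return v_mean * area

        table = (np.array([1.0, 2.0, 3.0, 4.0]),         # stage, m
                 np.array([55.0, 120.0, 190.0, 265.0]))  # area, m^2
        print(discharge(v_index=[0.42, 0.55], stage=[2.3, 2.7],
                        index_coef=(0.05, 0.92), area_table=table))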

  3. Analysis of a security protocol in μCRL

    NARCIS (Netherlands)

    J. Pang

    2002-01-01

    Needham-Schroeder public-key protocol. With the growth and commercialization of the Internet, the security of communication between computers becomes a crucial point. A variety of security protocols based on cryptographic primitives are used to establish secure communication over

  4. The Simplest Protocol for Oblivious Transfer

    DEFF Research Database (Denmark)

    Chou, Tung; Orlandi, Claudio

    2015-01-01

    Oblivious Transfer (OT) is the fundamental building block of cryptographic protocols. In this paper we describe the simplest and most efficient protocol for 1-out-of-n OT to date, which is obtained by tweaking the Diffie-Hellman key-exchange protocol. The protocol achieves UC-security against active and adaptive corruptions in the random oracle model. Due to its simplicity, the protocol is extremely efficient and it allows one to perform m 1-out-of-n OTs using only: Computation: (n+1)m+2 exponentiations (mn for the receiver, mn+2 for the sender); and Communication: 32(m+1) bytes (for the group ...). Our implementation (with the described optimizations) is at least one order of magnitude faster than previous work. Category / Keywords: cryptographic protocols / Oblivious Transfer, UC Security, Elliptic Curves, Efficient Implementation
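
    A toy sketch of the Diffie-Hellman tweak for the 1-out-of-2 case, with deliberately small, insecure parameters and none of the UC machinery (the group choice, hashing and variable names are assumptions of this illustration, not the paper's implementation):

        import hashlib
        import secrets

        p = 2**127 - 1  # toy prime modulus -- far too small for real use
        g = 3           # fixed base

        def H(x: int) -> bytes:
            return hashlib.sha256(x.to_bytes(16, "big")).digest()

        # Sender: one message, independent of the receiver's choice.
        a = secrets.randbelow(p - 1) + 1
        A = pow(g, a, p)

        # Receiver with choice bit c: B = g^b if c == 0, else A * g^b.
        c = 1
        b = secrets.randbelow(p - 1) + 1
        B = pow(g, b, p) if c == 0 else (A * pow(g, b, p)) % p

        # Sender derives one key per index i from (B / A^i)^a ...
        k0 = H(pow(B, a, p))
        k1 = H(pow((B * pow(A, -1, p)) % p, a, p))

        # ... while the receiver can derive only the key for its choice, H(A^b).
        assert H(pow(A, b, p)) == (k1 if c else k0)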

  5. Protocol Interoperability Between DDN and ISO (Defense Data Network and International Organization for Standardization) Protocols

    Science.gov (United States)

    1988-08-01

    ... services and protocols above the transport layer are usually implemented as user-callable utilities on the host computers, it is desirable to offer them ...

  6. Whatever works: a systematic user-centered training protocol to optimize brain-computer interfacing individually.

    Directory of Open Access Journals (Sweden)

    Elisabeth V C Friedrich

    Full Text Available This study implemented a systematic user-centered training protocol for a 4-class brain-computer interface (BCI). The goal was to optimize the BCI individually in order to achieve high performance within few sessions for all users. Eight able-bodied volunteers, who were initially naïve to the use of a BCI, participated in 10 sessions over a period of about 5 weeks. In an initial screening session, users were asked to perform the following seven mental tasks while multi-channel EEG was recorded: mental rotation, word association, auditory imagery, mental subtraction, spatial navigation, motor imagery of the left hand and motor imagery of both feet. Out of these seven mental tasks, the best 4-class combination as well as the most reactive frequency band (between 8-30 Hz) was selected individually for online control. Classification was based on common spatial patterns and Fisher's linear discriminant analysis. The number and time of classifier updates varied individually. Selection speed was increased by reducing trial length. To minimize differences in brain activity between sessions with and without feedback, sham feedback was provided in the screening and calibration runs in which usually no real-time feedback is shown. Selected task combinations and frequency ranges differed between users. The tasks that were included in the 4-class combination most often were (1) motor imagery of the left hand, (2) one brain-teaser task (word association or mental subtraction), (3) the mental rotation task and (4) one more dynamic imagery task (auditory imagery, spatial navigation, imagery of the feet). Participants achieved mean performances over sessions of 44-84% and peak performances in single sessions of 58-93% in this user-centered 4-class BCI protocol. This protocol is highly adjustable to individual users and thus could increase the percentage of users who can gain and maintain BCI control. A high priority for future work is to examine this protocol with severely
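
    The classification pipeline named in the abstract (common spatial patterns followed by Fisher's linear discriminant analysis) can be sketched on synthetic two-class data. This is a generic textbook-style CSP implementation for illustration, not the authors' code:

        import numpy as np
        from scipy.linalg import eigh
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def csp_filters(X1, X2, n_pairs=2):
            # X* hold EEG epochs shaped (n_epochs, n_channels, n_times).
            C1 = np.mean([np.cov(ep) for ep in X1], axis=0)
            C2 = np.mean([np.cov(ep) for ep in X2], axis=0)
            vals, vecs = eigh(C1, C1 + C2)  # generalized eigenproblem
            keep = np.r_[np.arange(n_pairs), np.arange(-n_pairs, 0)]
            return vecs[:, keep].T          # most discriminative spatial filters

        def log_var_features(W, X):
            filtered = np.einsum("fc,ect->eft", W, X)  # apply spatial filters
            return np.log(filtered.var(axis=2))

        rng = np.random.default_rng(0)
        X1 = rng.normal(size=(40, 8, 250)); X1[:, 0] *= 3  # class 1: channel 0 strong
        X2 = rng.normal(size=(40, 8, 250)); X2[:, 1] *= 3  # class 2: channel 1 strong
        W = csp_filters(X1, X2)
        X = np.vstack([log_var_features(W, X1), log_var_features(W, X2)])
        y = np.r_[np.zeros(40), np.ones(40)]
        print(LinearDiscriminantAnalysis().fit(X, y).score(X, y))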

  7. An improved method for preparing Agrobacterium cells that simplifies the Arabidopsis transformation protocol

    Directory of Open Access Journals (Sweden)

    Ülker Bekir

    2006-10-01

    Full Text Available Background: The Agrobacterium vacuum (Bechtold et al. 1993) and floral-dip (Clough and Bent 1998) are very efficient methods for generating transgenic Arabidopsis plants. These methods allow plant transformation without the need for tissue culture. Large volumes of bacterial cultures grown in liquid media are necessary for both of these transformation methods. This limits the number of transformations that can be done at a given time due to the need for expensive large shakers and limited space on them. Additionally, the bacterial colonies derived from solid media necessary for starting these liquid cultures often fail to grow in such large volumes. Therefore the optimum stage of plant material for transformation is often missed and new plant material needs to be grown. Results: To avoid problems associated with large bacterial liquid cultures, we investigated whether bacteria grown on plates are also suitable for plant transformation. We demonstrate here that bacteria grown on plates can be used with similar efficiency for transforming plants, even after one week of storage at 4°C. This makes it much easier to synchronize Agrobacterium and plants for transformation. DNA gel blot analysis was carried out on the T1 plants surviving the herbicide selection and demonstrated that the surviving plants are indeed transgenic. Conclusion: The simplified method works as efficiently as the previously reported protocols and significantly reduces the workload, cost and time. Additionally, the protocol reduces the risk of large scale contaminations involving GMOs. Most importantly, many more independent transformations per day can be performed using this modified protocol.

  8. Fibonacci’s Computation Methods vs Modern Algorithms

    Directory of Open Access Journals (Sweden)

    Ernesto Burattini

    2013-12-01

    Full Text Available In this paper we discuss some computational procedures given by Leonardo Pisano Fibonacci in his famous Liber Abaci, and we propose their translation into a modern computer language (C++). Among others, we describe the method of “cross” multiplication, evaluate its computational complexity in algorithmic terms, and show the output of a C++ code that traces the development of the method applied to the product of two integers. In a similar way we show the operations performed on fractions introduced by Fibonacci. The possibility of reproducing Fibonacci's different computational procedures on a computer made it possible to identify some calculation errors present in the different versions of the original text.
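
    The paper's translations are into C++; a compact Python rendering of the cross-multiplication idea (the k-th result digit gathers every product a_i * b_j with i + j = k, plus the carry) might look like this:

        def cross_multiply(a_digits, b_digits):
            # Digits are given least-significant first.
            n, m = len(a_digits), len(b_digits)
            result, carry = [], 0
            for k in range(n + m - 1):
                total = carry + sum(a_digits[i] * b_digits[k - i]
                                    for i in range(max(0, k - m + 1), min(k + 1, n)))
                result.append(total % 10)
                carry = total // 10
            while carry:
                result.append(carry % 10)
                carry //= 10
            return result

        # 48 * 36 = 1728
        print(cross_multiply([8, 4], [6, 3]))  # -> [8, 2, 7, 1]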

  9. New or improved computational methods and advanced reactor design

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki; Takeda, Toshikazu; Ushio, Tadashi

    1997-01-01

    Nuclear computational methods have been studied continuously up to the present day as a fundamental technology supporting nuclear development. Research on computational methods based on new theory, and on calculation methods once thought impractical, also continues actively, with new developments enabled by the remarkable improvement in computer performance. In Japan, many light water reactors are now in operation, new computational methods are being introduced for nuclear design, and much effort is concentrated on further improving economics and safety. In this paper, some new research results on nuclear computational methods and their application to reactor nuclear design are described, to introduce recent trends in reactor nuclear design: 1) advancement of computational methods, 2) reactor core design and management of light water reactors, and 3) nuclear design of fast reactors. (G.K.)

  10. Operating systems and network protocols for wireless sensor networks.

    Science.gov (United States)

    Dutta, Prabal; Dunkels, Adam

    2012-01-13

    Sensor network protocols exist to satisfy the communication needs of diverse applications, including data collection, event detection, target tracking and control. Network protocols to enable these services are constrained by the extreme resource scarcity of sensor nodes (energy, computing, communications and storage), which must be carefully managed and multiplexed by the operating system. These challenges have led to new protocols and operating systems that are efficient in their energy consumption, careful in their computational needs and miserly in their memory footprints, all while discovering neighbours, forming networks, delivering data and correcting failures.

  11. A Standard Mutual Authentication Protocol for Cloud Computing Based Health Care System.

    Science.gov (United States)

    Mohit, Prerna; Amin, Ruhul; Karati, Arijit; Biswas, G P; Khan, Muhammad Khurram

    2017-04-01

    Telecare Medical Information System (TMIS) supports a standard platform for the patient to get necessary medical treatment from the doctor(s) via Internet communication. Security protection is important for the medical records (data) of the patients, because they contain very sensitive information. Besides, patient anonymity is another important property that must be protected. Most recently, Chiou et al. suggested an authentication protocol for TMIS utilizing the concept of a cloud environment. They claimed that their protocol is patient-anonymous and well protected. We reviewed their protocol and found that it completely fails to provide patient anonymity. Further, the same protocol is not protected against the stolen mobile device attack. In order to improve the security level and complexity, we design a lightweight authentication protocol for the same environment. Our security analysis ensures resilience against all possible security attacks. The performance of our protocol is relatively standard in comparison with the related previous research.

  12. New Heterogeneous Clustering Protocol for Prolonging Wireless Sensor Networks Lifetime

    Directory of Open Access Journals (Sweden)

    Md. Golam Rashed

    2014-06-01

    Full Text Available Clustering in wireless sensor networks is one of the crucial methods for increasing network lifetime. The network characteristics of existing classical clustering protocols for wireless sensor networks are homogeneous, and these protocols fail to maintain the stability of the system, especially when nodes are heterogeneous. We have seen that the behavior of the Heterogeneous-Hierarchical Energy Aware Routing Protocol (H-HEARP) becomes very unstable once the first node dies, especially in the presence of node heterogeneity. In this paper we propose a new clustering protocol whose network characteristics are heterogeneous, for prolonging the network lifetime. The computer simulation results demonstrate that the proposed clustering algorithm outperforms other clustering algorithms in terms of the time interval before the death of the first node (which we refer to as the stability period). The simulation results also show the high performance of the proposed clustering algorithm for higher values of the extra energy brought by the more powerful nodes.

  13. Attacks on quantum key distribution protocols that employ non-ITS authentication

    Science.gov (United States)

    Pacher, C.; Abidin, A.; Lorünser, T.; Peev, M.; Ursin, R.; Zeilinger, A.; Larsson, J.-Å.

    2016-01-01

    We demonstrate how adversaries with large computing resources can break quantum key distribution (QKD) protocols which employ a particular message authentication code suggested previously. This authentication code, featuring low key consumption, is not information-theoretically secure (ITS), since for each message the eavesdropper has intercepted she is able to send a different message from a set of messages that she can calculate by finding collisions of a cryptographic hash function. However, when this authentication code was introduced, it was shown to prevent straightforward man-in-the-middle (MITM) attacks against QKD protocols. In this paper, we prove that the set of messages that collide with any given message under this authentication code contains, with high probability, a message that has small Hamming distance to any other given message. Based on this fact, we present extended MITM attacks against different versions of BB84 QKD protocols using the addressed authentication code; for three protocols, we describe every single action taken by the adversary. For all protocols, the adversary can obtain complete knowledge of the key, and for most protocols her success probability in doing so approaches unity. Since the attacks work against all authentication methods which allow the calculation of colliding messages, the underlying building blocks of the presented attacks expose the potential pitfalls arising as a consequence of non-ITS authentication in QKD post-processing. We propose countermeasures, increasing the eavesdropper's demand for computational power, and also prove necessary and sufficient conditions for upgrading the discussed authentication code to the ITS level.

  14. Cheater detection in SPDZ multiparty computation

    NARCIS (Netherlands)

    G. Spini (Gabriele); S. Fehr (Serge); A. Nascimento; P. Barreto

    2016-01-01

    In this work we revisit the SPDZ multiparty computation protocol by Damgård et al. for securely computing a function in the presence of an unbounded number of dishonest parties. The SPDZ protocol is distinguished by its fast performance. A downside of the SPDZ protocol is that one single

  15. Generalized routing protocols for multihop relay networks

    KAUST Repository

    Khan, Fahd Ahmed

    2011-07-01

    Performance of multihop cooperative networks depends on the routing protocols employed. In this paper we propose the last-n-hop selection protocol, the dual path protocol, the forward-backward last-n-hop selection protocol and the forward-backward dual path protocol for routing data through multihop relay networks. The average symbol error probability performance of the schemes is analysed by simulations. It is shown that close to optimal performance can be achieved by using the last-n-hop selection protocol and its forward-backward variant. Furthermore, we compute the complexity of the protocols in terms of the number of channel state information values required and the number of comparisons required for routing the signal through the network. © 2011 IEEE.

  16. Scanning protocol of dual-source computed tomography for aortic dissection

    International Nuclear Information System (INIS)

    Zhai Mingchun; Wang Yongmei

    2013-01-01

    Objective: To find a dual-source CT scanning protocol which can obtain high image quality with low radiation dose for the diagnosis of aortic dissection. Methods: A total of 120 patients with suspected aortic dissection were randomly and equally assigned into three groups. Patients in Group A underwent CTA with the prospectively electrocardiogram-gated high-pitch spiral mode (FLASH). Patients in Group B underwent CTA with the retrospectively electrocardiogram-gated spiral mode. Patients in Group C underwent CTA with the conventional, non-electrocardiogram-gated mode. The image quality, radiation dose, advantages and disadvantages of the three scan protocols were analyzed. Results: For image quality, seventeen, twenty-two and one patients in Group A were graded 1, 2 and 3, respectively, and none was graded 4; thirty-three and seven patients in Group B were graded 1 and 2, respectively, and none was graded 3 or 4; fourteen and twenty-six patients in Group C were graded 3 and 4, respectively, and none was graded 1 or 2. There was no significant difference between Groups A and B in image quality; image quality in Groups A and B was significantly higher than in Group C. Mean effective radiation doses of Groups A, B and C were 7.7±0.4 mSv, 33.11±3.38 mSv and 7.6±0.68 mSv, respectively. Group B was significantly higher than Groups A and C (P<0.05, P<0.05, respectively), and there was no significant difference between Groups A and C (P=0.826). Conclusions: The prospectively electrocardiogram-gated high-pitch spiral mode can be the first-line protocol for evaluation of aortic dissection. It can achieve high image quality with low radiation dose. The conventional non-gated mode can be selectively used for Stanford type B aortic dissection. (authors)

  17. Advanced Internet Protocols, Services, and Applications

    CERN Document Server

    Oki, Eiji; Tatipamula, Mallikarjun; Vogt, Christian

    2012-01-01

    Today, the internet and computer networking are essential parts of business, learning, and personal communications and entertainment. Virtually all messages or transactions sent over the internet are carried using internet infrastructure based on advanced internet protocols. Advanced internet protocols ensure that both public and private networks operate with maximum performance, security, and flexibility. This book is intended to provide a comprehensive technical overview and survey of advanced internet protocols, first providing a solid introduction and going on to discu

  18. Methods for the evaluation of hospital cooperation activities (Systematic review protocol

    Directory of Open Access Journals (Sweden)

    Rotter Thomas

    2012-02-01

    Full Text Available Abstract Background Hospital partnerships, mergers and cooperatives are arrangements frequently seen as a means of improving health service delivery. Many of the assumptions used in planning hospital cooperatives are not stated clearly and are often based on limited or poor scientific evidence. Methods This is a protocol for a systematic review, following the Cochrane EPOC methodology. The review aims to document, catalogue and synthesize the existing literature on the reported methods for the evaluation of hospital cooperation activities as well as methods of hospital cooperation. We will search the Database of Abstracts of Reviews of Effectiveness, the Effective Practice and Organisation of Care Register, the Cochrane Central Register of Controlled Trials and bibliographic databases including PubMed (via NLM), Web of Science, NHS EED, Business Source Premier (via EBSCO) and Global Health for publications that report on methods for evaluating hospital cooperatives, strategic partnerships, mergers, alliances, networks and related activities, and methods used for such partnerships. The method proposed by the Cochrane EPOC group regarding randomized study designs, controlled clinical trials, controlled before-and-after studies, and interrupted time series will be followed. In addition, we will also include cohort and case-control studies, and relevant non-comparative publications such as case reports. We will categorize and analyze the review findings according to the study design employed, the study quality (low versus high quality studies) and the method reported in the primary studies. We will present the results of studies in tabular form. Discussion Overall, the systematic review aims to identify, assess and synthesize the evidence to underpin hospital cooperation activities as defined in this protocol. As a result, the review will provide an evidence base for partnerships, alliances or other fields of cooperation in a hospital setting. PROSPERO

  19. Advanced scientific computational methods and their applications of nuclear technologies. (1) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (1)

    International Nuclear Information System (INIS)

    Oka, Yoshiaki; Okuda, Hiroshi

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting each realm of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies was therefore prepared in serial form. This is the first issue, giving an overview and an introduction to continuum simulation methods. The finite element method, as one of their applications, is also reviewed. (T. Tanaka)

  20. Scaling HEP to Web size with RESTful protocols: The frontier example

    International Nuclear Information System (INIS)

    Dykstra, Dave

    2011-01-01

    The World-Wide-Web has scaled to an enormous size. The largest single contributor to its scalability is the HTTP protocol, particularly when used in conformity to REST (REpresentational State Transfer) principles. High Energy Physics (HEP) computing also has to scale to an enormous size, so it makes sense to base much of it on RESTful protocols. Frontier, which reads databases with an HTTP-based RESTful protocol, has successfully scaled to deliver production detector conditions data from both the CMS and ATLAS LHC detectors to hundreds of thousands of computer cores worldwide. Frontier is also able to re-use a large amount of standard software that runs the Web: on the clients, caches, and servers. I discuss the specific ways in which HTTP and REST enable high scalability for Frontier. I also briefly discuss another protocol used in HEP computing that is HTTP-based and RESTful, and another protocol that could benefit from it. My goal is to encourage HEP protocol designers to consider HTTP and REST whenever the same information is needed in many places.
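
    A minimal sketch of the HTTP pattern credited here for Frontier's scalability: cacheable GETs with ETag revalidation, which lets intermediate squid-style caches absorb most of the load. The URL below is hypothetical; it is not the real Frontier endpoint or query syntax.

```python
import urllib.request

URL = "http://conditions.example.org/data?run=12345"  # hypothetical endpoint

# First fetch: the server answers with the payload and an ETag validator.
with urllib.request.urlopen(URL) as resp:
    payload = resp.read()
    etag = resp.headers.get("ETag")

# Revalidation: a cache between client and origin can answer this itself;
# an unchanged resource costs only a 304 response with headers, no body.
req = urllib.request.Request(URL, headers={"If-None-Match": etag or ""})
try:
    with urllib.request.urlopen(req) as resp:
        payload = resp.read()          # resource changed: new body delivered
except urllib.error.HTTPError as err:
    if err.code == 304:
        pass                           # cached copy is still valid
    else:
        raise
```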

  1. Mobile Internet Protocol Analysis

    National Research Council Canada - National Science Library

    Brachfeld, Lawrence

    1999-01-01

    ...) and User Datagram Protocol (UDP). Mobile IP allows mobile computers to send and receive packets addressed with their home network IP address, regardless of the IP address of their current point of attachment on the Internet...

  2. Water demand forecasting: review of soft computing methods.

    Science.gov (United States)

    Ghalehkhondabi, Iman; Ardjmand, Ehsan; Young, William A; Weckman, Gary R

    2017-07-01

    Demand forecasting plays a vital role in resource management for governments and private companies. Considering the scarcity of water and its inherent constraints, demand management and forecasting in this domain are critically important. Several soft computing techniques have been developed over the last few decades for water demand forecasting. This study focuses on soft computing methods of water consumption forecasting published between 2005 and 2015. These methods include artificial neural networks (ANNs), fuzzy and neuro-fuzzy models, support vector machines, metaheuristics, and system dynamics. While ANNs have been superior in many short-term forecasting cases, it is still very difficult to pick a single method as the overall best. According to the literature, various methods and their hybrids are applied to water demand forecasting. However, it seems soft computing has a lot more to contribute to water demand forecasting. These contribution areas include, but are not limited to, various ANN architectures, unsupervised methods, deep learning, various metaheuristics, and ensemble methods. Moreover, it is found that soft computing methods are mainly used for short-term demand forecasting.

  3. Multiparty Computation from Somewhat Homomorphic Encryption

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Pastro, Valerio; Smart, Nigel

    2011-01-01

    We propose a general multiparty computation protocol secure against an active adversary corrupting up to $n-1$ of the $n$ players. The protocol may be used to compute securely arithmetic circuits over any finite field $\F_{p^k}$. Our protocol consists of a preprocessing phase that is both independent of the function to be computed and of the inputs, and a much more efficient online phase where the actual computation takes place. The online phase is unconditionally secure and has total computational (and communication) complexity linear in $n$, the number of players, where earlier work was quadratic in $n$. Hence, the work done by each player in the online phase is independent of $n$ and moreover is only a small constant factor larger than what one would need to compute the circuit in the clear. It is the first protocol in the preprocessing model with these properties. We show a lower bound

  4. Database organization for computer-aided characterization of laser diode

    International Nuclear Information System (INIS)

    Oyedokun, Z.O.

    1988-01-01

    Computer-aided data logging involves a huge amount of data which must be properly managed for optimized storage space and easy access, retrieval and utilization. An organization method is developed to enhance the advantages of computer-based data logging of the testing of the semiconductor injection laser: it optimizes storage space, permits authorized users easy access and inhibits penetration. This method is based on a unique file-identification protocol, a tree structure and command-file-oriented access procedures.

  5. Standardization and Optimization of Computed Tomography Protocols to Achieve Low-Dose

    Science.gov (United States)

    Chin, Cynthia; Cody, Dianna D.; Gupta, Rajiv; Hess, Christopher P.; Kalra, Mannudeep K.; Kofler, James M.; Krishnam, Mayil S.; Einstein, Andrew J.

    2014-01-01

    The increase in radiation exposure due to CT scans has been of growing concern in recent years. CT scanners differ in their capabilities and various indications require unique protocols, but there remains room for standardization and optimization. In this paper we summarize approaches to reduce dose, as discussed in lectures comprising the first session of the 2013 UCSF Virtual Symposium on Radiation Safety in Computed Tomography. The experience of scanning at low dose in different body regions, for both diagnostic and interventional CT procedures, is addressed. An essential primary step is justifying the medical need for each scan. General guiding principles for reducing dose include tailoring a scan to a patient, minimizing scan length, use of tube current modulation and minimizing tube current, minimizing tube potential, iterative reconstruction, and periodic review of CT studies. Organized efforts for standardization have been spearheaded by professional societies such as the American Association of Physicists in Medicine. Finally, all team members should demonstrate an awareness of the importance of minimizing dose. PMID:24589403

  6. e-SCP-ECG+ Protocol: An Expansion on SCP-ECG Protocol for Health Telemonitoring—Pilot Implementation

    Directory of Open Access Journals (Sweden)

    George J. Mandellos

    2010-01-01

    Full Text Available Standard Communication Protocol for Computer-assisted Electrocardiography (SCP-ECG) provides standardized communication among different ECG devices and medical information systems. This paper extends the use of this protocol in order to be included in health monitoring systems. It introduces new sections into the SCP-ECG structure for transferring data for positioning, allergies, and five additional biosignals: noninvasive blood pressure (NiBP), body temperature (Temp), carbon dioxide (CO2), blood oxygen saturation (SpO2), and pulse rate. It also introduces new tags in existing sections for transferring comprehensive demographic data. The proposed enhanced version is referred to as the e-SCP-ECG+ protocol. This paper also considers the pilot implementation of the new protocol as a software component in a Health Telemonitoring System.

  7. Electromagnetic computation methods for lightning surge protection studies

    CERN Document Server

    Baba, Yoshihiro

    2016-01-01

    This book is the first to consolidate current research and to examine the theories of electromagnetic computation methods in relation to lightning surge protection. The authors introduce and compare existing electromagnetic computation methods such as the method of moments (MOM), the partial element equivalent circuit (PEEC), the finite element method (FEM), the transmission-line modeling (TLM) method, and the finite-difference time-domain (FDTD) method. The application of FDTD method to lightning protection studies is a topic that has matured through many practical applications in the past decade, and the authors explain the derivation of Maxwell's equations required by the FDTD, and modeling of various electrical components needed in computing lightning electromagnetic fields and surges with the FDTD method. The book describes the application of FDTD method to current and emerging problems of lightning surge protection of continuously more complex installations, particularly in critical infrastructures of e...

  8. Computational methods for data evaluation and assimilation

    CERN Document Server

    Cacuci, Dan Gabriel

    2013-01-01

    Data evaluation and data combination require the use of a wide range of probability theory concepts and tools, from deductive statistics mainly concerning frequencies and sample tallies to inductive inference for assimilating non-frequency data and a priori knowledge. Computational Methods for Data Evaluation and Assimilation presents interdisciplinary methods for integrating experimental and computational information. This self-contained book shows how the methods can be applied in many scientific and engineering areas. After presenting the fundamentals underlying the evaluation of experiment

  9. SPP: A data base processor data communications protocol

    Science.gov (United States)

    Fishwick, P. A.

    1983-01-01

    The design and implementation of a data communications protocol for the Intel Data Base Processor (DBP) is defined. The protocol is termed SPP (Service Port Protocol) since it enables data transfer between the host computer and the DBP service port. The protocol implementation is extensible in that it is explicitly layered and the protocol functionality is hierarchically organized. Extensive trace and performance capabilities have been supplied with the protocol software to permit optional efficient monitoring of the data transfer between the host and the Intel data base processor. Machine independence was considered to be an important attribute during the design and implementation of SPP. The protocol source is fully commented and is included in Appendix A of this report.

  10. Gamma camera performance: technical assessment protocol

    International Nuclear Information System (INIS)

    Bolster, A.A.; Waddington, W.A.

    1996-01-01

    This protocol addresses the performance assessment of single and dual headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author)

  11. Gamma camera performance: technical assessment protocol

    Energy Technology Data Exchange (ETDEWEB)

    Bolster, A.A. [West Glasgow Hospitals NHS Trust, Glasgow (United Kingdom). Dept. of Clinical Physics]; Waddington, W.A. [University College London Hospitals NHS Trust, London (United Kingdom). Inst. of Nuclear Medicine]

    1996-12-31

    This protocol addresses the performance assessment of single and dual headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author).

  12. Computer Anti-forensics Methods and their Impact on Computer Forensic Investigation

    OpenAIRE

    Pajek, Przemyslaw; Pimenidis, Elias

    2009-01-01

    Electronic crime is very difficult to investigate and prosecute, mainly due to the fact that investigators have to build their cases based on artefacts left on computer systems. Nowadays, computer criminals are aware of computer forensics methods and techniques and try to use countermeasure techniques to efficiently impede the investigation processes. In many cases investigation with such countermeasure techniques in place appears to be too expensive, or too time consuming t...

  13. LEACH-A: An Adaptive Method for Improving LEACH Protocol

    Directory of Open Access Journals (Sweden)

    Jianli ZHAO

    2014-01-01

    Full Text Available Energy has become one of the most important constraints on wireless sensor networks. Hence, many researchers in this field focus on how to design a routing protocol to prolong the lifetime of the network. Classical hierarchical protocols such as LEACH and LEACH-C perform well in saving energy consumption. However, a selection strategy based only on the largest residual energy or the shortest distance will still consume more energy. In this paper an adaptive routing protocol named "LEACH-A", which has an energy threshold E0, is proposed. If there are cluster nodes whose residual energy is greater than E0, the node with the largest residual energy is selected to communicate with the base station; when the energy of all cluster nodes is less than E0, the node nearest to the base station is selected to communicate with the base station. Simulations show that our improved protocol LEACH-A performs better than LEACH and LEACH-C.
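
    The selection rule in the abstract translates directly into code. Below is a minimal sketch under assumed data structures (each cluster node carries a residual energy and a distance to the base station); the names are ours, not from the paper's simulation.

```python
def select_communicating_node(nodes, e0):
    """nodes: list of (node_id, residual_energy, distance_to_base_station).
    If any node still holds more energy than the threshold E0, pick the one
    with the largest residual energy; otherwise pick the node nearest the BS."""
    above_threshold = [n for n in nodes if n[1] > e0]
    if above_threshold:
        return max(above_threshold, key=lambda n: n[1])
    return min(nodes, key=lambda n: n[2])


nodes = [("a", 0.9, 40.0), ("b", 0.4, 10.0), ("c", 0.7, 25.0)]
print(select_communicating_node(nodes, e0=0.5))   # ('a', 0.9, 40.0): most energy
print(select_communicating_node(nodes, e0=0.95))  # ('b', 0.4, 10.0): nearest to BS
```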

  14. A software defined RTU multi-protocol automatic adaptation data transmission method

    Science.gov (United States)

    Jin, Huiying; Xu, Xingwu; Wang, Zhanfeng; Ma, Weijun; Li, Sheng; Su, Yong; Pan, Yunpeng

    2018-02-01

    The remote terminal unit (RTU) is the core device of monitoring systems in hydrology and water resources. Different devices often use different communication protocols in the application layer, which makes information analysis and communication networking difficult. Therefore, we introduced the idea of software-defined hardware, abstracted the common features of the mainstream communication protocols of the RTU application layer, and proposed a unified common protocol model. The various application-layer communication protocol algorithms are then modularized according to the model. The executable codes of these algorithms are labeled by virtual functions and stored in the flash chips of the embedded CPU to form the protocol stack. According to the configuration commands used to initialize the RTU communication system, it is able to achieve dynamic assembly and loading of the various application-layer communication protocols of the RTU and complete the efficient transport of sensor data from the RTU to the central station, while the data acquisition protocol of the sensors and the various external communication terminals remain unchanged.
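
    A minimal sketch of the dispatch idea (the names, frame layouts, and decorator are our own illustration, not the paper's implementation): each application-layer protocol is a module registered behind a common interface, and the configuration command selects which one decodes incoming frames.

```python
from typing import Callable, Dict

ProtocolCodec = Callable[[bytes], dict]

_registry: Dict[str, ProtocolCodec] = {}


def register(name: str):
    """Decorator registering a decoder module in the protocol stack."""
    def wrap(fn: ProtocolCodec) -> ProtocolCodec:
        _registry[name] = fn
        return fn
    return wrap


@register("modbus-like")            # illustrative frame layout
def decode_modbus_like(frame: bytes) -> dict:
    return {"station": frame[0], "function": frame[1], "payload": frame[2:]}


@register("hydro-vendor-x")         # hypothetical vendor protocol
def decode_vendor_x(frame: bytes) -> dict:
    return {"sensor": frame[:2].hex(), "reading": int.from_bytes(frame[2:4], "big")}


def decode(configured_protocol: str, frame: bytes) -> dict:
    """Dispatch per the configuration command, mirroring the dynamic
    assembly/loading step described in the abstract."""
    return _registry[configured_protocol](frame)


print(decode("modbus-like", bytes([1, 3, 0x10, 0x20])))
```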

  15. Computational and instrumental methods in EPR

    CERN Document Server

    Bender, Christopher J

    2006-01-01

    Computational and Instrumental Methods in EPR. Prof. Bender, Fordham University; Prof. Lawrence J. Berliner, University of Denver. Electron magnetic resonance has been greatly facilitated by the introduction of advances in instrumentation and better computational tools, such as the increasingly widespread use of the density matrix formalism. This volume is devoted to both instrumentation and computation aspects of EPR, while addressing applications such as spin relaxation time measurements, the measurement of hyperfine interaction parameters, and the recovery of Mn(II) spin Hamiltonian parameters via spectral simulation. Key features: Microwave Amplitude Modulation Technique to Measure Spin-Lattice (T1) and Spin-Spin (T2) Relaxation Times; Improvement in the Measurement of Spin-Lattice Relaxation Time in Electron Paramagnetic Resonance; Quantitative Measurement of Magnetic Hyperfine Parameters and the Physical Organic Chemistry of Supramolecular Systems; New Methods of Simulation of Mn(II) EPR Spectra: Single Cryst...

  16. Computer-aided dental prostheses construction using reverse engineering.

    Science.gov (United States)

    Solaberrieta, E; Minguez, R; Barrenetxea, L; Sierra, E; Etxaniz, O

    2014-01-01

    The implementation of computer-aided design/computer-aided manufacturing (CAD/CAM) systems with virtual articulators, which take into account the kinematics, constitutes a breakthrough in the construction of customised dental prostheses. This paper presents a multidisciplinary protocol involving CAM techniques to produce dental prostheses. This protocol includes a step-by-step procedure using innovative reverse engineering technologies to transform completely virtual design processes into customised prostheses. A special emphasis is placed on a novel method that permits a virtual location of the models. The complete workflow includes the optical scanning of the patient, the use of reverse engineering software and, if necessary, the use of rapid prototyping to produce CAD temporary prostheses.

  17. Backpressure-based control protocols: design and computational aspects

    NARCIS (Netherlands)

    Miretskiy, D.I.; Scheinhardt, Willem R.W.; Mandjes, M.R.H.

    2009-01-01

    Congestion control in packet-based networks is often realized by feedback protocols. In this paper we assess their performance under a back-pressure mechanism that has been proposed and standardized for Ethernet metropolitan networks. In such a mechanism the service rate of an upstream queue is

  18. Backpressure-based control protocols: Design and computational aspects

    NARCIS (Netherlands)

    Miretskiy, D.I.; Scheinhardt, W.R.W.; Mandjes, M.R.H.

    2009-01-01

    Congestion control in packet-based networks is often realized by feedback protocols. In this paper we assess their performance under a back-pressure mechanism that has been proposed and standardized for Ethernet metropolitan networks. In such a mechanism the service rate of an upstream queue is

  19. Evaluation of Extraction Protocols for Simultaneous Polar and Non-Polar Yeast Metabolite Analysis Using Multivariate Projection Methods

    Directory of Open Access Journals (Sweden)

    Nicolas P. Tambellini

    2013-07-01

    Full Text Available Metabolomic and lipidomic approaches aim to measure metabolites or lipids in the cell. Metabolite extraction is a key step in obtaining useful and reliable data for successful metabolite studies. Significant efforts have been made to identify the optimal extraction protocol for various platforms and biological systems, for both polar and non-polar metabolites. Here we report an approach utilizing chemoinformatics for systematic comparison of protocols to extract both from a single sample of the model yeast organism Saccharomyces cerevisiae. Three chloroform/methanol/water partitioning based extraction protocols found in literature were evaluated for their effectiveness at reproducibly extracting both polar and non-polar metabolites. Fatty acid methyl esters and methoxyamine/trimethylsilyl derivatized aqueous compounds were analyzed by gas chromatography mass spectrometry to evaluate non-polar or polar metabolite analysis. The comparative breadth and amount of recovered metabolites was evaluated using multivariate projection methods. This approach identified an optimal protocol consisting of 64 identified polar metabolites from 105 ion hits and 12 fatty acids recovered, and will potentially attenuate the error and variation associated with combining metabolite profiles from different samples for untargeted analysis with both polar and non-polar analytes. It also confirmed the value of using multivariate projection methods to compare established extraction protocols.

  20. Outsourcing Set Intersection Computation Based on Bloom Filter for Privacy Preservation in Multimedia Processing

    Directory of Open Access Journals (Sweden)

    Hongliang Zhu

    2018-01-01

    Full Text Available With the development of cloud computing, the advantages of low cost and high computation ability meet the demands of the complicated computations of multimedia processing. Outsourcing computation to the cloud enables users with limited computing resources to store and process distributed multimedia application data without installing multimedia application software on local computer terminals, but the main problem is how to protect the security of user data in untrusted public cloud services. In recent years, privacy-preserving outsourced computation has become one of the most common methods of solving the security problems of cloud computing. However, existing approaches cannot meet the needs of large numbers of nodes and dynamic topologies. In this paper, we introduce a novel privacy-preserving outsourced computation method which combines the GM homomorphic encryption scheme and a Bloom filter to solve this problem, and propose a new privacy-preserving outsourced set intersection computation protocol. Results show that the new protocol resolves the privacy-preserving outsourced set intersection computation problem without increasing the complexity or the false positive probability. Besides, the number of participants, the size of the input secret sets, and the online time of participants are not limited.
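
    For concreteness, here is the Bloom-filter building block alone, in plain Python: one party publishes a filter of its set and the other tests its elements against it, accepting the usual small false-positive probability. The GM homomorphic-encryption layer that makes the real protocol privacy-preserving is deliberately omitted, and the parameters are illustrative.

```python
import hashlib


class BloomFilter:
    def __init__(self, m_bits: int = 1 << 16, k_hashes: int = 4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        # k independent positions derived from salted SHA-256 digests
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._positions(item))


alice = {b"apple", b"pear", b"plum"}
bob = {b"pear", b"plum", b"fig"}

bf = BloomFilter()
for x in alice:
    bf.add(x)

# Bob tests his elements against Alice's filter; matches form the candidate
# intersection, correct up to the filter's false-positive probability.
print(sorted(x for x in bob if x in bf))   # [b'pear', b'plum'] (w.h.p.)
```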

  1. Formalization of Quantum Protocols using Coq

    Directory of Open Access Journals (Sweden)

    Jaap Boender

    2015-11-01

    Full Text Available Quantum Information Processing, which is an exciting area of research at the intersection of physics and computer science, has great potential for influencing the future development of information processing systems. The building of practical, general-purpose quantum computers may be some years into the future. However, quantum communication and quantum cryptography are well developed. Commercial Quantum Key Distribution systems are easily available and several QKD networks have been built in various parts of the world. The security of the protocols used in these implementations relies on information-theoretic proofs, which may or may not reflect actual system behaviour. Moreover, testing of implementations cannot guarantee the absence of bugs and errors. This paper presents a novel framework for modelling and verifying quantum protocols and their implementations using the proof assistant Coq. We provide a Coq library for quantum bits (qubits), quantum gates, and quantum measurement. As a step towards verifying practical quantum communication and security protocols such as Quantum Key Distribution, we support multiple qubits, communication and entanglement. We illustrate these concepts by modelling the Quantum Teleportation Protocol, which communicates the state of an unknown quantum bit using only a classical channel.

  2. Computational methods for high-energy source shielding

    International Nuclear Information System (INIS)

    Armstrong, T.W.; Cloth, P.; Filges, D.

    1983-01-01

    The computational methods for high-energy radiation transport related to shielding of the SNQ-spallation source are outlined. The basic approach is to couple radiation-transport computer codes which use Monte Carlo methods and discrete ordinates methods. A code system is suggested that incorporates state-of-the-art radiation-transport techniques. The stepwise verification of that system is briefly summarized. The complexity of the resulting code system suggests a more straightforward code specially tailored for thick shield calculations. A short guide line to future development of such a Monte Carlo code is given

  3. Replication protocol analysis: a method for the study of real-world design thinking

    DEFF Research Database (Denmark)

    Galle, Per; Kovacs, L. B.

    1996-01-01

    Given the brief of an architectural competition on site planning, and the design awarded the first prize, the first author (trained as an architect but not a participant in the competition) produced a line of reasoning that might have led from brief to design. In the paper, such ‘design replication’ is refined into a method called ‘replication protocol analysis’ (RPA), and discussed from a methodological perspective of design research. It is argued that for the study of real-world design thinking this method offers distinct advantages over traditional ‘design protocol analysis’, which seeks to capture the designer’s authentic line of reasoning. To illustrate how RPA can be used, the site planning case is briefly presented, and part of the replicated line of reasoning analysed. One result of the analysis is a glimpse of a ‘logic of design’; another is an insight which sheds new light on Darke’s classical

  4. Adaptive Relay Activation in the Network Coding Protocols

    DEFF Research Database (Denmark)

    Pahlevani, Peyman; Roetter, Daniel Enrique Lucani; Fitzek, Frank

    2015-01-01

    State-of-the-art network-coding-based routing protocols exploit link quality information to compute the transmission rate at the intermediate nodes. However, link quality discovery protocols are usually inaccurate and introduce overhead in wireless mesh networks. In this paper, we presen

  5. Secure Multiparty Quantum Computation for Summation and Multiplication.

    Science.gov (United States)

    Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2016-01-21

    As a fundamental primitive, Secure Multiparty Summation and Multiplication can be used to build complex secure protocols for other multiparty computations, especially numerical computations. However, there is still a lack of systematic and efficient quantum methods to compute Secure Multiparty Summation and Multiplication. In this paper, we present a novel and efficient quantum approach to securely compute the summation and multiplication of multiparty private inputs, respectively. Compared to classical solutions, our proposed approach can ensure unconditional security and perfect privacy protection based on the physical principles of quantum mechanics.
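
    For comparison, the standard classical approach to the same summation task is additive secret sharing modulo a public prime, sketched below; unlike the quantum protocol, its privacy holds only against limited collusion, not unconditionally. The modulus and structure are our own illustrative choices.

```python
import secrets

P = 2**61 - 1  # public prime modulus


def share(value: int, n_parties: int):
    """Split `value` into n random shares summing to value mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares


inputs = [13, 7, 22]                        # one private input per party
all_shares = [share(v, len(inputs)) for v in inputs]

# Party j publishes only the sum of the j-th shares it received;
# individual inputs stay hidden unless all other parties collude.
partial_sums = [sum(column) % P for column in zip(*all_shares)]
print(sum(partial_sums) % P)                # 42, with no input revealed
```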

  6. Methods for teaching geometric modelling and computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Rotkov, S.I.; Faitel'son, Yu. Ts.

    1992-05-01

    This paper considers methods for teaching the methods and algorithms of geometric modelling and computer graphics to programmers, designers and users of CAD and computer-aided research systems. There is a bibliography that can be used to prepare lectures and practical classes. 37 refs., 1 tab.

  7. Computer Animation Based on Particle Methods

    Directory of Open Access Journals (Sweden)

    Rafal Wcislo

    1999-01-01

    Full Text Available The paper presents the main issues of a computer animation of a set of elastic macroscopic objects based on the particle method. The main assumption of the generated animations is to achieve very realistic movements in a scene observed on the computer display. The objects (solid bodies) interact mechanically with each other. The movements and deformations of solids are calculated using the particle method. Phenomena connected with the behaviour of solids in the gravitational field, their deformations caused by collisions and interactions with an optional liquid medium are simulated. The simulation of the liquid is performed using the cellular automata method. The paper presents both simulation schemes (the particle method and cellular automata rules) and the method of combining them in a single animation program. In order to speed up the execution of the program, a parallel version based on a network of workstations was developed. The paper describes the methods of parallelization and considers problems of load balancing, collision detection, process synchronization and distributed control of the animation.

  8. Multiparty Computation for Dishonest Majority

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Orlandi, Claudio

    2010-01-01

    Multiparty computation protocols have been known for more than twenty years now, but due to their lack of efficiency their use is still limited in real-world applications: the goal of this paper is the design of efficient two- and multi-party computation protocols aimed at filling the gap between the

  9. Three-dimensional protein structure prediction: Methods and computational strategies.

    Science.gov (United States)

    Dorn, Márcio; E Silva, Mariel Barbachan; Buriol, Luciana S; Lamb, Luis C

    2014-10-12

    A long-standing problem in structural bioinformatics is to determine the three-dimensional (3-D) structure of a protein when only a sequence of amino acid residues is given. Many computational methodologies and algorithms have been proposed as a solution to the 3-D Protein Structure Prediction (3-D-PSP) problem. These methods can be divided into four main classes: (a) first principle methods without database information; (b) first principle methods with database information; (c) fold recognition and threading methods; and (d) comparative modeling methods and sequence alignment strategies. Deterministic computational techniques, optimization techniques, data mining and machine learning approaches are typically used in the construction of computational solutions for the PSP problem. Our main goal with this work is to review the methods and computational strategies that are currently used in 3-D protein prediction. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Classical versus Computer Algebra Methods in Elementary Geometry

    Science.gov (United States)

    Pech, Pavel

    2005-01-01

    Computer algebra methods based on results of commutative algebra like Groebner bases of ideals and elimination of variables make it possible to solve complex, elementary and non elementary problems of geometry, which are difficult to solve using a classical approach. Computer algebra methods permit the proof of geometric theorems, automatic…

  11. A secured authentication protocol for wireless sensor networks using elliptic curves cryptography.

    Science.gov (United States)

    Yeh, Hsiu-Lien; Chen, Tien-Ho; Liu, Pin-Chuan; Kim, Tai-Hoo; Wei, Hsin-Wen

    2011-01-01

    User authentication is a crucial service in wireless sensor networks (WSNs) that is becoming increasingly common because wireless sensor nodes are typically deployed in an unattended environment, leaving them open to possible hostile network attack. Because wireless sensor nodes are limited in computing power, data storage and communication capabilities, any user authentication protocol must be designed to operate efficiently in a resource-constrained environment. In this paper, we review several proposed WSN user authentication protocols, with a detailed review of the M. L. Das protocol and a cryptanalysis of Das' protocol that shows several security weaknesses. Furthermore, this paper proposes an ECC-based user authentication protocol that resolves these weaknesses. According to our analysis of the security of the ECC-based protocol, it is suitable for applications with higher security requirements. Finally, we present a comparison of security, computation, and communication costs and performances for the proposed protocols. The ECC-based protocol is shown to be suitable for higher-security WSNs.
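
    As a generic illustration of the primitive involved (not the specific protocol of Yeh et al.), the sketch below runs an ECC challenge-response with the widely used Python `cryptography` package: the gateway issues a fresh nonce and verifies an ECDSA signature over it.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrolment: the gateway stores the user's public key.
user_priv = ec.generate_private_key(ec.SECP256R1())
user_pub = user_priv.public_key()

# Authentication: the gateway sends a fresh nonce, the user signs it.
nonce = os.urandom(32)
signature = user_priv.sign(nonce, ec.ECDSA(hashes.SHA256()))

# Verification raises InvalidSignature on failure.
user_pub.verify(signature, nonce, ec.ECDSA(hashes.SHA256()))
print("user authenticated")
```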

  12. Summary Report on Unconditionally Secure Protocols

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Salvail, Louis; Cachin, Christian

    This document describes the state of the art and some of the main open problems in the area of unconditionally secure cryptographic protocols. The most essential part of a cryptographic protocol is not its being secure. Imagine a cryptographic protocol which is secure, but where we do not know that it is secure. Such a protocol would do little in providing security. When all comes to all, cryptographic security is done for the sake of people, and the essential part of security is for people what it has always been, namely to feel secure. To feel secure employing a given cryptographic protocol we need to know that it is secure. I.e., we need a proof that it is secure. Today the proof of security of essentially all practically employed cryptographic protocols relies on computational assumptions. To prove that currently employed ways to communicate securely over the Internet are secure we e.g. need

  13. Short Review on Quantum Key Distribution Protocols.

    Science.gov (United States)

    Giampouris, Dimitris

    2017-01-01

    Cryptographic protocols and mechanisms are widely investigated under the notion of quantum computing. Quantum cryptography offers particular advantages over classical ones, whereas in some cases established protocols have to be revisited in order to maintain their functionality. The purpose of this paper is to provide the basic definitions and review the most important theoretical advancements concerning the BB84 and E91 protocols. It also aims to offer a summary on some key developments on the field of quantum key distribution, closely related with the two aforementioned protocols. The main goal of this study is to provide the necessary background information along with a thorough review on the theoretical aspects of QKD, concentrating on specific protocols. The BB84 and E91 protocols have been chosen because most other protocols are similar to these, a fact that makes them important for the general understanding of how the QKD mechanism functions.

  14. Comparison of Five Computational Methods for Computing Q Factors in Photonic Crystal Membrane Cavities

    DEFF Research Database (Denmark)

    Novitsky, Andrey; de Lasson, Jakob Rosenkrantz; Frandsen, Lars Hagedorn

    2017-01-01

    Five state-of-the-art computational methods are benchmarked by computing quality factors and resonance wavelengths in photonic crystal membrane L5 and L9 line defect cavities. The convergence of the methods with respect to resolution, degrees of freedom and number of modes is investigated. Specia...

  15. Costing 'healthy' food baskets in Australia - a systematic review of food price and affordability monitoring tools, protocols and methods.

    Science.gov (United States)

    Lewis, Meron; Lee, Amanda

    2016-11-01

    To undertake a systematic review to determine similarities and differences in metrics and results between recently and/or currently used tools, protocols and methods for monitoring Australian healthy food prices and affordability. Electronic databases of peer-reviewed literature and online grey literature were systematically searched using the PRISMA approach for articles and reports relating to healthy food and diet price assessment tools, protocols, methods and results that utilised retail pricing. National, state, regional and local areas of Australia from 1995 to 2015. Assessment tools, protocols and methods to measure the price of 'healthy' foods and diets. The search identified fifty-nine discrete surveys of 'healthy' food pricing incorporating six major food pricing tools (those used in multiple areas and time periods) and five minor food pricing tools (those used in a single survey area or time period). Analysis demonstrated methodological differences regarding: included foods; reference households; use of availability and/or quality measures; household income sources; store sampling methods; data collection protocols; analysis methods; and results. 'Healthy' food price assessment methods used in Australia lack comparability across all metrics and most do not fully align with a 'healthy' diet as recommended by the current Australian Dietary Guidelines. None have been applied nationally. Assessment of the price, price differential and affordability of healthy (recommended) and current (unhealthy) diets would provide more robust and meaningful data to inform health and fiscal policy in Australia. The INFORMAS 'optimal' approach provides a potential framework for development of these methods.

  16. Identity-Based Authentication for Cloud Computing

    Science.gov (United States)

    Li, Hongwei; Dai, Yuanshun; Tian, Ling; Yang, Haomiao

    Cloud computing is a recently developed technology for complex systems with massive-scale service sharing among numerous users. Therefore, authentication of both users and services is a significant issue for the trust and security of cloud computing. The SSL Authentication Protocol (SAP), once applied to cloud computing, becomes so complicated that it imposes a heavy load on users in both computation and communication. This paper, based on the identity-based hierarchical model for cloud computing (IBHMCC) and its corresponding encryption and signature schemes, presents a new identity-based authentication protocol for cloud computing and services. Simulation testing shows that the authentication protocol is more lightweight and efficient than SAP, especially on the user side. Such merit of our model, together with its great scalability, is well suited to massive-scale clouds.

  17. Methods in computed angiotomography of the brain

    International Nuclear Information System (INIS)

    Yamamoto, Yuji; Asari, Shoji; Sadamoto, Kazuhiko.

    1985-01-01

    The authors introduce methods for computed angiotomography of the brain. The setting of the scan planes and levels and the minimum-dose bolus (MinDB) injection of contrast medium are described in detail. These methods are easily and safely employed with the use of already widespread CT scanners. Computed angiotomography is expected to find clinical application in many institutions because of its diagnostic value in the screening of cerebrovascular lesions and in demonstrating the relationship between pathological lesions and cerebral vessels. (author)

  18. Variational-moment method for computing magnetohydrodynamic equilibria

    International Nuclear Information System (INIS)

    Lao, L.L.

    1983-08-01

    A fast yet accurate method to compute magnetohydrodynamic equilibria is provided by the variational-moment method, which is similar to the classical Rayleigh-Ritz-Galerkin approximation. The equilibrium solution sought is decomposed into a spectral representation. The partial differential equations describing the equilibrium are then recast into their equivalent variational form and systematically reduced to an optimum finite set of coupled ordinary differential equations. An appropriate spectral decomposition can make the series representing the solution converge rapidly and hence substantially reduces the amount of computational time involved. The moment method was developed first to compute fixed-boundary inverse equilibria in axisymmetric toroidal geometry, and was demonstrated to be both efficient and accurate. The method has since been generalized to calculate free-boundary axisymmetric equilibria, to include toroidal plasma rotation and pressure anisotropy, and to treat three-dimensional toroidal geometry. In all these formulations, the flux surfaces are assumed to be smooth and nested so that the solutions can be decomposed in Fourier series in inverse coordinates. These recent developments and the advantages and limitations of the moment method are reviewed. The use of alternate coordinates for decomposition is discussed.
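
    To fix ideas, a generic inverse-coordinate spectral ansatz of the kind described (an illustrative form in our own notation, not necessarily the exact representation used in the paper) reads:

```latex
% Truncated Fourier series for nested, up-down-symmetric flux surfaces:
R(\rho,\theta) = \sum_{m=0}^{M} R_m(\rho)\,\cos(m\theta), \qquad
Z(\rho,\theta) = \sum_{m=1}^{M} Z_m(\rho)\,\sin(m\theta)
```

    Substituting such a truncated series into the variational form of the equilibrium equations and requiring stationarity with respect to each amplitude $R_m$, $Z_m$ yields the optimum finite set of coupled ordinary differential equations mentioned above.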

  19. Security Protocols in a Nutshell

    OpenAIRE

    Toorani, Mohsen

    2016-01-01

    Security protocols are building blocks in secure communications. They deploy some security mechanisms to provide certain security services. Security protocols are considered abstract when analyzed, but they can have extra vulnerabilities when implemented. This manuscript provides a holistic study on security protocols. It reviews foundations of security protocols, taxonomy of attacks on security protocols and their implementations, and different methods and models for security analysis of pro...

  20. Three-Stage Quantum Cryptography Protocol under Collective-Rotation Noise

    Directory of Open Access Journals (Sweden)

    Linsen Wu

    2015-05-01

    Full Text Available Information security is increasingly important as society migrates to the information age. Classical cryptography widely used nowadays is based on computational complexity, which means that it assumes that solving some particular mathematical problems is hard on a classical computer. With the development of supercomputers and, potentially, quantum computers, classical cryptography has more and more potential risks. Quantum cryptography provides a solution which is based on the Heisenberg uncertainty principle and no-cloning theorem. While BB84-based quantum protocols are only secure when a single photon is used in communication, the three-stage quantum protocol is multi-photon tolerant. However, existing analyses assume perfect noiseless channels. In this paper, a multi-photon analysis is performed for the three-stage quantum protocol under the collective-rotation noise model. The analysis provides insights into the impact of the noise level on a three-stage quantum cryptography system.

  1. 26 CFR 1.167(b)-0 - Methods of computing depreciation.

    Science.gov (United States)

    2010-04-01

    26 CFR § 1.167(b)-0 Methods of computing depreciation. (a) In general. Any reasonable and consistently applied method of computing depreciation may be used or continued in use under section 167. Regardless of the...
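
    As a worked illustration of two "reasonable and consistently applied" methods in the sense of this section, the sketch below computes straight-line and double-declining-balance schedules; the figures are invented for the example and carry no regulatory weight.

```python
def straight_line(cost, salvage, years):
    """Equal annual deductions over the asset's useful life."""
    return [(cost - salvage) / years] * years


def double_declining(cost, salvage, years):
    """Fixed-rate (2/years) deduction on the declining book value."""
    rate, book, schedule = 2.0 / years, cost, []
    for _ in range(years):
        dep = min(book * rate, book - salvage)  # never depreciate below salvage
        schedule.append(dep)
        book -= dep
    return schedule


print(straight_line(10_000, 1_000, 5))     # [1800.0, 1800.0, 1800.0, 1800.0, 1800.0]
print(double_declining(10_000, 1_000, 5))  # [4000.0, 2400.0, 1440.0, 864.0, 296.0]
```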

  2. Denver screening protocol for blunt cerebrovascular injury reduces the use of multi-detector computed tomography angiography.

    Science.gov (United States)

    Beliaev, Andrei M; Barber, P Alan; Marshall, Roger J; Civil, Ian

    2014-06-01

    Blunt cerebrovascular injury (BCVI) occurs in 0.2-2.7% of blunt trauma patients and has up to 30% mortality. Conventional screening does not recognize up to 20% of BCVI patients. To improve the diagnosis of BCVI, both an expanded battery of screening criteria and multi-detector computed tomography angiography (CTA) have been suggested. The aim of this study is to investigate whether the use of CTA restricted to Denver protocol screen-positive patients would reduce the unnecessary use of CTA as a pre-emptive screening tool. This is a registry-based study of blunt trauma patients admitted to Auckland City Hospital from 1998 to 2012. The diagnosis of BCVI was confirmed or excluded with CTA, magnetic resonance angiography and, if these imaging modalities were non-conclusive, four-vessel digital subtraction angiography. Thirty (61%) BCVI and 19 (39%) non-BCVI patients met the eligibility criteria. The Denver protocol applied to our cohort of patients had a sensitivity of 97% (95% confidence interval (CI): 83-100%) and a specificity of 42% (95% CI: 20-67%). With a prevalence of BCVI in blunt trauma patients of 0.2% and 2.7%, the post-test odds of a screen-positive test were 0.03 (95% CI: 0.002-0.005) and 0.046 (95% CI: 0.314-0.068), respectively. Application of CTA to Denver protocol screen-positive trauma patients can decrease the use of CTA as a pre-emptive screening tool by 95-97% and reduce its hazards. © 2013 Royal Australasian College of Surgeons.

  3. Quantum multi-signature protocol based on teleportation

    International Nuclear Information System (INIS)

    Wen Xiao-jun; Liu Yun; Sun Yu

    2007-01-01

    In this paper, a protocol which can be used for multi-user quantum signatures is proposed. The scheme of signature and verification is based on the correlation of Greenberger-Horne-Zeilinger (GHZ) states and controlled quantum teleportation. Unlike digital signatures, which are based on computational complexity, the proposed protocol has perfect security in noiseless quantum channels. Compared to previous quantum signature schemes, this protocol can verify a signature independently of an arbitrator as well as realize multi-user signatures together. (orig.)

  4. Practical Secure Computation with Pre-Processing

    DEFF Research Database (Denmark)

    Zakarias, Rasmus Winther

    Secure Multiparty Computation has been divided between protocols best suited for binary circuits and protocols best suited for arithmetic circuits. With their MiniMac protocol in [DZ13], Damgård and Zakarias take an important step towards bridging these worlds with an arithmetic protocol tuned... space for pre-processing material than computing the non-linear parts online (this depends on the quality of the circuit, of course). Surprisingly, even for our optimized AES circuit this is not the case. We further improve the design of the pre-processing material and end up with only 10 megabytes of pre... a protocol for small-field arithmetic to do fast large-integer multiplications. This is achieved by devising pre-processing material that allows the Toom-Cook multiplication algorithm to run between the parties with linear communication complexity. With this result, computation on the CPU by the parties...
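
    To make the Toom-Cook structure concrete, here is its simplest member, the 2-way split (Karatsuba), as a self-contained sketch on plain Python integers; the MPC version described above additionally distributes the sub-multiplications between the parties using the pre-processing material.

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply via the 2-way Toom-Cook split: 3 half-size multiplications
    replace the 4 of the schoolbook method."""
    if x < 2**32 or y < 2**32:
        return x * y                                  # base case: machine-sized
    half = max(x.bit_length(), y.bit_length()) // 2
    x_hi, x_lo = x >> half, x & ((1 << half) - 1)
    y_hi, y_lo = y >> half, y & ((1 << half) - 1)
    a = karatsuba(x_hi, y_hi)                         # high parts
    b = karatsuba(x_lo, y_lo)                         # low parts
    c = karatsuba(x_hi + x_lo, y_hi + y_lo) - a - b   # cross terms
    return (a << (2 * half)) + (c << half) + b


x, y = 3**200, 7**180
assert karatsuba(x, y) == x * y
```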

  5. Application of Blind Quantum Computation to Two-Party Quantum Computation

    Science.gov (United States)

    Sun, Zhiyuan; Li, Qin; Yu, Fang; Chan, Wai Hong

    2018-03-01

    Blind quantum computation (BQC) allows a client who has only limited quantum power to achieve quantum computation with the help of a remote quantum server and still keep the client's input, output, and algorithm private. Recently, Kashefi and Wallden extended BQC to achieve two-party quantum computation which allows two parties Alice and Bob to perform a joint unitary transform upon their inputs. However, in their protocol Alice has to prepare rotated single qubits and perform Pauli operations, and Bob needs to have a powerful quantum computer. In this work, we also utilize the idea of BQC to put forward an improved two-party quantum computation protocol in which the operations of both Alice and Bob are simplified since Alice only needs to apply Pauli operations and Bob is just required to prepare and encrypt his input qubits.

  6. Application of Blind Quantum Computation to Two-Party Quantum Computation

    Science.gov (United States)

    Sun, Zhiyuan; Li, Qin; Yu, Fang; Chan, Wai Hong

    2018-06-01

    Blind quantum computation (BQC) allows a client who has only limited quantum power to achieve quantum computation with the help of a remote quantum server and still keep the client's input, output, and algorithm private. Recently, Kashefi and Wallden extended BQC to achieve two-party quantum computation which allows two parties Alice and Bob to perform a joint unitary transform upon their inputs. However, in their protocol Alice has to prepare rotated single qubits and perform Pauli operations, and Bob needs to have a powerful quantum computer. In this work, we also utilize the idea of BQC to put forward an improved two-party quantum computation protocol in which the operations of both Alice and Bob are simplified since Alice only needs to apply Pauli operations and Bob is just required to prepare and encrypt his input qubits.

  7. Improved look-up table method of computer-generated holograms.

    Science.gov (United States)

    Wei, Hui; Gong, Guanghong; Li, Ni

    2016-11-10

    Heavy computation load and vast memory requirements are major bottlenecks of computer-generated holograms (CGHs), which are promising and challenging in three-dimensional displays. To solve these problems, an improved look-up table (LUT) method suitable for arbitrarily sampled object points is proposed and implemented on a graphics processing unit (GPU); its reconstructed object quality is consistent with that of the coherent ray-trace (CRT) method. The concept of a distance factor is defined, and the distance factors are pre-computed off-line and stored in a look-up table. The results show that while reconstruction quality close to that of the CRT method is obtained, the on-line computation time is dramatically reduced compared with the LUT method on the GPU, and the memory usage is considerably lower than that of the novel-LUT method. Optical experiments are carried out to validate the effectiveness of the proposed method.
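
    A heavily simplified numpy sketch of the look-up-table principle follows: a distance-dependent complex factor is precomputed once over a quantized distance grid and fetched per pixel, instead of evaluating the exponential per (point, pixel) pair. The grid, wavelength, and factor definition are illustrative; they are not the paper's exact distance factor or GPU kernel.

```python
import numpy as np

wavelength = 532e-9
k = 2 * np.pi / wavelength

# Off-line: precompute the complex factor over a quantized distance grid.
r_grid = np.linspace(0.05, 0.15, 4096)
lut = np.exp(1j * k * r_grid) / r_grid


def hologram(points, plane_xy):
    """points: (N, 3) object points; plane_xy: (M, 2) hologram pixels at z = 0."""
    field = np.zeros(len(plane_xy), dtype=complex)
    for px, py, pz in points:
        r = np.sqrt((plane_xy[:, 0] - px) ** 2 + (plane_xy[:, 1] - py) ** 2 + pz ** 2)
        idx = np.clip(np.searchsorted(r_grid, r), 0, len(r_grid) - 1)
        field += lut[idx]           # table lookup replaces per-pixel exp()
    return np.angle(field)          # phase-only hologram


pts = np.array([[0.0, 0.0, 0.1], [1e-3, 0.0, 0.12]])
xy = np.stack(np.meshgrid(np.linspace(-5e-3, 5e-3, 64),
                          np.linspace(-5e-3, 5e-3, 64)), -1).reshape(-1, 2)
print(hologram(pts, xy).shape)      # (4096,)
```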

  8. Comparison of radiation doses using weight-based protocol and dose modulation techniques for patients undergoing biphasic abdominal computed tomography examinations

    Directory of Open Access Journals (Sweden)

    Livingstone Roshan

    2009-01-01

    Full Text Available Computed tomography (CT) of the abdomen contributes a substantial amount of man-made radiation dose to patients, and use of this modality is on the increase. This study intends to compare radiation dose and image quality using dose modulation techniques and weight-based protocol exposure parameters for biphasic abdominal CT. Using a six-slice CT scanner, a prospective study of 426 patients who underwent abdominal CT examinations was performed. Constant tube potentials of 90 kV and 120 kV were used for all arterial and portal venous phases, respectively. The tube current-time product for the weight-based protocol was optimized according to the patient's body weight; this was automatically selected in dose modulation. The effective doses using the weight-based protocol, angular dose modulation and z-axis dose modulation were 11.3 mSv, 9.5 mSv and 8.2 mSv respectively for patients with body weights ranging from 40 to 60 kg. For patients with body weights ranging from 60 to 80 kg, the effective doses were 13.2 mSv, 11.2 mSv and 10.6 mSv respectively. The use of dose modulation techniques resulted in a reduction of 16 to 28% in radiation dose with acceptable diagnostic accuracy in comparison to the use of weight-based protocol settings.

  9. Methods and experimental techniques in computer engineering

    CERN Document Server

    Schiaffonati, Viola

    2014-01-01

    Computing and science reveal a synergic relationship. On the one hand, it is widely evident that computing plays an important role in the scientific endeavor. On the other hand, the role of scientific method in computing is getting increasingly important, especially in providing ways to experimentally evaluate the properties of complex computing systems. This book critically presents these issues from a unitary conceptual and methodological perspective by addressing specific case studies at the intersection between computing and science. The book originates from, and collects the experience of, a course for PhD students in Information Engineering held at the Politecnico di Milano. Following the structure of the course, the book features contributions from some researchers who are working at the intersection between computing and science.

  10. SmartShadow models and methods for pervasive computing

    CERN Document Server

    Wu, Zhaohui

    2013-01-01

    SmartShadow: Models and Methods for Pervasive Computing offers a new perspective on pervasive computing with SmartShadow, which is designed to model a user as a personality ""shadow"" and to model pervasive computing environments as user-centric dynamic virtual personal spaces. Just like human beings' shadows in the physical world, it follows people wherever they go, providing them with pervasive services. The model, methods, and software infrastructure for SmartShadow are presented and an application for smart cars is also introduced.  The book can serve as a valuable reference work for resea

  11. Latency correction of event-related potentials between different experimental protocols

    Science.gov (United States)

    Iturrate, I.; Chavarriaga, R.; Montesano, L.; Minguez, J.; Millán, JdR

    2014-06-01

    Objective. A fundamental issue in EEG event-related potentials (ERPs) studies is the amount of data required to have an accurate ERP model. This also impacts the time required to train a classifier for a brain-computer interface (BCI). This issue is mainly due to the poor signal-to-noise ratio and the large fluctuations of the EEG caused by several sources of variability. One of these sources is directly related to the experimental protocol or application designed, and may affect the amplitude or latency of ERPs. This usually prevents BCI classifiers from generalizing among different experimental protocols. In this paper, we analyze the effect of the amplitude and the latency variations among different experimental protocols based on the same type of ERP. Approach. We present a method to analyze and compensate for the latency variations in BCI applications. The algorithm has been tested on two widely used ERPs (P300 and observation error potentials), in three experimental protocols in each case. We report the ERP analysis and single-trial classification. Main results. The results obtained show that the designed experimental protocols significantly affect the latency of the recorded potentials but not the amplitudes. Significance. These results show how the use of latency-corrected data can be used to generalize the BCIs, reducing the calibration time when facing a new experimental protocol.
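
    One plausible realization of the latency-correction step (our own assumption of cross-correlation on grand averages, not necessarily the authors' exact algorithm) is sketched below with synthetic single trials.

```python
import numpy as np


def estimate_lag(erp_ref: np.ndarray, erp_new: np.ndarray) -> int:
    """Lag (in samples) of erp_new relative to erp_ref, via cross-correlation."""
    corr = np.correlate(erp_new - erp_new.mean(), erp_ref - erp_ref.mean(), "full")
    return int(np.argmax(corr)) - (len(erp_ref) - 1)


rng = np.random.default_rng(0)
t = np.arange(256)
bump = lambda center: np.exp(-((t - center) / 10.0) ** 2)  # synthetic ERP component
ref_trials = bump(80) + rng.normal(0, 0.3, (100, 256))     # protocol 1
new_trials = bump(95) + rng.normal(0, 0.3, (100, 256))     # protocol 2, delayed

lag = estimate_lag(ref_trials.mean(0), new_trials.mean(0))
corrected = np.roll(new_trials, -lag, axis=1)              # latency-corrected trials
print("estimated lag:", lag)                               # ~15 samples
```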

  12. Fast and maliciously secure two-party computation using the GPU

    DEFF Research Database (Denmark)

    Frederiksen, Tore Kasper; Nielsen, Jesper Buus

    2013-01-01

    We describe, and implement, a maliciously secure protocol for two-party computation in a parallel computational model. Our protocol is based on Yao’s garbled circuit and an efficient OT extension. The implementation is done using CUDA and yields fast results for maliciously secure two-party computation in a financially feasible and practical setting, by using a consumer grade CPU and GPU. Our protocol further uses some novel constructions in order to combine garbled circuits and an OT extension in a parallel and maliciously secure setting.
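
    For a flavour of the garbled-circuit building block the abstract names, the following self-contained sketch garbles and evaluates a single AND gate; the hash-based encryption and the zero-padding redundancy check are illustrative textbook choices, not the construction or the CUDA implementation used in the paper:

      import os, hashlib, random

      def H(*parts):
          h = hashlib.sha256()
          for p in parts:
              h.update(p)
          return h.digest()

      def garble_and_gate():
          # one random 16-byte label per wire value (0 and 1)
          a = [os.urandom(16) for _ in range(2)]
          b = [os.urandom(16) for _ in range(2)]
          c = [os.urandom(16) for _ in range(2)]
          table = []
          for x in (0, 1):
              for y in (0, 1):
                  pad = H(a[x], b[y])                # 32-byte one-time pad
                  row = bytes(p ^ k for p, k in zip(pad, c[x & y] + b'\x00' * 16))
                  table.append(row)
          random.shuffle(table)                      # hide the row order
          return a, b, c, table

      def evaluate(label_a, label_b, table):
          pad = H(label_a, label_b)
          for row in table:
              plain = bytes(p ^ r for p, r in zip(pad, row))
              if plain[16:] == b'\x00' * 16:         # redundancy check
                  return plain[:16]
          raise ValueError("no row decrypted")

      a, b, c, table = garble_and_gate()
      out = evaluate(a[1], b[1], table)   # evaluator holds labels for x=1, y=1
      print(out == c[1])                  # True: it learns the label for AND=1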

  13. Epidemic Protocols for Pervasive Computing Systems - Moving Focus from Architecture to Protocol

    DEFF Research Database (Denmark)

    Mogensen, Martin

    2009-01-01

    Pervasive computing systems are inherently running on unstable networks and devices, subject to constant topology changes, network failures, and high churn. For this reason, pervasive computing infrastructures need to handle these issues as part of their design. This is, however, not feasible, si...
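
    The epidemic (gossip) style of dissemination referenced in the title can be simulated in a few lines; the push-gossip rule, fanout and network size below are generic assumptions rather than the thesis's protocols:

      import random

      def push_gossip_rounds(n_nodes=1000, fanout=2, seed=1):
          """Rounds until a rumour reaches every node via push gossip."""
          rng = random.Random(seed)
          informed = {0}                   # node 0 starts with the update
          rounds = 0
          while len(informed) < n_nodes:
              rounds += 1
              for node in list(informed):
                  # each informed node pushes to `fanout` random peers
                  informed.update(rng.randrange(n_nodes) for _ in range(fanout))
          return rounds

      print(push_gossip_rounds())  # typically O(log n) rounds for 1000 nodes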

  14. A Logical Analysis of Quantum Voting Protocols

    Science.gov (United States)

    Rad, Soroush Rafiee; Shirinkalam, Elahe; Smets, Sonja

    2017-12-01

    In this paper we provide a logical analysis of the Quantum Voting Protocol for Anonymous Surveying as developed by Horoshko and Kilin in (Phys. Lett. A 375, 1172-1175, 2011). In particular we make use of the probabilistic logic of quantum programs as developed in (Int. J. Theor. Phys. 53, 3628-3647, 2014) to provide a formal specification of the protocol and to derive its correctness. Our analysis is part of a wider program on the application of quantum logics to the formal verification of protocols in quantum communication and quantum computation.

  15. Computational methods for two-phase flow and particle transport

    CERN Document Server

    Lee, Wen Ho

    2013-01-01

    This book describes mathematical formulations and computational methods for solving two-phase flow problems with a computer code that calculates thermal hydraulic problems related to light water and fast breeder reactors. The physical model also handles the particle and gas flow problems that arise from coal gasification and fluidized beds. The second part of this book deals with the computational methods for particle transport.

  16. Research and application of ARP protocol vulnerability attack and defense technology based on trusted network

    Science.gov (United States)

    Xi, Huixing

    2017-03-01

    With the continuous development of network technology and the rapid spread of the Internet, computer networks now reach every corner of the world, yet network attacks occur frequently. The ARP protocol vulnerability is one of the most common vulnerabilities in the TCP/IP four-layer architecture. Network protocol vulnerabilities can lead to intrusion into and attacks on information systems, and can degrade or disable the normal defense functions of a system [1]. At present, ARP-spoofing Trojans spread widely within LANs, posing a serious hidden danger to network operation and constituting the primary threat to LAN security. In this paper, the author summarizes the state of research and the key technologies involved in the ARP protocol, analyzes the mechanism behind the ARP protocol vulnerability, and analyzes the feasibility of attack techniques. Common defensive methods are then summarized and the advantages and disadvantages of each are compared. The current defense methods are also improved upon, and the advantages of the improved defense algorithm are presented. Finally, an appropriate test method is selected, a test environment is set up, and each proposed improved defense algorithm is tested experimentally.
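
    A common defensive measure in this area is monitoring for inconsistent IP-to-MAC bindings; the sketch below is a minimal, generic detector over a stream of observed ARP replies (the sample traffic and the first-seen-binding rule are illustrative assumptions, not the paper's improved algorithm):

      def detect_arp_spoofing(arp_replies):
          """Flag IP addresses whose advertised MAC changes between replies.

          arp_replies: iterable of (ip, mac) pairs as observed on the LAN.
          Returns a list of (ip, old_mac, new_mac) suspicious rebindings.
          """
          bindings = {}      # ip -> first MAC seen (could be a static table)
          alerts = []
          for ip, mac in arp_replies:
              if ip in bindings and bindings[ip] != mac:
                  alerts.append((ip, bindings[ip], mac))
              else:
                  bindings.setdefault(ip, mac)
          return alerts

      # Illustrative traffic: the gateway 192.168.1.1 is suddenly re-advertised
      observed = [
          ("192.168.1.1", "aa:bb:cc:dd:ee:01"),
          ("192.168.1.7", "aa:bb:cc:dd:ee:07"),
          ("192.168.1.1", "de:ad:be:ef:00:99"),   # possible spoofed reply
      ]
      print(detect_arp_spoofing(observed))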

  17. Computational methods for structural load and resistance modeling

    Science.gov (United States)

    Thacker, B. H.; Millwater, H. R.; Harren, S. V.

    1991-01-01

    An automated capability for computing structural reliability considering uncertainties in both load and resistance variables is presented. The computations are carried out using an automated Advanced Mean Value iteration algorithm (AMV+) with performance functions involving load and resistance variables obtained by both explicit and implicit methods. A complete description of the procedures used is given as well as several illustrative examples, verified by Monte Carlo analysis. In particular, the computational methods described in the paper are shown to be quite accurate and efficient for a material nonlinear structure considering material damage as a function of several primitive random variables. The results show clearly the effectiveness of the algorithms for computing the reliability of large-scale structural systems with a maximum number of resolutions.
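
    For intuition about the Monte Carlo verification mentioned above, the textbook load-resistance reliability problem with limit state g = R - S can be estimated as follows (the normal distributions and their parameters are illustrative assumptions):

      import numpy as np
      from math import erf, sqrt

      rng = np.random.default_rng(42)
      n = 1_000_000
      R = rng.normal(100.0, 10.0, n)     # resistance samples
      S = rng.normal(70.0, 12.0, n)      # load samples
      pf_mc = np.mean(R - S < 0.0)       # failure when load exceeds resistance

      # Analytic check: R - S is normal with mean 30 and std sqrt(10^2 + 12^2)
      beta = 30.0 / sqrt(10.0**2 + 12.0**2)
      pf_exact = 0.5 * (1 - erf(beta / sqrt(2)))
      print(pf_mc, pf_exact)             # both close to ~2.7e-2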

  18. Numerical evaluation of methods for computing tomographic projections

    International Nuclear Information System (INIS)

    Zhuang, W.; Gopal, S.S.; Hebert, T.J.

    1994-01-01

    Methods for computing forward/back projections of 2-D images can be viewed as numerical integration techniques. The accuracy of any ray-driven projection method can be improved by increasing the number of ray-paths that are traced per projection bin. The accuracy of pixel-driven projection methods can be increased by dividing each pixel into a number of smaller sub-pixels and projecting each sub-pixel. The authors compared four competing methods of computing forward/back projections: bilinear interpolation, ray-tracing, pixel-driven projection based upon sub-pixels, and pixel-driven projection based upon circular, rather than square, pixels. This latter method is equivalent to a fast, bi-nonlinear interpolation. These methods and the choice of the number of ray-paths per projection bin or the number of sub-pixels per pixel present a trade-off between computational speed and accuracy. To solve the problem of assessing backprojection accuracy, the analytical inverse Fourier transform of the ramp filtered forward projection of the Shepp and Logan head phantom is derived
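
    The speed-accuracy trade-off discussed above, tracing more ray-paths per projection bin, can be reproduced with a small parallel-beam forward projector based on bilinear interpolation; the geometry, phantom and sampling rule are illustrative simplifications, not the authors' implementations:

      import numpy as np

      def bilinear(img, xs, ys):
          """Bilinearly interpolate img at fractional pixel coords, 0 outside."""
          h, w = img.shape
          x0 = np.floor(xs).astype(int); y0 = np.floor(ys).astype(int)
          fx, fy = xs - x0, ys - y0
          out = np.zeros_like(xs, dtype=float)
          for dx, dy, wgt in ((0, 0, (1 - fx) * (1 - fy)), (1, 0, fx * (1 - fy)),
                              (0, 1, (1 - fx) * fy), (1, 1, fx * fy)):
              xi, yi = x0 + dx, y0 + dy
              ok = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
              out[ok] += wgt[ok] * img[yi[ok], xi[ok]]
          return out

      def forward_project(img, theta, n_bins, rays_per_bin=4):
          """Ray-driven parallel-beam projection at angle theta (radians)."""
          n = img.shape[0]
          c = (n - 1) / 2.0
          diag = n * np.sqrt(2)
          t = np.linspace(-c, c, n_bins * rays_per_bin)   # detector offsets
          s = np.linspace(-diag / 2, diag / 2, 2 * n)     # steps along each ray
          ct, st = np.cos(theta), np.sin(theta)
          xs = c + t[:, None] * ct - s[None, :] * st
          ys = c + t[:, None] * st + s[None, :] * ct
          line_integrals = bilinear(img, xs, ys).sum(axis=1) * (s[1] - s[0])
          # average the ray-paths traced within each detector bin
          return line_integrals.reshape(n_bins, rays_per_bin).mean(axis=1)

      img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0   # square phantom
      coarse = forward_project(img, np.pi / 6, 64, rays_per_bin=1)
      fine = forward_project(img, np.pi / 6, 64, rays_per_bin=8)
      print(float(np.abs(coarse - fine).max()))  # more rays/bin smooths the estimate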

  19. Computer methods in physics 250 problems with guided solutions

    CERN Document Server

    Landau, Rubin H

    2018-01-01

    Our future scientists and professionals must be conversant in computational techniques. In order to facilitate integration of computer methods into existing physics courses, this textbook offers a large number of worked examples and problems with fully guided solutions in Python as well as other languages (Mathematica, Java, C, Fortran, and Maple). It’s also intended as a self-study guide for learning how to use computer methods in physics. The authors include an introductory chapter on numerical tools and indication of computational and physics difficulty level for each problem.

  20. Proceedings of computational methods in materials science

    International Nuclear Information System (INIS)

    Mark, J.E. Glicksman, M.E.; Marsh, S.P.

    1992-01-01

    The Symposium on which this volume is based was conceived as a timely expression of some of the fast-paced developments occurring throughout materials science and engineering. It focuses particularly on those involving modern computational methods applied to model and predict the response of materials under a diverse range of physico-chemical conditions. The current easy access of many materials scientists in industry, government laboratories, and academe to high-performance computers has opened many new vistas for predicting the behavior of complex materials under realistic conditions. Some have even argued that modern computational methods in materials science and engineering are literally redefining the bounds of our knowledge from which we predict structure-property relationships, perhaps forever changing the historically descriptive character of the science and much of the engineering

  1. Iterated Gate Teleportation and Blind Quantum Computation.

    Science.gov (United States)

    Pérez-Delgado, Carlos A; Fitzsimons, Joseph F

    2015-06-05

    Blind quantum computation allows a user to delegate a computation to an untrusted server while keeping the computation hidden. A number of recent works have sought to establish bounds on the communication requirements necessary to implement blind computation, and a bound based on the no-programming theorem of Nielsen and Chuang has emerged as a natural limiting factor. Here we show that this constraint only holds in limited scenarios, and show how to overcome it using a novel method of iterated gate teleportations. This technique enables drastic reductions in the communication required for distributed quantum protocols, extending beyond the blind computation setting. Applied to blind quantum computation, this technique offers significant efficiency improvements, and in some scenarios offers an exponential reduction in communication requirements.

  2. Intelligent QoS routing algorithm based on improved AODV protocol for Ad Hoc networks

    Science.gov (United States)

    Huibin, Liu; Jun, Zhang

    2016-04-01

    Mobile Ad Hoc Networks play an increasingly important part in disaster relief, military battlefields and scientific exploration. However, routing difficulties are increasingly prominent due to the networks' inherent structure. This paper proposes an improved cuckoo-search-based Ad hoc On-Demand Distance Vector routing protocol (CSAODV). It carefully designs the optimal-route calculation method used by the protocol and the transmission mechanism for communication packets. By adding QoS constraints to the cuckoo search (CS) algorithm, the routes found conform to the specified bandwidth and delay requirements, and a balance is obtained among computational cost, bandwidth and delay. NS2 simulations of the protocol in three scenarios validate the feasibility and effectiveness of CSAODV. The results show that CSAODV adapts to changes in network topology better than AODV, effectively improving the packet delivery fraction, reducing network transmission delay, lowering the extra burden that control information places on the network, and improving routing efficiency.
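
    The flavour of the underlying optimizer can be sketched with a generic cuckoo search minimizing a toy route cost under a QoS penalty; the Levy-flight parameters, cost function and constraint below are illustrative assumptions, not the CSAODV formulation:

      import math, random

      def levy_step(rng, beta=1.5):
          """Mantegna's algorithm for a Levy-distributed step length."""
          sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
                   / (math.gamma((1 + beta) / 2) * beta
                      * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          u = rng.gauss(0, sigma)
          v = rng.gauss(0, 1)
          return u / abs(v) ** (1 / beta)

      def cost(x):
          # toy route metric: delay-like term plus a penalty when the
          # "bandwidth" coordinate drops below a required QoS threshold
          delay, bandwidth = x
          return delay ** 2 + (100.0 if bandwidth < 2.0 else 0.0) + (bandwidth - 3.0) ** 2

      def cuckoo_search(n_nests=15, iters=200, pa=0.25, seed=3):
          rng = random.Random(seed)
          nests = [[rng.uniform(-5, 5), rng.uniform(0, 6)] for _ in range(n_nests)]
          for _ in range(iters):
              best = min(nests, key=cost)
              for i, nest in enumerate(nests):
                  # Levy flight around the current nest, biased by the best
                  new = [n + 0.01 * levy_step(rng) * (n - b)
                         for n, b in zip(nest, best)]
                  if cost(new) < cost(nest):
                      nests[i] = new
              # abandon a fraction pa of the worst nests, rebuild them randomly
              nests.sort(key=cost)
              for i in range(int(pa * n_nests)):
                  nests[-1 - i] = [rng.uniform(-5, 5), rng.uniform(0, 6)]
          return min(nests, key=cost)

      print(cuckoo_search())   # approaches delay ~ 0, bandwidth ~ 3 (QoS met)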

  3. Protocol: A simple phenol-based method for 96-well extraction of high quality RNA from Arabidopsis

    Directory of Open Access Journals (Sweden)

    Coustham Vincent

    2011-03-01

    Full Text Available Abstract Background Many experiments in modern plant molecular biology require the processing of large numbers of samples for a variety of applications from mutant screens to the analysis of natural variants. A severe bottleneck to many such analyses is the acquisition of good yields of high quality RNA suitable for use in sensitive downstream applications such as real time quantitative reverse-transcription-polymerase chain reaction (real time qRT-PCR). Although several commercial kits are available for high-throughput RNA extraction in 96-well format, only one non-kit method has been described in the literature, using the commercial reagent TRIZOL. Results We describe an unusual phenomenon when using TRIZOL reagent with young Arabidopsis seedlings. This prompted us to develop a high-throughput RNA extraction protocol (HTP96) adapted from a well established phenol:chloroform-LiCl method (P:C-L) that is cheap, reliable and requires no specialist equipment. With this protocol 192 high quality RNA samples can be prepared in 96-well format in three hours (less than 1 minute per sample) with less than 1% loss of samples. We demonstrate that the RNA derived from this protocol is of high quality and suitable for use in real time qRT-PCR assays. Conclusion The development of the HTP96 protocol has vastly increased our sample throughput, allowing us to fully exploit the large sample capacity of modern real time qRT-PCR thermocyclers, now commonplace in many labs, and develop an effective high-throughput gene expression platform. We propose that the HTP96 protocol will significantly benefit any plant scientist with the task of obtaining hundreds of high quality RNA extractions.

  4. Flaws in current human training protocols for spontaneous Brain-Computer Interfaces: lessons learned from instructional design

    Directory of Open Access Journals (Sweden)

    Fabien eLotte

    2013-09-01

    Full Text Available While recent research on Brain-Computer Interfaces (BCI) has highlighted their potential for many applications, they remain barely used outside laboratories. The main reason is their lack of robustness. Indeed, with current BCI, mental state recognition is usually slow and often incorrect. Spontaneous BCI (i.e., mental imagery-based BCI) often rely on mutual learning efforts by the user and the machine, with BCI users learning to produce stable EEG patterns (spontaneous BCI control being widely acknowledged as a skill) while the computer learns to automatically recognize these EEG patterns, using signal processing. Most research so far was focused on signal processing, mostly neglecting the human in the loop. However, how well the user masters the BCI skill is also a key element explaining BCI robustness. Indeed, if the user is not able to produce stable and distinct EEG patterns, then no signal processing algorithm would be able to recognize them. Unfortunately, despite the importance of BCI training protocols, they have been scarcely studied so far, and used mostly unchanged for years. In this paper, we advocate that current human training approaches for spontaneous BCI are most likely inappropriate. We notably study instructional design literature in order to identify the key requirements and guidelines for a successful training procedure that promotes a good and efficient skill learning. This literature study highlights that current spontaneous BCI user training procedures satisfy very few of these requirements and hence are likely to be suboptimal. We therefore identify the flaws in BCI training protocols according to instructional design principles, at several levels: in the instructions provided to the user, in the tasks he/she has to perform, and in the feedback provided. For each level, we propose new research directions that are theoretically expected to address some of these flaws and to help users learn the BCI skill more efficiently.

  5. Computational methods in molecular imaging technologies

    CERN Document Server

    Gunjan, Vinit Kumar; Venkatesh, C; Amarnath, M

    2017-01-01

    This book highlights the experimental investigations that have been carried out on magnetic resonance imaging and computed tomography (MRI & CT) images using state-of-the-art Computational Image processing techniques, and tabulates the statistical values wherever necessary. In a very simple and straightforward way, it explains how image processing methods are used to improve the quality of medical images and facilitate analysis. It offers a valuable resource for researchers, engineers, medical doctors and bioinformatics experts alike.

  6. Computational and experimental methods for enclosed natural convection

    International Nuclear Information System (INIS)

    Larson, D.W.; Gartling, D.K.; Schimmel, W.P. Jr.

    1977-10-01

    Two computational procedures and one optical experimental procedure for studying enclosed natural convection are described. The finite-difference and finite-element numerical methods are developed and several sample problems are solved. Results obtained from the two computational approaches are compared. A temperature-visualization scheme using laser holographic interferometry is described, and results from this experimental procedure are compared with results from both numerical methods

  7. Unconditionally verifiable blind quantum computation

    Science.gov (United States)

    Fitzsimons, Joseph F.; Kashefi, Elham

    2017-07-01

    Blind quantum computing (BQC) allows a client to have a server carry out a quantum computation for them such that the client's input, output, and computation remain private. A desirable property for any BQC protocol is verification, whereby the client can verify with high probability whether the server has followed the instructions of the protocol or if there has been some deviation resulting in a corrupted output state. A verifiable BQC protocol can be viewed as an interactive proof system leading to consequences for complexity theory. We previously proposed [A. Broadbent, J. Fitzsimons, and E. Kashefi, in Proceedings of the 50th Annual Symposium on Foundations of Computer Science, Atlanta, 2009 (IEEE, Piscataway, 2009), p. 517] a universal and unconditionally secure BQC scheme where the client only needs to be able to prepare single qubits in separable states randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. In this paper we extend that protocol with additional functionality allowing blind computational basis measurements, which we use to construct another verifiable BQC protocol based on a different class of resource states. We rigorously prove that the probability of failing to detect an incorrect output is exponentially small in a security parameter, while resource overhead remains polynomial in this parameter. This resource state allows entangling gates to be performed between arbitrary pairs of logical qubits with only constant overhead. This is a significant improvement on the original scheme, which required that all computations to be performed must first be put into a nearest-neighbor form, incurring linear overhead in the number of qubits. Such an improvement has important consequences for efficiency and fault-tolerance thresholds.

  8. Computational methods for stellarator configurations

    International Nuclear Information System (INIS)

    Betancourt, O.

    1992-01-01

    This project had two main objectives. The first one was to continue to develop computational methods for the study of three dimensional magnetic confinement configurations. The second one was to collaborate and interact with researchers in the field who can use these techniques to study and design fusion experiments. The first objective has been achieved with the development of the spectral code BETAS and the formulation of a new variational approach for the study of magnetic island formation in a self consistent fashion. The code can compute the correct island width corresponding to the saturated island, a result shown by comparing the computed island with the results of unstable tearing modes in Tokamaks and with experimental results in the IMS Stellarator. In addition to studying three dimensional nonlinear effects in Tokamak configurations, these self consistent computed island equilibria will be used to study transport effects due to magnetic island formation and to nonlinearly bifurcated equilibria. The second objective was achieved through direct collaboration with Steve Hirshman at Oak Ridge, D. Anderson and R. Talmage at Wisconsin, as well as through participation in the Sherwood and APS meetings.

  9. Combinatorial methods with computer applications

    CERN Document Server

    Gross, Jonathan L

    2007-01-01

    Combinatorial Methods with Computer Applications provides in-depth coverage of recurrences, generating functions, partitions, and permutations, along with some of the most interesting graph and network topics, design constructions, and finite geometries. Requiring only a foundation in discrete mathematics, it can serve as the textbook in a combinatorial methods course or in a combined graph theory and combinatorics course. After an introduction to combinatorics, the book explores six systematic approaches within a comprehensive framework: sequences, solving recurrences, evaluating summation expressions

  10. [Sampling and measurement methods of the protocol design of the China Nine-Province Survey for blindness, visual impairment and cataract surgery].

    Science.gov (United States)

    Zhao, Jia-liang; Wang, Yu; Gao, Xue-cheng; Ellwein, Leon B; Liu, Hu

    2011-09-01

    To design the protocol of the China nine-province survey for blindness, visual impairment and cataract surgery, in order to evaluate the prevalence and main causes of blindness and visual impairment, and the prevalence and outcomes of cataract surgery. The protocol design began after accepting the task for the national survey for blindness, visual impairment and cataract surgery from the Department of Medicine, Ministry of Health, China, in November 2005. The protocols of the Beijing Shunyi Eye Study in 1996 and the Guangdong Doumen County Eye Study in 1997, both supported by the World Health Organization, were taken as the basis for the protocol design. The relevant experts were invited to discuss and review the draft protocol. An international advisory committee was established to examine and approve the draft protocol. Finally, the survey protocol was checked and approved by the Department of Medicine, Ministry of Health, China and the Prevention of Blindness and Deafness Program, WHO. The survey protocol was designed according to the characteristics and the scale of the survey. The contents of the protocol included determination of the target population and survey sites, calculation of the sample size, design of the random sampling, composition and organization of the survey teams, determination of the examinees, the flowchart of the field work, survey items and methods, diagnostic criteria for blindness and moderate and severe visual impairment, measures of quality control, and methods of data management. The designed protocol became the standard and practical protocol for the survey to evaluate the prevalence and main causes of blindness and visual impairment, and the prevalence and outcomes of cataract surgery.

  11. Hybrid Monte Carlo methods in computational finance

    NARCIS (Netherlands)

    Leitao Rodriguez, A.

    2017-01-01

    Monte Carlo methods are highly appreciated and intensively employed in computational finance in the context of financial derivatives valuation or risk management. The method offers valuable advantages like flexibility, easy interpretation and straightforward implementation. Furthermore, the
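
    As a concrete instance of the Monte Carlo valuation referred to here, a minimal European call pricer under geometric Brownian motion (plain Monte Carlo with illustrative market parameters, not the hybrid variants studied in the thesis):

      import numpy as np

      def mc_european_call(s0, strike, rate, sigma, maturity,
                           n_paths=1_000_000, seed=7):
          """Price a European call by simulating terminal GBM prices."""
          rng = np.random.default_rng(seed)
          z = rng.standard_normal(n_paths)
          st = s0 * np.exp((rate - 0.5 * sigma**2) * maturity
                           + sigma * np.sqrt(maturity) * z)
          payoff = np.maximum(st - strike, 0.0)
          return np.exp(-rate * maturity) * payoff.mean()

      # Illustrative parameters; Black-Scholes gives ~10.45 for this setting
      print(mc_european_call(s0=100, strike=100, rate=0.05, sigma=0.2, maturity=1.0))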

  12. SPECT/CT workflow and imaging protocols

    Energy Technology Data Exchange (ETDEWEB)

    Beckers, Catherine [University Hospital of Liege, Division of Nuclear Medicine and Oncological Imaging, Department of Medical Physics, Liege (Belgium); Hustinx, Roland [University Hospital of Liege, Division of Nuclear Medicine and Oncological Imaging, Department of Medical Physics, Liege (Belgium); Domaine Universitaire du Sart Tilman, Service de Medecine Nucleaire et Imagerie Oncologique, CHU de Liege, Liege (Belgium)

    2014-05-15

    Introducing a hybrid imaging method such as single photon emission computed tomography (SPECT)/CT greatly alters the routine in the nuclear medicine department. It requires designing new workflow processes and the revision of original scheduling process and imaging protocols. In addition, the imaging protocol should be adapted for each individual patient, so that performing CT is fully justified and the CT procedure is fully tailored to address the clinical issue. Such refinements often occur before the procedure is started but may be required at some intermediate stage of the procedure. Furthermore, SPECT/CT leads in many instances to a new partnership with the radiology department. This article presents practical advice and highlights the key clinical elements which need to be considered to help understand the workflow process of SPECT/CT and optimise imaging protocols. The workflow process using SPECT/CT is complex in particular because of its bimodal character, the large spectrum of stakeholders, the multiplicity of their activities at various time points and the need for real-time decision-making. With help from analytical tools developed for quality assessment, the workflow process using SPECT/CT may be separated into related, but independent steps, each with its specific human and material resources to use as inputs or outputs. This helps identify factors that could contribute to failure in routine clinical practice. At each step of the process, practical aspects to optimise imaging procedure and protocols are developed. A decision-making algorithm for justifying each CT indication as well as the appropriateness of each CT protocol is the cornerstone of routine clinical practice using SPECT/CT. In conclusion, implementing hybrid SPECT/CT imaging requires new ways of working. It is highly rewarding from a clinical perspective, but it also proves to be a daily challenge in terms of management. (orig.)

  13. SPECT/CT workflow and imaging protocols

    International Nuclear Information System (INIS)

    Beckers, Catherine; Hustinx, Roland

    2014-01-01

    Introducing a hybrid imaging method such as single photon emission computed tomography (SPECT)/CT greatly alters the routine in the nuclear medicine department. It requires designing new workflow processes and the revision of original scheduling process and imaging protocols. In addition, the imaging protocol should be adapted for each individual patient, so that performing CT is fully justified and the CT procedure is fully tailored to address the clinical issue. Such refinements often occur before the procedure is started but may be required at some intermediate stage of the procedure. Furthermore, SPECT/CT leads in many instances to a new partnership with the radiology department. This article presents practical advice and highlights the key clinical elements which need to be considered to help understand the workflow process of SPECT/CT and optimise imaging protocols. The workflow process using SPECT/CT is complex in particular because of its bimodal character, the large spectrum of stakeholders, the multiplicity of their activities at various time points and the need for real-time decision-making. With help from analytical tools developed for quality assessment, the workflow process using SPECT/CT may be separated into related, but independent steps, each with its specific human and material resources to use as inputs or outputs. This helps identify factors that could contribute to failure in routine clinical practice. At each step of the process, practical aspects to optimise imaging procedure and protocols are developed. A decision-making algorithm for justifying each CT indication as well as the appropriateness of each CT protocol is the cornerstone of routine clinical practice using SPECT/CT. In conclusion, implementing hybrid SPECT/CT imaging requires new ways of working. It is highly rewarding from a clinical perspective, but it also proves to be a daily challenge in terms of management. (orig.)

  14. Testing and Validation of Computational Methods for Mass Spectrometry.

    Science.gov (United States)

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-04

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets (http://compms.org/RefData) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.

  15. Geometric computations with interval and new robust methods applications in computer graphics, GIS and computational geometry

    CERN Document Server

    Ratschek, H

    2003-01-01

    This undergraduate and postgraduate text will familiarise readers with interval arithmetic and related tools to gain reliable and validated results and logically correct decisions for a variety of geometric computations, plus the means for alleviating the effects of the errors. It also considers computations on geometric point-sets, which are neither robust nor reliable in processing with standard methods. The authors provide two effective tools for obtaining correct results: (a) interval arithmetic, and (b) ESSA, the new powerful algorithm which improves many geometric computations and makes th
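
    The kind of reliable decision-making interval arithmetic enables can be sketched with a tiny interval class applied to the classic orientation test; this is a minimal illustration of the idea, not the ESSA algorithm, and the outward rounding a production implementation needs is omitted:

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Interval:
          """Closed interval [lo, hi]; outward rounding omitted for brevity."""
          lo: float
          hi: float

          def __add__(self, other):
              return Interval(self.lo + other.lo, self.hi + other.hi)

          def __sub__(self, other):
              return Interval(self.lo - other.hi, self.hi - other.lo)

          def __mul__(self, other):
              products = (self.lo * other.lo, self.lo * other.hi,
                          self.hi * other.lo, self.hi * other.hi)
              return Interval(min(products), max(products))

      def orientation_sign(ax, ay, bx, by, cx, cy, eps):
          """Sign of the determinant deciding whether C lies left/right of AB.

          Coordinates carry an uncertainty eps; a result interval containing
          zero means the test is undecidable at this precision.
          """
          iv = lambda x: Interval(x - eps, x + eps)
          det = ((iv(bx) - iv(ax)) * (iv(cy) - iv(ay))
                 - (iv(by) - iv(ay)) * (iv(cx) - iv(ax)))
          if det.lo > 0: return +1
          if det.hi < 0: return -1
          return 0  # ambiguous: fall back to exact arithmetic

      print(orientation_sign(0, 0, 1, 0, 0.5, 1e-12, eps=1e-9))  # 0 -> ambiguous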

  16. An Authentication Protocol for Future Sensor Networks.

    Science.gov (United States)

    Bilal, Muhammad; Kang, Shin-Gak

    2017-04-28

    Authentication is one of the essential security services in Wireless Sensor Networks (WSNs) for ensuring secure data sessions. Sensor node authentication ensures the confidentiality and validity of data collected by the sensor node, whereas user authentication guarantees that only legitimate users can access the sensor data. In a mobile WSN, sensor and user nodes move across the network and exchange data with multiple nodes, thus experiencing the authentication process multiple times. The integration of WSNs with the Internet of Things (IoT) brings forth a new kind of WSN architecture along with stricter security requirements; for instance, a sensor node or a user node may need to establish multiple concurrent secure data sessions. With concurrent data sessions, the frequency of the re-authentication process increases in proportion to the number of concurrent connections. Moreover, to establish multiple data sessions, it is essential that a protocol participant have the capability of running multiple instances of the protocol run, which makes the security issue even more challenging. The currently available authentication protocols were designed for the autonomous WSN and do not account for the above requirements. Hence, ensuring a lightweight and efficient authentication protocol has become more crucial. In this paper, we present a novel, lightweight and efficient key exchange and authentication protocol suite called the Secure Mobile Sensor Network (SMSN) Authentication Protocol. In the SMSN a mobile node goes through an initial authentication procedure and receives a re-authentication ticket from the base station. Later a mobile node can use this re-authentication ticket when establishing multiple data exchange sessions and/or when moving across the network. This scheme reduces the communication and computational complexity of the authentication process. We proved the strength of our protocol with rigorous security analysis (including formal analysis using the BAN logic).
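
    The ticket-based re-authentication idea described above can be sketched with a generic MAC-protected ticket; the JSON encoding, lifetime and key handling below are illustrative assumptions, not the SMSN protocol itself:

      import hashlib, hmac, json, os, time

      BS_KEY = os.urandom(32)   # base station's secret key (illustrative)

      def issue_ticket(node_id, session_key, lifetime=3600):
          """Base station issues a re-authentication ticket after initial auth."""
          body = json.dumps({"node": node_id,
                             "key": session_key.hex(),
                             "expires": int(time.time()) + lifetime}).encode()
          tag = hmac.new(BS_KEY, body, hashlib.sha256).digest()
          return body, tag

      def verify_ticket(body, tag):
          """Any party holding BS_KEY can re-authenticate the node cheaply."""
          expected = hmac.new(BS_KEY, body, hashlib.sha256).digest()
          if not hmac.compare_digest(tag, expected):
              return None
          claims = json.loads(body)
          return claims if claims["expires"] > time.time() else None

      body, tag = issue_ticket("sensor-42", os.urandom(16))
      print(verify_ticket(body, tag)["node"])   # 'sensor-42'
      print(verify_ticket(body + b" ", tag))    # None: tampered ticket rejected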

  17. Computational methods for three-dimensional microscopy reconstruction

    CERN Document Server

    Frank, Joachim

    2014-01-01

    Approaches to the recovery of three-dimensional information on a biological object, which are often formulated or implemented initially in an intuitive way, are concisely described here based on physical models of the object and the image-formation process. Both three-dimensional electron microscopy and X-ray tomography can be captured in the same mathematical framework, leading to closely-related computational approaches, but the methodologies differ in detail and hence pose different challenges. The editors of this volume, Gabor T. Herman and Joachim Frank, are experts in the respective methodologies and present research at the forefront of biological imaging and structural biology.   Computational Methods for Three-Dimensional Microscopy Reconstruction will serve as a useful resource for scholars interested in the development of computational methods for structural biology and cell biology, particularly in the area of 3D imaging and modeling.

  18. GRAPH-BASED POST INCIDENT INTERNAL AUDIT METHOD OF COMPUTER EQUIPMENT

    Directory of Open Access Journals (Sweden)

    I. S. Pantiukhin

    2016-05-01

    Full Text Available A graph-based post-incident internal audit method for computer equipment is proposed. The essence of the proposed solution is the establishment of relationships among hard disk dumps (images), RAM dumps and network captures. The method is intended for describing the properties of an information security incident during the internal post-incident audit of computer equipment. Hard disk dumps are acquired and formed at the first step. This is followed by the separation of these dumps into a set of components, which includes a large set of attributes that forms the basis of the graph. The separated data is recorded into a non-relational database management system (NoSQL) adapted for graph storage, fast access and processing. A dump-linking method is applied at the final step. The presented method enables a human expert in information security or computer forensics to perform a more precise and informative internal audit of computer equipment. The proposed method reduces the time spent on an internal audit of computer equipment while increasing the accuracy and informativeness of such an audit. The method has development potential and can be applied, along with other components, to the tasks of user identification and computer forensics.

  19. Overall evaluability of low dose protocol for computed tomography angiography of thoracic aorta using 80 kV and iterative reconstruction algorithm using different concentration contrast media.

    Science.gov (United States)

    Annoni, Andrea Daniele; Mancini, Maria E; Andreini, Daniele; Formenti, Alberto; Mushtaq, Saima; Nobili, Enrica; Guglielmo, Marco; Baggiano, Andrea; Conte, Edoardo; Pepi, Mauro

    2017-10-01

    Multidetector Computed Tomography Angiography (MDCTA) is presently the imaging modality of choice for aortic disease. However, the effective radiation dose and the risk related to the use of contrast agents associated with MDCTA are issues of concern. The aim of this study was to assess the image quality of low dose ECG-gated MDCTA of the thoracic aorta using different concentration contrast media without a tailored injection protocol. Two hundred patients were randomised into four different scan protocols: Group A (Iodixanol 320 and 80 kVp tube voltage), Group B (Iodixanol 320 and 100 kVp tube voltage), Group C (Iomeprol 400 and 80 kVp tube voltage) and Group D (Iomeprol 400 and 100 kVp tube voltage). Image quality, noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and effective dose (ED) were compared among groups. There were no significant differences in image noise, SNR and CNR between groups with the same tube voltage. Significant differences in SNR and CNR were found between groups using 80 kVp and groups using 100 kVp, but without differences in terms of image quality. ED was significantly lower in the groups scanned at 80 kVp. Multidetector Computed Tomography Angiography protocols using 80 kVp and low concentration contrast media are feasible without the need for tailored injection protocols. © 2017 The Royal Australian and New Zealand College of Radiologists.

  20. Domain decomposition methods and parallel computing

    International Nuclear Information System (INIS)

    Meurant, G.

    1991-01-01

    In this paper, we show how to efficiently solve large linear systems on parallel computers. These linear systems arise from the discretization of scientific computing problems described by systems of partial differential equations. We show how to get a discrete finite dimensional system from the continuous problem, and the chosen conjugate gradient iterative algorithm is briefly described. Then, the different kinds of parallel architectures are reviewed and their advantages and deficiencies are emphasized. We sketch the problems found in programming the conjugate gradient method on parallel computers. For this algorithm to be efficient on parallel machines, domain decomposition techniques are introduced. We give results of numerical experiments showing that these techniques allow a good rate of convergence for the conjugate gradient algorithm as well as computational speeds in excess of a billion floating point operations per second. (author). 5 refs., 11 figs., 2 tabs., 1 inset
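
    The conjugate gradient algorithm named above, shown here in its serial textbook form without the domain decomposition preconditioning that is the paper's subject (the 1-D Poisson test matrix is an illustrative stand-in for a PDE discretization):

      import numpy as np

      def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
          """Solve A x = b for symmetric positive definite A (textbook CG)."""
          n = len(b)
          x = np.zeros(n)
          r = b.copy()          # residual b - A x (x starts at zero)
          p = r.copy()          # search direction
          rs = r @ r
          for _ in range(max_iter or n):
              Ap = A @ p
              alpha = rs / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs) * p
              rs = rs_new
          return x

      # 1-D Poisson matrix, a typical PDE discretization (illustrative)
      n = 100
      A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      b = np.ones(n)
      x = conjugate_gradient(A, b)
      print(np.allclose(A @ x, b))   # True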

  1. Application of statistical method for FBR plant transient computation

    International Nuclear Information System (INIS)

    Kikuchi, Norihiro; Mochizuki, Hiroyasu

    2014-01-01

    Highlights: • A statistical method with a large trial number up to 10,000 is applied to the plant system analysis. • A turbine trip test conducted at the “Monju” reactor is selected as a plant transient. • A reduction method for the number of trials is discussed. • The result with a reduced trial number can express the base regions of the computed distribution. -- Abstract: It is obvious that design tolerances, errors included in operation, and statistical errors in empirical correlations affect the transient behavior. The purpose of the present study is to apply the above-mentioned statistical errors to a plant system computation in order to evaluate the statistical distribution contained in the transient evolution. The selected computation case is the turbine trip test conducted at 40% electric power of the prototype fast reactor “Monju”. All of the heat transport systems of “Monju” are modeled with the NETFLOW++ system code, which has been validated using the plant transient tests of the experimental fast reactor Joyo and of “Monju”. The effects of parameters on upper plenum temperature are confirmed by sensitivity analyses, and dominant parameters are chosen. The statistical errors are applied to each computation deck by using a pseudorandom number and the Monte-Carlo method. The dSFMT (Double precision SIMD-oriented Fast Mersenne Twister), which is a further development of the Mersenne Twister (MT), is adopted as the pseudorandom number generator. In the present study, uniform random numbers are generated by dSFMT, and these random numbers are transformed to the normal distribution by the Box–Muller method. Ten thousand different computations are performed at once. In every computation case, the steady-state calculation is performed for 12,000 s, and the transient calculation is performed for 4000 s. For the purpose of the present statistical computation, it is important that the base regions of the distribution functions be calculated precisely. A large number of
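
    The uniform-to-normal transformation named in the abstract is the Box–Muller method; a minimal sketch using Python's stock generator in place of dSFMT:

      import math, random

      def box_muller(rng):
          """Transform two uniform variates into two independent N(0,1) variates."""
          u1 = 1.0 - rng.random()          # in (0, 1]: avoids log(0)
          u2 = rng.random()
          r = math.sqrt(-2.0 * math.log(u1))
          return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

      rng = random.Random(2014)
      samples = [z for _ in range(5000) for z in box_muller(rng)]
      mean = sum(samples) / len(samples)
      var = sum((z - mean) ** 2 for z in samples) / len(samples)
      print(round(mean, 2), round(var, 2))   # close to 0 and 1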

  2. Computational simulation in architectural and environmental acoustics methods and applications of wave-based computation

    CERN Document Server

    Sakamoto, Shinichi; Otsuru, Toru

    2014-01-01

    This book reviews a variety of methods for wave-based acoustic simulation and recent applications to architectural and environmental acoustic problems. Following an introduction providing an overview of computational simulation of sound environment, the book is in two parts: four chapters on methods and four chapters on applications. The first part explains the fundamentals and advanced techniques for three popular methods, namely, the finite-difference time-domain method, the finite element method, and the boundary element method, as well as alternative time-domain methods. The second part demonstrates various applications to room acoustics simulation, noise propagation simulation, acoustic property simulation for building components, and auralization. This book is a valuable reference that covers the state of the art in computational simulation for architectural and environmental acoustics.  

  3. Principles of Protocol Design

    DEFF Research Database (Denmark)

    Sharp, Robin

    This is a new and updated edition of a book first published in 1994. The book introduces the reader to the principles used in the construction of a large range of modern data communication protocols, as used in distributed computer systems of all kinds. The approach taken is rather a formal one...

  4. Genomics protocols [Methods in molecular biology, v. 175

    National Research Council Canada - National Science Library

    Starkey, Michael P; Elaswarapu, Ramnath

    2001-01-01

    .... Drawing on emerging technologies in the fields of bioinformatics and proteomics, these protocols cover not only those traditionally recognized as genomics, but also early therapeutic approaches...

  5. BLUES function method in computational physics

    Science.gov (United States)

    Indekeu, Joseph O.; Müller-Nedebock, Kristian K.

    2018-04-01

    We introduce a computational method in physics that goes ‘beyond linear use of equation superposition’ (BLUES). A BLUES function is defined as a solution of a nonlinear differential equation (DE) with a delta source that is at the same time a Green’s function for a related linear DE. For an arbitrary source, the BLUES function can be used to construct an exact solution to the nonlinear DE with a different, but related source. Alternatively, the BLUES function can be used to construct an approximate piecewise analytical solution to the nonlinear DE with an arbitrary source. For this alternative use the related linear DE need not be known. The method is illustrated in a few examples using analytical calculations and numerical computations. Areas for further applications are suggested.

  6. Recent Advances in Computational Methods for Nuclear Magnetic Resonance Data Processing

    KAUST Repository

    Gao, Xin

    2013-01-11

    Although three-dimensional protein structure determination using nuclear magnetic resonance (NMR) spectroscopy is a computationally costly and tedious process that would benefit from advanced computational techniques, it has not garnered much research attention from specialists in bioinformatics and computational biology. In this paper, we review recent advances in computational methods for NMR protein structure determination. We summarize the advantages of and bottlenecks in the existing methods and outline some open problems in the field. We also discuss current trends in NMR technology development and suggest directions for research on future computational methods for NMR.

  7. SU-F-R-40: Robustness Test of Computed Tomography Textures of Lung Tissues to Varying Scanning Protocols Using a Realistic Phantom Environment

    International Nuclear Information System (INIS)

    Lee, S; Markel, D; Hegyi, G; El Naqa, I

    2016-01-01

    Purpose: The reliability of computed tomography (CT) textures is an important element of radiomics analysis. This study investigates the dependency of lung CT textures on different breathing phases and changes in CT image acquisition protocols in a realistic phantom setting. Methods: We investigated 11 CT texture features for radiation-induced lung disease from 3 categories (first-order, grey level co-occurrence matrix (GLCM), and Law’s filter). A biomechanical swine lung phantom was scanned at two breathing phases (inhale/exhale) and two scanning protocols set for PET/CT and diagnostic CT scanning. Lung volumes acquired from the CT images were divided into 2-dimensional sub-regions with a grid spacing of 31 mm. The distribution of the evaluated texture features from these sub-regions was compared between the two scanning protocols and two breathing phases. The significance of each factor on feature values was tested at the 95% significance level using an analysis of covariance (ANCOVA) model with interaction terms included. Robustness of a feature to a scanning factor was defined as non-significant dependence on the factor. Results: Three GLCM textures (variance, sum entropy, difference entropy) were robust to breathing changes. Two GLCM (variance, sum entropy) and 3 Law’s filter textures (S5L5, E5L5, W5L5) were robust to scanner changes. Moreover, the two GLCM textures (variance, sum entropy) were consistent across all 4 scanning conditions. First-order features, especially Hounsfield unit intensity features, presented the most drastic variation up to 39%. Conclusion: Amongst the studied features, GLCM and Law’s filter texture features were more robust than first-order features. However, the majority of the features were modified by either breathing phase or scanner changes, suggesting a need for calibration when retrospectively comparing scans obtained at different conditions. Further investigation is necessary to identify the sensitivity of individual image
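
    Three of the GLCM features found robust above (variance, sum entropy, difference entropy) can be computed from first principles as follows; the quantization, offset and random test patch are illustrative assumptions, not the study's preprocessing:

      import numpy as np

      def glcm(patch, levels=8, dx=1, dy=0):
          """Grey level co-occurrence matrix for one offset, normalized to sum 1."""
          q = np.minimum((patch.astype(float) / patch.max() * levels).astype(int),
                         levels - 1)               # quantize to `levels` bins
          m = np.zeros((levels, levels))
          h, w = q.shape
          for y in range(h - dy):
              for x in range(w - dx):
                  m[q[y, x], q[y + dy, x + dx]] += 1
          return m / m.sum()

      def glcm_features(m):
          """Haralick-style variance, sum entropy and difference entropy."""
          levels = m.shape[0]
          i, j = np.indices(m.shape)
          mu = (i * m).sum()
          variance = ((i - mu) ** 2 * m).sum()
          p_sum = np.array([m[i + j == k].sum() for k in range(2 * levels - 1)])
          p_diff = np.array([m[abs(i - j) == k].sum() for k in range(levels)])
          ent = lambda p: -(p[p > 0] * np.log2(p[p > 0])).sum()
          return {"variance": variance,
                  "sum_entropy": ent(p_sum),
                  "difference_entropy": ent(p_diff)}

      rng = np.random.default_rng(0)
      patch = rng.integers(0, 256, (31, 31))   # stand-in for a CT sub-region
      print(glcm_features(glcm(patch)))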

  8. SU-F-R-40: Robustness Test of Computed Tomography Textures of Lung Tissues to Varying Scanning Protocols Using a Realistic Phantom Environment

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S; Markel, D; Hegyi, G [Medical Physics Unit, McGill University, Montreal, Quebec (Canada); El Naqa, I [University of Michigan, Ann Arbor, MI (United States)

    2016-06-15

    Purpose: The reliability of computed tomography (CT) textures is an important element of radiomics analysis. This study investigates the dependency of lung CT textures on different breathing phases and changes in CT image acquisition protocols in a realistic phantom setting. Methods: We investigated 11 CT texture features for radiation-induced lung disease from 3 categories (first-order, grey level co-occurrence matrix (GLCM), and Law’s filter). A biomechanical swine lung phantom was scanned at two breathing phases (inhale/exhale) and two scanning protocols set for PET/CT and diagnostic CT scanning. Lung volumes acquired from the CT images were divided into 2-dimensional sub-regions with a grid spacing of 31 mm. The distribution of the evaluated texture features from these sub-regions was compared between the two scanning protocols and two breathing phases. The significance of each factor on feature values was tested at the 95% significance level using an analysis of covariance (ANCOVA) model with interaction terms included. Robustness of a feature to a scanning factor was defined as non-significant dependence on the factor. Results: Three GLCM textures (variance, sum entropy, difference entropy) were robust to breathing changes. Two GLCM (variance, sum entropy) and 3 Law’s filter textures (S5L5, E5L5, W5L5) were robust to scanner changes. Moreover, the two GLCM textures (variance, sum entropy) were consistent across all 4 scanning conditions. First-order features, especially Hounsfield unit intensity features, presented the most drastic variation up to 39%. Conclusion: Amongst the studied features, GLCM and Law’s filter texture features were more robust than first-order features. However, the majority of the features were modified by either breathing phase or scanner changes, suggesting a need for calibration when retrospectively comparing scans obtained at different conditions. Further investigation is necessary to identify the sensitivity of individual image

  9. Electromagnetic field computation by network methods

    CERN Document Server

    Felsen, Leopold B; Russer, Peter

    2009-01-01

    This monograph proposes a systematic and rigorous treatment of electromagnetic field representations in complex structures. The book presents new strong models by combining important computational methods. This is the last book of the late Leopold Felsen.

  10. Computational botany methods for automated species identification

    CERN Document Server

    Remagnino, Paolo; Wilkin, Paul; Cope, James; Kirkup, Don

    2017-01-01

    This book discusses innovative methods for mining information from images of plants, especially leaves, and highlights the diagnostic features that can be implemented in fully automatic systems for identifying plant species. Adopting a multidisciplinary approach, it explores the problem of plant species identification, covering both the concepts of taxonomy and morphology. It then provides an overview of morphometrics, including the historical background and the main steps in the morphometric analysis of leaves together with a number of applications. The core of the book focuses on novel diagnostic methods for plant species identification developed from a computer scientist’s perspective. It then concludes with a chapter on the characterization of botanists' visions, which highlights important cognitive aspects that can be implemented in a computer system to more accurately replicate the human expert’s fixation process. The book not only represents an authoritative guide to advanced computational tools fo...

  11. Computation of saddle-type slow manifolds using iterative methods

    DEFF Research Database (Denmark)

    Kristiansen, Kristian Uldall

    2015-01-01

    This paper presents an alternative approach for the computation of trajectory segments on slow manifolds of saddle type. This approach is based on iterative methods rather than collocation-type methods. Compared to collocation methods, which require mesh refinements to ensure uniform convergence with respect to the timescale separation parameter, appropriate estimates are directly attainable using the method of this paper. The method is applied to several examples, including a model for a pair of neurons coupled by reciprocal inhibition with two slow and two fast variables, and the computation of homoclinic connections in the FitzHugh-Nagumo system.

  12. Constant round group key agreement protocols: A comparative study

    NARCIS (Netherlands)

    Makri, E.; Konstantinou, Elisavet

    2011-01-01

    The scope of this paper is to review and evaluate all constant round Group Key Agreement (GKA) protocols proposed so far in the literature. We have gathered all GKA protocols that require 1, 2, 3, 4 and 5 rounds and examined their efficiency. In particular, we calculated each protocol’s computation and communication costs.

  13. Zero-Knowledge Protocols and Multiparty Computation

    DEFF Research Database (Denmark)

    Pastro, Valerio

    The protocols are secure against a dishonest majority, in which all players but one are controlled by the adversary. In Chapter 5 we present both the preprocessing and the online phase of [DKL+13], while in Chapter 2 we describe only the preprocessing phase of [DPSZ12], since the combination of this preprocessing phase with the online phase of [DKL+13] [...] The online phase is based on information-theoretic message authentication codes, requires only a linear amount of data from the preprocessing, and improves on the number of field multiplications needed to perform one secure multiplication (linear, instead of quadratic as in earlier work). The preprocessing phase in Chapter 5 comes in an actively secure flavour and in a covertly secure one, both of which compare favourably to previous work in terms of efficiency and provable security. Moreover, the covertly secure solution includes a key generation protocol that allows players to obtain a public key and shares of a corresponding secret key.

  14. Multi-server blind quantum computation over collective-noise channels

    Science.gov (United States)

    Xiao, Min; Liu, Lin; Song, Xiuli

    2018-03-01

    Blind quantum computation (BQC) enables ordinary clients to securely outsource their computation task to costly quantum servers. Besides two essential properties, namely correctness and blindness, practical BQC protocols should also make clients as classical as possible and tolerate faults from nonideal quantum channels. In this paper, using logical Bell states as the quantum resource, we propose multi-server BQC protocols over a collective-dephasing noise channel and a collective-rotation noise channel, respectively. The proposed protocols permit a completely or almost classical client, meet the correctness and blindness requirements of a BQC protocol, and are typically practical BQC protocols.

  15. Secure Two-Party Computation with Low Communication

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Faust, Sebastian; Hazay, Carmit

    2011-01-01

    We propose a 2-party UC-secure computation protocol that can compute any function securely. The protocol requires only two messages, communication that is poly-logarithmic in the size of the circuit description of the function, and the workload for one of the parties is also only poly-logarithmic...

  16. Analysis of limiting information characteristics of quantum-cryptography protocols

    International Nuclear Information System (INIS)

    Sych, D V; Grishanin, Boris A; Zadkov, Viktor N

    2005-01-01

    The problem of increasing the critical error rate of quantum-cryptography protocols by varying a set of letters in a quantum alphabet for space of a fixed dimensionality is studied. Quantum alphabets forming regular polyhedra on the Bloch sphere and the continual alphabet equally including all the quantum states are considered. It is shown that, in the absence of basis reconciliation, a protocol with the tetrahedral alphabet has the highest critical error rate among the protocols considered, while after the basis reconciliation, a protocol with the continual alphabet possesses the highest critical error rate. (quantum optics and quantum computation)

  17. Computational Methods for Modeling Aptamers and Designing Riboswitches

    Directory of Open Access Journals (Sweden)

    Sha Gong

    2017-11-01

    Full Text Available Riboswitches, which are located within certain noncoding RNA regions, function as genetic “switches”, regulating when and where genes are expressed in response to certain ligands. Understanding the numerous functions of riboswitches requires computational models to predict the structures and structural changes of the aptamer domains. Although aptamers often form complex structures, computational approaches such as RNAComposer and Rosetta have already been applied to model the tertiary (three-dimensional, 3D) structure of several aptamers. As structural changes in aptamers must be achieved within a certain time window for effective regulation, kinetics is another key point for understanding aptamer function in riboswitch-mediated gene regulation. The coarse-grained self-organized polymer (SOP) model using Langevin dynamics simulation has been successfully developed to investigate the folding kinetics of aptamers, while their co-transcriptional folding kinetics can be modeled by the helix-based computational method and the BarMap approach. Based on the known aptamers, the web server Riboswitch Calculator and other theoretical methods provide a new tool to design synthetic riboswitches. This review presents an overview of these computational methods for modeling the structure and kinetics of riboswitch aptamers and for designing riboswitches.

  18. Efficient Multi-Party Computation over Rings

    DEFF Research Database (Denmark)

    Cramer, Ronald; Fehr, Serge; Ishai, Yuval

    2003-01-01

    Secure multi-party computation (MPC) is an active research area, and a wide range of literature can be found nowadays suggesting improvements and generalizations of existing protocols in various directions. However, all current techniques for secure MPC apply to functions that are represented by (boolean or arithmetic) circuits over finite fields. We are motivated by two limitations of these techniques: – Generality. Existing protocols do not apply to computation over more general algebraic structures (except via a brute-force simulation of computation in these structures). – Efficiency. The best known [...] We demonstrate the usefulness of the above results by presenting a novel application of MPC over (non-field) rings to the round-efficient secure computation of the maximum function. Basic Research in Computer Science (www.brics.dk), funded by the Danish National Research Foundation.

  19. In silico toxicology protocols.

    Science.gov (United States)

    Myatt, Glenn J; Ahlberg, Ernst; Akahori, Yumi; Allen, David; Amberg, Alexander; Anger, Lennart T; Aptula, Aynur; Auerbach, Scott; Beilke, Lisa; Bellion, Phillip; Benigni, Romualdo; Bercu, Joel; Booth, Ewan D; Bower, Dave; Brigo, Alessandro; Burden, Natalie; Cammerer, Zoryana; Cronin, Mark T D; Cross, Kevin P; Custer, Laura; Dettwiler, Magdalena; Dobo, Krista; Ford, Kevin A; Fortin, Marie C; Gad-McDonald, Samantha E; Gellatly, Nichola; Gervais, Véronique; Glover, Kyle P; Glowienke, Susanne; Van Gompel, Jacky; Gutsell, Steve; Hardy, Barry; Harvey, James S; Hillegass, Jedd; Honma, Masamitsu; Hsieh, Jui-Hua; Hsu, Chia-Wen; Hughes, Kathy; Johnson, Candice; Jolly, Robert; Jones, David; Kemper, Ray; Kenyon, Michelle O; Kim, Marlene T; Kruhlak, Naomi L; Kulkarni, Sunil A; Kümmerer, Klaus; Leavitt, Penny; Majer, Bernhard; Masten, Scott; Miller, Scott; Moser, Janet; Mumtaz, Moiz; Muster, Wolfgang; Neilson, Louise; Oprea, Tudor I; Patlewicz, Grace; Paulino, Alexandre; Lo Piparo, Elena; Powley, Mark; Quigley, Donald P; Reddy, M Vijayaraj; Richarz, Andrea-Nicole; Ruiz, Patricia; Schilter, Benoit; Serafimova, Rositsa; Simpson, Wendy; Stavitskaya, Lidiya; Stidl, Reinhard; Suarez-Rodriguez, Diana; Szabo, David T; Teasdale, Andrew; Trejo-Martin, Alejandra; Valentin, Jean-Pierre; Vuorinen, Anna; Wall, Brian A; Watts, Pete; White, Angela T; Wichard, Joerg; Witt, Kristine L; Woolley, Adam; Woolley, David; Zwickl, Craig; Hasselgren, Catrin

    2018-04-17

    The present publication surveys several applications of in silico (i.e., computational) toxicology approaches across different industries and institutions. It highlights the need to develop standardized protocols when conducting toxicity-related predictions. This contribution articulates the information needed for protocols to support in silico predictions for major toxicological endpoints of concern (e.g., genetic toxicity, carcinogenicity, acute toxicity, reproductive toxicity, developmental toxicity) across several industries and regulatory bodies. Such novel in silico toxicology (IST) protocols, when fully developed and implemented, will ensure that in silico toxicological assessments are performed and evaluated in a consistent, reproducible, and well-documented manner across industries and regulatory bodies, supporting wider uptake and acceptance of the approaches. The development of IST protocols is an initiative of an international consortium and reflects the state of the art in in silico toxicology for hazard identification and characterization. A general outline for describing the development of such protocols is included; it is based on in silico predictions and/or available experimental data for a defined series of relevant toxicological effects or mechanisms. The publication presents a novel approach for determining the reliability of in silico predictions alongside experimental data. In addition, we discuss how to determine the level of confidence in the assessment based on the relevance and reliability of the information. Copyright © 2018. Published by Elsevier Inc.

  20. Investigation of optimal scanning protocol for X-ray computed tomography polymer gel dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Sellakumar, P. [Bangalore Institute of Oncology, 44-45/2, II Cross, RRMR Extension, Bangalore 560 027 (India)], E-mail: psellakumar@rediffmail.com; James Jebaseelan Samuel, E. [School of Science and Humanities, VIT University, Vellore 632 014 (India); Supe, Sanjay S. [Department of Radiation Physics, Kidwai Memorial Institute of Oncology, Hosur Road, Bangalore 560 027 (India)

    2007-11-15

    X-ray computed tomography (CT) is one of the potential tools used to evaluate polymer gel dosimeters in three dimensions. The purpose of this study is to investigate the factors that affect image noise in X-ray CT polymer gel dosimetry. A cylindrical water-filled phantom was imaged with a single-slice Siemens Somatom Emotion CT scanner. Imaging parameters such as tube voltage, tube current, slice scan time, slice thickness and reconstruction algorithm were varied independently to study the dependence of noise on each parameter. The reduction of noise with the number of images averaged, and the spatial uniformity of the image, were also investigated. The normoxic polymer gel PAGAT was manufactured and irradiated using a Siemens Primus linear accelerator. The radiation-induced change in CT number was evaluated using the X-ray CT scanner. This study shows that image noise is reduced by increasing tube voltage, tube current, slice scan time and slice thickness, and also by increasing the number of images averaged. However, to limit tube load and total scan time, it was concluded that a tube voltage of 130 kV, a tube current of 200 mA, a scan time of 1.5 s, and a slice thickness of 3 mm for high dose gradients and 5 mm for low dose gradients constitute the optimal scanning protocol for this scanner. The optimum number of images to be averaged was concluded to be 25 for X-ray CT polymer gel dosimetry. The choice of reconstruction algorithm was also critical. The study also shows that CT number increases with imaging tube voltage, demonstrating the energy dependence of the polymer gel dosimeter. Hence, evaluation of polymer gel dosimeters with an X-ray CT scanner requires optimization of the scanning protocol to reduce image noise.

  1. Computational methods for protein identification from mass spectrometry data.

    Directory of Open Access Journals (Sweden)

    Leo McHugh

    2008-02-01

    Protein identification using mass spectrometry is an indispensable computational tool in the life sciences. A dramatic increase in the use of proteomic strategies to understand the biology of living systems generates an ongoing need for more effective, efficient, and accurate computational methods for protein identification. A wide range of computational methods, each with various implementations, are available to complement different proteomic approaches. A solid knowledge of the range of algorithms available and, more critically, the accuracy and effectiveness of these techniques is essential to ensure that as many as possible of the proteins within any particular experiment are correctly identified. Here, we undertake a systematic review of the currently available methods and algorithms for interpreting, managing, and analyzing biological data associated with protein identification. We summarize the advances in computational solutions as they have responded to corresponding advances in mass spectrometry hardware. The evolution of scoring algorithms and metrics for automated protein identification is also discussed, with a focus on the relative performance of different techniques. We also consider the relative advantages and limitations of different techniques in particular biological contexts. Finally, we present our perspective on future developments in the area of computational protein identification by considering the most recent literature on new and promising approaches to the problem as well as identifying areas yet to be explored and the potential application of methods from other areas of computational biology.

  2. Quantum Computer Science

    Science.gov (United States)

    Mermin, N. David

    2007-08-01

    Preface; 1. Cbits and Qbits; 2. General features and some simple examples; 3. Breaking RSA encryption with a quantum computer; 4. Searching with a quantum computer; 5. Quantum error correction; 6. Protocols that use just a few Qbits; Appendices; Index.

  3. Computer science handbook. Vol. 13.3. Environmental computer science. Computer science methods for environmental protection and environmental research

    International Nuclear Information System (INIS)

    Page, B.; Hilty, L.M.

    1994-01-01

    Environmental computer science is a new subdiscipline of applied computer science that employs methods and techniques of information processing in environmental protection. Owing to the interdisciplinary nature of environmental problems, computer science acts as a mediator between the numerous disciplines and institutions in this sector. The handbook reflects the broad spectrum of state-of-the-art environmental computer science. The following important subjects are dealt with: environmental databases and information systems, environmental monitoring, modelling and simulation, visualization of environmental data, and knowledge-based systems in the environmental sector. (orig.)

  4. Connectivity-Based Reliable Multicast MAC Protocol for IEEE 802.11 Wireless LANs

    Directory of Open Access Journals (Sweden)

    Woo-Yong Choi

    2009-01-01

    We propose an efficient reliable multicast MAC protocol based on the connectivity information among the recipients. Enhancing the BMMM (Batch Mode Multicast MAC) protocol, the reliable multicast MAC protocol significantly reduces the RAK (Request for ACK) frame transmissions in a reasonable computational time and enhances the MAC performance. The throughputs of the BMMM protocol and our proposed MAC protocol are derived by analytical performance analysis. Numerical examples show that our proposed MAC protocol increases reliable multicast MAC performance for IEEE 802.11 wireless LANs.

  5. A computational method for sharp interface advection

    DEFF Research Database (Denmark)

    Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje

    2016-01-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volu...

  6. Computer-aided proofs for multiparty computation with active security

    DEFF Research Database (Denmark)

    Haagh, Helene; Karbyshev, Aleksandr; Oechsner, Sabine

    2018-01-01

    Secure multi-party computation (MPC) is a general cryptographic technique that allows distrusting parties to compute a function of their individual inputs, while only revealing the output of the function. It has found applications in areas such as auctioning, email filtering, and secure teleconference. Given its importance, it is crucial that the protocols are specified and implemented correctly. In the programming language community it has become good practice to use computer proof assistants to verify correctness proofs. In the field of cryptography, EasyCrypt is the state-of-the-art proof assistant and has been used for ... public-key encryption, signatures, garbled circuits and differential privacy. Here we show for the first time that it can also be used to prove security of MPC against a malicious adversary. We formalize additive and replicated secret sharing schemes and apply them to Maurer's MPC protocol for secure ...

  7. Comparative assessment of computational methods for the determination of solvation free energies in alcohol-based molecules.

    Science.gov (United States)

    Martins, Silvia A; Sousa, Sergio F

    2013-06-05

    The determination of differences in solvation free energies between related drug molecules remains an important challenge in computational drug optimization, when fast and accurate calculation of differences in binding free energy is required. In this study, we have evaluated the performance of five commonly used polarized continuum model (PCM) methodologies in the determination of solvation free energies for 53 typical alcohol and alkane small molecules. In addition, the performance of these PCM methods, of a thermodynamic integration (TI) protocol, and of the Poisson-Boltzmann (PB) and generalized Born (GB) methods was tested in the determination of solvation free energy changes for 28 common alkane-alcohol transformations, in which a hydrogen atom is substituted by a hydroxyl substituent. The results show that the solvation model D (SMD) performs best among the PCM-based approaches in estimating solvation free energies for alcohol molecules, and solvation free energy changes for alkane-alcohol transformations, with an average error below 1 kcal/mol for both quantities. However, for the determination of solvation free energy changes in alkane-alcohol transformations, PB and TI yielded better results. TI was particularly accurate in the treatment of hydroxyl group additions to aromatic rings (0.53 kcal/mol), a common transformation when optimizing drug-binding in computer-aided drug design. Copyright © 2013 Wiley Periodicals, Inc.
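
    For readers unfamiliar with the TI protocol mentioned above, the following minimal sketch shows the quadrature step: integrating the ensemble average of dU/dlambda over lambda from 0 to 1. The lambda windows and averages below are hypothetical numbers for illustration, not values from the study.

```python
# A minimal sketch of the thermodynamic integration (TI) estimate
#   delta_G = integral over lambda in [0, 1] of <dU/dlambda>_lambda.
# The window averages are illustrative placeholders, not data from the paper.
import numpy as np

lambdas = np.array([0.0, 0.25, 0.5, 0.75, 1.0])        # coupling parameter windows
dU_dlambda = np.array([-9.8, -7.1, -4.9, -2.6, -0.4])   # hypothetical <dU/dlambda> (kcal/mol)

delta_G = np.trapz(dU_dlambda, lambdas)  # trapezoidal quadrature over lambda
print(f"TI estimate of the solvation free energy change: {delta_G:.2f} kcal/mol")
```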

  8. Method of generating a computer readable model

    DEFF Research Database (Denmark)

    2008-01-01

    A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method further comprises determining a first connection element of the first construction element and a second connection element of the second construction element located in a predetermined proximity of each other; and retrieving connectivity information of the corresponding connection types of the first ...
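
    A minimal sketch, with hypothetical names, of the kind of encoding the abstract describes: elements carry typed connection elements, and a proximity test selects the pairs whose connectivity information is then retrieved.

```python
# Illustrative data structures for the encoding described above; names and
# tolerance are assumptions, not taken from the patent-style abstract.
from dataclasses import dataclass

@dataclass
class Connection:
    kind: str        # predetermined connection type, e.g. "stud"
    position: tuple  # (x, y, z) in model coordinates

@dataclass
class Element:
    name: str
    connections: list

def close_pairs(e1, e2, tol=0.5):
    """Yield pairs of connection elements located within a given proximity."""
    for c1 in e1.connections:
        for c2 in e2.connections:
            dist = sum((a - b) ** 2 for a, b in zip(c1.position, c2.position)) ** 0.5
            if dist <= tol:
                # caller then looks up connectivity rules for (c1.kind, c2.kind)
                yield c1, c2
```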

  9. Hamiltonian lattice field theory: Computer calculations using variational methods

    International Nuclear Information System (INIS)

    Zako, R.L.

    1991-01-01

    I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems
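
    As a toy illustration of the Rayleigh-Ritz step at the heart of the algorithm (our sketch, not the thesis code), the following diagonalizes an anharmonic oscillator Hamiltonian in a truncated harmonic-oscillator basis; the lowest eigenvalue is a variational upper bound that improves as the basis grows, mirroring the simple quantum mechanical systems the abstract mentions.

```python
# Rayleigh-Ritz in a truncated Fock basis for H = p^2/2 + x^2/2 + 0.1 x^4,
# with x = (a + a^dagger)/sqrt(2). A pedagogical analogue, not lattice code.
import numpy as np

def ground_state_energy(nbasis, lam=0.1):
    n = np.arange(nbasis)
    a = np.diag(np.sqrt(n[1:]), k=1)       # annihilation operator in Fock basis
    x = (a + a.T) / np.sqrt(2.0)
    h0 = np.diag(n + 0.5)                  # harmonic part (hbar * omega = 1)
    h = h0 + lam * np.linalg.matrix_power(x, 4)
    return np.linalg.eigvalsh(h)[0]

for nb in (4, 8, 16, 32):
    # variational upper bounds, decreasing toward the true ground state energy
    print(nb, ground_state_energy(nb))
```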

  10. Hamiltonian lattice field theory: Computer calculations using variational methods

    International Nuclear Information System (INIS)

    Zako, R.L.

    1991-01-01

    A variational method is developed for systematic numerical computation of physical quantities-bound state energies and scattering amplitudes-in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. An algorithm is presented for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. It is shown how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. It is shown how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. The author discusses the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, the author does not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. The method is applied to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. The author describes a computer implementation of the method and present numerical results for simple quantum mechanical systems

  11. Comparison of four computational methods for computing Q factors and resonance wavelengths in photonic crystal membrane cavities

    DEFF Research Database (Denmark)

    de Lasson, Jakob Rosenkrantz; Frandsen, Lars Hagedorn; Burger, Sven

    2016-01-01

    We benchmark four state-of-the-art computational methods by computing quality factors and resonance wavelengths in photonic crystal membrane L5 and L9 line defect cavities. The convergence of the methods with respect to resolution, degrees of freedom and number of modes is investigated. Special attention is paid to the influence of the size of the computational domain. Convergence is not obtained for some of the methods, indicating that some are more suitable than others for analyzing line defect cavities.

  12. The Direct Lighting Computation in Global Illumination Methods

    Science.gov (United States)

    Wang, Changyaw Allen

    1994-01-01

    Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem as an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection, on Monte Carlo sampling methods, and on light source simplification. Results include a new sample generation method, a framework for the prediction of the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment, which for the first time makes ray tracing feasible for highly complex environments.
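
    A minimal sketch of a direct-lighting Monte Carlo estimator of the kind discussed (ours, not the dissertation's): uniformly sample points on a square area light and average the unoccluded contribution at a Lambertian surface point. The geometry and radiance values are hypothetical.

```python
# Direct lighting from a square area light onto a Lambertian point, estimated
# by uniform area sampling; visibility is assumed 1 (no occluders) for brevity.
import math, random

LIGHT_Z, HALF, L_E, AREA = 5.0, 0.5, 10.0, 1.0  # light height, half-size, radiance, area
ALBEDO = 0.7                                     # Lambertian reflectance of the shaded point

def direct_light(px, py, pz=0.0, n_samples=1024):
    total = 0.0
    for _ in range(n_samples):
        lx = random.uniform(-HALF, HALF)         # sample a point on the light
        ly = random.uniform(-HALF, HALF)
        dx, dy, dz = lx - px, ly - py, LIGHT_Z - pz
        r2 = dx * dx + dy * dy + dz * dz
        cos_surf = dz / math.sqrt(r2)            # surface normal (0, 0, 1)
        cos_light = dz / math.sqrt(r2)           # light faces straight down
        total += L_E * cos_surf * cos_light / r2 # geometry term per sample
    return (ALBEDO / math.pi) * AREA * total / n_samples

print(direct_light(0.0, 0.0))
```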

  13. Password Authenticated Key Exchange and Protected Password Change Protocols

    Directory of Open Access Journals (Sweden)

    Ting-Yi Chang

    2017-07-01

    In this paper, we propose new password authenticated key exchange (PAKE) and protected password change (PPC) protocols without any symmetric or public-key cryptosystems. The security of the proposed protocols is based on the computational Diffie-Hellman assumption in the random oracle model. The proposed scheme can resist both forgery server and denial-of-service attacks.
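
    The computational Diffie-Hellman primitive that such security proofs rest on can be sketched as below. This is a plain key exchange for illustration only: it omits the password-derived values and random-oracle hashing a real PAKE adds, and uses a toy 64-bit prime.

```python
# Bare Diffie-Hellman over Z_p^*: both sides compute g^(xa*xb) mod p.
# Toy parameters for illustration; real deployments use standardized groups.
import secrets

P = 0xFFFFFFFFFFFFFFC5  # 2^64 - 59, a toy prime; never use in practice
G = 5

def keypair():
    x = secrets.randbelow(P - 2) + 1   # private exponent
    return x, pow(G, x, P)             # (private, public = g^x mod p)

xa, ya = keypair()  # Alice
xb, yb = keypair()  # Bob
# each side raises the other's public value to its own private exponent
assert pow(yb, xa, P) == pow(ya, xb, P)  # shared secret g^(xa*xb) mod p
```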

  14. Comparative analysis of five DNA isolation protocols and three drying methods for leaves samples of Nectandra megapotamica (Spreng.) Mez

    Directory of Open Access Journals (Sweden)

    Leonardo Severo da Costa

    2016-06-01

    The aim of the study was to establish a DNA isolation protocol for Nectandra megapotamica (Spreng.) Mez able to obtain samples of high yield and quality for use in genomic analysis. A commercial kit and four classical methods of DNA extraction were tested, including three cetyltrimethylammonium bromide (CTAB)-based methods and one sodium dodecyl sulfate (SDS)-based method. Three drying methods for leaf samples were also evaluated: drying at room temperature (RT), in an oven at 40ºC (S40), and in a microwave oven (FMO). The DNA solutions obtained from the different types of leaf samples using the five protocols were assessed in terms of cost, execution time, and quality and yield of extracted DNA. The commercial kit did not extract DNA with sufficient quantity or quality for successful PCR reactions. Among the classical methods, only the protocols of Dellaporta and of Khanuja yielded DNA extractions for all three types of foliar samples that resulted in successful PCR reactions and subsequent enzyme restriction assays. Based on the evaluated variables, the most appropriate DNA extraction method for Nectandra megapotamica (Spreng.) Mez was that of Dellaporta, regardless of the method used to dry the samples. The selected method has a relatively low cost and total execution time. Moreover, the quality and quantity of DNA extracted using this method were sufficient for DNA sequence amplification using PCR reactions and for obtaining restriction fragments.

  15. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...

  16. Static Validation of Security Protocols

    DEFF Research Database (Denmark)

    Bodei, Chiara; Buchholtz, Mikael; Degano, P.

    2005-01-01

    We methodically expand protocol narrations into terms of a process algebra in order to specify some of the checks that need to be made in a protocol. We then apply static analysis technology to develop an automatic validation procedure for protocols. Finally, we demonstrate that these techniques suffice to identify several authentication flaws in symmetric and asymmetric key protocols such as Needham-Schroeder symmetric key, Otway-Rees, Yahalom, Andrew secure RPC, Needham-Schroeder asymmetric key, and Beller-Chang-Yacobi MSR.

  17. Multi-party Quantum Computation

    OpenAIRE

    Smith, Adam

    2001-01-01

    We investigate definitions of and protocols for multi-party quantum computing in the scenario where the secret data are quantum systems. We work in the quantum information-theoretic model, where no assumptions are made on the computational power of the adversary. For the slightly weaker task of verifiable quantum secret sharing, we give a protocol which tolerates any t < n/4 cheating parties (out of n). This is shown to be optimal. We use this new tool to establish that any multi-party quantu...

  18. The Extrapolation-Accelerated Multilevel Aggregation Method in PageRank Computation

    Directory of Open Access Journals (Sweden)

    Bing-Yuan Pu

    2013-01-01

    An accelerated multilevel aggregation method is presented for calculating the stationary probability vector of an irreducible stochastic matrix in PageRank computation, with a vector extrapolation method serving as the accelerator. We show how to periodically combine the extrapolation method with the multilevel aggregation method on the finest level to speed up the PageRank computation. Detailed numerical results are given to illustrate the behavior of this method, and comparisons with the typical methods are also made.
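
    A minimal sketch of the acceleration idea, using componentwise Aitken delta-squared as a stand-in vector extrapolation (the paper's multilevel aggregation part is not reproduced here): plain power iteration with a periodic extrapolation step.

```python
# Power iteration for PageRank with a periodic Aitken extrapolation step.
# Illustrative only; the paper combines extrapolation with multilevel
# aggregation, which is omitted here.
import numpy as np

def pagerank(P, alpha=0.85, tol=1e-10, extrapolate_every=10):
    n = P.shape[0]
    x = np.full(n, 1.0 / n)
    history = [x]
    for it in range(1000):
        x_new = alpha * P.T @ x + (1 - alpha) / n   # power iteration step
        x_new /= x_new.sum()
        history.append(x_new)
        if it % extrapolate_every == extrapolate_every - 1 and len(history) >= 3:
            x0, x1, x2 = history[-3:]
            d1, d2 = x1 - x0, x2 - x1
            denom = d2 - d1
            mask = np.abs(denom) > 1e-14
            x_new = x2.copy()
            x_new[mask] = x2[mask] - d2[mask] ** 2 / denom[mask]  # Aitken step
            x_new = np.maximum(x_new, 0)
            x_new /= x_new.sum()
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

# row-stochastic link matrix of a 3-page toy web
P = np.array([[0, .5, .5], [1, 0, 0], [0, 1, 0]], float)
print(pagerank(P))
```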

  19. Evolutionary Computing Methods for Spectral Retrieval

    Science.gov (United States)

    Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seugwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Geivanna

    2009-01-01

    A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.

  20. High Performance Computing Multicast

    Science.gov (United States)

    2012-02-01

    [Abstract not available; the record text consists of a reference fragment (“A History of the Virtual Synchrony Replication Model,” in Replication: Theory and Practice, Charron-Bost, B., Pedone, F., and Schiper, A., Eds.) and acronym-list entries: HPC (High Performance Computing), IP/IPv4 (Internet Protocol, version 4.0), IPMC (Internet Protocol MultiCast), LAN (Local Area Network), MCMD (Dr. Multicast), MPI.]

  1. One-way quantum computing in superconducting circuits

    Science.gov (United States)

    Albarrán-Arriagada, F.; Alvarado Barrios, G.; Sanz, M.; Romero, G.; Lamata, L.; Retamal, J. C.; Solano, E.

    2018-03-01

    We propose a method for the implementation of one-way quantum computing in superconducting circuits. Measurement-based quantum computing is a universal quantum computation paradigm in which an initial cluster state provides the quantum resource, while the iteration of sequential measurements and local rotations encodes the quantum algorithm. Up to now, technical constraints have limited a scalable approach to this quantum computing alternative. The initial cluster state can be generated with available controlled-phase gates, while the quantum algorithm makes use of high-fidelity readout and coherent feedforward. With current technology, we estimate that quantum algorithms with above 20 qubits may be implemented in the path toward quantum supremacy. Moreover, we propose an alternative initial state with properties of maximal persistence and maximal connectedness, reducing the required resources of one-way quantum computing protocols.

  2. Monte Carlo methods of PageRank computation

    NARCIS (Netherlands)

    Litvak, Nelli

    2004-01-01

    We describe and analyze an on-line Monte Carlo method of PageRank computation. The PageRank is estimated based on the results of a large number of short independent simulation runs initiated from each page that contains outgoing hyperlinks. The method does not require any storage of the hyperlink
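
    The described estimator is easy to sketch: run many short random walks from each page, continue each step with probability alpha, and count where the walks end. The fragment below is a minimal illustration, not the paper's exact estimator.

```python
# Monte Carlo PageRank: endpoint frequencies of alpha-terminated random walks.
import random
from collections import Counter

def mc_pagerank(links, alpha=0.85, runs_per_page=2000):
    ends = Counter()
    pages = list(links)
    for start in pages:
        for _ in range(runs_per_page):
            node = start
            while random.random() < alpha and links[node]:
                node = random.choice(links[node])  # follow a random outgoing link
            ends[node] += 1                        # walk terminates here
    total = sum(ends.values())
    return {p: ends[p] / total for p in pages}

links = {"a": ["b", "c"], "b": ["a"], "c": ["b"]}
print(mc_pagerank(links))
```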

  3. Computational methods for industrial radiation measurement applications

    International Nuclear Information System (INIS)

    Gardner, R.P.; Guo, P.; Ao, Q.

    1996-01-01

    Computational methods have been used with considerable success to complement radiation measurements in solving a wide range of industrial problems. The almost exponential growth of computer capability and applications in the last few years leads to a "black box" mentality for radiation measurement applications. If a black box is defined as any radiation measurement device that is capable of measuring the parameters of interest when a wide range of operating and sample conditions may occur, then the development of computational methods for industrial radiation measurement applications should now be focused on the black box approach and the deduction of properties of interest from the response with acceptable accuracy and reasonable efficiency. Nowadays, increasingly better understanding of radiation physical processes, more accurate and complete fundamental physical data, and more advanced modeling and software/hardware techniques have made it possible to make giant strides in that direction with new ideas implemented with computer software. The Center for Engineering Applications of Radioisotopes (CEAR) at North Carolina State University has been working on a variety of projects in the area of radiation analyzers and gauges for accomplishing this for quite some time, and they are discussed here with emphasis on current accomplishments

  4. Computationally efficient methods for digital control

    NARCIS (Netherlands)

    Guerreiro Tome Antunes, D.J.; Hespanha, J.P.; Silvestre, C.J.; Kataria, N.; Brewer, F.

    2008-01-01

    The problem of designing a digital controller is considered with the novelty of explicitly taking into account the computation cost of the controller implementation. A class of controller emulation methods inspired by numerical analysis is proposed. Through various examples it is shown that these

  5. Assessing the Efficacy of an App-Based Method of Family Planning: The Dot Study Protocol.

    Science.gov (United States)

    Simmons, Rebecca G; Shattuck, Dominick C; Jennings, Victoria H

    2017-01-18

    ... assess pregnancy status over time. This paper outlines the protocol for this efficacy trial, following the Standard Protocol Items: Recommendations for Intervention Trials checklist, to provide an overview of the rationale, methodology, and analysis plan. Participants will be asked to provide daily sexual history data and periodically answer surveys administered through a call center or directly on their phone. Funding for the study was provided in 2013 under the United States Agency for International Development Fertility Awareness for Community Transformation project. Recruitment for the study will begin in January of 2017. The study is expected to last approximately 18 months, depending on recruitment. Findings on the study's primary outcomes are expected to be finalized by September 2018. Reproducibility and transparency, important aspects of all research, are particularly critical in developing new approaches to research design. This protocol outlines the first study to prospectively test both the efficacy (correct use) and effectiveness (actual use) of a pregnancy prevention app. This protocol and the processes it describes reflect the dynamic integration of mobile technologies, a call center, and Health Insurance Portability and Accountability Act-compliant study procedures. Future fertility app studies can build on our approaches to develop methodologies that can contribute to the evidence base around app-based methods of contraception. ClinicalTrials.gov NCT02833922; https://clinicaltrials.gov/ct2/show/NCT02833922 (Archived by WebCite at http://www.webcitation.org/6nDkr0e76). ©Rebecca G Simmons, Dominick C Shattuck, Victoria H Jennings. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 18.01.2017.

  6. Les Protocoles de routage dans les réseaux mobiles Ad Hoc ...

    African Journals Online (AJOL)

    In this article we present a state of the art of routing protocols in mobile ad hoc networks. Keywords: mobile ad hoc networks, routing protocols. Routing protocols in wireless mobile ad hoc networks. Abstract: The communication between users with handheld computers interconnected through ...

  7. Shortened screening method for phosphorus fractionation in sediments: a complementary approach to the standards, measurements and testing harmonised protocol

    International Nuclear Information System (INIS)

    Pardo, Patricia; Rauret, Gemma; Lopez-Sanchez, Jose Fermin

    2004-01-01

    The SMT protocol, a sediment phosphorus fractionation method harmonised and validated in the frame of the standards, measurements and testing (SMT) programme (European Commission), establishes five fractions of phosphorus according to their extractability. The determination of phosphate extracted is carried out spectrophotometrically. This protocol has been applied to 11 sediments of different origin and characteristics and the phosphorus extracted in each fraction was determined not only by UV-Vis spectrophotometry, but also by inductively coupled plasma-atomic emission spectrometry. The use of these two determination techniques allowed the differentiation between phosphorus that was present in the extracts as soluble reactive phosphorus and as total phosphorus. From the comparison of data obtained with both determination techniques a shortened screening method, for a quick evaluation of the magnitude and importance of the fractions given by the SMT protocol, is proposed and validated using two certified reference materials

  8. Active Computer Network Defense: An Assessment

    Science.gov (United States)

    2001-04-01

    ... sufficient base of knowledge in information technology can be assumed to be working on some form of computer network warfare, even if only defensive in ... the Defense Information Infrastructure (DII) to attack. Transmission Control Protocol/Internet Protocol (TCP/IP) networks are inherently resistant to ... aims to create this part of information superiority, and computer network defense is one of its fundamental components. Most of these efforts center

  9. Developing a multimodal biometric authentication system using soft computing methods.

    Science.gov (United States)

    Malcangi, Mario

    2015-01-01

    Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize its applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision.

  10. Evolutionary Computation Methods and their applications in Statistics

    Directory of Open Access Journals (Sweden)

    Francesco Battaglia

    2013-05-01

    A brief discussion of the genesis of evolutionary computation methods, their relationship to artificial intelligence, and the contribution of genetics and Darwin’s theory of natural evolution is provided. Then, the main evolutionary computation methods are illustrated: evolution strategies, genetic algorithms, estimation of distribution algorithms, and differential evolution, together with a brief description of some evolutionary behavior methods such as ant colony and particle swarm optimization. We also discuss the role of the genetic algorithm for multivariate probability distribution random generation, rather than as a function optimizer. Finally, some relevant applications of genetic algorithms to statistical problems are reviewed: selection of variables in regression, time series model building, outlier identification, cluster analysis, and design of experiments; a sketch of the first of these follows.
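
    The following sketch (ours, on synthetic data) runs a small genetic algorithm for variable selection in regression, encoding predictor subsets as bitstrings and scoring them by AIC; it illustrates the application area, not any specific algorithm from the review.

```python
# GA for variable selection: bitstring chromosomes mark included predictors,
# fitness is AIC of the corresponding least-squares fit (lower is better).
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 8
X = rng.normal(size=(n, p))
# synthetic response: only predictors 0, 3 and 5 matter
y = X[:, 0] - 2 * X[:, 3] + 0.5 * X[:, 5] + rng.normal(scale=0.5, size=n)

def aic(mask):
    k = int(mask.sum())
    if k == 0:
        return np.inf
    cols = mask.astype(bool)
    beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
    rss = ((y - X[:, cols] @ beta) ** 2).sum()
    return n * np.log(rss / n) + 2 * k

pop = rng.integers(0, 2, size=(40, p))
for gen in range(60):
    fit = np.array([aic(ind) for ind in pop])
    parents = pop[np.argsort(fit)[:20]]             # truncation selection
    kids = []
    for _ in range(40):
        a, b = parents[rng.integers(20, size=2)]
        cut = rng.integers(1, p)
        child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
        flip = rng.random(p) < 0.05                 # bit-flip mutation
        kids.append(np.where(flip, 1 - child, child))
    pop = np.array(kids)

best = min(pop, key=aic)
print("selected predictors:", np.flatnonzero(best))  # ideally [0 3 5]
```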

  11. Computer methods for transient fluid-structure analysis of nuclear reactors

    International Nuclear Information System (INIS)

    Belytschko, T.; Liu, W.K.

    1985-01-01

    Fluid-structure interaction problems in nuclear engineering are categorized according to the dominant physical phenomena and the appropriate computational methods. Linear fluid models that are considered include acoustic fluids, incompressible fluids undergoing small disturbances, and small amplitude sloshing. Methods available in general-purpose codes for these linear fluid problems are described. For nonlinear fluid problems, the major features of alternative computational treatments are reviewed; some special-purpose and multipurpose computer codes applicable to these problems are then described. For illustration, some examples of nuclear reactor problems that entail coupled fluid-structure analysis are described along with computational results

  12. Computational biology in the cloud: methods and new insights from computing at scale.

    Science.gov (United States)

    Kasson, Peter M

    2013-01-01

    The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and to provide easy reproducibility by making the datasets and computational methods easily available.

  13. Comparing calibration methods of electron beams using plane-parallel chambers with absorbed-dose to water based protocols

    International Nuclear Information System (INIS)

    Stewart, K.J.; Seuntjens, J.P.

    2002-01-01

    Recent absorbed-dose-based protocols allow for two methods of calibrating electron beams using plane-parallel chambers: one using the 60Co absorbed-dose-to-water calibration factor N_D,w of the plane-parallel chamber, and the other relying on cross-calibration of the plane-parallel chamber in a high-energy electron beam against a cylindrical chamber that has an N_D,w factor. The second method is recommended as it avoids problems associated with the P_wall correction factors at 60Co for plane-parallel chambers, which are used in the determination of the beam quality conversion factors. In this article we investigate the consistency of these two methods for the PTW Roos, Scanditronics NACP02, and PTW Markus chambers. We processed our data using both the AAPM TG-51 and the IAEA TRS-398 protocols. Wall correction factors in 60Co beams and absorbed-dose beam quality conversion factors for 20 MeV electrons were derived for these chambers by cross-calibration against a cylindrical ionization chamber. Systematic differences of up to 1.6% were found between our values of P_wall and those from the Monte Carlo calculations underlying AAPM TG-51, and up to 0.6% when comparing with the IAEA TRS-398 protocol. The differences in P_wall translate directly into differences in the beam quality conversion factors in the respective protocols. The relatively large spread in the experimental data of P_wall, and consequently the absorbed-dose beam quality conversion factor, confirms the importance of the cross-calibration technique when using plane-parallel chambers for calibrating clinical electron beams. We confirmed that for well-guarded plane-parallel chambers, the fluence perturbation correction factor at d_max is not significantly different from the value at d_ref. For the PTW Markus chamber the variation in the latter factor is consistent with published fits relating it to the average energy at depth.

  14. Thermal/optical methods for elemental carbon quantification in soils and urban dusts: equivalence of different analysis protocols.

    Directory of Open Access Journals (Sweden)

    Yongming Han

    Quantifying elemental carbon (EC) content in geological samples is challenging due to interferences of crustal, salt, and organic material. Thermal/optical analysis, combined with acid pretreatment, represents a feasible approach. However, the consistency of various thermal/optical analysis protocols for this type of samples has never been examined. In this study, urban street dust and soil samples from Baoji, China were pretreated with acids and analyzed with four thermal/optical protocols to investigate how analytical conditions and optical correction affect EC measurement. The EC values measured with reflectance correction (ECR) were found always higher and less sensitive to temperature program than the EC values measured with transmittance correction (ECT). A high-temperature method with extended heating times (STN120) showed the highest ECT/ECR ratio (0.86) while a low-temperature protocol (IMPROVE-550), with heating time adjusted for sample loading, showed the lowest (0.53). STN ECT was higher than IMPROVE ECT, in contrast to results from aerosol samples. A higher peak inert-mode temperature and extended heating times can elevate ECT/ECR ratios for pretreated geological samples by promoting pyrolyzed organic carbon (PyOC) removal over EC under trace levels of oxygen. Considering that PyOC within filter increases ECR while decreases ECT from the actual EC levels, simultaneous ECR and ECT measurements would constrain the range of EC loading and provide information on method performance. Further testing with standard reference materials of common environmental matrices supports the findings. Char and soot fractions of EC can be further separated using the IMPROVE protocol. The char/soot ratio was lower in street dusts (2.2 on average) than in soils (5.2 on average), most likely reflecting motor vehicle emissions. The soot concentrations agreed with EC from CTO-375, a pure thermal method.

  15. Thermal/optical methods for elemental carbon quantification in soils and urban dusts: equivalence of different analysis protocols.

    Science.gov (United States)

    Han, Yongming; Chen, Antony; Cao, Junji; Fung, Kochy; Ho, Fai; Yan, Beizhan; Zhan, Changlin; Liu, Suixin; Wei, Chong; An, Zhisheng

    2013-01-01

    Quantifying elemental carbon (EC) content in geological samples is challenging due to interferences of crustal, salt, and organic material. Thermal/optical analysis, combined with acid pretreatment, represents a feasible approach. However, the consistency of various thermal/optical analysis protocols for this type of samples has never been examined. In this study, urban street dust and soil samples from Baoji, China were pretreated with acids and analyzed with four thermal/optical protocols to investigate how analytical conditions and optical correction affect EC measurement. The EC values measured with reflectance correction (ECR) were found always higher and less sensitive to temperature program than the EC values measured with transmittance correction (ECT). A high-temperature method with extended heating times (STN120) showed the highest ECT/ECR ratio (0.86) while a low-temperature protocol (IMPROVE-550), with heating time adjusted for sample loading, showed the lowest (0.53). STN ECT was higher than IMPROVE ECT, in contrast to results from aerosol samples. A higher peak inert-mode temperature and extended heating times can elevate ECT/ECR ratios for pretreated geological samples by promoting pyrolyzed organic carbon (PyOC) removal over EC under trace levels of oxygen. Considering that PyOC within filter increases ECR while decreases ECT from the actual EC levels, simultaneous ECR and ECT measurements would constrain the range of EC loading and provide information on method performance. Further testing with standard reference materials of common environmental matrices supports the findings. Char and soot fractions of EC can be further separated using the IMPROVE protocol. The char/soot ratio was lower in street dusts (2.2 on average) than in soils (5.2 on average), most likely reflecting motor vehicle emissions. The soot concentrations agreed with EC from CTO-375, a pure thermal method.

  16. Data analysis through interactive computer animation method (DATICAM)

    International Nuclear Information System (INIS)

    Curtis, J.N.; Schwieder, D.H.

    1983-01-01

    DATICAM is an interactive computer animation method designed to aid in the analysis of nuclear research data. DATICAM was developed at the Idaho National Engineering Laboratory (INEL) by EG and G Idaho, Inc. INEL analysts use DATICAM to produce computer codes that are better able to predict the behavior of nuclear power reactors. In addition to increased code accuracy, DATICAM has saved manpower and computer costs. DATICAM has been generalized to assist in the data analysis of virtually any data-producing dynamic process

  17. Split bolus technique in polytrauma: a prospective study on scan protocols for trauma analysis

    NARCIS (Netherlands)

    Beenen, Ludo F. M.; Sierink, Joanne C.; Kolkman, Saskia; Nio, C. Yung; Saltzherr, Teun Peter; Dijkgraaf, Marcel G. W.; Goslings, J. Carel

    2015-01-01

    For the evaluation of severely injured trauma patients a variety of total body computed tomography (CT) scanning protocols exist. Frequently multiple pass protocols are used. A split bolus contrast protocol can reduce the number of passes through the body, and thereby radiation exposure, in this

  18. An Augmented Fast Marching Method for Computing Skeletons and Centerlines

    NARCIS (Netherlands)

    Telea, Alexandru; Wijk, Jarke J. van

    2002-01-01

    We present a simple and robust method for computing skeletons for arbitrary planar objects and centerlines for 3D objects. We augment the Fast Marching Method (FMM) widely used in level set applications by computing the parameterized boundary location every pixel came from during the boundary

  19. Numerical computer methods part E

    CERN Document Server

    Johnson, Michael L

    2004-01-01

    The contributions in this volume emphasize analysis of experimental data and analytical biochemistry, with examples taken from biochemistry. They serve to inform biomedical researchers of the modern data analysis methods that have developed concomitantly with computer hardware. Selected Contents: A practical approach to interpretation of SVD results; modeling of oscillations in endocrine networks with feedback; quantifying asynchronous breathing; sample entropy; wavelet modeling and processing of nasal airflow traces.

  20. Unconditionally Secure Protocols

    DEFF Research Database (Denmark)

    Meldgaard, Sigurd Torkel

    This thesis contains research on the theory of secure multi-party computation (MPC), especially information theoretically (as opposed to computationally) secure protocols. It contains results from two main lines of work: one line on Information Theoretically Secure Oblivious RAMs, and how they are used to speed up secure computation. An Oblivious RAM is a construction for a client with a small $O(1)$ internal memory to store $N$ pieces of data on a server while revealing nothing more than the size of the memory $N$, and the number of accesses; this specifically includes hiding the access pattern. We construct an oblivious RAM that hides the client's access pattern with information theoretic security with an amortized $\log^3 N$ query overhead, and show how to employ a second server that is guaranteed not to conspire with the first to improve the overhead to $\log^2 N$, while also avoiding ...
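
    The access-pattern-hiding idea is easiest to see in the trivial linear-scan ORAM sketched below (ours, not the thesis construction, which achieves the polylogarithmic overheads above): every access touches and re-encrypts every cell, so all accesses look identical to the server. Encryption is faked with XOR for brevity.

```python
# Trivial linear-scan ORAM: O(N) work per access, but the server observes an
# identical full scan regardless of which index is read or written.
import secrets

class LinearScanORAM:
    def __init__(self, n):
        self.key = [secrets.randbits(32) for _ in range(n)]
        self.server = [0 ^ self.key[i] for i in range(n)]  # "encrypted" cells

    def access(self, index, write_value=None):
        result = None
        for i in range(len(self.server)):       # always scan every cell
            plain = self.server[i] ^ self.key[i]
            if i == index:
                result = plain
                if write_value is not None:
                    plain = write_value
            self.key[i] = secrets.randbits(32)  # re-encrypt everything
            self.server[i] = plain ^ self.key[i]
        return result

ram = LinearScanORAM(8)
ram.access(3, write_value=42)
print(ram.access(3))  # 42; both accesses are indistinguishable to the server
```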

  1. Semi-Homomorphic Encryption and Multiparty Computation

    DEFF Research Database (Denmark)

    Bendlin, Rikke; Damgård, Ivan Bjerre; Orlandi, Claudio

    2011-01-01

    ... allow us to construct an efficient multiparty computation protocol for arithmetic circuits, UC-secure against a dishonest majority. The protocol consists of a preprocessing phase and an online phase. Neither the inputs nor the function to be computed have to be known during preprocessing. Moreover, the online phase is extremely efficient as it requires no cryptographic operations: the parties only need to exchange additive shares and verify information theoretic MACs. Our contribution is therefore twofold: from a theoretical point of view, we can base multiparty computation on a variety of different ...
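
    A minimal sketch of an information-theoretic MAC of the kind mentioned for the online phase (illustrative; the protocol's actual key arrangement differs): a value x is authenticated as m = alpha*x + beta over a prime field, so additive tampering is caught except with probability 1/p.

```python
# Information-theoretic MAC over a prime field: m = alpha*x + beta (mod p).
# Linearity means MACs on shared values can be combined locally, which is
# what makes the online phase cheap.
import secrets

P = 2**61 - 1  # Mersenne prime field

def mac_keygen():
    return secrets.randbelow(P), secrets.randbelow(P)  # (alpha, beta)

def mac(x, key):
    alpha, beta = key
    return (alpha * x + beta) % P

key = mac_keygen()
x = 1234
tag = mac(x, key)
assert mac(x, key) == tag       # honest opening verifies
assert mac(x + 1, key) != tag   # a modified value fails, except w.p. 1/p
```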

  2. The Experiment Method for Manufacturing Grid Development on Single Computer

    Institute of Scientific and Technical Information of China (English)

    XIAO Youan; ZHOU Zude

    2006-01-01

    In this paper, an experimental method for Manufacturing Grid application system development in a single personal computer environment is proposed. The characteristic of the proposed method is that it constructs a full prototype Manufacturing Grid application system hosted on a single personal computer using virtual machine technology. First, it builds all the Manufacturing Grid physical resource nodes on an abstraction layer of a single personal computer with virtual machine technology. Second, all the virtual Manufacturing Grid resource nodes are connected with a virtual network and the application software is deployed on each Manufacturing Grid node. Then, we obtain a prototype Manufacturing Grid application system running on a single personal computer, on which experiments can be carried out. Compared with the known experimental methods for Manufacturing Grid application system development, the proposed method retains their advantages, such as low cost and simple operation, and yields trustworthy experimental results easily. The Manufacturing Grid application system constructed with the proposed method has high scalability, stability and reliability. It can be migrated to the real application environment rapidly.

  3. Computational methods for molecular imaging

    CERN Document Server

    Shi, Kuangyu; Li, Shuo

    2015-01-01

    This volume contains original submissions on the development and application of molecular imaging computing. The editors invited authors to submit high-quality contributions on a wide range of topics including, but not limited to: • Image Synthesis & Reconstruction of Emission Tomography (PET, SPECT) and other Molecular Imaging Modalities • Molecular Imaging Enhancement • Data Analysis of Clinical & Pre-clinical Molecular Imaging • Multi-Modal Image Processing (PET/CT, PET/MR, SPECT/CT, etc.) • Machine Learning and Data Mining in Molecular Imaging. Molecular imaging is an evolving clinical and research discipline enabling the visualization, characterization and quantification of biological processes taking place at the cellular and subcellular levels within intact living subjects. Computational methods play an important role in the development of molecular imaging, from image synthesis to data analysis and from clinical diagnosis to therapy individualization. This work will bring readers fro...

  4. A method of paralleling computer calculation for two-dimensional kinetic plasma model

    International Nuclear Information System (INIS)

    Brazhnik, V.A.; Demchenko, V.V.; Dem'yanov, V.G.; D'yakov, V.E.; Ol'shanskij, V.V.; Panchenko, V.I.

    1987-01-01

    A method for parallel computer calculation, and the OSIRIS program complex that implements it for numerical plasma simulation by the macroparticle method, are described. The calculation can be carried out either with one or simultaneously with two BESM-6 computers, which is enabled by a package of interacting programs running on each computer. Program interaction within each computer is based on event techniques implemented in OS DISPAK. Parallel calculation with two BESM-6 computers accelerates the computation by a factor of 1.5.

  5. Discrete linear canonical transform computation by adaptive method.

    Science.gov (United States)

    Zhang, Feng; Tao, Ran; Wang, Yue

    2013-07-29

    The linear canonical transform (LCT) describes the effect of quadratic phase systems on a wavefield and generalizes many optical transforms. In this paper, the computation method for the discrete LCT using the adaptive least-mean-square (LMS) algorithm is presented. The computation approaches of the block-based discrete LCT and the stream-based discrete LCT using the LMS algorithm are derived, and the implementation structures of these approaches by the adaptive filter system are considered. The proposed computation approaches have the inherent parallel structures which make them suitable for efficient VLSI implementations, and are robust to the propagation of possible errors in the computation process.
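
    The LMS building block itself can be sketched generically (this is the standard adaptive-filter update, not the LCT-specific structure from the paper): the weights move along the negative gradient of the instantaneous squared error.

```python
# Standard LMS system identification: adapt filter weights w so that
# w @ x_hist tracks a desired signal d; the update is w += mu * e * x.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([0.6, -0.3, 0.1])  # unknown system to identify
w = np.zeros(3)                       # adaptive filter weights
mu = 0.05                             # step size

x_hist = np.zeros(3)
for _ in range(5000):
    x = rng.normal()
    x_hist = np.roll(x_hist, 1)
    x_hist[0] = x
    d = true_w @ x_hist + 0.01 * rng.normal()  # desired (reference) signal
    e = d - w @ x_hist                         # instantaneous error
    w += mu * e * x_hist                       # LMS update

print(w)  # converges toward true_w
```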

  6. Three numerical methods for the computation of the electrostatic energy

    International Nuclear Information System (INIS)

    Poenaru, D.N.; Galeriu, D.

    1975-01-01

    The FORTRAN programs for computation of the electrostatic energy of a body with axial symmetry by the Lawrence, Hill-Wheeler and Beringer methods are presented in detail. The accuracy, computation time and required memory of these methods are tested at various deformations for two simple parametrisations: two overlapping identical spheres and a spheroid. On this basis the field of application of each method is recommended.

  7. DelPhi web server v2: incorporating atomic-style geometrical figures into the computational protocol.

    Science.gov (United States)

    Smith, Nicholas; Witham, Shawn; Sarkar, Subhra; Zhang, Jie; Li, Lin; Li, Chuan; Alexov, Emil

    2012-06-15

    A new edition of the DelPhi web server, DelPhi web server v2, is released to include atomic presentation of geometrical figures. These geometrical objects can be used to model nano-size objects together with real biological macromolecules. The position and size of the object can be manipulated by the user in real time until the desired results are achieved. The server fixes structural defects, adds hydrogen atoms and calculates electrostatic energies and the corresponding electrostatic potential and ionic distributions. The web server follows a client-server architecture built on PHP and HTML and utilizes the DelPhi software. The computation is carried out on a supercomputer cluster and the results are returned to the user via the HTTP protocol, including the ability to visualize the structure and corresponding electrostatic potential via a Jmol implementation. The DelPhi web server is available from http://compbio.clemson.edu/delphi_webserver.

  8. Pilot studies for the North American Soil Geochemical Landscapes Project - Site selection, sampling protocols, analytical methods, and quality control protocols

    Science.gov (United States)

    Smith, D.B.; Woodruff, L.G.; O'Leary, R. M.; Cannon, W.F.; Garrett, R.G.; Kilburn, J.E.; Goldhaber, M.B.

    2009-01-01

    In 2004, the US Geological Survey (USGS) and the Geological Survey of Canada sampled and chemically analyzed soils along two transects across Canada and the USA in preparation for a planned soil geochemical survey of North America. This effort was a pilot study to test and refine sampling protocols, analytical methods, quality control protocols, and field logistics for the continental survey. A total of 220 sample sites were selected at approximately 40-km intervals along the two transects. The ideal sampling protocol at each site called for a sample from a depth of 0-5 cm and a composite of each of the O, A, and C horizons. Samples were analyzed for Ca, Fe, K, Mg, Na, S, Ti, Ag, As, Ba, Be, Bi, Cd, Ce, Co, Cr, Cs, Cu, Ga, In, La, Li, Mn, Mo, Nb, Ni, P, Pb, Rb, Sb, Sc, Sn, Sr, Te, Th, Tl, U, V, W, Y, and Zn by inductively coupled plasma-mass spectrometry and inductively coupled plasma-atomic emission spectrometry following a near-total digestion in a mixture of HCl, HNO3, HClO4, and HF. Separate methods were used for Hg, Se, total C, and carbonate-C on this same size fraction. Only Ag, In, and Te had a large percentage of concentrations below the detection limit. Quality control (QC) of the analyses was monitored at three levels: the laboratory performing the analysis, the USGS QC officer, and the principal investigator for the study. This level of review resulted in an average of one QC sample for every 20 field samples, which proved to be minimally adequate for such a large-scale survey. Additional QC samples should be added to monitor within-batch quality, to the extent that no more than 10 samples are analyzed between QC samples. Only Cr (77%), Y (82%), and Sb (80%) fell outside the acceptable limits of accuracy (% recovery between 85 and 115%) because of likely residence in mineral phases resistant to the acid digestion. A separate sample of 0-5-cm material was collected at each site for determination of organic compounds. A subset of 73 of these samples was analyzed for a suite of

  9. A computer program for uncertainty analysis integrating regression and Bayesian methods

    Science.gov (United States)

    Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary

    2014-01-01

    This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
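
    The DREAM sampler itself is considerably more sophisticated, but the way MCMC samples become a Bayesian credible interval can be sketched with a plain single-chain random-walk Metropolis step. The target, step size and burn-in below are invented for illustration:

      import numpy as np

      def metropolis(log_post, theta0, n_steps=50_000, step=0.5, seed=1):
          # Random-walk Metropolis: a (much) simpler relative of DREAM.
          rng = np.random.default_rng(seed)
          theta, lp = theta0, log_post(theta0)
          samples = np.empty(n_steps)
          for i in range(n_steps):
              prop = theta + step * rng.standard_normal()
              lp_prop = log_post(prop)
              if np.log(rng.random()) < lp_prop - lp:   # accept/reject
                  theta, lp = prop, lp_prop
              samples[i] = theta
          return samples

      # Toy posterior: Gaussian likelihood around noisy data, flat prior.
      data = np.random.default_rng(0).normal(2.0, 1.0, size=20)
      log_post = lambda m: -0.5 * np.sum((data - m) ** 2)
      s = metropolis(log_post, theta0=0.0)[10_000:]   # drop burn-in
      ci = np.percentile(s, [2.5, 97.5])              # 95% credible interval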

  10. Gene probes : principles and protocols [Methods in molecular biology, v. 179

    National Research Council Canada - National Science Library

    Rapley, Ralph; Aquino de Muro, Marilena

    2002-01-01

    ... of labeled DNA has allowed genes to be mapped to single chromosomes and in many cases to a single chromosome band, promoting significant advance in human genome mapping. Gene Probes: Principles and Protocols presents the principles for gene probe design, labeling, detection, target format, and hybridization conditions together with detailed protocols, accom...

  11. Computational and statistical methods for high-throughput analysis of post-translational modifications of proteins

    DEFF Research Database (Denmark)

    Schwämmle, Veit; Braga, Thiago Verano; Roepstorff, Peter

    2015-01-01

    The investigation of post-translational modifications (PTMs) represents one of the main research focuses for the study of protein function and cell signaling. Mass spectrometry instrumentation with increasing sensitivity improved protocols for PTM enrichment and recently established pipelines...... for high-throughput experiments allow large-scale identification and quantification of several PTM types. This review addresses the concurrently emerging challenges for the computational analysis of the resulting data and presents PTM-centered approaches for spectra identification, statistical analysis...

  12. A Simple Method for Dynamic Scheduling in a Heterogeneous Computing System

    OpenAIRE

    Žumer, Viljem; Brest, Janez

    2002-01-01

    A simple method for dynamic scheduling on a heterogeneous computing system is proposed in this paper. It was implemented to minimize parallel program execution time. The proposed method decomposes the program workload into computationally homogeneous subtasks, which may be of different sizes depending on the current load of each machine in the heterogeneous computing system.

  13. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define the m...

  14. Numerical methods design, analysis, and computer implementation of algorithms

    CERN Document Server

    Greenbaum, Anne

    2012-01-01

    Numerical Methods provides a clear and concise exploration of standard numerical analysis topics, as well as nontraditional ones, including mathematical modeling, Monte Carlo methods, Markov chains, and fractals. Filled with appealing examples that will motivate students, the textbook considers modern application areas, such as information retrieval and animation, and classical topics from physics and engineering. Exercises use MATLAB and promote understanding of computational results. The book gives instructors the flexibility to emphasize different aspects--design, analysis, or computer implementation--of numerical algorithms, depending on the background and interests of students. Designed for upper-division undergraduates in mathematics or computer science classes, the textbook assumes that students have prior knowledge of linear algebra and calculus, although these topics are reviewed in the text. Short discussions of the history of numerical methods are interspersed throughout the chapters. The book a...

  15. Reference depth for geostrophic computation - A new method

    Digital Repository Service at National Institute of Oceanography (India)

    Varkey, M.J.; Sastry, J.S.

    Various methods are available for the determination of reference depth for geostrophic computation. A new method based on the vertical profiles of mean and variance of the differences of mean specific volume anomaly (delta x 10) for different layers...

  16. Permeability computation on a REV with an immersed finite element method

    International Nuclear Information System (INIS)

    Laure, P.; Puaux, G.; Silva, L.; Vincent, M.

    2011-01-01

    An efficient method to compute the permeability of fibrous media is presented. An immersed domain approach is used to represent the porous material at its microscopic scale, and the flow motion is computed with a stabilized mixed finite element method. The Stokes equation is therefore solved on the whole domain (including the solid part) using a penalty method. The accuracy is controlled by refining the mesh around the solid-fluid interface, which is defined by a level set function. Using homogenisation techniques, the permeability of a representative elementary volume (REV) is computed. The computed permeabilities of regular fibre packings are compared to classical analytical relations found in the literature.

  17. Modelling a New Product Model on the Basis of an Existing STEP Application Protocol

    Directory of Open Access Journals (Sweden)

    B.-R. Hoehn

    2005-01-01

    Full Text Available In recent years a great range of computer-aided tools has been created to support the development process of various products. The goal of a continuous data flow, needed for high efficiency, requires powerful standards for data exchange. At the FZG (Gear Research Centre of the Technical University of Munich) there was a need for a common gear data format for data exchange between gear calculation programs. The STEP standard ISO 10303 was developed for this type of purpose, but a suitable definition of gear data was still missing, even in the Application Protocol AP 214, developed for the design process in the automotive industry. The creation of a new STEP Application Protocol, or the extension of an existing protocol, would be a very time-consuming normative process, so a new method was introduced by FZG. Some very general definitions of an Application Protocol (here AP 214) were used to determine rules for an exact specification of the required kind of data. In this case a product model for gear units was defined based on elements of AP 214, so no change to the Application Protocol is necessary. Meanwhile the product model for gear units has been published as a VDMA paper and successfully introduced for data exchange within the German gear industry associated with FVA (the German Research Organisation for Gears and Transmissions). This method can also be adopted for other applications not yet sufficiently defined by STEP.

  18. Developing frameworks for protocol implementation

    NARCIS (Netherlands)

    de Barros Barbosa, C.; de barros Barbosa, C.; Ferreira Pires, Luis

    1999-01-01

    This paper presents a method to develop frameworks for protocol implementation. Frameworks are software structures developed for a specific application domain, which can be reused in the implementation of various different concrete systems in this domain. The use of frameworks supports a protocol

  19. Analysis of Pervasive Mobile Ad Hoc Routing Protocols

    Science.gov (United States)

    Qadri, Nadia N.; Liotta, Antonio

    Mobile ad hoc networks (MANETs) are a fundamental element of pervasive networks and therefore, of pervasive systems that truly support pervasive computing, where user can communicate anywhere, anytime and on-the-fly. In fact, future advances in pervasive computing rely on advancements in mobile communication, which includes both infrastructure-based wireless networks and non-infrastructure-based MANETs. MANETs introduce a new communication paradigm, which does not require a fixed infrastructure - they rely on wireless terminals for routing and transport services. Due to highly dynamic topology, absence of established infrastructure for centralized administration, bandwidth constrained wireless links, and limited resources in MANETs, it is challenging to design an efficient and reliable routing protocol. This chapter reviews the key studies carried out so far on the performance of mobile ad hoc routing protocols. We discuss performance issues and metrics required for the evaluation of ad hoc routing protocols. This leads to a survey of existing work, which captures the performance of ad hoc routing algorithms and their behaviour from different perspectives and highlights avenues for future research.

  20. Robustness and device independence of verifiable blind quantum computing

    International Nuclear Information System (INIS)

    Gheorghiu, Alexandru; Kashefi, Elham; Wallden, Petros

    2015-01-01

    Recent advances in theoretical and experimental quantum computing bring us closer to scalable quantum computing devices. This makes the need for protocols that verify the correct functionality of quantum operations timely and has led to the field of quantum verification. In this paper we address key challenges to make quantum verification protocols applicable to experimental implementations. We prove the robustness of the single server verifiable universal blind quantum computing protocol of Fitzsimons and Kashefi (2012 arXiv:1203.5217) in the most general scenario. This includes the case where the purification of the deviated input state is in the hands of an adversarial server. The proved robustness property allows the composition of this protocol with a device-independent state tomography protocol that we give, which is based on the rigidity of CHSH games as proposed by Reichardt et al (2013 Nature 496 456–60). The resulting composite protocol has lower round complexity for the verification of entangled quantum servers with a classical verifier and, as we show, can be made fault tolerant. (paper)

  1. Security analysis of the decoy method with the Bennett–Brassard 1984 protocol for finite key lengths

    International Nuclear Information System (INIS)

    Hayashi, Masahito; Nakayama, Ryota

    2014-01-01

    This paper provides a formula for the sacrifice bit-length for privacy amplification with the Bennett–Brassard 1984 protocol for finite key lengths, when we employ the decoy method. Using the formula, we can guarantee the security parameter for a realizable quantum key distribution system. The key generation rates with finite key lengths are numerically evaluated. The proposed method improves the existing key generation rate even in the asymptotic setting. (paper)

  2. Efficient Communication Protocols for Deciding Edit Distance

    DEFF Research Database (Denmark)

    Jowhari, Hossein

    2012-01-01

    In this paper we present two communication protocols on computing edit distance. In our first result, we give a one-way protocol for the following Document Exchange problem. Namely, given x ∈ Σ^n to Alice and y ∈ Σ^n to Bob and integer k to both, Alice sends a message to Bob so that he learns x...... or truthfully reports that the edit distance between x and y is greater than k. For this problem, we give a randomized protocol in which Alice transmits at most Õ(k log^2 n) bits and each party's time complexity is Õ(n log n + k^2 log^2 n). Our second result is a simultaneous protocol for edit distance over...... permutations. Here Alice and Bob both send a message to a third party (the referee) who does not have access to the input strings. Given the messages, the referee decides if the edit distance between x and y is at most k or not. For this problem we give a protocol in which Alice and Bob run a O...
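
    The communication protocols themselves are not reconstructed here, but the decision problem they address — is the edit distance between x and y at most k? — is classically solved with a banded dynamic program that visits only the 2k+1 diagonals an alignment of cost at most k can touch. A generic sketch of that standard technique (not the paper's construction):

      def edit_distance_at_most_k(x, y, k):
          # Banded DP: cells farther than k from the main diagonal cannot
          # lie on an alignment of cost <= k, so they are never filled in.
          n, m = len(x), len(y)
          if abs(n - m) > k:
              return False
          INF = k + 1
          prev = {j: j for j in range(min(m, k) + 1)}   # row 0: D[0][j] = j
          for i in range(1, n + 1):
              cur = {}
              for j in range(max(0, i - k), min(m, i + k) + 1):
                  best = prev.get(j, INF) + 1                       # deletion
                  if j > 0:
                      best = min(best, cur.get(j - 1, INF) + 1)     # insertion
                      best = min(best, prev.get(j - 1, INF) + (x[i-1] != y[j-1]))
                  cur[j] = min(best, INF)
              prev = cur
          return prev.get(m, INF) <= k

      assert edit_distance_at_most_k("kitten", "sitting", 3)
      assert not edit_distance_at_most_k("kitten", "sitting", 2)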

  3. RT-PCR protocols [Methods in molecular biology, v. 193

    National Research Council Canada - National Science Library

    O'Connell, Joseph

    2002-01-01

    .... Here the newcomer will find readily reproducible protocols for highly sensitive detection and quantification of gene expression, the in situ localization of gene expression in tissue, and the cloning...

  4. RadNet: Open network protocol for radiation data

    International Nuclear Information System (INIS)

    Rees, B.; Olson, K.; Beckes-Talcott, J.; Kadner, S.; Wenderlich, T.; Hoy, M.; Doyle, W.; Koskelo, M.

    1998-01-01

    Safeguards instrumentation is increasingly being incorporated into remote monitoring applications. In the past, vendors of radiation monitoring instruments typically provided the tools for uploading the monitoring data to a host. However, the proprietary nature of communication protocols lends itself to increased computer support needs and increased installation expenses. As a result, a working group of suppliers and customers of radiation monitoring instruments defined an open network protocol for transferring packets on a local area network from radiation monitoring equipment to network hosts. The protocol was termed RadNet. While it is now primarily used for health physics instruments, RadNet's flexibility and strength make it ideal for remote monitoring of nuclear materials. The incorporation of standard, open protocols ensures that future work will not render present work obsolete, because RadNet utilizes standard Internet protocols and is itself a non-proprietary standard. The use of industry standards also simplifies the development and implementation of ancillary services, e.g. e-mail generation or even pager systems

  5. A hybrid method for the computation of quasi-3D seismograms.

    Science.gov (United States)

    Masson, Yder; Romanowicz, Barbara

    2013-04-01

    The development of powerful computer clusters and efficient numerical methods, such as the Spectral Element Method (SEM), has made possible the computation of seismic wave propagation in a heterogeneous 3D earth. However, the cost of these computations is still problematic for global-scale tomography, which requires hundreds of such simulations. Part of the ongoing research effort is dedicated to the development of faster modeling methods based on the spectral element method. Capdeville et al. (2002) proposed to couple SEM simulations with normal mode calculations (C-SEM). Nissen-Meyer et al. (2007) used 2D SEM simulations to compute 3D seismograms in a 1D earth model. Thanks to these developments, and for the first time, Lekic et al. (2011) developed a 3D global model of the upper mantle using SEM simulations. At the local and continental scale, adjoint tomography, which uses many SEM simulations, can be implemented on current computers (Tape, Liu et al. 2009). Due to their smaller size, these models offer higher resolution and provide us with images of the crust and the upper part of the mantle. In an attempt to extend such local adjoint tomographic inversions into the deep earth, we are developing a hybrid method where SEM computations are limited to a region of interest within the earth. That region can have an arbitrary shape and size. Outside this region, the seismic wavefield is extrapolated to obtain synthetic data at the Earth's surface. A key feature of the method is the use of a time-reversal mirror to inject the wavefield induced by a distant seismic source into the region of interest (Robertsson and Chapman 2000). We compute synthetic seismograms as follows: inside the region of interest, we use the regional spectral element software RegSEM to compute wave propagation in 3D; outside this region, the wavefield is extrapolated to the surface by convolution with the Green's functions from the mirror to the seismic stations. For now, these

  6. System and methods for predicting transmembrane domains in membrane proteins and mining the genome for recognizing G-protein coupled receptors

    Science.gov (United States)

    Trabanino, Rene J; Vaidehi, Nagarajan; Hall, Spencer E; Goddard, William A; Floriano, Wely

    2013-02-05

    The invention provides computer-implemented methods and apparatus implementing a hierarchical protocol using multiscale molecular dynamics and molecular modeling methods to predict the presence of transmembrane regions in proteins, such as G-Protein Coupled Receptors (GPCR), and protein structural models generated according to the protocol. The protocol features a coarse grain sampling method, such as hydrophobicity analysis, to provide a fast and accurate procedure for predicting transmembrane regions. Methods and apparatus of the invention are useful to screen protein or polynucleotide databases for encoded proteins with transmembrane regions, such as GPCRs.

  7. Computational methods for describing the laser-induced mechanical response of tissue

    Energy Technology Data Exchange (ETDEWEB)

    Trucano, T.; McGlaun, J.M.; Farnsworth, A.

    1994-02-01

    Detailed computational modeling of laser surgery requires treatment of the photoablation of human tissue by high intensity pulses of laser light and the subsequent thermomechanical response of the tissue. Three distinct physical regimes must be considered to accomplish this: (1) the immediate absorption of the laser pulse by the tissue and following tissue ablation, which is dependent upon tissue light absorption characteristics; (2) the near field thermal and mechanical response of the tissue to this laser pulse, and (3) the potential far field (and longer time) mechanical response of witness tissue. Both (2) and (3) are dependent upon accurate constitutive descriptions of the tissue. We will briefly review tissue absorptivity and mechanical behavior, with an emphasis on dynamic loads characteristic of the photoablation process. In this paper our focus will center on the requirements of numerical modeling and the uncertainties of mechanical tissue behavior under photoablation. We will also discuss potential contributions that computational simulations can make in the design of surgical protocols which utilize lasers, for example, in assessing the potential for collateral mechanical damage by laser pulses.

  8. Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.

    Science.gov (United States)

    Battiti, Roberto

    1990-01-01

    This thesis presents new algorithms for low and intermediate level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion with respect to the homogeneous scheme. In some cases introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real -time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium grain distributed memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from

  9. A Computationally Efficient Method for Polyphonic Pitch Estimation

    Directory of Open Access Journals (Sweden)

    Ruohua Zhou

    2009-01-01

    Full Text Available This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimate is obtained by means of a simple peak-picking procedure in the pitch energy spectrum, which is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then incorrect estimates are removed based on spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.
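
    As a rough illustration of the harmonic-grouping and peak-picking stages (the RTFI itself is not reproduced), the sketch below scores candidate fundamentals by summing harmonically down-weighted spectral energy on an ordinary FFT spectrum. All parameters and signals are invented:

      import numpy as np

      def pitch_candidates(signal, fs, fmin=55.0, fmax=880.0, n_harm=5, top=3):
          # Magnitude spectrum (a stand-in for the RTFI energy spectrum).
          spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))

          # Harmonic grouping: each candidate f0 collects energy at h*f0,
          # down-weighted by 1/h to suppress subharmonic (octave) errors.
          f0s = np.arange(fmin, fmax, 1.0)
          scores = np.zeros_like(f0s)
          for i, f0 in enumerate(f0s):
              for h in range(1, n_harm + 1):
                  b = int(round(h * f0 * len(signal) / fs))
                  if b < len(spec):
                      scores[i] += spec[b] / h

          # Peak-picking: keep the strongest local maxima as pitch estimates.
          peaks = [i for i in range(1, len(scores) - 1)
                   if scores[i - 1] < scores[i] >= scores[i + 1]]
          peaks.sort(key=lambda i: scores[i], reverse=True)
          return [f0s[i] for i in peaks[:top]]

      # Toy usage: two simultaneous harmonic tones at 220 Hz and 330 Hz.
      fs = 8000
      t = np.arange(fs) / fs
      sig = sum(np.sin(2 * np.pi * f * h * t) / h
                for f in (220.0, 330.0) for h in (1, 2, 3))
      print(pitch_candidates(sig, fs))   # should include ~220 and ~330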

  10. Information-theoretic security proof for quantum-key-distribution protocols

    International Nuclear Information System (INIS)

    Renner, Renato; Gisin, Nicolas; Kraus, Barbara

    2005-01-01

    We present a technique for proving the security of quantum-key-distribution (QKD) protocols. It is based on direct information-theoretic arguments and thus also applies if no equivalent entanglement purification scheme can be found. Using this technique, we investigate a general class of QKD protocols with one-way classical post-processing. We show that, in order to analyze the full security of these protocols, it suffices to consider collective attacks. Indeed, we give new lower and upper bounds on the secret-key rate which only involve entropies of two-qubit density operators and which are thus easy to compute. As an illustration of our results, we analyze the Bennett-Brassard 1984, the six-state, and the Bennett 1992 protocols with one-way error correction and privacy amplification. Surprisingly, the performance of these protocols is increased if one of the parties adds noise to the measurement data before the error correction. In particular, this additional noise makes the protocols more robust against noise in the quantum channel

  11. Molecular dynamics simulations and applications in computational toxicology and nanotoxicology.

    Science.gov (United States)

    Selvaraj, Chandrabose; Sakkiah, Sugunadevi; Tong, Weida; Hong, Huixiao

    2018-02-01

    Nanotoxicology studies the toxicity of nanomaterials and has been widely applied in biomedical research to explore the toxicity of various biological systems. Investigating biological systems through in vivo and in vitro methods is expensive and time consuming. Therefore, computational toxicology, a multi-discipline field that utilizes computational power and algorithms to examine the toxicology of biological systems, has gained traction among scientists. Molecular dynamics (MD) simulations of biomolecules such as proteins and DNA are popular for understanding interactions between biological systems and chemicals in computational toxicology. In this paper, we review MD simulation methods, protocols for running MD simulations and their applications in studies of toxicity and nanotechnology. We also briefly summarize some popular software tools for the execution of MD simulations. Published by Elsevier Ltd.

  12. A secure distributed logistic regression protocol for the detection of rare adverse drug events.

    Science.gov (United States)

    El Emam, Khaled; Samet, Saeed; Arbuckle, Luk; Tamblyn, Robyn; Earle, Craig; Kantarcioglu, Murat

    2013-05-01

    There is limited capacity to assess the comparative risks of medications after they enter the market. For rare adverse events, the pooling of data from multiple sources is necessary to have the power and sufficient population heterogeneity to detect differences in safety and effectiveness in genetic, ethnic and clinically defined subpopulations. However, combining datasets from different data custodians or jurisdictions to perform an analysis on the pooled data creates significant privacy concerns that would need to be addressed. Existing protocols for addressing these concerns can result in reduced analysis accuracy and can allow sensitive information to leak. To develop a secure distributed multi-party computation protocol for logistic regression that provides strong privacy guarantees. We developed a secure distributed logistic regression protocol using a single analysis center with multiple sites providing data. A theoretical security analysis demonstrates that the protocol is robust to plausible collusion attacks and does not allow the parties to gain new information from the data that are exchanged among them. The computational performance and accuracy of the protocol were evaluated on simulated datasets. The computational performance scales linearly as the dataset sizes increase. The addition of sites results in an exponential growth in computation time. However, for up to five sites, the time is still short and would not affect practical applications. The model parameters are the same as the results on pooled raw data analyzed in SAS, demonstrating high model accuracy. The proposed protocol and prototype system would allow the development of logistic regression models in a secure manner without requiring the sharing of personal health information. This can alleviate one of the key barriers to the establishment of large-scale post-marketing surveillance programs. We extended the secure protocol to account for correlations among patients within sites through
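
    The published protocol involves considerably more cryptographic machinery, but its basic building block — sites contributing a sum (say, of gradient components for the regression) to the analysis center without revealing their local values — can be illustrated with additive secret sharing. The sketch below is a toy, not the authors' protocol:

      import secrets

      MOD = 2**61 - 1   # all arithmetic is done modulo a public prime

      def share(value, n_parties):
          # Split an integer into n additive shares that sum to it mod MOD.
          shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
          shares.append((value - sum(shares)) % MOD)
          return shares

      # Each site holds a private local statistic (e.g., a gradient term).
      site_values = [42, 17, 99]
      n = len(site_values)

      # Site i secret-shares its value; party j only ever sees share j of
      # each value, and publishes the sum of the shares it received.
      all_shares = [share(v, n) for v in site_values]
      published = [sum(all_shares[i][j] for i in range(n)) % MOD
                   for j in range(n)]

      total = sum(published) % MOD
      assert total == sum(site_values)   # the sum is 158; no addend leaks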

  13. Behaviour Protocols Verification: Fighting State Explosion

    Czech Academy of Sciences Publication Activity Database

    Mach, M.; Plášil, František; Kofroň, Jan

    2005-01-01

    Roč. 6, č. 2 (2005), s. 22-30 ISSN 1525-9293 R&D Projects: GA ČR(CZ) GA102/03/0672 Institutional research plan: CEZ:AV0Z10300504 Keywords: formal verification * software components * state explosion * behavior protocols * parse trees Subject RIV: JC - Computer Hardware ; Software

  14. Geometric optical transfer function and its computation method

    International Nuclear Information System (INIS)

    Wang Qi

    1992-01-01

    The geometric optical transfer function formula is derived after expounding some easily overlooked points, and a computation method is given that uses the zero-order Bessel function, numerical integration and spline interpolation. The method has the advantage of ensuring accuracy while saving computation

  15. From human monocytes to genome-wide binding sites--a protocol for small amounts of blood: monocyte isolation/ChIP-protocol/library amplification/genome wide computational data analysis.

    Directory of Open Access Journals (Sweden)

    Sebastian Weiterer

    Full Text Available Chromatin immunoprecipitation in combination with genome-wide analysis via high-throughput sequencing is the state-of-the-art method to obtain a genome-wide representation of histone modification or transcription factor binding profiles. However, chromatin immunoprecipitation analysis in the context of human experimental samples is limited, especially in the case of blood cells, because the typically extremely low yields of precipitated DNA are usually not compatible with library amplification for next-generation sequencing. We developed a highly reproducible protocol that provides a guideline from the first step of isolating monocytes from a blood sample through to analysing the distribution of histone modifications in a genome-wide manner. The protocol describes the whole workflow: isolating monocytes from human blood samples, followed by a high-sensitivity, small-scale chromatin immunoprecipitation assay, with guidance for generating libraries compatible with next-generation sequencing from small amounts of immunoprecipitated DNA.

  16. Multiparametric multidetector computed tomography scanning on suspicion of hyperacute ischemic stroke: validating a standardized protocol

    Directory of Open Access Journals (Sweden)

    Felipe Torres Pacheco

    2013-06-01

    Full Text Available Multidetector computed tomography (MDCT) scanning has enabled the early diagnosis of hyperacute brain ischemia. We aimed at validating a standardized protocol to read and report MDCT techniques in a series of adult patients. The inter-observer agreement among the trained examiners was tested, and their results were compared with a standard reading. No false positives were observed, and an almost perfect agreement (Kappa>0.81) was documented when the CT angiography (CTA) and cerebral perfusion CT (CPCT) map data were added to the noncontrast CT (NCCT) analysis. The inter-observer agreement was higher for highly trained readers, corroborating the need for specific training to interpret these modern techniques. The authors recommend adding CTA and CPCT to the NCCT analysis in order to clarify the global analysis of structural and hemodynamic brain abnormalities. Our structured report is suitable as a script for the reproducible analysis of the MDCT of patients on suspicion of ischemic stroke.
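
    For reference, the agreement statistic quoted above (Cohen's kappa) corrects raw observer agreement for the agreement expected by chance; a minimal sketch with invented binary ratings:

      import numpy as np

      def cohens_kappa(r1, r2):
          # Observed agreement corrected for chance agreement.
          r1, r2 = np.asarray(r1), np.asarray(r2)
          po = np.mean(r1 == r2)
          pe = sum(np.mean(r1 == c) * np.mean(r2 == c)
                   for c in np.union1d(r1, r2))
          return (po - pe) / (1.0 - pe)

      # Hypothetical readings by two observers (1 = finding, 0 = none).
      a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
      b = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]
      print(cohens_kappa(a, b))   # 0.8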

  17. Network protocols and sockets

    OpenAIRE

    BALEJ, Marek

    2010-01-01

    This work deals with network protocols and sockets and their use in the C# programming language. It therefore covers the programming of network applications on Microsoft's .NET platform and the instruments that C# provides for this purpose. It describes the tools and methods for programming network applications and presents sample applications that work with sockets and application protocols.

  18. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  19. Protocol Monitoring Energy Conservation; Protocol Monitoring Energiebesparing

    Energy Technology Data Exchange (ETDEWEB)

    Boonekamp, P.G.M. [ECN Beleidsstudies, Petten (Netherlands); Mannaerts, H. [Centraal Planburea CPB, Den Haag (Netherlands); Tinbergen, W. [Centraal Bureau voor de Statistiek CBS, Den Haag (Netherlands); Vreuls, H.H.J. [Nederlandse onderneming voor energie en milieu Novem, Utrecht (Netherlands); Wesselink, B. [Rijksinstituut voor Volksgezondheid en Milieuhygiene RIVM, Bilthoven (Netherlands)

    2001-12-01

    At the request of the Dutch Ministry of Economic Affairs, five institutes have collaborated to create a 'Protocol Monitoring Energy Conservation', a common method and database for calculating the amount of energy savings realised in past years. The institutes concerned are the Central Bureau of Statistics (CBS), the Netherlands Bureau for Economic Policy Analysis (CPB), the Energy research Centre of the Netherlands (ECN), the National Agency for Energy and Environment (Novem) and the Netherlands Institute of Public Health and the Environment (RIVM). The institutes have agreed upon a clear definition of energy use and energy savings. The demarcation with renewable energy, the saving effects of substitution between energy carriers, and the role of imports and exports of energy have been elaborated. A decomposition method is used to split the observed change in energy use into a number of effects, on a national and sectoral level. This method includes an analysis of growth effects, effects of structural changes in production and consumption activities, and savings on end use or with more efficient conversion processes. To calculate these effects the total energy use is disaggregated as much as possible. For each segment a reference energy use is calculated according to the trend in a variable that is supposed to be representative of the use without savings. The difference with the actual energy use is taken as the savings realised. Results are given for the sectors households, industry, agriculture, services and government, transportation and the energy sector, as well as a national figure. A special feature of the protocol method is the use of primary energy use figures in the determination of savings for end users. This means that the use of each energy carrier is increased by a certain amount, according to the conversion losses caused elsewhere in the energy system. The losses concern the base year energy sector and losses abroad for imports of secondary
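
    A toy numeric sketch of the reference-use idea described above — scale the base-year energy use with a representative activity variable, and count the shortfall of actual use as savings. All figures are invented:

      # Reference use scales base-year use with an activity index;
      # savings are the difference between reference and actual use.
      base_use = 100.0                          # PJ in the base year
      activity = [1.00, 1.04, 1.09, 1.15]       # activity index per year
      actual   = [100.0, 102.0, 104.5, 107.0]   # observed use in PJ

      for year, (a, u) in enumerate(zip(activity, actual)):
          reference = base_use * a              # expected use without savings
          print(f"year {year}: reference {reference:.1f} PJ, "
                f"savings {reference - u:.1f} PJ")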

  1. Self-Awareness in Computer Networks

    Directory of Open Access Journals (Sweden)

    Ariane Keller

    2014-01-01

    Full Text Available The Internet architecture works well for a wide variety of communication scenarios. However, its flexibility is limited because it was initially designed to provide communication links between a few static nodes in a homogeneous network and did not attempt to solve the challenges of today’s dynamic network environments. Although the Internet has evolved to a global system of interconnected computer networks, which links together billions of heterogeneous compute nodes, its static architecture remained more or less the same. Nowadays the diversity in networked devices, communication requirements, and network conditions vary heavily, which makes it difficult for a static set of protocols to provide the required functionality. Therefore, we propose a self-aware network architecture in which protocol stacks can be built dynamically. Those protocol stacks can be optimized continuously during communication according to the current requirements. For this network architecture we propose an FPGA-based execution environment called EmbedNet that allows for a dynamic mapping of network protocols to either hardware or software. We show that our architecture can reduce the communication overhead significantly by adapting the protocol stack and that the dynamic hardware/software mapping of protocols considerably reduces the CPU load introduced by packet processing.

  2. Spatial analysis statistics, visualization, and computational methods

    CERN Document Server

    Oyana, Tonny J

    2015-01-01

    An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis-containing hands-on problem-sets that can be worked out in MS Excel or ArcGIS-as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results Construct testable hypotheses that require inferential statistical analysis Process spatial data, extract explanatory variables, conduct statisti...

  3. Simulating elastic light scattering using high performance computing methods

    NARCIS (Netherlands)

    Hoekstra, A.G.; Sloot, P.M.A.; Verbraeck, A.; Kerckhoffs, E.J.H.

    1993-01-01

    The Coupled Dipole method, as originally formulated by Purcell and Pennypacker, is a very powerful method to simulate the Elastic Light Scattering from arbitrary particles. This method, which is a particle simulation model for Computational Electromagnetics, has one major drawback: if the size of the

  4. Systems, computer-implemented methods, and tangible computer-readable storage media for wide-field interferometry

    Science.gov (United States)

    Lyon, Richard G. (Inventor); Leisawitz, David T. (Inventor); Rinehart, Stephen A. (Inventor); Memarsadeghi, Nargess (Inventor)

    2012-01-01

    Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for wide field imaging interferometry. The method includes for each point in a two dimensional detector array over a field of view of an image: gathering a first interferogram from a first detector and a second interferogram from a second detector, modulating a path-length for a signal from an image associated with the first interferogram in the first detector, overlaying first data from the modulated first detector and second data from the second detector, and tracking the modulating at every point in a two dimensional detector array comprising the first detector and the second detector over a field of view for the image. The method then generates a wide-field data cube based on the overlaid first data and second data for each point. The method can generate an image from the wide-field data cube.

  5. Cloud Computing Bible

    CERN Document Server

    Sosinsky, Barrie

    2010-01-01

    The complete reference guide to the hot technology of cloud computingIts potential for lowering IT costs makes cloud computing a major force for both IT vendors and users; it is expected to gain momentum rapidly with the launch of Office Web Apps later this year. Because cloud computing involves various technologies, protocols, platforms, and infrastructure elements, this comprehensive reference is just what you need if you'll be using or implementing cloud computing.Cloud computing offers significant cost savings by eliminating upfront expenses for hardware and software; its growing popularit

  6. Integrating computational methods to retrofit enzymes to synthetic pathways.

    Science.gov (United States)

    Brunk, Elizabeth; Neri, Marilisa; Tavernelli, Ivano; Hatzimanikatis, Vassily; Rothlisberger, Ursula

    2012-02-01

    Microbial production of desired compounds provides an efficient framework for the development of renewable energy resources. To be competitive to traditional chemistry, one requirement is to utilize the full capacity of the microorganism to produce target compounds with high yields and turnover rates. We use integrated computational methods to generate and quantify the performance of novel biosynthetic routes that contain highly optimized catalysts. Engineering a novel reaction pathway entails addressing feasibility on multiple levels, which involves handling the complexity of large-scale biochemical networks while respecting the critical chemical phenomena at the atomistic scale. To pursue this multi-layer challenge, our strategy merges knowledge-based metabolic engineering methods with computational chemistry methods. By bridging multiple disciplines, we provide an integral computational framework that could accelerate the discovery and implementation of novel biosynthetic production routes. Using this approach, we have identified and optimized a novel biosynthetic route for the production of 3HP from pyruvate. Copyright © 2011 Wiley Periodicals, Inc.

  7. Scalable Multiparty Computation with Nearly Optimal Work and Resilience

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Krøigaard, Mikkel; Ishai, Yuval

    2008-01-01

    We present the first general protocol for secure multiparty computation in which the total amount of work required by n players to compute a function f grows only polylogarithmically with n (ignoring an additive term that depends on n but not on the complexity of f). Moreover, the protocol is also...

  8. Rational Multiparty Computation

    OpenAIRE

    Wallrabenstein, John Ross

    2014-01-01

    The field of rational cryptography considers the design of cryptographic protocols in the presence of rational agents seeking to maximize local utility functions. This departs from the standard secure multiparty computation setting, where players are assumed to be either honest or malicious. We detail the construction of both a two-party and a multiparty game theoretic framework for constructing rational cryptographic protocols. Our framework specifies the utility function assumptions neces...

  9. A novel computed method to reconstruct the bilateral digital interarticular channel of atlas and its use on the anterior upper cervical screw fixation

    Directory of Open Access Journals (Sweden)

    Ai-Min Wu

    2016-02-01

    Full Text Available Purpose. To investigate a novel computed method to reconstruct the bilateral digital interarticular channel of atlas and its potential use in anterior upper cervical screw fixation. Methods. We have used reverse engineering software (image-processing software and computer-aided design software) to create the approximate and optimal digital interarticular channels of atlas for 60 participants. Angles of channels, diameters of inscribed circles, and long and short axes of ellipses were measured and recorded, and gender-specific analysis was also performed. Results. The channels provided sufficient space for one or two screws, and the parameters of the channels are described. While the channels of females were smaller than those of males, no significant difference in angles between males and females was observed. Conclusion. Our study demonstrates the radiological features of the approximate and optimal digital interarticular channels of atlas, and provides reference trajectories for anterior transarticular screws and anterior occiput-to-axis screws. Additionally, we provide a protocol that can help make a pre-operative plan for accurate placement of anterior transarticular screws and anterior occiput-to-axis screws.

  10. Blind Quantum Computation

    DEFF Research Database (Denmark)

    Salvail, Louis; Arrighi, Pablo

    2006-01-01

    We investigate the possibility of "having someone carry out the work of executing a function for you, but without letting him learn anything about your input". Say Alice wants Bob to compute some known function f upon her input x, but wants to prevent Bob from learning anything about x. The situation arises for instance if client Alice has limited computational resources in comparison with mistrusted server Bob, or if x is an inherently mobile piece of data. Could there be a protocol whereby Bob is forced to compute f(x) "blindly", i.e. without observing x? We provide such a blind computation protocol for the class of functions which admit an efficient procedure to generate random input-output pairs, e.g. factorization. The cheat-sensitive security achieved relies only upon quantum theory being true. The security analysis carried out assumes the eavesdropper performs individual attacks.

  11. Cochrane Qualitative and Implementation Methods Group guidance series-paper 2: methods for question formulation, searching, and protocol development for qualitative evidence synthesis.

    Science.gov (United States)

    Harris, Janet L; Booth, Andrew; Cargo, Margaret; Hannes, Karin; Harden, Angela; Flemming, Kate; Garside, Ruth; Pantoja, Tomas; Thomas, James; Noyes, Jane

    2018-05-01

    This paper updates previous Cochrane guidance on question formulation, searching, and protocol development, reflecting recent developments in methods for conducting qualitative evidence syntheses to inform Cochrane intervention reviews. Examples are used to illustrate how decisions about boundaries for a review are formed via an iterative process of constructing lines of inquiry and mapping the available information to ascertain whether evidence exists to answer questions related to effectiveness, implementation, feasibility, appropriateness, economic evidence, and equity. The process of question formulation allows reviewers to situate the topic in relation to how it informs and explains effectiveness, using the criterion of meaningfulness, appropriateness, feasibility, and implementation. Questions related to complex questions and interventions can be structured by drawing on an increasingly wide range of question frameworks. Logic models and theoretical frameworks are useful tools for conceptually mapping the literature to illustrate the complexity of the phenomenon of interest. Furthermore, protocol development may require iterative question formulation and searching. Consequently, the final protocol may function as a guide rather than a prescriptive route map, particularly in qualitative reviews that ask more exploratory and open-ended questions. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. A novel quantum solution to secure two-party distance computation

    Science.gov (United States)

    Peng, Zhen-wan; Shi, Run-hua; Wang, Pan-hong; Zhang, Shun

    2018-06-01

    Secure Two-Party Distance Computation is an important primitive of Secure Multiparty Computational Geometry: it involves two parties, each holding a private point, who want to jointly compute the distance between their points without revealing anything about their respective private information. Secure Two-Party Distance Computation has very important and potential applications in settings with high security requirements, such as privacy-preserving determination of spatial location relations, determination of polygon similarity, and so on. In this paper, we present a quantum protocol for Secure Two-Party Distance Computation by using QKD-based Quantum Private Query. The security of the protocol is based on the physical principles of quantum mechanics, instead of difficulty assumptions, and therefore it can ensure higher security than the related classical protocols.

  13. A Krylov Subspace Method for Unstructured Mesh SN Transport Computation

    International Nuclear Information System (INIS)

    Yoo, Han Jong; Cho, Nam Zin; Kim, Jong Woon; Hong, Ser Gi; Lee, Young Ouk

    2010-01-01

    Hong et al. have developed a computer code MUST (Multi-group Unstructured geometry SN Transport) for neutral particle transport calculations in three-dimensional unstructured geometry. In this code, the discrete ordinates transport equation is solved by using the discontinuous finite element method (DFEM) or subcell balance methods with linear discontinuous expansion. In this paper, the conventional source iteration in the MUST code is replaced by a Krylov subspace method to reduce computing time, and the numerical test results are given
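
    The MUST transport sweeps are of course not reproduced here; the sketch below merely contrasts plain source (Richardson) iteration with a Krylov (GMRES) solve of the equivalent fixed-point system x = Kx + q, using a random contraction K as a stand-in for the scattering operator:

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      rng = np.random.default_rng(0)
      n = 200
      K = rng.random((n, n))
      K *= 0.9 / np.abs(np.linalg.eigvals(K)).max()   # spectral radius 0.9
      q = rng.random(n)

      # Source (Richardson) iteration: x <- K x + q until converged.
      x, its = np.zeros(n), 0
      while np.linalg.norm(K @ x + q - x) > 1e-8:
          x = K @ x + q
          its += 1

      # Krylov alternative: solve (I - K) x = q with GMRES.
      A = LinearOperator((n, n), matvec=lambda v: v - K @ v)
      x_krylov, info = gmres(A, q)

      print(its, np.linalg.norm(x - x_krylov))   # GMRES needs far fewer sweeps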

  14. A Combined Thermodynamics & Computational Method to Assess Lithium Composition in Anode and Cathode of Lithium Ion Batteries

    International Nuclear Information System (INIS)

    Zhang, Wenyu; Jiang, Lianlian; Van Durmen, Pauline; Saadat, Somaye; Yazami, Rachid

    2016-01-01

    With the aim of addressing the open question of accurately determining the lithium composition in the anode and cathode at a defined state of charge (SOC) of lithium ion batteries (LIB), we developed a method combining electrochemical thermodynamic measurements (ETM) and a computational data-fitting protocol. It is common knowledge that in a lithium ion battery the SOC of the anode and the cathode differ from the SOC of the full-cell. Differences are in large part due to irreversible lithium losses within the cell and to electrode mass unbalance. This implies that the lithium composition range covered in the anode and in the cathode during a full charge and discharge cycle in a full-cell is different from the composition range achieved in lithium half-cells of the anode and cathode over their respective full SOC ranges. To the authors' knowledge there is no unequivocal and practical method to determine the actual lithium composition of electrodes in a LIB, hence their SOC. Yet, accurate lithium composition assessment is fundamental not only for understanding the physics of electrodes but also for optimizing cell performance, particularly energy density and cycle life.

  15. Computational electrodynamics the finite-difference time-domain method

    CERN Document Server

    Taflove, Allen

    2005-01-01

    This extensively revised and expanded third edition of the Artech House bestseller, Computational Electrodynamics: The Finite-Difference Time-Domain Method, offers engineers the most up-to-date and definitive resource on this critical method for solving Maxwell's equations. The method helps practitioners design antennas, wireless communications devices, high-speed digital and microwave circuits, and integrated optical devices with unsurpassed efficiency. There has been considerable advancement in FDTD computational technology over the past few years, and the third edition brings professionals the very latest details with entirely new chapters on important techniques, major updates on key topics, and new discussions on emerging areas such as nanophotonics. What's more, to supplement the third edition, the authors have created a Web site with solutions to problems, downloadable graphics and videos, and updates, making this new edition the ideal textbook on the subject as well.

  16. Fully consistent CFD methods for incompressible flow computations

    DEFF Research Database (Denmark)

    Kolmogorov, Dmitry; Shen, Wen Zhong; Sørensen, Niels N.

    2014-01-01

    Nowadays collocated grid based CFD methods are one of the most efficient tools for computations of the flows past wind turbines. To ensure the robustness of the methods they require special attention to the well-known problem of pressure-velocity coupling. Many commercial codes to ensure the pressure...

  17. Behavior Protocols for Software Components

    Czech Academy of Sciences Publication Activity Database

    Plášil, František; Višňovský, Stanislav

    2002-01-01

    Roč. 28, č. 11 (2002), s. 1056-1076 ISSN 0098-5589 R&D Projects: GA AV ČR IAA2030902; GA ČR GA201/99/0244 Grant - others:Eureka(XE) Pepita project no.2033 Institutional research plan: AV0Z1030915 Keywords : behavior protocols * component-based programming * software architecture Subject RIV: JC - Computer Hardware ; Software Impact factor: 1.170, year: 2002

  18. Detection of furcation involvement using periapical radiography and 2 cone-beam computed tomography imaging protocols with and without a metallic post: An animal study

    Energy Technology Data Exchange (ETDEWEB)

    Salineiro, Fernanda Cristina Sales; Gialain, Ivan Onone; Kobayashi-Velasco, Solange; Pannuti, Claudio Mendes; Cavalcanti, Marcelo Gusmao Paraiso [Dept. of Stomatology, School of Dentistry, University of Sao Paulo, Sao Paulo (Brazil)

    2017-03-15

    The purpose of this study was to assess the accuracy, sensitivity, and specificity of the diagnosis of incipient furcation involvement with periapical radiography (PR) and 2 cone-beam computed tomography (CBCT) imaging protocols, and to test metal artifact interference. Mandibular second molars in 10 macerated pig mandibles were divided into those that showed no furcation involvement and those with lesions in the furcation area. Exams using PR and 2 different CBCT imaging protocols were performed with and without a metallic post. Each image was analyzed twice by 2 observers who rated the absence or presence of furcation involvement according to a 5-point scale. Receiver operating characteristic (ROC) curves were used to evaluate the accuracy, sensitivity, and specificity of the observations. The accuracy of the CBCT imaging protocols ranged from 67.5% to 82.5% in the images obtained with a metallic post and from 72.5% to 80% in those without a metallic post. The accuracy of PR ranged from 37.5% to 55% in the images with a metallic post and from 42.5% to 62.5% in those without a metallic post. The area under the ROC curve values for the CBCT imaging protocols ranged from 0.813 to 0.802, and for PR ranged from 0.503 to 0.448. Both CBCT imaging protocols showed higher accuracy, sensitivity, and specificity than PR in the detection of incipient furcation involvement. Based on these results, CBCT may be considered a reliable tool for detecting incipient furcation involvement following a clinical periodontal exam, even in the presence of a metallic post.

  19. A State-of-the-Art Review of the Real-Time Computer-Aided Study of the Writing Process

    Science.gov (United States)

    Abdel Latif, Muhammad M.

    2008-01-01

    Writing researchers have developed various methods for investigating the writing process since the 1970s. The early 1980s saw the emergence of the real-time computer-aided study of the writing process, which relies on the protocols generated by recording the computer screen activities as writers compose using a word processor. This article…

  20. High performance computing and quantum trajectory method in CPU and GPU systems

    International Nuclear Information System (INIS)

    Wiśniewska, Joanna; Sawerwain, Marek; Leoński, Wiesław

    2015-01-01

    Nowadays, a dynamic progress in computational techniques allows for development of various methods, which offer significant speed-up of computations, especially those related to the problems of quantum optics and quantum computing. In this work, we propose computational solutions which re-implement the quantum trajectory method (QTM) algorithm in modern parallel computation environments in which multi-core CPUs and modern many-core GPUs can be used. In consequence, new computational routines are developed in more effective way than those applied in other commonly used packages, such as Quantum Optics Toolbox (QOT) for Matlab or QuTIP for Python

  1. Study protocol: a randomized controlled trial of a computer-based depression and substance abuse intervention for people attending residential substance abuse treatment

    Directory of Open Access Journals (Sweden)

    Kelly Peter J

    2012-02-01

    Abstract Background A large proportion of people attending residential alcohol and other substance abuse treatment have a co-occurring mental illness. Empirical evidence suggests that it is important to treat both the substance abuse problem and co-occurring mental illness concurrently and in an integrated fashion. However, the majority of residential alcohol and other substance abuse services do not address mental illness in a systematic way. It is likely that computer delivered interventions could improve the ability of substance abuse services to address co-occurring mental illness. This protocol describes a study in which we will assess the effectiveness of adding a computer delivered depression and substance abuse intervention for people who are attending residential alcohol and other substance abuse treatment. Methods/Design Participants will be recruited from residential rehabilitation programs operated by the Australian Salvation Army. All participants who satisfy the diagnostic criteria for an alcohol or other substance dependence disorder will be asked to participate in the study. After completion of a baseline assessment, participants will be randomly assigned to either a computer delivered substance abuse and depression intervention (treatment condition) or to a computer-delivered typing tutorial (active control condition). All participants will continue to complete The Salvation Army residential program, a predominantly 12-step based treatment facility. Randomisation will be stratified by gender (Male, Female), length of time the participant has been in the program at the commencement of the study (4 weeks or less, 4 weeks or more), and use of anti-depressant medication (currently prescribed medication, not prescribed medication). Participants in both conditions will complete computer sessions twice per week, over a five-week period. Research staff blind to treatment allocation will complete the assessments at baseline, and then 3, 6, 9

  2. Authentication Protocols for Internet of Things: A Comprehensive Survey

    Directory of Open Access Journals (Sweden)

    Mohamed Amine Ferrag

    2017-01-01

    In this paper, a comprehensive survey of authentication protocols for the Internet of Things (IoT) is presented. Specifically, more than forty authentication protocols developed for or applied in the context of the IoT are selected and examined in detail. These protocols are categorized based on the target environment: (1) Machine to Machine Communications (M2M), (2) Internet of Vehicles (IoV), (3) Internet of Energy (IoE), and (4) Internet of Sensors (IoS). Threat models, countermeasures, and formal security verification techniques used in authentication protocols for the IoT are presented. In addition, a taxonomy and comparison of authentication protocols that are developed for the IoT in terms of network model, specific security goals, main processes, computation complexity, and communication overhead are provided. Based on the current survey, open issues are identified and future research directions are proposed.

  3. Multi-centred mixed-methods PEPFAR HIV care & support public health evaluation: study protocol

    Directory of Open Access Journals (Sweden)

    Fayers Peter

    2010-09-01

    Abstract Background A public health response is essential to meet the multidimensional needs of patients and families affected by HIV disease in sub-Saharan Africa. In order to appraise current provision of HIV care and support in East Africa, and to provide evidence-based direction to future care programming, a Public Health Evaluation was commissioned by the PEPFAR programme of the US Government. Methods/Design This paper describes the 2-Phase international mixed methods study protocol utilising longitudinal outcome measurement, surveys, patient and family qualitative interviews and focus groups, staff qualitative interviews, health economics and document analysis. Aim 1: To describe the nature and scope of HIV care and support in two African countries, including the types of facilities available, clients seen, and availability of specific components of care [Study Phase 1]. Aim 2: To determine patient health outcomes over time and principal cost drivers [Study Phase 2]. The study objectives are as follows. (1) To undertake a cross-sectional survey of service configuration and activity by sampling 10% of the facilities being funded by PEPFAR to provide HIV care and support in Kenya and Uganda (Phase 1) in order to describe care currently provided, including pharmacy drug reviews to determine availability and supply of essential drugs in HIV management. (2) To conduct patient focus group discussions at each of these (Phase 1) to determine care received. (3) To undertake a longitudinal prospective study of 1200 patients who are newly diagnosed with HIV or patients with HIV who present with a new problem attending PEPFAR care and support services. Data collection includes self-reported quality of life, core palliative outcomes and components of care received (Phase 2). (4) To conduct qualitative interviews with staff, patients and carers in order to explore and understand service issues and care provision in more depth (Phase 2). (5) To undertake document

  4. Event-by-event simulation of quantum cryptography protocols

    NARCIS (Netherlands)

    Zhao, S.; Raedt, H. De

    We present a new approach to simulate quantum cryptography protocols using event-based processes. The method is validated by simulating the BB84 protocol and the Ekert protocol, both without and with the presence of an eavesdropper.

  5. A stochastic method for computing hadronic matrix elements

    Energy Technology Data Exchange (ETDEWEB)

    Alexandrou, Constantia [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; The Cyprus Institute, Nicosia (Cyprus). Computational-based Science and Technology Research Center; Dinter, Simon; Drach, Vincent [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Jansen, Karl [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Hadjiyiannakou, Kyriakos [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Renner, Dru B. [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Collaboration: European Twisted Mass Collaboration

    2013-02-15

    We present a stochastic method for the calculation of baryon three-point functions that is more versatile compared to the typically used sequential method. We analyze the scaling of the error of the stochastically evaluated three-point function with the lattice volume and find a favorable signal-to-noise ratio suggesting that our stochastic method can be used efficiently at large volumes to compute hadronic matrix elements.

  6. Computational methods for 2D materials: discovery, property characterization, and application design.

    Science.gov (United States)

    Paul, J T; Singh, A K; Dong, Z; Zhuang, H; Revard, B C; Rijal, B; Ashton, M; Linscheid, A; Blonsky, M; Gluhovic, D; Guo, J; Hennig, R G

    2017-11-29

    The discovery of two-dimensional (2D) materials comes at a time when computational methods are mature and can predict novel 2D materials, characterize their properties, and guide the design of 2D materials for applications. This article reviews the recent progress in computational approaches for 2D materials research. We discuss the computational techniques and provide an overview of the ongoing research in the field. We begin with an overview of known 2D materials, common computational methods, and available cyber infrastructures. We then move on to the discovery of novel 2D materials, discussing the stability criteria for 2D materials, computational methods for structure prediction, and interactions of monolayers with electrochemical and gaseous environments. Next, we describe the computational characterization of the 2D materials' electronic, optical, magnetic, and superconducting properties and the response of the properties under applied mechanical strain and electrical fields. From there, we move on to discuss the structure and properties of defects in 2D materials, and describe methods for 2D materials device simulations. We conclude by providing an outlook on the needs and challenges for future developments in the field of computational research for 2D materials.

  7. TU-H-207A-09: An Automated Technique for Estimating Patient-Specific Regional Imparted Energy and Dose From TCM CT Exams Across 13 Protocols

    International Nuclear Information System (INIS)

    Sanders, J; Tian, X; Segars, P; Boone, J; Samei, E

    2016-01-01

    Purpose: To develop an automated technique for estimating patient-specific regional imparted energy and dose from tube current modulated (TCM) computed tomography (CT) exams across a diverse set of head and body protocols. Methods: A library of 58 adult computational anthropomorphic extended cardiac-torso (XCAT) phantoms were used to model a patient population. A validated Monte Carlo program was used to simulate TCM CT exams on the entire library of phantoms for three head and 10 body protocols. The net imparted energy to the phantoms, normalized by dose length product (DLP), and the net tissue mass in each of the scan regions were computed. A knowledgebase containing relationships between normalized imparted energy and scanned mass was established. An automated computer algorithm was written to estimate the scanned mass from actual clinical CT exams. The scanned mass estimate, DLP of the exam, and knowledgebase were used to estimate the imparted energy to the patient. The algorithm was tested on 20 chest and 20 abdominopelvic TCM CT exams. Results: The normalized imparted energy increased with increasing kV for all protocols. However, the normalized imparted energy was relatively unaffected by the strength of the TCM. The average imparted energy was 681 ± 376 mJ for abdominopelvic exams and 274 ± 141 mJ for chest exams. Overall, the method was successful in providing patient-specific estimates of imparted energy for 98% of the cases tested. Conclusion: Imparted energy normalized by DLP increased with increasing tube potential. However, the strength of the TCM did not have a significant effect on the net amount of energy deposited to tissue. The automated program can be implemented into the clinical workflow to provide estimates of regional imparted energy and dose across a diverse set of clinical protocols.
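
    A minimal sketch of the kind of lookup this pipeline implies, with made-up numbers: the paper's knowledgebase relates normalized imparted energy to the scanned mass estimated from the exam, while this simplified version just scales the exam's DLP by a tabulated protocol- and kV-specific coefficient. The protocol names, kV entries, and values below are illustrative assumptions, not the study's data.

    ```python
    # Hypothetical knowledgebase: protocol -> {kV: normalized imparted energy
    # in mJ per mGy*cm}. Values are illustrative only.
    KNOWLEDGEBASE = {
        "chest":          {100: 1.9, 120: 2.3},
        "abdominopelvic": {100: 2.1, 120: 2.6},
    }

    def imparted_energy_mj(protocol: str, kv: int, dlp_mgy_cm: float) -> float:
        """Estimate patient imparted energy (mJ) as DLP times a tabulated
        protocol- and kV-specific coefficient (mass dependence omitted)."""
        coeff = KNOWLEDGEBASE[protocol][kv]
        return coeff * dlp_mgy_cm

    print(imparted_energy_mj("chest", 120, 120.0))  # illustrative estimate in mJ
    ```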

  8. A multi-protocol framework for ad-hoc service discovery

    OpenAIRE

    Flores-Cortes, C.; Blair, Gordon S.; Grace, P.

    2006-01-01

    Discovering the appropriate services in ad-hoc computing environments where a great number of devices and software components collaborate discreetly and provide numerous services is an important challenge. Service discovery protocols make it possible for participating nodes in a network to locate and advertise services with minimum user intervention. However, because it is not possible to predict at design time which protocols will be used to advertise services in a given context/environment,...

  9. Survey of computed tomography doses in head and chest protocols; Levantamento de doses em tomografia computadorizada em protocolos de cranio e torax

    Energy Technology Data Exchange (ETDEWEB)

    Souza, Giordana Salvi de; Silva, Ana Maria Marques da, E-mail: giordana.souza@acad.pucrs.br [Pontificia Universidade Catolica do Rio Grande do Sul (PUC-RS), Porto Alegre, RS (Brazil). Faculdade de Fisica. Nucleo de Pesquisa em Imagens Medicas

    2016-07-01

    Computed tomography is a clinical tool for the diagnosis of patients. However, the patient is subjected to a complex dose distribution. The aim of this study was to survey dose indicators in head and chest CT protocols, in terms of Dose-Length Product (DLP) and effective dose, for adult and pediatric patients, comparing them with diagnostic reference levels in the literature. Patients were divided into age groups and the following image acquisition parameters were collected: age, kV, mAs, Volumetric Computed Tomography Dose Index (CTDIvol) and DLP. The effective dose was found by multiplying DLP by correction factors. The results were obtained from the third quartile and showed the importance of determining kV and mAs values for each patient depending on the studied region, age and thickness. (author)

  10. Blind quantum computing with weak coherent pulses.

    Science.gov (United States)

    Dunjko, Vedran; Kashefi, Elham; Leverrier, Anthony

    2012-05-18

    The universal blind quantum computation (UBQC) protocol [A. Broadbent, J. Fitzsimons, and E. Kashefi, in Proceedings of the 50th Annual IEEE Symposium on Foundations of Computer Science (IEEE Computer Society, Los Alamitos, CA, USA, 2009), pp. 517-526.] allows a client to perform quantum computation on a remote server. In an ideal setting, perfect privacy is guaranteed if the client is capable of producing specific, randomly chosen single qubit states. While from a theoretical point of view, this may constitute the lowest possible quantum requirement, from a pragmatic point of view, generation of such states to be sent along long distances can never be achieved perfectly. We introduce the concept of ϵ-blindness for UBQC, in analogy to the concept of ϵ-security developed for other cryptographic protocols, allowing us to characterize the robustness and security properties of the protocol under possible imperfections. We also present a remote blind single qubit preparation protocol with weak coherent pulses for the client to prepare, in a delegated fashion, quantum states arbitrarily close to perfect random single qubit states. This allows us to efficiently achieve ϵ-blind UBQC for any ϵ>0, even if the channel between the client and the server is arbitrarily lossy.

  11. Blind Quantum Computing with Weak Coherent Pulses

    Science.gov (United States)

    Dunjko, Vedran; Kashefi, Elham; Leverrier, Anthony

    2012-05-01

    The universal blind quantum computation (UBQC) protocol [A. Broadbent, J. Fitzsimons, and E. Kashefi, in Proceedings of the 50th Annual IEEE Symposium on Foundations of Computer Science (IEEE Computer Society, Los Alamitos, CA, USA, 2009), pp. 517-526.] allows a client to perform quantum computation on a remote server. In an ideal setting, perfect privacy is guaranteed if the client is capable of producing specific, randomly chosen single qubit states. While from a theoretical point of view, this may constitute the lowest possible quantum requirement, from a pragmatic point of view, generation of such states to be sent along long distances can never be achieved perfectly. We introduce the concept of ɛ-blindness for UBQC, in analogy to the concept of ɛ-security developed for other cryptographic protocols, allowing us to characterize the robustness and security properties of the protocol under possible imperfections. We also present a remote blind single qubit preparation protocol with weak coherent pulses for the client to prepare, in a delegated fashion, quantum states arbitrarily close to perfect random single qubit states. This allows us to efficiently achieve ɛ-blind UBQC for any ɛ>0, even if the channel between the client and the server is arbitrarily lossy.

  12. A New Computationally Frugal Method For Sensitivity Analysis Of Environmental Models

    Science.gov (United States)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A.; Teuling, R.; Borgonovo, E.; Uijlenhoet, R.

    2013-12-01

    Effective and efficient parameter sensitivity analysis methods are crucial to understand the behaviour of complex environmental models and use of models in risk assessment. This paper proposes a new computationally frugal method for analyzing parameter sensitivity: the Distributed Evaluation of Local Sensitivity Analysis (DELSA). The DELSA method can be considered a hybrid of local and global methods, and focuses explicitly on multiscale evaluation of parameter sensitivity across the parameter space. Results of the DELSA method are compared with the popular global, variance-based Sobol' method and the delta method. We assess the parameter sensitivity of both (1) a simple non-linear reservoir model with only two parameters, and (2) five different "bucket-style" hydrologic models applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both the synthetic and real-world examples, the global Sobol' method and the DELSA method provide similar sensitivities, with the DELSA method providing more detailed insight at much lower computational cost. The ability to understand how sensitivity measures vary through parameter space with modest computational requirements provides exciting new opportunities.
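
    A hedged sketch of the DELSA idea as described in the abstract: first-order local sensitivities are computed at many points across parameter space (here by central finite differences), scaled by assumed prior parameter variances, and normalized at each point. This is an illustration under those assumptions, not the authors' reference implementation.

    ```python
    import numpy as np

    def delsa_first_order(model, samples, variances, eps=1e-6):
        """For each sample point, return normalized local sensitivity
        indices, one per parameter (each row sums to 1)."""
        indices = []
        for theta in samples:
            grads = np.empty(len(theta))
            for j in range(len(theta)):
                step = np.zeros_like(theta)
                step[j] = eps
                grads[j] = (model(theta + step) - model(theta - step)) / (2 * eps)
            contrib = grads**2 * variances           # variance contribution per parameter
            indices.append(contrib / contrib.sum())  # normalize at this point
        return np.array(indices)

    # Toy non-linear model with two parameters, evaluated across parameter space.
    model = lambda p: p[0] ** 2 + np.sin(p[1])
    samples = np.random.default_rng(0).uniform(0.5, 2.0, size=(100, 2))
    print(delsa_first_order(model, samples, variances=np.array([0.1, 0.1])).mean(axis=0))
    ```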

  13. Semi-quantum communication: protocols for key agreement, controlled secure direct communication and dialogue

    Science.gov (United States)

    Shukla, Chitra; Thapliyal, Kishore; Pathak, Anirban

    2017-12-01

    Semi-quantum protocols that allow some of the users to remain classical are proposed for a large class of problems associated with secure communication and secure multiparty computation. Specifically, semi-quantum protocols are proposed for the first time for key agreement, controlled deterministic secure communication and dialogue, and it is shown that the semi-quantum protocols for controlled deterministic secure communication and dialogue can be reduced to semi-quantum protocols for e-commerce and private comparison (the socialist millionaire problem), respectively. Together with the earlier proposed semi-quantum schemes for key distribution, secret sharing and deterministic secure communication, the set of schemes proposed here and the subsequent discussions establish that almost every secure communication and computation task that can be performed using fully quantum protocols can also be performed in a semi-quantum manner. Some of the proposed schemes are completely orthogonal-state-based, and thus fundamentally different from the existing semi-quantum schemes, which are conjugate-coding-based. The security, efficiency and applicability of the proposed schemes are discussed with appropriate importance.

  14. Controlled Delegation Protocol in Mobile RFID Networks

    Directory of Open Access Journals (Sweden)

    Yang MingHour

    2010-01-01

    To achieve off-line delegation for mobile readers, we propose a delegation protocol for mobile RFID allowing its readers access to specific tags through the back-end server. That is to say, reader-tag mutual authentication can be performed without readers being connected to the back-end server. Readers are also allowed off-line access to tags' data. Compared with other delegation protocols, our scheme uniquely enables the back-end server to limit each reader's reading times during delegation. Even in a multireader situation, our protocol can limit reading times and reading time periods for each of them and therefore makes the back-end server's delegation more flexible. Besides, our protocol can prevent authorized readers from transferring their authority to the unauthorized, declining invalid access to tags. Our scheme is proved viable and secure with GNY logic; it is secure against certain security threats, such as replay attacks, denial of service (DoS) attacks, Man-in-the-Middle attacks, counterfeit tags, and breaches of location and data privacy. Also, the performance analysis of our protocol proves that current tags can afford the computation load required in this scheme.

  15. Quantum computing on encrypted data.

    Science.gov (United States)

    Fisher, K A G; Broadbent, A; Shalm, L K; Yan, Z; Lavoie, J; Prevedel, R; Jennewein, T; Resch, K J

    2014-01-01

    The ability to perform computations on encrypted data is a powerful tool for protecting privacy. Recently, protocols to achieve this on classical computing systems have been found. Here, we present an efficient solution to the quantum analogue of this problem that enables arbitrary quantum computations to be carried out on encrypted quantum data. We prove that an untrusted server can implement a universal set of quantum gates on encrypted quantum bits (qubits) without learning any information about the inputs, while the client, knowing the decryption key, can easily decrypt the results of the computation. We experimentally demonstrate, using single photons and linear optics, the encryption and decryption scheme on a set of gates sufficient for arbitrary quantum computations. As our protocol requires few extra resources compared with other schemes it can be easily incorporated into the design of future quantum servers. These results will play a key role in enabling the development of secure distributed quantum systems.

  16. On Protocol Security in the Cryptographic Model

    DEFF Research Database (Denmark)

    Nielsen, Jesper Buus

    you as possible. This is the general problem of secure multiparty computation. The usual way of formalizing the problem is to say that a number of parties who do not trust each other wish to compute some function of their local inputs, while keeping their inputs as secret as possible and guaranteeing...... the channels by which they communicate. A general solution to the secure multiparty computation problem is a compiler which given any feasible function describes an efficient protocol which allows the parties to compute the function securely on their local inputs over an open network. Over the past twenty...... years the secure multiparty computation problem has been the subject of a large body of research, both research into the models of multiparty computation and research aimed at realizing general secure multiparty computation. The main approach to realizing secure multiparty computation has been based...

  17. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    Science.gov (United States)

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. The method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems, and applications of the method are also included. (HM)
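
    A small sketch of what such a free-energy-minimization calculation can look like, assuming an ideal-gas isomerization A <-> B and a made-up standard free-energy difference; the article's actual program and examples are not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    R, T = 8.314, 298.15           # J/(mol K), K
    g0 = np.array([0.0, -2000.0])  # hypothetical standard molar Gibbs energies (J/mol)

    def gibbs(n):
        """Total Gibbs free energy of an ideal mixture with mole numbers n."""
        n = np.clip(n, 1e-12, None)  # keep the logarithms finite
        return np.sum(n * (g0 + R * T * np.log(n / n.sum())))

    # Material balance: one mole of the isomer in total.
    constraint = {"type": "eq", "fun": lambda n: n.sum() - 1.0}
    result = minimize(gibbs, x0=[0.5, 0.5], constraints=[constraint],
                      bounds=[(0, 1), (0, 1)])
    print(result.x)  # equilibrium mole numbers of A and B
    ```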

  18. Protocol Fuel Mix reporting

    International Nuclear Information System (INIS)

    2002-07-01

    The protocol in this document describes a method for an Electricity Distribution Company (EDC) to account for the fuel mix of electricity that it delivers to its customers, based on the best available information. Own production, purchase and sale of electricity, and certificates trading are taken into account. In chapter 2 the actual protocol is outlined. In the appendixes additional (supporting) information is given: (A) Dutch Standard Fuel Mix, 2000; (B) Calculation of the Dutch Standard fuel mix; (C) Procedures to estimate and benchmark the fuel mix; (D) Quality management; (E) External verification; (F) Recommendation for further development of the protocol; (G) Reporting examples

  19. Dynamic Anthropometry – Defining Protocols for Automatic Body Measurement

    Directory of Open Access Journals (Sweden)

    Slavenka Petrak

    2017-12-01

    The paper presents research on the possibilities of protocol development for automatic computer-based determination of measurements on a 3D body model in defined dynamic positions. Initially, two dynamic body positions were defined for the research on dimensional changes of targeted body lengths and surface segments during body movement from the basic static position into a selected dynamic body position. The assumption was that during body movement, specific length and surface dimensions would change significantly from the aspect of clothing construction and functionality of a garment model. 3D body scanning of a female test sample was performed in the basic static and two defined dynamic positions. 3D body models were processed and measurement points were defined as a starting point for the determination of characteristic body measurements. The protocol for automatic computer measurement was defined for every dynamic body position by a systematic set of activities based on the determined measurement points. The verification of the developed protocols was performed by automatic determination of the defined measurements on the test sample and by comparing the results with conventional manual measurement.

  20. Chapter 16: Retrocommissioning Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Tiessen, Alex [Posterity Group, Derwood, MD (United States)

    2017-10-09

    Retrocommissioning (RCx) is a systematic process for optimizing energy performance in existing buildings. It specifically focuses on improving the control of energy-using equipment (e.g., heating, ventilation, and air conditioning [HVAC] equipment and lighting) and typically does not involve equipment replacement. Field results have shown proper RCx can achieve energy savings ranging from 5 percent to 20 percent, with a typical payback of two years or less (Thorne 2003). The method presented in this protocol provides direction regarding: (1) how to account for each measure's specific characteristics and (2) how to choose the most appropriate savings verification approach.

  1. Computer-aided method for recognition of proton track in nuclear emulsion

    International Nuclear Information System (INIS)

    Ruan Jinlu; Li Hongyun; Song Jiwen; Zhang Jianfu; Chen Liang; Zhang Zhongbing; Liu Jinliang

    2014-01-01

    In order to overcome the shortcomings of the manual method for proton-recoil track recognition in nuclear emulsions, a computer-aided track recognition method was studied. In this method, image sequences captured by a microscope system were processed through image convolution with composite filters, binarization with multiple thresholds, track grain clustering and redundant grain removal to recognize the track grains in the image sequences. The proton-recoil tracks were then reconstructed from the recognized track grains. The proton-recoil tracks in a nuclear emulsion irradiated by a 14.9 MeV neutron beam were recognized by the computer-aided method. The results show that proton-recoil tracks reconstructed by this method agree well with those reconstructed by the manual method. This computer-aided track recognition method lays an important technical foundation for the development of an automatic proton-recoil track recognition system and for applications of nuclear emulsions in pulsed neutron spectrum measurement. (authors)

  2. Applications of meshless methods for damage computations with finite strains

    International Nuclear Information System (INIS)

    Pan Xiaofei; Yuan Huang

    2009-01-01

    Material defects such as cavities have great effects on the damage process in ductile materials. Computations based on finite element methods (FEMs) often suffer from instability due to material failure as well as large distortions. To improve computational efficiency and robustness, the element-free Galerkin (EFG) method is applied in the micro-mechanical constitutive damage model proposed by Gurson and modified by Tvergaard and Needleman (the GTN damage model). The EFG algorithm is implemented in the general purpose finite element code ABAQUS via the user interface UEL. With the help of the EFG method, damage processes in uniaxial tension specimens and notched specimens are analyzed and verified with experimental data. Computational results reveal that damage which takes place in the interior of specimens extends to the exterior and causes fracture of the specimens; the damage evolution is fast relative to the whole tension process. The EFG method provides a more stable and robust numerical solution in comparison with the FEM analysis.

  3. A novel quantum scheme for secure two-party distance computation

    Science.gov (United States)

    Peng, Zhen-wan; Shi, Run-hua; Zhong, Hong; Cui, Jie; Zhang, Shun

    2017-12-01

    Secure multiparty computational geometry is an essential field of secure multiparty computation, which solves a computational geometry problem without revealing any private information of each party. Secure two-party distance computation is a primitive of secure multiparty computational geometry, which computes the distance between two points without revealing each point's location information (i.e., coordinates). Secure two-party distance computation has potential applications with high security requirements in military, business, engineering and so on. In this paper, we present a quantum solution to secure two-party distance computation by subtly using quantum private query. Compared to the related classical protocols, our quantum protocol can ensure higher security and better privacy protection because of the physical principles of quantum mechanics.

  4. Efficient computation method of Jacobian matrix

    International Nuclear Information System (INIS)

    Sasaki, Shinobu

    1995-05-01

    As is well known, the elements of the Jacobian matrix are complex trigonometric functions of the joint angles, resulting in a matrix of staggering complexity when we write it all out in one place. This article shows how these difficulties are overcome by using a velocity representation. The main point is that its recursive algorithm and computer algebra technologies allow us to derive the analytical formulation with no human intervention. In particular, it is to be noted that, compared with previous results, the elements are greatly simplified through the effective use of frame transformations. Furthermore, in the case of a spherical wrist, it is shown that the present approach is computationally most efficient. Due to such advantages, the proposed method is useful in studying kinematically peculiar properties such as singularity problems. (author)
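
    For illustration, the standard cross-product construction of the geometric Jacobian for revolute joints, which is the kind of kinematic quantity discussed above; this generic textbook form is an assumption here, not the article's recursive velocity-representation algorithm.

    ```python
    import numpy as np

    def jacobian(joint_origins, joint_axes, end_effector):
        """Geometric Jacobian: column i is [z_i x (p_e - p_i); z_i]
        for revolute joint i with origin p_i and axis z_i."""
        cols = []
        for p_i, z_i in zip(joint_origins, joint_axes):
            linear = np.cross(z_i, end_effector - p_i)  # linear velocity from joint i
            cols.append(np.hstack([linear, z_i]))       # stack linear and angular parts
        return np.array(cols).T  # 6 x n

    # Planar two-link arm (unit link lengths), both joint axes along z.
    q1, q2 = 0.3, 0.7
    p0 = np.zeros(3)
    p1 = np.array([np.cos(q1), np.sin(q1), 0.0])
    pe = p1 + np.array([np.cos(q1 + q2), np.sin(q1 + q2), 0.0])
    z = np.array([0.0, 0.0, 1.0])
    print(jacobian([p0, p1], [z, z], pe))
    ```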

  5. Computational methods of electron/photon transport

    International Nuclear Information System (INIS)

    Mack, J.M.

    1983-01-01

    A review of computational methods simulating the non-plasma transport of electrons and their attendant cascades is presented. Remarks are mainly restricted to linearized formalisms at electron energies above 1 keV. The effectiveness of various methods is discussed, including moments, point-kernel, invariant imbedding, discrete-ordinates, and Monte Carlo. Future research directions and the potential impact on various aspects of science and engineering are indicated.

  6. Upgrading of analogue cameras using modern PC based computer

    International Nuclear Information System (INIS)

    Pardom, M.F.; Matos, L.

    2002-01-01

    Aim: The use of computers along with analogue cameras enables them to perform tasks involving time-activity parameters. The INFORMENU system converts a modern PC into a dedicated nuclear medicine computer system at a total cost affordable to countries with emerging economies, and is easily adaptable to all existing cameras. Materials and Methods: In collaboration with nuclear medicine physicians, an application including hardware and software was developed by a private firm. The system runs smoothly on Windows 98 and its operation is very easy. The main features are comparable to those of brand commercial computer systems, such as image resolution up to 1024 x 1024, low count loss at high count rates, uniformity correction, integrated graphical and text reporting, and user-defined clinical protocols. Results: The system is used in more than 20 private and public institutions. The count loss is less than 1% in all routine work, uniformity correction is improved 3-5 times, and the utility of the analogue cameras is improved. Conclusion: The INFORMENU system improves the utility of analogue cameras, permitting the inclusion of dynamic clinical protocols and quantifications and helping the development of nuclear medicine practice. The operation and maintenance costs were lowered. The end users improve their knowledge of modern nuclear medicine.

  7. Asynchronous Multiparty Computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Geisler, Martin; Krøigaard, Mikkel

    2009-01-01

    guarantees termination if the adversary allows a preprocessing phase to terminate, in which no information is released. The communication complexity of this protocol is the same as that of a passively secure solution up to a constant factor. It is secure against an adaptive and active adversary corrupting...... less than n/3 players. We also present a software framework for implementation of asynchronous protocols called VIFF (Virtual Ideal Functionality Framework), which allows automatic parallelization of primitive operations such as secure multiplications, without having to resort to complicated...... multithreading. Benchmarking of a VIFF implementation of our protocol confirms that it is applicable to practical non-trivial secure computations....

  8. Decomposition and Cross-Product-Based Method for Computing the Dynamic Equation of Robots

    Directory of Open Access Journals (Sweden)

    Ching-Long Shih

    2012-08-01

    Full Text Available This paper aims to demonstrate a clear relationship between Lagrange equations and Newton-Euler equations regarding computational methods for robot dynamics, from which we derive a systematic method for using either symbolic or on-line numerical computations. Based on the decomposition approach and cross-product operation, a computing method for robot dynamics can be easily developed. The advantages of this computing framework are that: it can be used for both symbolic and on-line numeric computation purposes, and it can also be applied to biped systems, as well as some simple closed-chain robot systems.

  9. Greenberger-Horne-Zeilinger states-based blind quantum computation with entanglement concentration.

    Science.gov (United States)

    Zhang, Xiaoqian; Weng, Jian; Lu, Wei; Li, Xiaochun; Luo, Weiqi; Tan, Xiaoqing

    2017-09-11

    In blind quantum computation (BQC) protocols, the quantum computing capabilities of the servers are complicated and powerful, while those of the clients are not. It is still a challenge for clients to delegate quantum computation to servers and keep the clients' inputs, outputs and algorithms private. Unfortunately, quantum channel noise is unavoidable in practical transmission. In this paper, a novel BQC protocol based on maximally entangled Greenberger-Horne-Zeilinger (GHZ) states is proposed which doesn't need a trusted center. The protocol includes a client and two servers, where the client only needs to own quantum channels with the two servers, who have full-advantage quantum computers. The two servers perform entanglement concentration to remove the noise, where the success probability can almost reach 100% in theory. But they learn nothing in the process of concentration because of the no-signaling principle, so this BQC protocol is secure and feasible.

  10. A model-guided symbolic execution approach for network protocol implementations and vulnerability detection.

    Science.gov (United States)

    Wen, Shameng; Meng, Qingkun; Feng, Chao; Tang, Chaojing

    2017-01-01

    Formal techniques have been devoted to analyzing whether network protocol specifications violate security policies; however, these methods cannot detect vulnerabilities in the implementations of the network protocols themselves. Symbolic execution can be used to analyze the paths of the network protocol implementations, but for stateful network protocols, it is difficult to reach the deep states of the protocol. This paper proposes a novel model-guided approach to detect vulnerabilities in network protocol implementations. Our method first abstracts a finite state machine (FSM) model, then utilizes the model to guide the symbolic execution. This approach achieves high coverage of both the code and the protocol states. The proposed method is implemented and applied to test numerous real-world network protocol implementations. The experimental results indicate that the proposed method is more effective than traditional fuzzing methods such as SPIKE at detecting vulnerabilities in the deep states of network protocol implementations.
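
    A toy sketch of the model-guided idea: a hand-written FSM for a hypothetical protocol is searched for message sequences that drive an implementation into deep states, which a symbolic executor could then explore. The FSM, message names, and search strategy below are illustrative assumptions, not the paper's tool.

    ```python
    from collections import deque

    # Hypothetical protocol FSM: state -> {input message: next state}
    FSM = {
        "INIT":     {"HELLO": "GREETED"},
        "GREETED":  {"LOGIN": "AUTHED", "QUIT": "CLOSED"},
        "AUTHED":   {"DATA": "TRANSFER", "QUIT": "CLOSED"},
        "TRANSFER": {"DONE": "CLOSED"},
        "CLOSED":   {},
    }

    def paths_to(target, start="INIT"):
        """Breadth-first search over the FSM for message sequences
        that reach the target state (shortest sequences first)."""
        queue = deque([(start, [])])
        while queue:
            state, msgs = queue.popleft()
            if state == target:
                yield msgs
                continue
            for msg, nxt in FSM[state].items():
                queue.append((nxt, msgs + [msg]))

    # A sequence reaching the deep TRANSFER state; each prefix would be
    # replayed against the implementation before symbolic exploration.
    print(next(paths_to("TRANSFER")))  # ['HELLO', 'LOGIN', 'DATA']
    ```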

  11. Authentication Test-Based the RFID Authentication Protocol with Security Analysis

    Directory of Open Access Journals (Sweden)

    Minghui Wang

    2014-08-01

    To address the problem that many recently proposed RFID authentication protocols were soon found to have security holes, we analyzed the main reason, which is that the protocol designs were not rigorous and the correctness of the protocols could not be guaranteed. To this end, the authentication test method was adopted in the formal analysis and strict proof of the RFID protocol proposed in this paper. Authentication Test is a new type of analysis and design method for security protocols based on the strand space model, and it can be used for most types of security protocols. After security analysis, the proposed protocol can meet the RFID security demands: information confidentiality, data integrity and identity authentication.

  12. Computer classes and games in virtual reality environment to reduce loneliness among students of an elderly reference center: Study protocol for a randomised cross-over design.

    Science.gov (United States)

    Antunes, Thaiany Pedrozo Campos; Oliveira, Acary Souza Bulle de; Crocetta, Tania Brusque; Antão, Jennifer Yohanna Ferreira de Lima; Barbosa, Renata Thais de Almeida; Guarnieri, Regiani; Massetti, Thais; Monteiro, Carlos Bandeira de Mello; Abreu, Luiz Carlos de

    2017-03-01

    Physical and mental changes associated with aging commonly lead to a decrease in communication capacity, reducing social interactions and increasing loneliness. Computer classes for older adults make significant contributions to social and cognitive aspects of aging. Games in a virtual reality (VR) environment stimulate the practice of communicative and cognitive skills and might also bring benefits to older adults. Furthermore, they might help to initiate their contact with modern technology. The purpose of this study protocol is to evaluate the effects of practicing VR games during computer classes on the level of loneliness of students of an elderly reference center. This study will be a prospective longitudinal study with a randomised cross-over design, with subjects aged 50 years and older, of both genders, spontaneously enrolled in computer classes for beginners. Data collection will be done in 3 moments: moment 0 (T0) - at baseline; moment 1 (T1) - after 8 typical computer classes; and moment 2 (T2) - after 8 computer classes which include 15 minutes for practicing games in a VR environment. A characterization questionnaire, the short version of the Short Social and Emotional Loneliness Scale for Adults (SELSA-S) and 3 games with VR (Random, MoviLetrando, and Reaction Time) will be used. For the intervention phase, 4 other games will be used: Coincident Timing, Motor Skill Analyser, Labyrinth, and Fitts. The statistical analysis will compare the evolution in loneliness perception, performance, and reaction time during the practice of the games between the 3 moments of data collection. Performance and reaction time during the practice of the games will also be correlated to the loneliness perception. The protocol is approved by the host institution's ethics committee under the number 52305215.3.0000.0082. Results will be disseminated via peer-reviewed journal articles and conferences. This clinical trial is registered at ClinicalTrials.gov identifier: NCT

  13. The adaptation method in the Monte Carlo simulation for computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyoung Gun; Yoon, Chang Yeon; Lee, Won Ho [Dept. of Bio-convergence Engineering, Korea University, Seoul (Korea, Republic of); Cho, Seung Ryong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Sung Ho [Dept. of Neurosurgery, Ulsan University Hospital, Ulsan (Korea, Republic of)

    2015-06-15

    The patient dose incurred from diagnostic procedures during advanced radiotherapy has become an important issue. Many researchers in medical physics are using computational simulations to calculate complex parameters in experiments. However, extended computation times make it difficult for personal computers to run the conventional Monte Carlo method to simulate radiological images with high-flux photons such as images produced by computed tomography (CT). To minimize the computation time without degrading imaging quality, we applied a deterministic adaptation to the Monte Carlo calculation and verified its effectiveness by simulating CT image reconstruction for an image evaluation phantom (Catphan; Phantom Laboratory, New York NY, USA) and a human-like voxel phantom (KTMAN-2) (Los Alamos National Laboratory, Los Alamos, NM, USA). For the deterministic adaptation, the relationship between iteration numbers and the simulations was estimated and the option to simulate scattered radiation was evaluated. The processing times of simulations using the adaptive method were at least 500 times faster than those using a conventional statistical process. In addition, compared with the conventional statistical method, the adaptive method provided images that were more similar to the experimental images, which proved that the adaptive method is highly effective for simulations that require a large number of iterations; assuming no radiation scattering in the vicinity of the detectors minimized artifacts in the reconstructed image.

  14. Prediction of intestinal absorption and blood-brain barrier penetration by computational methods.

    Science.gov (United States)

    Clark, D E

    2001-09-01

    This review surveys the computational methods that have been developed with the aim of identifying drug candidates likely to fail later on the road to market. The specifications for such computational methods are outlined, including factors such as speed, interpretability, robustness and accuracy. Then, computational filters aimed at predicting "drug-likeness" in a general sense are discussed before methods for the prediction of more specific properties--intestinal absorption and blood-brain barrier penetration--are reviewed. Directions for future research are discussed and, in concluding, the impact of these methods on the drug discovery process, both now and in the future, is briefly considered.

  15. The comparative cost analysis of EAP Re-authentication Protocol and EAP TLS Protocol

    OpenAIRE

    Seema Mehla; Bhawna Gupta

    2010-01-01

    The Extensible Authentication Protocol (EAP) is a generic framework supporting multiple types of authentication methods. In systems where EAP is used for authentication, it is desirable not to repeat the entire EAP exchange with another authenticator. The EAP Re-authentication Protocol provides a consistent, method-independent and low-latency re-authentication. It is an extension to the current EAP mechanism to support intra-domain handoff authentication. This paper analyzed the performance of the EAP r...

  16. Protocol dependence of mechanical properties in granular systems.

    Science.gov (United States)

    Inagaki, S; Otsuki, M; Sasa, S

    2011-11-01

    We study the protocol dependence of the mechanical properties of granular media by means of computer simulations. We control the protocol for realizing disk packings in a systematic manner. In 2D, by keeping the material properties of the constituents identical, we carry out compaction with various strain rates. The disk packings exhibit a strain rate dependence of the critical packing fraction above which the pressure becomes non-zero. The observed behavior contrasts with the well-studied jamming transitions for frictionless disk packings. We also observe that the elastic moduli of the disk packings depend logarithmically on the strain rate. Our results suggest that there exists a time-dependent state variable to describe the macroscopic material properties of disk packings, which depend on the preparation protocol.

  17. Estimating Return on Investment in Translational Research: Methods and Protocols

    Science.gov (United States)

    Trochim, William; Dilts, David M.; Kirk, Rosalind

    2014-01-01

    Assessing the value of clinical and translational research funding on accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health and its Clinical and Translational Awards (CTSA). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, this model produces multiple levels of ROI: investigator, program and institutional estimates. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered, analyzed, and evaluated. This paper provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities. PMID:23925706

  18. Estimating return on investment in translational research: methods and protocols.

    Science.gov (United States)

    Grazier, Kyle L; Trochim, William M; Dilts, David M; Kirk, Rosalind

    2013-12-01

    Assessing the value of clinical and translational research funding on accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health (NIH) and its Clinical and Translational Awards (CTSAs). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, this model produces multiple levels of ROI: investigator, program, and institutional estimates. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered, analyzed, and evaluated. This article provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities.
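
    For orientation, the headline ratio that ROI models of this kind estimate can be written as below; the protocol's actual estimators are richer, separating investigator-, program-, and institution-level inputs and outputs.

    ```latex
    \[
    \mathrm{ROI}
      = \frac{\sum \text{outputs (economic and social returns)}
              - \sum \text{inputs (NIH and institutional investment)}}
             {\sum \text{inputs (NIH and institutional investment)}}
    \]
    ```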

  19. Optimising social information by game theory and ant colony method to enhance routing protocol in opportunistic networks

    Directory of Open Access Journals (Sweden)

    Chander Prabha

    2016-09-01

    Data loss and disconnection of nodes are frequent in opportunistic networks. Social information plays an important role in reducing data loss because it depends on the connectivity of nodes. The appropriate selection of the next hop based on social information is critical for improving the performance of routing in opportunistic networks. The frequent disconnection problem is overcome by optimising the social information with the Ant Colony Optimization method, which depends on the topology of the opportunistic network. The proposed protocol is examined thoroughly via analysis and simulation in order to assess its performance in comparison with other social-based routing protocols in opportunistic networks under various parameter settings.
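
    For context, the generic Ant Colony Optimization pheromone update and next-hop choice rules that such a protocol can build on are shown below; here the heuristic term would encode social connectivity. The exact weighting used by the proposed protocol is not specified in this record, so this is the standard textbook form.

    ```latex
    \[
    \tau_{ij} \leftarrow (1-\rho)\,\tau_{ij} + \sum_{k}\Delta\tau_{ij}^{k},
    \qquad
    p_{ij} = \frac{\tau_{ij}^{\alpha}\,\eta_{ij}^{\beta}}
                  {\sum_{l \in N_i}\tau_{il}^{\alpha}\,\eta_{il}^{\beta}}
    \]
    ```

    Here \(\tau_{ij}\) is the pheromone on link \((i,j)\), \(\rho\) the evaporation rate, and \(\eta_{ij}\) the heuristic desirability (in this setting, a social-information score).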

  20. Secure and Efficient Protocol for Vehicular Ad Hoc Network with Privacy Preservation

    Directory of Open Access Journals (Sweden)

    Choi Hyoung-Kee

    2011-01-01

    Security is a fundamental issue for promising applications in a VANET. Designing a secure protocol for a VANET that accommodates efficiency, privacy, and traceability is difficult because of the contradictions between these qualities. In this paper, we present a secure yet efficient protocol for a VANET that satisfies these security requirements. Although much research has attempted to address similar issues, we contend that our proposed protocol outperforms other proposals that have been advanced. This claim is based on observations that show that the proposed protocol has such strengths as light computational load, efficient storage management, and dependability.

  1. High-integrity software, computation and the scientific method

    International Nuclear Information System (INIS)

    Hatton, L.

    2012-01-01

    Computation rightly occupies a central role in modern science. Datasets are enormous and the processing implications of some algorithms are equally staggering. With the continuing difficulties in quantifying the results of complex computations, it is of increasing importance to understand its role in the essentially Popperian scientific method. In this paper, some of the problems with computation, for example the long-term unquantifiable presence of undiscovered defects, problems with programming languages and process issues, will be explored with numerous examples. One of the aims of the paper is to understand the implications of trying to produce high-integrity software and the limitations which still exist. Unfortunately, Computer Science itself suffers from an inability to be suitably critical of its practices and has operated in a largely measurement-free vacuum since its earliest days. Within computer science itself, this has not been so damaging in that it simply leads to unconstrained creativity and a rapid turnover of new technologies. In the applied sciences, however, which have to depend on computational results, such unquantifiability significantly undermines trust. It is time this particular demon was put to rest. (author)

  2. "Tennis elbow". A challenging call for computation and medicine

    Science.gov (United States)

    Sfetsioris, D.; Bontioti, E. N.

    2014-10-01

    An attempt to give an insight into the features composing this musculotendinous disorder. We address the issues of definition, pathophysiology and the mechanisms underlying the onset and occurrence of the disease, diagnosis and diagnostic tools, as well as the methods of treatment. We focus mostly on conservative treatment protocols and we recognize the need for a more thorough investigation with the aid of computation.

  3. Development and Usability Testing of a Computer-Tailored Decision Support Tool for Lung Cancer Screening: Study Protocol.

    Science.gov (United States)

    Carter-Harris, Lisa; Comer, Robert Skipworth; Goyal, Anurag; Vode, Emilee Christine; Hanna, Nasser; Ceppa, DuyKhanh; Rawl, Susan M

    2017-11-16

    Awareness of lung cancer screening remains low in the screening-eligible population, and when patients visit their clinician never having heard of lung cancer screening, engaging in shared decision making to arrive at an informed decision can be a challenge. Therefore, methods to effectively support both patients and clinicians to engage in these important discussions are essential. To facilitate shared decision making about lung cancer screening, effective methods to prepare patients to have these important discussions with their clinician are needed. Our objective is to develop a computer-tailored decision support tool that meets the certification criteria of the International Patient Decision Aid Standards instrument version 4.0 that will support shared decision making in lung cancer screening decisions. Using a 3-phase process, we will develop and test a prototype of a computer-tailored decision support tool in a sample of lung cancer screening-eligible individuals. In phase I, we assembled a community advisory board comprising 10 screening-eligible individuals to develop the prototype. In phase II, we recruited a sample of 13 screening-eligible individuals to test the prototype for usability, acceptability, and satisfaction. In phase III, we are conducting a pilot randomized controlled trial (RCT) with 60 screening-eligible participants who have never been screened for lung cancer. Outcomes tested include lung cancer and screening knowledge, lung cancer screening health beliefs (perceived risk, perceived benefits, perceived barriers, and self-efficacy), perception of being prepared to engage in a patient-clinician discussion about lung cancer screening, occurrence of a patient-clinician discussion about lung cancer screening, and stage of adoption for lung cancer screening. Phases I and II are complete. Phase III is underway. As of July 15, 2017, 60 participants have been enrolled into the study, and have completed the baseline survey, intervention, and first

  4. Deployment Strategies and Clustering Protocols Efficiency

    Directory of Open Access Journals (Sweden)

    Chérif Diallo

    2017-06-01

    Wireless sensor networks face significant design challenges due to limited computing and storage capacities and, most importantly, dependence on limited battery power. Energy is a critical resource and is often an important issue in the deployment of sensor applications that claim to be omnipresent in the world of the future. Thus optimizing the deployment of sensors becomes a major constraint in the design and implementation of a WSN in order to ensure better network operations. In wireless networking, clustering techniques add scalability, reduce the computation complexity of routing protocols, allow data aggregation and then enhance the network performance. The well-known MaxMin clustering algorithm was previously generalized, corrected and validated. Then, in a previous work we improved MaxMin by proposing a Single-node Cluster Reduction (SNCR) mechanism, which eliminates single-node clusters and thus improves energy efficiency. In this paper, we show that MaxMin, because of its original pathological case, does not support the grid deployment topology, which is frequently used in WSN architectures. The unreliability of wireless links could have negative impacts on Link Quality Indicator (LQI) based clustering protocols. So, in the second part of this paper we show how our distributed Link Quality based d-Clustering Protocol (LQI-DCP) performs well in both stable and highly unreliable link environments. Finally, performance evaluation results also show that LQI-DCP fully supports the grid deployment topology and is more energy efficient than MaxMin.

  5. Big data mining analysis method based on cloud computing

    Science.gov (United States)

    Cai, Qing Qiu; Cui, Hong Gang; Tang, Hao

    2017-08-01

    In the era of information explosion, the very large scale and the discrete and un-/semi-structured features of big data have gone far beyond what traditional data management methods can handle. With the arrival of the cloud computing era, cloud computing provides a new technical way to analyze massive data, effectively solving the problem that traditional data mining methods cannot adapt to massive data mining. This paper introduces the meaning and characteristics of cloud computing, analyzes the advantages of using cloud computing technology for data mining, designs an association rule mining algorithm based on the MapReduce parallel processing architecture, and carries out experimental verification. Parallel association rule mining based on a cloud computing platform can greatly improve the execution speed of data mining.
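
    A single-machine sketch of the MapReduce support-counting step that association rule mining parallelizes, under the usual map/shuffle/reduce reading of the abstract; the itemsets and data are toy values, and this is not the paper's algorithm.

    ```python
    from collections import Counter
    from itertools import combinations

    transactions = [
        {"milk", "bread"}, {"milk", "eggs"}, {"bread", "eggs"},
        {"milk", "bread", "eggs"},
    ]

    def map_phase(transaction, k=2):
        """Mapper: emit (itemset, 1) for every k-itemset in a transaction."""
        for itemset in combinations(sorted(transaction), k):
            yield itemset, 1

    def reduce_phase(pairs):
        """Reducer: sum the counts for each itemset."""
        counts = Counter()
        for itemset, n in pairs:
            counts[itemset] += n
        return counts

    pairs = (kv for t in transactions for kv in map_phase(t))
    support = reduce_phase(pairs)
    print(support[("bread", "milk")])  # 2 transactions contain {bread, milk}
    ```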

  6. A model-guided symbolic execution approach for network protocol implementations and vulnerability detection.

    Directory of Open Access Journals (Sweden)

    Shameng Wen

    Full Text Available Formal techniques have been devoted to analyzing whether network protocol specifications violate security policies; however, these methods cannot detect vulnerabilities in the implementations of the network protocols themselves. Symbolic execution can be used to analyze the paths of network protocol implementations, but for stateful network protocols, it is difficult to reach the deep states of the protocol. This paper proposes a novel model-guided approach to detect vulnerabilities in network protocol implementations. Our method first abstracts a finite state machine (FSM) model, then utilizes the model to guide the symbolic execution. This approach achieves high coverage of both the code and the protocol states. The proposed method is implemented and applied to test numerous real-world network protocol implementations. The experimental results indicate that the proposed method is more effective than traditional fuzzing methods such as SPIKE at detecting vulnerabilities in the deep states of network protocol implementations.
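
    A minimal sketch of the guiding idea, under stated assumptions: an abstracted protocol FSM (a hypothetical FTP-like model here) is used to enumerate command prefixes that drive the target into each deep state, where a per-state analyzer, a stub standing in for the paper's symbolic executor, takes over.

```python
# Hedged sketch: use an abstracted protocol FSM to reach deep protocol states
# before handing each state off to a per-state analyzer. The FSM and command
# names are hypothetical placeholders.

FSM = {  # state -> {command: next_state}
    "INIT":    {"USER": "USER_OK"},
    "USER_OK": {"PASS": "AUTHED"},
    "AUTHED":  {"LIST": "AUTHED", "QUIT": "CLOSED"},
}

def paths_to_states(fsm, start="INIT", max_depth=4):
    """Enumerate one command sequence reaching each reachable state."""
    frontier, seen = [(start, [])], {start: []}
    while frontier:
        state, path = frontier.pop()
        if len(path) >= max_depth:
            continue
        for cmd, nxt in fsm.get(state, {}).items():
            if nxt not in seen:
                seen[nxt] = path + [cmd]
                frontier.append((nxt, path + [cmd]))
    return seen

def analyze_state(state, prefix):
    # Placeholder for symbolically executing the handler reached via `prefix`.
    print(f"analyzing state {state!r} reached by {prefix}")

for state, prefix in paths_to_states(FSM).items():
    analyze_state(state, prefix)
```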

  7. Industrial output restriction and the Kyoto protocol. An input-output approach with application to Canada

    International Nuclear Information System (INIS)

    Lixon, Benoit; Thomassin, Paul J.; Hamaide, Bertrand

    2008-01-01

    The objective of this paper is to assess the economic impacts of reducing greenhouse gas emissions by decreasing industrial output in Canada to a level that will meet the target set out in the Kyoto Protocol. The study uses an ecological-economic Input-Output model combining economic components valued in monetary terms with ecological components - GHG emissions - expressed in physical terms. Economic and greenhouse gas emissions data for Canada are computed in the same sectoral disaggregation. Three policy scenarios are considered: the first one uses the direct emission coefficients to allocate the reduction in industrial output, while the other two use the direct plus indirect emission coefficients. In the first two scenarios, the reduction in industrial sector output is allocated uniformly across sectors, while it is allocated to the 12 largest emitting industries in the last one. The estimated impacts indicate that the results vary with the different allocation methods. The third policy scenario, allocation to the 12 largest emitting sectors, is the most cost-effective of the three, as under it the Kyoto Protocol reduces Gross Domestic Product by 3.1%, compared with 24% and 8.1% in the first two scenarios. The computed economic costs should be considered upper bounds because the model assumes immediate adjustment to the Kyoto Protocol and because flexibility mechanisms are not incorporated. The resulting upper-bound impact of the third scenario may seem to contradict those who claim that the Kyoto Protocol would place an unbearable burden on the Canadian economy. (author)
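
    The computation behind such an ecological-economic input-output model can be sketched in a few lines. The 3-sector technology matrix and emission coefficients below are invented for illustration and are not the Canadian data used in the study; the point is how direct plus indirect emission intensities follow from the Leontief inverse.

```python
# Illustrative Leontief input-output computation with GHG emissions.
# A is the inter-industry technology matrix; e_direct holds emissions per
# unit of sector output. All numbers are made up for the sketch.
import numpy as np

A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.15]])
e_direct = np.array([0.9, 0.3, 0.1])   # tonnes CO2e per $ of sector output

L = np.linalg.inv(np.eye(3) - A)       # Leontief inverse (I - A)^-1
e_total = e_direct @ L                 # direct + indirect emission intensities

final_demand = np.array([100.0, 80.0, 120.0])
output = L @ final_demand              # gross output needed to meet demand
emissions = e_direct @ output          # economy-wide emissions

print("direct+indirect intensities:", e_total.round(3))
print("gross output by sector:", output.round(1))
print("total emissions:", round(float(emissions), 1))
```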

  8. In-memory interconnect protocol configuration registers

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Kevin Y.; Roberts, David A.

    2017-09-19

    Systems, apparatuses, and methods for moving the interconnect protocol configuration registers into the main memory space of a node. The region of memory used for storing the interconnect protocol configuration registers may also be made cacheable to reduce the latency of accesses to the interconnect protocol configuration registers. Interconnect protocol configuration registers which are used during a startup routine may be prefetched into the host's cache to make the startup routine more efficient. The interconnect protocol configuration registers for various interconnect protocols may include one or more of device capability tables, memory-side statistics (e.g., to support two-level memory data mapping decisions), advanced memory and interconnect features such as repair resources and routing tables, prefetching hints, error correcting code (ECC) bits, lists of device capabilities, set and store base address, capability, device ID, status, configuration, capabilities, and other settings.

  9. In-memory interconnect protocol configuration registers

    Science.gov (United States)

    Cheng, Kevin Y.; Roberts, David A.

    2017-09-19

    Systems, apparatuses, and methods for moving the interconnect protocol configuration registers into the main memory space of a node. The region of memory used for storing the interconnect protocol configuration registers may also be made cacheable to reduce the latency of accesses to the interconnect protocol configuration registers. Interconnect protocol configuration registers which are used during a startup routine may be prefetched into the host's cache to make the startup routine more efficient. The interconnect protocol configuration registers for various interconnect protocols may include one or more of device capability tables, memory-side statistics (e.g., to support two-level memory data mapping decisions), advanced memory and interconnect features such as repair resources and routing tables, prefetching hints, error correcting code (ECC) bits, lists of device capabilities, set and store base address, capability, device ID, status, configuration, capabilities, and other settings.

  10. Computational methods for constructing protein structure models from 3D electron microscopy maps.

    Science.gov (United States)

    Esquivel-Rodríguez, Juan; Kihara, Daisuke

    2013-10-01

    Protein structure determination by cryo-electron microscopy (EM) has made significant progress in the past decades. Resolutions of EM maps have been improving as evidenced by recently reported structures that are solved at high resolutions close to 3Å. Computational methods play a key role in interpreting EM data. Among many computational procedures applied to an EM map to obtain protein structure information, in this article we focus on reviewing computational methods that model protein three-dimensional (3D) structures from a 3D EM density map that is constructed from two-dimensional (2D) maps. The computational methods we discuss range from de novo methods, which identify structural elements in an EM map, to structure fitting methods, where known high resolution structures are fit into a low-resolution EM map. A list of available computational tools is also provided. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Symbolic Analysis of Cryptographic Protocols

    DEFF Research Database (Denmark)

    Dahl, Morten

    We present our work on using abstract models for formally analysing cryptographic protocols: First, we present an efficient method for verifying trace-based authenticity properties of protocols using nonces, symmetric encryption, and asymmetric encryption. The method is based on a type system...... of Gordon et al., which we modify to support fully-automated type inference. Tests conducted via an implementation of our algorithm found it to be very efficient. Second, we show how privacy may be captured in a symbolic model using an equivalence-based property and give a formal definition. We formalise

  12. DNA repair protocols

    DEFF Research Database (Denmark)

    Bjergbæk, Lotte

    In its 3rd edition, this Methods in Molecular Biology(TM) book covers the eukaryotic response to genomic insult including advanced protocols and standard techniques in the field of DNA repair. Offers expert guidance for DNA repair, recombination, and replication. Current knowledge of the mechanisms...... that regulate DNA repair has grown significantly over the past years with technology advances such as RNA interference, advanced proteomics and microscopy as well as high throughput screens. The third edition of DNA Repair Protocols covers various aspects of the eukaryotic response to genomic insult including...... recent advanced protocols as well as standard techniques used in the field of DNA repair. Both mammalian and non-mammalian model organisms are covered in the book, and many of the techniques can be applied with only minor modifications to other systems than the one described. Written in the highly...

  13. A New Key-lock Method for User Authentication and Access Control

    Institute of Scientific and Technical Information of China (English)

    JI Dongyao; ZHANG Futai; WANG Yumin

    2001-01-01

    We propose a new key-lock method for user authentication and access control based on the Chinese remainder theorem, the concepts of the access control matrix, key-lock-pair, time stamp, and the NS public key protocol. Our method is dynamic and needs a minimum amount of computation in the sense that it only updates at most one key/lock for each access request. We also demonstrate how an authentication protocol can be integrated into the access control method. By applying a time stamp, the method can not only withstand replay attack, but also strengthen the authenticating mechanism, which could not be achieved simultaneously in previous key-lock methods.
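
    A minimal sketch of the classical key-lock pairing that this method builds on, leaving out the time-stamp and authentication-protocol extensions: pairwise-coprime locks are assigned to files, and a user's single integer key is constructed with the Chinese remainder theorem so that reducing it modulo each lock yields that user's access right. All values below are illustrative, and the modular inverse via pow requires Python 3.8+.

```python
# CRT-based key-lock pair: file j gets a lock L_j (pairwise coprime);
# user i's key K_i satisfies K_i mod L_j = a_ij, the access right of
# user i on file j.
from math import prod

def crt(residues, moduli):
    """Smallest x with x = r_j (mod m_j) for pairwise-coprime moduli."""
    M, x = prod(moduli), 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m): modular inverse
    return x % M

locks = [5, 7, 9, 11]              # pairwise coprime, each > max access right
rights = [3, 0, 2, 1]              # one user's rights on files 0..3 (0 = none)
key = crt(rights, locks)

# Access check: recover the right for any file from the single integer key.
assert all(key % L == a for L, a in zip(locks, rights))
print("key =", key, "-> rights:", [key % L for L in locks])
```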

  14. The asymptotic expansion method via symbolic computation

    OpenAIRE

    Navarro, Juan F.

    2012-01-01

    This paper describes an algorithm for implementing a perturbation method based on an asymptotic expansion of the solution to a second-order differential equation. We also introduce a new symbolic computation system which works with the so-called modified quasipolynomials, as well as an implementation of the algorithm on it.
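
    The authors work with their own modified-quasipolynomial system; purely as an illustration of the underlying idea, the SymPy sketch below carries out a first-order asymptotic expansion x = x0 + eps*x1 for the weakly nonlinear oscillator x'' + x = eps*x**3.

```python
# First-order perturbation expansion done symbolically. The oscillator and
# initial conditions are chosen for illustration only.
import sympy as sp

t = sp.symbols("t")
x1 = sp.Function("x1")

# Order eps**0: x0'' + x0 = 0 with x0(0) = 1, x0'(0) = 0.
x0 = sp.cos(t)

# Order eps**1: x1'' + x1 = x0**3. Rewrite cos(t)**3 = (3*cos(t)+cos(3*t))/4
# so the resonant cos(t) piece is explicit.
forcing = sp.Rational(3, 4) * sp.cos(t) + sp.Rational(1, 4) * sp.cos(3 * t)
sol = sp.dsolve(sp.Eq(x1(t).diff(t, 2) + x1(t), forcing), x1(t),
                ics={x1(0): 0, x1(t).diff(t).subs(t, 0): 0})
print(sp.simplify(sol.rhs))
# The secular t*sin(t) term in the output is the classic signal that a
# frequency correction (Lindstedt-Poincare) is needed in a full method.
```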

  15. The Geneva Protocol of 1925

    International Nuclear Information System (INIS)

    Mc Elroy, R.J.

    1991-01-01

    This paper reports that when President Gerald Ford signed the instruments of ratification for the Geneva Protocol of 1925 on January 22, 1975, a tortured, half-century-long chapter in U.S. arms control policy was brought to a close. Fifty years earlier, at the Geneva Conference for the Control of the International Trade in Arms, Munitions and Implements of War, the United States had played a key role in drafting and reaching agreement on the Protocol for the Prohibition of the Use in War of Asphyxiating, Poisonous or Other Gases and of Bacteriological Methods of Warfare. The protocol, signed by thirty nations, including the United States, on June 17, 1925, prohibits the use in war of asphyxiating, poisonous or other gases, and of all analogous liquids, materials or devices as well as the use of bacteriological methods of warfare

  16. Personal computer based home automation system

    OpenAIRE

    Hellmuth, George F.

    1993-01-01

    The systems engineering process is applied in the development of the preliminary design of a home automation communication protocol. The objective of the communication protocol is to provide a means for a personal computer to communicate with adapted appliances in the home. A needs analysis is used to ascertain that a need exist for a home automation system. Numerous design alternatives are suggested and evaluated to determine the best possible protocol design. Coaxial cable...

  17. Platform-independent method for computer aided schematic drawings

    Science.gov (United States)

    Vell, Jeffrey L [Slingerlands, NY; Siganporia, Darius M [Clifton Park, NY; Levy, Arthur J [Fort Lauderdale, FL

    2012-02-14

    A CAD/CAM method is disclosed for a computer system to capture and interchange schematic drawing and associated design information. The schematic drawing and design information are stored in an extensible, platform-independent format.

  18. A-VCI: A flexible method to efficiently compute vibrational spectra

    Science.gov (United States)

    Odunlami, Marc; Le Bris, Vincent; Bégué, Didier; Baraille, Isabelle; Coulaud, Olivier

    2017-06-01

    The adaptive vibrational configuration interaction algorithm has been introduced as a new method to efficiently reduce the dimension of the set of basis functions used in a vibrational configuration interaction process. It is based on the construction of nested bases for the discretization of the Hamiltonian operator according to a theoretical criterion that ensures the convergence of the method. In the present work, the Hamiltonian is written as a sum of products of operators. The purpose of this paper is to study the properties and outline the performance details of the main steps of the algorithm. New parameters have been incorporated to increase flexibility, and their influence has been thoroughly investigated. The robustness and reliability of the method are demonstrated for the computation of the vibrational spectrum up to 3000 cm-1 of a widely studied 6-atom molecule (acetonitrile). Our results are compared to the most accurate up to date computation; we also give a new reference calculation for future work on this system. The algorithm has also been applied to a more challenging 7-atom molecule (ethylene oxide). The computed spectrum up to 3200 cm-1 is the most accurate computation that exists today on such systems.

  19. A limited, low-dose computed tomography protocol to examine the sacroiliac joints

    International Nuclear Information System (INIS)

    Friedman, L.; Silberberg, P.J.; Rainbow, A.; Butler, R.

    1993-01-01

    Limited, low-dose, three-scan computed tomography (CT) was shown to be as accurate as a complete CT series in examining the sacroiliac joints and is suggested as an effective alternative to plain radiography as the primary means to detect sacroiliitis. The advantages include the brevity of the examination, a 2-fold to 4-fold reduction in radiation exposure relative to conventional radiography and a 20-fold to 30-fold reduction relative to a full CT series. The technique was developed from studies of anatomic specimens in which the articular surfaces were covered with a film of barium to show clearly the synovial surfaces and allow the choice of the most appropriate levels of section. From the anteroposterior scout view the following levels were defined: at the first sacral foramen, between the first and second sacral foramina and at the third sacral foramen. In the superior section a quarter of the sacroiliac joint is synovial, whereas in the inferior section the entire joint is synovial. The three representative cuts and the anteroposterior scout view are displayed on a single 14 x 17 in. (36 x 43 cm) film. Comparative images at various current strengths showed that at lower currents than conventionally used no diagnostic information was lost, despite a slight increase in noise. The referring physicians at the authors' institution prefer this protocol to the imaging routine previously used. (author). 21 refs., 1 tab., 4 figs

  20. Performance analysis of routing protocols for IoT

    Science.gov (United States)

    Manda, Sridhar; Nalini, N.

    2018-04-01

    The Internet of Things (IoT) is an interdisciplinary arrangement of technologies used to achieve an effective integration of physical and digital things. With IoT, physical things can have personal virtual identities and participate in distributed computing. Realization of IoT requires sensors suited to the sector in which IoT is deployed. For instance, in the healthcare domain, IoT needs to integrate with wearable sensors used by patients. As sensor devices produce huge amounts of data, often called big data, there should be efficient routing protocols in place. As far as wireless networks are concerned, there are existing protocols such as OLSR, DSR and AODV. The paper also throws light on the Trust-based Routing Protocol for Low-power and lossy networks (TRPL) for IoT. These are widely used wireless routing protocols. As IoT adoption is around the corner, it is essential to investigate routing protocols and evaluate their performance in terms of throughput, end-to-end delay, and routing overhead. These performance insights can help in making well-informed decisions while integrating wireless networks with IoT. In this paper, we analyzed different routing protocols and compared their performance. It is found that AODV showed better performance than the other routing protocols mentioned.

  1. Improving an Anonymous and Provably Secure Authentication Protocol for a Mobile User

    Directory of Open Access Journals (Sweden)

    Jongho Moon

    2017-01-01

    Full Text Available Recently, many authentication protocols using an extended chaotic map have been suggested for mobile users. Researchers have demonstrated that an authentication protocol needs to provide key agreement, mutual authentication, and user anonymity between mobile user and server, as well as resilience to many possible attacks. In this paper, we carefully analyzed a chaotic-map-based authentication scheme and proved that it is still vulnerable to off-line identity guessing, user and server impersonation, and on-line identity guessing attacks. To address these vulnerabilities, we propose an improved protocol based on an extended chaotic map and a fuzzy extractor. We prove the security of the proposed protocol using a random oracle and the AVISPA (Automated Validation of Internet Security Protocols and Applications) tool. Furthermore, we present an informal security analysis to make sure that the improved protocol is invulnerable to possible attacks. The proposed protocol is also computationally efficient when compared to other previous protocols.
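
    The improved protocol itself layers mutual authentication and a fuzzy extractor on top of the chaotic map; the sketch below shows only the underlying primitive, the semigroup property of Chebyshev polynomials that gives a Diffie-Hellman-like exchange. The seed and exponents are arbitrary, and the bare exchange shown is known to be insecure without the protocol's additional protections.

```python
# Chebyshev polynomials T_n(x) = cos(n*arccos x) satisfy the semigroup
# property T_a(T_b(x)) = T_ab(x) for integer a, b, which enables key
# agreement. Illustrative primitive only -- NOT a secure protocol by itself.
import math

def chebyshev(n, x):
    return math.cos(n * math.acos(x))

x = 0.53            # public seed in [-1, 1]
a, b = 11, 29       # Alice's and Bob's private integers

A = chebyshev(a, x)             # Alice -> Bob
B = chebyshev(b, x)             # Bob -> Alice

k_alice = chebyshev(a, B)       # T_a(T_b(x))
k_bob = chebyshev(b, A)         # T_b(T_a(x))
assert math.isclose(k_alice, k_bob, abs_tol=1e-9)
print("shared value:", round(k_alice, 12))
```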

  2. Depth-Averaged Non-Hydrostatic Hydrodynamic Model Using a New Multithreading Parallel Computing Method

    Directory of Open Access Journals (Sweden)

    Ling Kang

    2017-03-01

    Full Text Available Compared to the hydrostatic hydrodynamic model, the non-hydrostatic hydrodynamic model can accurately simulate flows that feature vertical accelerations. However, its low computational efficiency severely restricts its wider application. This paper proposes a non-hydrostatic hydrodynamic model based on a multithreading parallel computing method. The horizontal momentum equation is obtained by integrating the Navier–Stokes equations from the bottom to the free surface. The vertical momentum equation is approximated by the Keller-box scheme. A two-step method is used to solve the model equations. A parallel strategy based on block decomposition computation is utilized: the original computational domain is subdivided into two subdomains that are physically connected via a virtual boundary technique. Two sub-threads are created and tasked with the computation of the two subdomains. The producer–consumer model and the thread lock technique are used to achieve synchronous communication between sub-threads. The validity of the model was verified by solitary wave propagation experiments over a flat bottom and a slope, followed by two sinusoidal wave propagation experiments over a submerged breakwater. The parallel computing method proposed here was found to effectively enhance computational efficiency, saving 20%–40% of computation time compared to serial computing. The parallel speedup and parallel efficiency are approximately 1.45 and 72%, respectively. The parallel computing method makes a contribution to the popularization of non-hydrostatic models.
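
    The parallel strategy, two sub-threads computing physically connected subdomains and synchronizing at a virtual boundary, can be sketched with Python threads on a toy problem. The 1D explicit diffusion update below stands in for the real non-hydrostatic solver, and the barriers stand in for the paper's producer-consumer synchronization.

```python
# Two threads own two halves of the domain, exchange values through a shared
# array at the "virtual boundary", and synchronize with barriers each step.
import threading
import numpy as np

u = np.zeros(102)                       # field with one ghost cell per end
u[51] = 1.0                             # initial spike at the interface
blocks = [(1, 51), (51, 101)]           # the two subdomains
barrier = threading.Barrier(2)
alpha, steps = 0.25, 200

def worker(lo, hi):
    for _ in range(steps):
        barrier.wait()                  # halo values are now consistent
        new = u[lo:hi] + alpha * (u[lo-1:hi-1] - 2*u[lo:hi] + u[lo+1:hi+1])
        barrier.wait()                  # both threads are done reading
        u[lo:hi] = new                  # each thread writes its block only

threads = [threading.Thread(target=worker, args=b) for b in blocks]
for th in threads: th.start()
for th in threads: th.join()
print("peak after diffusion:", round(float(u.max()), 4))
```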

  3. Network protocol changes can improve DisCom WAN performance : evaluating TCP modifications and SCTP in the ASC tri-lab environment.

    Energy Technology Data Exchange (ETDEWEB)

    Tolendino, Lawrence F.; Hu, Tan Chang

    2005-06-01

    The Advanced Simulation and Computing (ASC) Distance Computing (DisCom) Wide Area Network (WAN) is a high performance, long distance network environment that is based on the ubiquitous TCP/IP protocol set. However, the Transmission Control Protocol (TCP) and the algorithms that govern its operation were defined almost two decades ago for a network environment vastly different from the DisCom WAN. In this paper we explore and evaluate possible modifications to TCP that purport to improve TCP performance in environments like the DisCom WAN. We also examine a much newer protocol, SCTP (Stream Control Transmission Protocol) that claims to provide reliable network transport while also implementing multi-streaming, multi-homing capabilities that are appealing in the DisCom high performance network environment. We provide performance comparisons and recommendations for continued development that will lead to network communications protocol implementations capable of supporting the coming ASC Petaflop computing environments.

  4. SACFIR: SDN-Based Application-Aware Centralized Adaptive Flow Iterative Reconfiguring Routing Protocol for WSNs.

    Science.gov (United States)

    Aslam, Muhammad; Hu, Xiaopeng; Wang, Fan

    2017-12-13

    Smart reconfiguration of a dynamic networking environment is offered by the central control of Software-Defined Networking (SDN). Centralized SDN-based management architectures are capable of retrieving global topology intelligence and decoupling the forwarding plane from the control plane. Routing protocols developed for conventional Wireless Sensor Networks (WSNs) utilize limited iterative reconfiguration methods to optimize environmental reporting. However, the challenging networking scenarios of WSNs involve a performance overhead due to constant periodic iterative reconfigurations. In this paper, we propose the SDN-based Application-aware Centralized adaptive Flow Iterative Reconfiguring (SACFIR) routing protocol with a centralized SDN iterative solver controller to maintain the load balance between flow reconfigurations and flow allocation cost. The proposed SACFIR routing protocol offers a unique iterative path-selection algorithm, which initially computes suitable clustering based on residual resources at the control layer and then implements application-aware threshold-based multi-hop report transmissions on the forwarding plane. The operation of the SACFIR algorithm is centrally supervised by the SDN controller residing at the Base Station (BS). This paper extends SACFIR to SDN-based Application-aware Main-value Centralized adaptive Flow Iterative Reconfiguring (SAMCFIR) to establish both proactive and reactive reporting. The SAMCFIR transmission phase enables sensor nodes to trigger direct transmissions for main-value reports, while in the case of SACFIR, all reports follow computed routes. Our SDN-enabled proposed models adjust the reconfiguration period according to the traffic burden on sensor nodes, which results in heterogeneity awareness, load-balancing and application-specific reconfigurations of WSNs. Extensive experimental simulation-based results show that SACFIR and SAMCFIR yield the maximum scalability, network lifetime and stability
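
    As a rough sketch of the controller-side step, computing clustering from residual resources, the snippet below elects the highest-residual-energy nodes as cluster heads and assigns every other node to its nearest head. Node coordinates, energies, and the head count are invented for illustration; the real SACFIR controller optimizes considerably more.

```python
# Centralized residual-energy clustering sketch. Field layout is illustrative.
import math

nodes = {                      # id: (x, y, residual_energy_J)
    1: (0, 0, 4.1), 2: (3, 1, 1.2), 3: (1, 4, 3.8),
    4: (5, 5, 0.9), 5: (6, 2, 2.7), 6: (2, 2, 1.5),
}

def elect_heads(nodes, n_heads):
    """The top-n residual-energy nodes become cluster heads."""
    ranked = sorted(nodes, key=lambda i: nodes[i][2], reverse=True)
    return ranked[:n_heads]

def assign_members(nodes, heads):
    def dist(i, j):
        (x1, y1, _), (x2, y2, _) = nodes[i], nodes[j]
        return math.hypot(x1 - x2, y1 - y2)
    return {i: min(heads, key=lambda h: dist(i, h))
            for i in nodes if i not in heads}

heads = elect_heads(nodes, n_heads=2)
print("heads:", heads)                       # [1, 3] for the data above
print("membership:", assign_members(nodes, heads))
```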

  5. SACFIR: SDN-Based Application-Aware Centralized Adaptive Flow Iterative Reconfiguring Routing Protocol for WSNs

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam

    2017-12-01

    Full Text Available Smart reconfiguration of a dynamic networking environment is offered by the central control of Software-Defined Networking (SDN). Centralized SDN-based management architectures are capable of retrieving global topology intelligence and decoupling the forwarding plane from the control plane. Routing protocols developed for conventional Wireless Sensor Networks (WSNs) utilize limited iterative reconfiguration methods to optimize environmental reporting. However, the challenging networking scenarios of WSNs involve a performance overhead due to constant periodic iterative reconfigurations. In this paper, we propose the SDN-based Application-aware Centralized adaptive Flow Iterative Reconfiguring (SACFIR) routing protocol with the centralized SDN iterative solver controller to maintain the load-balancing between flow reconfigurations and flow allocation cost. The proposed SACFIR routing protocol offers a unique iterative path-selection algorithm, which initially computes suitable clustering based on residual resources at the control layer and then implements application-aware threshold-based multi-hop report transmissions on the forwarding plane. The operation of the SACFIR algorithm is centrally supervised by the SDN controller residing at the Base Station (BS). This paper extends SACFIR to SDN-based Application-aware Main-value Centralized adaptive Flow Iterative Reconfiguring (SAMCFIR) to establish both proactive and reactive reporting. The SAMCFIR transmission phase enables sensor nodes to trigger direct transmissions for main-value reports, while in the case of SACFIR, all reports follow computed routes. Our SDN-enabled proposed models adjust the reconfiguration period according to the traffic burden on sensor nodes, which results in heterogeneity awareness, load-balancing and application-specific reconfigurations of WSNs. Extensive experimental simulation-based results show that SACFIR and SAMCFIR yield the maximum scalability, network lifetime

  6. Computational Methods in Stochastic Dynamics Volume 2

    CERN Document Server

    Stefanou, George; Papadopoulos, Vissarion

    2013-01-01

    The considerable influence of inherent uncertainties on structural behavior has led the engineering community to recognize the importance of a stochastic approach to structural problems. Issues related to uncertainty quantification and its influence on the reliability of the computational models are continuously gaining in significance. In particular, the problems of dynamic response analysis and reliability assessment of structures with uncertain system and excitation parameters have been the subject of continuous research over the last two decades as a result of the increasing availability of powerful computing resources and technology.   This book is a follow up of a previous book with the same subject (ISBN 978-90-481-9986-0) and focuses on advanced computational methods and software tools which can highly assist in tackling complex problems in stochastic dynamic/seismic analysis and design of structures. The selected chapters are authored by some of the most active scholars in their respective areas and...

  7. The Asymptotic Expansion Method via Symbolic Computation

    Directory of Open Access Journals (Sweden)

    Juan F. Navarro

    2012-01-01

    Full Text Available This paper describes an algorithm for implementing a perturbation method based on an asymptotic expansion of the solution to a second-order differential equation. We also introduce a new symbolic computation system which works with the so-called modified quasipolynomials, as well as an implementation of the algorithm on it.

  8. Applying Human Computation Methods to Information Science

    Science.gov (United States)

    Harris, Christopher Glenn

    2013-01-01

    Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…

  9. Electron beam treatment planning: A review of dose computation methods

    International Nuclear Information System (INIS)

    Mohan, R.; Riley, R.; Laughlin, J.S.

    1983-01-01

    Various methods of dose computations are reviewed. The equivalent path length methods used to account for body curvature and internal structure are not adequate because they ignore the lateral diffusion of electrons. The Monte Carlo method for the broad field three-dimensional situation in treatment planning is impractical because of the enormous computer time required. The pencil beam technique may represent a suitable compromise. The behavior of a pencil beam may be described by the multiple scattering theory or, alternatively, generated using the Monte Carlo method. Although nearly two orders of magnitude slower than the equivalent path length technique, the pencil beam method improves accuracy sufficiently to justify its use. It applies very well when accounting for the effect of surface irregularities; the formulation for handling inhomogeneous internal structure is yet to be developed
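
    The pencil beam idea, broad-field dose as a superposition of narrow beams whose lateral Gaussian spread grows with depth, is easy to illustrate numerically. The spread and attenuation parameters below are made up for the sketch and are not clinical data.

```python
# Toy pencil-beam superposition: each pencil beam has a depth-dependent
# Gaussian lateral profile; a broad field is the sum over the aperture.
import numpy as np

depth = np.linspace(0, 5, 60)[:, None]        # cm, one row per depth
lateral = np.linspace(-6, 6, 121)[None, :]    # cm, one column per offset

def pencil_dose(x0, depth, lateral):
    """Dose kernel of one pencil beam entering at lateral position x0."""
    sigma = 0.2 + 0.35 * depth                # multiple-scattering spread
    axial = np.exp(-depth / 4.0)              # crude depth attenuation
    return axial * np.exp(-(lateral - x0)**2 / (2 * sigma**2)) / sigma

# A 4 cm wide field = superposition of pencil beams across the aperture.
field = sum(pencil_dose(x0, depth, lateral) for x0 in np.linspace(-2, 2, 41))
print("dose grid:", field.shape,
      "surface centre dose:", round(float(field[0, 60]), 2))
```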

  10. Efficient and secure authentication protocol for roaming user in ...

    Indian Academy of Sciences (India)

    BALU L PARNE

    2018-05-29

    May 29, 2018 ... 1 Department of Computer Science and Engineering, Visvesvaraya National Institute of Technology (VNIT), ... proposed protocol is presented by BAN logic and the security ..... with session key Sk of the HLR to protect from.

  11. QoS-aware self-adaptation of communication protocols in a pervasive service middleware

    DEFF Research Database (Denmark)

    Zhang, Weishan; Hansen, Klaus Marius; Fernandes, João

    2010-01-01

    Pervasive computing is characterized by heterogeneous devices that usually have scarce resources requiring optimized usage. These devices may use different communication protocols which can be switched at runtime. As different communication protocols have different quality of service (QoS) properties, this motivates optimized self-adaptation of protocols for devices, e.g., considering power consumption and other QoS requirements such as round trip time (RTT) for service invocations, throughput, and reliability. In this paper, we present an extensible approach for self-adaptation of communication...... protocols for pervasive web services, where protocols are designed as reusable connectors and our middleware infrastructure can hide the complexity of using different communication protocols from upper layers. We also propose to use Genetic Algorithms (GAs) to find optimized configurations at runtime......

  12. High-Throughput Sequencing Based Methods of RNA Structure Investigation

    DEFF Research Database (Denmark)

    Kielpinski, Lukasz Jan

    In this thesis we describe the development of four related methods for RNA structure probing that utilize massive parallel sequencing. Using them, we were able to gather structural data for multiple, long molecules simultaneously. First, we have established an easy to follow experimental...... and computational protocol for detecting the reverse transcription termination sites (RTTS-Seq). This protocol was subsequently applied to hydroxyl radical footprinting of three dimensional RNA structures to give a probing signal that correlates well with the RNA backbone solvent accessibility. Moreover, we applied...

  13. Computing architecture for autonomous microgrids

    Science.gov (United States)

    Goldsmith, Steven Y.

    2015-09-29

    A computing architecture that facilitates autonomously controlling operations of a microgrid is described herein. A microgrid network includes numerous computing devices that execute intelligent agents, each of which is assigned to a particular entity (load, source, storage device, or switch) in the microgrid. The intelligent agents can execute in accordance with predefined protocols to collectively perform computations that facilitate uninterrupted control of the microgrid.

  14. A numerical method to compute interior transmission eigenvalues

    International Nuclear Information System (INIS)

    Kleefeld, Andreas

    2013-01-01

    In this paper the numerical calculation of eigenvalues of the interior transmission problem arising in acoustic scattering for constant contrast in three dimensions is considered. From the computational point of view existing methods are very expensive, and are only able to show the existence of such transmission eigenvalues. Furthermore, they have trouble finding them if two or more eigenvalues are situated closely together. We present a new method based on complex-valued contour integrals and the boundary integral equation method which is able to calculate highly accurate transmission eigenvalues. So far, this is the first paper providing such accurate values for various surfaces different from a sphere in three dimensions. Additionally, the computational cost is even lower than those of existing methods. Furthermore, the algorithm is capable of finding complex-valued eigenvalues for which no numerical results have been reported yet. Until now, the proof of existence of such eigenvalues is still open. Finally, highly accurate eigenvalues of the interior Dirichlet problem are provided and might serve as test cases to check newly derived Faber–Krahn type inequalities for larger transmission eigenvalues that are not yet available. (paper)
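
    The paper's solver is built on boundary integral equations, but the contour-integral mechanism it relies on can be illustrated generically with a Beyn-type eigensolver: two contour moments of F(z)^{-1}V are formed by trapezoid quadrature on a circle, and a small linearized problem yields the eigenvalues enclosed by the contour. The probe count, quadrature order, and the linear test problem are assumptions of this sketch, not the authors' implementation.

```python
# Generic Beyn-type contour-integral eigensolver for F(z)x = 0.
import numpy as np

def contour_eigenvalues(F, center, radius, m, n_probe=4, n_quad=64, tol=1e-8):
    """Eigenvalues of the nonlinear problem F(z)x = 0 inside a circle."""
    rng = np.random.default_rng(1)
    V = rng.standard_normal((m, n_probe))
    A0 = np.zeros((m, n_probe), dtype=complex)
    A1 = np.zeros_like(A0)
    for k in range(n_quad):                     # trapezoid rule on the circle
        phi = 2 * np.pi * k / n_quad
        z = center + radius * np.exp(1j * phi)
        dz = 1j * radius * np.exp(1j * phi) * (2 * np.pi / n_quad)
        X = np.linalg.solve(F(z), V)            # F(z)^{-1} V
        A0 += X * dz                            # zeroth contour moment
        A1 += z * X * dz                        # first contour moment
    A0 /= 2j * np.pi
    A1 /= 2j * np.pi
    U, s, Wh = np.linalg.svd(A0, full_matrices=False)
    k = int(np.sum(s > tol))                    # number of eigenvalues inside
    B = (U[:, :k].conj().T @ A1 @ Wh[:k].conj().T) / s[:k]
    return np.linalg.eigvals(B)

# Sanity check on a linear problem F(z) = A - z*I with spectrum {1, 2, 5}:
A = np.diag([1.0, 2.0, 5.0])
F = lambda z: A - z * np.eye(3)
print(np.sort_complex(contour_eigenvalues(F, center=1.5, radius=1.0, m=3)))
# -> approximately [1.+0.j, 2.+0.j]
```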

  15. Mathematical optics classical, quantum, and computational methods

    CERN Document Server

    Lakshminarayanan, Vasudevan

    2012-01-01

    Going beyond standard introductory texts, Mathematical Optics: Classical, Quantum, and Computational Methods brings together many new mathematical techniques from optical science and engineering research. Profusely illustrated, the book makes the material accessible to students and newcomers to the field. Divided into six parts, the text presents state-of-the-art mathematical methods and applications in classical optics, quantum optics, and image processing. Part I describes the use of phase space concepts to characterize optical beams and the application of dynamic programming in optical wave

  16. Towards a New Classification of Location Privacy Methods in Pervasive Computing

    DEFF Research Database (Denmark)

    Andersen, Mads Schaarup; Kjærgaard, Mikkel Baun

    2011-01-01

    Over the last decade many methods for location privacy have been proposed, but the mapping between classes of location based services and location privacy methods is not obvious. This entails confusion for developers, lack of usage of privacy methods, and an unclear road-map ahead for research. The service classes considered are Points-of-Interest, Social Networking, Collaborative Sensing, and Route Tracing, and the high level location privacy method categories are Anonymization, Classical Security, Spatial Obfuscation, Temporal Obfuscation, and Protocol. It is found that little work exists on location privacy in the areas of Social Networking and Collaborative Sensing, and that insufficient work has been done in Route Tracing. It is concluded that none of the existing methods cover all applications of Route Tracing. It is, therefore, suggested that a new overall method should be proposed to solve the problem of location privacy in Route Tracing.

  17. A computer-aided audit system for respiratory therapy consult evaluations: description of a method and early results.

    Science.gov (United States)

    Kester, Lucy; Stoller, James K

    2013-05-01

    Use of respiratory therapist (RT)-guided protocols enhances allocation of respiratory care. Because optimal protocol use requires a system for auditing respiratory care plans, to assure both adherence to protocols and the expertise of the RTs generating the care plans, a live audit system has long been in use in our Respiratory Therapy Consult Service. Growth in the number of RT positions and the need to audit more frequently prompted development of a new, computer-aided audit system. The number and results of audits using the old and new systems were compared (for the periods May 30, 2009 through May 30, 2011 and January 1, 2012 through May 30, 2012, respectively). In contrast to the original live system, which required a patient visit by the auditor, the new system involves completion of a respiratory therapy care plan using patient information in the electronic medical record, both by the RT generating the care plan and by the auditor. Completing audits in the new system also uses an electronic respiratory therapy management system. The degrees of concordance between the audited RTs' care plans and the "gold standard" care plans were similar under the old and new audit systems. Use of the new system was associated with an almost doubling of the audit rate (ie, 11 per month vs 6.1 per month). The new, computer-aided audit system increased the capacity to audit more RTs performing RT-guided consults while preserving accuracy as an audit tool. Ensuring that RTs adhere to the audit process remains the challenge for the new system and is the rate-limiting step.

  18. Advances of evolutionary computation methods and operators

    CERN Document Server

    Cuevas, Erik; Oliva Navarro, Diego Alberto

    2016-01-01

    The goal of this book is to present advances that discuss alternative Evolutionary Computation (EC) developments and non-conventional operators which have proved to be effective in the solution of several complex problems. The book has been structured so that each chapter can be read independently from the others. The book contains nine chapters with the following themes: 1) Introduction, 2) the Social Spider Optimization (SSO), 3) the States of Matter Search (SMS), 4) the collective animal behavior (CAB) algorithm, 5) the Allostatic Optimization (AO) method, 6) the Locust Search (LS) algorithm, 7) the Adaptive Population with Reduced Evaluations (APRE) method, 8) the multimodal CAB, 9) the constrained SSO method.

  19. A method of non-contact reading code based on computer vision

    Science.gov (United States)

    Zhang, Chunsen; Zong, Xiaoyu; Guo, Bingxuan

    2018-03-01

    With the purpose of guaranteeing secure computer information exchange between internal and external networks (trusted and untrusted networks), a non-contact code reading method based on machine vision has been proposed, which differs from existing physical network isolation methods. Using a computer monitor, a camera and other equipment, the information to be exchanged is processed through image encoding, generation of a standard image, display and capture of the actual image, computation of the homography matrix, image distortion correction and calibration, and decoding. This achieves secure, non-contact, one-way transmission of computer information between the internal and external networks. The effectiveness of the proposed method is verified by experiments on real computer text data; a data transfer speed of 24 kb/s can be achieved. The experiments show that this algorithm has the characteristics of high security, high speed and low information loss, which can meet the daily needs of confidentiality departments to update data effectively and reliably. It solves the difficulty of exchanging computer information between classified and non-classified networks, with distinctive originality, practicability, and practical research value.
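
    The geometric core of the method, computing a homography from the detected corners of the displayed code and undoing the perspective distortion, can be sketched with OpenCV. The file name and corner coordinates below are placeholders; in the actual system the corners would come from a detection step and the rectified image would be passed to the decoder.

```python
# Rectify a camera shot of a code displayed on a monitor via a homography.
import cv2
import numpy as np

captured = cv2.imread("captured_frame.png")          # camera shot of monitor
assert captured is not None, "placeholder file not found"

# Detected corners of the code region in the capture (clockwise, top-left
# first); these coordinates are illustrative placeholders.
src = np.float32([[112, 84], [498, 70], [520, 410], [95, 430]])
side = 400                                           # canonical code size, px
dst = np.float32([[0, 0], [side, 0], [side, side], [0, side]])

H = cv2.getPerspectiveTransform(src, dst)            # homography matrix
flat = cv2.warpPerspective(captured, H, (side, side))
cv2.imwrite("rectified_code.png", flat)              # ready for decoding
```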

  20. Computational mathematics models, methods, and analysis with Matlab and MPI

    CERN Document Server

    White, Robert E

    2004-01-01

    Computational Mathematics: Models, Methods, and Analysis with MATLAB and MPI explores and illustrates this process. Each section of the first six chapters is motivated by a specific application. The author applies a model, selects a numerical method, implements computer simulations, and assesses the ensuing results. These chapters include an abundance of MATLAB code. By studying the code instead of using it as a "black box, " you take the first step toward more sophisticated numerical modeling. The last four chapters focus on multiprocessing algorithms implemented using message passing interface (MPI). These chapters include Fortran 9x codes that illustrate the basic MPI subroutines and revisit the applications of the previous chapters from a parallel implementation perspective. All of the codes are available for download from www4.ncsu.edu./~white.This book is not just about math, not just about computing, and not just about applications, but about all three--in other words, computational science. Whether us...

  1. Moving finite elements: A continuously adaptive method for computational fluid dynamics

    International Nuclear Information System (INIS)

    Glasser, A.H.; Miller, K.; Carlson, N.

    1991-01-01

    Moving Finite Elements (MFE), a recently developed method for computational fluid dynamics, promises major advances in the ability of computers to model the complex behavior of liquids, gases, and plasmas. Applications of computational fluid dynamics occur in a wide range of scientifically and technologically important fields. Examples include meteorology, oceanography, global climate modeling, magnetic and inertial fusion energy research, semiconductor fabrication, biophysics, automobile and aircraft design, industrial fluid processing, chemical engineering, and combustion research. The improvements made possible by the new method could thus have substantial economic impact. Moving Finite Elements is a moving node adaptive grid method which has a tendency to pack the grid finely in regions where it is most needed at each time and to leave it coarse elsewhere. It does so in a manner which is simple and automatic, and does not require a large amount of human ingenuity to apply it to each particular problem. At the same time, it often allows the time step to be large enough to advance a moving shock by many shock thicknesses in a single time step, moving the grid smoothly with the solution and minimizing the number of time steps required for the whole problem. For 2D problems (two spatial variables) the grid is composed of irregularly shaped and irregularly connected triangles which are very flexible in their ability to adapt to the evolving solution. While other adaptive grid methods have been developed which share some of these desirable properties, this is the only method which combines them all. In many cases, the method can save orders of magnitude of computing time, equivalent to several generations of advancing computer hardware

  2. Protocols for pressure ulcer prevention: are they evidence-based?

    Science.gov (United States)

    Chaves, Lidice M; Grypdonck, Mieke H F; Defloor, Tom

    2010-03-01

    This study is a report of a study to determine the quality of protocols for pressure ulcer prevention in home care in the Netherlands. If pressure ulcer prevention protocols are evidence-based and practitioners use them correctly in practice, this will result in a reduction in pressure ulcers. Very little is known about the evidence-based content and quality of pressure ulcer prevention protocols. In 2008, current pressure ulcer prevention protocols from 24 home-care agencies in the Netherlands were evaluated. A checklist developed and validated by two pressure ulcer prevention experts was used to assess the quality of the protocols, and weighted and unweighted quality scores were computed and analysed using descriptive statistics. The 24 pressure ulcer prevention protocols had a mean weighted quality score of 63.38 points out of a maximum of 100 (sd 5). The importance of observing the skin at the pressure points at least once a day was emphasized in 75% of the protocols. Only 42% correctly warned against the use of materials that were 'less effective or that could potentially cause harm'. Pressure ulcer prevention commands a reasonable amount of attention in home care, but the incidence of pressure ulcers and the lack of a consistent, standardized document for use in actual practice indicate a need for systematic implementation of national pressure ulcer prevention standards in the Netherlands to ensure adherence to the established protocols.

  3. Methods for computing SN eigenvalues and eigenvectors of slab geometry transport problems

    International Nuclear Information System (INIS)

    Yavuz, Musa

    1998-01-01

    We discuss computational methods for computing the eigenvalues and eigenvectors of single energy-group neutral particle transport (SN) problems in homogeneous slab geometry, with an arbitrary scattering anisotropy of order L. These eigensolutions are important when exact (or very accurate) solutions are desired for coarse spatial cell problems demanding rapid execution times. Three methods, one of which is 'new', are presented for determining the eigenvalues and eigenvectors of such SN problems. In the first method, separation of variables is directly applied to the SN equations. In the second method, common characteristics of the SN and PN-1 equations are used. In the new method, the eigenvalues and eigenvectors can be computed provided that the cell-interface Green's functions (transmission and reflection factors) are known. Numerical results for S4 test problems are given to compare the new method with the existing methods.

  4. Methods for computing SN eigenvalues and eigenvectors of slab geometry transport problems

    International Nuclear Information System (INIS)

    Yavuz, M.

    1997-01-01

    We discuss computational methods for computing the eigenvalues and eigenvectors of single energy-group neutral particle transport (SN) problems in homogeneous slab geometry, with an arbitrary scattering anisotropy of order L. These eigensolutions are important when exact (or very accurate) solutions are desired for coarse spatial cell problems demanding rapid execution times. Three methods, one of which is 'new', are presented for determining the eigenvalues and eigenvectors of such SN problems. In the first method, separation of variables is directly applied to the SN equations. In the second method, common characteristics of the SN and PN-1 equations are used. In the new method, the eigenvalues and eigenvectors can be computed provided that the cell-interface Green's functions (transmission and reflection factors) are known. Numerical results for S4 test problems are given to compare the new method with the existing methods. (author)

  5. Delamination detection using methods of computational intelligence

    Science.gov (United States)

    Ihesiulor, Obinna K.; Shankar, Krishna; Zhang, Zhifang; Ray, Tapabrata

    2012-11-01

    A reliable delamination prediction scheme is indispensable in order to prevent potential risks of catastrophic failures in composite structures. The existence of delaminations changes the vibration characteristics of composite laminates, and hence such indicators can be used to quantify the health characteristics of laminates. An approach for online health monitoring of in-service composite laminates is presented in this paper that relies on methods based on computational intelligence. Typical changes in the observed vibration characteristics (i.e. changes in natural frequencies) are considered as inputs to identify the existence, location and magnitude of delaminations. The performance of the proposed approach is demonstrated using numerical models of composite laminates. Since this identification problem essentially involves the solution of an optimization problem, the use of finite element (FE) methods as the underlying tool for analysis turns out to be computationally expensive. A surrogate-assisted optimization approach is hence introduced to contain the computational time within affordable limits. An artificial neural network (ANN) model with Bayesian regularization is used as the underlying approximation scheme, while an improved rate of convergence is achieved using a memetic algorithm. However, building ANN surrogate models usually requires large training datasets. K-means clustering is effectively employed to reduce the size of datasets. ANN is also used via inverse modeling to determine the position, size and location of delaminations using changes in measured natural frequencies. The results clearly highlight the efficiency and the robustness of the approach.

  6. Compositional mining of multiple object API protocols through state abstraction.

    Science.gov (United States)

    Dai, Ziying; Mao, Xiaoguang; Lei, Yan; Qi, Yuhua; Wang, Rui; Gu, Bin

    2013-01-01

    API protocols specify correct sequences of method invocations. Despite their usefulness, API protocols are often unavailable in practice because writing them is cumbersome and error prone. Multiple object API protocols are more expressive than single object API protocols. However, the huge number of objects of typical object-oriented programs poses a major challenge to the automatic mining of multiple object API protocols: besides maintaining scalability, it is important to capture various object interactions. Current approaches utilize various heuristics to focus on small sets of methods. In this paper, we present a general, scalable, multiple object API protocols mining approach that can capture all object interactions. Our approach uses abstract field values to label object states during the mining process. We first mine single object typestates as finite state automata whose transitions are annotated with states of interacting objects before and after the execution of the corresponding method and then construct multiple object API protocols by composing these annotated single object typestates. We implement our approach for Java and evaluate it through a series of experiments.
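
    A minimal sketch of the mining step, under the simplifying assumption that object states are already labeled by abstract field values: observed (state, method, state) events from traces are folded into a finite state automaton whose edges define the permitted call sequences. The file-like traces below are hypothetical.

```python
# Mine a typestate automaton from execution traces whose events carry the
# abstract object state before and after each method call.
from collections import defaultdict

traces = [  # each event: (state_before, method, state_after)
    [("closed", "open", "opened"), ("opened", "read", "opened"),
     ("opened", "close", "closed")],
    [("closed", "open", "opened"), ("opened", "close", "closed")],
]

def mine_typestate(traces):
    fsa = defaultdict(set)                 # (state, method) -> next states
    for trace in traces:
        for before, method, after in trace:
            fsa[(before, method)].add(after)
    return fsa

protocol = mine_typestate(traces)
for (state, method), nexts in sorted(protocol.items()):
    print(f"{state} --{method}--> {sorted(nexts)}")
# A call sequence conforms to the mined protocol if every step follows an
# edge, e.g. 'read' is only permitted from the 'opened' state.
```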

  7. Keyboard with Universal Communication Protocol Applied to CNC Machine

    Directory of Open Access Journals (Sweden)

    Mejía-Ugalde Mario

    2014-04-01

    Full Text Available This article describes the use of a universal communication protocol for a microcontroller-based industrial keyboard applied to computer numerically controlled (CNC) machines. The main difference among keyboard manufacturers is that each has its own source code programming, producing a different communication protocol and generating improper interpretation of the established functions. This results in commercial industrial keyboards that are expensive and incompatible in their connection with different machines. In the present work, the protocol allows the designed universal keyboard and the standard PC keyboard to be connected at the same time; it is compatible with all computers through USB, AT or PS/2 communications, for use in CNC machines, with extension to other machines such as robots, blow molding machines, injection molding machines and others. The advantages of this design include easy reprogramming, decreased costs, manipulation of various machine functions and easy expansion of input and output signals. The performance test results were satisfactory, because each key can be programmed and reprogrammed in different ways, generating codes for different functions depending on the application in which it is used.

  8. Subtraction method of computing QCD jet cross sections at NNLO accuracy

    Science.gov (United States)

    Trócsányi, Zoltán; Somogyi, Gábor

    2008-10-01

    We present a general subtraction method for computing radiative corrections to QCD jet cross sections at next-to-next-to-leading order accuracy. The steps needed to set up this subtraction scheme are the same as those used in next-to-leading order computations. However, all steps need non-trivial modifications, which we implement such that they can be defined at any order in perturbation theory. We give a status report of the implementation of the method to computing jet cross sections in electron-positron annihilation at next-to-next-to-leading order accuracy.
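
    For orientation, the schematic structure being generalized can be written down; the NLO template and the doubly-real NNLO piece below follow the common subtraction-method notation and may differ from the authors' exact conventions.

```latex
% NLO template: both integrals are separately finite in d = 4 dimensions.
\sigma^{\mathrm{NLO}}
  = \int_{m+1}\!\big[\mathrm{d}\sigma^{\mathrm{R}}
      - \mathrm{d}\sigma^{\mathrm{R,A}}\big]_{\epsilon=0}
  + \int_{m}\!\big[\mathrm{d}\sigma^{\mathrm{V}}
      + \int_{1}\mathrm{d}\sigma^{\mathrm{R,A}}\big]_{\epsilon=0}

% At NNLO the doubly-real contribution needs single- and double-unresolved
% counterterms (A_1, A_2) plus their overlap (A_{12}), schematically:
\int_{m+2}\!\big[\mathrm{d}\sigma^{\mathrm{RR}}
  - \mathrm{d}\sigma^{\mathrm{RR,A}_2}
  - \mathrm{d}\sigma^{\mathrm{RR,A}_1}
  + \mathrm{d}\sigma^{\mathrm{RR,A}_{12}}\big]_{\epsilon=0}
```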

  9. Subtraction method of computing QCD jet cross sections at NNLO accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Trocsanyi, Zoltan [University of Debrecen and Institute of Nuclear Research of the Hungarian Academy of Sciences, H-4001 Debrecen P.O.Box 51 (Hungary)], E-mail: Zoltan.Trocsanyi@cern.ch; Somogyi, Gabor [University of Zuerich, Winterthurerstrasse 190, CH-8057 Zuerich (Switzerland)], E-mail: sgabi@physik.unizh.ch

    2008-10-15

    We present a general subtraction method for computing radiative corrections to QCD jet cross sections at next-to-next-to-leading order accuracy. The steps needed to set up this subtraction scheme are the same as those used in next-to-leading order computations. However, all steps need non-trivial modifications, which we implement such that they can be defined at any order in perturbation theory. We give a status report of the implementation of the method to computing jet cross sections in electron-positron annihilation at next-to-next-to-leading order accuracy.

  10. Network, system, and status software enhancements for the autonomously managed electrical power system breadboard. Volume 2: Protocol specification

    Science.gov (United States)

    Mckee, James W.

    1990-01-01

    This volume (2 of 4) contains the specification, structured flow charts, and code listing for the protocol. The purpose of an autonomous power system on a spacecraft is to relieve humans from having to continuously monitor and control the generation, storage, and distribution of power in the craft. This implies that algorithms will have been developed to monitor and control the power system. The power system will contain computers on which the algorithms run. There should be one control computer system that makes the high-level decisions and sends commands to and receives data from the other distributed computers. This will require a communications network and an efficient protocol by which the computers will communicate. One of the major requirements on the protocol is that it be real time because of the need to control the power elements.

  11. Vectorization on the star computer of several numerical methods for a fluid flow problem

    Science.gov (United States)

    Lambiotte, J. J., Jr.; Howser, L. M.

    1974-01-01

    Some numerical methods are reexamined in light of the new class of computers which use vector streaming to achieve high computation rates. A study has been made of the effect on the relative efficiency of several numerical methods applied to a particular fluid flow problem when they are implemented on a vector computer. The method of Brailovskaya, the alternating direction implicit method, a fully implicit method, and a new method called partial implicitization have been applied to the problem of determining the steady state solution of the two-dimensional flow of a viscous incompressible fluid in a square cavity driven by a sliding wall. Results are obtained for three mesh sizes and a comparison is made of the methods for serial computation.
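
    The tooling has changed since, but the principle the study examines, recasting a grid sweep so that every point updates from the previous iterate and the whole sweep becomes one vector operation, is easy to show with NumPy. The Jacobi-style relaxation below on a cavity-like scalar field is an illustrative stand-in, not the paper's Navier-Stokes schemes.

```python
# One whole-grid relaxation step expressed as array (vector) operations
# instead of nested point-by-point loops.
import numpy as np

n = 64
psi = np.zeros((n, n))
psi[0, :] = 1.0                      # "sliding wall" boundary value

def jacobi_step_vectorized(p):
    """Update every interior point from the previous iterate in one shot."""
    new = p.copy()
    new[1:-1, 1:-1] = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] +
                              p[1:-1, :-2] + p[1:-1, 2:])
    return new

for _ in range(500):                 # iterate toward the steady state
    psi = jacobi_step_vectorized(psi)
print("centre value after 500 sweeps:", round(float(psi[n // 2, n // 2]), 4))
```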

  12. Balancing nurses' workload in hospital wards: study protocol of developing a method to manage workload.

    Science.gov (United States)

    van den Oetelaar, W F J M; van Stel, H F; van Rhenen, W; Stellato, R K; Grolman, W

    2016-11-10

    Hospitals pursue different goals at the same time: excellent service to their patients, good quality care, operational excellence, retaining employees. This requires a good balance between patient needs and nursing staff. One way to ensure a proper fit between patient needs and nursing staff is to work with a workload management method. In our view, a nursing workload management method needs to have the following characteristics: easy to interpret; limited additional registration; applicable to different types of hospital wards; supported by nurses; covers all activities of nurses and suitable for prospective planning of nursing staff. At present, no such method is available. The research follows several steps to come to a workload management method for staff nurses. First, a list of patient characteristics relevant to care time will be composed by performing a Delphi study among staff nurses. Next, a time study of nurses' activities will be carried out. The 2 can be combined to estimate care time per patient group and estimate the time nurses spend on non-patient-related activities. These 2 estimates can be combined and compared with available nursing resources: this gives an estimate of nurses' workload. The research will take place in an academic hospital in the Netherlands. 6 surgical wards will be included, capacity 15-30 beds. The study protocol was submitted to the Medical Ethical Review Board of the University Medical Center (UMC) Utrecht and received a positive advice, protocol number 14-165/C. This method will be developed in close cooperation with staff nurses and ward management. The strong involvement of the end users will contribute to a broader support of the results. The method we will develop may also be useful for planning purposes; this is a strong advantage compared with existing methods, which tend to focus on retrospective analysis. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence
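
    The arithmetic at the heart of the proposed method can be sketched directly: estimated care time per patient group (the Delphi output) plus measured non-patient-related time, set against rostered nursing hours. All numbers below are hypothetical.

```python
# Compare estimated nursing demand with rostered capacity for one shift.
care_time_per_group = {"low": 1.5, "medium": 3.0, "high": 5.5}  # h/patient
census = {"low": 8, "medium": 10, "high": 4}                    # patients today
non_patient_hours = 6.0       # meetings, admin, walking (from the time study)
rostered_hours = 5 * 8.0      # five nurses on an 8-hour shift

demand = sum(care_time_per_group[g] * n for g, n in census.items())
demand += non_patient_hours
workload_ratio = demand / rostered_hours
print(f"demand {demand:.1f} h vs capacity {rostered_hours:.1f} h "
      f"-> workload {workload_ratio:.0%}")
```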

  13. Computer Networks A Systems Approach

    CERN Document Server

    Peterson, Larry L

    2011-01-01

    This best-selling and classic book teaches you the key principles of computer networks with examples drawn from the real world of network and protocol design. Using the Internet as the primary example, the authors explain various protocols and networking technologies. Their systems-oriented approach encourages you to think about how individual network components fit into a larger, complex system of interactions. Whatever your perspective, whether it be that of an application developer, network administrator, or a designer of network equipment or protocols, you will come away with a "big pictur

  14. Control rod computer code IAMCOS: general theory and numerical methods

    International Nuclear Information System (INIS)

    West, G.

    1982-11-01

    IAMCOS is a computer code for describing the mechanical and thermal behavior of cylindrical control rods for fast breeders. This code version was applied, tested, and modified from 1979 to 1981. This report describes the basic model (02 version), theoretical definitions, and computation methods. (Original in French)

  15. Reliable methods for computer simulation error control and a posteriori estimates

    CERN Document Server

    Neittaanmäki, P

    2004-01-01

    Recent decades have seen very rapid success in developing numerical methods based on explicit control over approximation errors. It may be said that a new direction is now forming in numerical analysis, the main goal of which is to develop methods of reliable computation. In general, a reliable numerical method must solve two basic problems: (a) generate a sequence of approximations that converges to a solution and (b) verify the accuracy of these approximations. A computer code for such a method must consist of two respective blocks: solver and checker. In this book, we are chiefly…
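
    The solver/checker split is easy to make concrete. Below is a toy illustration: the solver generates a converging sequence of approximations (composite trapezoid rule with halved steps), and the checker verifies their accuracy with an a posteriori Richardson-type estimate. The integrand and tolerance are arbitrary example choices.

    ```python
    import math

    def solver(f, a, b):
        """Yield trapezoid-rule approximations with successively halved steps."""
        n, approx = 1, 0.5 * (b - a) * (f(a) + f(b))
        while True:
            yield approx
            n *= 2
            h = (b - a) / n
            approx = h * (0.5 * f(a) + 0.5 * f(b)
                          + sum(f(a + i * h) for i in range(1, n)))

    def checker(approximations, tol):
        """A posteriori check: the trapezoid rule is O(h^2), so the
        difference of successive approximations bounds the error (/3)."""
        prev = next(approximations)
        for cur in approximations:
            if abs(cur - prev) / 3.0 < tol:
                return cur
            prev = cur

    print(checker(solver(math.sin, 0.0, math.pi), 1e-8))   # ~2.0
    ```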

  16. Protocol vulnerability detection based on network traffic analysis and binary reverse engineering.

    Science.gov (United States)

    Wen, Shameng; Meng, Qingkun; Feng, Chao; Tang, Chaojing

    2017-01-01

    Network protocol vulnerability detection plays an important role in many domains, including protocol security analysis, application security, and network intrusion detection. In this study, by analyzing the general fuzzing method for network protocols, we propose a novel approach that combines network traffic analysis with binary reverse engineering. For the network traffic analysis, a block-based protocol description language is introduced to construct test scripts, while the binary reverse engineering side employs a genetic algorithm with a fitness function designed to focus on code coverage. This combination leads to a substantial improvement in fuzz testing for network protocols. We build a prototype system and use it to test several real-world network protocol implementations. The experimental results show that the proposed approach detects vulnerabilities more efficiently and effectively than general fuzzing methods such as SPIKE.
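
    A heavily simplified sketch of the coverage-guided genetic fuzzing idea follows. The `run_target` function below is a toy stand-in for instrumented execution of a protocol parser (here a made-up four-byte header); in the described system, inputs would instead be generated from a block-based protocol description and coverage would come from binary instrumentation.

    ```python
    import random

    def run_target(data: bytes) -> set:
        # Toy stand-in for an instrumented target: each correctly parsed
        # header field of a made-up protocol "unlocks" one basic block.
        covered = {0}
        if data[:4] == b"PROT":
            covered.add(1)
            if len(data) > 4 and data[4] < 16:
                covered.add(2)
                if len(data) > 5 and data[5] == len(data) - 6:
                    covered.add(3)
        return covered

    def fitness(data: bytes) -> int:
        return len(run_target(data))          # fitness = code coverage

    def mutate(data: bytes) -> bytes:
        i = random.randrange(len(data))
        return data[:i] + bytes([random.randrange(256)]) + data[i + 1:]

    def crossover(a: bytes, b: bytes) -> bytes:
        cut = random.randrange(min(len(a), len(b)))
        return a[:cut] + b[cut:]

    def evolve(seeds, generations=100, pop_size=50):
        pop = list(seeds)
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: max(2, pop_size // 2)]
            pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                             for _ in range(pop_size - len(parents))]
        return pop[0]

    print(evolve([b"AAAAAA", b"PROTXX"]))     # converges toward valid headers
    ```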

  17. A Comparative Study of Wireless Sensor Networks and Their Routing Protocols

    Directory of Open Access Journals (Sweden)

    Subhajit Pal

    2010-11-01

    Full Text Available Recent developments in the area of micro-sensor devices have accelerated advances in the sensor networks field, leading to many new protocols specifically designed for wireless sensor networks (WSNs). Wireless sensor networks with hundreds to thousands of sensor nodes can gather information from an unattended location and transmit the gathered data to a particular user, depending on the application. These sensor nodes have some constraints due to their limited energy, storage capacity and computing power. Data are routed from one node to another using different routing protocols. There are a number of routing protocols for wireless sensor networks. In this review article, we discuss the architecture of wireless sensor networks. Further, we categorize the routing protocols according to some key factors and summarize their mode of operation. Finally, we provide a comparative study on these various protocols.

  18. A systematic and efficient method to compute multi-loop master integrals

    Science.gov (United States)

    Liu, Xiao; Ma, Yan-Qing; Wang, Chen-Yu

    2018-04-01

    We propose a novel method to compute multi-loop master integrals by constructing and numerically solving a system of ordinary differential equations, with almost trivial boundary conditions. It can therefore be systematically applied to problems with arbitrary kinematic configurations. Numerical tests show that our method can not only achieve results with high precision, but can also be much faster than the only existing systematic method, sector decomposition. As a by-product, we find a new strategy to compute scalar one-loop integrals without reducing them to master integrals.
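
    The strategy can be illustrated generically: master integrals satisfy a first-order linear ODE system in a kinematic parameter, so one integrates numerically from a point where the boundary values are (almost) trivial down to the physical point. The 2x2 system below is invented purely for illustration and is not a real Feynman-integral system.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(x, I):
        # Made-up coefficient matrix standing in for the true DE system.
        A = np.array([[-1.0 / x, 0.5],
                      [0.0, -2.0 / x]])
        return A @ I

    I_boundary = np.array([1.0, 1.0])    # "trivial" values at large x
    sol = solve_ivp(rhs, (100.0, 1.0), I_boundary, rtol=1e-10, atol=1e-12)
    print(sol.y[:, -1])                  # the integrals at the point x = 1
    ```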

  19. Computational methods in metabolic engineering for strain design.

    Science.gov (United States)

    Long, Matthew R; Ong, Wai Kit; Reed, Jennifer L

    2015-08-01

    Metabolic engineering uses genetic approaches to control microbial metabolism to produce desired compounds. Computational tools can identify new biological routes to chemicals and the changes needed in host metabolism to improve chemical production. Recent computational efforts have focused on exploring what compounds can be made biologically using native enzymes, heterologous enzymes, and/or enzymes with broad specificity. Additionally, computational methods have been developed to suggest different types of genetic modifications (e.g. gene deletion/addition or up/down regulation), as well as strategies meeting different criteria (e.g. high yield, high productivity, or substrate co-utilization). Strategies to improve runtime performance have also been developed, which allow more complex metabolic engineering strategies to be identified. Future incorporation of kinetic considerations will further improve strain design algorithms. Copyright © 2015 Elsevier Ltd. All rights reserved.
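
    A minimal flux-balance sketch shows the shape of the gene-deletion screening computation on an invented two-metabolite toy network; real strain-design tools work on genome-scale models with richer formulations (e.g. bilevel OptKnock-style programs).

    ```python
    import numpy as np
    from scipy.optimize import linprog

    #              uptake  A->P   A->waste  P export
    S = np.array([[1, -1, -1, 0],        # metabolite A balance
                  [0,  1,  0, -1]])      # metabolite P balance
    bounds = [(0, 10), (0, 100), (0, 100), (0, 100)]
    c = [0, 0, 0, -1]                    # linprog minimizes: maximize export

    def max_product(knockout=None):
        b = list(bounds)
        if knockout is not None:
            b[knockout] = (0, 0)         # gene deletion: force flux to zero
        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=b)
        return -res.fun

    print("wild type:", max_product())
    for rxn in range(4):
        print("knockout", rxn, "->", max_product(rxn))
    ```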

  20. Principles of the new quantum cryptography protocols building

    International Nuclear Information System (INIS)

    Kurochkin, V.; Kurochkin, Yu.

    2009-01-01

    The main aim of quantum cryptography protocols is maximal secrecy under the conditions of a real experiment. This work presents the results of building a new protocol by maximizing secrecy. Using several well-known approaches, this method has made it possible to achieve completely new results in quantum cryptography. The protocol's elaboration proceeds from upgrading the standard BB84 protocol to the construction of a completely new protocol with an arbitrarily large number of bases. The secrecy proofs of the elaborated protocol appear as a natural continuation of the protocol-building process. This approach reveals the possibility of reaching extremely high protocol parameters. It suits both the restrictions of contemporary technology and the requirement of a high bit rate while remaining absolutely secret.
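
    For orientation, the BB84 baseline that the construction starts from can be sketched in a few lines: random bits and preparation bases on Alice's side, random measurement bases on Bob's, and sifting of the rounds where the bases match. No eavesdropper, error correction, or privacy amplification is modeled here.

    ```python
    import random

    n = 20
    alice_bits  = [random.randint(0, 1) for _ in range(n)]
    alice_bases = [random.choice("+x") for _ in range(n)]   # rectilinear/diagonal
    bob_bases   = [random.choice("+x") for _ in range(n)]

    # Matching basis: Bob reads Alice's bit; otherwise his outcome is random.
    bob_bits = [b if ab == bb else random.randint(0, 1)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

    sifted = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)
              if ab == bb]
    print("sifted key:", sifted)
    ```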

  1. Development of computational methods of design by analysis for pressure vessel components

    International Nuclear Information System (INIS)

    Bao Shiyi; Zhou Yu; He Shuyan; Wu Honglin

    2005-01-01

    Stress classification is not only one of the key steps when a pressure vessel component is designed by analysis, but also a difficulty that has always puzzled engineers and designers. At present, several computational methods of design by analysis have been developed and applied for calculating and categorizing the stress field of pressure vessel components, such as Stress Equivalent Linearization, the Two-Step Approach, the Primary Structure method, the Elastic Compensation method, and the GLOSS R-Node method. Moreover, the ASME code also gives an inelastic method of design by analysis for limiting gross plastic deformation only. When pressure vessel components are designed by analysis, there are sometimes large differences between the results calculated with the different methods mentioned above; this is the main reason that limits wide application of the design-by-analysis approach. Recently, a new approach, presented in the proposal of a European standard, CEN's unfired pressure vessel standard EN 13445-3, tries to avoid the problems of stress classification by directly analyzing the various failure mechanisms of the pressure vessel structure based on elastic-plastic theory. In this paper, some of the stress classification methods mentioned above are described briefly, and the computational methods cited in the European pressure vessel standard, such as the Deviatoric Map and nonlinear analysis methods (plastic analysis and limit analysis), are outlined. Furthermore, the characteristics of the computational methods of design by analysis are summarized to aid in selecting the proper computational method when designing a pressure vessel component by analysis. (authors)
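
    Of the methods listed, Stress Equivalent Linearization is the most mechanical to state: a through-thickness stress distribution is decomposed into membrane and bending components, with the remainder treated as peak stress. A short sketch, with an arbitrary example stress profile:

    ```python
    import numpy as np

    t = 0.02                                # wall thickness [m]
    x = np.linspace(-t / 2, t / 2, 101)     # through-thickness coordinate
    sigma = (80e6 + 120e6 * (x / t)
             + 30e6 * np.cos(np.pi * x / t))          # example profile [Pa]

    sigma_m = np.trapz(sigma, x) / t                  # membrane component
    sigma_b = 6.0 / t**2 * np.trapz(sigma * x, x)     # bending (at surface)
    peak = sigma - (sigma_m + sigma_b * (2 * x / t))  # remaining peak stress

    print(f"membrane {sigma_m/1e6:.1f} MPa, bending {sigma_b/1e6:.1f} MPa")
    ```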

  2. Recent Advances in Computational Methods for Nuclear Magnetic Resonance Data Processing

    KAUST Repository

    Gao, Xin

    2013-01-01

    …research attention from specialists in bioinformatics and computational biology. In this paper, we review recent advances in computational methods for NMR protein structure determination. We summarize the advantages of and bottlenecks in the existing…

  3. Multiscale methods in computational fluid and solid mechanics

    NARCIS (Netherlands)

    Borst, de R.; Hulshoff, S.J.; Lenz, S.; Munts, E.A.; Brummelen, van E.H.; Wall, W.; Wesseling, P.; Onate, E.; Periaux, J.

    2006-01-01

    First, an attempt is made towards gaining a more systematic understanding of recent progress in multiscale modelling in computational solid and fluid mechanics. Subsequently, the discussion is focused on variational multiscale methods for the compressible and incompressible Navier-Stokes equations…

  4. Performance Analysis of Secure and Private Billing Protocols for Smart Metering

    Directory of Open Access Journals (Sweden)

    Tom Eccles

    2017-11-01

    Full Text Available Traditional utility metering is to be replaced by smart metering. Smart metering enables fine-grained utility consumption measurements. These fine-grained measurements raise privacy concerns due to the lifestyle information which can be inferred from the precise times at which utilities were consumed. This paper outlines and compares two privacy-respecting time-of-use billing protocols for smart metering and investigates their performance on a variety of hardware. These protocols protect the privacy of customers by never transmitting the fine-grained utility readings outside of the customer’s home network. One protocol places the complexity on the trusted smart meter hardware, while the other uses homomorphic commitments to offload computation to a third device. Both protocols are designed to operate on top of the existing cryptographic secure channel protocols in place on smart meters. Proof-of-concept software implementations of these protocols have been written, and their suitability for real-world application on low-performance smart meter hardware is discussed. These protocols may also have application to other privacy-conscious aggregation systems, such as electronic voting.
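
    The homomorphic-commitment variant rests on the fact that Pedersen commitments multiply: the product of per-period commitments, each raised to the public tariff, commits to the time-of-use bill. A toy sketch follows; the prime, generators, and randomness below are illustrative only and nowhere near deployment-grade.

    ```python
    import random

    p = 2**127 - 1        # toy prime modulus (NOT a vetted group)
    g, h = 3, 7           # toy generators (discrete-log relation assumed unknown)

    def commit(m, r):
        return pow(g, m, p) * pow(h, r, p) % p

    readings = [5, 3, 8, 2]                    # kWh per period (private)
    tariffs  = [1, 1, 3, 3]                    # price per kWh (public)
    rands    = [random.randrange(p) for _ in readings]
    commitments = [commit(m, r) for m, r in zip(readings, rands)]  # published

    # Customer computes the bill and the matching blinding value...
    bill = sum(t * m for t, m in zip(tariffs, readings))
    r_total = sum(t * r for t, r in zip(tariffs, rands))

    # ...and the supplier checks it against the public commitments alone.
    lhs = 1
    for cmt, t in zip(commitments, tariffs):
        lhs = lhs * pow(cmt, t, p) % p
    assert lhs == commit(bill, r_total)
    print("verified bill:", bill)
    ```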

  5. Immunochemical protocols

    National Research Council Canada - National Science Library

    Pound, John D

    1998-01-01

    ... easy and important refinements often are not published. This much anticipated 2nd edition of Immunochemical Protocols therefore aims to provide a user-friendly, up-to-date handbook of reliable techniques selected to suit the needs of molecular biologists. It covers the full breadth of the relevant established immunochemical methods, from protein blotting and immunoassays...

  6. A fast computing method to distinguish the hyperbolic trajectory of a non-autonomous system

    Science.gov (United States)

    Jia, Meng; Fan, Yang-Yu; Tian, Wei-Jian

    2011-03-01

    Attempting to find a fast method of computing the DHT (distinguished hyperbolic trajectory), this study first proves that the errors of the stable DHT can be ignored in the normal direction as the trajectories are extended. This conclusion means that the stable flow with perturbation will approach the real trajectory as it extends over time. Based on this theory, and combined with the improved DHT computing method, this paper reports a new fast DHT computing method, which increases the DHT computing speed without decreasing its accuracy. Project supported by the National Natural Science Foundation of China (Grant No. 60872159).

  7. A fast computing method to distinguish the hyperbolic trajectory of a non-autonomous system

    International Nuclear Information System (INIS)

    Jia Meng; Fan Yang-Yu; Tian Wei-Jian

    2011-01-01

    Attempting to find a fast method of computing the DHT (distinguished hyperbolic trajectory), this study first proves that the errors of the stable DHT can be ignored in the normal direction as the trajectories are extended. This conclusion means that the stable flow with perturbation will approach the real trajectory as it extends over time. Based on this theory, and combined with the improved DHT computing method, this paper reports a new fast DHT computing method, which increases the DHT computing speed without decreasing its accuracy.

  8. Computer-generated holograms by multiple wavefront recording plane method with occlusion culling.

    Science.gov (United States)

    Symeonidou, Athanasia; Blinder, David; Munteanu, Adrian; Schelkens, Peter

    2015-08-24

    We propose a novel fast method for full-parallax computer-generated holograms with occlusion processing, suitable for volumetric data such as point clouds. A novel light-wave propagation strategy relying on the sequential use of the wavefront recording plane method is proposed, which employs look-up tables in order to reduce the computational complexity of calculating the fields. A novel technique for occlusion culling with little additional computational cost is also introduced. Additionally, the method applies a Gaussian distribution to the individual points in order to improve visual quality. Performance tests show that, for a full-parallax high-definition CGH, a speedup factor of more than 2,500 compared to the ray-tracing method can be achieved without hardware acceleration.
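
    The look-up-table idea can be sketched compactly: precompute the complex field of a point source once per depth layer, then accumulate shifted copies on the wavefront recording plane instead of evaluating the propagation kernel per point. The sketch below omits occlusion culling, the Gaussian point weighting, and the final propagation to the hologram plane; all parameters are arbitrary.

    ```python
    import numpy as np

    wl = 633e-9                    # wavelength [m]
    pitch = 8e-6                   # pixel pitch [m]
    N = 512                        # WRP resolution
    y, x = np.mgrid[-N//2:N//2, -N//2:N//2] * pitch

    def point_field(z):
        r = np.sqrt(x**2 + y**2 + z**2)
        return np.exp(1j * 2 * np.pi / wl * r) / r   # spherical wavelet

    depths = [2e-3, 3e-3, 4e-3]
    lut = {z: point_field(z) for z in depths}        # one table per depth

    points = [(40, -15, 2e-3), (0, 0, 3e-3), (-60, 25, 4e-3)]  # (px, py, z)
    wrp = np.zeros((N, N), complex)
    for px, py, z in points:
        wrp += np.roll(lut[z], (py, px), axis=(0, 1))  # shifted LUT copy
    ```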

  9. Lattice Boltzmann method fundamentals and engineering applications with computer codes

    CERN Document Server

    Mohamad, A A

    2014-01-01

    Introducing the lattice Boltzmann method in a readable manner, this book provides detailed examples with complete computer codes. It avoids the most complicated mathematics and physics without sacrificing the basic fundamentals of the method.
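
    In the spirit of the book's worked examples, a bare-bones D2Q9 BGK step (collision plus periodic streaming) fits in a screenful; boundary handling and forcing, which any real application needs, are omitted, and all parameters are arbitrary.

    ```python
    import numpy as np

    nx, ny, tau = 64, 64, 0.8
    c = np.array([(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
                  (1, 1), (-1, 1), (-1, -1), (1, -1)])        # D2Q9 velocities
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)                  # lattice weights

    f = np.ones((9, nx, ny)) * w[:, None, None]               # fluid at rest

    def equilibrium(rho, ux, uy):
        cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
        return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2
                                         - 1.5*(ux**2 + uy**2))

    for step in range(100):
        rho = f.sum(axis=0)                                   # moments
        ux = (c[:, 0, None, None] * f).sum(axis=0) / rho
        uy = (c[:, 1, None, None] * f).sum(axis=0) / rho
        f += -(f - equilibrium(rho, ux, uy)) / tau            # BGK collision
        for i, (cx, cy) in enumerate(c):                      # periodic streaming
            f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    ```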

  10. Health care access for rural youth on equal terms? A mixed methods study protocol in northern Sweden.

    Science.gov (United States)

    Goicolea, Isabel; Carson, Dean; San Sebastian, Miguel; Christianson, Monica; Wiklund, Maria; Hurtig, Anna-Karin

    2018-01-11

    The purpose of this paper is to propose a protocol for researching the impact of rural youth health service strategies on health care access. There has been no published comprehensive assessment of the effectiveness of youth health strategies in rural areas, and there is no clearly articulated model of how such assessments might be conducted. The protocol described here aims to gather information to: i) assess rural youth access to health care according to their needs, ii) identify and understand the strategies developed in rural areas to promote youth access to health care, and iii) propose actions for further improvement. The protocol is described with particular reference to research being undertaken in the four northernmost counties of Sweden, which contain a widely dispersed and diverse youth population. The protocol applies qualitative and quantitative methodologies sequentially in four phases. First, youth access to health care is mapped according to health care needs, including assessments of horizontal equity (equal use of health care for equivalent health needs) and vertical equity (people with greater health needs should receive more health care than those with lesser needs). Second, a multiple case study design investigates strategies developed across the region (youth clinics, internet applications, public health programs) to improve youth access to health care. Third, qualitative comparative analysis of the 24 rural municipalities in the region identifies the combination of conditions that best leads to high youth access to health care. Fourth, a concept mapping study involving rural stakeholders, care providers and youth yields recommended actions to improve rural youth access to health care. The implementation of this research protocol will contribute to (1) generating knowledge that could help strengthen rural youth access to health care and (2) advancing the application of mixed methods to explore access to health care.

  11. Computational Protocols for Prediction of Solute NMR Relative Chemical Shifts. A Case Study of L-Tryptophan in Aqueous Solution

    DEFF Research Database (Denmark)

    Eriksen, Janus J.; Olsen, Jógvan Magnus H.; Aidas, Kestutis

    2011-01-01

    In this study, we have applied two different spanning protocols for obtaining the molecular conformations of L-tryptophan in aqueous solution, namely a molecular dynamics simulation and a molecular mechanics conformational search with subsequent geometry re-optimization of the stable conformers using a quantum mechanically based method. These spanning protocols represent standard ways of obtaining a set of conformations on which NMR calculations may be performed. The results stemming from the solute–solvent configurations extracted from the MD simulation at 300 K are found to be inferior to the results stemming from the conformations extracted from the MM conformational search, in terms of replicating an experimental reference as well as in achieving the correct sequence of the NMR relative chemical shifts of L-tryptophan in aqueous solution. We find this to be due to missing conformations…

  12. Optimised resource construction for verifiable quantum computation

    International Nuclear Information System (INIS)

    Kashefi, Elham; Wallden, Petros

    2017-01-01

    Recent developments have brought the possibility of achieving scalable quantum networks and quantum devices closer. From the computational point of view these emerging technologies become relevant when they are no longer classically simulatable. Hence a pressing challenge is the construction of practical methods to verify the correctness of the outcome produced by universal or non-universal quantum devices. A promising approach that has been extensively explored is the scheme of verification via encryption through blind quantum computation. We present here a new construction that simplifies the required resources for any such verifiable protocol. We obtain an overhead that is linear in the size of the input (computation), while the security parameter remains independent of the size of the computation and can be made exponentially small (with a small extra cost). Furthermore our construction is generic and could be applied to any universal or non-universal scheme with a given underlying graph. (paper)

  13. Fluid-Induced Vibration Analysis for Reactor Internals Using Computational FSI Method

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Jong Sung; Yi, Kun Woo; Sung, Ki Kwang; Im, In Young; Choi, Taek Sang [KEPCO E and C, Daejeon (Korea, Republic of)

    2013-10-15

    Reactor coolant flow makes the Reactor Vessel Internals (RVI) vibrate and may affect their structural integrity. U.S. NRC Regulatory Guide 1.20 requires the Comprehensive Vibration Assessment Program (CVAP) to verify the structural integrity of the RVI against Fluid-Induced Vibration (FIV). This paper introduces an FIV analysis method which calculates the response of the RVI to both deterministic and random loads at once and which uses a more realistic pressure distribution obtained with the computational Fluid-Structure Interaction (FSI) method. The hydraulic forces on the RVI of OPR1000 and APR1400 were computed from hydraulic formulas and from the CVAP measurements in Palo Verde Unit 1 and Yonggwang Unit 4 for the structural vibration analyses. These forces were divided into deterministic and random turbulence loads, applied to the finite element model as the excitation forces of separate structural analyses, and the responses to them were combined into the resultant stresses. This is a simple and integrative way to obtain the structural dynamic responses of reactor internals to various flow-induced loads. Because the analysis in this paper omitted the bypass flow region and the Inner Barrel Assembly (IBA) due to limited computer resources, an effective way to consider all regions of the reactor vessel in future FIV analyses remains to be found.

  14. Positron emission tomography/computed tomography--imaging protocols, artifacts, and pitfalls.

    Science.gov (United States)

    Bockisch, Andreas; Beyer, Thomas; Antoch, Gerald; Freudenberg, Lutz S; Kühl, Hilmar; Debatin, Jörg F; Müller, Stefan P

    2004-01-01

    There has been a longstanding interest in fused images of anatomical information, such as that provided by computed tomography (CT) or magnetic resonance imaging (MRI) systems, with biological information obtainable by positron emission tomography (PET). The near-simultaneous data acquisition in a fixed combination of a PET and a CT scanner in a combined PET/CT imaging system minimizes spatial and temporal mismatches between the modalities by eliminating the need to move the patient between exams. In addition, by using the fast CT scan for PET attenuation correction, the duration of the examination is significantly reduced compared to standalone PET imaging with standard rod transmission sources. The main source of artifacts is the use of the CT data for scatter and attenuation correction of the PET images. Today's CT reconstruction algorithms cannot properly account for the presence of metal implants, such as dental fillings or prostheses, resulting in streak artifacts which are propagated into the PET image by the attenuation correction. The transformation of attenuation coefficients at X-ray energies to those at 511 keV works well for soft tissues, bone, and air, but is insufficient for dense CT contrast agents such as iodine or barium. Finally, mismatches, for example due to uncoordinated respiration, result in incorrect attenuation-corrected PET images. These artifacts can, however, be minimized or avoided prospectively by careful acquisition protocol design. When in doubt, the uncorrected images almost always allow discrimination between true and artificial findings. PET/CT has to be integrated into the diagnostic workflow to harvest the full potential of the new modality. In particular, the diagnostic power of both the CT and the PET within the combination must not be underestimated. By combining multiple diagnostic studies within a single examination, significant logistic advantages can be expected if the combined PET…
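
    The energy transformation mentioned above is commonly handled with a bilinear mapping from CT numbers to 511 keV attenuation coefficients: one slope for air-water mixtures and a flatter one above water, since scaling bone by the CT-energy slope would overestimate its attenuation at 511 keV. A sketch with illustrative, not scanner-specific, constants:

    ```python
    import numpy as np

    MU_WATER = 0.096   # 1/cm, water at 511 keV

    def hu_to_mu_511(hu):
        hu = np.asarray(hu, dtype=float)
        soft = MU_WATER * (1.0 + hu / 1000.0)         # air .. water mixtures
        bone = MU_WATER * (1.0 + 0.5 * hu / 1000.0)   # flatter slope above water
        return np.clip(np.where(hu <= 0.0, soft, bone), 0.0, None)

    print(hu_to_mu_511([-1000, 0, 1000]))   # air ~0, water ~0.096, bone higher
    ```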

  15. Performance of particle in cell methods on highly concurrent computational architectures

    International Nuclear Information System (INIS)

    Adams, M.F.; Ethier, S.; Wichmann, N.

    2009-01-01

    Particle-in-cell (PIC) methods are effective in computing the Vlasov-Poisson system of equations used in simulations of magnetic fusion plasmas. PIC methods use grid-based computations, for solving Poisson's equation or more generally Maxwell's equations, as well as Monte-Carlo-type methods to sample the Vlasov equation. The presence of two types of discretization, deterministic field solves and Monte-Carlo methods for the Vlasov equation, poses challenges in understanding and optimizing performance on today's large-scale computers, which require high levels of concurrency. These challenges arise from the need to optimize two very different types of processes and the interactions between them. Modern cache-based high-end computers have very deep memory hierarchies and high degrees of concurrency which must be utilized effectively to achieve good performance. The effective use of these machines requires maximizing concurrency by eliminating serial or redundant work and minimizing global communication. A related issue is minimizing the memory traffic between levels of the memory hierarchy, because performance is often limited by the bandwidths and latencies of the memory system. This paper discusses some of the performance issues, particularly in regard to parallelism, of PIC methods. The gyrokinetic toroidal code (GTC) is used for these studies, and a new radial grid decomposition is presented and evaluated. Scaling of the code is demonstrated on ITER-sized plasmas with up to 16K Cray XT3/4 cores.
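
    The two discretization types are easiest to see side by side in a toy 1D electrostatic PIC loop: a deterministic grid field solve (Poisson via FFT) and particles sampling the Vlasov equation. The sketch below uses normalized units and arbitrary demo parameters; production codes like GTC are gyrokinetic, three-dimensional, and heavily optimized.

    ```python
    import numpy as np

    L, ng, npart, dt = 2 * np.pi, 64, 10000, 0.05
    dx = L / ng
    rng = np.random.default_rng(0)
    xp = rng.uniform(0, L, npart)                    # particle positions
    vp = rng.normal(0, 1, npart) + 0.1 * np.sin(xp)  # perturbed velocities

    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    k[0] = 1.0                                       # dodge the zero mode

    for step in range(200):
        # deposit charge on the grid (nearest-grid-point, for brevity)
        cells = (xp / dx).astype(int) % ng
        rho = np.bincount(cells, minlength=ng) * ng / npart - 1.0
        # deterministic field solve: E_k = i*rho_k/k (neutralizing background)
        E = np.real(np.fft.ifft(1j * np.fft.fft(rho) / k))
        # gather and push the Monte-Carlo particles (electrons, charge -1)
        vp -= E[cells] * dt
        xp = (xp + vp * dt) % L
    ```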

  16. Performance of particle in cell methods on highly concurrent computational architectures

    International Nuclear Information System (INIS)

    Adams, M F; Ethier, S; Wichmann, N

    2007-01-01

    Particle-in-cell (PIC) methods are effective in computing the Vlasov-Poisson system of equations used in simulations of magnetic fusion plasmas. PIC methods use grid-based computations, for solving Poisson's equation or more generally Maxwell's equations, as well as Monte-Carlo-type methods to sample the Vlasov equation. The presence of two types of discretization, deterministic field solves and Monte-Carlo methods for the Vlasov equation, poses challenges in understanding and optimizing performance on today's large-scale computers, which require high levels of concurrency. These challenges arise from the need to optimize two very different types of processes and the interactions between them. Modern cache-based high-end computers have very deep memory hierarchies and high degrees of concurrency which must be utilized effectively to achieve good performance. The effective use of these machines requires maximizing concurrency by eliminating serial or redundant work and minimizing global communication. A related issue is minimizing the memory traffic between levels of the memory hierarchy, because performance is often limited by the bandwidths and latencies of the memory system. This paper discusses some of the performance issues, particularly in regard to parallelism, of PIC methods. The gyrokinetic toroidal code (GTC) is used for these studies, and a new radial grid decomposition is presented and evaluated. Scaling of the code is demonstrated on ITER-sized plasmas with up to 16K Cray XT3/4 cores.

  17. Short-term electric load forecasting using computational intelligence methods

    OpenAIRE

    Jurado, Sergio; Peralta, J.; Nebot, Àngela; Mugica, Francisco; Cortez, Paulo

    2013-01-01

    Accurate time series forecasting is a key issue to support individual and organizational decision making. In this paper, we introduce several methods for short-term electric load forecasting. All the presented methods stem from computational intelligence techniques: Random Forest, Nonlinear Autoregressive Neural Networks, Evolutionary Support Vector Machines and Fuzzy Inductive Reasoning. The performance of the suggested methods is experimentally justified with several experiments carried out...
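
    As a flavor of the first approach, a Random Forest forecaster over lagged load values takes only a few lines; the synthetic daily-cycle series and the 24-hour lag window below are illustrative choices, not the paper's setup.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    t = np.arange(24 * 60)                          # 60 days of hourly load
    load = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, t.size)

    lags = 24
    X = np.column_stack([load[i:i - lags] for i in range(lags)])
    y = load[lags:]                                 # next-hour target

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[:-24], y[:-24])                     # hold out the last day
    mae = np.abs(model.predict(X[-24:]) - y[-24:]).mean()
    print(f"one-day-ahead MAE: {mae:.2f}")
    ```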

  18. Computational method for free surface hydrodynamics

    International Nuclear Information System (INIS)

    Hirt, C.W.; Nichols, B.D.

    1980-01-01

    There are numerous flow phenomena in pressure vessel and piping systems that involve the dynamics of free fluid surfaces. For example, fluid interfaces must be considered during the draining or filling of tanks, in the formation and collapse of vapor bubbles, and in seismically shaken vessels that are partially filled. To aid in the analysis of these types of flow phenomena, a new technique has been developed for the computation of complicated free-surface motions. This technique is based on the concept of a local average volume of fluid (VOF) and is embodied in a computer program for two-dimensional, transient fluid flow called SOLA-VOF. The basic approach used in the VOF technique is briefly described, and compared to other free-surface methods. Specific capabilities of the SOLA-VOF program are illustrated by generic examples of bubble growth and collapse, flows of immiscible fluid mixtures, and the confinement of spilled liquids
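
    The VOF bookkeeping can be shown in one dimension: each cell carries a fluid fraction F in [0, 1] that is advected conservatively with donor-cell (upwind) fluxes. A real VOF code such as SOLA-VOF additionally reconstructs the interface and couples F to the momentum solver; this sketch only moves a slug of fluid to the right.

    ```python
    import numpy as np

    n, dx, u, dt = 100, 1.0, 0.5, 0.8      # CFL = u*dt/dx = 0.4
    F = np.zeros(n)
    F[10:30] = 1.0                         # initial slug of fluid

    for step in range(50):
        flux = u * dt / dx * F             # donor-cell fluxes out of each cell
        F = F - flux + np.roll(flux, 1)    # conservative update, periodic ends
    print("total fluid volume:", F.sum())  # stays ~20 cells
    ```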

  19. Genomics protocols [Methods in molecular biology, v. 175

    National Research Council Canada - National Science Library

    Starkey, Michael P; Elaswarapu, Ramnath

    2001-01-01

    ... to the larger community of researchers who have recognized the potential of genomics research and may themselves be beginning to explore the technologies involved. Some of the techniques described in Genomics Protocols are clearly not restricted to the genomics field; indeed, a prerequisite for many procedures in this discipline is an extremely high throughput, beyond the scope of the average investigator. What we have endeavored to achieve here, however, is to compile a collection of...

  20. A systematic and efficient method to compute multi-loop master integrals

    Directory of Open Access Journals (Sweden)

    Xiao Liu

    2018-04-01

    Full Text Available We propose a novel method to compute multi-loop master integrals by constructing and numerically solving a system of ordinary differential equations, with almost trivial boundary conditions. It can therefore be systematically applied to problems with arbitrary kinematic configurations. Numerical tests show that our method can not only achieve results with high precision, but can also be much faster than the only existing systematic method, sector decomposition. As a by-product, we find a new strategy to compute scalar one-loop integrals without reducing them to master integrals.