WorldWideScience

Sample records for automated error-tolerant macromolecular

  1. The role of macromolecular stability in desiccation tolerance

    NARCIS (Netherlands)

    Wolkers, W.F.

    1998-01-01

    The work presented in this thesis concerns a study on the molecular interactions that play a role in the macromolecular stability of desiccation-tolerant higher plant organs. Fourier transform infrared microspectroscopy was used as the main experimental technique to assess macromolecular

  2. Automated data collection for macromolecular crystallography.

    Science.gov (United States)

    Winter, Graeme; McAuley, Katherine E

    2011-09-01

    An overview, together with some practical advice, is presented of the current status of the automation of macromolecular crystallography (MX) data collection, with a focus on MX beamlines at Diamond Light Source, UK. Copyright © 2011 Elsevier Inc. All rights reserved.

  3. Overview of error-tolerant cockpit research

    Science.gov (United States)

    Abbott, Kathy

    1990-01-01

    The objectives of research in intelligent cockpit aids and intelligent error-tolerant systems are stated. In intelligent cockpit aids research, the objective is to provide increased aid and support to the flight crew of civil transport aircraft through the use of artificial intelligence techniques combined with traditional automation. In intelligent error-tolerant systems, the objective is to develop and evaluate cockpit systems that provide flight crews with safe and effective ways and means to manage aircraft systems, plan and replan flights, and respond to contingencies. A subsystems fault management functional diagram is given. All information is in viewgraph form.

  4. New Paradigm for Macromolecular Crystallography Experiments at SSRL: Automated Crystal Screening And Remote Data Collection

    International Nuclear Information System (INIS)

    Soltis, S.M.; Cohen, A.E.; Deacon, A.; Eriksson, T.; Gonzalez, A.; McPhillips, S.; Chui, H.; Dunten, P.; Hollenbeck, M.; Mathews, I.; Miller, M.; Moorhead, P.; Phizackerley, R.P.; Smith, C.; Song, J.; Bedem, H. van den; Ellis, P.; Kuhn, P.; McPhillips, T.; Sauter, N.; Sharp, K.

    2009-01-01

    Complete automation of the macromolecular crystallography experiment has been achieved at Stanford Synchrotron Radiation Lightsource (SSRL) through the combination of robust mechanized experimental hardware and a flexible control system with an intuitive user interface. These highly reliable systems have enabled crystallography experiments to be carried out from the researchers' home institutions and other remote locations while retaining complete control over even the most challenging systems. A breakthrough component of the system, the Stanford Auto-Mounter (SAM), has enabled the efficient mounting of cryocooled samples without human intervention. Taking advantage of this automation, researchers have successfully screened more than 200 000 samples to select the crystals with the best diffraction quality for data collection as well as to determine optimal crystallization and cryocooling conditions. These systems, which have been deployed on all SSRL macromolecular crystallography beamlines and several beamlines worldwide, are used by more than 80 research groups in remote locations, establishing a new paradigm for macromolecular crystallography experimentation.

  5. Understanding human management of automation errors

    Science.gov (United States)

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  6. Fully automated data collection and processing system on macromolecular crystallography beamlines at the PF

    International Nuclear Information System (INIS)

    Yamada, Yusuke; Hiraki, Masahiko; Matsugaki, Naohiro; Chavas, Leonard M.G.; Igarashi, Noriyuki; Wakatsuki, Soichi

    2012-01-01

    A fully automated data collection and processing system has been developed on the macromolecular crystallography beamlines at the Photon Factory. In this system, sample exchange, centering and data collection are performed sequentially for all samples stored in the sample exchange system at a beamline, without any manual operations. Data processing of the collected data sets is also performed automatically. The results are stored in the database system, and users can monitor the progress and results of the automated experiment via a Web browser. (author)
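
    As a rough illustration of the kind of unattended loop described above, the sketch below (Python) strings together the sample exchange, centering, collection, processing and database steps in sequence. All function and class names are hypothetical placeholders standing in for the actual Photon Factory control software, and the stubs only simulate the real hardware actions.

        # Hypothetical sketch of a fully automated collection/processing loop.
        # Every function below is a placeholder stub, not the real beamline API.

        def exchange_sample(sample_id):
            print(f"mounting {sample_id}")              # robot mounts the next pin

        def center_crystal():
            return (0.0, 0.0, 0.0)                      # loop/diffraction-based centering

        def collect_dataset(position):
            return {"frames": 360, "position": position}

        def process_dataset(dataset):
            return {"resolution_A": 2.0, "completeness": 0.99}

        class Database:
            def save(self, **record):                   # records monitored by users via a Web browser
                print("stored:", record)

        def run_automated_experiment(sample_ids, db):
            for sample_id in sample_ids:                # every pin in the sample exchange system
                exchange_sample(sample_id)
                position = center_crystal()
                dataset = collect_dataset(position)
                result = process_dataset(dataset)
                db.save(sample_id=sample_id, status="done", result=result)

        run_automated_experiment(["PF-001", "PF-002"], Database())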

  7. AutoDrug: fully automated macromolecular crystallography workflows for fragment-based drug discovery

    International Nuclear Information System (INIS)

    Tsai, Yingssu; McPhillips, Scott E.; González, Ana; McPhillips, Timothy M.; Zinn, Daniel; Cohen, Aina E.; Feese, Michael D.; Bushnell, David; Tiefenbrunn, Theresa; Stout, C. David; Ludaescher, Bertram; Hedman, Britt; Hodgson, Keith O.; Soltis, S. Michael

    2013-01-01

    New software has been developed for automating the experimental and data-processing stages of fragment-based drug discovery at a macromolecular crystallography beamline. A new workflow-automation framework orchestrates beamline-control and data-analysis software while organizing results from multiple samples. AutoDrug is software based upon the scientific workflow paradigm that integrates the Stanford Synchrotron Radiation Lightsource macromolecular crystallography beamlines and third-party processing software to automate the crystallography steps of the fragment-based drug-discovery process. AutoDrug screens a cassette of fragment-soaked crystals, selects crystals for data collection based on screening results and user-specified criteria and determines optimal data-collection strategies. It then collects and processes diffraction data, performs molecular replacement using provided models and detects electron density that is likely to arise from bound fragments. All processes are fully automated, i.e. are performed without user interaction or supervision. Samples can be screened in groups corresponding to particular proteins, crystal forms and/or soaking conditions. A single AutoDrug run is only limited by the capacity of the sample-storage dewar at the beamline: currently 288 samples. AutoDrug was developed in conjunction with RestFlow, a new scientific workflow-automation framework. RestFlow simplifies the design of AutoDrug by managing the flow of data and the organization of results and by orchestrating the execution of computational pipeline steps. It also simplifies the execution and interaction of third-party programs and the beamline-control system. Modeling AutoDrug as a scientific workflow enables multiple variants that meet the requirements of different user groups to be developed and supported. A workflow tailored to mimic the crystallography stages comprising the drug-discovery pipeline of CoCrystal Discovery Inc. has been deployed and successfully

  8. The relationship between automation complexity and operator error

    International Nuclear Information System (INIS)

    Ogle, Russell A.; Morrison, Delmar 'Trey'; Carpenter, Andrew R.

    2008-01-01

    One of the objectives of process automation is to improve the safety of plant operations. Manual operation, it is often argued, provides too many opportunities for operator error. By this argument, process automation should decrease the risk of accidents caused by operator error. However, some accident theorists have argued that while automation may eliminate some types of operator error, it may create new varieties of error. In this paper we present six case studies of explosions involving operator error in an automated process facility. Taken together, these accidents resulted in six fatalities, 30 injuries and hundreds of millions of dollars in property damage. The case studies are divided into two categories: low and high automation complexity (three case studies each). The nature of the operator error was dependent on the level of automation complexity. For each case study, we also consider the contribution of the existing engineering controls such as safety instrumented systems (SIS) or safety critical devices (SCD) and explore why they were insufficient to prevent, or mitigate, the severity of the explosion

  9. Automated Axis Alignment for a Nanomanipulator inside SEM and Its Error Optimization

    Directory of Open Access Journals (Sweden)

    Chao Zhou

    2017-01-01

    When probing nanostructures, positions and movements must be repeated frequently, and the tolerance for position error is stringent. Consistency between the axes of the manipulator and of the image is very significant, since visual servoing is the most important tool in automated manipulation. This paper proposes an automated axis-alignment method for a nanomanipulator inside the SEM that recognizes the position of the closed-loop-controlled end-effector, characterizes the relationship between these two axes, and then calculates the rotation matrix accordingly. The error of this method and its transfer function are also calculated to compare the iteration method and the averaging method. The method can accelerate the axis-alignment process and thus avoid electron-beam-induced deposition on the end tips. An experimental demonstration shows that it achieves a 0.1-degree precision in 90 seconds.
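
    The rotation between a manipulator axis and the SEM image frame can be estimated from recognized end-effector positions, as the abstract outlines. The Python sketch below is only a simplified illustration of that idea, using a least-squares line fit to the observed track; it is not the authors' implementation and the data are synthetic.

        import numpy as np

        def estimate_axis_rotation(image_points):
            """Angle (degrees) and 2-D rotation matrix between a manipulator axis
            and the image x-axis, fitted from end-effector positions recognized
            while the manipulator steps along that axis."""
            pts = np.asarray(image_points, dtype=float)
            centered = pts - pts.mean(axis=0)
            _, _, vt = np.linalg.svd(centered)          # principal direction of the track
            direction = vt[0]
            theta = np.arctan2(direction[1], direction[0])
            rotation = np.array([[np.cos(theta), -np.sin(theta)],
                                 [np.sin(theta),  np.cos(theta)]])
            return np.degrees(theta), rotation

        # Synthetic track: motion along an axis tilted about 5 degrees in the image.
        track = [(i * np.cos(np.radians(5.0)), i * np.sin(np.radians(5.0))) for i in range(10)]
        angle_deg, R = estimate_axis_rotation(track)
        print(angle_deg)    # about 5.0 (sign depends on the fitted direction)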

  10. Understanding reliance on automation: effects of error type, error distribution, age and experience

    Science.gov (United States)

    Sanchez, Julian; Rogers, Wendy A.; Fisk, Arthur D.; Rovira, Ericka

    2015-01-01

    An obstacle detection task supported by “imperfect” automation was used with the goal of understanding the effects of automation error types and age on automation reliance. Sixty younger and sixty older adults interacted with a multi-task simulation of an agricultural vehicle (i.e. a virtual harvesting combine). The simulator included an obstacle detection task and a fully manual tracking task. A micro-level analysis provided insight into the way reliance patterns change over time. The results indicated that there are distinct patterns of reliance that develop as a function of error type. A prevalence of automation false alarms led participants to under-rely on the automation during alarm states while over-relying on it during non-alarm states. Conversely, a prevalence of automation misses led participants to over-rely on automated alarms and under-rely on the automation during non-alarm states. Older adults adjusted their behavior according to the characteristics of the automation similarly to younger adults, although it took them longer to do so. The results of this study suggest that the relationship between automation reliability and reliance depends on the prevalence of specific errors and on the state of the system. Understanding the effects of automation detection criterion settings on human-automation interaction can help designers of automated systems make predictions about human behavior and system performance as a function of the characteristics of the automation. PMID:25642142

  11. Status and prospects of macromolecular crystallography

    Indian Academy of Sciences (India)

    technique that could be completely automated in most cases. ... major challenge in macromolecular crystallography today is ... tial characterization of crystals in the home source and make a ... opportunities for a generation of structural biolo-.

  12. Systematic investigation of SLC final focus tolerances to errors

    International Nuclear Information System (INIS)

    Napoly, O.

    1996-10-01

    In this paper we review the tolerances of the SLC final focus system. To calculate these tolerances we used the error analysis routine of the program FFADA, which has been written to aid the design and the analysis of final focus systems for the future linear colliders. This routine, completed by S. Fartoukh, systematically reviews the errors generated by the geometric 6-d Euclidean displacements of each magnet as well as by the field errors (normal and skew) up to the sextupolar order. It calculates their effects on the orbit and the transfer matrix at second order in the errors, thus including cross-talk between errors originating from two different magnets. It also translates these effects into tolerances derived from spot size growth and luminosity loss. We have run the routine for the following set of beam IP parameters: σ*_x = 2.1 μm; σ*_x' = 300 μrad; σ*_x = 1 mm; σ*_y = 0.55 μm; σ*_y' = 200 μrad; σ*_b = 2 × 10⁻³. The resulting errors and tolerances are displayed in a series of histograms which are reproduced in this paper. (author)

  13. The organizational context of error tolerant interface systems

    International Nuclear Information System (INIS)

    Sepanloo, K.; Meshkati, N.; Kozuh, M.

    1995-01-01

    Human error has been recognized as the main contributor to the occurrence of incidents in large technological systems such as nuclear power plants. Recent research has concluded that human errors are unavoidable side effects of the exploration of acceptable performance during adaptation to unknown changes in the environment. To assist operators in coping with unforeseen situations, innovative error tolerant interface systems have been proposed to provide operators with opportunities to make hypothetical tests without having to carry them out directly on the plant in potentially irreversible conditions. On the other hand, the degree of success of introducing any new system into a tightly-coupled complex socio-technological system is known to depend to a great extent upon the degree of harmony of that system with the organization's framework and attitudes. Error tolerant interface systems with features of simplicity, transparency, error detectability and recoverability provide a forgiving cognitive environment in which the effects of errors are observable and recoverable. The nature of these systems is likely to be more consistent with flexible and rather plain organizational structures, in which static and punitive concepts of human error are modified in favour of dynamic and adaptive approaches. In this paper the features of error tolerant interface systems are explained and their consistent organizational structures are explored. (author)

  14. The organizational context of error tolerant interface systems

    Energy Technology Data Exchange (ETDEWEB)

    Sepanloo, K [Nuclear Safety Department, Tehran (Iran, Islamic Republic of); Meshkati, N [Institute of Safety and Systems Management, Los Angeles (United States); Kozuh, M [Josef Stefan Institute, Ljubljana (Slovenia)

    1996-12-31

    Human error has been recognized as the main contributor to the occurrence of incidents in large technological systems such as nuclear power plants. Recent research has concluded that human errors are unavoidable side effects of the exploration of acceptable performance during adaptation to unknown changes in the environment. To assist operators in coping with unforeseen situations, innovative error tolerant interface systems have been proposed to provide operators with opportunities to make hypothetical tests without having to carry them out directly on the plant in potentially irreversible conditions. On the other hand, the degree of success of introducing any new system into a tightly-coupled complex socio-technological system is known to depend to a great extent upon the degree of harmony of that system with the organization's framework and attitudes. Error tolerant interface systems with features of simplicity, transparency, error detectability and recoverability provide a forgiving cognitive environment in which the effects of errors are observable and recoverable. The nature of these systems is likely to be more consistent with flexible and rather plain organizational structures, in which static and punitive concepts of human error are modified in favour of dynamic and adaptive approaches. In this paper the features of error tolerant interface systems are explained and their consistent organizational structures are explored. (author) 11 refs.

  15. Dispensing error rate after implementation of an automated pharmacy carousel system.

    Science.gov (United States)

    Oswald, Scott; Caldwell, Richard

    2007-07-01

    A study was conducted to determine filling and dispensing error rates before and after the implementation of an automated pharmacy carousel system (APCS). The study was conducted in a 613-bed acute and tertiary care university hospital. Before the implementation of the APCS, filling and dispensing error rates were recorded during October through November 2004 and January 2005. Postimplementation data were collected during May through June 2006. Errors were recorded in three areas of pharmacy operations: first-dose or missing medication fill, automated dispensing cabinet fill, and interdepartmental request fill. A filling error was defined as an error caught by a pharmacist during the verification step. A dispensing error was defined as an error caught by a pharmacist observer after verification by the pharmacist. Before implementation of the APCS, 422 first-dose or missing medication orders were observed between October 2004 and January 2005. Independent data collected in December 2005, approximately six weeks after the introduction of the APCS, found that filling and dispensing error rates had increased. The filling error rate for automated dispensing cabinets was associated with the largest decrease in errors. Filling and dispensing error rates had decreased by the postimplementation period. In terms of interdepartmental request fill, no dispensing errors were noted in 123 clinic orders dispensed before the implementation of the APCS. One dispensing error out of 85 clinic orders was identified after implementation of the APCS. The implementation of an APCS at a university hospital decreased medication filling errors related to automated cabinets only and did not affect other filling and dispensing errors.

  16. Automated drug dispensing system reduces medication errors in an intensive care setting.

    Science.gov (United States)

    Chapuis, Claire; Roustit, Matthieu; Bal, Gaëlle; Schwebel, Carole; Pansu, Pascal; David-Tchouda, Sandra; Foroni, Luc; Calop, Jean; Timsit, Jean-François; Allenet, Benoît; Bosson, Jean-Luc; Bedouch, Pierrick

    2010-12-01

    We aimed to assess the impact of an automated dispensing system on the incidence of medication errors related to picking, preparation, and administration of drugs in a medical intensive care unit. We also evaluated the clinical significance of such errors and user satisfaction. Preintervention and postintervention study involving a control and an intervention medical intensive care unit. Two medical intensive care units in the same department of a 2,000-bed university hospital. Adult medical intensive care patients. After a 2-month observation period, we implemented an automated dispensing system in one of the units (study unit) chosen randomly, with the other unit being the control. The overall error rate was expressed as a percentage of total opportunities for error. The severity of errors was classified according to National Coordinating Council for Medication Error Reporting and Prevention categories by an expert committee. User satisfaction was assessed through self-administered questionnaires completed by nurses. A total of 1,476 medications for 115 patients were observed. After automated dispensing system implementation, we observed a reduced percentage of total opportunities for error in the study unit compared with the control unit (13.5% and 18.6%, respectively), whereas before implementation the corresponding rates had been 20.4% and 13.5%. Analysis by error type showed a significant impact of the automated dispensing system in reducing preparation errors. Most errors caused no harm (National Coordinating Council for Medication Error Reporting and Prevention category C). The automated dispensing system did not reduce errors causing harm. Finally, the mean rating for working conditions improved from 1.0±0.8 to 2.5±0.8 on the four-point Likert scale. The implementation of an automated dispensing system reduced overall medication errors related to picking, preparation, and administration of drugs in the intensive care unit. Furthermore, most nurses favored the new drug dispensation organization.

  17. Current status and future prospects of an automated sample exchange system PAM for protein crystallography

    Science.gov (United States)

    Hiraki, M.; Yamada, Y.; Chavas, L. M. G.; Matsugaki, N.; Igarashi, N.; Wakatsuki, S.

    2013-03-01

    To achieve fully-automated and/or remote data collection in high-throughput X-ray experiments, the Structural Biology Research Centre at the Photon Factory (PF) has installed the PF automated mounting system (PAM) sample exchange robots at the PF macromolecular crystallography beamlines BL-1A, BL-5A, BL-17A, AR-NW12A and AR-NE3A. We are upgrading the experimental systems, including the PAM, for stable and efficient operation. To prevent human error in automated data collection, we installed a two-dimensional barcode reader for identification of the cassettes and sample pins. Because no liquid nitrogen pipeline is installed in the PF experimental hutch, users commonly add liquid nitrogen using a small Dewar. To address this issue, an automated liquid nitrogen filling system that links a 100-liter tank to the robot Dewar has been installed on the PF macromolecular beamline. Here we describe this new implementation, as well as future prospects.

  18. Magnetic field errors tolerances of Nuclotron booster

    Science.gov (United States)

    Butenko, Andrey; Kazinova, Olha; Kostromin, Sergey; Mikhaylov, Vladimir; Tuzikov, Alexey; Khodzhibagiyan, Hamlet

    2018-04-01

    Generation of the magnetic field in the units of the booster synchrotron for the NICA project is one of the most important conditions for obtaining the required parameters and high-quality accelerator operation. Studies of the linear and nonlinear dynamics of the 197Au31+ ion beam in the booster have been carried out with the MADX program. An analytical estimation of the magnetic field error tolerances and a numerical computation of the dynamic aperture of the booster DFO magnetic lattice are presented. The closed orbit distortion due to random errors of the magnetic fields and errors in the layout of the booster units was also evaluated.

  19. Automation of Commanding at NASA: Reducing Human Error in Space Flight

    Science.gov (United States)

    Dorn, Sarah J.

    2010-01-01

    Automation has been implemented in many different industries to improve efficiency and reduce human error. Reducing or eliminating the human interaction in tasks has been proven to increase productivity in manufacturing and lessen the risk of mistakes by humans in the airline industry. Human space flight requires the flight controllers to monitor multiple systems and react quickly when failures occur so NASA is interested in implementing techniques that can assist in these tasks. Using automation to control some of these responsibilities could reduce the number of errors the flight controllers encounter due to standard human error characteristics. This paper will investigate the possibility of reducing human error in the critical area of manned space flight at NASA.

  20. Aviation safety/automation program overview

    Science.gov (United States)

    Morello, Samuel A.

    1990-01-01

    The goal is to provide a technology base leading to improved safety of the national airspace system through the development and integration of human-centered automation technologies for aircraft crews and air traffic controllers. Information on the problems, specific objectives, human-automation interaction, intelligent error-tolerant systems, and air traffic control/cockpit integration is given in viewgraph form.

  1. Positional error in automated geocoding of residential addresses

    Directory of Open Access Journals (Sweden)

    Talbot Thomas O

    2003-12-01

    Background: Public health applications using geographic information system (GIS) technology are steadily increasing. Many of these rely on the ability to locate where people live with respect to areas of exposure from environmental contaminants. Automated geocoding is a method used to assign geographic coordinates to an individual based on their street address. This method often relies on street centerline files as a geographic reference. Such a process introduces positional error in the geocoded point. Our study evaluated the positional error caused during automated geocoding of residential addresses and how this error varies between population densities. We also evaluated an alternative method of geocoding using residential property parcel data. Results: Positional error was determined for 3,000 residential addresses using the distance between each geocoded point and its true location as determined with aerial imagery. Error was found to increase as population density decreased. In rural areas of an upstate New York study area, 95 percent of the addresses geocoded to within 2,872 m of their true location. Suburban areas revealed less error, where 95 percent of the addresses geocoded to within 421 m. Urban areas demonstrated the least error, where 95 percent of the addresses geocoded to within 152 m of their true location. As an alternative to using street centerline files for geocoding, we used residential property parcel points to locate the addresses. In the rural areas, 95 percent of the parcel points were within 195 m of the true location. In suburban areas, this distance was 39 m, while in urban areas 95 percent of the parcel points were within 21 m of the true location. Conclusion: Researchers need to determine if the level of error caused by a chosen method of geocoding may affect the results of their project. As an alternative method, property data can be used for geocoding addresses if the error caused by traditional methods is
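
    The figures above reduce to computing the distance between each geocoded point and its true location and then taking the 95th percentile of those distances. A minimal, self-contained Python sketch of that calculation, assuming coordinate pairs are given as latitude/longitude and not drawn from the study's actual code, is:

        import math

        def haversine_m(lat1, lon1, lat2, lon2):
            """Great-circle distance in metres between two latitude/longitude points."""
            r = 6371000.0
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dphi = math.radians(lat2 - lat1)
            dlam = math.radians(lon2 - lon1)
            a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
            return 2 * r * math.asin(math.sqrt(a))

        def positional_error_p95(geocoded, truth):
            """95th-percentile positional error for paired lists of (lat, lon) points."""
            errors = sorted(haversine_m(g[0], g[1], t[0], t[1])
                            for g, t in zip(geocoded, truth))
            k = max(0, math.ceil(0.95 * len(errors)) - 1)
            return errors[k]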

  2. The using of the control room automation against human errors

    International Nuclear Information System (INIS)

    Kautto, A.

    1993-01-01

    Control room automation developed very strongly during the 1980s at IVO (Imatran Voima Oy). This work expanded greatly with the building of the full-scope training simulator for the Loviisa plant. Important milestones have been, for example, the testing of the Critical Function Monitoring System, a concept developed by Combustion Eng. Inc., at the Loviisa training simulator in 1982; the replacement of the process and simulator computers at Loviisa in 1989 and 1990; and the introduction of computer-based procedures into operator training in 1993. With the development of automation and procedures it is possible to minimize the probability of human error. However, it is not possible to totally eliminate the risks caused by human errors. (orig.)

  3. Automated Structure Solution with the PHENIX Suite

    Energy Technology Data Exchange (ETDEWEB)

    Zwart, Peter H.; Zwart, Peter H.; Afonine, Pavel; Grosse-Kunstleve, Ralf W.; Hung, Li-Wei; Ioerger, Tom R.; McCoy, A.J.; McKee, Eric; Moriarty, Nigel; Read, Randy J.; Sacchettini, James C.; Sauter, Nicholas K.; Storoni, L.C.; Terwilliger, Tomas C.; Adams, Paul D.

    2008-06-09

    Significant time and effort are often required to solve and complete a macromolecular crystal structure. The development of automated computational methods for the analysis, solution and completion of crystallographic structures has the potential to produce minimally biased models in a short time without the need for manual intervention. The PHENIX software suite is a highly automated system for macromolecular structure determination that can rapidly arrive at an initial partial model of a structure without significant human intervention, given moderate resolution and good quality data. This achievement has been made possible by the development of new algorithms for structure determination, maximum-likelihood molecular replacement (PHASER), heavy-atom search (HySS), template and pattern-based automated model-building (RESOLVE, TEXTAL), automated macromolecular refinement (phenix.refine), and iterative model-building, density modification and refinement that can operate at moderate resolution (RESOLVE, AutoBuild). These algorithms are based on a highly integrated and comprehensive set of crystallographic libraries that have been built and made available to the community. The algorithms are tightly linked and made easily accessible to users through the PHENIX Wizards and the PHENIX GUI.

  4. Automated structure solution with the PHENIX suite

    Energy Technology Data Exchange (ETDEWEB)

    Terwilliger, Thomas C [Los Alamos National Laboratory]; Zwart, Peter H [LBNL]; Afonine, Pavel V [LBNL]; Grosse-Kunstleve, Ralf W [LBNL]

    2008-01-01

    Significant time and effort are often required to solve and complete a macromolecular crystal structure. The development of automated computational methods for the analysis, solution, and completion of crystallographic structures has the potential to produce minimally biased models in a short time without the need for manual intervention. The PHENIX software suite is a highly automated system for macromolecular structure determination that can rapidly arrive at an initial partial model of a structure without significant human intervention, given moderate resolution and good quality data. This achievement has been made possible by the development of new algorithms for structure determination, maximum-likelihood molecular replacement (PHASER), heavy-atom search (HySS), template- and pattern-based automated model-building (RESOLVE, TEXTAL), automated macromolecular refinement (phenix.refine), and iterative model-building, density modification and refinement that can operate at moderate resolution (RESOLVE, AutoBuild). These algorithms are based on a highly integrated and comprehensive set of crystallographic libraries that have been built and made available to the community. The algorithms are tightly linked and made easily accessible to users through the PHENIX Wizards and the PHENIX GUI.

  5. Quantum Error Correction and Fault Tolerant Quantum Computing

    CERN Document Server

    Gaitan, Frank

    2008-01-01

    It was once widely believed that quantum computation would never become a reality. However, the discovery of quantum error correction and the proof of the accuracy threshold theorem nearly ten years ago gave rise to extensive development and research aimed at creating a working, scalable quantum computer. Over a decade has passed since this monumental accomplishment, yet no book-length pedagogical presentation of this important theory exists. Quantum Error Correction and Fault Tolerant Quantum Computing offers the first full-length exposition on the realization of a theory once thought impo

  6. MolProbity: all-atom structure validation for macromolecular crystallography

    International Nuclear Information System (INIS)

    Chen, Vincent B.; Arendall, W. Bryan III; Headd, Jeffrey J.; Keedy, Daniel A.; Immormino, Robert M.; Kapral, Gary J.; Murray, Laura W.; Richardson, Jane S.; Richardson, David C.

    2010-01-01

    MolProbity structure validation will diagnose most local errors in macromolecular crystal structures and help to guide their correction. MolProbity is a structure-validation web service that provides broad-spectrum solidly based evaluation of model quality at both the global and local levels for both proteins and nucleic acids. It relies heavily on the power and sensitivity provided by optimized hydrogen placement and all-atom contact analysis, complemented by updated versions of covalent-geometry and torsion-angle criteria. Some of the local corrections can be performed automatically in MolProbity and all of the diagnostics are presented in chart and graphical forms that help guide manual rebuilding. X-ray crystallography provides a wealth of biologically important molecular data in the form of atomic three-dimensional structures of proteins, nucleic acids and increasingly large complexes in multiple forms and states. Advances in automation, in everything from crystallization to data collection to phasing to model building to refinement, have made solving a structure using crystallography easier than ever. However, despite these improvements, local errors that can affect biological interpretation are widespread at low resolution and even high-resolution structures nearly all contain at least a few local errors such as Ramachandran outliers, flipped branched protein side chains and incorrect sugar puckers. It is critical both for the crystallographer and for the end user that there are easy and reliable methods to diagnose and correct these sorts of errors in structures. MolProbity is the authors’ contribution to helping solve this problem and this article reviews its general capabilities, reports on recent enhancements and usage, and presents evidence that the resulting improvements are now beneficially affecting the global database

  7. Integration of error tolerance into the design of control rooms of nuclear power plants

    International Nuclear Information System (INIS)

    Sepanloo, Kamran

    1998-08-01

    Many failures of complex technological systems have been attributed to human errors. Today, based on extensive research on the role of the human element in technological systems, it is known that human error cannot be totally eliminated in modern, flexible, or changing work environments by conventional design strategies (e.g. defence in depth) or better instructions, nor should it be. Instead, the operators' ability to explore degrees of freedom should be supported and means for recovering from the effects of errors should be included. This calls for innovative error tolerant design of technological systems. Integration of the error tolerant concept into the design, construction, startup, and operation of nuclear power plants provides an effective means of reducing human error occurrence during all stages of the plant's life and therefore leads to considerable enhancement of plant safety

  8. Aviation safety and automation technology for subsonic transports

    Science.gov (United States)

    Albers, James A.

    1991-01-01

    Discussed here are aviation safety human factors and air traffic control (ATC) automation research conducted at the NASA Ames Research Center. Research results are given in the areas of flight deck and ATC automation, displays and warning systems, crew coordination, and crew fatigue and jet lag. Accident investigation and an incident reporting system that is used to guide the human factors research are discussed. A design philosophy for human-centered automation is given, along with an evaluation of automation on advanced technology transports. Intelligent error tolerant systems such as electronic checklists are discussed, along with design guidelines for reducing procedure errors. The data on the evaluation of Crew Resource Management (CRM) training indicate highly significant positive changes in appropriate flight deck behavior and more effective use of available resources by crew members receiving the training.

  9. PS-022 Complex automated medication systems reduce medication administration error rates in an acute medical ward

    DEFF Research Database (Denmark)

    Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan

    2017-01-01

    Background: Medication errors have received extensive attention in recent decades and are of significant concern to healthcare organisations globally. Medication errors occur frequently, and adverse events associated with medications are one of the largest causes of harm to hospitalised patients ... cabinet, automated dispensing and barcode medication administration; (2) non-patient specific automated dispensing and barcode medication administration. The occurrence of administration errors was observed in three 3-week periods. The error rates were calculated by dividing the number of doses with one...

  10. The Joint Structural Biology Group beam lines at the ESRF: Modern macromolecular crystallography

    CERN Document Server

    Mitchell, E P

    2001-01-01

    Macromolecular crystallography has evolved considerably over the last decade. Data sets can now be collected in under an hour on high-throughput beam lines, leading to electron density and, possibly, initial models calculated on-site. There are five beam lines currently dedicated to macromolecular crystallography: the ID14 complex and BM-14 (soon to be superseded by ID-29). These lines handle over five hundred projects every six months and demand is increasing. Automated sample handling, alignment and data management protocols will be required to work efficiently under this demanding load. Projects developing these themes are underway within the JSBG.

  11. Global error estimation based on the tolerance proportionality for some adaptive Runge-Kutta codes

    Science.gov (United States)

    Calvo, M.; González-Pinto, S.; Montijano, J. I.

    2008-09-01

    Modern codes for the numerical solution of Initial Value Problems (IVPs) in ODEs are based on adaptive methods that, for a user-supplied tolerance δ, attempt to advance the integration selecting the size of each step so that some measure of the local error is ≈ δ. Although this policy does not ensure that the global errors are under the prescribed tolerance, after the early studies of Stetter [Considerations concerning a theory for ODE-solvers, in: R. Burlisch, R.D. Grigorieff, J. Schröder (Eds.), Numerical Treatment of Differential Equations, Proceedings of Oberwolfach, 1976, Lecture Notes in Mathematics, vol. 631, Springer, Berlin, 1978, pp. 188-200; Tolerance proportionality in ODE codes, in: R. März (Ed.), Proceedings of the Second Conference on Numerical Treatment of Ordinary Differential Equations, Humboldt University, Berlin, 1980, pp. 109-123] and the extensions of Higham [Global error versus tolerance for explicit Runge-Kutta methods, IMA J. Numer. Anal. 11 (1991) 457-480; The tolerance proportionality of adaptive ODE solvers, J. Comput. Appl. Math. 45 (1993) 227-236; The reliability of standard local error control algorithms for initial value ordinary differential equations, in: Proceedings: The Quality of Numerical Software: Assessment and Enhancement, IFIP Series, Springer, Berlin, 1997], it has been proved that in many existing explicit Runge-Kutta codes the global errors behave asymptotically as some rational power of δ. This step-size policy, for a given IVP, determines at each grid point t_n a new step-size h_{n+1} = h(t_n; δ) so that h(t; δ) is a continuous function of t. In this paper a study of the tolerance proportionality property under a discontinuous step-size policy, which does not allow the size of the step to change if the step-size ratio between two consecutive steps is close to unity, is carried out. This theory is applied to obtain global error estimations in a few problems that have been solved with
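
    The step-size policy summarized above is easy to reproduce in a few lines. The sketch below (Python) uses a low-order embedded Euler/Heun pair with a standard error-per-step controller; printing the global error at the endpoint for several tolerances shows it shrinking roughly as a power of δ. This is only an illustration of the general mechanism, not one of the codes analysed in the paper.

        import math

        def rk12_adaptive(f, t0, y0, t_end, delta):
            """Embedded Euler/Heun (order 1/2) pair: a step is accepted when the
            local error estimate is <= delta, and the step size is then rescaled."""
            t, y, h = t0, y0, (t_end - t0) / 100.0
            while t < t_end:
                h = min(h, t_end - t)
                k1 = f(t, y)
                k2 = f(t + h, y + h * k1)
                y_low = y + h * k1                      # Euler, order 1
                y_high = y + h * (k1 + k2) / 2.0        # Heun, order 2
                err = abs(y_high - y_low)               # local error estimate
                if err <= delta:
                    t, y = t + h, y_high                # accept the step
                h *= min(5.0, max(0.1, 0.9 * (delta / max(err, 1e-16)) ** 0.5))
            return y

        # Global error at t = 5 for y' = -y, y(0) = 1, over a range of tolerances.
        for delta in (1e-3, 1e-5, 1e-7):
            y_num = rk12_adaptive(lambda t, y: -y, 0.0, 1.0, 5.0, delta)
            print(delta, abs(y_num - math.exp(-5.0)))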

  12. Healing assessment of tile sets for error tolerance in DNA self-assembly.

    Science.gov (United States)

    Hashempour, M; Mashreghian Arani, Z; Lombardi, F

    2008-12-01

    An assessment of the effectiveness of healing for error tolerance in DNA self-assembly tile sets for algorithmic/nano-manufacturing applications is presented. Initially, the conditions for correct binding of a tile to an existing aggregate are analysed using a Markovian approach; based on this analysis, it is proved that correct aggregation (as identified with a so-called ideal tile set) is not always achieved by the existing tile sets for nano-manufacturing. A metric for assessing tile sets for healing by utilising punctures is proposed. Tile sets are investigated and assessed with respect to features such as error (mismatched tile) movement, punctured area and bond types. Subsequently, it is shown that the proposed metric can comprehensively assess the healing effectiveness of a puncture type for a tile set and its capability to attain error tolerance for the desired pattern. Extensive simulation results are provided.

  13. Automated error-tolerant macromolecular structure determination from multidimensional nuclear Overhauser enhancement spectra and chemical shift assignments: improved robustness and performance of the PASD algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Kuszewski, John J.; Thottungal, Robin Augustine [National Institutes of Health, Imaging Sciences Laboratory, Center for Information Technology (United States); Clore, G. Marius [National Institutes of Health, Laboratory of Chemical Physics, National Institute of Diabetes and Digestive and Kidney Diseases (United States)], E-mail: mariusc@mail.nih.gov; Schwieters, Charles D. [National Institutes of Health, Imaging Sciences Laboratory, Center for Information Technology (United States)], E-mail: Charles.Schwieters@nih.gov

    2008-08-15

    We report substantial improvements to the previously introduced automated NOE assignment and structure determination protocol known as PASD (Kuszewski et al. (2004) J Am Chem Soc 126:6258-6273). The improved protocol includes extensive analysis of input spectral data to create a low-resolution contact map of residues expected to be close in space. This map is used to obtain reasonable initial guesses of NOE assignment likelihoods, which are refined during subsequent structure calculations. Information in the contact map about which residues are predicted not to be close in space is applied via conservative repulsive distance restraints which are used in early phases of the structure calculations. In comparison with the previous protocol, the new protocol requires significantly less computation time. We show results of running the new PASD protocol on six proteins and demonstrate that useful assignment and structural information is extracted on proteins of more than 220 residues. We show that useful assignment information can be obtained even in the case in which a unique structure cannot be determined.
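
    To make the idea concrete, the toy Python sketch below shows one way a low-resolution contact map could seed initial assignment likelihoods and generate conservative repulsive restraints. The data structures and the weighting scheme are hypothetical simplifications for illustration; they are not the actual PASD algorithm or its parameters.

        # Toy illustration (not the PASD code): residue pairs predicted to be in
        # contact up-weight the candidate assignments of each NOE peak, while
        # pairs predicted to be far apart become conservative repulsive restraints.

        def initial_likelihoods(peak_candidates, contact_map, contact_weight=5.0):
            """peak_candidates: {peak_id: [(res_i, res_j), ...]} from chemical-shift
            matching; contact_map: set of frozenset residue pairs predicted close.
            Returns {peak_id: [((res_i, res_j), likelihood), ...]}."""
            likelihoods = {}
            for peak, candidates in peak_candidates.items():
                weights = [contact_weight if frozenset(pair) in contact_map else 1.0
                           for pair in candidates]
                total = sum(weights)
                likelihoods[peak] = [(pair, w / total)
                                     for pair, w in zip(candidates, weights)]
            return likelihoods

        def repulsive_restraints(residues, contact_map, min_distance=10.0):
            """Residue pairs absent from the contact map get a lower-bound distance
            restraint, to be used only in early phases of the structure calculation."""
            return [(i, j, min_distance)
                    for i in residues for j in residues
                    if i < j and frozenset((i, j)) not in contact_map]

        # Toy data: two peaks, each with two candidate assignments.
        contacts = {frozenset((3, 45)), frozenset((12, 80))}
        peaks = {"peak1": [(3, 45), (3, 52)], "peak2": [(12, 80), (14, 80)]}
        print(initial_likelihoods(peaks, contacts))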

  14. SU-G-TeP4-08: Automating the Verification of Patient Treatment Parameters

    Energy Technology Data Exchange (ETDEWEB)

    DiCostanzo, D; Ayan, A; Woollard, J; Gupta, N [The Ohio State University, Columbus, OH (United States)

    2016-06-15

    Purpose: To automate the daily verification of each patient's treatment by utilizing the trajectory log files (TLs) written by the Varian TrueBeam linear accelerator, while reducing the number of false positives, including jaw and gantry positioning errors, that are displayed in the Treatment History tab of Varian's Chart QA module. Methods: Small deviations in treatment parameters are difficult to detect in weekly chart checks, but may be significant in reducing delivery errors, and would be critical if detected daily. Software was developed in house to read TLs. Multiple functions were implemented within the software that allow it to operate via a GUI to analyze TLs, or as a script to run on a regular basis. In order to determine tolerance levels for the scripted analysis, 15,241 TLs from seven TrueBeams were analyzed. The maximum error of each axis for each TL was written to a CSV file and statistically analyzed to determine the tolerance for each axis accessible in the TLs to flag for manual review. The software/scripts developed were tested by varying the tolerance values to ensure veracity. After tolerances were determined, multiple weeks of manual chart checks were performed simultaneously with the automated analysis to ensure validity. Results: The tolerance values for the major axes were determined to be 0.025 degrees for the collimator, 1.0 degree for the gantry, 0.002 cm for the y-jaws, 0.01 cm for the x-jaws, and 0.5 MU for the MU. The automated verification of treatment parameters has been in clinical use for 4 months. During that time, no errors in machine delivery of the patient treatments were found. Conclusion: The process detailed here is a viable and effective alternative to manually checking treatment parameters during weekly chart checks.
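
    A minimal sketch of the scripted check described above might read per-axis maximum deviations from a CSV summary of the trajectory logs and flag any treatment exceeding the stated tolerances. The column names and file layout below are assumptions for illustration, not Varian's log format or the authors' code; the tolerance values are those quoted in the abstract.

        import csv

        # Per-axis tolerances from the statistical analysis described above.
        TOLERANCES = {
            "collimator_deg": 0.025,
            "gantry_deg": 1.0,
            "jaw_y_cm": 0.002,
            "jaw_x_cm": 0.01,
            "mu": 0.5,
        }

        def flag_trajectory_logs(csv_path):
            """Read one row per trajectory log (maximum error per axis, columns named
            as in TOLERANCES, an assumed layout) and return rows exceeding a tolerance."""
            flagged = []
            with open(csv_path, newline="") as fh:
                for row in csv.DictReader(fh):
                    over = {axis: float(row[axis]) for axis in TOLERANCES
                            if abs(float(row[axis])) > TOLERANCES[axis]}
                    if over:
                        flagged.append((row.get("log_id", "?"), over))
            return flagged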

  15. From drafting guideline to error detection: Automating style checking for legislative texts

    OpenAIRE

    Höfler, Stefan; Sugisaki, Kyoko

    2012-01-01

    This paper reports on the development of methods for the automated detection of violations of style guidelines for legislative texts, and their implementation in a prototypical tool. To this end, the approach of error modelling employed in automated style checkers for technical writing is enhanced to meet the requirements of legislative editing. The paper identifies and discusses the two main sets of challenges that have to be tackled in this process: (i) the provision of domain-specific NLP ...

  16. Study of the Ultimate Error of the Axis Tolerance Feature and Its Pose Decoupling Based on an Area Coordinate System

    Directory of Open Access Journals (Sweden)

    Qungui Du

    2018-03-01

    Manufacturing error and assembly error should be taken into consideration when evaluating and analysing accurate product performance in the design phase. Traditional tolerance analysis methods establish an error propagation model based on dimension chains, with tolerance values regarded as error boundaries, and obtain the limit of the target feature error through optimization methods or through statistical analysis with the tolerance domain as the boundary. Because the deviations of a tolerance feature (TF) on its degrees of freedom (DOF) are coupled, accurate deviations on all DOF may not be obtained, even though these deviations constitute the basis for product performance analysis. Therefore, taking the widely used shaft-hole fit as an example, a pose decoupling model of the axis TF was proposed based on an area coordinate system. This model realizes decoupling analysis of any pose of the axis TF within the tolerance domain. By combining it with the tolerance analysis model based on tracking local coordinate systems proposed by the authors, ultimate pose analysis of the closed-loop system, namely the target feature, as well as statistical analysis, can be further implemented. This method supports analysis of true product performance with arbitrary error in the product design phase from the perspective of tolerance, thereby shortening the product research and development cycle. The method is demonstrated through application to a real-life example.

  17. Complacency and Automation Bias in the Use of Imperfect Automation.

    Science.gov (United States)

    Wickens, Christopher D; Clegg, Benjamin A; Vieane, Alex Z; Sebok, Angelia L

    2015-08-01

    We examine the effects of two different kinds of decision-aiding automation errors on human-automation interaction (HAI), occurring at the first failure following repeated exposure to correctly functioning automation. The two errors are incorrect advice, triggering the automation bias, and missing advice, reflecting complacency. Contrasts between analogous automation errors in alerting systems, rather than decision aiding, have revealed that alerting false alarms are more problematic to HAI than alerting misses are. Prior research in decision aiding, although contrasting the two aiding errors (incorrect vs. missing), has confounded error expectancy. Participants performed an environmental process control simulation with and without decision aiding. For those with the aid, automation dependence was created through several trials of perfect aiding performance, and an unexpected automation error was then imposed in which automation was either gone (one group) or wrong (a second group). A control group received no automation support. The correct aid supported faster and more accurate diagnosis and lower workload. The aid failure degraded all three variables, but "automation wrong" had a much greater effect on accuracy, reflecting the automation bias, than did "automation gone," reflecting the impact of complacency. Some complacency was manifested for automation gone, by a longer latency and more modest reduction in accuracy. Automation wrong, creating the automation bias, appears to be a more problematic form of automation error than automation gone, reflecting complacency. Decision-aiding automation should indicate its lower degree of confidence in uncertain environments to avoid the automation bias. © 2015, Human Factors and Ergonomics Society.

  18. Automated evolutionary restructuring of workflows to minimise errors via stochastic model checking

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Hansen, Zaza Nadja Lee; Jacobsen, Peter

    2014-01-01

    This paper presents a framework for the automated restructuring of workflows that allows one to minimise the impact of errors on a production workflow. The framework allows for the modelling of workflows by means of a formalised subset of the Business Process Modelling and Notation (BPMN) language...

  19. Errors detected in pediatric oral liquid medication doses prepared in an automated workflow management system.

    Science.gov (United States)

    Bledsoe, Sarah; Van Buskirk, Alex; Falconer, R James; Hollon, Andrew; Hoebing, Wendy; Jokic, Sladan

    2018-02-01

    The effectiveness of barcode-assisted medication preparation (BCMP) technology in detecting oral liquid dose preparation errors was evaluated. From June 1, 2013, through May 31, 2014, a total of 178,344 oral doses were processed at Children's Mercy, a 301-bed pediatric hospital, through an automated workflow management system. Doses containing errors detected by the system's barcode scanning system or classified as rejected by the pharmacist were further reviewed. Errors intercepted by the barcode-scanning system were classified as (1) expired product, (2) incorrect drug, (3) incorrect concentration, and (4) technological error. Pharmacist-rejected doses were categorized into 6 categories based on the root cause of the preparation error: (1) expired product, (2) incorrect concentration, (3) incorrect drug, (4) incorrect volume, (5) preparation error, and (6) other. Of the 178,344 doses examined, 3,812 (2.1%) errors were detected by either the barcode-assisted scanning system (1.8%, n = 3,291) or a pharmacist (0.3%, n = 521). The 3,291 errors prevented by the barcode-assisted system were classified most commonly as technological error and incorrect drug, followed by incorrect concentration and expired product. Errors detected by pharmacists were also analyzed. These 521 errors were most often classified as incorrect volume, preparation error, expired product, other, incorrect drug, and incorrect concentration. BCMP technology detected errors in 1.8% of pediatric oral liquid medication doses prepared in an automated workflow management system, with errors being most commonly attributed to technological problems or incorrect drugs. Pharmacists rejected an additional 0.3% of studied doses. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
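
    The reported percentages follow directly from the dose counts; a quick arithmetic check in Python:

        total_doses = 178_344
        barcode_detected = 3_291       # intercepted by the barcode-scanning system
        pharmacist_rejected = 521      # caught at pharmacist verification

        print(f"barcode-detected:    {barcode_detected / total_doses:.1%}")       # about 1.8%
        print(f"pharmacist-rejected: {pharmacist_rejected / total_doses:.1%}")    # about 0.3%
        print(f"overall:             {(barcode_detected + pharmacist_rejected) / total_doses:.1%}")  # about 2.1%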

  20. Macromolecular therapeutics.

    Science.gov (United States)

    Yang, Jiyuan; Kopeček, Jindřich

    2014-09-28

    This review covers water-soluble polymer-drug conjugates and macromolecules that possess biological activity without attached low molecular weight drugs. The main design principles of traditional and backbone degradable polymer-drug conjugates as well as the development of a new paradigm in nanomedicines - (low molecular weight) drug-free macromolecular therapeutics are discussed. To address the biological features of cancer, macromolecular therapeutics directed to stem/progenitor cells and the tumor microenvironment are deliberated. Finally, the future perspectives of the field are briefly debated. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Fault-tolerant quantum computing in the Pauli or Clifford frame with slow error diagnostics

    Directory of Open Access Journals (Sweden)

    Christopher Chamberland

    2018-01-01

    We consider the problem of fault-tolerant quantum computation in the presence of slow error diagnostics, caused either by measurement latencies or by slow decoding algorithms. Our scheme offers a few improvements over previously existing solutions; for instance, it does not require active error correction and results in a reduced error-correction overhead when error diagnostics is much slower than the gate time. In addition, we adapt our protocol to cases where the underlying error correction strategy chooses the optimal correction amongst all Clifford gates instead of the usual Pauli gates. The resulting Clifford frame protocol is of independent interest as it can increase error thresholds and could find applications in other areas of quantum computation.

  2. Furfural-tolerant Zymomonas mobilis derived from error-prone PCR-based whole genome shuffling and their tolerant mechanism.

    Science.gov (United States)

    Huang, Suzhen; Xue, Tingli; Wang, Zhiquan; Ma, Yuanyuan; He, Xueting; Hong, Jiefang; Zou, Shaolan; Song, Hao; Zhang, Minhua

    2018-04-01

    A furfural-tolerant strain is essential for the fermentative production of biofuels or chemicals from lignocellulosic biomass. In this study, Zymomonas mobilis CP4 was for the first time subjected to error-prone PCR-based whole genome shuffling, yielding mutants F211 and F27, which could tolerate 3 g/L furfural. Under various furfural stress conditions, mutant F211 grew rapidly once the furfural concentration had dropped to 1 g/L. Meanwhile, the two mutants also showed higher tolerance to high concentrations of glucose than the control strain CP4. Genome resequencing revealed that F211 and F27 had 12 and 13 single-nucleotide polymorphisms, respectively. Activity assays demonstrated that NADH-dependent furfural reductase activity was increased under furfural stress in both mutant F211 and CP4, and that the activity peaked earlier in the mutant than in the control. The furfural level in F211 cultures also decreased more rapidly. These results indicate that the increased furfural tolerance of the mutants may result from enhanced NADH-dependent furfural reductase activity during early log phase, which could lead to an accelerated furfural detoxification process in the mutants. In all, we obtained Z. mobilis mutants with enhanced tolerance to furfural and to high glucose concentrations, and provided valuable clues about the mechanism of furfural tolerance and for strain development.

  3. Improved acid tolerance of Lactobacillus pentosus by error-prone whole genome amplification.

    Science.gov (United States)

    Ye, Lidan; Zhao, Hua; Li, Zhi; Wu, Jin Chuan

    2013-05-01

    Acid tolerance of Lactobacillus pentosus ATCC 8041 was improved by error-prone amplification of its genomic DNA using random primers and Taq DNA polymerase. The resulting amplification products were transferred into wild-type L. pentosus by electroporation and the transformants were screened for growth on low-pH agar plates. After only one round of mutation, a mutant (MT3) was identified that was able to completely consume 20 g/L of glucose to produce lactic acid at a yield of 95% in 1 L of MRS medium at pH 3.8 within 36 h, whereas no growth or lactic acid production was observed for the wild-type strain under the same conditions. The acid tolerance of mutant MT3 remained genetically stable for at least 25 subcultures. Therefore, the error-prone whole genome amplification technique is a very powerful tool for improving phenotypes of this lactic acid bacterium and may also be applicable to other microorganisms. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. MxCuBE: a synchrotron beamline control environment customized for macromolecular crystallography experiments

    International Nuclear Information System (INIS)

    Gabadinho, José; Beteva, Antonia; Guijarro, Matias; Rey-Bakaikoa, Vicente; Spruce, Darren

    2010-01-01

    MxCuBE is a beamline control environment optimized for the needs of macromolecular crystallography. This paper describes the design of the software and the features that MxCuBE currently provides. The design and features of a beamline control software system for macromolecular crystallography (MX) experiments developed at the European Synchrotron Radiation Facility (ESRF) are described. This system, MxCuBE, allows users to easily and simply interact with beamline hardware components and provides automated routines for common tasks in the operation of a synchrotron beamline dedicated to experiments in MX. Additional functionality is provided through intuitive interfaces that enable the assessment of the diffraction characteristics of samples, experiment planning, automatic data collection and the on-line collection and analysis of X-ray emission spectra. The software can be run in a tandem client-server mode that allows for remote control and relevant experimental parameters and results are automatically logged in a relational database, ISPyB. MxCuBE is modular, flexible and extensible and is currently deployed on eight macromolecular crystallography beamlines at the ESRF. Additionally, the software is installed at MAX-lab beamline I911-3 and at BESSY beamline BL14.1

  5. Towards more reliable automated multi-dose dispensing: retrospective follow-up study on medication dose errors and product defects.

    Science.gov (United States)

    Palttala, Iida; Heinämäki, Jyrki; Honkanen, Outi; Suominen, Risto; Antikainen, Osmo; Hirvonen, Jouni; Yliruusi, Jouko

    2013-03-01

    To date, little is known about the applicability of different types of pharmaceutical dosage forms to an automated high-speed multi-dose dispensing process. The purpose of the present study was to identify and further investigate various process-induced and/or product-related limitations associated with the multi-dose dispensing process. The rates of product defects and dose dispensing errors in automated multi-dose dispensing were retrospectively investigated during a 6-month follow-up period. The study was based on the analysis of process data from a total of nine automated high-speed multi-dose dispensing systems. Special attention was paid to the dependence of multi-dose dispensing errors/product defects on pharmaceutical tablet properties (such as shape, dimensions, weight, scored lines, coatings, etc.) in order to profile the tablet forms most suitable for automated dose dispensing systems. The relationship between the risk of errors in dose dispensing and tablet characteristics was visualized by creating a principal component analysis (PCA) model for the outcome of dispensed tablets. The two most common process-induced failures identified in multi-dose dispensing are predisposal of tablet defects and unexpected product transitions in the medication cassette (dose dispensing errors). The tablet defects are product-dependent failures, while the tablet transitions depend on the automated multi-dose dispensing system used. The occurrence of tablet defects is approximately twice as common as that of tablet transitions. An optimal tablet for high-speed multi-dose dispensing would be a round-shaped, relatively small- to middle-sized, film-coated tablet without any scored line. Commercial tablet products can be profiled and classified based on their suitability for a high-speed multi-dose dispensing process.
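
    As a minimal illustration of the profiling step described above, the sketch below applies principal component analysis to a small set of hypothetical tablet properties using scikit-learn. The feature names and values are illustrative assumptions, not data from the study.

        # Hedged sketch: profiling tablet products by physical properties with PCA.
        # The feature columns and values are illustrative assumptions only.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        # rows = tablet products; columns = [diameter_mm, thickness_mm, weight_mg,
        #                                    is_round, is_film_coated, has_score_line]
        X = np.array([
            [8.0, 3.2, 250, 1, 1, 0],
            [11.5, 4.8, 600, 0, 0, 1],
            [6.5, 2.9, 120, 1, 1, 0],
            [10.0, 4.0, 450, 0, 1, 1],
        ])

        X_scaled = StandardScaler().fit_transform(X)  # put all properties on one scale
        pca = PCA(n_components=2)
        scores = pca.fit_transform(X_scaled)          # 2D coordinates for profiling plots

        print("explained variance ratio:", pca.explained_variance_ratio_)
        print("component scores:\n", scores)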

  6. Automation and Remote Synchrotron Data Collection

    International Nuclear Information System (INIS)

    Gilski, M.

    2008-01-01

    X-ray crystallography is the natural choice for macromolecular structure determination by virtue of its accuracy, speed, and potential for further speed gains, while synchrotron radiation is indispensable because of its intensity and tuneability. Good X-ray crystallographic diffraction patterns are essential, and frequently these are achievable only by using the few large synchrotrons located worldwide. Beamline time at these facilities is in high demand, with long queues, and increasing the efficiency with which the facilities are utilized will help to expedite the structure determination process. Automation and remote data collection are therefore essential steps in ensuring that macromolecular structure determination becomes a very high throughput process. (author)

  7. Review of Pre-Analytical Errors in Oral Glucose Tolerance Testing in a Tertiary Care Hospital.

    Science.gov (United States)

    Nanda, Rachita; Patel, Suprava; Sahoo, Sibashish; Mohapatra, Eli

    2018-03-13

    The pre-pre-analytical and pre-analytical phases account for a major share of laboratory errors. The oral glucose tolerance test, a very common procedure, was chosen for the identification of pre-pre-analytical errors. Quality indicators provide evidence of quality, support accountability and help in the decision making of laboratory personnel. The aim of this research was to evaluate the pre-analytical performance of the oral glucose tolerance test procedure. An observational study was conducted over a period of three months in the phlebotomy and accessioning unit of our laboratory, using a questionnaire that examined the pre-pre-analytical errors through a scoring system. The pre-analytical phase was analyzed for each sample collected according to seven quality indicators. About 25% of the population gave a wrong answer to the question that tested knowledge of patient preparation. The appropriateness of the test result (QI-1) showed the most errors. Although QI-5, for sample collection, had a low error rate, it is a very important indicator, as any wrongly collected sample can alter the test result. Evaluating the pre-analytical and pre-pre-analytical phases is essential and must be conducted routinely on a yearly basis to identify errors, take corrective action and facilitate the gradual introduction of these indicators into routine practice.

  8. The impact of automation on workload and dispensing errors in a hospital pharmacy.

    Science.gov (United States)

    James, K Lynette; Barlow, Dave; Bithell, Anne; Hiom, Sarah; Lord, Sue; Pollard, Mike; Roberts, Dave; Way, Cheryl; Whittlesea, Cate

    2013-04-01

    To determine the effect of installing an original-pack automated dispensing system (ADS) on dispensary workload and prevented dispensing incidents in a hospital pharmacy. Data on dispensary workload and prevented dispensing incidents, defined as dispensing errors detected and reported before medication had left the pharmacy, were collected over 6 weeks at a National Health Service hospital in Wales before and after the installation of an ADS. Workload was measured by non-participant observation using the event recording technique. Prevented dispensing incidents were self-reported by pharmacy staff on standardised forms. Median workloads (measured as items dispensed/person/hour) were compared using Mann-Whitney U tests, rates of prevented dispensing incidents were compared using the chi-square test, and Spearman's rank correlation was used to examine the association between workload and prevented dispensing incidents. A P value of ≤0.05 was considered statistically significant. Median dispensary workload was significantly lower pre-automation (9.20 items/person/h) than post-automation (13.17 items/person/h). The rate of prevented dispensing incidents was significantly lower post-automation (0.28%) than pre-automation (0.64%), and workload was positively correlated with prevented dispensing incidents (ρ = 0.23). These findings suggest that automation improves dispensing efficiency and reduces the rate of prevented dispensing incidents. It is proposed that prevented dispensing incidents frequently occurred during periods of high workload due to involuntary automaticity, and that incidents occurring after a busy period were attributable to staff experiencing fatigue after-effects. © 2012 The Authors. IJPP © 2012 Royal Pharmaceutical Society.
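
    The statistical comparisons named in this abstract (Mann-Whitney U, chi-square and Spearman's rank correlation) can be reproduced in outline with scipy.stats, as in the hedged sketch below; the workload values and incident counts are made-up placeholders, not the study data.

        # Hedged sketch of the statistical tests named above; all numbers are placeholders.
        import numpy as np
        from scipy import stats

        workload_pre = np.array([8.1, 9.3, 9.0, 10.2, 8.7])     # items/person/h, hypothetical
        workload_post = np.array([12.5, 13.4, 13.0, 14.1, 12.9])

        # compare median workloads (Mann-Whitney U test)
        u_stat, p_u = stats.mannwhitneyu(workload_pre, workload_post, alternative="two-sided")

        # compare incident rates (chi-square on a 2x2 table of [incidents, non-incidents])
        table = np.array([[64, 9936],    # pre-automation, hypothetical counts
                          [28, 9972]])   # post-automation
        chi2, p_chi, dof, _ = stats.chi2_contingency(table)

        # association between workload and prevented incidents (Spearman rank correlation)
        incidents = np.array([1, 2, 2, 3, 2])
        rho, p_rho = stats.spearmanr(workload_post, incidents)

        print(f"Mann-Whitney p={p_u:.3f}, chi-square p={p_chi:.3f}, Spearman rho={rho:.2f}")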

  9. Automated evaluation of setup errors in carbon ion therapy using PET: Feasibility study

    International Nuclear Information System (INIS)

    Kuess, Peter; Hopfgartner, Johannes; Georg, Dietmar; Helmbrecht, Stephan; Fiedler, Fine; Birkfellner, Wolfgang; Enghardt, Wolfgang

    2013-01-01

    Purpose: To investigate the possibility of detecting patient mispositioning in carbon-ion therapy with particle therapy positron emission tomography (PET) in an automated, image registration based manner. Methods: Tumors in the head and neck (H&N), pelvic, lung, and brain regions were investigated. Biologically optimized carbon ion treatment plans were created with TRiP98. From these treatment plans, the reference β+-activity distributions were calculated using a Monte Carlo simulation. Setup errors were simulated by shifting or rotating the computed tomography (CT). The expected β+ activity was calculated for each plan with shifts. Finally, the reference particle therapy PET images were compared to the “shifted” β+-activity distribution simulations using Pearson's correlation coefficient (PCC). To account for different PET monitoring options, in-beam PET was compared to three different in-room scenarios. Additionally, the dosimetric effects of the CT misalignments were investigated. Results: The automated PCC detection of patient mispositioning was possible in the investigated indications for cranio-caudal shifts of 4 mm and more, except for prostate tumors. In the rather homogeneous pelvic region, the generated β+-activity distributions of the reference and compared PET images were too much alike; thus, setup errors in this region could not be detected. Regarding lung lesions, the detection strongly depended on the exact tumor location: in the center of the lung, tumor misalignments could be detected down to 2 mm shifts, while resolving shifts of tumors close to the thoracic wall was more challenging. Rotational shifts of +6° and more in the H&N and lung regions could be detected using in-room PET and partly using in-beam PET. Comparing in-room PET to in-beam PET, no obvious trend was found. However, among the in-room scenarios a longer measurement time was found to be advantageous. Conclusions: This study scopes the use of various particle therapy
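
    The core comparison in this study, a Pearson correlation between a reference β+-activity distribution and the distribution expected after a simulated setup shift, can be sketched as below. The Gaussian-blob volumes are synthetic stand-ins for the Monte Carlo activity simulations, so the numbers are purely illustrative.

        # Hedged sketch: PCC between a reference activity volume and shifted versions of it.
        import numpy as np
        from scipy.ndimage import shift as nd_shift
        from scipy.stats import pearsonr

        def gaussian_blob(shape=(32, 32, 32), center=(16, 16, 16), sigma=4.0):
            z, y, x = np.indices(shape)
            d2 = (z - center[0])**2 + (y - center[1])**2 + (x - center[2])**2
            return np.exp(-d2 / (2 * sigma**2))

        reference = gaussian_blob()                    # stand-in for the reference beta+ activity
        for dz in (0, 2, 4):                           # simulated cranio-caudal shifts in voxels
            shifted = nd_shift(reference, shift=(dz, 0, 0), order=1)
            pcc, _ = pearsonr(reference.ravel(), shifted.ravel())
            print(f"shift {dz} voxels -> PCC = {pcc:.3f}")   # PCC drops as the shift grows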

  10. A 3D Image Filter for Parameter-Free Segmentation of Macromolecular Structures from Electron Tomograms

    Science.gov (United States)

    Ali, Rubbiya A.; Landsberg, Michael J.; Knauth, Emily; Morgan, Garry P.; Marsh, Brad J.; Hankamer, Ben

    2012-01-01

    3D image reconstruction of large cellular volumes by electron tomography (ET) at high (≤5 nm) resolution can now routinely resolve organellar and compartmental membrane structures, protein coats, cytoskeletal filaments, and macromolecules. However, current image analysis methods for identifying in situ macromolecular structures within the crowded 3D ultrastructural landscape of a cell remain labor-intensive, time-consuming, and prone to user-bias and/or error. This paper demonstrates the development and application of a parameter-free, 3D implementation of the bilateral edge-detection (BLE) algorithm for the rapid and accurate segmentation of cellular tomograms. The performance of the 3D BLE filter has been tested on a range of synthetic and real biological data sets and validated against current leading filters—the pseudo 3D recursive and Canny filters. The performance of the 3D BLE filter was found to be comparable to or better than that of both the 3D recursive and Canny filters while offering the significant advantage that it requires no parameter input or optimisation. Edge widths as little as 2 pixels are reproducibly detected with signal intensity and grey scale values as low as 0.72% above the mean of the background noise. The 3D BLE thus provides an efficient method for the automated segmentation of complex cellular structures across multiple scales for further downstream processing, such as cellular annotation and sub-tomogram averaging, and provides a valuable tool for the accurate and high-throughput identification and annotation of 3D structural complexity at the subcellular level, as well as for mapping the spatial and temporal rearrangement of macromolecular assemblies in situ within cellular tomograms. PMID:22479430

  11. Fault tolerant strategies for automated operation of nuclear reactors

    International Nuclear Information System (INIS)

    Berkan, R.C.; Tsoukalas, L.

    1991-01-01

    This paper introduces an automatic control system incorporating a number of verification, validation, and command generation tasks within a fault-tolerant architecture. The integrated system utilizes recent methods of artificial intelligence such as neural networks and fuzzy logic control. Furthermore, advanced signal processing and nonlinear control methods are also included in the design. The primary goal is to create an on-line capability to validate signals, analyze plant performance, and verify the consistency of commands before control decisions are finalized. The application of this approach to the automated startup of the Experimental Breeder Reactor-II (EBR-II) is performed using a validated nonlinear model. The simulation results show that the advanced concepts have the potential to improve plant availability and safety

  12. Clustering procedures for the optimal selection of data sets from multiple crystals in macromolecular crystallography

    International Nuclear Information System (INIS)

    Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L.; Armour, Wes; Waterman, David G.; Iwata, So; Evans, Gwyndaf

    2013-01-01

    A systematic approach to the scaling and merging of data from multiple crystals in macromolecular crystallography is introduced and explained. The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein
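
    A much simplified analogue of the clustering step that BLEND automates can be sketched with hierarchical clustering of data sets by the similarity of their unit-cell parameters, as below; the cell values, distance metric and cut threshold are assumptions for illustration and are not BLEND's actual algorithm.

        # Hedged sketch: group multi-crystal data sets by unit-cell similarity before merging.
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import pdist

        # one row per data set: [a, b, c] unit-cell lengths in Angstrom (hypothetical)
        cells = np.array([
            [78.1, 78.1, 37.2],
            [78.3, 78.2, 37.3],
            [79.5, 79.6, 38.0],
            [78.2, 78.0, 37.1],
        ])

        dist = pdist(cells, metric="euclidean")               # pairwise distances between data sets
        tree = linkage(dist, method="ward")                   # build the dendrogram
        labels = fcluster(tree, t=1.0, criterion="distance")  # cut at a chosen threshold
        print("cluster assignment per data set:", labels)     # merge data sets within each cluster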

  13. Clustering procedures for the optimal selection of data sets from multiple crystals in macromolecular crystallography

    Energy Technology Data Exchange (ETDEWEB)

    Foadi, James [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom); Imperial College, London SW7 2AZ (United Kingdom); Aller, Pierre [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom); Alguel, Yilmaz; Cameron, Alex [Imperial College, London SW7 2AZ (United Kingdom); Axford, Danny; Owen, Robin L. [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom); Armour, Wes [Oxford e-Research Centre (OeRC), Keble Road, Oxford OX1 3QG (United Kingdom); Waterman, David G. [Research Complex at Harwell (RCaH), Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0FA (United Kingdom); Iwata, So [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom); Imperial College, London SW7 2AZ (United Kingdom); Evans, Gwyndaf, E-mail: gwyndaf.evans@diamond.ac.uk [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom)

    2013-08-01

    A systematic approach to the scaling and merging of data from multiple crystals in macromolecular crystallography is introduced and explained. The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein.

  14. Automated data collection based on RoboDiff at the ESRF beamline MASSIF-1

    Energy Technology Data Exchange (ETDEWEB)

    Nurizzo, Didier, E-mail: Didier.nurizzo@esrf.fr; Guichard, Nicolas; McSweeney, Sean; Theveneau, Pascal; Guijarro, Matias; Svensson, Olof; Mueller-Dieckmann, Christoph; Leonard, Gordon [ESRF, The European Synchrotron, 71, Avenue des Martyrs,CS 40220, 38043 Grenoble (France); Bowler, Matthew W. [EMBL Grenoble Outstation, 71 Avenue des Martyrs, CS90181, 38042 Grenoble Cedex 9 (France)

    2016-07-27

    The European Synchrotron Radiation Facility has a long-standing history of automating experiments in macromolecular crystallography. MASSIF-1 (Massively Automated Sample Screening and evaluation Integrated Facility), a beamline constructed as part of the ESRF Upgrade Phase I program, has been open to the external user community since July 2014 and offers a unique, completely automated data collection service to both academic and industrial structural biologists.

  15. Automated Quantification of the Landing Error Scoring System With a Markerless Motion-Capture System.

    Science.gov (United States)

    Mauntel, Timothy C; Padua, Darin A; Stanley, Laura E; Frank, Barnett S; DiStefano, Lindsay J; Peck, Karen Y; Cameron, Kenneth L; Marshall, Stephen W

    2017-11-01

      The Landing Error Scoring System (LESS) can be used to identify individuals with an elevated risk of lower extremity injury. The limitation of the LESS is that raters identify movement errors from video replay, which is time-consuming and, therefore, may limit its use by clinicians. A markerless motion-capture system may be capable of automating LESS scoring, thereby removing this obstacle.   To determine the reliability of an automated markerless motion-capture system for scoring the LESS.   Cross-sectional study.   United States Military Academy.   A total of 57 healthy, physically active individuals (47 men, 10 women; age = 18.6 ± 0.6 years, height = 174.5 ± 6.7 cm, mass = 75.9 ± 9.2 kg).   Participants completed 3 jump-landing trials that were recorded by standard video cameras and a depth camera. Their movement quality was evaluated by expert LESS raters (standard video recording) using the LESS rubric and by software that automates LESS scoring (depth-camera data). We recorded an error for a LESS item if it was present on at least 2 of 3 jump-landing trials. We calculated κ statistics, prevalence- and bias-adjusted κ (PABAK) statistics, and percentage agreement for each LESS item. Interrater reliability was evaluated between the 2 expert rater scores and between a consensus expert score and the markerless motion-capture system score.   We observed reliability between the 2 expert LESS raters (average κ = 0.45 ± 0.35, average PABAK = 0.67 ± 0.34; percentage agreement = 0.83 ± 0.17). The markerless motion-capture system had similar reliability with consensus expert scores (average κ = 0.48 ± 0.40, average PABAK = 0.71 ± 0.27; percentage agreement = 0.85 ± 0.14). However, reliability was poor for 5 LESS items in both LESS score comparisons.   A markerless motion-capture system had the same level of reliability as expert LESS raters, suggesting that an automated system can accurately assess movement. Therefore, clinicians can use
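
    The agreement statistics reported above can be computed for a single binary LESS item as in the hedged sketch below; the two rating vectors are hypothetical and the formulas are the standard two-category definitions of Cohen's kappa and PABAK.

        # Hedged sketch: percentage agreement, Cohen's kappa and PABAK for one binary LESS item.
        import numpy as np

        def agreement_stats(rater_a, rater_b):
            a = np.asarray(rater_a, dtype=int)
            b = np.asarray(rater_b, dtype=int)
            po = np.mean(a == b)                      # observed (percentage) agreement
            pe = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())   # chance agreement
            kappa = (po - pe) / (1 - pe) if pe < 1 else 1.0
            pabak = 2 * po - 1                        # prevalence- and bias-adjusted kappa
            return po, kappa, pabak

        expert = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]       # error present (1) / absent (0), hypothetical
        system = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
        po, kappa, pabak = agreement_stats(expert, system)
        print(f"agreement={po:.2f}, kappa={kappa:.2f}, PABAK={pabak:.2f}")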

  16. Automated systems help prevent operator error during [reactor] I and C [instrumentation and control] testing

    International Nuclear Information System (INIS)

    Courcoux, R.

    1989-01-01

    On a nuclear steam supply system, even a minor failure can involve actuation of the whole reactor protection system (RPS). To reduce the likelihood of human error leading to unwanted trips during the maintenance of instrumentation and control systems, Framatome has been developing and installing various automated testing systems. Such automated systems are particularly helpful when periodic tests with a potential for RPS actuation have to be carried out, or when the test is on the critical path for the refuelling outage. The Sensitive Channel Programme described is an example of the sort of work that has been done. (author)

  17. Practical macromolecular cryocrystallography

    Energy Technology Data Exchange (ETDEWEB)

    Pflugrath, J. W., E-mail: jim.pflugrath@gmail.com [Rigaku Americas Corp., 9009 New Trails Drive, The Woodlands, TX 77381 (United States)

    2015-05-27

    Current methods, reagents and experimental hardware for successfully and reproducibly flash-cooling macromolecular crystals to cryogenic temperatures for X-ray diffraction data collection are reviewed. Cryocrystallography is an indispensable technique that is routinely used for single-crystal X-ray diffraction data collection at temperatures near 100 K, where radiation damage is mitigated. Modern procedures and tools to cryoprotect and rapidly cool macromolecular crystals with a significant solvent fraction to below the glass-transition phase of water are reviewed. Reagents and methods to help prevent the stresses that damage crystals when flash-cooling are described. A method of using isopentane to assess whether cryogenic temperatures have been preserved when dismounting screened crystals is also presented.

  18. Automated identification of crystallographic ligands using sparse-density representations

    International Nuclear Information System (INIS)

    Carolan, C. G.; Lamzin, V. S.

    2014-01-01

    A novel procedure for identifying ligands in macromolecular crystallographic electron-density maps is introduced. Density clusters in such maps can be rapidly attributed to one of 82 different ligands in an automated manner. A novel procedure for the automatic identification of ligands in macromolecular crystallographic electron-density maps is introduced. It is based on the sparse parameterization of density clusters and the matching of the pseudo-atomic grids thus created to conformationally variant ligands using mathematical descriptors of molecular shape, size and topology. In large-scale tests on experimental data derived from the Protein Data Bank, the procedure could quickly identify the deposited ligand within the top-ranked compounds from a database of candidates. This indicates the suitability of the method for the identification of binding entities in fragment-based drug screening and in model completion in macromolecular structure determination

  19. Clustering procedures for the optimal selection of data sets from multiple crystals in macromolecular crystallography.

    Science.gov (United States)

    Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L; Armour, Wes; Waterman, David G; Iwata, So; Evans, Gwyndaf

    2013-08-01

    The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein.

  20. Macromolecular crystallography using synchrotron radiation

    International Nuclear Information System (INIS)

    Bartunik, H.D.; Phillips, J.C.; Fourme, R.

    1982-01-01

    The use of synchrotron X-ray sources in macromolecular crystallography is described. The properties of synchrotron radiation relevant to macromolecular crystallography are examined. The applications discussed include anomalous dispersion techniques, the acquisition of normal and high resolution data, and kinetic studies of structural changes in macromolecules; protein data are presented illustrating these applications. The apparatus used is described including information on the electronic detectors, the monitoring of the incident beam and crystal cooling. (U.K.)

  1. Assessing the Efficiency of Phenotyping Early Traits in a Greenhouse Automated Platform for Predicting Drought Tolerance of Soybean in the Field.

    Science.gov (United States)

    Peirone, Laura S; Pereyra Irujo, Gustavo A; Bolton, Alejandro; Erreguerena, Ignacio; Aguirrezábal, Luis A N

    2018-01-01

    Conventional field phenotyping for drought tolerance, the most important factor limiting yield at a global scale, is labor-intensive and time-consuming. Automated greenhouse platforms can increase the precision and throughput of plant phenotyping and contribute to a faster release of drought tolerant varieties. The aim of this work was to establish a framework of analysis to identify early traits which could be efficiently measured in a greenhouse automated phenotyping platform, for predicting the drought tolerance of field grown soybean genotypes. A group of genotypes was evaluated, which showed variation in their drought susceptibility index (DSI) for final biomass and leaf area. A large number of traits were measured before and after the onset of a water deficit treatment, which were analyzed under several criteria: the significance of the regression with the DSI, phenotyping cost, earliness, and repeatability. The most efficient trait was found to be transpiration efficiency measured at 13 days after emergence. This trait was further tested in a second experiment with different water deficit intensities, and validated using a different set of genotypes against field data from a trial network in a third experiment. The framework applied in this work for assessing traits under different criteria could be helpful for selecting those most efficient for automated phenotyping.

  2. Assessing the Efficiency of Phenotyping Early Traits in a Greenhouse Automated Platform for Predicting Drought Tolerance of Soybean in the Field

    Directory of Open Access Journals (Sweden)

    Laura S. Peirone

    2018-05-01

    Full Text Available Conventional field phenotyping for drought tolerance, the most important factor limiting yield at a global scale, is labor-intensive and time-consuming. Automated greenhouse platforms can increase the precision and throughput of plant phenotyping and contribute to a faster release of drought tolerant varieties. The aim of this work was to establish a framework of analysis to identify early traits which could be efficiently measured in a greenhouse automated phenotyping platform, for predicting the drought tolerance of field grown soybean genotypes. A group of genotypes was evaluated, which showed variation in their drought susceptibility index (DSI) for final biomass and leaf area. A large number of traits were measured before and after the onset of a water deficit treatment, which were analyzed under several criteria: the significance of the regression with the DSI, phenotyping cost, earliness, and repeatability. The most efficient trait was found to be transpiration efficiency measured at 13 days after emergence. This trait was further tested in a second experiment with different water deficit intensities, and validated using a different set of genotypes against field data from a trial network in a third experiment. The framework applied in this work for assessing traits under different criteria could be helpful for selecting those most efficient for automated phenotyping.

  3. Design for Error Tolerance

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1983-01-01

    An important aspect of the optimal design of computer-based operator support systems is the sensitivity of such systems to operator errors. The author discusses how a system might allow for human variability with the use of reversibility and observability.

  4. Sequential Ensembles Tolerant to Synthetic Aperture Radar (SAR) Soil Moisture Retrieval Errors

    Directory of Open Access Journals (Sweden)

    Ju Hyoung Lee

    2016-04-01

    Full Text Available Due to complicated and undefined systematic errors in satellite observation, data assimilation integrating model states with satellite observations is more complicated than field-measurement-based data assimilation at a local scale. In the case of Synthetic Aperture Radar (SAR) soil moisture, the systematic errors arising from uncertainties in roughness conditions are significant and unavoidable, but current satellite bias correction methods do not resolve the problems very well. Thus, apart from the bias correction process of satellite observation, it is important to assess the inherent capability of satellite data assimilation under such sub-optimal but more realistic observational error conditions. To this end, the time-evolving sequential ensembles of the Ensemble Kalman Filter (EnKF) are compared with the stationary ensemble of the Ensemble Optimal Interpolation (EnOI) scheme, which does not evolve the ensembles over time. As the sensitivity analysis demonstrated that the SAR retrievals are more sensitive to surface roughness than to measurement errors, it is within the scope of this study to monitor how data assimilation alters the effects of roughness on SAR soil moisture retrievals. In the results, both data assimilation schemes provided intermediate values between the SAR overestimation and the model underestimation. However, under the same SAR observational error conditions, the sequential ensembles approached a calibrated model showing the lowest Root Mean Square Error (RMSE), while the stationary ensemble converged towards the SAR observations exhibiting the highest RMSE. Compared to stationary ensembles, sequential ensembles have a better tolerance to SAR retrieval errors. Such an inherent property of the EnKF suggests an operational merit as a satellite data assimilation system, given the limitations of currently available bias correction methods.
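
    A single analysis step of the sequential (EnKF) approach can be sketched for a scalar soil-moisture state as below; the ensemble, the SAR-like observation and its error variance are synthetic assumptions intended only to show how the ensemble spread controls the weight given to a biased retrieval.

        # Hedged sketch: one perturbed-observation EnKF update for a scalar soil-moisture state.
        import numpy as np

        rng = np.random.default_rng(0)
        ensemble = rng.normal(loc=0.20, scale=0.03, size=50)   # model soil moisture [m3/m3]
        obs = 0.30                                             # SAR-like retrieval (overestimating)
        obs_error_var = 0.02**2                                # assumed observation error variance

        bg_var = np.var(ensemble, ddof=1)                      # ensemble (background) variance
        gain = bg_var / (bg_var + obs_error_var)               # scalar Kalman gain

        perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_error_var), size=ensemble.size)
        analysis = ensemble + gain * (perturbed_obs - ensemble)

        print(f"prior mean {ensemble.mean():.3f} -> analysis mean {analysis.mean():.3f}, gain {gain:.2f}")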

  5. Estimation of 3D reconstruction errors in a stereo-vision system

    Science.gov (United States)

    Belhaoua, A.; Kohler, S.; Hirsch, E.

    2009-06-01

    The paper presents an approach for error estimation for the various steps of an automated 3D vision-based reconstruction procedure for manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes sequentially performing data acquisition, its quantitative evaluation and comparison with a reference model (e.g., a CAD object model) in order to evaluate the object quantitatively. To ensure efficient quality control, the aim is to be able to state whether reconstruction results fulfill tolerance rules or not. Thus, the goal is to evaluate independently the error of each step of the stereo-vision based 3D reconstruction (e.g., calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we analyze particularly the segmentation error due to localization errors of extracted edge points supposed to belong to the lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as quality measures to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, enabling evaluation of their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables evaluation of the quality of the 3D reconstruction, as illustrated by the experimental results shown.
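
    The idea of using fit residuals as a segmentation-error estimate can be sketched for a straight edge as below; the edge points, noise level and confidence-interval formula are simplified assumptions, not the paper's full error-propagation scheme.

        # Hedged sketch: estimate edge-localization error from scatter about a fitted line.
        import numpy as np

        rng = np.random.default_rng(1)
        x = np.linspace(0, 100, 200)                       # pixel coordinates along the edge
        y = 0.5 * x + 10.0 + rng.normal(0.0, 0.4, x.size)  # extracted edge points, sigma = 0.4 px

        coeffs, residuals, *_ = np.polyfit(x, y, deg=1, full=True)
        sigma = np.sqrt(residuals[0] / (x.size - 2))       # residual std = localization error
        half_width = 1.96 * sigma / np.sqrt(x.size)        # ~95% CI half-width on the mean fit

        print(f"fitted slope {coeffs[0]:.3f}, localization error {sigma:.2f} px, "
              f"95% CI half-width {half_width:.3f} px")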

  6. Restructuring of workflows to minimise errors via stochastic model checking: An automated evolutionary approach

    International Nuclear Information System (INIS)

    Herbert, L.T.; Hansen, Z.N.L.

    2016-01-01

    This paper presents a framework for the automated restructuring of stochastic workflows to reduce the impact of faults. The framework allows for the modelling of workflows by means of a formalised subset of the BPMN workflow language. We extend this modelling formalism to describe faults and incorporate an intention preserving stochastic semantics able to model both probabilistic- and non-deterministic behaviour. Stochastic model checking techniques are employed to generate the state-space of a given workflow. Possible improvements obtained by restructuring are measured by employing the framework's capacity for tracking real-valued quantities associated with states and transitions of the workflow. The space of possible restructurings of a workflow is explored by means of an evolutionary algorithm, where the goals for improvement are defined in terms of optimising quantities, typically employed to model resources, associated with a workflow. The approach is fully automated and only the modelling of the production workflows, potential faults and the expression of the goals require manual input. We present the design of a software tool implementing this framework and explore the practical utility of this approach through an industrial case study in which the risk of production failures and their impact are reduced by restructuring the workflow. - Highlights: • We present a framework which allows for the automated restructuring of workflows. • This framework seeks to minimise the impact of errors on the workflow. • We illustrate a scalable software implementation of this framework. • We explore the practical utility of this approach through an industry case. • The impact of errors can be substantially reduced by restructuring the workflow.
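
    The evolutionary search over restructured workflows can be illustrated with a deliberately tiny genetic loop, as in the hedged sketch below; the binary genome (whether to add an inspection step after each task) and the cost function are made-up stand-ins for the quantities a stochastic model checker would evaluate.

        # Hedged sketch: a minimal evolutionary search over workflow variants.
        import random

        N_TASKS = 8
        FAULT_P = [0.02, 0.10, 0.05, 0.20, 0.01, 0.15, 0.08, 0.03]  # per-task fault probability
        REWORK_COST, INSPECT_COST = 100.0, 5.0

        def expected_cost(genome):
            # inspecting after a task catches its fault early, avoiding expensive rework
            return sum(INSPECT_COST if inspect else p * REWORK_COST
                       for p, inspect in zip(FAULT_P, genome))

        def evolve(pop_size=20, generations=50, mutation_rate=0.1):
            pop = [[random.randint(0, 1) for _ in range(N_TASKS)] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=expected_cost)
                survivors = pop[: pop_size // 2]                 # keep the cheapest variants
                children = [[1 - g if random.random() < mutation_rate else g for g in parent]
                            for parent in survivors]             # mutate to create offspring
                pop = survivors + children
            return min(pop, key=expected_cost)

        best = evolve()
        print("best inspection placement:", best, "expected cost:", expected_cost(best))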

  7. Clustering procedures for the optimal selection of data sets from multiple crystals in macromolecular crystallography

    Science.gov (United States)

    Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L.; Armour, Wes; Waterman, David G.; Iwata, So; Evans, Gwyndaf

    2013-01-01

    The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein. PMID:23897484

  8. Rapid automated superposition of shapes and macromolecular models using spherical harmonics.

    Science.gov (United States)

    Konarev, Petr V; Petoukhov, Maxim V; Svergun, Dmitri I

    2016-06-01

    A rapid algorithm to superimpose macromolecular models in Fourier space is proposed and implemented (SUPALM). The method uses a normalized integrated cross-term of the scattering amplitudes as a proximity measure between two three-dimensional objects. The reciprocal-space algorithm allows for direct matching of heterogeneous objects including high- and low-resolution models represented by atomic coordinates, beads or dummy residue chains as well as electron microscopy density maps and inhomogeneous multi-phase models (e.g. of protein–nucleic acid complexes). Using spherical harmonics for the computation of the amplitudes, the method is up to an order of magnitude faster than the real-space algorithm implemented in SUPCOMB by Kozin & Svergun [J. Appl. Cryst. (2001), 34, 33-41]. The utility of the new method is demonstrated in a number of test cases and compared with the results of SUPCOMB. The spherical harmonics algorithm is best suited for low-resolution shape models, e.g. those provided by solution scattering experiments, but also facilitates a rapid cross-validation against structural models obtained by other methods.

  9. Macromolecular crystallization in microgravity

    International Nuclear Information System (INIS)

    Snell, Edward H; Helliwell, John R

    2005-01-01

    Density difference fluid flows and sedimentation of growing crystals are greatly reduced when crystallization takes place in a reduced gravity environment. In the case of macromolecular crystallography a crystal of a biological macromolecule is used for diffraction experiments (x-ray or neutron) so as to determine the three-dimensional structure of the macromolecule. The better the internal order of the crystal then the greater the molecular structure detail that can be extracted. It is this structural information that enables an understanding of how the molecule functions. This knowledge is changing the biological and chemical sciences, with major potential in understanding disease pathologies. In this review, we examine the use of microgravity as an environment to grow macromolecular crystals. We describe the crystallization procedures used on the ground, how the resulting crystals are studied and the knowledge obtained from those crystals. We address the features desired in an ordered crystal and the techniques used to evaluate those features in detail. We then introduce the microgravity environment, the techniques to access that environment and the theory and evidence behind the use of microgravity for crystallization experiments. We describe how ground-based laboratory techniques have been adapted to microgravity flights and look at some of the methods used to analyse the resulting data. Several case studies illustrate the physical crystal quality improvements and the macromolecular structural advances. Finally, limitations and alternatives to microgravity and future directions for this research are covered. Macromolecular structural crystallography in general is a remarkable field where physics, biology, chemistry and mathematics meet to enable insight to the fundamentals of life. As the reader will see, there is a great deal of physics involved when the microgravity environment is applied to crystallization, some of it known, and undoubtedly much yet to

  10. Macromolecular contrast agents for MR mammography: current status

    International Nuclear Information System (INIS)

    Daldrup-Link, Heike E.; Brasch, Robert C.

    2003-01-01

    Macromolecular contrast media (MMCM) encompass a new class of diagnostic drugs that can be applied with dynamic MRI to extract both physiologic and morphologic information in breast lesions. Kinetic analysis of dynamic MMCM-enhanced MR data in breast tumor patients provides useful estimates of tumor blood volume and microvascular permeability, typically increased in cancer. These tumor characteristics can be applied to differentiate benign from malignant lesions, to define the angiogenesis status of cancers, and to monitor tumor response to therapy. The most immediate challenge to the development of MMCM-enhanced mammography is the identification of those candidate compounds that demonstrate the requisite long intravascular distribution and have the high tolerance necessary for clinical use. Potential mammographic applications and limitations of various MMCM, defined by either experimental animal testing or clinical testing in patients, are reviewed in this article. (orig.)

  11. Effects of a direct refill program for automated dispensing cabinets on medication-refill errors.

    Science.gov (United States)

    Helmons, Pieter J; Dalton, Ashley J; Daniels, Charles E

    2012-10-01

    The effects of a direct refill program for automated dispensing cabinets (ADCs) on medication-refill errors were studied. This study was conducted in designated acute care areas of a 386-bed academic medical center. A wholesaler-to-ADC direct refill program, consisting of prepackaged delivery of medications and bar-code-assisted ADC refilling, was implemented in the inpatient pharmacy of the medical center in September 2009. Medication-refill errors in 26 ADCs from the general medicine units, the infant special care unit, the surgical and burn intensive care units, and intermediate units were assessed before and after the implementation of this program. Medication-refill errors were defined as an ADC pocket containing the wrong drug, wrong strength, or wrong dosage form. ADC refill errors decreased by 77%, from 62 errors per 6829 refilled pockets (0.91%) to 8 errors per 3855 refilled pockets (0.21%). The most common error type detected before the intervention was an incorrect medication (wrong drug, wrong strength, or wrong dosage form) in the ADC pocket. Of the 54 incorrect medications found before the intervention, 38 (70%) were loaded in a multiple-drug drawer. After the implementation of the new refill process, 3 of the 5 incorrect medications were loaded in a multiple-drug drawer. There were 3 instances of expired medications before and only 1 expired medication after implementation of the program. A redesign of the ADC refill process using a wholesaler-to-ADC direct refill program that included delivery of prepackaged medication and bar-code-assisted refill significantly decreased the occurrence of ADC refill errors.

  12. Automated Peak Picking and Peak Integration in Macromolecular NMR Spectra Using AUTOPSY

    Science.gov (United States)

    Koradi, Reto; Billeter, Martin; Engeli, Max; Güntert, Peter; Wüthrich, Kurt

    1998-12-01

    A new approach for automated peak picking of multidimensional protein NMR spectra with strong overlap is introduced, which makes use of the program AUTOPSY (automated peak picking for NMR spectroscopy). The main elements of this program are a novel function for local noise level calculation, the use of symmetry considerations, and the use of lineshapes extracted from well-separated peaks for resolving groups of strongly overlapping peaks. The algorithm generates peak lists with precise chemical shifts and integral intensities, and a reliability measure for the recognition of each peak. The results of automated peak picking of NOESY spectra with AUTOPSY were tested in combination with the combined automated NOESY cross peak assignment and structure calculation routine NOAH implemented in the program DYANA. The quality of the resulting structures was found to be comparable with that obtained from corresponding data with manual peak picking.
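
    The role of a local noise estimate in peak picking can be shown on a synthetic 1D trace, as in the hedged sketch below; the sliding-window median-absolute-deviation noise model and the 5-sigma threshold are simplifications and not the actual AUTOPSY algorithm.

        # Hedged sketch: pick peaks above a locally estimated noise level on a synthetic trace.
        import numpy as np
        from scipy.signal import find_peaks

        rng = np.random.default_rng(2)
        x = np.linspace(0, 10, 2000)
        spectrum = (np.exp(-((x - 3.0) / 0.05) ** 2) +
                    0.6 * np.exp(-((x - 6.5) / 0.05) ** 2) +
                    rng.normal(0.0, 0.03, x.size))

        # local noise level: robust std in a sliding window (median absolute deviation)
        win = 200
        noise = np.array([
            1.4826 * np.median(np.abs(spectrum[max(0, i - win):i + win] -
                                      np.median(spectrum[max(0, i - win):i + win])))
            for i in range(spectrum.size)
        ])

        peaks, _ = find_peaks(spectrum, height=5.0 * noise)   # threshold = 5 x local noise
        print("picked peak positions:", x[peaks])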

  13. Macromolecular Crystallization in Microfluidics for the International Space Station

    Science.gov (United States)

    Monaco, Lisa A.; Spearing, Scott

    2003-01-01

    At NASA's Marshall Space Flight Center, the Iterative Biological Crystallization (IBC) project has begun development on scientific hardware for macromolecular crystallization on the International Space Station (ISS). Currently ISS crystallization research is limited to solution recipes that were prepared on the ground prior to launch. The proposed hardware will conduct solution mixing and dispensing on board the ISS, be fully automated, and have imaging functions via remote commanding from the ground. Utilizing microfluidic technology, IBC will allow for on orbit iterations. The microfluidics LabChip(R) devices that have been developed, along with Caliper Technologies, will greatly benefit researchers by allowing for precise fluid handling of nano/pico liter sized volumes. IBC will maximize the amount of science return by utilizing the microfluidic approach and be a valuable tool to structural biologists investigating medically relevant projects.

  14. Sequential recovery of macromolecular components of the nucleolus.

    Science.gov (United States)

    Bai, Baoyan; Laiho, Marikki

    2015-01-01

    The nucleolus is involved in a number of cellular processes of importance to cell physiology and pathology, including cell stress responses and malignancies. Studies of macromolecular composition of the nucleolus depend critically on the efficient extraction and accurate quantification of all macromolecular components (e.g., DNA, RNA, and protein). We have developed a TRIzol-based method that efficiently and simultaneously isolates these three macromolecular constituents from the same sample of purified nucleoli. The recovered and solubilized protein can be accurately quantified by the bicinchoninic acid assay and assessed by polyacrylamide gel electrophoresis or by mass spectrometry. We have successfully applied this approach to extract and quantify the responses of all three macromolecular components in nucleoli after drug treatments of HeLa cells, and conducted RNA-Seq analysis of the nucleolar RNA.

  15. Macromolecular Crystal Growth by Means of Microfluidics

    Science.gov (United States)

    vanderWoerd, Mark; Ferree, Darren; Spearing, Scott; Monaco, Lisa; Molho, Josh; Spaid, Michael; Brasseur, Mike; Curreri, Peter A. (Technical Monitor)

    2002-01-01

    We have performed a feasibility study in which we show that chip-based, microfluidic (LabChip(TM)) technology is suitable for protein crystal growth. This technology allows for accurate and reliable dispensing and mixing of very small volumes while minimizing bubble formation in the crystallization mixture. The amount of (protein) solution remaining after completion of an experiment is minimal, which makes this technique efficient and attractive for use with proteins, which are difficult or expensive to obtain. The nature of LabChip(TM) technology renders it highly amenable to automation. Protein crystals obtained in our initial feasibility studies were of excellent quality as determined by X-ray diffraction. Subsequent to the feasibility study, we designed and produced the first LabChip(TM) device specifically for protein crystallization in batch mode. It can reliably dispense and mix from a range of solution constituents into two independent growth wells. We are currently testing this design to prove its efficacy for protein crystallization optimization experiments. In the near future we will expand our design to incorporate up to 10 growth wells per LabChip(TM) device. Upon completion, additional crystallization techniques such as vapor diffusion and liquid-liquid diffusion will be accommodated. Macromolecular crystallization using microfluidic technology is envisioned as a fully automated system, which will use the 'tele-science' concept of remote operation and will be developed into a research facility for the International Space Station as well as on the ground.

  16. The use of automatic programming techniques for fault tolerant computing systems

    Science.gov (United States)

    Wild, C.

    1985-01-01

    It is conjectured that the production of software for ultra-reliable computing systems such as required by Space Station, aircraft, nuclear power plants and the like will require a high degree of automation as well as fault tolerance. In this paper, the relationship between automatic programming techniques and fault tolerant computing systems is explored. Initial efforts in the automatic synthesis of code from assertions to be used for error detection as well as the automatic generation of assertions and test cases from abstract data type specifications is outlined. Speculation on the ability to generate truly diverse designs capable of recovery from errors by exploring alternate paths in the program synthesis tree is discussed. Some initial thoughts on the use of knowledge based systems for the global detection of abnormal behavior using expectations and the goal-directed reconfiguration of resources to meet critical mission objectives are given. One of the sources of information for these systems would be the knowledge captured during the automatic programming process.

  17. A CMOS high resolution, process/temperature variation tolerant RSSI for WIA-PA transceiver

    International Nuclear Information System (INIS)

    Yang Tao; Jiang Yu; Li Jie; Guo Jiangfei; Chen Hua; Han Jingyu; Guo Guiliang; Yan Yuepeng

    2015-01-01

    This paper presents a high resolution, process/temperature variation tolerant received signal strength indicator (RSSI) for a wireless networks for industrial automation-process automation (WIA-PA) transceiver, fabricated in 0.18 μm CMOS technology. The active area of the RSSI is 0.24 mm². Measurement results show that the proposed RSSI has a dynamic range of more than 70 dB and a linearity error within ±0.5 dB for an input power from −70 to 0 dBm (dBm into 50 Ω); the corresponding output voltage ranges from 0.81 to 1.657 V and the RSSI slope is 12.1 mV/dB, while consuming a total of 2 mA from a 1.8 V power supply. Furthermore, with the help of the integrated compensation circuit, the proposed RSSI shows a temperature error within ±1.5 dB from −40 to 85 °C and a process variation error within ±0.25 dB, exhibiting good temperature independence and excellent robustness against process variation. (paper)
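
    The quoted slope and linearity-error figures can be derived from a measured transfer curve by a straight-line fit, as in the hedged sketch below; the input-power/output-voltage points are synthetic values loosely following the numbers in the abstract, not measured data.

        # Hedged sketch: RSSI slope (mV/dB) and linearity error from a linear fit.
        import numpy as np

        power_dbm = np.arange(-70, 1, 10)                    # input power sweep, -70 to 0 dBm
        vout_v = 0.81 + 0.0121 * (power_dbm + 70)            # ideal 12.1 mV/dB response
        vout_v = vout_v + np.array([0.003, -0.002, 0.004, -0.003, 0.002, -0.004, 0.001, 0.0])

        slope_v_per_db, intercept = np.polyfit(power_dbm, vout_v, 1)
        fit = slope_v_per_db * power_dbm + intercept
        linearity_error_db = (vout_v - fit) / slope_v_per_db  # deviation expressed in dB

        print(f"slope = {slope_v_per_db * 1e3:.1f} mV/dB, "
              f"max linearity error = {np.max(np.abs(linearity_error_db)):.2f} dB")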

  18. Designing and evaluating an automated system for real-time medication administration error detection in a neonatal intensive care unit.

    Science.gov (United States)

    Ni, Yizhao; Lingren, Todd; Hall, Eric S; Leonard, Matthew; Melton, Kristin; Kirkendall, Eric S

    2018-05-01

    Timely identification of medication administration errors (MAEs) promises great benefits for mitigating medication errors and associated harm. Despite previous efforts utilizing computerized methods to monitor medication errors, sustaining effective and accurate detection of MAEs remains challenging. In this study, we developed a real-time MAE detection system and evaluated its performance prior to system integration into institutional workflows. Our prospective observational study included automated MAE detection of 10 high-risk medications and fluids for patients admitted to the neonatal intensive care unit at Cincinnati Children's Hospital Medical Center during a 4-month period. The automated system extracted real-time medication use information from the institutional electronic health records and identified MAEs using logic-based rules and natural language processing techniques. The MAE summary was delivered via a real-time messaging platform to promote reduction of patient exposure to potential harm. System performance was validated using a physician-generated gold standard of MAE events, and results were compared with those of current practice (incident reporting and trigger tools). Physicians identified 116 MAEs from 10 104 medication administrations during the study period. Compared to current practice, the sensitivity of automated MAE detection was improved significantly, from 4.3% to 85.3% (P = .009), with a positive predictive value of 78.0%. Furthermore, the system showed the potential to shorten the duration of patient exposure to potential harm following MAE events, from 256 min to 35 min.
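
    The evaluation metrics used above (sensitivity against a physician gold standard and positive predictive value) reduce to simple set operations on event identifiers, as in the hedged sketch below; the event sets are hypothetical and merely chosen to land near the reported percentages.

        # Hedged sketch: sensitivity and PPV of detected MAEs against a gold standard.
        def sensitivity_ppv(detected, gold_standard):
            detected, gold_standard = set(detected), set(gold_standard)
            true_pos = len(detected & gold_standard)
            sensitivity = true_pos / len(gold_standard) if gold_standard else 0.0
            ppv = true_pos / len(detected) if detected else 0.0
            return sensitivity, ppv

        gold = {f"mae_{i}" for i in range(116)}                               # physician-identified MAEs
        system_hits = {f"mae_{i}" for i in range(99)} | {f"fp_{i}" for i in range(28)}
        sens, ppv = sensitivity_ppv(system_hits, gold)
        print(f"sensitivity = {sens:.1%}, PPV = {ppv:.1%}")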

  19. Automated main-chain model building by template matching and iterative fragment extension

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.

    2003-01-01

    A method for automated macromolecular main-chain model building is described. An algorithm for the automated macromolecular model building of polypeptide backbones is described. The procedure is hierarchical. In the initial stages, many overlapping polypeptide fragments are built. In subsequent stages, the fragments are extended and then connected. Identification of the locations of helical and β-strand regions is carried out by FFT-based template matching. Fragment libraries of helices and β-strands from refined protein structures are then positioned at the potential locations of helices and strands and the longest segments that fit the electron-density map are chosen. The helices and strands are then extended using fragment libraries consisting of sequences three amino acids long derived from refined protein structures. The resulting segments of polypeptide chain are then connected by choosing those which overlap at two or more Cα positions. The fully automated procedure has been implemented in RESOLVE and is capable of model building at resolutions as low as 3.5 Å. The algorithm is useful for building a preliminary main-chain model that can serve as a basis for refinement and side-chain addition
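
    The FFT-based template matching step can be illustrated in one dimension with a fast cross-correlation, as in the hedged sketch below; the sinusoidal "helix" template and noisy density trace are synthetic stand-ins for real electron-density maps.

        # Hedged sketch: locate a template in a density trace by FFT-based cross-correlation.
        import numpy as np
        from scipy.signal import fftconvolve

        rng = np.random.default_rng(3)
        template = np.sin(np.linspace(0, 4 * np.pi, 40))   # stand-in for a helix density template
        density = rng.normal(0.0, 0.2, 500)
        density[200:240] += template                        # bury the motif at position 200

        # cross-correlation = convolution with the reversed template, computed via FFT
        scores = fftconvolve(density, template[::-1], mode="valid")
        best = int(np.argmax(scores))
        print("best match starts at index", best)           # expected near 200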

  20. Accurate recapture identification for genetic mark–recapture studies with error-tolerant likelihood-based match calling and sample clustering

    Science.gov (United States)

    Sethi, Suresh; Linden, Daniel; Wenburg, John; Lewis, Cara; Lemons, Patrick R.; Fuller, Angela K.; Hare, Matthew P.

    2016-01-01

    Error-tolerant likelihood-based match calling presents a promising technique to accurately identify recapture events in genetic mark–recapture studies by combining probabilities of latent genotypes and probabilities of observed genotypes, which may contain genotyping errors. Combined with clustering algorithms to group samples into sets of recaptures based upon pairwise match calls, these tools can be used to reconstruct accurate capture histories for mark–recapture modelling. Here, we assess the performance of a recently introduced error-tolerant likelihood-based match-calling model and sample clustering algorithm for genetic mark–recapture studies. We assessed both biallelic (i.e. single nucleotide polymorphisms; SNP) and multiallelic (i.e. microsatellite; MSAT) markers using a combination of simulation analyses and case study data on Pacific walrus (Odobenus rosmarus divergens) and fishers (Pekania pennanti). A novel two-stage clustering approach is demonstrated for genetic mark–recapture applications. First, repeat captures within a sampling occasion are identified. Subsequently, recaptures across sampling occasions are identified. The likelihood-based matching protocol performed well in simulation trials, demonstrating utility for use in a wide range of genetic mark–recapture studies. Moderately sized SNP (64+) and MSAT (10–15) panels produced accurate match calls for recaptures and accurate non-match calls for samples from closely related individuals in the face of low to moderate genotyping error. Furthermore, matching performance remained stable or increased as the number of genetic markers increased, genotyping error notwithstanding.
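
    A toy version of error-tolerant match scoring can be written as a per-locus log-likelihood ratio, as in the hedged sketch below; the error model, allele-sharing probabilities and genotype coding are deliberate simplifications for illustration and are not the published model.

        # Hedged sketch: score two multilocus genotypes for "same individual" while
        # tolerating genotyping error; all probabilities below are illustrative assumptions.
        import math

        ERROR_RATE = 0.02   # assumed per-locus probability of observing a wrong genotype

        def match_log_likelihood_ratio(geno1, geno2, chance_match):
            """Sum over loci of log[P(pair | same individual) / P(pair | different individuals)]."""
            llr = 0.0
            for g1, g2, p_chance in zip(geno1, geno2, chance_match):
                if g1 == g2:
                    p_same = (1 - ERROR_RATE) ** 2 + ERROR_RATE ** 2   # simplification: errors agree
                    p_diff = p_chance                                  # unrelated samples match by chance
                else:
                    p_same = 2 * ERROR_RATE * (1 - ERROR_RATE)         # one of the two reads is wrong
                    p_diff = 1 - p_chance
                llr += math.log(p_same / p_diff)
            return llr

        sample_a = [0, 1, 2]                 # genotypes at three SNP loci, coded 0/1/2
        sample_b = [0, 1, 1]                 # differs at one locus (possible genotyping error)
        chance_match = [0.40, 0.35, 0.30]    # assumed per-locus chance-match probabilities
        print("log-likelihood ratio:", round(match_log_likelihood_ratio(sample_a, sample_b, chance_match), 2))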

  1. Setup accuracy of stereoscopic X-ray positioning with automated correction for rotational errors in patients treated with conformal arc radiotherapy for prostate cancer

    International Nuclear Information System (INIS)

    Soete, Guy; Verellen, Dirk; Tournel, Koen; Storme, Guy

    2006-01-01

    We evaluated setup accuracy of NovalisBody stereoscopic X-ray positioning with automated correction for rotational errors with the Robotics Tilt Module in patients treated with conformal arc radiotherapy for prostate cancer. The correction of rotational errors was shown to reduce random and systematic errors in all directions. (NovalisBody™ and Robotics Tilt Module™ are products of BrainLAB A.G., Heimstetten, Germany)

  2. About Small Streams and Shiny Rocks: Macromolecular Crystal Growth in Microfluidics

    Science.gov (United States)

    vanderWoerd, Mark; Ferree, Darren; Spearing, Scott; Monaco, Lisa; Molho, Josh; Spaid, Michael; Brasseur, Mike; Curreri, Peter A. (Technical Monitor)

    2002-01-01

    We are developing a novel technique with which we have grown diffraction quality protein crystals in very small volumes, utilizing chip-based, microfluidic ("LabChip") technology. With this technology volumes smaller than achievable with any laboratory pipette can be dispensed with high accuracy. We have performed a feasibility study in which we crystallized several proteins with the aid of a LabChip device. The protein crystals are of excellent quality as shown by X-ray diffraction. The advantages of this new technology include improved accuracy of dispensing for small volumes, complete mixing of solution constituents without bubble formation, highly repeatable recipe and growth condition replication, and easy automation of the method. We have designed a first LabChip device specifically for protein crystallization in batch mode and can reliably dispense and mix from a range of solution constituents. We are currently testing this design. Upon completion additional crystallization techniques, such as vapor diffusion and liquid-liquid diffusion will be accommodated. Macromolecular crystallization using microfluidic technology is envisioned as a fully automated system, which will use the 'tele-science' concept of remote operation and will be developed into a research facility aboard the International Space Station.

  3. The design of macromolecular crystallography diffraction experiments

    International Nuclear Information System (INIS)

    Evans, Gwyndaf; Axford, Danny; Owen, Robin L.

    2011-01-01

    Thoughts about the decisions made in designing macromolecular X-ray crystallography experiments at synchrotron beamlines are presented. The measurement of X-ray diffraction data from macromolecular crystals for the purpose of structure determination is the convergence of two processes: the preparation of diffraction-quality crystal samples on the one hand and the construction and optimization of an X-ray beamline and end station on the other. Like sample preparation, a macromolecular crystallography beamline is geared to obtaining the best possible diffraction measurements from crystals provided by the synchrotron user. This paper describes the thoughts behind an experiment that fully exploits both the sample and the beamline and how these map into everyday decisions that users can and should make when visiting a beamline with their most precious crystals

  4. Efficient analysis of macromolecular rotational diffusion from heteronuclear relaxation data

    International Nuclear Information System (INIS)

    Dosset, Patrice; Hus, Jean-Christophe; Blackledge, Martin; Marion, Dominique

    2000-01-01

    A novel program has been developed for the interpretation of 15N relaxation rates in terms of macromolecular anisotropic rotational diffusion. The program is based on a highly efficient simulated annealing/minimization algorithm, designed specifically to search the parametric space described by the isotropic, axially symmetric and fully anisotropic rotational diffusion tensor models. The high efficiency of this algorithm allows extensive noise-based Monte Carlo error analysis. Relevant statistical tests are systematically applied to provide confidence limits for the proposed tensorial models. The program is illustrated here using the example of the cytochrome c' from Rhodobacter capsulatus, a four-helix bundle heme protein, for which data at three different field strengths were independently analysed and compared
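
    To make the fitting strategy concrete, here is a generic simulated-annealing minimizer with noise-based Monte Carlo error analysis applied to a toy two-parameter objective; it sketches the general approach only, not the program described above, and the "relaxation-like" model, field strengths, step sizes and noise level are invented placeholders.

```python
import numpy as np

def simulated_annealing(objective, x0, step, n_steps=5000, t0=10.0, seed=0):
    """Generic simulated-annealing minimizer: random perturbations with
    per-parameter scale 'step' are accepted by a Metropolis criterion at a
    geometrically decaying temperature (toy version)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = objective(x)
    best, fbest = x.copy(), fx
    for i in range(n_steps):
        temp = t0 * 0.999 ** i
        cand = x + rng.normal(size=x.shape) * step
        fc = objective(cand)
        if fc < fx or rng.random() < np.exp((fx - fc) / max(temp, 1e-9)):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = cand.copy(), fc
    return best

# Toy two-parameter target (an invented "relaxation-like" curve at three field
# strengths), refitted on noise-perturbed data for Monte Carlo error estimates.
fields = np.array([500.0, 600.0, 800.0])
truth = np.array([2.0, 0.004])

def model(p):
    return p[0] * fields / (1.0 + (p[1] * fields) ** 2)

data = model(truth)
rng = np.random.default_rng(1)
fits = []
for k in range(100):                       # noise-based Monte Carlo error analysis
    noisy = data + rng.normal(scale=1.0, size=data.shape)
    objective = lambda p, y=noisy: np.sum((model(p) - y) ** 2)
    fits.append(simulated_annealing(objective, [1.0, 0.01], step=[0.05, 0.0002], seed=k))
print(np.mean(fits, axis=0), np.std(fits, axis=0))   # estimates and spread
```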

  5. Errors in macromolecular synthesis after stress. A study of the possible protective role of the small heat shock proteins

    NARCIS (Netherlands)

    Marin Vinader, L.

    2006-01-01

    The general goal of this thesis was to gain insight in what small heat shock proteins (sHsps) do with respect to macromolecular synthesis during a stressful situation in the cell. It is known that after a non-lethal heat shock, cells are better protected against a subsequent more severe heat shock,

  6. Automated MAD and MIR structure solution

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.; Berendzen, Joel

    1999-01-01

    A fully automated procedure for solving MIR and MAD structures has been developed using a scoring scheme to convert the structure-solution process into an optimization problem. Obtaining an electron-density map from X-ray diffraction data can be difficult and time-consuming even after the data have been collected, largely because MIR and MAD structure determinations currently require many subjective evaluations of the qualities of trial heavy-atom partial structures before a correct heavy-atom solution is obtained. A set of criteria for evaluating the quality of heavy-atom partial solutions in macromolecular crystallography has been developed. These have allowed the conversion of the crystal structure-solution process into an optimization problem and have allowed its automation. The SOLVE software has been used to solve MAD data sets with as many as 52 selenium sites in the asymmetric unit. The automated structure-solution process developed is a major step towards the fully automated structure-determination, model-building and refinement procedure which is needed for genomic-scale structure determinations

  7. Finding beam focus errors automatically

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.

    1987-01-01

    An automated method for finding beam focus errors using an optimization program called COMFORT-PLUS is described. The procedure has been used to find the beam focus errors for two damping rings at the SLAC Linear Collider. The program is intended as an off-line tool to analyze actual measured data for any SLC system. One limitation on the application of this procedure is that it depends on the magnitude of the machine errors. Another is that the program is not totally automated, since the user must decide a priori where to look for errors

  8. FPGAs and parallel architectures for aerospace applications soft errors and fault-tolerant design

    CERN Document Server

    Rech, Paolo

    2016-01-01

    This book introduces the concepts of soft errors in FPGAs, as well as the motivation for using commercial, off-the-shelf (COTS) FPGAs in mission-critical and remote applications, such as aerospace. The authors describe the effects of radiation in FPGAs, present a large set of soft-error mitigation techniques that can be applied in these circuits, as well as methods for qualifying these circuits under radiation. Coverage includes radiation effects in FPGAs, fault-tolerant techniques for FPGAs, use of COTS FPGAs in aerospace applications, experimental data of FPGAs under radiation, FPGA embedded processors under radiation, and fault injection in FPGAs. Since dedicated parallel processing architectures such as GPUs have become more desirable in aerospace applications due to high computational power, GPU analysis under radiation is also discussed. · Discusses features and drawbacks of reconfigurability methods for FPGAs, focused on aerospace applications; · Explains how radia...

  9. Soft-error tolerance and energy consumption evaluation of embedded computer with magnetic random access memory in practical systems using computer simulations

    Science.gov (United States)

    Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko

    2017-08-01

    We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to its main memory SEU error. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.

  10. Macromolecular crystallography beamline X25 at the NSLS

    Energy Technology Data Exchange (ETDEWEB)

    Héroux, Annie; Allaire, Marc; Buono, Richard; Cowan, Matthew L.; Dvorak, Joseph; Flaks, Leon; LaMarra, Steven; Myers, Stuart F.; Orville, Allen M.; Robinson, Howard H.; Roessler, Christian G.; Schneider, Dieter K.; Shea-McCarthy, Grace; Skinner, John M.; Skinner, Michael; Soares, Alexei S.; Sweet, Robert M.; Berman, Lonny E., E-mail: berman@bnl.gov [Brookhaven National Laboratory, PO Box 5000, Upton, NY 11973-5000 (United States)

    2014-04-08

    A description of the upgraded beamline X25 at the NSLS, operated by the PXRR and the Photon Sciences Directorate serving the Macromolecular Crystallography community, is presented. Beamline X25 at the NSLS is one of the five beamlines dedicated to macromolecular crystallography operated by the Brookhaven National Laboratory Macromolecular Crystallography Research Resource group. This mini-gap insertion-device beamline has seen constant upgrades for the last seven years in order to achieve mini-beam capability down to 20 µm × 20 µm. All major components beginning with the radiation source, and continuing along the beamline and its experimental hutch, have changed to produce a state-of-the-art facility for the scientific community.

  11. Macromolecular crystallography beamline X25 at the NSLS

    International Nuclear Information System (INIS)

    Héroux, Annie; Allaire, Marc; Buono, Richard; Cowan, Matthew L.; Dvorak, Joseph; Flaks, Leon; LaMarra, Steven; Myers, Stuart F.; Orville, Allen M.; Robinson, Howard H.; Roessler, Christian G.; Schneider, Dieter K.; Shea-McCarthy, Grace; Skinner, John M.; Skinner, Michael; Soares, Alexei S.; Sweet, Robert M.; Berman, Lonny E.

    2014-01-01

    A description of the upgraded beamline X25 at the NSLS, operated by the PXRR and the Photon Sciences Directorate serving the Macromolecular Crystallography community, is presented. Beamline X25 at the NSLS is one of the five beamlines dedicated to macromolecular crystallography operated by the Brookhaven National Laboratory Macromolecular Crystallography Research Resource group. This mini-gap insertion-device beamline has seen constant upgrades for the last seven years in order to achieve mini-beam capability down to 20 µm × 20 µm. All major components beginning with the radiation source, and continuing along the beamline and its experimental hutch, have changed to produce a state-of-the-art facility for the scientific community

  12. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  13. Automated structure solution, density modification and model building.

    Science.gov (United States)

    Terwilliger, Thomas C

    2002-11-01

    The approaches that form the basis of automated structure solution in SOLVE and RESOLVE are described. The use of a scoring scheme to convert decision making in macromolecular structure solution to an optimization problem has proven very useful and in many cases a single clear heavy-atom solution can be obtained and used for phasing. Statistical density modification is well suited to an automated approach to structure solution because the method is relatively insensitive to choices of numbers of cycles and solvent content. The detection of non-crystallographic symmetry (NCS) in heavy-atom sites and checking of potential NCS operations against the electron-density map has proven to be a reliable method for identification of NCS in most cases. Automated model building beginning with an FFT-based search for helices and sheets has been successful in automated model building for maps with resolutions as low as 3 Å. The entire process can be carried out in a fully automatic fashion in many cases.
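
    The scoring idea can be illustrated with a small sketch that converts several quality criteria for trial heavy-atom solutions into Z-scores and ranks the trials by their sum; the criterion names and values below are hypothetical, and this is not the actual scheme implemented in SOLVE/RESOLVE.

```python
import numpy as np

def rank_trial_solutions(scores):
    """Illustrative composite scoring of trial heavy-atom solutions: each raw
    criterion (e.g. figure of merit, map quality, NCS consistency; names are
    assumptions) is converted to a Z-score across all trials and summed,
    turning solution evaluation into a simple ranking problem."""
    ids = list(scores)
    mat = np.array([scores[i] for i in ids], dtype=float)
    z = (mat - mat.mean(axis=0)) / (mat.std(axis=0) + 1e-12)
    total = z.sum(axis=1)
    order = np.argsort(total)[::-1]          # best composite score first
    return [(ids[k], total[k]) for k in order]

trials = {
    "sol_A": [0.62, 0.31, 0.80],             # hypothetical criteria values
    "sol_B": [0.48, 0.22, 0.55],
    "sol_C": [0.70, 0.35, 0.83],
}
for name, score in rank_trial_solutions(trials):
    print(name, round(score, 2))
```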

  14. MO-F-CAMPUS-T-03: Data Driven Approaches for Determination of Treatment Table Tolerance Values for Record and Verification Systems

    International Nuclear Information System (INIS)

    Gupta, N; DiCostanzo, D; Fullenkamp, M

    2015-01-01

    Purpose: To determine appropriate couch tolerance values for modern radiotherapy linac R&V systems with indexed patient setup. Methods: Treatment table tolerance values have been the most difficult to lower, due to many factors including variations in patient positioning and differences in table tops between machines. We recently installed nine linacs with similar tables and started indexing every patient in our clinic. In this study we queried our R&V database and analyzed the deviation of couch position values from the values acquired at verification simulation for all patients treated with indexed positioning. Means and standard deviations of daily setup deviations were computed in the longitudinal, lateral and vertical directions for 343 patient plans. The mean, median and standard error of the standard deviations across the whole patient population and for some disease sites were computed to determine tolerance values. Results: The plot of our couch deviation values showed a Gaussian distribution, with some small deviations corresponding to setup uncertainties on non-imaging days and to SRS/SRT/SBRT patients, as well as some large deviations which were spot checked and found to correspond to indexing errors that were overridden. Setting our tolerance values based on the median + 1 standard error resulted in tolerance values of 1 cm lateral and longitudinal, and 0.5 cm vertical for all non-SRS/SRT/SBRT cases. Re-analyzing the data, we found that about 92% of the treated fractions would be within these tolerance values (ignoring the mis-indexed patients). We also analyzed data for disease-site-based subpopulations and found no difference in the tolerance values that needed to be used. Conclusion: With automation, auto-setup and other workflow efficiency tools being introduced into the radiotherapy workflow, it is essential to set table tolerances that allow safe treatments but flag setup errors that need to be reassessed before treatments

  15. MO-F-CAMPUS-T-03: Data Driven Approaches for Determination of Treatment Table Tolerance Values for Record and Verification Systems

    Energy Technology Data Exchange (ETDEWEB)

    Gupta, N; DiCostanzo, D; Fullenkamp, M [Ohio State University, Columbus, OH (United States)

    2015-06-15

    Purpose: To determine appropriate couch tolerance values for modern radiotherapy linac R&V systems with indexed patient setup. Methods: Treatment table tolerance values have been the most difficult to lower, due to many factors including variations in patient positioning and differences in table tops between machines. We recently installed nine linacs with similar tables and started indexing every patient in our clinic. In this study we queried our R&V database and analyzed the deviation of couch position values from the values acquired at verification simulation for all patients treated with indexed positioning. Means and standard deviations of daily setup deviations were computed in the longitudinal, lateral and vertical directions for 343 patient plans. The mean, median and standard error of the standard deviations across the whole patient population and for some disease sites were computed to determine tolerance values. Results: The plot of our couch deviation values showed a Gaussian distribution, with some small deviations corresponding to setup uncertainties on non-imaging days and to SRS/SRT/SBRT patients, as well as some large deviations which were spot checked and found to correspond to indexing errors that were overridden. Setting our tolerance values based on the median + 1 standard error resulted in tolerance values of 1 cm lateral and longitudinal, and 0.5 cm vertical for all non-SRS/SRT/SBRT cases. Re-analyzing the data, we found that about 92% of the treated fractions would be within these tolerance values (ignoring the mis-indexed patients). We also analyzed data for disease-site-based subpopulations and found no difference in the tolerance values that needed to be used. Conclusion: With automation, auto-setup and other workflow efficiency tools being introduced into the radiotherapy workflow, it is essential to set table tolerances that allow safe treatments but flag setup errors that need to be reassessed before treatments.
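
    A minimal sketch of one literal reading of the tolerance-setting rule described in these records (median of the per-plan standard deviations plus one standard error of that set), using made-up deviation data in place of the R&V database query:

```python
import numpy as np

# Hypothetical per-plan standard deviations of daily lateral couch offsets (cm)
# for 343 plans; the distribution parameters are placeholders, not study data.
rng = np.random.default_rng(0)
plan_sds_lat = rng.gamma(shape=4.0, scale=0.1, size=343)

median_sd = np.median(plan_sds_lat)
std_err = np.std(plan_sds_lat, ddof=1) / np.sqrt(plan_sds_lat.size)
tolerance_lat = median_sd + std_err          # "median + 1 standard error" rule
print(f"lateral couch tolerance ~ {tolerance_lat:.2f} cm")
```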

  16. The essential component in DNA-based information storage system: robust error-tolerating module

    Directory of Open Access Journals (Sweden)

    Aldrin Kay-Yuen Yim

    2014-11-01

    Full Text Available The size of digital data is ever increasing and is expected to grow to 40,000 EB by 2020, yet the estimated global information storage capacity in 2011 is less than 300 EB, indicating that most of the data are transient. DNA, as a very stable nano-molecule, is an ideal massive storage device for long-term data archive. The two most notable illustrations are from Church et al. and Goldman et al., whose approaches are well-optimized for most sequencing platforms – short synthesized DNA fragments without homopolymers. Here we suggested improvements to the error-handling methodology that could enable the integration of DNA-based computational processes, e.g. algorithms based on self-assembly of DNA. As a proof of concept, a picture of size 438 bytes was encoded to DNA with a Low-Density Parity-Check error-correction code. We salvaged a significant portion of sequencing reads with mutations generated during DNA synthesis and sequencing and successfully reconstructed the entire picture. A modular programming framework, DNAcodec, with an XML-based data format was also introduced. Our experiments demonstrated the practicability of long DNA message recovery with high error-tolerance, which opens the field to biocomputing and synthetic biology.
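
    For illustration only, the sketch below shows one way to map bytes onto a homopolymer-free DNA sequence using a rotating base-3 code; it is not the LDPC-protected DNAcodec framework described above, and the fixed-width byte-to-trit expansion is an assumption made for brevity.

```python
def bytes_to_trits(data: bytes):
    """Convert bytes to a base-3 digit stream (illustrative; real schemes use
    Huffman-style codes rather than this fixed-width expansion)."""
    trits = []
    for b in data:
        for _ in range(6):          # 3**6 = 729 >= 256, so 6 trits per byte
            trits.append(b % 3)
            b //= 3
    return trits

def trits_to_dna(trits, start="A"):
    """Map each trit to one of the three bases differing from the previous
    base, which guarantees a homopolymer-free sequence (rotating code)."""
    order = "ACGT"
    seq, prev = [], start
    for t in trits:
        choices = [b for b in order if b != prev]
        base = choices[t]
        seq.append(base)
        prev = base
    return "".join(seq)

payload = b"Hi"
dna = trits_to_dna(bytes_to_trits(payload))
print(dna)   # no two adjacent bases are identical
```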

  17. Automation in Immunohematology

    Directory of Open Access Journals (Sweden)

    Meenu Bajpai

    2012-01-01

    Full Text Available There have been rapid technological advances in blood banking in the South Asian region over the past decade with an increasing emphasis on quality and safety of blood products. The conventional test tube technique has given way to newer techniques such as the column agglutination technique, solid phase red cell adherence assay, and erythrocyte-magnetized technique. These new technologies are adaptable to automation and major manufacturers in this field have come up with semi- and fully automated equipment for immunohematology tests in the blood bank. Automation improves the objectivity and reproducibility of tests. It reduces human errors in patient identification and transcription errors. Documentation and traceability of tests, reagents and processes, and archiving of results are other major advantages of automation. Shifting from manual methods to automation is a major undertaking for any transfusion service to provide quality patient care with a shorter turnaround time for their ever-increasing workload. This article discusses the various issues involved in the process.

  18. Partitioning,Automation and Error Recovery in the Control and Monitoring System of an LHC Experiment

    Institute of Scientific and Technical Information of China (English)

    C.Gaspar

    2001-01-01

    The Joint Controls Project (JCOP) is a collaboration between CERN and the four LHC experiments to find and implement common solutions for their control and monitoring systems. As part of this project an Architecture Working Group was set up in order to study the requirements and devise an architectural model that would suit the four experiments. Many issues were studied by this working group: alarm handling, access control, hierarchical control, etc. This paper reports on the specific issue of hierarchical control and in particular partitioning, automation and error recovery.

  19. AR-NE3A, a New Macromolecular Crystallography Beamline for Pharmaceutical Applications at the Photon Factory

    International Nuclear Information System (INIS)

    Yamada, Yusuke; Hiraki, Masahiko; Sasajima, Kumiko; Matsugaki, Naohiro; Igarashi, Noriyuki; Kikuchi, Takashi; Mori, Takeharu; Toyoshima, Akio; Kishimoto, Shunji; Wakatsuki, Soichi; Amano, Yasushi; Warizaya, Masaichi; Sakashita, Hitoshi

    2010-01-01

    Recent advances in high-throughput techniques for macromolecular crystallography have highlighted the importance of structure-based drug design (SBDD), and the demand for synchrotron use by pharmaceutical researchers has increased. Thus, in collaboration with Astellas Pharma Inc., we have constructed a new high-throughput macromolecular crystallography beamline, AR-NE3A, which is dedicated to SBDD. At AR-NE3A, a photon flux up to three times higher than those at the existing high-throughput beamlines at the Photon Factory, AR-NW12A and BL-5A, can be realized at the same sample positions. Installed in the experimental hutch are a high-precision diffractometer, a fast-readout, high-gain CCD detector, and a sample-exchange robot capable of handling more than two hundred cryo-cooled samples stored in a Dewar. To facilitate the high-throughput data collection required for pharmaceutical research, fully automated data collection and processing systems have been developed. Thus, sample exchange, centering, data collection, and data processing are automatically carried out based on the user's pre-defined schedule. Although Astellas Pharma Inc. has priority access to AR-NE3A, the remaining beam time is allocated to general academic and other industrial users.

  20. Quantum error correction for beginners

    International Nuclear Information System (INIS)

    Devitt, Simon J; Nemoto, Kae; Munro, William J

    2013-01-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)
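
    As a purely classical illustration of the repetition-code idea that underlies the simplest quantum error-correcting code (the three-qubit bit-flip code), the sketch below estimates the logical error rate of a 3-bit majority-vote code by Monte Carlo; it ignores phase errors, syndrome measurement and everything genuinely quantum.

```python
import numpy as np

def logical_error_rate(p, trials=100_000, seed=0):
    """Classical Monte Carlo of the 3-bit repetition code: each physical bit
    flips independently with probability p and the logical bit is recovered by
    majority vote; decoding fails when two or more bits flip."""
    rng = np.random.default_rng(seed)
    flips = rng.random((trials, 3)) < p
    return np.mean(flips.sum(axis=1) >= 2)

for p in (0.01, 0.05, 0.10):
    print(p, logical_error_rate(p))   # for small p the encoded rate is far lower
```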

  1. Macromolecular crowding directs extracellular matrix organization and mesenchymal stem cell behavior.

    Directory of Open Access Journals (Sweden)

    Adam S Zeiger

    Full Text Available Microenvironments of biological cells are dominated in vivo by macromolecular crowding and resultant excluded volume effects. This feature is absent in dilute in vitro cell culture. Here, we induced macromolecular crowding in vitro by using synthetic macromolecular globules of nm-scale radius at physiological levels of fractional volume occupancy. We quantified the impact of induced crowding on the extracellular and intracellular protein organization of human mesenchymal stem cells (MSCs) via immunocytochemistry, atomic force microscopy (AFM), and AFM-enabled nanoindentation. Macromolecular crowding in extracellular culture media directly induced supramolecular assembly and alignment of extracellular matrix proteins deposited by cells, which in turn increased alignment of the intracellular actin cytoskeleton. The resulting cell-matrix reciprocity further affected adhesion, proliferation, and migration behavior of MSCs. Macromolecular crowding can thus aid the design of more physiologically relevant in vitro studies and devices for MSCs and other cells, by increasing the fidelity between materials synthesized by cells in vivo and in vitro.

  2. Macromolecular crowding directs extracellular matrix organization and mesenchymal stem cell behavior.

    Science.gov (United States)

    Zeiger, Adam S; Loe, Felicia C; Li, Ran; Raghunath, Michael; Van Vliet, Krystyn J

    2012-01-01

    Microenvironments of biological cells are dominated in vivo by macromolecular crowding and resultant excluded volume effects. This feature is absent in dilute in vitro cell culture. Here, we induced macromolecular crowding in vitro by using synthetic macromolecular globules of nm-scale radius at physiological levels of fractional volume occupancy. We quantified the impact of induced crowding on the extracellular and intracellular protein organization of human mesenchymal stem cells (MSCs) via immunocytochemistry, atomic force microscopy (AFM), and AFM-enabled nanoindentation. Macromolecular crowding in extracellular culture media directly induced supramolecular assembly and alignment of extracellular matrix proteins deposited by cells, which in turn increased alignment of the intracellular actin cytoskeleton. The resulting cell-matrix reciprocity further affected adhesion, proliferation, and migration behavior of MSCs. Macromolecular crowding can thus aid the design of more physiologically relevant in vitro studies and devices for MSCs and other cells, by increasing the fidelity between materials synthesized by cells in vivo and in vitro.

  3. Error rate of automated calculation for wound surface area using a digital photography.

    Science.gov (United States)

    Yang, S; Park, J; Lee, H; Lee, J B; Lee, B U; Oh, B H

    2018-02-01

    Although measuring wound size using digital photography is a quick and simple method to evaluate a skin wound, its accuracy has not been fully validated. To investigate the error rate of our newly developed wound surface area calculation using digital photography. Using a smartphone and a digital single lens reflex (DSLR) camera, four photographs of various-sized wounds (diameter: 0.5-3.5 cm) were taken from a facial skin model together with color patches. The quantitative values of wound areas were automatically calculated. The relative error (RE) of this method with regard to wound sizes and types of camera was analyzed. The RE of individual calculated areas ranged from 0.0329% (DSLR, diameter 1.0 cm) to 23.7166% (smartphone, diameter 2.0 cm). In spite of the correction for lens curvature, the smartphone had a significantly higher error rate than the DSLR camera (3.9431±2.9772 vs 8.1303±4.8236). However, for wounds less than 3 cm in diameter, the REs of the averaged values from four photographs were below 5%. In addition, there was no difference in the average value of wound area measured by the smartphone and the DSLR camera in those cases. For the follow-up of small skin defects (diameter <3 cm), our newly developed automated wound-area calculation method can be applied to multiple photographs, and their average values provide a relatively useful index of wound healing with an acceptable error rate. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
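
    The relative-error calculation described above reduces to a few lines; in the sketch below the photograph areas and the reference area are hypothetical values, and the "RE of the average" follows the definition implied in the abstract (relative error of the mean of the four automated measurements).

```python
import numpy as np

def relative_error(measured_areas, true_area):
    """Relative error (%) of automated wound-area values against a reference
    area, both per photograph and for the mean of the photographs."""
    measured = np.asarray(measured_areas, dtype=float)
    re_each = np.abs(measured - true_area) / true_area * 100.0
    re_of_mean = abs(measured.mean() - true_area) / true_area * 100.0
    return re_each, re_of_mean

# Four photographs of a wound of 2 cm diameter (area = pi * 1 cm^2); values are made up.
per_photo, of_average = relative_error([3.05, 3.30, 3.00, 3.20], np.pi * 1.0 ** 2)
print(per_photo.round(2), round(of_average, 2))
```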

  4. Fault tolerant computing systems

    International Nuclear Information System (INIS)

    Randell, B.

    1981-01-01

    Fault tolerance involves the provision of strategies for error detection, damage assessment, fault treatment and error recovery. A survey is given of the different sorts of strategies used in highly reliable computing systems, together with an outline of recent research on the problems of providing fault tolerance in parallel and distributed computing systems. (orig.)

  5. Timing analysis for embedded systems using non-preemptive EDF scheduling under bounded error arrivals

    Directory of Open Access Journals (Sweden)

    Michael Short

    2017-07-01

    Full Text Available Embedded systems consist of one or more processing units which are completely encapsulated by the devices under their control, and they often have stringent timing constraints associated with their functional specification. Previous research has considered the performance of different types of task scheduling algorithm and developed associated timing analysis techniques for such systems. Although preemptive scheduling techniques have traditionally been favored, rapid increases in processor speeds combined with improved insights into the behavior of non-preemptive scheduling techniques have seen an increased interest in their use for real-time applications such as multimedia, automation and control. However when non-preemptive scheduling techniques are employed there is a potential lack of error confinement should any timing errors occur in individual software tasks. In this paper, the focus is upon adding fault tolerance in systems using non-preemptive deadline-driven scheduling. Schedulability conditions are derived for fault-tolerant periodic and sporadic task sets experiencing bounded error arrivals under non-preemptive deadline scheduling. A timing analysis algorithm is presented based upon these conditions and its run-time properties are studied. Computational experiments show it to be highly efficient in terms of run-time complexity and competitive ratio when compared to previous approaches.
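
    The following sketch gives a simplified, sufficient-only schedulability check in the spirit of the analysis described above: a standard processor demand bound for sporadic tasks, plus a blocking term for one non-preemptable job with a later deadline, plus the demand of bounded error-recovery work. The exact conditions and algorithm in the paper differ; the task parameters, error-recovery cost and analysis horizon below are illustrative assumptions.

```python
import math

def demand_bound(tasks, L):
    """Processor demand of sporadic tasks (C, T, D) whose absolute deadlines
    fall within an interval of length L (standard demand-bound function)."""
    return sum(max(0, math.floor((L - D) / T) + 1) * C for (C, T, D) in tasks)

def schedulable(tasks, err_cost, err_min_sep, horizon):
    """Illustrative sufficient test: at every deadline point L up to 'horizon',
    task demand + worst blocking by one non-preemptable later-deadline job +
    error-recovery demand (at most one recovery every err_min_sep time units,
    each costing err_cost) must fit into L."""
    deadlines = sorted({D + k * T for (C, T, D) in tasks
                        for k in range(int(horizon // T) + 1) if D + k * T <= horizon})
    for L in deadlines:
        blocking = max([C for (C, T, D) in tasks if D > L], default=0)
        err_demand = math.ceil(L / err_min_sep) * err_cost
        if demand_bound(tasks, L) + blocking + err_demand > L:
            return False
    return True

# Tasks as (C, T, D); hypothetical numbers, e.g. in milliseconds.
tasks = [(2, 10, 10), (3, 20, 20), (5, 50, 50)]
print(schedulable(tasks, err_cost=1, err_min_sep=25, horizon=100))
```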

  6. The Automation-by-Expertise-by-Training Interaction.

    Science.gov (United States)

    Strauch, Barry

    2017-03-01

    I introduce the automation-by-expertise-by-training interaction in automated systems and discuss its influence on operator performance. Transportation accidents that, across a 30-year interval demonstrated identical automation-related operator errors, suggest a need to reexamine traditional views of automation. I review accident investigation reports, regulator studies, and literature on human computer interaction, expertise, and training and discuss how failing to attend to the interaction of automation, expertise level, and training has enabled operators to commit identical automation-related errors. Automated systems continue to provide capabilities exceeding operators' need for effective system operation and provide interfaces that can hinder, rather than enhance, operator automation-related situation awareness. Because of limitations in time and resources, training programs do not provide operators the expertise needed to effectively operate these automated systems, requiring them to obtain the expertise ad hoc during system operations. As a result, many do not acquire necessary automation-related system expertise. Integrating automation with expected operator expertise levels, and within training programs that provide operators the necessary automation expertise, can reduce opportunities for automation-related operator errors. Research to address the automation-by-expertise-by-training interaction is needed. However, such research must meet challenges inherent to examining realistic sociotechnical system automation features with representative samples of operators, perhaps by using observational and ethnographic research. Research in this domain should improve the integration of design and training and, it is hoped, enhance operator performance.

  7. Quantitative Estimation for the Effectiveness of Automation

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun

    2012-01-01

    In advanced MCRs, various automation systems are applied to enhance human performance and reduce human errors in industrial fields. It is expected that automation provides greater efficiency, lower workload, and fewer human errors. However, these promises are not always fulfilled. As new types of events related to the application of imperfect and complex automation have occurred, it has become necessary to analyze the effects of automation systems on the performance of human operators. Therefore, we suggest a quantitative estimation method to analyze the effectiveness of automation systems according to the Level of Automation (LOA) classification, which has been developed over 30 years. The estimation of the effectiveness of automation is achieved by calculating the failure probability of human performance related to the cognitive activities

  8. Nitrogen isotopic composition of macromolecular organic matter in interplanetary dust particles

    Science.gov (United States)

    Aléon, Jérôme; Robert, François; Chaussidon, Marc; Marty, Bernard

    2003-10-01

    Nitrogen concentrations and isotopic compositions were measured by ion microprobe scanning imaging in two interplanetary dust particles L2021 K1 and L2036 E22, in which imaging of D/H and C/H ratios has previously evidenced the presence of D-rich macromolecular organic components. High nitrogen concentrations of 10-20 wt% and δ15N values up to +400‰ are observed in these D-rich macromolecular components. The previous study of D/H and C/H ratios has revealed three different D-rich macromolecular phases. The one previously ascribed to macromolecular organic matter akin to the insoluble organic matter (IOM) from carbonaceous chondrites is enriched in nitrogen by one order of magnitude compared to the carbonaceous chondrite IOM, although its isotopic composition is still similar to what is known from Renazzo (δ15N = +208‰). The correlation observed in macromolecular organic material between the D- and 15N-excesses suggests that the latter probably originate from chemical reactions typical of the cold interstellar medium. These interstellar materials preserved to some extent in IDPs are therefore macromolecular organic components with various aliphaticity and aromaticity. They are heavily N-heterosubstituted as shown by their high nitrogen concentrations >10 wt%. They have high D/H ratios >10⁻³ and δ15N values ≥ +400‰. In L2021 K1 a mixture is observed at the micron scale between interstellar and chondritic-like organic phases. This indicates that some IDPs contain organic materials processed at various heliocentric distances in a turbulent nebula. Comparison with observations in comets suggests that these molecules may be cometary macromolecules. A correlation is observed between the D/H ratios and δ15N values of macromolecular organic matter from IDPs, meteorites, the Earth and of major nebular reservoirs. This suggests that most macromolecular organic matter in the inner solar system was probably derived from interstellar precursors and further processed

  9. Estimators of the Relations of Equivalence, Tolerance and Preference Based on Pairwise Comparisons with Random Errors

    Directory of Open Access Journals (Sweden)

    Leszek Klukowski

    2012-01-01

    Full Text Available This paper presents a review of the author's results in the area of estimation of the relations of equivalence, tolerance and preference within a finite set, based on multiple, independent (in a stochastic way) pairwise comparisons with random errors, in binary and multivalent forms. These estimators require weaker assumptions than those used in the literature on the subject. Estimates of the relations are obtained based on solutions to problems from discrete optimization. They allow application of both types of comparisons - binary and multivalent (this fact relates to the tolerance and preference relations). The estimates can be verified in a statistical way; in particular, it is possible to verify the type of the relation. The estimates have been applied by the author to problems regarding forecasting, financial engineering and bio-cybernetics. (original abstract)
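
    A toy version of the discrete-optimization estimator for the preference (linear order) relation: choose the ordering that disagrees with the fewest of the repeated, noisy pairwise comparisons. The brute-force search below only works for very small sets and does not cover the equivalence and tolerance relations treated in the paper.

```python
from itertools import permutations

def estimate_preference(items, comparisons):
    """Pick the linear order of 'items' minimizing the number of disagreements
    with the observed comparisons (each comparison (a, b) means 'a preferred
    to b'); this is a brute-force sketch of a discrete-optimization estimator."""
    def disagreements(order):
        rank = {x: i for i, x in enumerate(order)}
        return sum(1 for a, b in comparisons if rank[a] > rank[b])
    return min(permutations(items), key=disagreements)

# Three independent comparison rounds, with one erroneous comparison in the last round.
comparisons = [("A", "B"), ("B", "C"), ("A", "C"),
               ("A", "B"), ("B", "C"), ("A", "C"),
               ("B", "A"), ("B", "C"), ("A", "C")]
print(estimate_preference(["A", "B", "C"], comparisons))   # ('A', 'B', 'C')
```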

  10. Quantum money with nearly optimal error tolerance

    Science.gov (United States)

    Amiri, Ryan; Arrazola, Juan Miguel

    2017-06-01

    We present a family of quantum money schemes with classical verification which display a number of benefits over previous proposals. Our schemes are based on hidden matching quantum retrieval games and they tolerate noise up to 23%, which we conjecture reaches 25% asymptotically as the dimension of the underlying hidden matching states is increased. Furthermore, we prove that 25% is the maximum tolerable noise for a wide class of quantum money schemes with classical verification, meaning our schemes are almost optimally noise tolerant. We use methods in semidefinite programming to prove security in a substantially different manner to previous proposals, leading to two main advantages: first, coin verification involves only a constant number of states (with respect to coin size), thereby allowing for smaller coins; second, the reusability of coins within our scheme grows linearly with the size of the coin, which is known to be optimal. Last, we suggest methods by which the coins in our protocol could be implemented using weak coherent states and verified using existing experimental techniques, even in the presence of detector inefficiencies.

  11. The effects of local street network characteristics on the positional accuracy of automated geocoding for geographic health studies

    Directory of Open Access Journals (Sweden)

    Zimmerman Dale L

    2010-02-01

    Full Text Available Background: Automated geocoding of patient addresses for the purpose of conducting spatial epidemiologic studies results in positional errors. It is well documented that errors tend to be larger in rural areas than in cities, but possible effects of local characteristics of the street network, such as street intersection density and street length, on errors have not yet been documented. Our study quantifies effects of these local street network characteristics on the means and the entire probability distributions of positional errors, using regression methods and tolerance intervals/regions, for more than 6000 geocoded patient addresses from an Iowa county. Results: Positional errors were determined for 6376 addresses in Carroll County, Iowa, as the vector difference between each 100%-matched automated geocode and its ground-truthed location. Mean positional error magnitude was inversely related to proximate street intersection density. This effect was statistically significant for both rural and municipal addresses, but more so for the former. Also, the effect of street segment length on geocoding accuracy was statistically significant for municipal, but not rural, addresses; for municipal addresses mean error magnitude increased with length. Conclusion: Local street network characteristics may have statistically significant effects on geocoding accuracy in some places, but not others. Even in those locales where their effects are statistically significant, street network characteristics may explain a relatively small portion of the variability among geocoding errors. It appears that additional factors besides rurality and local street network characteristics affect accuracy in general.
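
    A minimal sketch of the two quantities at the core of this analysis, positional error magnitude (length of the vector from ground-truthed location to geocode) and its regression on street intersection density, using synthetic coordinates and densities in place of the Carroll County data:

```python
import numpy as np

# Synthetic data: denser street networks are assumed to give smaller errors.
rng = np.random.default_rng(0)
n = 500
intersection_density = rng.uniform(1, 50, n)           # intersections per km^2 (made up)
truth = rng.uniform(0, 1000, (n, 2))                    # ground-truthed locations, metres
geocode = truth + rng.normal(scale=(200 / np.sqrt(intersection_density))[:, None],
                             size=(n, 2))

# Positional error magnitude and a simple linear regression on density.
error_mag = np.linalg.norm(geocode - truth, axis=1)
slope, intercept = np.polyfit(intersection_density, error_mag, 1)
print(f"mean error {error_mag.mean():.1f} m, slope {slope:.2f} m per unit density")
```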

  12. FlexED8: the first member of a fast and flexible sample-changer family for macromolecular crystallography.

    Science.gov (United States)

    Papp, Gergely; Felisaz, Franck; Sorez, Clement; Lopez-Marrero, Marcos; Janocha, Robert; Manjasetty, Babu; Gobbo, Alexandre; Belrhali, Hassan; Bowler, Matthew W; Cipriani, Florent

    2017-10-01

    Automated sample changers are now standard equipment for modern macromolecular crystallography synchrotron beamlines. Nevertheless, most are only compatible with a single type of sample holder and puck. Recent work aimed at reducing sample-handling efforts and crystal-alignment times at beamlines has resulted in a new generation of compact and precise sample holders for cryocrystallography: miniSPINE and NewPin [see the companion paper by Papp et al. (2017, Acta Cryst., D73, 829-840)]. With full data collection now possible within seconds at most advanced beamlines, and future fourth-generation synchrotron sources promising to extract data in a few tens of milliseconds, the time taken to mount and centre a sample is rate-limiting. In this context, a versatile and fast sample changer, FlexED8, has been developed that is compatible with the highly successful SPINE sample holder and with the miniSPINE and NewPin sample holders. Based on a six-axis industrial robot, FlexED8 is equipped with a tool changer and includes a novel open sample-storage dewar with a built-in ice-filtering system. With seven versatile puck slots, it can hold up to 112 SPINE sample holders in uni-pucks, or 252 miniSPINE or NewPin sample holders, with 36 samples per puck. Additionally, a double gripper, compatible with the SPINE sample holders and uni-pucks, allows a reduction in the sample-exchange time from 40 s, the typical time with a standard single gripper, to less than 5 s. Computer vision-based sample-transfer monitoring, sophisticated error handling and automatic error-recovery procedures ensure high reliability. The FlexED8 sample changer has been successfully tested under real conditions on a beamline.

  13. Analytical model for macromolecular partitioning during yeast cell division

    International Nuclear Information System (INIS)

    Kinkhabwala, Ali; Khmelinskii, Anton; Knop, Michael

    2014-01-01

    Asymmetric cell division, whereby a parent cell generates two sibling cells with unequal content and thereby distinct fates, is central to cell differentiation, organism development and ageing. Unequal partitioning of the macromolecular content of the parent cell — which includes proteins, DNA, RNA, large proteinaceous assemblies and organelles — can be achieved by both passive (e.g. diffusion, localized retention sites) and active (e.g. motor-driven transport) processes operating in the presence of external polarity cues, internal asymmetries, spontaneous symmetry breaking, or stochastic effects. However, the quantitative contribution of different processes to the partitioning of macromolecular content is difficult to evaluate. Here we developed an analytical model that allows rapid quantitative assessment of partitioning as a function of various parameters in the budding yeast Saccharomyces cerevisiae. This model exposes quantitative degeneracies among the physical parameters that govern macromolecular partitioning, and reveals regions of the solution space where diffusion is sufficient to drive asymmetric partitioning and regions where asymmetric partitioning can only be achieved through additional processes such as motor-driven transport. Application of the model to different macromolecular assemblies suggests that partitioning of protein aggregates and episomes, but not prions, is diffusion-limited in yeast, consistent with previous reports. In contrast to computationally intensive stochastic simulations of particular scenarios, our analytical model provides an efficient and comprehensive overview of partitioning as a function of global and macromolecule-specific parameters. Identification of quantitative degeneracies among these parameters highlights the importance of their careful measurement for a given macromolecular species in order to understand the dominant processes responsible for its observed partitioning

  14. What Macromolecular Crowding Can Do to a Protein

    Science.gov (United States)

    Kuznetsova, Irina M.; Turoverov, Konstantin K.; Uversky, Vladimir N.

    2014-01-01

    The intracellular environment represents an extremely crowded milieu, with a limited amount of free water and an almost complete lack of unoccupied space. Obviously, slightly salted aqueous solutions containing low concentrations of a biomolecule of interest are too simplistic to mimic the “real life” situation, where the biomolecule of interest scrambles and wades through the tightly packed crowd. In laboratory practice, such macromolecular crowding is typically mimicked by concentrated solutions of various polymers that serve as model “crowding agents”. Studies under these conditions revealed that macromolecular crowding might affect protein structure, folding, shape, conformational stability, binding of small molecules, enzymatic activity, protein-protein interactions, protein-nucleic acid interactions, and pathological aggregation. The goal of this review is to systematically analyze currently available experimental data on the variety of effects of macromolecular crowding on a protein molecule. The review covers more than 320 papers and therefore represents one of the most comprehensive compendia of the current knowledge in this exciting area. PMID:25514413

  15. Decision Making In A High-Tech World: Automation Bias and Countermeasures

    Science.gov (United States)

    Mosier, Kathleen L.; Skitka, Linda J.; Burdick, Mark R.; Heers, Susan T.; Rosekind, Mark R. (Technical Monitor)

    1996-01-01

    Automated decision aids and decision support systems have become essential tools in many high-tech environments. In aviation, for example, flight management systems computers not only fly the aircraft, but also calculate fuel efficient paths, detect and diagnose system malfunctions and abnormalities, and recommend or carry out decisions. Air Traffic Controllers will soon be utilizing decision support tools to help them predict and detect potential conflicts and to generate clearances. Other fields as disparate as nuclear power plants and medical diagnostics are similarly becoming more and more automated. Ideally, the combination of human decision maker and automated decision aid should result in a high-performing team, maximizing the advantages of additional cognitive and observational power in the decision-making process. In reality, however, the presence of these aids often short-circuits the way that even very experienced decision makers have traditionally handled tasks and made decisions, and introduces opportunities for new decision heuristics and biases. Results of recent research investigating the use of automated aids have indicated the presence of automation bias, that is, errors made when decision makers rely on automated cues as a heuristic replacement for vigilant information seeking and processing. Automation commission errors, i.e., errors made when decision makers inappropriately follow an automated directive, or automation omission errors, i.e., errors made when humans fail to take action or notice a problem because an automated aid fails to inform them, can result from this tendency. Evidence of the tendency to make automation-related omission and commission errors has been found in pilot self reports, in studies using pilots in flight simulations, and in non-flight decision making contexts with student samples. Considerable research has found that increasing social accountability can successfully ameliorate a broad array of cognitive biases and

  16. Macromolecular nanotheranostics for multimodal anticancer therapy

    Science.gov (United States)

    Huis in't Veld, Ruben; Storm, Gert; Hennink, Wim E.; Kiessling, Fabian; Lammers, Twan

    2011-10-01

    Macromolecular carrier materials based on N-(2-hydroxypropyl)methacrylamide (HPMA) are prototypic and well-characterized drug delivery systems that have been extensively evaluated in the past two decades, both at the preclinical and at the clinical level. Using several different imaging agents and techniques, HPMA copolymers have been shown to circulate for prolonged periods of time, and to accumulate in tumors both effectively and selectively by means of the Enhanced Permeability and Retention (EPR) effect. Because of this, HPMA-based macromolecular nanotheranostics, i.e. formulations containing both drug and imaging agents within a single formulation, have been shown to be highly effective in inducing tumor growth inhibition in animal models. In patients, however, as essentially all other tumor-targeted nanomedicines, they are generally only able to improve the therapeutic index of the attached active agent by lowering its toxicity, and they fail to improve the efficacy of the intervention. Bearing this in mind, we have recently reasoned that because of their biocompatibility and their beneficial biodistribution, nanomedicine formulations might be highly suitable systems for combination therapies. In the present manuscript, we briefly summarize several exemplary efforts undertaken in this regard in our labs in the past couple of years, and we show that long-circulating and passively tumor-targeted macromolecular nanotheranostics can be used to improve the efficacy of radiochemotherapy and of chemotherapy combinations.

  17. The Upgrade Programme for the Structural Biology beamlines at the European Synchrotron Radiation Facility - High throughput sample evaluation and automation

    Science.gov (United States)

    Theveneau, P.; Baker, R.; Barrett, R.; Beteva, A.; Bowler, M. W.; Carpentier, P.; Caserotto, H.; de Sanctis, D.; Dobias, F.; Flot, D.; Guijarro, M.; Giraud, T.; Lentini, M.; Leonard, G. A.; Mattenet, M.; McCarthy, A. A.; McSweeney, S. M.; Morawe, C.; Nanao, M.; Nurizzo, D.; Ohlsson, S.; Pernot, P.; Popov, A. N.; Round, A.; Royant, A.; Schmid, W.; Snigirev, A.; Surr, J.; Mueller-Dieckmann, C.

    2013-03-01

    Automation and advances in technology are the key elements in addressing the steadily increasing complexity of Macromolecular Crystallography (MX) experiments. Much of this complexity is due to the inter-and intra-crystal heterogeneity in diffraction quality often observed for crystals of multi-component macromolecular assemblies or membrane proteins. Such heterogeneity makes high-throughput sample evaluation an important and necessary tool for increasing the chances of a successful structure determination. The introduction at the ESRF of automatic sample changers in 2005 dramatically increased the number of samples that were tested for diffraction quality. This "first generation" of automation, coupled with advances in software aimed at optimising data collection strategies in MX, resulted in a three-fold increase in the number of crystal structures elucidated per year using data collected at the ESRF. In addition, sample evaluation can be further complemented using small angle scattering experiments on the newly constructed bioSAXS facility on BM29 and the micro-spectroscopy facility (ID29S). The construction of a second generation of automated facilities on the MASSIF (Massively Automated Sample Screening Integrated Facility) beam lines will build on these advances and should provide a paradigm shift in how MX experiments are carried out which will benefit the entire Structural Biology community.

  18. Recent advances in macromolecular prodrugs

    DEFF Research Database (Denmark)

    Riber, Camilla Frich; Zelikin, Alexander N.

    2017-01-01

    Macromolecular prodrugs (MP) are high molar mass conjugates, typically carrying several copies of a drug or a drug combination, designed to optimize delivery of the drug, that is — its pharmacokinetics. From its advent several decades ago, design of MP has undergone significant development and es...

  19. RCrane: semi-automated RNA model building.

    Science.gov (United States)

    Keating, Kevin S; Pyle, Anna Marie

    2012-08-01

    RNA crystals typically diffract to much lower resolutions than protein crystals. This low-resolution diffraction results in unclear density maps, which cause considerable difficulties during the model-building process. These difficulties are exacerbated by the lack of computational tools for RNA modeling. Here, RCrane, a tool for the partially automated building of RNA into electron-density maps of low or intermediate resolution, is presented. This tool works within Coot, a common program for macromolecular model building. RCrane helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RCrane then allows the crystallographer to review the newly built structure and select alternative backbone conformations where desired. This tool can also be used to automatically correct the backbone structure of previously built nucleotides. These automated corrections can fix incorrect sugar puckers, steric clashes and other structural problems.

  20. Experience of automation failures in training: effects on trust, automation bias, complacency and performance.

    Science.gov (United States)

    Sauer, Juergen; Chavaillaz, Alain; Wastell, David

    2016-06-01

    This work examined the effects of operators' exposure to various types of automation failures in training. Forty-five participants were trained for 3.5 h on a simulated process control environment. During training, participants either experienced a fully reliable, automatic fault repair facility (i.e. faults detected and correctly diagnosed), a misdiagnosis-prone one (i.e. faults detected but not correctly diagnosed) or a miss-prone one (i.e. faults not detected). One week after training, participants were tested for 3 h, experiencing two types of automation failures (misdiagnosis, miss). The results showed that automation bias was very high when operators trained on miss-prone automation encountered a failure of the diagnostic system. Operator errors resulting from automation bias were much higher when automation misdiagnosed a fault than when it missed one. Differences in trust levels that were instilled by the different training experiences disappeared during the testing session. Practitioner Summary: The experience of automation failures during training has some consequences. A greater potential for operator errors may be expected when an automatic system failed to diagnose a fault than when it failed to detect one.

  1. Improvement of an automated protein crystal exchange system PAM for high-throughput data collection

    International Nuclear Information System (INIS)

    Hiraki, Masahiko; Yamada, Yusuke; Chavas, Leonard M. G.; Wakatsuki, Soichi; Matsugaki, Naohiro

    2013-01-01

    A special liquid-nitrogen Dewar with double capacity for the sample-exchange robot has been created at AR-NE3A at the Photon Factory, allowing continuous fully automated data collection. In this work, this new system is described and the stability of its calibration is discussed. Photon Factory Automated Mounting system (PAM) protein crystal exchange systems are available at the following Photon Factory macromolecular beamlines: BL-1A, BL-5A, BL-17A, AR-NW12A and AR-NE3A. The beamline AR-NE3A has been constructed for high-throughput macromolecular crystallography and is dedicated to structure-based drug design. The PAM liquid-nitrogen Dewar can store a maximum of three SSRL cassettes. Therefore, users have to interrupt their experiments and replace the cassettes when using four or more of them during their beam time. As a result of investigation, four or more cassettes were used in AR-NE3A alone. For continuous automated data collection, the size of the liquid-nitrogen Dewar for the AR-NE3A PAM was increased, doubling the capacity. In order to check the calibration with the new Dewar and the cassette stand, calibration experiments were repeatedly performed. Compared with the current system, the parameters of the novel system are shown to be stable

  2. Improvement of an automated protein crystal exchange system PAM for high-throughput data collection

    Energy Technology Data Exchange (ETDEWEB)

    Hiraki, Masahiko, E-mail: masahiko.hiraki@kek.jp; Yamada, Yusuke; Chavas, Leonard M. G. [High Energy Accelerator Research Organization, 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan); Wakatsuki, Soichi [High Energy Accelerator Research Organization, 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan); SLAC National Accelerator Laboratory, 2575 Sand Hill Road, MS 69, Menlo Park, CA 94025-7015 (United States); Stanford University, Beckman Center B105, Stanford, CA 94305-5126 (United States); Matsugaki, Naohiro [High Energy Accelerator Research Organization, 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan)

    2013-11-01

    A special liquid-nitrogen Dewar with double capacity for the sample-exchange robot has been created at AR-NE3A at the Photon Factory, allowing continuous fully automated data collection. In this work, this new system is described and the stability of its calibration is discussed. Photon Factory Automated Mounting system (PAM) protein crystal exchange systems are available at the following Photon Factory macromolecular beamlines: BL-1A, BL-5A, BL-17A, AR-NW12A and AR-NE3A. The beamline AR-NE3A has been constructed for high-throughput macromolecular crystallography and is dedicated to structure-based drug design. The PAM liquid-nitrogen Dewar can store a maximum of three SSRL cassettes. Therefore, users have to interrupt their experiments and replace the cassettes when using four or more of them during their beam time. An investigation of beamline usage showed that four or more cassettes were used at AR-NE3A alone. For continuous automated data collection, the size of the liquid-nitrogen Dewar for the AR-NE3A PAM was increased, doubling the capacity. In order to check the calibration with the new Dewar and the cassette stand, calibration experiments were repeatedly performed. Compared with the current system, the parameters of the novel system are shown to be stable.

  3. Localization of protein aggregation in Escherichia coli is governed by diffusion and nucleoid macromolecular crowding effect.

    Directory of Open Access Journals (Sweden)

    Anne-Sophie Coquel

    2013-04-01

    Full Text Available Aggregates of misfolded proteins are a hallmark of many age-related diseases. Recently, they have been linked to aging of Escherichia coli (E. coli where protein aggregates accumulate at the old pole region of the aging bacterium. Because of the potential of E. coli as a model organism, elucidating aging and protein aggregation in this bacterium may pave the way to significant advances in our global understanding of aging. A first obstacle along this path is to decipher the mechanisms by which protein aggregates are targeted to specific intercellular locations. Here, using an integrated approach based on individual-based modeling, time-lapse fluorescence microscopy and automated image analysis, we show that the movement of aging-related protein aggregates in E. coli is purely diffusive (Brownian. Using single-particle tracking of protein aggregates in live E. coli cells, we estimated the average size and diffusion constant of the aggregates. Our results provide evidence that the aggregates passively diffuse within the cell, with diffusion constants that depend on their size in agreement with the Stokes-Einstein law. However, the aggregate displacements along the cell long axis are confined to a region that roughly corresponds to the nucleoid-free space in the cell pole, thus confirming the importance of increased macromolecular crowding in the nucleoids. We thus used 3D individual-based modeling to show that these three ingredients (diffusion, aggregation and diffusion hindrance in the nucleoids are sufficient and necessary to reproduce the available experimental data on aggregate localization in the cells. Taken together, our results strongly support the hypothesis that the localization of aging-related protein aggregates in the poles of E. coli results from the coupling of passive diffusion-aggregation with spatially non-homogeneous macromolecular crowding. They further support the importance of "soft" intracellular structuring (based on
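
    The diffusion argument in this record reduces to the Stokes-Einstein relation, D = k_B*T / (6*pi*eta*r). The following is a minimal Python sketch of the two calculations involved, estimating D from particle size and, conversely, from a tracked mean squared displacement; the viscosity, sizes and MSD value used are illustrative assumptions, not values from the study:

```python
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant [J/K]

def stokes_einstein_d(radius_m, temperature_k=310.0, viscosity_pa_s=1e-2):
    """Diffusion constant of a sphere (Stokes-Einstein law); the 10 mPa*s
    'cytoplasm-like' viscosity is a rough assumption, not a measured value."""
    return K_B * temperature_k / (6 * np.pi * viscosity_pa_s * radius_m)

def d_from_msd(msd_um2, lag_s, ndim=2):
    """Estimate D from a tracked mean squared displacement, MSD = 2*ndim*D*t."""
    return msd_um2 / (2 * ndim * lag_s)

# e.g. a 100 nm diameter aggregate tracked in 2-D images
print(f"Stokes-Einstein D ~ {stokes_einstein_d(50e-9) * 1e12:.2f} um^2/s")
print(f"D from tracking   ~ {d_from_msd(0.08, 1.0):.2f} um^2/s")
```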

  4. The Upgrade Programme for the Structural Biology beamlines at the European Synchrotron Radiation Facility – High throughput sample evaluation and automation

    International Nuclear Information System (INIS)

    Theveneau, P; Baker, R; Barrett, R; Beteva, A; Bowler, M W; Carpentier, P; Caserotto, H; Sanctis, D de; Dobias, F; Flot, D; Guijarro, M; Giraud, T; Lentini, M; Leonard, G A; Mattenet, M; McSweeney, S M; Morawe, C; Nurizzo, D; McCarthy, A A; Nanao, M

    2013-01-01

    Automation and advances in technology are the key elements in addressing the steadily increasing complexity of Macromolecular Crystallography (MX) experiments. Much of this complexity is due to the inter-and intra-crystal heterogeneity in diffraction quality often observed for crystals of multi-component macromolecular assemblies or membrane proteins. Such heterogeneity makes high-throughput sample evaluation an important and necessary tool for increasing the chances of a successful structure determination. The introduction at the ESRF of automatic sample changers in 2005 dramatically increased the number of samples that were tested for diffraction quality. This 'first generation' of automation, coupled with advances in software aimed at optimising data collection strategies in MX, resulted in a three-fold increase in the number of crystal structures elucidated per year using data collected at the ESRF. In addition, sample evaluation can be further complemented using small angle scattering experiments on the newly constructed bioSAXS facility on BM29 and the micro-spectroscopy facility (ID29S). The construction of a second generation of automated facilities on the MASSIF (Massively Automated Sample Screening Integrated Facility) beam lines will build on these advances and should provide a paradigm shift in how MX experiments are carried out which will benefit the entire Structural Biology community.

  5. Correcting Inconsistencies and Errors in Bacterial Genome Metadata Using an Automated Curation Tool in Excel (AutoCurE).

    Science.gov (United States)

    Schmedes, Sarah E; King, Jonathan L; Budowle, Bruce

    2015-01-01

    Whole-genome data are invaluable for large-scale comparative genomic studies. Current sequencing technologies have made it feasible to sequence entire bacterial genomes with relative ease and speed and at a substantially reduced cost per nucleotide, hence cost per genome. More than 3,000 bacterial genomes have been sequenced and are available at the finished status. Publicly available genomes can be readily downloaded; however, there are challenges to verify the specific supporting data contained within the download and to identify errors and inconsistencies that may be present within the organizational data content and metadata. AutoCurE, an automated tool for bacterial genome database curation in Excel, was developed to facilitate local database curation of supporting data that accompany downloaded genomes from the National Center for Biotechnology Information. AutoCurE provides an automated approach to curate local genomic databases by flagging inconsistencies or errors by comparing the downloaded supporting data to the genome reports to verify genome name, RefSeq accession numbers, the presence of archaea, BioProject/UIDs, and sequence file descriptions. Flags are generated for nine metadata fields if there are inconsistencies between the downloaded genomes and genome reports and if erroneous or missing data are evident. AutoCurE is an easy-to-use tool for local database curation for large-scale genome data prior to downstream analyses.
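
    As a rough illustration of the kind of cross-checking AutoCurE performs (the tool itself is Excel-based), the Python sketch below flags missing or mismatched metadata fields between a downloaded record and its genome-report entry. The field names and example values are illustrative assumptions, not the actual NCBI fields:

```python
def flag_inconsistencies(downloaded, report):
    """Compare a downloaded genome's metadata against its genome-report entry
    and return human-readable flags (field names here are illustrative only)."""
    flags = []
    for field in ("organism", "refseq_accession", "bioproject_uid",
                  "sequence_description"):
        dl, rp = downloaded.get(field), report.get(field)
        if not dl:
            flags.append(f"missing: {field}")
        elif rp is not None and dl != rp:
            flags.append(f"mismatch: {field} ({dl!r} vs {rp!r})")
    return flags

# made-up example records
record = {"organism": "Escherichia coli K-12", "refseq_accession": "NC_000913"}
report = {"organism": "Escherichia coli K-12", "refseq_accession": "NC_000913.3"}
print(flag_inconsistencies(record, report))
```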

  6. Tolerance Optimization for Mechanisms with Lubricated Joints

    International Nuclear Information System (INIS)

    Choi, J.-H.; Lee, S.J.; Choi, D.-H.

    1998-01-01

    This paper addresses an analytical approach to tolerance optimization for planar mechanisms with lubricated joints based on mechanical error analysis. The mobility method is applied to consider the lubrication effects at joints and planar mechanisms are stochastically defined by using the clearance vector model for mechanical error analysis. The uncertainties considered in the analysis are tolerances on link lengths and radial clearances and these are selected as design variables. To show the validity of the proposed method for mechanical error analysis, it is applied to two examples, and the results obtained are compared with those of Monte Carlo simulations. Based on the mechanical error analysis, tolerance optimizations are applied to the examples

  7. Quickly Getting the Best Data from Your Macromolecular Crystals with a New Generation of Beamline Instruments

    International Nuclear Information System (INIS)

    Cipriani, Florent; Felisaz, Franck; Lavault, Bernard; Brockhauser, Sandor; Ravelli, Raimond; Launer, Ludovic; Leonard, Gordon; Renier, Michel

    2007-01-01

    While routine Macromolecular x-ray (MX) crystallography has relied on well established techniques for some years all the synchrotrons around the world are improving the throughput of their MX beamlines. Third generation synchrotrons provide small intense beams that make data collection of 5-10 microns sized crystals possible. The EMBL/ESRF MX Group in Grenoble has developed a new generation of instruments to easily collect data on 10 μm size crystals in an automated environment. This work is part of the Grenoble automation program that enables FedEx like crystallography using fully automated data collection and web monitored experiments. Seven ESRF beamlines and the MRC BM14 ESRF/CRG beamline are currently equipped with these latest instruments. We describe here the main features of the MD2x diffractometer family and the SC3 sample changer robot. Although the SC3 was primarily designed to increase the throughput of MX beamlines, it has also been shown to be efficient in improving the quality of the data collected. Strategies in screening a large number of crystals, selecting the best, and collecting a full data set from several re-oriented micro-crystals can now be run with minimum time and effort. The MD2x and SC3 instruments are now commercialised by the company ACCEL GmbH

  8. How to regress and predict in a Bland-Altman plot? Review and contribution based on tolerance intervals and correlated-errors-in-variables models.

    Science.gov (United States)

    Francq, Bernard G; Govaerts, Bernadette

    2016-06-30

    Two main methodologies for assessing equivalence in method-comparison studies are presented separately in the literature. The first one is the well-known and widely applied Bland-Altman approach with its agreement intervals, where two methods are considered interchangeable if their differences are not clinically significant. The second approach is based on errors-in-variables regression in a classical (X,Y) plot and focuses on confidence intervals, whereby two methods are considered equivalent when providing similar measures notwithstanding the random measurement errors. This paper reconciles these two methodologies and shows their similarities and differences using both real data and simulations. A new consistent correlated-errors-in-variables regression is introduced as the errors are shown to be correlated in the Bland-Altman plot. Indeed, the coverage probabilities collapse and the biases soar when this correlation is ignored. Novel tolerance intervals are compared with agreement intervals with or without replicated data, and novel predictive intervals are introduced to predict a single measure in an (X,Y) plot or in a Bland-Altman plot with excellent coverage probabilities. We conclude that the (correlated)-errors-in-variables regressions should not be avoided in method comparison studies, although the Bland-Altman approach is usually applied to avert their complexity. We argue that tolerance or predictive intervals are better alternatives than agreement intervals, and we provide guidelines for practitioners regarding method comparison studies. Copyright © 2016 John Wiley & Sons, Ltd.
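
    For orientation, the two ingredients being reconciled can be sketched in a few lines of Python: a classical Deming (errors-in-variables) fit in the (X,Y) plot and the usual Bland-Altman 95% agreement interval. This is only a simplified illustration on synthetic data; it is not the correlated-errors-in-variables estimator or the tolerance and predictive intervals introduced in the paper:

```python
import numpy as np

def deming(x, y, delta=1.0):
    """Errors-in-variables (Deming) regression of y on x.
    delta is the assumed ratio of y-error variance to x-error variance
    (delta = 1 gives orthogonal regression). Returns (intercept, slope)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sxx = np.sum((x - mx) ** 2)
    syy = np.sum((y - my) ** 2)
    sxy = np.sum((x - mx) * (y - my))
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    return my - slope * mx, slope

def bland_altman_limits(x, y):
    """Classical 95% agreement interval for the differences y - x."""
    d = np.asarray(y, float) - np.asarray(x, float)
    return d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1)

# toy method-comparison data: two noisy measurements of the same quantity
rng = np.random.default_rng(0)
truth = rng.uniform(10, 100, 50)
method_a = truth + rng.normal(0, 2, 50)
method_b = truth + rng.normal(0, 2, 50)
print(deming(method_a, method_b))
print(bland_altman_limits(method_a, method_b))
```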

  9. Celebrating macromolecular crystallography: A personal perspective

    Directory of Open Access Journals (Sweden)

    Abad-Zapatero, Celerino

    2015-04-01

    Full Text Available The twentieth century has seen an enormous advance in the knowledge of the atomic structures that surround us. The discovery of the first crystal structures of simple inorganic salts by the Braggs in 1914, using the diffraction of X-rays by crystals, provided the critical elements to unveil the atomic structure of matter. Subsequent developments in the field leading to macromolecular crystallography are presented with a personal perspective, related to the cultural milieu of Spain in the late 1950’s. The journey of discovery of the author, as he developed professionally, is interwoven with the expansion of macromolecular crystallography from the first proteins (myoglobin, hemoglobin) to the ‘coming of age’ of the field in 1971 and the discoveries that followed, culminating in the determination of the structure of the ribosomes at the turn of the century. A perspective is presented exploring the future of the field and also a reflection about the future generations of Spanish scientists.

  10. TED: A Tolerant Edit Distance for segmentation evaluation.

    Science.gov (United States)

    Funke, Jan; Klein, Jonas; Moreno-Noguer, Francesc; Cardona, Albert; Cook, Matthew

    2017-02-15

    In this paper, we present a novel error measure to compare a computer-generated segmentation of images or volumes against ground truth. This measure, which we call Tolerant Edit Distance (TED), is motivated by two observations that we usually encounter in biomedical image processing: (1) Some errors, like small boundary shifts, are tolerable in practice. Which errors are tolerable is application dependent and should be explicitly expressible in the measure. (2) Non-tolerable errors have to be corrected manually. The effort needed to do so should be reflected by the error measure. Our measure is the minimal weighted sum of split and merge operations to apply to one segmentation such that it resembles another segmentation within specified tolerance bounds. This is in contrast to other commonly used measures like Rand index or variation of information, which integrate small, but tolerable, differences. Additionally, the TED provides intuitive numbers and allows the localization and classification of errors in images or volumes. We demonstrate the applicability of the TED on 3D segmentations of neurons in electron microscopy images where topological correctness is arguably more important than exact boundary locations. Furthermore, we show that the TED is not just limited to evaluation tasks. We use it as the loss function in a max-margin learning framework to find parameters of an automatic neuron segmentation algorithm. We show that training to minimize the TED, i.e., to minimize crucial errors, leads to higher segmentation accuracy compared to other learning methods. Copyright © 2016. Published by Elsevier Inc.

  11. Modeling Increased Complexity and the Reliance on Automation: FLightdeck Automation Problems (FLAP) Model

    Science.gov (United States)

    Ancel, Ersin; Shih, Ann T.

    2014-01-01

    This paper highlights the development of a model that is focused on the safety issue of increasing complexity and reliance on automation systems in transport category aircraft. Recent statistics show an increase in mishaps related to manual handling and automation errors due to pilot complacency and over-reliance on automation, loss of situational awareness, automation system failures and/or pilot deficiencies. Consequently, the aircraft can enter a state outside the flight envelope and/or air traffic safety margins which potentially can lead to loss-of-control (LOC), controlled-flight-into-terrain (CFIT), or runway excursion/confusion accidents, etc. The goal of this modeling effort is to provide NASA's Aviation Safety Program (AvSP) with a platform capable of assessing the impacts of AvSP technologies and products towards reducing the relative risk of automation related accidents and incidents. In order to do so, a generic framework, capable of mapping both latent and active causal factors leading to automation errors, is developed. Next, the framework is converted into a Bayesian Belief Network model and populated with data gathered from Subject Matter Experts (SMEs). With the insertion of technologies and products, the model provides individual and collective risk reduction acquired by technologies and methodologies developed within AvSP.

  12. Structure studies of macromolecular systems

    Czech Academy of Sciences Publication Activity Database

    Hašek, Jindřich; Dohnálek, Jan; Skálová, Tereza; Dušková, Jarmila; Kolenko, Petr

    2006-01-01

    Roč. 13, č. 3 (2006), s. 136 ISSN 1211-5894. [Czech and Slovak Crystallographic Colloquium. 22.06.2006-24.06.2006, Grenoble] R&D Projects: GA AV ČR IAA4050811; GA MŠk 1K05008 Keywords: structure * X-ray diffraction * synchrotron Subject RIV: CD - Macromolecular Chemistry http://www.xray.cz/ms/default.htm

  13. Modeling the probability distribution of positional errors incurred by residential address geocoding

    Directory of Open Access Journals (Sweden)

    Mazumdar Soumya

    2007-01-01

    Full Text Available Abstract Background The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Results Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (>15 km) outliers occurred among the 60%-matched geocoding errors; outliers occurred for the other two types of geocoding errors also but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that were not capable of being fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Conclusion Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.
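
    The basic quantities in this kind of study are simple to reproduce: positional errors as vector differences between geocoded and true locations, summarized by the median error length. The sketch below uses synthetic heavy-tailed (t-distributed) errors purely for illustration and omits the mixture-of-t fitting described in the record:

```python
import numpy as np

def positional_errors(geocoded, truth):
    """Vector differences (east, north) between geocoded and true points, in metres."""
    return np.asarray(geocoded, float) - np.asarray(truth, float)

def median_error_length(errors):
    """Median Euclidean length of the positional error vectors."""
    return float(np.median(np.hypot(errors[:, 0], errors[:, 1])))

# toy example: 1000 addresses with heavy-tailed geocoding errors
rng = np.random.default_rng(1)
true_xy = rng.uniform(0, 5000, size=(1000, 2))
geocode_xy = true_xy + rng.standard_t(df=3, size=(1000, 2)) * 60
err = positional_errors(geocode_xy, true_xy)
print(f"median error length: {median_error_length(err):.1f} m")
```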

  14. Recent Major Improvements to the ALS Sector 5 Macromolecular Crystallography Beamlines

    International Nuclear Information System (INIS)

    Morton, Simon A.; Glossinger, James; Smith-Baumann, Alexis; McKean, John P.; Trame, Christine; Dickert, Jeff; Rozales, Anthony; Dauz, Azer; Taylor, John; Zwart, Petrus; Duarte, Robert; Padmore, Howard; McDermott, Gerry; Adams, Paul

    2007-01-01

    Although the Advanced Light Source (ALS) was initially conceived primarily as a low energy (1.9GeV) 3rd generation source of VUV and soft x-ray radiation it was realized very early in the development of the facility that a multipole wiggler source coupled with high quality, (brightness preserving), optics would result in a beamline whose performance across the optimal energy range (5-15keV) for macromolecular crystallography (MX) would be comparable to, or even exceed, that of many existing crystallography beamlines at higher energy facilities. Hence, starting in 1996, a suite of three beamlines, branching off a single wiggler source, was constructed, which together formed the ALS Macromolecular Crystallography Facility. From the outset this facility was designed to cater equally to the needs of both academic and industrial users with a heavy emphasis placed on the development and introduction of high throughput crystallographic tools, techniques, and facilities--such as large area CCD detectors, robotic sample handling and automounting facilities, a service crystallography program, and a tightly integrated, centralized, and highly automated beamline control environment for users. This facility was immediately successful, with the primary Multiwavelength Anomalous Diffraction beamline (5.0.2) in particular rapidly becoming one of the foremost crystallographic facilities in the US--responsible for structures such as the 70S ribosome. This success in-turn triggered enormous growth of the ALS macromolecular crystallography community and spurred the development of five additional ALS MX beamlines all utilizing the newly developed superconducting bending magnets ('superbends') as sources. However in the years since the original Sector 5.0 beamlines were built the performance demands of macromolecular crystallography users have become ever more exacting; with growing emphasis placed on studying larger complexes, more difficult structures, weakly diffracting or smaller

  15. Control of Macromolecular Architectures for Renewable Polymers: Case Studies

    Science.gov (United States)

    Tang, Chuanbing

    The development of sustainable polymers from natural biomass is growing, but it faces fierce competition from existing petrochemical-based counterparts. Controlling macromolecular architectures to maximize the properties of renewable polymers is a desirable approach to gain advantages. Given the complexity of biomass, design considerations beyond those of traditional polymers are needed. In the presentation, I will talk about a few case studies on how macromolecular architectures could tune the properties of sustainable bioplastics and elastomers from renewable biomass such as resin acids (natural rosin) and plant oils.

  16. Peripheral refractive correction and automated perimetric profiles.

    Science.gov (United States)

    Wild, J M; Wood, J M; Crews, S J

    1988-06-01

    The effect of peripheral refractive error correction on the automated perimetric sensitivity profile was investigated on a sample of 10 clinically normal, experienced observers. Peripheral refractive error was determined at eccentricities of 0 degree, 20 degrees and 40 degrees along the temporal meridian of the right eye using the Canon Autoref R-1, an infra-red automated refractor, under the parametric conditions of the Octopus automated perimeter. Perimetric sensitivity was then undertaken at these eccentricities (stimulus sizes 0 and III) with and without the appropriate peripheral refractive correction using the Octopus 201 automated perimeter. Within the measurement limits of the experimental procedures employed, perimetric sensitivity was not influenced by peripheral refractive correction.

  17. Hypoxic tumor environments exhibit disrupted collagen I fibers and low macromolecular transport.

    Directory of Open Access Journals (Sweden)

    Samata M Kakkad

    Full Text Available Hypoxic tumor microenvironments result in an aggressive phenotype and resistance to therapy that lead to tumor progression, recurrence, and metastasis. While poor vascularization and the resultant inadequate drug delivery are known to contribute to drug resistance, the effect of hypoxia on molecular transport through the interstitium, and the role of the extracellular matrix (ECM in mediating this transport are unexplored. The dense mesh of fibers present in the ECM can especially influence the movement of macromolecules. Collagen 1 (Col1 fibers form a key component of the ECM in breast cancers. Here we characterized the influence of hypoxia on macromolecular transport in tumors, and the role of Col1 fibers in mediating this transport using an MDA-MB-231 breast cancer xenograft model engineered to express red fluorescent protein under hypoxia. Magnetic resonance imaging of macromolecular transport was combined with second harmonic generation microscopy of Col1 fibers. Hypoxic tumor regions displayed significantly decreased Col1 fiber density and volume, as well as significantly lower macromolecular draining and pooling rates, than normoxic regions. Regions adjacent to severely hypoxic areas revealed higher deposition of Col1 fibers and increased macromolecular transport. These data suggest that Col1 fibers may facilitate macromolecular transport in tumors, and their reduction in hypoxic regions may reduce this transport. Decreased macromolecular transport in hypoxic regions may also contribute to poor drug delivery and tumor recurrence in hypoxic regions. High Col1 fiber density observed around hypoxic regions may facilitate the escape of aggressive cancer cells from hypoxic regions.

  18. Fault tree model of human error based on error-forcing contexts

    International Nuclear Information System (INIS)

    Kang, Hyun Gook; Jang, Seung Cheol; Ha, Jae Joo

    2004-01-01

    In safety-critical systems such as nuclear power plants, safety-feature actuation is fully automated. In an emergency, the human operator can also act as a backup for the automated systems. That is, the failure of safety-feature-actuation signal generation implies the concurrent failure of the automated systems and of manual actuation. The human operator's manual actuation failure is largely affected by error-forcing contexts (EFC), of which the failures of sensors and automated systems are the most important. The sensors, the automated actuation system and the human operators are correlated in a complex manner, which makes it hard to develop a proper model. In this paper, we explain the condition-based human reliability assessment (CBHRA) method, which treats these complicated conditions in a practical way. In this study, we apply the CBHRA method to the manual actuation of safety features such as reactor trip and safety injection in Korean Standard Nuclear Power Plants.
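
    The structure of the argument, that manual-actuation failure must be conditioned on error-forcing contexts such as sensor failure, can be illustrated with a deliberately simplified calculation. The probabilities below are invented for illustration, and the dependence between sensor failures and automated-system failures that the CBHRA method treats explicitly is ignored here:

```python
# Illustrative numbers only; not values from the record.
P_AUTO_FAIL = 1e-3        # automated actuation fails to generate the signal
P_SENSOR_FAIL = 5e-3      # sensor failure (a key error-forcing context)
P_HUMAN = {               # manual actuation failure probability given the context
    "sensors_ok": 1e-2,
    "sensors_failed": 5e-1,
}

def p_signal_generation_failure():
    """Top event: neither automation nor the operator actuates the safety feature.
    Only the human error probability is conditioned on the sensor state here."""
    p_given_ok = P_AUTO_FAIL * P_HUMAN["sensors_ok"]
    p_given_failed = P_AUTO_FAIL * P_HUMAN["sensors_failed"]
    return ((1 - P_SENSOR_FAIL) * p_given_ok
            + P_SENSOR_FAIL * p_given_failed)

print(f"P(top event) ~ {p_signal_generation_failure():.2e}")
```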

  19. Macromolecular target prediction by self-organizing feature maps.

    Science.gov (United States)

    Schneider, Gisbert; Schneider, Petra

    2017-03-01

    Rational drug discovery would greatly benefit from a more nuanced appreciation of the activity of pharmacologically active compounds against a diverse panel of macromolecular targets. Already, computational target-prediction models assist medicinal chemists in library screening, de novo molecular design, optimization of active chemical agents, drug re-purposing, in the spotting of potential undesired off-target activities, and in the 'de-orphaning' of phenotypic screening hits. The self-organizing map (SOM) algorithm has been employed successfully for these and other purposes. Areas covered: The authors recapitulate contemporary artificial neural network methods for macromolecular target prediction, and present the basic SOM algorithm at a conceptual level. Specifically, they highlight consensus target-scoring by the employment of multiple SOMs, and discuss the opportunities and limitations of this technique. Expert opinion: Self-organizing feature maps represent a straightforward approach to ligand clustering and classification. Some of the appeal lies in their conceptual simplicity and broad applicability domain. Despite known algorithmic shortcomings, this computational target prediction concept has been proven to work in prospective settings with high success rates. It represents a prototypic technique for future advances in the in silico identification of the modes of action and macromolecular targets of bioactive molecules.
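
    To make the SOM idea concrete, the sketch below trains a small self-organizing map from scratch on synthetic ligand descriptors and assigns a query to its best-matching node; in a target-prediction setting, the dominant target annotation of that node would then serve as the prediction. This is a minimal toy implementation, not the consensus multi-SOM scoring discussed in the review:

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Train a small self-organizing map on row-vector descriptors."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    coords = np.dstack(np.mgrid[0:h, 0:w]).astype(float)   # (h, w, 2) node positions
    for t in range(epochs):
        frac = t / epochs
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        x = data[rng.integers(len(data))]
        dists = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)   # best-matching unit
        grid_dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=2)
        neigh = np.exp(-grid_dist2 / (2 * sigma ** 2))          # neighbourhood kernel
        weights += lr * neigh[..., None] * (x - weights)
    return weights

def map_ligand(weights, x):
    """Return the SOM node a descriptor vector is assigned to."""
    return np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=2)),
                            weights.shape[:2])

# toy descriptors: two artificial 'target classes' of 16-dimensional ligands
rng = np.random.default_rng(1)
ligands = np.vstack([rng.normal(0, 1, (50, 16)), rng.normal(3, 1, (50, 16))])
som = train_som(ligands)
print(map_ligand(som, ligands[0]), map_ligand(som, ligands[-1]))
```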

  20. Error Correcting Codes

    Indian Academy of Sciences (India)

    Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ..... practical codes, storing such a table is infeasible, as it is generally too large.

  1. A fault-tolerant software strategy for digital systems

    Science.gov (United States)

    Hitt, E. F.; Webb, J. J.

    1984-01-01

    Techniques developed for producing fault-tolerant software are described. Tolerance is required because of the impossibility of defining fault-free software. Faults are caused by humans and can appear anywhere in the software life cycle. Tolerance is effected through error detection, damage assessment, recovery, and fault treatment, followed by return of the system to service. Multiversion software comprises two or more versions of the software yielding solutions which are examined by a decision algorithm. Errors can also be detected by extrapolation from previous results or by the acceptability of results. Violations of timing specifications can reveal errors, or the system can roll back to an error-free state when a defect is detected. The software, when used in flight control systems, must not impinge on time-critical responses. Efforts are still needed to reduce the costs of developing the fault-tolerant systems.
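
    The multiversion idea amounts to running independently developed versions and adjudicating their outputs with a decision algorithm, typically a majority vote with an agreement tolerance. A minimal Python sketch with invented example 'versions' is shown below; real flight-control implementations must of course also respect the time-critical constraints mentioned above:

```python
import math
from collections import Counter

def n_version_vote(inputs, versions, tolerance=1e-6):
    """Run independently developed versions and adjudicate their results:
    numeric outputs within `tolerance` agree; no majority means a detected fault."""
    results = [version(*inputs) for version in versions]
    groups, votes = [], Counter()
    for r in results:
        for g in groups:
            if abs(r - g) <= tolerance:
                votes[g] += 1
                break
        else:
            groups.append(r)
            votes[r] += 1
    value, count = votes.most_common(1)[0]
    if count <= len(versions) // 2:
        raise RuntimeError("no majority among versions: fault detected, start recovery")
    return value

# three 'independently written' square-root routines, one deliberately faulty
versions = [lambda x: x ** 0.5, math.sqrt, lambda x: x / 3.0]
print(n_version_vote((2.0,), versions))   # the faulty version is outvoted
```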

  2. Applying Intelligent Algorithms to Automate the Identification of Error Factors.

    Science.gov (United States)

    Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han

    2018-05-03

    Medical errors are the manifestation of the defects occurring in medical processes. Extracting and identifying defects as medical error factors from these processes is an effective approach to preventing medical errors. However, it is a difficult and time-consuming task and requires an analyst with a professional medical background; a method is therefore needed that can extract medical error factors while reducing the difficulty of extraction. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, extraction of the error factors, and identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in both Japan and China, 19 error-related items and their levels were extracted, after which they were closely related to 12 error factors. The relational model between the error-related items and error factors was established based on a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Additionally, compared with BPNN, partial least squares regression and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy, being able to promptly identify the error factors from the error-related items. The combination of "error-related items, their different levels, and the GA-BPNN model" was proposed as an error-factor identification technology, which could automatically identify medical error factors.

  3. Concatenated codes for fault tolerant quantum computing

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E.; Laflamme, R.; Zurek, W.

    1995-05-01

    The application of concatenated codes to fault tolerant quantum computing is discussed. We have previously shown that for quantum memories and quantum communication, a state can be transmitted with error ε provided each gate has error at most cε. We show how this can be used with Shor's fault tolerant operations to reduce the accuracy requirements when maintaining states not currently participating in the computation. Viewing Shor's fault tolerant operations as a method for reducing the error of operations, we give a concatenated implementation which promises to propagate the reduction hierarchically. This has the potential of reducing the accuracy requirements in long computations.

  4. ABOUT THE BUILDING SYSTEMS OF PLANT AUTOMATION WITH ADVANCED SECURITY

    OpenAIRE

    Yu. I. Khmarskyi

    2008-01-01

    The structure and construction principle of a microprocessor circuit for local railway automation, built on the relay-and-contact principle, are examined; the circuit realizes a three-level filtration of errors, providing the highest protection of station automation systems against errors.

  5. Stochastic reaction-diffusion algorithms for macromolecular crowding

    Science.gov (United States)

    Sturrock, Marc

    2016-06-01

    Compartment-based (lattice-based) reaction-diffusion algorithms are often used for studying complex stochastic spatio-temporal processes inside cells. In this paper the influence of macromolecular crowding on stochastic reaction-diffusion simulations is investigated. Reaction-diffusion processes are considered on two different kinds of compartmental lattice, a cubic lattice and a hexagonal close packed lattice, and solved using two different algorithms, the stochastic simulation algorithm and the spatiocyte algorithm (Arjunan and Tomita 2010 Syst. Synth. Biol. 4, 35-53). Obstacles (modelling macromolecular crowding) are shown to have substantial effects on the mean squared displacement and average number of molecules in the domain but the nature of these effects is dependent on the choice of lattice, with the cubic lattice being more susceptible to the effects of the obstacles. Finally, improvements for both algorithms are presented.
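
    A cartoon version of the crowding effect can be reproduced with a plain lattice random walk in which a fraction of sites is blocked by immobile obstacles; the mean squared displacement then falls as the obstacle density rises. This sketch is neither the stochastic simulation algorithm nor the Spatiocyte method compared in the paper, just a minimal illustration:

```python
import numpy as np

def crowded_msd(n_steps=2000, size=64, phi=0.3, n_walkers=200, seed=0):
    """Mean squared displacement of random walkers on a 2-D periodic lattice in
    which a fraction `phi` of sites is blocked by immobile obstacles."""
    rng = np.random.default_rng(seed)
    blocked = rng.random((size, size)) < phi
    moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    free_sites = np.argwhere(~blocked)
    pos = free_sites[rng.integers(len(free_sites), size=n_walkers)]  # wrapped positions
    unwrapped = pos.astype(float).copy()
    start = unwrapped.copy()
    msd = np.zeros(n_steps)
    for t in range(n_steps):
        step = moves[rng.integers(4, size=n_walkers)]
        trial = (pos + step) % size                    # periodic box for obstacle lookup
        ok = ~blocked[trial[:, 0], trial[:, 1]]        # reject moves onto obstacles
        pos[ok] = trial[ok]
        unwrapped[ok] += step[ok]
        msd[t] = np.mean(np.sum((unwrapped - start) ** 2, axis=1))
    return msd

for phi in (0.0, 0.3):
    print(f"phi = {phi}: MSD after 2000 steps ~ {crowded_msd(phi=phi)[-1]:.0f}")
```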

  6. In situ macromolecular crystallography using microbeams.

    Science.gov (United States)

    Axford, Danny; Owen, Robin L; Aishima, Jun; Foadi, James; Morgan, Ann W; Robinson, James I; Nettleship, Joanne E; Owens, Raymond J; Moraes, Isabel; Fry, Elizabeth E; Grimes, Jonathan M; Harlos, Karl; Kotecha, Abhay; Ren, Jingshan; Sutton, Geoff; Walter, Thomas S; Stuart, David I; Evans, Gwyndaf

    2012-05-01

    Despite significant progress in high-throughput methods in macromolecular crystallography, the production of diffraction-quality crystals remains a major bottleneck. By recording diffraction in situ from crystals in their crystallization plates at room temperature, a number of problems associated with crystal handling and cryoprotection can be side-stepped. Using a dedicated goniometer installed on the microfocus macromolecular crystallography beamline I24 at Diamond Light Source, crystals have been studied in situ with an intense and flexible microfocus beam, allowing weakly diffracting samples to be assessed without a manual crystal-handling step but with good signal to noise, despite the background scatter from the plate. A number of case studies are reported: the structure solution of bovine enterovirus 2, crystallization screening of membrane proteins and complexes, and structure solution from crystallization hits produced via a high-throughput pipeline. These demonstrate the potential for in situ data collection and structure solution with microbeams. © 2012 International Union of Crystallography

  7. Laboratory Automation and Middleware.

    Science.gov (United States)

    Riben, Michael

    2015-06-01

    The practice of surgical pathology is under constant pressure to deliver the highest quality of service, reduce errors, increase throughput, and decrease turnaround time while at the same time dealing with an aging workforce, increasing financial constraints, and economic uncertainty. Although not able to implement total laboratory automation, great progress continues to be made in workstation automation in all areas of the pathology laboratory. This report highlights the benefits and challenges of pathology automation, reviews middleware and its use to facilitate automation, and reviews the progress so far in the anatomic pathology laboratory. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Automated Classification of Phonological Errors in Aphasic Language

    Science.gov (United States)

    Ahuja, Sanjeev B.; Reggia, James A.; Berndt, Rita S.

    1984-01-01

    Using heuristically-guided state space search, a prototype program has been developed to simulate and classify phonemic errors occurring in the speech of neurologically-impaired patients. Simulations are based on an interchangeable rule/operator set of elementary errors which represent a theory of phonemic processing faults. This work introduces and evaluates a novel approach to error simulation and classification, it provides a prototype simulation tool for neurolinguistic research, and it forms the initial phase of a larger research effort involving computer modelling of neurolinguistic processes.

  9. Data-acquisition system for the NLO error-propagation exercise

    International Nuclear Information System (INIS)

    Lower, C.W.; Gessiness, B.; Bieber, A.M. Jr.; Keisch, B.; Suda, S.C.

    1983-01-01

    An automated data-acquisition system using barcoded labels was developed for an error-propagation exercise to determine the limit of error for inventory differences (LEID) for a material balance area at NLO, Inc.'s Feed Materials Production Center, Fernald, Ohio. Each discrete item of material to be measured (weighed or analyzed) was labeled with a bar-coded identification number. Automated scale terminals, portable bar-code readers, and an automated laboratory data-entry terminal were used to read identification labels and automatically record measurement and transfer information. This system is the prototype for an entire material control and accountability system

  10. ABOUT THE BUILDING SYSTEMS OF PLANT AUTOMATION WITH ADVANCED SECURITY

    Directory of Open Access Journals (Sweden)

    Yu. I. Khmarskyi

    2008-03-01

    Full Text Available The structure and construction principle of a microprocessor circuit for local railway automation, built on the relay-and-contact principle, are examined; the circuit realizes a three-level filtration of errors, providing the highest protection of station automation systems against errors.

  11. The Automation of Nowcast Model Assessment Processes

    Science.gov (United States)

    2016-09-01

    ...secondly, provide modelers with the information needed to understand the model errors and how their algorithm changes might mitigate these errors. The automation of Point-Stat processes (i.e., PSA) was developed using Python 3.5; Python was selected because it is easy to use, widely used for scripting, and satisfies all the requirements to automate the implementation of the Point-Stat tool.

  12. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  13. Learning from Errors: Effects of Teachers Training on Students' Attitudes towards and Their Individual Use of Errors

    Science.gov (United States)

    Rach, Stefanie; Ufer, Stefan; Heinze, Aiso

    2013-01-01

    Constructive error handling is considered an important factor for individual learning processes. In a quasi-experimental study with Grades 6 to 9 students, we investigate effects on students' attitudes towards errors as learning opportunities in two conditions: an error-tolerant classroom culture, and the first condition along with additional…

  14. Integration and global analysis of isothermal titration calorimetry data for studying macromolecular interactions.

    Science.gov (United States)

    Brautigam, Chad A; Zhao, Huaying; Vargas, Carolyn; Keller, Sandro; Schuck, Peter

    2016-05-01

    Isothermal titration calorimetry (ITC) is a powerful and widely used method to measure the energetics of macromolecular interactions by recording a thermogram of differential heating power during a titration. However, traditional ITC analysis is limited by stochastic thermogram noise and by the limited information content of a single titration experiment. Here we present a protocol for bias-free thermogram integration based on automated shape analysis of the injection peaks, followed by combination of isotherms from different calorimetric titration experiments into a global analysis, statistical analysis of binding parameters and graphical presentation of the results. This is performed using the integrated public-domain software packages NITPIC, SEDPHAT and GUSSI. The recently developed low-noise thermogram integration approach and global analysis allow for more precise parameter estimates and more reliable quantification of multisite and multicomponent cooperative and competitive interactions. Titration experiments typically take 1-2.5 h each, and global analysis usually takes 10-20 min.
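
    For context, the core of any ITC analysis is fitting a binding model to the integrated per-injection heats. The sketch below simulates and refits a plain 1:1 model with scipy, neglecting dilution and displaced-volume corrections; the concentrations and thermodynamic values are arbitrary, and the shape-based thermogram integration and global multi-experiment fitting described in the protocol are exactly what this toy version leaves out:

```python
import numpy as np
from scipy.optimize import curve_fit

V_CELL = 1.4e-3      # calorimeter cell volume [L]; illustrative
P_TOTAL = 20e-6      # macromolecule concentration in the cell [M]; illustrative

def injection_heats(l_total, kd, dh):
    """Per-injection heats [uJ] for a 1:1 binding model; `l_total` is the cumulative
    titrant concentration in the cell [M]. Dilution/displacement are neglected."""
    b = P_TOTAL + l_total + kd
    complex_conc = (b - np.sqrt(b ** 2 - 4 * P_TOTAL * l_total)) / 2
    q_cumulative = dh * V_CELL * complex_conc          # [J]
    return np.diff(np.concatenate(([0.0], q_cumulative))) * 1e6

# simulate 25 injections with Kd = 1 uM, dH = -40 kJ/mol and 2% noise, then refit
l_tot = np.cumsum(np.full(25, 2e-6))
rng = np.random.default_rng(0)
heats = injection_heats(l_tot, 1e-6, -40e3) * (1 + 0.02 * rng.standard_normal(25))

(kd_fit, dh_fit), _ = curve_fit(injection_heats, l_tot, heats, p0=(5e-6, -20e3),
                                bounds=([1e-9, -1e6], [1e-3, 1e6]))
print(f"Kd ~ {kd_fit * 1e6:.2f} uM, dH ~ {dh_fit / 1e3:.1f} kJ/mol")
```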

  15. Measurement and analysis of operating system fault tolerance

    Science.gov (United States)

    Lee, I.; Tang, D.; Iyer, R. K.

    1992-01-01

    This paper demonstrates a methodology to model and evaluate the fault tolerance characteristics of operational software. The methodology is illustrated through case studies on three different operating systems: the Tandem GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Measurements are made on these systems for substantial periods to collect software error and recovery data. In addition to investigating basic dependability characteristics such as major software problems and error distributions, we develop two levels of models to describe error and recovery processes inside an operating system and on multiple instances of an operating system running in a distributed environment. Based on the models, reward analysis is conducted to evaluate the loss of service due to software errors and the effect of the fault-tolerance techniques implemented in the systems. Software error correlation in multicomputer systems is also investigated.

  16. Automated measuring systems. Automatisierte Messsysteme

    Energy Technology Data Exchange (ETDEWEB)

    1985-01-01

    Microprocessors have become a regular component of automated measuring systems. Experts offer their experience and basic information in 24 lectures and 10 poster presentations. The focus is on the following: automated measurement, computer and microprocessor use, sensor technology, actuator technology, communication, interfaces, man-system interaction, disturbance tolerance and availability, as well as applications. A discussion meeting is dedicated to the topics of sensor digital signals, sensor interfaces and the sensor bus.

  17. Manned spacecraft automation and robotics

    Science.gov (United States)

    Erickson, Jon D.

    1987-01-01

    The Space Station holds promise of being a showcase user and driver of advanced automation and robotics technology. The author addresses the advances in automation and robotics from the Space Shuttle - with its high-reliability redundancy management and fault tolerance design and its remote manipulator system - to the projected knowledge-based systems for monitoring, control, fault diagnosis, planning, and scheduling, and the telerobotic systems of the future Space Station.

  18. RCrane: semi-automated RNA model building

    International Nuclear Information System (INIS)

    Keating, Kevin S.; Pyle, Anna Marie

    2012-01-01

    RCrane is a new tool for the partially automated building of RNA crystallographic models into electron-density maps of low or intermediate resolution. This tool helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RNA crystals typically diffract to much lower resolutions than protein crystals. This low-resolution diffraction results in unclear density maps, which cause considerable difficulties during the model-building process. These difficulties are exacerbated by the lack of computational tools for RNA modeling. Here, RCrane, a tool for the partially automated building of RNA into electron-density maps of low or intermediate resolution, is presented. This tool works within Coot, a common program for macromolecular model building. RCrane helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RCrane then allows the crystallographer to review the newly built structure and select alternative backbone conformations where desired. This tool can also be used to automatically correct the backbone structure of previously built nucleotides. These automated corrections can fix incorrect sugar puckers, steric clashes and other structural problems

  19. Error Correcting Codes

    Indian Academy of Sciences (India)

    Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article. Resonance – Journal of Science Education, Volume 2, Issue 3, March ... Author Affiliations: Priti Shankar, Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India

  20. The Theory and Assessment of Spatial Straightness Error Matched New Generation GPS

    International Nuclear Information System (INIS)

    Zhang, X B; Sheng, X L; Jiang, X Q; Li, Z

    2006-01-01

    In order to assess spatial straightness error in a way that matches the new-generation Dimensional Geometrical Product Specification and Verification (GPS), a theory for assessing spatial straightness error is proposed in this paper and its advantages are analyzed on the basis of metrology and statistics. An assessment parameter system is then proposed and verified in a real application by comparison with the assessment result of geometric tolerance theory. The statistical parameters of this assessment system capture the different characteristics of spatial straightness error and reveal its impact on the function of the part more completely, complementing the single assessment parameter of the geometrical straightness tolerance. The statistical spatial straightness tolerance and statistical spatial straightness error proposed in this paper can also be applied to the evaluation of other errors of form, orientation, location and run-out.
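
    The deterministic part of such an assessment, fitting a spatial reference line to measured axis points and taking the largest deviation from it, is straightforward to compute; the statistical parameters proposed in the paper would then be built on repeated evaluations of this kind. Below is a least-squares sketch (not a minimum-zone evaluation) with synthetic data, assuming coordinates in millimetres:

```python
import numpy as np

def straightness_error(points):
    """Least-squares estimate of spatial straightness: fit a 3-D line through the
    measured axis points and report the largest radial deviation from it.
    (A minimum-zone evaluation would give the smaller, tolerance-standard value.)"""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)    # first right singular vector = line direction
    direction = vt[0]
    rel = pts - centroid
    radial = rel - np.outer(rel @ direction, direction)   # components normal to the line
    return float(np.linalg.norm(radial, axis=1).max())

# toy measurement: a nominally straight axis with small random deviations [mm]
rng = np.random.default_rng(0)
z = np.linspace(0, 100, 50)
pts = np.c_[0.002 * rng.standard_normal(50), 0.002 * rng.standard_normal(50), z]
print(f"max radial deviation ~ {straightness_error(pts) * 1000:.1f} um")
```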

  1. Simulation of aspheric tolerance with polynomial fitting

    Science.gov (United States)

    Li, Jing; Cen, Zhaofeng; Li, Xiaotong

    2018-01-01

    The shape of an aspheric lens changes because of machining errors, resulting in a change in the optical transfer function and therefore in the image quality. At present, there is no universally recognized tolerance criterion for aspheric surfaces. To study the influence of aspheric tolerances on the optical transfer function, polynomial-fitting tolerances are allocated on the aspheric surface and the imaging simulation is carried out with optical imaging software. The analysis is based on an aspheric imaging system. An error within a given PV value is generated, expressed in the form of a Zernike polynomial and added to the aspheric surface as a tolerance term. Through optical software analysis, the MTF of the optical system is obtained and used as the main evaluation index, and it is evaluated whether the effect of the added error on the system MTF meets the requirements for the current PV value. The PV value is then changed and the procedure repeated until the maximum allowable PV value is obtained. In line with the actual machining technology, errors of various shapes, such as M-type, W-type and random errors, are considered. The new method provides a reference for practical freeform surface machining technology.
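
    The error-generation step described above can be sketched independently of the optical software: build a surface perturbation from low-order radial Zernike terms (combinations of defocus and spherical terms give M- and W-like profiles of the kind mentioned) and rescale it to a prescribed peak-to-valley (PV) value before adding it to the nominal asphere. The MTF evaluation itself still requires the imaging software; the coefficients below are arbitrary:

```python
import numpy as np

def zernike_error(coeffs, pv_target, n_samples=256):
    """Build a rotationally symmetric surface error from low-order radial Zernike
    terms and rescale it to a prescribed peak-to-valley (PV) value."""
    rho = np.linspace(0.0, 1.0, n_samples)          # normalized aperture radius
    basis = {
        "defocus":   2 * rho ** 2 - 1,
        "spherical": 6 * rho ** 4 - 6 * rho ** 2 + 1,
    }
    err = sum(c * basis[name] for name, c in coeffs.items())
    pv = err.max() - err.min()
    return rho, err * (pv_target / pv)

# e.g. a mixed defocus/spherical error scaled to PV = 0.2 micrometres
rho, err = zernike_error({"defocus": 1.0, "spherical": 0.5}, pv_target=0.2e-6)
print(f"PV = {err.max() - err.min():.2e} m")
```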

  2. The use of workflows in the design and implementation of complex experiments in macromolecular crystallography

    International Nuclear Information System (INIS)

    Brockhauser, Sandor; Svensson, Olof; Bowler, Matthew W.; Nanao, Max; Gordon, Elspeth; Leal, Ricardo M. F.; Popov, Alexander; Gerring, Matthew; McCarthy, Andrew A.; Gotz, Andy

    2012-01-01

    A powerful and easy-to-use workflow environment has been developed at the ESRF for combining experiment control with online data analysis on synchrotron beamlines. This tool provides the possibility of automating complex experiments without the need for expertise in instrumentation control and programming, but rather by accessing defined beamline services. The automation of beam delivery, sample handling and data analysis, together with increasing photon flux, diminishing focal spot size and the appearance of fast-readout detectors on synchrotron beamlines, have changed the way that many macromolecular crystallography experiments are planned and executed. Screening for the best diffracting crystal, or even the best diffracting part of a selected crystal, has been enabled by the development of microfocus beams, precise goniometers and fast-readout detectors that all require rapid feedback from the initial processing of images in order to be effective. All of these advances require the coupling of data feedback to the experimental control system and depend on immediate online data-analysis results during the experiment. To facilitate this, a Data Analysis WorkBench (DAWB) for the flexible creation of complex automated protocols has been developed. Here, example workflows designed and implemented using DAWB are presented for enhanced multi-step crystal characterizations, experiments involving crystal reorientation with kappa goniometers, crystal-burning experiments for empirically determining the radiation sensitivity of a crystal system and the application of mesh scans to find the best location of a crystal to obtain the highest diffraction quality. Beamline users interact with the prepared workflows through a specific brick within the beamline-control GUI MXCuBE

  3. Crowding-facilitated macromolecular transport in attractive micropost arrays.

    Science.gov (United States)

    Chien, Fan-Tso; Lin, Po-Keng; Chien, Wei; Hung, Cheng-Hsiang; Yu, Ming-Hung; Chou, Chia-Fu; Chen, Yeng-Long

    2017-05-02

    Our study of DNA dynamics in weakly attractive nanofabricated post arrays revealed crowding enhances polymer transport, contrary to hindered transport in repulsive medium. The coupling of DNA diffusion and adsorption to the microposts results in more frequent cross-post hopping and increased long-term diffusivity with increased crowding density. We performed Langevin dynamics simulations and found maximum long-term diffusivity in post arrays with gap sizes comparable to the polymer radius of gyration. We found that macromolecular transport in weakly attractive post arrays is faster than in non-attractive dense medium. Furthermore, we employed hidden Markov analysis to determine the transition of macromolecular adsorption-desorption on posts and hopping between posts. The apparent free energy barriers are comparable to theoretical estimates determined from polymer conformational fluctuations.

  4. Optimization of automation: III. Development of optimization method for determining automation rate in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Kim, Man Cheol; Seong, Poong Hyun

    2016-01-01

    Highlights: • We propose an appropriate automation rate that enables the best human performance. • We analyze the shortest working time considering Situation Awareness Recovery (SAR). • The optimized automation rate is estimated by integrating the automation and ostracism rate estimation methods. • The process to derive the optimized automation rate is demonstrated through case studies. - Abstract: Automation has been introduced in various industries, including the nuclear field, because it is commonly believed that automation promises greater efficiency, lower workloads, and fewer operator errors through enhancing operator and system performance. However, the excessive introduction of automation has deteriorated operator performance due to the side effects of automation, which are referred to as Out-of-the-Loop (OOTL), and this is a critical issue that must be resolved. Thus, in order to determine the optimal level of automation introduction that assures the best human operator performance, a quantitative method of optimizing the automation is proposed in this paper. In order to propose the optimization method for determining appropriate automation levels that enable the best human performance, the automation rate and ostracism rate, which are estimation methods that quantitatively analyze the positive and negative effects of automation, respectively, are integrated. The integration was conducted in order to derive the shortest working time through considering the concept of situation awareness recovery (SAR), which states that the automation rate with the shortest working time assures the best human performance. The process to derive the optimized automation rate is demonstrated through an emergency operation scenario-based case study. In this case study, four types of procedures are assumed through redesigning the original emergency operating procedure according to the introduced automation and ostracism levels. Using the

  5. Sensor fault-tolerant control for gear-shifting engaging process of automated manual transmission

    Science.gov (United States)

    Li, Liang; He, Kai; Wang, Xiangyu; Liu, Yahui

    2018-01-01

    Angular displacement sensor on the actuator of automated manual transmission (AMT) is sensitive to fault, and the sensor fault will disturb its normal control, which affects the entire gear-shifting process of AMT and results in awful riding comfort. In order to solve this problem, this paper proposes a method of fault-tolerant control for AMT gear-shifting engaging process. By using the measured current of actuator motor and angular displacement of actuator, the gear-shifting engaging load torque table is built and updated before the occurrence of the sensor fault. Meanwhile, residual between estimated and measured angular displacements is used to detect the sensor fault. Once the residual exceeds a determined fault threshold, the sensor fault is detected. Then, switch control is triggered, and the current observer and load torque table estimates an actual gear-shifting position to replace the measured one to continue controlling the gear-shifting process. Numerical and experiment tests are carried out to evaluate the reliability and feasibility of proposed methods, and the results show that the performance of estimation and control is satisfactory.
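
    The control logic described, an observer driven by motor current and a learned load-torque value, a residual against the measured displacement, and a switch to the estimate once the residual exceeds a threshold, can be outlined in a few lines. The model, gains and threshold below are invented placeholders, not the values or observer structure used in the paper:

```python
# Hypothetical single-state sketch: actuator position is estimated from motor
# current and the learned load torque; all constants are illustrative only.
K_T, J, DT = 0.05, 0.002, 0.001     # torque constant [N*m/A], inertia [kg*m^2], step [s]
RESIDUAL_THRESHOLD = 0.5            # [rad]; a sensor fault is declared above this

class FaultTolerantPosition:
    def __init__(self):
        self.x_hat = 0.0            # estimated angular displacement [rad]
        self.v_hat = 0.0            # estimated angular velocity [rad/s]
        self.sensor_faulty = False

    def step(self, motor_current, load_torque, measured_pos):
        # current/torque-driven estimate (stands in for the paper's observer)
        acc = (K_T * motor_current - load_torque) / J
        self.v_hat += acc * DT
        self.x_hat += self.v_hat * DT
        residual = measured_pos - self.x_hat
        if abs(residual) > RESIDUAL_THRESHOLD:
            self.sensor_faulty = True          # switch control: stop trusting the sensor
        if not self.sensor_faulty:
            self.x_hat += 0.2 * residual       # mild correction while the sensor is trusted
        return self.x_hat                      # position fed to the gear-shift controller

controller = FaultTolerantPosition()
for k in range(5):
    faulty_reading = 0.0 if k < 3 else 10.0    # sensor sticks, then jumps
    print(round(controller.step(2.0, 0.01, faulty_reading), 4))
```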

  6. Toward a human-centered aircraft automation philosophy

    Science.gov (United States)

    Billings, Charles E.

    1989-01-01

    The evolution of automation in civil aircraft is examined in order to discern trends in the respective roles and functions of automation technology and the humans who operate these aircraft. The effects of advances in automation technology on crew reaction is considered and it appears that, though automation may well have decreased the frequency of certain types of human errors in flight, it may also have enabled new categories of human errors, some perhaps less obvious and therefore more serious than those it has alleviated. It is suggested that automation could be designed to keep the pilot closer to the control of the vehicle, while providing an array of information management and aiding functions designed to provide the pilot with data regarding flight replanning, degraded system operation, and the operational status and limits of the aircraft, its systems, and the physical and operational environment. The automation would serve as the pilot's assistant, providing and calculating data, watching for the unexpected, and keeping track of resources and their rate of expenditure.

  7. In situ macromolecular crystallography using microbeams

    International Nuclear Information System (INIS)

    Axford, Danny; Owen, Robin L.; Aishima, Jun; Foadi, James; Morgan, Ann W.; Robinson, James I.; Nettleship, Joanne E.; Owens, Raymond J.; Moraes, Isabel; Fry, Elizabeth E.; Grimes, Jonathan M.; Harlos, Karl; Kotecha, Abhay; Ren, Jingshan; Sutton, Geoff; Walter, Thomas S.; Stuart, David I.; Evans, Gwyndaf

    2012-01-01

    A sample environment for mounting crystallization trays has been developed on the microfocus beamline I24 at Diamond Light Source. The technical developments and several case studies are described. Despite significant progress in high-throughput methods in macromolecular crystallography, the production of diffraction-quality crystals remains a major bottleneck. By recording diffraction in situ from crystals in their crystallization plates at room temperature, a number of problems associated with crystal handling and cryoprotection can be side-stepped. Using a dedicated goniometer installed on the microfocus macromolecular crystallography beamline I24 at Diamond Light Source, crystals have been studied in situ with an intense and flexible microfocus beam, allowing weakly diffracting samples to be assessed without a manual crystal-handling step but with good signal to noise, despite the background scatter from the plate. A number of case studies are reported: the structure solution of bovine enterovirus 2, crystallization screening of membrane proteins and complexes, and structure solution from crystallization hits produced via a high-throughput pipeline. These demonstrate the potential for in situ data collection and structure solution with microbeams

  8. In situ macromolecular crystallography using microbeams

    Energy Technology Data Exchange (ETDEWEB)

    Axford, Danny; Owen, Robin L.; Aishima, Jun [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom); Foadi, James [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom); Imperial College, London SW7 2AZ (United Kingdom); Morgan, Ann W.; Robinson, James I. [University of Leeds, Leeds LS9 7FT (United Kingdom); Nettleship, Joanne E.; Owens, Raymond J. [Research Complex at Harwell, Rutherford Appleton Laboratory R92, Didcot, Oxfordshire OX11 0DE (United Kingdom); Moraes, Isabel [Imperial College, London SW7 2AZ (United Kingdom); Fry, Elizabeth E.; Grimes, Jonathan M.; Harlos, Karl; Kotecha, Abhay; Ren, Jingshan; Sutton, Geoff; Walter, Thomas S. [University of Oxford, Roosevelt Drive, Oxford OX3 7BN (United Kingdom); Stuart, David I. [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom); University of Oxford, Roosevelt Drive, Oxford OX3 7BN (United Kingdom); Evans, Gwyndaf, E-mail: gwyndaf.evans@diamond.ac.uk [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom)

    2012-04-17

    A sample environment for mounting crystallization trays has been developed on the microfocus beamline I24 at Diamond Light Source. The technical developments and several case studies are described. Despite significant progress in high-throughput methods in macromolecular crystallography, the production of diffraction-quality crystals remains a major bottleneck. By recording diffraction in situ from crystals in their crystallization plates at room temperature, a number of problems associated with crystal handling and cryoprotection can be side-stepped. Using a dedicated goniometer installed on the microfocus macromolecular crystallography beamline I24 at Diamond Light Source, crystals have been studied in situ with an intense and flexible microfocus beam, allowing weakly diffracting samples to be assessed without a manual crystal-handling step but with good signal to noise, despite the background scatter from the plate. A number of case studies are reported: the structure solution of bovine enterovirus 2, crystallization screening of membrane proteins and complexes, and structure solution from crystallization hits produced via a high-throughput pipeline. These demonstrate the potential for in situ data collection and structure solution with microbeams.

  9. An acoustic on-chip goniometer for room temperature macromolecular crystallography.

    Science.gov (United States)

    Burton, C G; Axford, D; Edwards, A M J; Gildea, R J; Morris, R H; Newton, M I; Orville, A M; Prince, M; Topham, P D; Docker, P T

    2017-12-05

    This paper describes the design, development and successful use of an on-chip goniometer for room-temperature macromolecular crystallography via acoustically induced rotations. We present for the first time a low cost, rate-tunable, acoustic actuator for gradual in-fluid sample reorientation about varying axes and its utilisation for protein structure determination on a synchrotron beamline. The device enables the efficient collection of diffraction data via a rotation method from a sample within a surface confined droplet. This method facilitates efficient macromolecular structural data acquisition in fluid environments for dynamical studies.

  10. Lifecycle, Iteration, and Process Automation with SMS Gateway

    Directory of Open Access Journals (Sweden)

    Fenny Fenny

    2015-12-01

Full Text Available Producing a better quality software system requires an understanding of software quality indicators through defect detection and automated testing. This paper aims to improve the design and automated testing process for an engine water pump at a drinking water plant. It proposes how software developers can improve the maintainability and reliability of an automated testing system and report abnormal states when an error occurs on the machine. The method uses the literature to explain best practices, together with case studies from a drinking water plant. Furthermore, this paper is expected to provide insights into efforts to better handle errors and to perform automated testing and monitoring on a machine.

  11. Individual differences in the calibration of trust in automation.

    Science.gov (United States)

    Pop, Vlad L; Shrewsbury, Alex; Durso, Francis T

    2015-06-01

    The objective was to determine whether operators with an expectancy that automation is trustworthy are better at calibrating their trust to changes in the capabilities of automation, and if so, why. Studies suggest that individual differences in automation expectancy may be able to account for why changes in the capabilities of automation lead to a substantial change in trust for some, yet only a small change for others. In a baggage screening task, 225 participants searched for weapons in 200 X-ray images of luggage. Participants were assisted by an automated decision aid exhibiting different levels of reliability. Measures of expectancy that automation is trustworthy were used in conjunction with subjective measures of trust and perceived reliability to identify individual differences in trust calibration. Operators with high expectancy that automation is trustworthy were more sensitive to changes (both increases and decreases) in automation reliability. This difference was eliminated by manipulating the causal attribution of automation errors. Attributing the cause of automation errors to factors external to the automation fosters an understanding of tasks and situations in which automation differs in reliability and may lead to more appropriate trust. The development of interventions can lead to calibrated trust in automation. © 2014, Human Factors and Ergonomics Society.

  12. Automated Patient Identification and Localization Error Detection Using 2-Dimensional to 3-Dimensional Registration of Kilovoltage X-Ray Setup Images

    International Nuclear Information System (INIS)

    Lamb, James M.; Agazaryan, Nzhde; Low, Daniel A.

    2013-01-01

    Purpose: To determine whether kilovoltage x-ray projection radiation therapy setup images could be used to perform patient identification and detect gross errors in patient setup using a computer algorithm. Methods and Materials: Three patient cohorts treated using a commercially available image guided radiation therapy (IGRT) system that uses 2-dimensional to 3-dimensional (2D-3D) image registration were retrospectively analyzed: a group of 100 cranial radiation therapy patients, a group of 100 prostate cancer patients, and a group of 83 patients treated for spinal lesions. The setup images were acquired using fixed in-room kilovoltage imaging systems. In the prostate and cranial patient groups, localizations using image registration were performed between computed tomography (CT) simulation images from radiation therapy planning and setup x-ray images corresponding both to the same patient and to different patients. For the spinal patients, localizations were performed to the correct vertebral body, and to an adjacent vertebral body, using planning CTs and setup x-ray images from the same patient. An image similarity measure used by the IGRT system image registration algorithm was extracted from the IGRT system log files and evaluated as a discriminant for error detection. Results: A threshold value of the similarity measure could be chosen to separate correct and incorrect patient matches and correct and incorrect vertebral body localizations with excellent accuracy for these patient cohorts. A 10-fold cross-validation using linear discriminant analysis yielded misclassification probabilities of 0.000, 0.0045, and 0.014 for the cranial, prostate, and spinal cases, respectively. Conclusions: An automated measure of the image similarity between x-ray setup images and corresponding planning CT images could be used to perform automated patient identification and detection of localization errors in radiation therapy treatments

  13. Automated Patient Identification and Localization Error Detection Using 2-Dimensional to 3-Dimensional Registration of Kilovoltage X-Ray Setup Images

    Energy Technology Data Exchange (ETDEWEB)

    Lamb, James M., E-mail: jlamb@mednet.ucla.edu; Agazaryan, Nzhde; Low, Daniel A.

    2013-10-01

    Purpose: To determine whether kilovoltage x-ray projection radiation therapy setup images could be used to perform patient identification and detect gross errors in patient setup using a computer algorithm. Methods and Materials: Three patient cohorts treated using a commercially available image guided radiation therapy (IGRT) system that uses 2-dimensional to 3-dimensional (2D-3D) image registration were retrospectively analyzed: a group of 100 cranial radiation therapy patients, a group of 100 prostate cancer patients, and a group of 83 patients treated for spinal lesions. The setup images were acquired using fixed in-room kilovoltage imaging systems. In the prostate and cranial patient groups, localizations using image registration were performed between computed tomography (CT) simulation images from radiation therapy planning and setup x-ray images corresponding both to the same patient and to different patients. For the spinal patients, localizations were performed to the correct vertebral body, and to an adjacent vertebral body, using planning CTs and setup x-ray images from the same patient. An image similarity measure used by the IGRT system image registration algorithm was extracted from the IGRT system log files and evaluated as a discriminant for error detection. Results: A threshold value of the similarity measure could be chosen to separate correct and incorrect patient matches and correct and incorrect vertebral body localizations with excellent accuracy for these patient cohorts. A 10-fold cross-validation using linear discriminant analysis yielded misclassification probabilities of 0.000, 0.0045, and 0.014 for the cranial, prostate, and spinal cases, respectively. Conclusions: An automated measure of the image similarity between x-ray setup images and corresponding planning CT images could be used to perform automated patient identification and detection of localization errors in radiation therapy treatments.

  14. Automated patient identification and localization error detection using 2-dimensional to 3-dimensional registration of kilovoltage x-ray setup images.

    Science.gov (United States)

    Lamb, James M; Agazaryan, Nzhde; Low, Daniel A

    2013-10-01

    To determine whether kilovoltage x-ray projection radiation therapy setup images could be used to perform patient identification and detect gross errors in patient setup using a computer algorithm. Three patient cohorts treated using a commercially available image guided radiation therapy (IGRT) system that uses 2-dimensional to 3-dimensional (2D-3D) image registration were retrospectively analyzed: a group of 100 cranial radiation therapy patients, a group of 100 prostate cancer patients, and a group of 83 patients treated for spinal lesions. The setup images were acquired using fixed in-room kilovoltage imaging systems. In the prostate and cranial patient groups, localizations using image registration were performed between computed tomography (CT) simulation images from radiation therapy planning and setup x-ray images corresponding both to the same patient and to different patients. For the spinal patients, localizations were performed to the correct vertebral body, and to an adjacent vertebral body, using planning CTs and setup x-ray images from the same patient. An image similarity measure used by the IGRT system image registration algorithm was extracted from the IGRT system log files and evaluated as a discriminant for error detection. A threshold value of the similarity measure could be chosen to separate correct and incorrect patient matches and correct and incorrect vertebral body localizations with excellent accuracy for these patient cohorts. A 10-fold cross-validation using linear discriminant analysis yielded misclassification probabilities of 0.000, 0.0045, and 0.014 for the cranial, prostate, and spinal cases, respectively. An automated measure of the image similarity between x-ray setup images and corresponding planning CT images could be used to perform automated patient identification and detection of localization errors in radiation therapy treatments. Copyright © 2013 Elsevier Inc. All rights reserved.
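    The three records above describe the same study: a registration similarity measure extracted from the IGRT system log files is thresholded, via linear discriminant analysis with 10-fold cross-validation, to separate correct from incorrect localizations. The sketch below, which assumes labelled similarity scores are already available, shows one way such a discriminant and its misclassification rate could be estimated; it is not the vendor's algorithm.

```python
# Minimal sketch: classify setup images as correct/incorrect matches from a single
# similarity score. Inputs are assumed to be extracted from system log files.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score


def misclassification_rate(similarity, label, folds=10):
    """similarity: 1-D array of image-similarity values; label: 1 = correct match, 0 = error."""
    lda = LinearDiscriminantAnalysis()
    cv = StratifiedKFold(n_splits=folds, shuffle=True, random_state=0)
    accuracy = cross_val_score(lda, np.asarray(similarity).reshape(-1, 1), label, cv=cv)
    return 1.0 - accuracy.mean()
```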

  15. What is Fault Tolerant Control

    DEFF Research Database (Denmark)

    Blanke, Mogens; Frei, C. W.; Kraus, K.

    2000-01-01

Faults in automated processes will often cause undesired reactions and shut-down of a controlled plant, and the consequences could be damage to the plant, to personnel or the environment. Fault-tolerant control is the synonym for a set of recent techniques that were developed to increase plant availability and reduce the risk of safety hazards. Its aim is to prevent simple faults from developing into serious failures. Fault-tolerant control merges several disciplines to achieve this goal, including on-line fault diagnosis, automatic condition assessment and calculation of remedial actions when a fault is detected. The envelope of the possible remedial actions is wide. This paper introduces tools to analyze and explore structure and other fundamental properties of an automated system such that any redundancy in the process can be fully utilized to enhance safety and availability.

  16. Fault Tolerant Control Systems

    DEFF Research Database (Denmark)

    Bøgh, S. A.

    This thesis considered the development of fault tolerant control systems. The focus was on the category of automated processes that do not necessarily comprise a high number of identical sensors and actuators to maintain safe operation, but still have a potential for improving immunity to component...

  17. Improving treatment plan evaluation with automation

    Science.gov (United States)

    Covington, Elizabeth L.; Chen, Xiaoping; Younge, Kelly C.; Lee, Choonik; Matuszak, Martha M.; Kessler, Marc L.; Keranen, Wayne; Acosta, Eduardo; Dougherty, Ashley M.; Filpansick, Stephanie E.

    2016-01-01

    The goal of this work is to evaluate the effectiveness of Plan‐Checker Tool (PCT) which was created to improve first‐time plan quality, reduce patient delays, increase the efficiency of our electronic workflow, and standardize and automate the physics plan review in the treatment planning system (TPS). PCT uses an application programming interface to check and compare data from the TPS and treatment management system (TMS). PCT includes a comprehensive checklist of automated and manual checks that are documented when performed by the user as part of a plan readiness check for treatment. Prior to and during PCT development, errors identified during the physics review and causes of patient treatment start delays were tracked to prioritize which checks should be automated. Nineteen of 33 checklist items were automated, with data extracted with PCT. There was a 60% reduction in the number of patient delays in the six months after PCT release. PCT was successfully implemented for use on all external beam treatment plans in our clinic. While the number of errors found during the physics check did not decrease, automation of checks increased visibility of errors during the physics check, which led to decreased patient delays. The methods used here can be applied to any TMS and TPS that allows queries of the database. PACS number(s): 87.55.‐x, 87.55.N‐, 87.55.Qr, 87.55.tm, 89.20.Bb PMID:27929478
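    PCT's automated checks work by querying equivalent plan parameters from the treatment planning system (TPS) and the treatment management system (TMS) and flagging disagreements. The fragment below is a hypothetical illustration of that comparison pattern; the field names and the two data dictionaries are assumptions, not the PCT application programming interface.

```python
# Illustrative comparison of plan parameters pulled from two systems.
# Field names and the input dictionaries are hypothetical.
def run_automated_checks(tps_data, tms_data,
                         fields=("prescription_dose", "fraction_count", "treatment_machine")):
    """Compare plan parameters exported from the TPS against the TMS record."""
    report = {}
    for field in fields:
        tps_value = tps_data.get(field)
        tms_value = tms_data.get(field)
        report[field] = {"tps": tps_value,
                         "tms": tms_value,
                         "pass": tps_value == tms_value}
    return report
```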

  18. An advanced SEU tolerant latch based on error detection

    Science.gov (United States)

    Xu, Hui; Zhu, Jianwei; Lu, Xiaoping; Li, Jingzhao

    2018-05-01

    This paper proposes a latch that can mitigate SEUs via an error detection circuit. The error detection circuit is hardened by a C-element and a stacked PMOS. In the hold state, a particle strikes the latch or the error detection circuit may cause a fault logic state of the circuit. The error detection circuit can detect the upset node in the latch and the fault output will be corrected. The upset node in the error detection circuit can be corrected by the C-element. The power dissipation and propagation delay of the proposed latch are analyzed by HSPICE simulations. The proposed latch consumes about 77.5% less energy and 33.1% less propagation delay than the triple modular redundancy (TMR) latch. Simulation results demonstrate that the proposed latch can mitigate SEU effectively. Project supported by the National Natural Science Foundation of China (Nos. 61404001, 61306046), the Anhui Province University Natural Science Research Major Project (No. KJ2014ZD12), the Huainan Science and Technology Program (No. 2013A4011), and the National Natural Science Foundation of China (No. 61371025).

  19. Phenotyping for patient safety: algorithm development for electronic health record based automated adverse event and medical error detection in neonatal intensive care.

    Science.gov (United States)

    Li, Qi; Melton, Kristin; Lingren, Todd; Kirkendall, Eric S; Hall, Eric; Zhai, Haijun; Ni, Yizhao; Kaiser, Megan; Stoutenborough, Laura; Solti, Imre

    2014-01-01

    Although electronic health records (EHRs) have the potential to provide a foundation for quality and safety algorithms, few studies have measured their impact on automated adverse event (AE) and medical error (ME) detection within the neonatal intensive care unit (NICU) environment. This paper presents two phenotyping AE and ME detection algorithms (ie, IV infiltrations, narcotic medication oversedation and dosing errors) and describes manual annotation of airway management and medication/fluid AEs from NICU EHRs. From 753 NICU patient EHRs from 2011, we developed two automatic AE/ME detection algorithms, and manually annotated 11 classes of AEs in 3263 clinical notes. Performance of the automatic AE/ME detection algorithms was compared to trigger tool and voluntary incident reporting results. AEs in clinical notes were double annotated and consensus achieved under neonatologist supervision. Sensitivity, positive predictive value (PPV), and specificity are reported. Twelve severe IV infiltrates were detected. The algorithm identified one more infiltrate than the trigger tool and eight more than incident reporting. One narcotic oversedation was detected demonstrating 100% agreement with the trigger tool. Additionally, 17 narcotic medication MEs were detected, an increase of 16 cases over voluntary incident reporting. Automated AE/ME detection algorithms provide higher sensitivity and PPV than currently used trigger tools or voluntary incident-reporting systems, including identification of potential dosing and frequency errors that current methods are unequipped to detect. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  20. I trust it, but I don't know why: effects of implicit attitudes toward automation on trust in an automated system.

    Science.gov (United States)

    Merritt, Stephanie M; Heimbaugh, Heather; LaChapell, Jennifer; Lee, Deborah

    2013-06-01

    This study is the first to examine the influence of implicit attitudes toward automation on users' trust in automation. Past empirical work has examined explicit (conscious) influences on user level of trust in automation but has not yet measured implicit influences. We examine concurrent effects of explicit propensity to trust machines and implicit attitudes toward automation on trust in an automated system. We examine differential impacts of each under varying automation performance conditions (clearly good, ambiguous, clearly poor). Participants completed both a self-report measure of propensity to trust and an Implicit Association Test measuring implicit attitude toward automation, then performed an X-ray screening task. Automation performance was manipulated within-subjects by varying the number and obviousness of errors. Explicit propensity to trust and implicit attitude toward automation did not significantly correlate. When the automation's performance was ambiguous, implicit attitude significantly affected automation trust, and its relationship with propensity to trust was additive: Increments in either were related to increases in trust. When errors were obvious, a significant interaction between the implicit and explicit measures was found, with those high in both having higher trust. Implicit attitudes have important implications for automation trust. Users may not be able to accurately report why they experience a given level of trust. To understand why users trust or fail to trust automation, measurements of implicit and explicit predictors may be necessary. Furthermore, implicit attitude toward automation might be used as a lever to effectively calibrate trust.

  1. Gaussian-Based Smooth Dielectric Function: A Surface-Free Approach for Modeling Macromolecular Binding in Solvents

    Directory of Open Access Journals (Sweden)

    Arghya Chakravorty

    2018-03-01

Full Text Available Conventional techniques for modeling macromolecular solvation and its effect on binding in the framework of Poisson-Boltzmann based implicit solvent models make use of a geometrically defined surface to depict the separation of the macromolecular interior (low dielectric constant) from the solvent phase (high dielectric constant). Though this simplification saves time and computational resources without significantly compromising the accuracy of free energy calculations, it bypasses some of the key physicochemical properties of the solute-solvent interface, e.g., the altered flexibility of water molecules and of side chains at the interface, which results in dielectric properties different from both bulk water and the macromolecular interior, respectively. Here we present a Gaussian-based smooth dielectric model, an inhomogeneous dielectric distribution model that mimics the effect of macromolecular flexibility and captures the altered properties of surface-bound water molecules. Thus, the model delivers a smooth transition of dielectric properties from the macromolecular interior to the solvent phase, eliminating any unphysical surface separating the two phases. Using various examples of macromolecular binding, we demonstrate its utility and illustrate the comparison with the conventional 2-dielectric model. We also showcase some additional abilities of this model, viz. to account for the effect of electrolytes in the solution and to render the distribution profile of water across a lipid membrane.
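    A minimal numerical sketch of the idea behind a Gaussian-based smooth dielectric map is given below: each atom contributes a Gaussian density, the densities are combined so the total never exceeds one, and the dielectric value at each grid point interpolates between interior and solvent constants. The variance scaling and the dielectric constants are illustrative assumptions, not the published parameterization.

```python
# Sketch of a Gaussian-based smooth dielectric distribution on a grid.
# Constants and the sigma scaling are assumed values for illustration only.
import numpy as np

EPS_IN, EPS_OUT = 2.0, 80.0   # assumed interior / solvent dielectric constants


def gaussian_density(grid_points, atom_xyz, atom_radii, sigma=0.9):
    """Combine atom-wise Gaussian densities so that overlapping atoms never exceed 1."""
    one_minus = np.ones(len(grid_points))
    for xyz, radius in zip(atom_xyz, atom_radii):
        d2 = np.sum((grid_points - np.asarray(xyz)) ** 2, axis=1)
        one_minus *= 1.0 - np.exp(-d2 / (sigma ** 2 * radius ** 2))
    return 1.0 - one_minus


def smooth_dielectric(grid_points, atom_xyz, atom_radii):
    """Interpolate smoothly between interior and solvent dielectric constants."""
    rho = gaussian_density(grid_points, atom_xyz, atom_radii)
    return rho * EPS_IN + (1.0 - rho) * EPS_OUT
```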

  2. Automated lattice data generation

    Science.gov (United States)

    Ayyar, Venkitesh; Hackett, Daniel C.; Jay, William I.; Neil, Ethan T.

    2018-03-01

    The process of generating ensembles of gauge configurations (and measuring various observables over them) can be tedious and error-prone when done "by hand". In practice, most of this procedure can be automated with the use of a workflow manager. We discuss how this automation can be accomplished using Taxi, a minimal Python-based workflow manager built for generating lattice data. We present a case study demonstrating this technology.
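    As a generic illustration of the workflow-manager idea (tasks with dependencies that run once their prerequisites complete), the sketch below chains a configuration-generation step to a measurement step. It is not Taxi's actual interface; the task names and shell commands are hypothetical.

```python
# Generic dependency-ordered task runner; not the Taxi API.
import subprocess


class Task:
    def __init__(self, name, command, depends_on=()):
        self.name, self.command, self.depends_on = name, command, tuple(depends_on)
        self.done = False


def run(tasks):
    """Run each task once all of its dependencies have completed."""
    pending = list(tasks)
    while pending:
        for task in list(pending):
            if all(dep.done for dep in task.depends_on):
                subprocess.run(task.command, shell=True, check=True)
                task.done = True
                pending.remove(task)


generate = Task("generate_cfg", "echo generating gauge configuration")
measure = Task("measure_obs", "echo measuring observables", depends_on=(generate,))
run([generate, measure])
```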

  3. Operational proof of automation

    International Nuclear Information System (INIS)

    Jaerschky, R.; Reifenhaeuser, R.; Schlicht, K.

    1976-01-01

Automation of the power plant process may imply quite a number of problems. The automation of dynamic operations requires complicated programmes, often interfering in several branched areas. This reduces clarity for the operating and maintenance staff, whilst increasing the possibilities of errors. The synthesis and the organization of standardized equipment have proved very successful. The possibilities offered by this kind of automation for improving the operation of power plants will only sufficiently and correctly be turned to profit, however, if the application of these equipment techniques is further improved and if it stands in a certain ratio with a definite efficiency. (orig.) [de

  4. Operational proof of automation

    International Nuclear Information System (INIS)

    Jaerschky, R.; Schlicht, K.

    1977-01-01

    Automation of the power plant process may imply quite a number of problems. The automation of dynamic operations requires complicated programmes often interfering in several branched areas. This reduces clarity for the operating and maintenance staff, whilst increasing the possibilities of errors. The synthesis and the organization of standardized equipment have proved very successful. The possibilities offered by this kind of automation for improving the operation of power plants will only sufficiently and correctly be turned to profit, however, if the application of these equipment techniques is further improved and if it stands in a certain ratio with a definite efficiency. (orig.) [de

  5. The impact of cockpit automation on crew coordination and communication. Volume 1: Overview, LOFT evaluations, error severity, and questionnaire data

    Science.gov (United States)

    Wiener, Earl L.; Chidester, Thomas R.; Kanki, Barbara G.; Palmer, Everett A.; Curry, Renwick E.; Gregorich, Steven E.

    1991-01-01

    The purpose was to examine, jointly, cockpit automation and social processes. Automation was varied by the choice of two radically different versions of the DC-9 series aircraft, the traditional DC-9-30, and the glass cockpit derivative, the MD-88. Airline pilot volunteers flew a mission in the simulator for these aircraft. Results show that the performance differences between the crews of the two aircraft were generally small, but where there were differences, they favored the DC-9. There were no criteria on which the MD-88 crews performed better than the DC-9 crews. Furthermore, DC-9 crews rated their own workload as lower than did the MD-88 pilots. There were no significant differences between the two aircraft types with respect to the severity of errors committed during the Line-Oriented Flight Training (LOFT) flight. The attitude questionnaires provided some interesting insights, but failed to distinguish between DC-9 and MD-88 crews.

  6. A public database of macromolecular diffraction experiments.

    Science.gov (United States)

    Grabowski, Marek; Langner, Karol M; Cymborowski, Marcin; Porebski, Przemyslaw J; Sroka, Piotr; Zheng, Heping; Cooper, David R; Zimmerman, Matthew D; Elsliger, Marc André; Burley, Stephen K; Minor, Wladek

    2016-11-01

    The low reproducibility of published experimental results in many scientific disciplines has recently garnered negative attention in scientific journals and the general media. Public transparency, including the availability of `raw' experimental data, will help to address growing concerns regarding scientific integrity. Macromolecular X-ray crystallography has led the way in requiring the public dissemination of atomic coordinates and a wealth of experimental data, making the field one of the most reproducible in the biological sciences. However, there remains no mandate for public disclosure of the original diffraction data. The Integrated Resource for Reproducibility in Macromolecular Crystallography (IRRMC) has been developed to archive raw data from diffraction experiments and, equally importantly, to provide related metadata. Currently, the database of our resource contains data from 2920 macromolecular diffraction experiments (5767 data sets), accounting for around 3% of all depositions in the Protein Data Bank (PDB), with their corresponding partially curated metadata. IRRMC utilizes distributed storage implemented using a federated architecture of many independent storage servers, which provides both scalability and sustainability. The resource, which is accessible via the web portal at http://www.proteindiffraction.org, can be searched using various criteria. All data are available for unrestricted access and download. The resource serves as a proof of concept and demonstrates the feasibility of archiving raw diffraction data and associated metadata from X-ray crystallographic studies of biological macromolecules. The goal is to expand this resource and include data sets that failed to yield X-ray structures in order to facilitate collaborative efforts that will improve protein structure-determination methods and to ensure the availability of `orphan' data left behind for various reasons by individual investigators and/or extinct structural genomics

  7. Human-centered automation of testing, surveillance and maintenance

    International Nuclear Information System (INIS)

    Bhatt, S.C.; Sun, B.K.H.

    1991-01-01

Manual surveillance and testing of instrumentation, control and protection systems at nuclear power plants involves system and human errors which can lead to substantial plant down time. Frequent manual testing can also contribute significantly to operation and maintenance cost. Automation technology offers potential for prudent applications at the power plant to reduce testing errors and cost. To help address the testing problems and to harness the benefit of automation application, input from utilities is obtained on suitable automation approaches. This paper includes lessons from successful past experience at a few plants where some islands of automation exist. The results are summarized as a set of specifications for semi-automatic testing. A human-centered automation methodology is proposed, with guidelines given for the optimal human/computer division of tasks. Implementation obstacles for significant changes of testing practices are identified and methods acceptable to nuclear power plants for addressing these obstacles have been suggested.

  8. 75 FR 68214 - Flubendiamide; Pesticide Tolerances; Technical Correction

    Science.gov (United States)

    2010-11-05

    ... typographical errors in the referenced rule, specifically, to revise incorrect tolerance values for the... tolerance value for the established tolerances for corn, field, grain (0.02 ppm); corn, field, stover (0.15... field trial and processing data, these tolerance values should be revised to 0.03 ppm; 15 ppm; 25 ppm...

  9. Automated lattice data generation

    Directory of Open Access Journals (Sweden)

    Ayyar Venkitesh

    2018-01-01

Full Text Available The process of generating ensembles of gauge configurations (and measuring various observables over them) can be tedious and error-prone when done “by hand”. In practice, most of this procedure can be automated with the use of a workflow manager. We discuss how this automation can be accomplished using Taxi, a minimal Python-based workflow manager built for generating lattice data. We present a case study demonstrating this technology.

  10. Thresholds of surface codes on the general lattice structures suffering biased error and loss

    International Nuclear Information System (INIS)

    Tokunaga, Yuuki; Fujii, Keisuke

    2014-01-01

    A family of surface codes with general lattice structures is proposed. We can control the error tolerances against bit and phase errors asymmetrically by changing the underlying lattice geometries. The surface codes on various lattices are found to be efficient in the sense that their threshold values universally approach the quantum Gilbert-Varshamov bound. We find that the error tolerance of the surface codes depends on the connectivity of the underlying lattices; the error chains on a lattice of lower connectivity are easier to correct. On the other hand, the loss tolerance of the surface codes exhibits an opposite behavior; the logical information on a lattice of higher connectivity has more robustness against qubit loss. As a result, we come upon a fundamental trade-off between error and loss tolerances in the family of surface codes with different lattice geometries

  11. Thresholds of surface codes on the general lattice structures suffering biased error and loss

    Energy Technology Data Exchange (ETDEWEB)

    Tokunaga, Yuuki [NTT Secure Platform Laboratories, NTT Corporation, 3-9-11 Midori-cho, Musashino, Tokyo 180-8585, Japan and Japan Science and Technology Agency, CREST, 5 Sanban-cho, Chiyoda-ku, Tokyo 102-0075 (Japan); Fujii, Keisuke [Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka 560-8531 (Japan)

    2014-12-04

    A family of surface codes with general lattice structures is proposed. We can control the error tolerances against bit and phase errors asymmetrically by changing the underlying lattice geometries. The surface codes on various lattices are found to be efficient in the sense that their threshold values universally approach the quantum Gilbert-Varshamov bound. We find that the error tolerance of the surface codes depends on the connectivity of the underlying lattices; the error chains on a lattice of lower connectivity are easier to correct. On the other hand, the loss tolerance of the surface codes exhibits an opposite behavior; the logical information on a lattice of higher connectivity has more robustness against qubit loss. As a result, we come upon a fundamental trade-off between error and loss tolerances in the family of surface codes with different lattice geometries.

  12. SHEAN (Simplified Human Error Analysis code) and automated THERP

    International Nuclear Information System (INIS)

    Wilson, J.R.

    1993-01-01

    One of the most widely used human error analysis tools is THERP (Technique for Human Error Rate Prediction). Unfortunately, this tool has disadvantages. The Nuclear Regulatory Commission, realizing these drawbacks, commissioned Dr. Swain, the author of THERP, to create a simpler, more consistent tool for deriving human error rates. That effort produced the Accident Sequence Evaluation Program Human Reliability Analysis Procedure (ASEP), which is more conservative than THERP, but a valuable screening tool. ASEP involves answering simple questions about the scenario in question, and then looking up the appropriate human error rate in the indicated table (THERP also uses look-up tables, but four times as many). The advantages of ASEP are that human factors expertise is not required, and the training to use the method is minimal. Although not originally envisioned by Dr. Swain, the ASEP approach actually begs to be computerized. That WINCO did, calling the code SHEAN, for Simplified Human Error ANalysis. The code was done in TURBO Basic for IBM or IBM-compatible MS-DOS, for fast execution. WINCO is now in the process of comparing this code against THERP for various scenarios. This report provides a discussion of SHEAN
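    The ASEP screening approach described here reduces to answering a few scenario questions and reading a conservative human error probability (HEP) off a table. The sketch below shows that look-up structure in Python; the question keys and the numerical values are placeholders for illustration, not the published ASEP or THERP tables.

```python
# Look-up-table sketch of a screening HEP estimator.
# Keys and probabilities are placeholders, not the actual ASEP values.
HEP_TABLE = {
    ("post_accident", "step_by_step", "high_stress"): 0.05,
    ("post_accident", "step_by_step", "moderate_stress"): 0.02,
    ("routine", "step_by_step", "moderate_stress"): 0.003,
}


def lookup_hep(phase, task_type, stress):
    """Return a screening human error probability for the answered scenario questions."""
    return HEP_TABLE.get((phase, task_type, stress), 1.0)  # conservative default
```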

  13. A fault-tolerant one-way quantum computer

    International Nuclear Information System (INIS)

    Raussendorf, R.; Harrington, J.; Goyal, K.

    2006-01-01

    We describe a fault-tolerant one-way quantum computer on cluster states in three dimensions. The presented scheme uses methods of topological error correction resulting from a link between cluster states and surface codes. The error threshold is 1.4% for local depolarizing error and 0.11% for each source in an error model with preparation-, gate-, storage-, and measurement errors

  14. Adaptive algorithm of selecting optimal variant of errors detection system for digital means of automation facility of oil and gas complex

    Science.gov (United States)

    Poluyan, A. Y.; Fugarov, D. D.; Purchina, O. A.; Nesterchuk, V. V.; Smirnova, O. V.; Petrenkova, S. B.

    2018-05-01

To date, the problems associated with the detection of errors in digital equipment (DE) systems for the automation of explosive objects of the oil and gas complex are extremely pressing. This problem is especially acute for facilities where a violation of the accuracy of the DE will inevitably lead to man-made disasters and substantial material damage; at such facilities, diagnostics of the accuracy of DE operation is one of the main elements of the industrial safety management system. In this work, the problem of selecting the optimal variant of the error detection system according to a validation criterion is solved. Known methods for solving such problems have an exponential estimate of labor intensity. Thus, with a view to reducing the time needed to solve the problem, the validation criterion is implemented as an adaptive bionic algorithm. Bionic algorithms (BA) have proven effective in solving optimization problems. The advantages of bionic search include adaptability, learning ability, parallelism, and the ability to build hybrid systems based on combining [1].
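    To make the bionic-search idea concrete, the sketch below runs a small genetic algorithm over binary encodings of candidate error-detection variants and keeps the encoding with the best score. The fitness function, the encoding length, and all parameters are assumptions for illustration; they do not reproduce the authors' validation criterion.

```python
# Minimal genetic (bionic) search over binary variant encodings; all numbers are assumed.
import random


def fitness(variant):
    """Placeholder validation criterion: reward coverage bits, penalize cost bits."""
    coverage = sum(variant[:8])
    cost = sum(variant[8:])
    return coverage - 0.5 * cost


def genetic_search(n_bits=12, pop_size=30, generations=50, mutation=0.05):
    population = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)          # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([1 - bit if random.random() < mutation else bit
                             for bit in child])        # bit-flip mutation
        population = parents + children
    return max(population, key=fitness)
```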

  15. Trauma Quality Improvement: Reducing Triage Errors by Automating the Level Assignment Process.

    Science.gov (United States)

    Stonko, David P; O Neill, Dillon C; Dennis, Bradley M; Smith, Melissa; Gray, Jeffrey; Guillamondegui, Oscar D

    2018-04-12

    Trauma patients are triaged by the severity of their injury or need for intervention while en route to the trauma center according to trauma activation protocols that are institution specific. Significant research has been aimed at improving these protocols in order to optimize patient outcomes while striving for efficiency in care. However, it is known that patients are often undertriaged or overtriaged because protocol adherence remains imperfect. The goal of this quality improvement (QI) project was to improve this adherence, and thereby reduce the triage error. It was conducted as part of the formal undergraduate medical education curriculum at this institution. A QI team was assembled and baseline data were collected, then 2 Plan-Do-Study-Act (PDSA) cycles were implemented sequentially. During the first cycle, a novel web tool was developed and implemented in order to automate the level assignment process (it takes EMS-provided data and automatically determines the level); the tool was based on the existing trauma activation protocol. The second PDSA cycle focused on improving triage accuracy in isolated, less than 10% total body surface area burns, which we identified to be a point of common error. Traumas were reviewed and tabulated at the end of each PDSA cycle, and triage accuracy was followed with a run chart. This study was performed at Vanderbilt University Medical Center and Medical School, which has a large level 1 trauma center covering over 75,000 square miles, and which sees urban, suburban, and rural trauma. The baseline assessment period and each PDSA cycle lasted 2 weeks. During this time, all activated, adult, direct traumas were reviewed. There were 180 patients during the baseline period, 189 after the first test of change, and 150 after the second test of change. All were included in analysis. Of 180 patients, 30 were inappropriately triaged during baseline analysis (3 undertriaged and 27 overtriaged) versus 16 of 189 (3 undertriaged and 13
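    The web tool described above automates level assignment by mapping EMS-provided data onto the institutional activation protocol. The fragment below is a purely illustrative, rule-based sketch of that mapping; the criteria shown are generic examples and do not reproduce the institution-specific protocol.

```python
# Purely illustrative activation rules; real criteria are institution-specific.
def assign_trauma_level(ems):
    """Map EMS-provided fields to an activation level under assumed example rules."""
    if ems.get("systolic_bp", 200) < 90 or ems.get("gcs", 15) <= 8 \
            or ems.get("intubated", False):
        return 1          # highest activation
    if ems.get("penetrating_torso_injury", False):
        return 1
    if ems.get("long_bone_fracture", False):
        return 2
    return 3              # routine evaluation
```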

  16. A pattern-based method to automate mask inspection files

    Science.gov (United States)

    Kamal Baharin, Ezni Aznida Binti; Muhsain, Mohamad Fahmi Bin; Ahmad Ibrahim, Muhamad Asraf Bin; Ahmad Noorhani, Ahmad Nurul Ihsan Bin; Sweis, Jason; Lai, Ya-Chieh; Hurat, Philippe

    2017-03-01

Mask inspection is a critical step in the mask manufacturing process in order to ensure all dimensions printed are within the needed tolerances. This becomes even more challenging as device nodes shrink and the complexity of the tapeout increases. Thus, the number of measurement points and their critical dimension (CD) types is increasing to ensure the quality of the mask. In addition to the mask quality, a significant amount of manpower is needed when the preparation and debugging of this process are not automated. By utilizing a novel pattern search technology with the ability to measure and report match region scan-line (edge) measurements, we can create a flow to find, measure and mark all metrology locations of interest and provide this automated report to the mask shop for inspection. A digital library is created based on the technology product and node which contains the test patterns to be measured. This paper will discuss how these digital libraries will be generated and then utilized. As a time-critical part of the manufacturing process, this can also reduce the data preparation cycle time, minimize the amount of manual/human error in naming and measuring the various locations, reduce the risk of wrong/missing CD locations, and reduce the amount of manpower needed overall. We will also review an example pattern and how the reporting structure to the mask shop can be processed. This entire process can now be fully automated.

  17. Understanding the effect of workload on automation use for younger and older adults.

    Science.gov (United States)

    McBride, Sara E; Rogers, Wendy A; Fisk, Arthur D

    2011-12-01

    This study examined how individuals, younger and older, interacted with an imperfect automated system. The impact of workload on performance and automation use was also investigated. Automation is used in situations characterized by varying levels of workload. As automated systems spread to domains such as transportation and the home, a diverse population of users will interact with automation. Research is needed to understand how different segments of the population use automation. Workload was systematically manipulated to create three levels (low, moderate, high) in a dual-task scenario in which participants interacted with a 70% reliable automated aid. Two experiments were conducted to assess automation use for younger and older adults. Both younger and older adults relied on the automation more than they complied with it. Among younger adults, high workload led to poorer performance and higher compliance, even when that compliance was detrimental. Older adults' performance was negatively affected by workload, but their compliance and reliance were unaffected. Younger and older adults were both able to use and double-check an imperfect automated system. Workload affected how younger adults complied with automation, particularly with regard to detecting automation false alarms. Older adults tended to comply and rely at fairly high rates overall, and this did not change with increased workload. Training programs for imperfect automated systems should vary workload and provide feedback about error types, and strategies for identifying errors. The ability to identify automation errors varies across individuals, thereby necessitating training.

  18. An automated, quantitative, and case-specific evaluation of deformable image registration in computed tomography images

    Science.gov (United States)

    Kierkels, R. G. J.; den Otter, L. A.; Korevaar, E. W.; Langendijk, J. A.; van der Schaaf, A.; Knopf, A. C.; Sijtsema, N. M.

    2018-02-01

A prerequisite for adaptive dose-tracking in radiotherapy is the assessment of the deformable image registration (DIR) quality. In this work, various metrics that quantify DIR uncertainties are investigated using realistic deformation fields of 26 head and neck and 12 lung cancer patients. Metrics related to the physiological feasibility of the deformation (the Jacobian determinant, harmonic energy (HE), and octahedral shear strain (OSS)) and to its numerical robustness (the inverse consistency error (ICE), transitivity error (TE), and distance discordance metric (DDM)) were investigated. The deformable registrations were performed using a B-spline transformation model. The DIR error metrics were log-transformed and correlated (Pearson) against the log-transformed ground-truth error on a voxel level. Correlations of r  ⩾  0.5 were found for the DDM and HE. Given a DIR tolerance threshold of 2.0 mm and a negative predictive value of 0.90, the DDM and HE thresholds were 0.49 mm and 0.014, respectively. In conclusion, the log-transformed DDM and HE can be used to identify voxels at risk for large DIR errors with a large negative predictive value. The HE and/or DDM can therefore be used to perform automated quality assurance of each CT-based DIR for head and neck and lung cancer patients.
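    Two of the feasibility metrics named above can be computed directly from a displacement field sampled on a regular grid. The sketch below assumes a (3, nx, ny, nz) displacement array and keeps voxel-spacing handling deliberately simple; it evaluates the Jacobian determinant and a harmonic-energy-style measure as an illustration, not the authors' implementation.

```python
# Sketch of DIR quality metrics from a displacement field; spacing handling simplified.
import numpy as np


def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
    """disp: array of shape (3, nx, ny, nz) holding the displacement components."""
    grads = [np.gradient(disp[i], *spacing) for i in range(3)]  # grads[i][j] = d u_i / d x_j
    jac = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            jac[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)  # I + grad(u)
    return np.linalg.det(jac)


def harmonic_energy(disp, spacing=(1.0, 1.0, 1.0)):
    """Mean squared Frobenius norm of the displacement gradient."""
    grads = [np.gradient(disp[i], *spacing) for i in range(3)]
    return np.mean([g ** 2 for gi in grads for g in gi])
```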

  19. Automated Liquibase Generator And ValidatorALGV

    Directory of Open Access Journals (Sweden)

    Manik Jain

    2015-08-01

Full Text Available This paper presents an automation tool, ALGV (Automated Liquibase Generator and Validator), for the automated generation and verification of liquibase scripts. Liquibase is one of the most efficient ways of applying and persisting changes to a database schema. Since its invention by Nathan Voxland [1] it has become the de facto standard for database change management. The advantages of using liquibase scripts over traditional sql queries range from version control to reusing the same scripts over multiple database platforms. Irrespective of its advantages, manual creation of liquibase scripts takes a lot of effort and is sometimes error-prone. ALGV helps to reduce the time-consuming liquibase script generation, manual typing effort, possible error occurrence, and the manual verification process and time by 75%. Automating the liquibase generation process also removes the burden of recollecting the specific tags to be used for a particular change. Moreover, developers can concentrate on the business logic and business data rather than wasting their precious efforts in writing files.

  20. Fault-tolerant measurement-based quantum computing with continuous-variable cluster states.

    Science.gov (United States)

    Menicucci, Nicolas C

    2014-03-28

    A long-standing open question about Gaussian continuous-variable cluster states is whether they enable fault-tolerant measurement-based quantum computation. The answer is yes. Initial squeezing in the cluster above a threshold value of 20.5 dB ensures that errors from finite squeezing acting on encoded qubits are below the fault-tolerance threshold of known qubit-based error-correcting codes. By concatenating with one of these codes and using ancilla-based error correction, fault-tolerant measurement-based quantum computation of theoretically indefinite length is possible with finitely squeezed cluster states.

  1. Macromolecular Networks Containing Fluorinated Cyclic Moieties

    Science.gov (United States)

    2015-12-12

Briefing charts; dates covered: 17 Nov 2015 – 12 Dec 2015. Title: Macromolecular Networks Containing Fluorinated Cyclic Moieties, presented 12 December 2015 by Andrew J. Guenthner, Scott T. Iacono, Cynthia A. Corley, Christopher M. Sahagun, and Kevin R. Lamison. Points noted on the charts include good flame, smoke, and toxicity characteristics and low water uptake with a near-zero coefficient of hygroscopic expansion. DISTRIBUTION A.

  2. Design and application of a C++ macromolecular class library.

    Science.gov (United States)

    Chang, W; Shindyalov, I N; Pu, C; Bourne, P E

    1994-01-01

PDBlib is an extensible object oriented class library written in C++ for representing the 3-dimensional structure of biological macromolecules. PDBlib forms the kernel of a larger software framework being developed for assisting in knowledge discovery from macromolecular structure data. The software design strategy used by PDBlib, how the library may be used and several prototype applications that use the library are summarized. PDBlib represents the structural features of proteins, DNA, RNA, and complexes thereof, at a level of detail on a par with that which can be parsed from a Protein Data Bank (PDB) entry. However, the memory resident representation of the macromolecule is independent of the PDB entry and can be obtained from other back-end data sources, for example, existing relational databases and our own object oriented database (OOPDB) built on top of the commercial object oriented database, ObjectStore. At the front-end are several prototype applications that use the library: Macromolecular Query Language (MMQL) is based on a separate class library (MMQLlib) for building complex queries pertaining to macromolecular structure; PDBtool is an interactive structure verification tool; and PDBview is a structure rendering tool used either as a standalone tool or as part of another application. Each of these software components is described. All software is available via anonymous ftp from cuhhca.hhmi.columbia.edu.

  3. Error rates and resource overheads of encoded three-qubit gates

    Science.gov (United States)

    Takagi, Ryuji; Yoder, Theodore J.; Chuang, Isaac L.

    2017-10-01

    A non-Clifford gate is required for universal quantum computation, and, typically, this is the most error-prone and resource-intensive logical operation on an error-correcting code. Small, single-qubit rotations are popular choices for this non-Clifford gate, but certain three-qubit gates, such as Toffoli or controlled-controlled-Z (ccz), are equivalent options that are also more suited for implementing some quantum algorithms, for instance, those with coherent classical subroutines. Here, we calculate error rates and resource overheads for implementing logical ccz with pieceable fault tolerance, a nontransversal method for implementing logical gates. We provide a comparison with a nonlocal magic-state scheme on a concatenated code and a local magic-state scheme on the surface code. We find the pieceable fault-tolerance scheme particularly advantaged over magic states on concatenated codes and in certain regimes over magic states on the surface code. Our results suggest that pieceable fault tolerance is a promising candidate for fault tolerance in a near-future quantum computer.

  4. Automated contouring error detection based on supervised geometric attribute distribution models for radiation therapy: A general strategy

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Hsin-Chen; Tan, Jun; Dolly, Steven; Kavanaugh, James; Harold Li, H.; Altman, Michael; Gay, Hiram; Thorstad, Wade L.; Mutic, Sasa; Li, Hua, E-mail: huli@radonc.wustl.edu [Department of Radiation Oncology, Washington University, St. Louis, Missouri 63110 (United States); Anastasio, Mark A. [Department of Biomedical Engineering, Washington University, St. Louis, Missouri 63110 (United States); Low, Daniel A. [Department of Radiation Oncology, University of California Los Angeles, Los Angeles, California 90095 (United States)

    2015-02-15

    Purpose: One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Methods: Considering the radiation therapy structures’ geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine out of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models. These training data and the remaining fifteen testing data sets

  5. Automated contouring error detection based on supervised geometric attribute distribution models for radiation therapy: A general strategy

    International Nuclear Information System (INIS)

    Chen, Hsin-Chen; Tan, Jun; Dolly, Steven; Kavanaugh, James; Harold Li, H.; Altman, Michael; Gay, Hiram; Thorstad, Wade L.; Mutic, Sasa; Li, Hua; Anastasio, Mark A.; Low, Daniel A.

    2015-01-01

    Purpose: One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Methods: Considering the radiation therapy structures’ geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine out of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models. These training data and the remaining fifteen testing data sets
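    The two records above describe a strategy that flags a contour whose geometric attributes (centroid, volume, shape) fall outside the variation learned from verified training contours. The sketch below shows the simplest version of that idea, a per-feature z-score test; the feature layout and the threshold are assumptions, whereas the actual method uses deformable attribute-distribution models with ROC-optimized parameters.

```python
# Simplified attribute-distribution check: flag contours far outside the training spread.
import numpy as np


def fit_attribute_model(training_attributes):
    """training_attributes: array (n_patients, n_features) of centroid/volume/shape features."""
    mean = training_attributes.mean(axis=0)
    std = training_attributes.std(axis=0) + 1e-9   # avoid division by zero
    return mean, std


def flag_contouring_error(attributes, model, z_threshold=3.0):
    """Return (is_suspect, z_scores) for one contour's attribute vector."""
    mean, std = model
    z_scores = np.abs((attributes - mean) / std)
    return bool(np.any(z_scores > z_threshold)), z_scores
```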

  6. Understanding the Effect of Workload on Automation Use for Younger and Older Adults

    Science.gov (United States)

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2018-01-01

    Objective This study examined how individuals, younger and older, interacted with an imperfect automated system. The impact of workload on performance and automation use was also investigated. Background Automation is used in situations characterized by varying levels of workload. As automated systems spread to domains such as transportation and the home, a diverse population of users will interact with automation. Research is needed to understand how different segments of the population use automation. Method Workload was systematically manipulated to create three levels (low, moderate, high) in a dual-task scenario in which participants interacted with a 70% reliable automated aid. Two experiments were conducted to assess automation use for younger and older adults. Results Both younger and older adults relied on the automation more than they complied with it. Among younger adults, high workload led to poorer performance and higher compliance, even when that compliance was detrimental. Older adults’ performance was negatively affected by workload, but their compliance and reliance were unaffected. Conclusion Younger and older adults were both able to use and double-check an imperfect automated system. Workload affected how younger adults complied with automation, particularly with regard to detecting automation false alarms. Older adults tended to comply and rely at fairly high rates overall, and this did not change with increased workload. Application Training programs for imperfect automated systems should vary workload and provide feedback about error types, and strategies for identifying errors. The ability to identify automation errors varies across individuals, thereby necessitating training. PMID:22235529

  7. Diagnosis and fault-tolerant control

    CERN Document Server

    Blanke, Mogens; Lunze, Jan; Staroswiecki, Marcel

    2016-01-01

    Fault-tolerant control aims at a gradual shutdown response in automated systems when faults occur. It satisfies the industrial demand for enhanced availability and safety, in contrast to traditional reactions to faults, which bring about sudden shutdowns and loss of availability. The book presents effective model-based analysis and design methods for fault diagnosis and fault-tolerant control. Architectural and structural models are used to analyse the propagation of the fault through the process, to test the fault detectability and to find the redundancies in the process that can be used to ensure fault tolerance. It also introduces design methods suitable for diagnostic systems and fault-tolerant controllers for continuous processes that are described by analytical models of discrete-event systems represented by automata. The book is suitable for engineering students, engineers in industry and researchers who wish to get an overview of the variety of approaches to process diagnosis and fault-tolerant contro...

  8. Automated side-chain model building and sequence assignment by template matching

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.

    2002-01-01

    A method for automated macromolecular side-chain model building and for aligning the sequence to the map is described. An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer
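
    The alignment step described above can be illustrated with a short Python sketch (not the RESOLVE implementation): given a per-position amino-acid probability matrix obtained from template matching, every alignment offset of a main-chain segment against the protein sequence is scored, and a flat prior turns the scores into posterior probabilities. All names are illustrative.

        # Hedged sketch of Bayesian sequence-to-map alignment from side-chain probabilities.
        import numpy as np

        AA = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids

        def alignment_posteriors(prob_matrix, sequence):
            """prob_matrix: (n_positions, 20) template-matching probabilities.
            Returns the posterior probability of each alignment offset (flat prior)."""
            n = prob_matrix.shape[0]
            log_scores = []
            for offset in range(len(sequence) - n + 1):
                residues = [AA.index(a) for a in sequence[offset:offset + n]]
                log_scores.append(np.sum(np.log(prob_matrix[np.arange(n), residues] + 1e-12)))
            log_scores = np.array(log_scores)
            weights = np.exp(log_scores - log_scores.max())   # subtract max to avoid underflow
            return weights / weights.sum()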

  9. Fluid Physics and Macromolecular Crystal Growth in Microgravity

    Science.gov (United States)

    Helliwell, John R.; Snell, Edward H.; Chayen, Naomi E.; Judge, Russell A.; Boggon, Titus J.; Pusey, M. L.; Rose, M. Franklin (Technical Monitor)

    2000-01-01

    The first protein crystallization experiment in microgravity was launched in April 1981 and used Germany's Technologische Experimente unter Schwerelosigkeit (TEXUS 3) sounding rocket. The protein β-galactosidase (molecular weight 465 kDa) was chosen as the sample with a liquid-liquid diffusion growth method. A sliding device brought the protein, buffer and salt solution into contact when microgravity was reached. The sounding rocket gave six minutes of microgravity time with a cine camera and schlieren optics used to monitor the experiment, a single growth cell. In microgravity a strictly laminar diffusion process was observed in contrast to the turbulent convection seen on the ground. Several single crystals, approximately 100 µm in length, were formed in the flight, which were of inferior but comparable visual quality to those grown on the ground over several days. A second experiment using the same protocol but with solutions cooled to -8°C (kept liquid with glycerol antifreeze) again showed laminar diffusion. The science of macromolecular structural crystallography involves crystallization of the macromolecule followed by use of the crystal for X-ray diffraction experiments to determine the three-dimensional structure of the macromolecule. Neutron protein crystallography is employed for elucidation of H/D exchange and for improved definition of the bound solvent (D2O). The structural information enables an understanding of how the molecule functions with important potential for rational drug design, improved efficiency of industrial enzymes and agricultural chemical development. The removal of turbulent convection and sedimentation in microgravity, and the assumption that higher quality crystals will be produced, has given rise to the growing number of crystallization experiments now flown. Many experiments can be flown in a small volume with simple, largely automated, equipment - an ideal combination for a microgravity experiment. The term "protein crystal growth

  10. Model supported sensor information platform for the transversal dynamics and longitudinal dynamics of vehicles. Applications to error diagnostics and error tolerance; Modellgestuetzte Sensorinformationsplattform fuer die Quer- und Laengsdynamik von Kraftfahrzeugen. Anwendungen zur Fehlerdiagnose und Fehlertoleranz

    Energy Technology Data Exchange (ETDEWEB)

    Halbe, Iris

    2008-07-01

    This contribution addresses engineers and scientists working in motor vehicle dynamics. A sensor information platform is designed for monitoring the signals measured in series-production vehicles; it supplies verified sensor information to the monitoring systems and provides estimated parameters and states for the control functions. The basis is a comprehensive description of the lateral, longitudinal and roll dynamics using different physical and experimental models. Building on these models, several fault detection procedures (observers, parity equations, parameter estimation) are presented and their suitability for vehicle dynamics is examined. Based on the fault detection results, the faulty sensor is diagnosed using fuzzy logic so that it can be replaced with a computed signal. This fault tolerance is made possible by a two-track model with a Kalman filter. In addition, states and parameters are estimated with different methods.

  11. Principles and Overview of Sampling Methods for Modeling Macromolecular Structure and Dynamics.

    Science.gov (United States)

    Maximova, Tatiana; Moffatt, Ryan; Ma, Buyong; Nussinov, Ruth; Shehu, Amarda

    2016-04-01

    Investigation of macromolecular structure and dynamics is fundamental to understanding how macromolecules carry out their functions in the cell. Significant advances have been made toward this end in silico, with a growing number of computational methods proposed yearly to study and simulate various aspects of macromolecular structure and dynamics. This review aims to provide an overview of recent advances, focusing primarily on methods proposed for exploring the structure space of macromolecules in isolation and in assemblies for the purpose of characterizing equilibrium structure and dynamics. In addition to surveying recent applications that showcase current capabilities of computational methods, this review highlights state-of-the-art algorithmic techniques proposed to overcome challenges posed in silico by the disparate spatial and time scales accessed by dynamic macromolecules. This review is not meant to be exhaustive, as such an endeavor is impossible, but rather aims to balance breadth and depth of strategies for modeling macromolecular structure and dynamics for a broad audience of novices and experts.

  12. Outrunning free radicals in room-temperature macromolecular crystallography

    International Nuclear Information System (INIS)

    Owen, Robin L.; Axford, Danny; Nettleship, Joanne E.; Owens, Raymond J.; Robinson, James I.; Morgan, Ann W.; Doré, Andrew S.; Lebon, Guillaume; Tate, Christopher G.; Fry, Elizabeth E.; Ren, Jingshan; Stuart, David I.; Evans, Gwyndaf

    2012-01-01

    A systematic increase in lifetime is observed in room-temperature protein and virus crystals through the use of reduced exposure times and a fast detector. A significant increase in the lifetime of room-temperature macromolecular crystals is reported through the use of a high-brilliance X-ray beam, reduced exposure times and a fast-readout detector. This is attributed to the ability to collect diffraction data before hydroxyl radicals can propagate through the crystal, fatally disrupting the lattice. Hydroxyl radicals are shown to be trapped in amorphous solutions at 100 K. The trend in crystal lifetime was observed in crystals of a soluble protein (immunoglobulin γ Fc receptor IIIa), a virus (bovine enterovirus serotype 2) and a membrane protein (human A2A adenosine G-protein coupled receptor). The observation of a similar effect in all three systems provides clear evidence for a common optimal strategy for room-temperature data collection and will inform the design of future synchrotron beamlines and detectors for macromolecular crystallography

  13. Outrunning free radicals in room-temperature macromolecular crystallography

    Energy Technology Data Exchange (ETDEWEB)

    Owen, Robin L., E-mail: robin.owen@diamond.ac.uk; Axford, Danny [Diamond Light Source, Harwell Science and Innovation Campus, Didcot OX11 0DE (United Kingdom); Nettleship, Joanne E.; Owens, Raymond J. [Rutherford Appleton Laboratory, Didcot OX11 0FA (United Kingdom); The Henry Wellcome Building for Genomic Medicine, Roosevelt Drive, Oxford OX3 7BN (United Kingdom); Robinson, James I.; Morgan, Ann W. [University of Leeds, Leeds LS9 7FT (United Kingdom); Doré, Andrew S. [Heptares Therapeutics Ltd, BioPark, Welwyn Garden City AL7 3AX (United Kingdom); Lebon, Guillaume; Tate, Christopher G. [MRC Laboratory of Molecular Biology, Hills Road, Cambridge CB2 0QH (United Kingdom); Fry, Elizabeth E.; Ren, Jingshan [The Henry Wellcome Building for Genomic Medicine, Roosevelt Drive, Oxford OX3 7BN (United Kingdom); Stuart, David I. [Diamond Light Source, Harwell Science and Innovation Campus, Didcot OX11 0DE (United Kingdom); The Henry Wellcome Building for Genomic Medicine, Roosevelt Drive, Oxford OX3 7BN (United Kingdom); Evans, Gwyndaf [Diamond Light Source, Harwell Science and Innovation Campus, Didcot OX11 0DE (United Kingdom)

    2012-06-15

    A systematic increase in lifetime is observed in room-temperature protein and virus crystals through the use of reduced exposure times and a fast detector. A significant increase in the lifetime of room-temperature macromolecular crystals is reported through the use of a high-brilliance X-ray beam, reduced exposure times and a fast-readout detector. This is attributed to the ability to collect diffraction data before hydroxyl radicals can propagate through the crystal, fatally disrupting the lattice. Hydroxyl radicals are shown to be trapped in amorphous solutions at 100 K. The trend in crystal lifetime was observed in crystals of a soluble protein (immunoglobulin γ Fc receptor IIIa), a virus (bovine enterovirus serotype 2) and a membrane protein (human A2A adenosine G-protein coupled receptor). The observation of a similar effect in all three systems provides clear evidence for a common optimal strategy for room-temperature data collection and will inform the design of future synchrotron beamlines and detectors for macromolecular crystallography.

  14. Macromolecular diffusion in crowded media beyond the hard-sphere model.

    Science.gov (United States)

    Blanco, Pablo M; Garcés, Josep Lluís; Madurga, Sergio; Mas, Francesc

    2018-04-25

    The effect of macromolecular crowding on diffusion beyond the hard-core sphere model is studied. A new coarse-grained model is presented, the Chain Entanglement Softened Potential (CESP) model, which takes into account the macromolecular flexibility and chain entanglement. The CESP model uses a shoulder-shaped interaction potential that is implemented in the Brownian Dynamics (BD) computations. The interaction potential contains only one parameter associated with the chain entanglement energetic cost (Ur). The hydrodynamic interactions are included in the BD computations via Tokuyama mean-field equations. The model is used to analyze the diffusion of a streptavidin protein among different sized dextran obstacles. For this system, Ur is obtained by fitting the streptavidin experimental long-time diffusion coefficient Dlong versus the macromolecular concentration for D50 (indicating their molecular weight in kg mol-1) dextran obstacles. The obtained Dlong values show better quantitative agreement with experiments than those obtained with hard-core spheres. Moreover, once parametrized, the CESP model is also able to quantitatively predict Dlong and the anomalous exponent (α) for streptavidin diffusion among D10, D400 and D700 dextran obstacles. Dlong, the short-time diffusion coefficient (Dshort) and α are obtained from the BD simulations by using a new empirical expression, able to describe the full temporal evolution of the diffusion coefficient.
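
    A shoulder-shaped pair potential of the general kind described above can be sketched as follows; the functional form and parameter names are assumptions chosen for illustration and are not the published CESP potential.

        # Hedged sketch: steep repulsive core plus a finite "shoulder" of height u_r
        # representing the energetic cost of chain entanglement at intermediate distances.
        import numpy as np

        def shoulder_potential(r, sigma_core=1.0, sigma_shoulder=1.5, u_r=2.0, steepness=20.0):
            """Pair energy as a function of centre-to-centre distance r (reduced units)."""
            core = np.exp(-steepness * (r - sigma_core))                       # quasi-hard core
            shoulder = u_r / (1.0 + np.exp(steepness * (r - sigma_shoulder)))  # soft step
            return core + shoulder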

  15. A smooth and differentiable bulk-solvent model for macromolecular diffraction

    Energy Technology Data Exchange (ETDEWEB)

    Fenn, T. D. [Department of Molecular and Cellular Physiology and Howard Hughes Medical Institute, Stanford, California (United States); Schnieders, M. J. [Department of Chemistry, Stanford, California (United States); Brunger, A. T., E-mail: brunger@stanford.edu [Department of Molecular and Cellular Physiology and Howard Hughes Medical Institute, Stanford, California (United States); Departments of Neurology and Neurological Sciences, Structural Biology and Photon Science, Stanford, California (United States)

    2010-09-01

    A new method for modeling the bulk solvent in macromolecular diffraction data based on Babinet's principle is presented. The proposed models offer the advantage of differentiability with respect to atomic coordinates. Inclusion of low-resolution data in macromolecular crystallography requires a model for the bulk solvent. Previous methods have used a binary mask to accomplish this, which has proven to be very effective, but the mask is discontinuous at the solute–solvent boundary (i.e. the mask value jumps from zero to one) and is not differentiable with respect to atomic parameters. Here, two algorithms are introduced for computing bulk-solvent models using either a polynomial switch or a smoothly thresholded product of Gaussians, and both models are shown to be efficient and differentiable with respect to atomic coordinates. These alternative bulk-solvent models offer algorithmic improvements, while showing similar agreement of the model with the observed amplitudes relative to the binary model as monitored using R, Rfree and differences between experimental and model phases. As with the standard solvent models, the alternative models improve the agreement primarily with lower resolution (>6 Å) data versus no bulk solvent. The models are easily implemented into crystallographic software packages and can be used as a general method for bulk-solvent correction in macromolecular crystallography.

  16. A smooth and differentiable bulk-solvent model for macromolecular diffraction

    International Nuclear Information System (INIS)

    Fenn, T. D.; Schnieders, M. J.; Brunger, A. T.

    2010-01-01

    A new method for modeling the bulk solvent in macromolecular diffraction data based on Babinet's principle is presented. The proposed models offer the advantage of differentiability with respect to atomic coordinates. Inclusion of low-resolution data in macromolecular crystallography requires a model for the bulk solvent. Previous methods have used a binary mask to accomplish this, which has proven to be very effective, but the mask is discontinuous at the solute–solvent boundary (i.e. the mask value jumps from zero to one) and is not differentiable with respect to atomic parameters. Here, two algorithms are introduced for computing bulk-solvent models using either a polynomial switch or a smoothly thresholded product of Gaussians, and both models are shown to be efficient and differentiable with respect to atomic coordinates. These alternative bulk-solvent models offer algorithmic improvements, while showing similar agreement of the model with the observed amplitudes relative to the binary model as monitored using R, Rfree and differences between experimental and model phases. As with the standard solvent models, the alternative models improve the agreement primarily with lower resolution (>6 Å) data versus no bulk solvent. The models are easily implemented into crystallographic software packages and can be used as a general method for bulk-solvent correction in macromolecular crystallography
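
    The polynomial-switch idea can be illustrated with a short sketch; the cubic smoothstep used here is an assumption for illustration and may differ from the published functional form. The key property is that the solvent mask falls smoothly, and differentiably, from 1 to 0 across a shell around each atom instead of jumping as a binary mask does.

        # Hedged sketch of a differentiable solvent mask based on a polynomial switch.
        import numpy as np

        def polynomial_switch(d, r_inner, r_outer):
            """d: distance to the nearest atom. Mask is 0 inside r_inner, 1 beyond r_outer,
            and a C1-continuous cubic in the transition shell."""
            t = np.clip((d - r_inner) / (r_outer - r_inner), 0.0, 1.0)
            return t * t * (3.0 - 2.0 * t)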

  17. Microbial stress tolerance for biofuels. Systems biology

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zonglin Lewis (ed.) [National Center for Agricultural Utilization Research, USDA-ARS, Peoria, IL (United States)

    2012-07-01

    The development of sustainable and renewable biofuels is attracting growing interest. It is vital to develop robust microbial strains for biocatalysts that are able to function under multiple stress conditions. This Microbiology Monograph provides an overview of methods for studying microbial stress tolerance for biofuels applications using a systems biology approach. Topics covered range from mechanisms to methodology for yeast and bacteria, including the genomics of yeast tolerance and detoxification; genetics and regulation of glycogen and trehalose metabolism; programmed cell death; high gravity fermentations; ethanol tolerance; improving biomass sugar utilization by engineered Saccharomyces; the genomics on tolerance of Zymomonas mobilis; microbial solvent tolerance; control of stress tolerance in bacterial host organisms; metabolomics for ethanologenic yeast; automated proteomics work cell systems for strain improvement; and unification of gene expression data for comparable analyses under stress conditions. (orig.)

  18. Virtual automation.

    Science.gov (United States)

    Casis, E; Garrido, A; Uranga, B; Vives, A; Zufiaurre, C

    2001-01-01

    Total laboratory automation (TLA) can be substituted in mid-size laboratories by a computer sample workflow control (virtual automation). Such a solution has been implemented in our laboratory using PSM, software developed in cooperation with Roche Diagnostics (Barcelona, Spain), for this purpose. This software is connected to the online analyzers and to the laboratory information system and is able to control and direct the samples, working as an intermediate station. The only difference with TLA is the replacement of transport belts by personnel of the laboratory. The implementation of this virtual automation system has allowed us to achieve the main advantages of TLA: workload increase (64%) with reduction in the cost per test (43%), significant reduction in the number of biochemistry primary tubes (from 8 to 2), less aliquoting (from 600 to 100 samples/day), automation of functional testing, drastic reduction of preanalytical errors (from 11.7 to 0.4% of the tubes) and better total response time for both inpatients (from up to 48 hours to up to 4 hours) and outpatients (from up to 10 days to up to 48 hours). As an additional advantage, virtual automation could be implemented without hardware investment and with a significant headcount reduction (15% in our lab).

  19. ERP Human Enhancement Progress Report : Use case and computational model for adaptive maritime automation

    NARCIS (Netherlands)

    Kleij, R. van der; Broek, J. van den; Brake, G.M. te; Rypkema, J.A.; Schilder, C.M.C.

    2015-01-01

    Automation is often applied in order to increase the cost-effectiveness, reliability and safety of maritime ship and offshore operations. Automation of operator tasks has not, however, eliminated human error so much as created opportunities for new kinds of error. The ambition of the Adaptive

  20. Visual correlation analytics of event-based error reports for advanced manufacturing

    OpenAIRE

    Nazir, Iqbal

    2017-01-01

    With the growing digitalization and automation in the manufacturing domain, an increasing amount of process data and error reports becomes available. To minimize the number of errors and maximize the efficiency of the production line, it is important to analyze the generated error reports and find solutions that can reduce future errors. However, not all errors have equal importance, as some errors may be the result of errors that occurred previously. Therefore, it is important for domain exper...

  1. Atomic Scale Structural Studies of Macromolecular Assemblies by Solid-state Nuclear Magnetic Resonance Spectroscopy.

    Science.gov (United States)

    Loquet, Antoine; Tolchard, James; Berbon, Melanie; Martinez, Denis; Habenstein, Birgit

    2017-09-17

    Supramolecular protein assemblies play fundamental roles in biological processes ranging from host-pathogen interaction and viral infection to the propagation of neurodegenerative disorders. Such assemblies consist of multiple protein subunits organized in a non-covalent way to form large macromolecular objects that can execute a variety of cellular functions or cause detrimental consequences. Atomic insights into the assembly mechanisms and the functioning of those macromolecular assemblies often remain scarce since their inherent insolubility and non-crystallinity often drastically reduces the quality of the data obtained from most techniques used in structural biology, such as X-ray crystallography and solution Nuclear Magnetic Resonance (NMR). We here present magic-angle spinning solid-state NMR spectroscopy (SSNMR) as a powerful method to investigate structures of macromolecular assemblies at atomic resolution. SSNMR can reveal atomic details on the assembled complex without size and solubility limitations. The protocol presented here describes the essential steps from the production of 13C/15N isotope-labeled macromolecular protein assemblies to the acquisition of standard SSNMR spectra and their analysis and interpretation. As an example, we show the pipeline of a SSNMR structural analysis of a filamentous protein assembly.

  2. Stably engineered nanobubbles and ultrasound - An effective platform for enhanced macromolecular delivery to representative cells of the retina.

    Directory of Open Access Journals (Sweden)

    Sachin S Thakur

    Full Text Available Herein we showcase the potential of ultrasound-responsive nanobubbles in enhancing macromolecular permeation through layers of the retina, ultimately leading to significant and direct intracellular delivery; this being effectively demonstrated across three relevant and distinct retinal cell lines. Stably engineered nanobubbles of a highly homogenous and echogenic nature were fully characterised using dynamic light scattering, B-scan ultrasound and transmission electron microscopy (TEM). The nanobubbles appeared as spherical liposome-like structures under TEM, accompanied by an opaque luminal core and darkened corona around their periphery, with both features indicative of efficient gas entrapment and adsorption, respectively. A nanobubble +/- ultrasound sweeping study was conducted next, which determined the maximum tolerated dose for each cell line. Detection of underlying cellular stress was verified using the biomarker heat shock protein 70, measured before and after treatment with optimised ultrasound. Next, with safety to nanobubbles and optimised ultrasound demonstrated, each human or mouse-derived cell population was incubated with biotinylated rabbit-IgG in the presence and absence of ultrasound +/- nanobubbles. Intracellular delivery of antibody in each cell type was then quantified using Cy3-streptavidin. Nanobubbles and optimised ultrasound were found to be negligibly toxic across all cell lines tested. Macromolecular internalisation was achieved to significant, yet varying degrees in all three cell lines. The results of this study pave the way towards better understanding mechanisms underlying cellular responsiveness to ultrasound-triggered drug delivery in future ex vivo and in vivo models of the posterior eye.

  3. A simple quantitative model of macromolecular crowding effects on protein folding: Application to the murine prion protein(121-231)

    Science.gov (United States)

    Bergasa-Caceres, Fernando; Rabitz, Herschel A.

    2013-06-01

    A model of protein folding kinetics is applied to study the effects of macromolecular crowding on protein folding rate and stability. Macromolecular crowding is found to promote a decrease of the entropic cost of folding of proteins that produces an increase of both the stability and the folding rate. The acceleration of the folding rate due to macromolecular crowding is shown to be a topology-dependent effect. The model is applied to the folding dynamics of the murine prion protein (121-231). The differential effect of macromolecular crowding as a function of protein topology suffices to make non-native configurations relatively more accessible.

  4. Error monitoring issues for common channel signaling

    Science.gov (United States)

    Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.

    1994-04-01

    Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far and includes the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring with a discussion on their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS) as well as for developing procedures for high-speed SS7 links currently under consideration by standards bodies.
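
    A simplified leaky-bucket signal-unit error-rate monitor of the general kind analyzed above can be sketched as follows; the threshold and leak interval are illustrative values, not normative SS7 parameters.

        # Hedged sketch: errored signal units fill a counter, which leaks slowly so that a
        # low background error rate is tolerated; crossing the threshold triggers changeover.
        class ErrorRateMonitor:
            def __init__(self, threshold=64, leak_interval=256):
                self.counter = 0
                self.received = 0
                self.threshold = threshold
                self.leak_interval = leak_interval

            def on_signal_unit(self, in_error):
                """Return True if the link should be taken out of service."""
                if in_error:
                    self.counter += 1
                self.received += 1
                if self.received % self.leak_interval == 0 and self.counter > 0:
                    self.counter -= 1
                return self.counter >= self.threshold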

  5. Fault Tolerant Control System Design Using Automated Methods from Risk Analysis

    DEFF Research Database (Denmark)

    Blanke, M.

    Fault tolerant controls have the ability to be resilient to simple faults in control loop components.

  6. Impact of automated dispensing cabinets on medication selection and preparation error rates in an emergency department: a prospective and direct observational before-and-after study.

    Science.gov (United States)

    Fanning, Laura; Jones, Nick; Manias, Elizabeth

    2016-04-01

    The implementation of automated dispensing cabinets (ADCs) in healthcare facilities appears to be increasing, in particular within Australian hospital emergency departments (EDs). While the investment in ADCs is on the increase, no studies have specifically investigated the impacts of ADCs on medication selection and preparation error rates in EDs. Our aim was to assess the impact of ADCs on medication selection and preparation error rates in an ED of a tertiary teaching hospital. Pre intervention and post intervention study involving direct observations of nurses completing medication selection and preparation activities before and after the implementation of ADCs in the original and new emergency departments within a 377-bed tertiary teaching hospital in Australia. Medication selection and preparation error rates were calculated and compared between these two periods. Secondary end points included the impact on medication error type and severity. A total of 2087 medication selection and preparations were observed among 808 patients pre and post intervention. Implementation of ADCs in the new ED resulted in a 64.7% (1.96% versus 0.69%, respectively, P = 0.017) reduction in medication selection and preparation errors. All medication error types were reduced in the post intervention study period. There was an insignificant impact on medication error severity as all errors detected were categorised as minor. The implementation of ADCs could reduce medication selection and preparation errors and improve medication safety in an ED setting. © 2015 John Wiley & Sons, Ltd.

  7. Software fault tolerance in computer operating systems

    Science.gov (United States)

    Iyer, Ravishankar K.; Lee, Inhwan

    1994-01-01

    This chapter provides data and analysis of the dependability and fault tolerance for three operating systems: the Tandem/GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Based on measurements from these systems, basic software error characteristics are investigated. Fault tolerance in operating systems resulting from the use of process pairs and recovery routines is evaluated. Two levels of models are developed to analyze error and recovery processes inside an operating system and interactions among multiple instances of an operating system running in a distributed environment. The measurements show that the use of process pairs in Tandem systems, which was originally intended for tolerating hardware faults, allows the system to tolerate about 70% of defects in system software that result in processor failures. The loose coupling between processors which results in the backup execution (the processor state and the sequence of events occurring) being different from the original execution is a major reason for the measured software fault tolerance. The IBM/MVS system fault tolerance almost doubles when recovery routines are provided, in comparison to the case in which no recovery routines are available. However, even when recovery routines are provided, there is almost a 50% chance of system failure when critical system jobs are involved.

  8. Data Error Detection and Recovery in Embedded Systems: a Literature Review

    Directory of Open Access Journals (Sweden)

    Venu Babu Thati

    2017-06-01

    Full Text Available This paper presents a literature review on data flow error detection and recovery techniques in embedded systems. In recent years, embedded systems have been used in an ever larger number of applications, from small mobile devices to large medical devices. At the same time, it is becoming increasingly important for embedded developers to make embedded systems fault-tolerant, and error detection and recovery mechanisms are effective techniques for doing so. Fault tolerance can be achieved by using both hardware and software techniques. This literature review focuses on software-based techniques, since hardware-based techniques need extra hardware and add cost per product, whereas software-based techniques need little or no additional hardware. A review of various existing data flow error detection and error recovery techniques is given along with their strengths and weaknesses. Such information is useful for identifying the better techniques among them.
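
    One widely used software-only technique covered by such reviews, duplicated execution with comparison, can be sketched as follows; this is a generic illustration rather than a method taken from the paper.

        # Hedged sketch: run a computation twice, compare the results, and re-execute on a
        # mismatch; a persistent disagreement is reported as a detected data error.
        def duplicated_execution(func, *args, retries=1):
            for _ in range(retries + 1):
                primary = func(*args)
                shadow = func(*args)
                if primary == shadow:
                    return primary
            raise RuntimeError("data error detected: duplicated executions disagree")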

  9. ERROR HANDLING IN INTEGRATION WORKFLOWS

    Directory of Open Access Journals (Sweden)

    Alexey M. Nazarenko

    2017-01-01

    Full Text Available Simulation experiments performed while solving multidisciplinary engineering and scientific problems require joint usage of multiple software tools. Further, when following a preset plan of experiment or searching for optimum solutions, the same sequence of calculations is run multiple times with various simulation parameters, input data, or conditions while the overall workflow does not change. Automation of simulations like these requires implementing a workflow where tool execution and data exchange is usually controlled by a special type of software, an integration environment or platform. The result is an integration workflow (a platform-dependent implementation of some computing workflow) which, in the context of automation, is a composition of weakly coupled (in terms of communication intensity) typical subtasks. These compositions can then be decomposed back into a few workflow patterns (types of subtask interaction). The patterns, in their turn, can be interpreted as higher-level subtasks. This paper considers execution control and data exchange rules that should be imposed by the integration environment in the case of an error encountered by some integrated software tool. An error is defined as any abnormal behavior of a tool that invalidates its result data, thus disrupting the data flow within the integration workflow. The main requirement on the error handling mechanism implemented by the integration environment is to prevent abnormal termination of the entire workflow in case of missing intermediate results data. Error handling rules are formulated on the basic pattern level and on the level of a composite task that can combine several basic patterns as next-level subtasks. The cases where workflow behavior may differ, depending on the user's purposes, when an error takes place, and possible error handling options that can be specified by the user are also noted in the work.
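
    A per-subtask error-handling policy of the kind discussed above might be sketched as follows; the policy names (retry, skip, abort) are illustrative and not taken from the paper.

        # Hedged sketch: on failure a step can be retried, skipped (yielding a missing
        # result that downstream steps must tolerate), or allowed to abort the workflow.
        def run_step(step, policy="abort", max_retries=2):
            attempts = 0
            while True:
                try:
                    return step()
                except Exception:
                    attempts += 1
                    if policy == "retry" and attempts <= max_retries:
                        continue
                    if policy == "skip":
                        return None
                    raise   # abort: propagate the error and stop the composite workflow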

  10. Determination of the Optimized Automation Rate considering Effects of Automation on Human Operators in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun; Kim, Jong Hyun; Kim, Man Cheol

    2015-01-01

    Automation refers to the use of a device or a system to perform a function previously performed by a human operator. It is introduced to reduce human errors and to enhance the performance in various industrial fields, including the nuclear industry. However, these positive effects are not always achieved in complex systems such as nuclear power plants (NPPs). An excessive introduction of automation can generate new roles for human operators and change activities in unexpected ways. As more automation systems are accepted, the ability of human operators to detect automation failures and resume manual control is diminished. This disadvantage of automation is called the Out-of-the-Loop (OOTL) problem. We should consider the positive and negative effects of automation at the same time to determine the appropriate level of the introduction of automation. Thus, in this paper, we suggest an estimation method to consider the positive and negative effects of automation at the same time to determine the appropriate introduction of automation. This concept is limited in that it does not consider the effects of automation on human operators. Thus, a new estimation method for automation rate was suggested to overcome this problem

  11. The Postgraduate Study of Macromolecular Sciences at the University of Zagreb (1971-1980

    Directory of Open Access Journals (Sweden)

    Kunst, B.

    2008-07-01

    Full Text Available The postgraduate study of macromolecular sciences (PSMS) was established at the University of Zagreb in 1971 as a university study at a time of pronounced interdisciplinary permeation of the natural sciences - physics, chemistry and biology - and application of their achievements in technological disciplines. PSMS was established by a group of prominent university professors from the schools of Science, Chemical Technology, Pharmacy and Medicine, as well as from the Institute of Biology. The study comprised the basic fields of macromolecular sciences: organic chemistry of synthetic macromolecules, physical chemistry of macromolecules, physics of macromolecules, biological macromolecules, and polymer engineering with polymer application and processing; teaching was performed in 29 lecture courses led by 30 professors with their collaborators. PSMS ceased to exist with the change of legislation in Croatia in 1980, when the prevailing attitude was to return postgraduate studies to the individual university schools. During the 9 years of its existence, PSMS awarded the MSci degree to 37 macromolecular experts. In retrospect, the PSMS of some thirty years ago was an important example of modern postgraduate education when compared with international postgraduate developments. In concordance with the recent introduction of similar interdisciplinary studies in macromolecular sciences elsewhere in the world, the establishment of a modern interdisciplinary study in this field would be important for the further development of these sciences in Croatia.

  12. Synthesis and characterization of macromolecular rhodamine tethers and their interactions with P-glycoprotein.

    Science.gov (United States)

    Crawford, Lindsey; Putnam, David

    2014-08-20

    Rhodamine dyes are well-known P-glycoprotein (P-gp) substrates that have played an important role in the detection of inhibitors and other substrates of P-gp, as well as in the understanding of P-gp function. Macromolecular conjugates of rhodamines could prove useful as tethers for further probing of P-gp structure and function. Two macromolecular derivatives of rhodamine, methoxypolyethylene glycol-rhodamine6G and methoxypolyethylene glycol-rhodamine123, were synthesized through the 2'-position of rhodamine6G and rhodamine123, thoroughly characterized, and then evaluated by inhibition with verapamil for their ability to interact with P-gp and to act as efflux substrates. To put the results into context, the P-gp interactions of the new conjugates were compared to the commercially available methoxypolyethylene glycol-rhodamineB. FACS analysis confirmed that macromolecular tethers of rhodamine6G, rhodamine123, and rhodamineB were accumulated in P-gp expressing cells 5.2 ± 0.3%, 26.2 ± 4%, and 64.2 ± 6%, respectively, compared to a sensitive cell line that does not overexpress P-gp. Along with confocal imaging, the efflux analysis confirmed that the macromolecular rhodamine tethers remain P-gp substrates. These results open potential avenues for new ways to probe the function of P-gp both in vitro and in vivo.

  13. Automating the CMS DAQ

    International Nuclear Information System (INIS)

    Bauer, G; Darlea, G-L; Gomez-Ceballos, G; Bawej, T; Chaze, O; Coarasa, J A; Deldicque, C; Dobson, M; Dupont, A; Gigi, D; Glege, F; Gomez-Reino, R; Hartl, C; Hegeman, J; Masetti, L; Behrens, U; Branson, J; Cittolin, S; Holzner, A; Erhan, S

    2014-01-01

    We present the automation mechanisms that have been added to the Data Acquisition and Run Control systems of the Compact Muon Solenoid (CMS) experiment during Run 1 of the LHC, ranging from the automation of routine tasks to automatic error recovery and context-sensitive guidance to the operator. These mechanisms helped CMS to maintain a data taking efficiency above 90% and to even improve it to 95% towards the end of Run 1, despite an increase in the occurrence of single-event upsets in sub-detector electronics at high LHC luminosity.

  14. Quality of IT service delivery — Analysis and framework for human error prevention

    KAUST Repository

    Shwartz, L.

    2010-12-01

    In this paper, we address the problem of reducing the occurrence of Human Errors that cause service interruptions in IT Service Support and Delivery operations. Analysis of a large volume of service interruption records revealed that more than 21% of interruptions were caused by human error. We focus on Change Management, the process with the largest risk of human error, and identify the main instances of human errors as the 4 Wrongs: request, time, configuration item, and command. Analysis of change records revealed that human error prevention by partial automation is highly relevant. We propose the HEP Framework, a framework for execution of IT Service Delivery operations that reduces human error by addressing the 4 Wrongs using content integration, contextualization of operation patterns, partial automation of command execution, and controlled access to resources.
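
    A pre-execution guard against the "4 Wrongs" named above might look like the following sketch; the field names and checks are illustrative assumptions, not the HEP Framework itself.

        # Hedged sketch: verify request, time window, configuration item and command
        # before a change is allowed to execute.
        from datetime import datetime

        def check_change(change, approved, allowed_commands, now=None):
            """change: dict with request_id, target_ci, command and window (start, end).
            approved: dict mapping request_id to the expected configuration item.
            Returns a list of violations; an empty list means the change may proceed."""
            now = now or datetime.now()
            problems = []
            if change["request_id"] not in approved:
                problems.append("wrong request: not approved")
            elif change["target_ci"] != approved[change["request_id"]]:
                problems.append("wrong configuration item")
            start, end = change["window"]
            if not (start <= now <= end):
                problems.append("wrong time: outside approved window")
            if change["command"] not in allowed_commands:
                problems.append("wrong command: not in approved runbook")
            return problems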

  15. Automated reactor protection testing saves time and avoids errors

    International Nuclear Information System (INIS)

    Raimondo, E.

    1990-01-01

    When the Pressurized Water Reactor units in the French 900MWe series were designed, the instrumentation and control systems were equipped for manual periodic testing. Manual reactor protection system testing has since been successfully replaced by an automatic system, which is also applicable to other instrumentation testing. A study on the complete automation of process instrumentation testing has been carried out. (author)

  16. Scalable error correction in distributed ion trap computers

    International Nuclear Information System (INIS)

    Oi, Daniel K. L.; Devitt, Simon J.; Hollenberg, Lloyd C. L.

    2006-01-01

    A major challenge for quantum computation in ion trap systems is scalable integration of error correction and fault tolerance. We analyze a distributed architecture with rapid high-fidelity local control within nodes and entangled links between nodes alleviating long-distance transport. We demonstrate fault-tolerant operator measurements which are used for error correction and nonlocal gates. This scheme is readily applied to linear ion traps which cannot be scaled up beyond a few ions per individual trap but which have access to a probabilistic entanglement mechanism. A proof-of-concept system is presented which is within the reach of current experiments

  17. MODELING OF MANUFACTURING ERRORS FOR PIN-GEAR ELEMENTS OF PLANETARY GEARBOX

    Directory of Open Access Journals (Sweden)

    Ivan M. Egorov

    2014-11-01

    Full Text Available Theoretical background for the calculation of k-h-v type cycloid reducers was developed relatively long ago. However, the matters of cycloid reducer design have recently attracted heightened attention again. The reason is that such devices are used in many complex engineering systems, particularly in mechatronic and robotic systems. The development of advanced technological capabilities for manufacturing such reducers today makes it possible to realize the essential features of such devices: high efficiency, high gear ratio, kinematic accuracy and smooth motion. The presence of an adequate mathematical model gives the possibility of adjusting the kinematic accuracy of the reducer by rational selection of manufacturing tolerances for its parts. This makes it possible to automate the design process for cycloid reducers with account of various factors, including technological ones. A mathematical model and technique have been developed that make it possible to model the kinematic error of the reducer with account of multiple factors, including manufacturing errors. The errors are represented in a way convenient for predicting kinematic accuracy early in the manufacturing stage from the results of measuring the reducer parts on coordinate measuring machines. During the modeling, the wheel manufacturing errors are determined by the eccentricity and radius deviation of the pin tooth centers circle, and the deviation between the pin tooth axes positions and the centers circle. The satellite manufacturing errors are determined by the satellite eccentricity deviation and the satellite rim eccentricity. Due to the collinearity, the pin tooth and pin tooth hole diameter errors and the satellite tooth profile errors for a designated contact point are integrated into one deviation. Software implementation of the model makes it possible to estimate the influence of these errors on the satellite rotation angle error and

  18. Error Mitigation of Point-to-Point Communication for Fault-Tolerant Computing

    Science.gov (United States)

    Akamine, Robert L.; Hodson, Robert F.; LaMeres, Brock J.; Ray, Robert E.

    2011-01-01

    Fault tolerant systems require the ability to detect and recover from physical damage caused by the hardware's environment, faulty connectors, and system degradation over time. This ability applies to military, space, and industrial computing applications. The integrity of Point-to-Point (P2P) communication, between two microcontrollers for example, is an essential part of fault tolerant computing systems. In this paper, different methods of fault detection and recovery are presented and analyzed.

  19. Links between N-modular redundancy and the theory of error-correcting codes

    Science.gov (United States)

    Bobin, V.; Whitaker, S.; Maki, G.

    1992-01-01

    N-Modular Redundancy (NMR) is one of the best known fault tolerance techniques. Replication of a module to achieve fault tolerance is in some ways analogous to the use of a repetition code where an information symbol is replicated as parity symbols in a codeword. Linear Error-Correcting Codes (ECC) use linear combinations of information symbols as parity symbols which are used to generate syndromes for error patterns. These observations indicate links between the theory of ECC and the use of hardware redundancy for fault tolerance. In this paper, we explore some of these links and show examples of NMR systems where identification of good and failed elements is accomplished in a manner similar to error correction using linear ECC's.
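
    The analogy can be made concrete with a small sketch of triple modular redundancy (TMR) viewed as a length-3 repetition code, with pairwise comparisons playing the role of syndromes; this is a generic illustration, not code from the paper.

        # Hedged sketch: the pair of comparisons (a != b, b != c) acts as a syndrome that
        # both corrects the output and identifies the failed module.
        def tmr_vote(a, b, c):
            s1, s2 = (a != b), (b != c)
            if not s1 and not s2:
                return a, None                 # all three agree
            if s1 and not s2:
                return b, "module_a"           # b and c agree, a is the odd one out
            if not s1 and s2:
                return a, "module_c"           # a and b agree, c is the odd one out
            if a == c:
                return a, "module_b"           # a and c agree, b is the odd one out
            return None, "undetermined"        # three-way disagreement: uncorrectable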

  20. Advanced I&C for Fault-Tolerant Supervisory Control of Small Modular Reactors

    Energy Technology Data Exchange (ETDEWEB)

    Cole, Daniel G. [Univ. of Pittsburgh, PA (United States)

    2018-01-30

    In this research, we have developed a supervisory control approach to enable automated control of SMRs. By design the supervisory control system has a hierarchical, interconnected, adaptive control architecture. A considerable advantage of this architecture is that it allows subsystems to communicate at different/finer granularity, facilitates monitoring of processes at the modular and plant levels, and enables supervisory control. We have investigated the deployment of automation, monitoring, and data collection technologies to enable operation of multiple SMRs. Each unit's controller collects and transfers information from local loops and optimizes that unit's parameters. Information is passed from each SMR unit controller to the supervisory controller, which supervises the actions of SMR units and manages plant processes. The information processed at the supervisory level will provide operators with the information needed for reactor, unit, and plant operation. In conjunction with the supervisory effort, we have investigated techniques for fault-tolerant networks, over which information is transmitted between local loops and the supervisory controller to maintain a safe level of operational normalcy in the presence of anomalies. The fault-tolerance of the supervisory control architecture, the network that supports it, and the impact of fault-tolerance on multi-unit SMR plant control have been a second focus of this research. To this end, we have investigated the deployment of advanced automation, monitoring, and data collection and communications technologies to enable operation of multiple SMRs. We have created a fault-tolerant multi-unit SMR supervisory controller that collects and transfers information from local loops, supervises their actions, and adaptively optimizes the controller parameters. The goal of this research has been to develop the methodologies and procedures for fault-tolerant supervisory control of small modular reactors. To achieve

  1. MMTF-An efficient file format for the transmission, visualization, and analysis of macromolecular structures.

    Directory of Open Access Journals (Sweden)

    Anthony R Bradley

    2017-06-01

    Full Text Available Recent advances in experimental techniques have led to a rapid growth in complexity, size, and number of macromolecular structures that are made available through the Protein Data Bank. This creates a challenge for macromolecular visualization and analysis. Macromolecular structure files, such as PDB or PDBx/mmCIF files, can be slow to transfer and parse, and hard to incorporate into third-party software tools. Here, we present a new binary and compressed data representation, the MacroMolecular Transmission Format, MMTF, as well as software implementations in several languages that have been developed around it, which address these issues. We describe the new format and its APIs and demonstrate that it is several times faster to parse, and about a quarter of the file size of the current standard format, PDBx/mmCIF. As a consequence of the new data representation, it is now possible to visualize structures with millions of atoms in a web browser, keep the whole PDB archive in memory or parse it within a few minutes on average computers, which opens up a new way of thinking about how to design and implement efficient algorithms in structural bioinformatics. The PDB archive is available in MMTF file format through web services and data files that are updated on a weekly basis.
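
    As a hedged illustration, a structure encoded in this format can be read with the mmtf-python package roughly as follows; the package, function and attribute names are assumptions about that third-party library rather than part of the format description above.

        # Hedged sketch: fetch an MMTF-encoded entry and inspect a few decoded fields.
        import mmtf

        decoder = mmtf.fetch("4HHB")          # download and decode one PDB entry (assumed API)
        print(decoder.structure_id, decoder.num_models, decoder.num_atoms)
        print(decoder.x_coord_list[:5])       # coordinates are stored as flat per-atom arrays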

  2. Diffusion accessibility as a method for visualizing macromolecular surface geometry.

    Science.gov (United States)

    Tsai, Yingssu; Holton, Thomas; Yeates, Todd O

    2015-10-01

    Important three-dimensional spatial features such as depth and surface concavity can be difficult to convey clearly in the context of two-dimensional images. In the area of macromolecular visualization, the computer graphics technique of ray-tracing can be helpful, but further techniques for emphasizing surface concavity can give clearer perceptions of depth. The notion of diffusion accessibility is well-suited for emphasizing such features of macromolecular surfaces, but a method for calculating diffusion accessibility has not been made widely available. Here we make available a web-based platform that performs the necessary calculation by solving the Laplace equation for steady state diffusion, and produces scripts for visualization that emphasize surface depth by coloring according to diffusion accessibility. The URL is http://services.mbi.ucla.edu/DiffAcc/. © 2015 The Protein Society.
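
    The underlying calculation can be sketched as a Jacobi relaxation of the Laplace equation on a coarse 3D grid; this is an illustrative assumption about the approach, not the web service's own solver. Grid points that steady-state diffusion from the outer boundary reaches easily end up with high values, while points buried in surface concavities end up with low values.

        # Hedged sketch: outer boundary held at 1, solute voxels held at 0, interior relaxed.
        # Assumes the solute does not touch the grid boundary.
        import numpy as np

        def diffusion_accessibility(solid, n_iter=500):
            """solid: 3D boolean array marking macromolecule voxels."""
            u = np.zeros(solid.shape)
            for _ in range(n_iter):
                avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                       np.roll(u, 1, 1) + np.roll(u, -1, 1) +
                       np.roll(u, 1, 2) + np.roll(u, -1, 2)) / 6.0
                u = np.where(solid, 0.0, avg)          # absorbing solute interior
                u[0, :, :] = u[-1, :, :] = 1.0         # far-field source planes
                u[:, 0, :] = u[:, -1, :] = 1.0
                u[:, :, 0] = u[:, :, -1] = 1.0
            return u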

  3. SU-D-BRD-03: Improving Plan Quality with Automation of Treatment Plan Checks

    International Nuclear Information System (INIS)

    Covington, E; Younge, K; Chen, X; Lee, C; Matuszak, M; Kessler, M; Acosta, E; Orow, A; Filpansick, S; Moran, J; Keranen, W

    2015-01-01

    Purpose: To evaluate the effectiveness of an automated plan check tool to improve first-time plan quality as well as standardize and document performance of physics plan checks. Methods: The Plan Checker Tool (PCT) uses the Eclipse Scripting API to check and compare data from the treatment planning system (TPS) and treatment management system (TMS). PCT was created to improve first-time plan quality, reduce patient delays, increase efficiency of our electronic workflow, and to standardize and partially automate plan checks in the TPS. A framework was developed which can be configured with different reference values and types of checks. One example is the prescribed dose check where PCT flags the user when the planned dose and the prescribed dose disagree. PCT includes a comprehensive checklist of automated and manual checks that are documented when performed by the user. A PDF report is created and automatically uploaded into the TMS. Prior to and during PCT development, errors caught during plan checks and also patient delays were tracked in order to prioritize which checks should be automated. The most common and significant errors were determined. Results: Nineteen of 33 checklist items were automated with data extracted with the PCT. These include checks for prescription, reference point and machine scheduling errors which are three of the top six causes of patient delays related to physics and dosimetry. Since the clinical roll-out, no delays have been due to errors that are automatically flagged by the PCT. Development continues to automate the remaining checks. Conclusion: With PCT, 57% of the physics plan checklist has been partially or fully automated. Treatment delays have declined since release of the PCT for clinical use. By tracking delays and errors, we have been able to measure the effectiveness of automating checks and are using this information to prioritize future development. This project was supported in part by P01CA059827

  4. SU-D-BRD-03: Improving Plan Quality with Automation of Treatment Plan Checks

    Energy Technology Data Exchange (ETDEWEB)

    Covington, E; Younge, K; Chen, X; Lee, C; Matuszak, M; Kessler, M; Acosta, E; Orow, A; Filpansick, S; Moran, J [University of Michigan Hospital and Health System, Ann Arbor, MI (United States); Keranen, W [Varian Medical Systems, Palo Alto, CA (United States)

    2015-06-15

    Purpose: To evaluate the effectiveness of an automated plan check tool to improve first-time plan quality as well as standardize and document performance of physics plan checks. Methods: The Plan Checker Tool (PCT) uses the Eclipse Scripting API to check and compare data from the treatment planning system (TPS) and treatment management system (TMS). PCT was created to improve first-time plan quality, reduce patient delays, increase efficiency of our electronic workflow, and to standardize and partially automate plan checks in the TPS. A framework was developed which can be configured with different reference values and types of checks. One example is the prescribed dose check where PCT flags the user when the planned dose and the prescribed dose disagree. PCT includes a comprehensive checklist of automated and manual checks that are documented when performed by the user. A PDF report is created and automatically uploaded into the TMS. Prior to and during PCT development, errors caught during plan checks and also patient delays were tracked in order to prioritize which checks should be automated. The most common and significant errors were determined. Results: Nineteen of 33 checklist items were automated with data extracted with the PCT. These include checks for prescription, reference point and machine scheduling errors which are three of the top six causes of patient delays related to physics and dosimetry. Since the clinical roll-out, no delays have been due to errors that are automatically flagged by the PCT. Development continues to automate the remaining checks. Conclusion: With PCT, 57% of the physics plan checklist has been partially or fully automated. Treatment delays have declined since release of the PCT for clinical use. By tracking delays and errors, we have been able to measure the effectiveness of automating checks and are using this information to prioritize future development. This project was supported in part by P01CA059827.
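
    One of the automated checks described above, the prescribed dose comparison, might be sketched as follows; this is not the Eclipse Scripting API code used by the PCT, and all field names and tolerances are illustrative.

        # Hedged sketch: flag a disagreement between the planned dose (TPS) and the
        # prescription (TMS).
        def check_prescribed_dose(tps_plan, tms_prescription, tolerance_gy=0.01):
            planned = tps_plan["dose_per_fraction_gy"] * tps_plan["n_fractions"]
            prescribed = tms_prescription["total_dose_gy"]
            if abs(planned - prescribed) > tolerance_gy:
                return f"FLAG: planned {planned:.2f} Gy != prescribed {prescribed:.2f} Gy"
            return "OK"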

  5. Local analysis of strains and rotations for macromolecular electron microscopy maps

    Energy Technology Data Exchange (ETDEWEB)

    Martin-Ramos, A.; Prieto, F.; Melero, R.; Martin-Benito, J.; Jonic, S.; Navas-Calvente, J.; Vargas, J.; Oton, J.; Abrishami, V.; Rosa-Trevin, J.L. de la; Gomez-Blanco, J.; Vilas, J.L.; Marabini, R.; Carazo, R.; Sorzano, C.O.S.

    2016-07-01

    Macromolecular complexes can be considered as molecular nano-machines that must have mobile parts in order to perform their physiological functions. The reordering of their parts is essential to executing their task. These rearrangements induce local strains and rotations which, once analyzed, may provide relevant information about how the proteins perform their function. In this project these deformations of the macromolecular complexes are characterized, translating into a “mathematical language” the conformational changes of the complexes as they perform their function. Electron Microscopy (EM) volumes are analyzed with a method that uses B-splines as its basis functions. It is shown that the results obtained are consistent with the conformational changes described in the corresponding reference publications. (Author)

  6. 77 FR 72984 - Buprofezin Pesticide Tolerances; Technical Correction

    Science.gov (United States)

    2012-12-07

    ... ENVIRONMENTAL PROTECTION AGENCY 40 CFR Part 180 [EPA-HQ-OPP-2011-0759; FRL-9371-3] Buprofezin..., 2012, concerning buprofezin pesticide tolerances. This document corrects a typographical error. DATES...: Sec. 180.511 Buprofezin; tolerances for residues. (a) * * * Commodity... Parts per million...

  7. [Macromolecular aromatic network characteristics of Chinese power coal analyzed by synchronous fluorescence and X-ray diffraction].

    Science.gov (United States)

    Ye, Cui-Ping; Feng, Jie; Li, Wen-Ying

    2012-07-01

    Coal structure, especially the macromolecular aromatic skeleton, strongly influences coke reactivity and coal gasification, so understanding this skeleton structure is key to the rational, high-efficiency utilization of coal. However, its characterization is difficult because of the complex composition and structure of coal. It has been found that the macromolecular aromatic network of coal is best isolated if the small molecules in coal are first extracted; the skeleton structure can then be analyzed clearly by instruments such as X-ray diffraction (XRD), fluorescence spectroscopy in synchronous mode (Syn-F) and gel permeation chromatography (GPC). Based on previous results, two typical Chinese power coals, PS and HDG, were extracted by stepwise fractional liquid extraction using silica gel as the stationary phase and acetonitrile, tetrahydrofuran (THF), pyridine and 1-methyl-2-pyrrolidinone (NMP) as a solvent group for sequential elution. GPC, Syn-F and XRD were applied to investigate the molecular mass distribution, condensed aromatic structure and crystal characteristics. The results showed that the size of the aromatic layers (La) is small (3-3.95 nm) and the stacking heights (Lc) are 0.8-1.2 nm. The molecular mass distribution of the macromolecular aromatic network structure is between 400 and 1 130 amu, with condensed aromatic ring numbers of 3-7 in the structural units.
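    For readers unfamiliar with the La/Lc parameters quoted above: in carbon and coal work they are conventionally estimated from the breadth of the (100) and (002) XRD bands with Scherrer-type relations. The expressions below are the commonly used textbook forms with the usual shape constants for disordered carbons, not equations or values taken from this record:

        L_c = \frac{0.89\,\lambda}{\beta_{002}\cos\theta_{002}}, \qquad
        L_a = \frac{1.84\,\lambda}{\beta_{100}\cos\theta_{100}}

    where λ is the X-ray wavelength, β the full width at half maximum (in radians) of the respective band, and θ its Bragg angle.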

  8. Use of noncrystallographic symmetry for automated model building at medium to low resolution.

    Science.gov (United States)

    Wiegels, Tim; Lamzin, Victor S

    2012-04-01

    A novel method is presented for the automatic detection of noncrystallographic symmetry (NCS) in macromolecular crystal structure determination which does not require the derivation of molecular masks or the segmentation of density. It was found that throughout structure determination the NCS-related parts may be differently pronounced in the electron density. This often results in the modelling of molecular fragments of variable length and accuracy, especially during automated model-building procedures. These fragments were used to identify NCS relations in order to aid automated model building and refinement. In a number of test cases higher completeness and greater accuracy of the obtained structures were achieved, specifically at a crystallographic resolution of 2.3 Å or poorer. In the best case, the method allowed the building of up to 15% more residues automatically and a tripling of the average length of the built fragments.

  9. Concepts and Methods in Fault-tolerant Control

    DEFF Research Database (Denmark)

    Blanke, Mogens; Staroswiecly, M.; Wu, N.E.

    2001-01-01

    Faults in automated processes will often cause undesired reactions and shut-down of a controlled plant, and the consequences could be damage to technical parts of the plant, to personnel or the environment. Fault-tolerant control combines diagnosis with control methods to handle faults...

  10. Flexibility damps macromolecular crowding effects on protein folding dynamics: Application to the murine prion protein (121-231)

    Science.gov (United States)

    Bergasa-Caceres, Fernando; Rabitz, Herschel A.

    2014-01-01

    A model of protein folding kinetics is applied to study the combined effects of protein flexibility and macromolecular crowding on protein folding rate and stability. It is found that the increase in stability and folding rate promoted by macromolecular crowding is damped for proteins with highly flexible native structures. The model is applied to the folding dynamics of the murine prion protein (121-231). It is found that the high flexibility of the native isoform of the murine prion protein (121-231) reduces the effects of macromolecular crowding on its folding dynamics. The relevance of these findings for the pathogenic mechanism is discussed.

  11. Avoidable errors in deposited macromolecular structures: an impediment to efficient data mining

    Directory of Open Access Journals (Sweden)

    Zbigniew Dauter

    2014-05-01

    Full Text Available Whereas the vast majority of the more than 85 000 crystal structures of macromolecules currently deposited in the Protein Data Bank are of high quality, some suffer from a variety of imperfections. Although this fact has been pointed out in the past, it is still worth periodic updates so that the metadata obtained by global analysis of the available crystal structures, as well as the utilization of the individual structures for tasks such as drug design, should be based on only the most reliable data. Here, selected abnormal deposited structures have been analysed based on the Bayesian reasoning that the correctness of a model must be judged against both the primary evidence as well as prior knowledge. These structures, as well as information gained from the corresponding publications (if available), have emphasized some of the most prevalent types of common problems. The errors are often perfect illustrations of the nature of human cognition, which is frequently influenced by preconceptions that may lead to fanciful results in the absence of proper validation. Common errors can be traced to negligence and a lack of rigorous verification of the models against electron density, creation of non-parsimonious models, generation of improbable numbers, application of incorrect symmetry, illogical presentation of the results, or violation of the rules of chemistry and physics. Paying more attention to such problems, not only in the final validation stages but during the structure-determination process as well, is necessary not only in order to maintain the highest possible quality of the structural repositories and databases but most of all to provide a solid basis for subsequent studies, including large-scale data-mining projects. For many scientists PDB deposition is a rather infrequent event, so the need for proper training and supervision is emphasized, as well as the need for constant alertness of reason and critical judgment as absolutely

  12. Bounding quantum gate error rate based on reported average fidelity

    International Nuclear Information System (INIS)

    Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)

  13. Fault tolerance and reliability in integrated ship control

    DEFF Research Database (Denmark)

    Nielsen, Jens Frederik Dalsgaard; Izadi-Zamanabadi, Roozbeh; Schiøler, Henrik

    2002-01-01

    Various strategies for achieving fault tolerance in large scale control systems are discussed. The positive and negative impacts of distribution through network communication are presented. The ATOMOS framework for standardized reliable marine automation is presented along with the corresponding...

  14. Large-scale simulations of error-prone quantum computation devices

    International Nuclear Information System (INIS)

    Trieu, Doan Binh

    2009-01-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), which simulates a generic quantum computer on gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than are corrected. Fault-tolerant methods can overcome this problem, provided that the single qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2±0.2) x 10⁻⁶. For Gaussian distributed operational over-rotations the threshold lies at a standard deviation of 0.0431±0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced technology, i

  15. An editor for the generation and customization of geometry restraints.

    Science.gov (United States)

    Moriarty, Nigel W; Draizen, Eli J; Adams, Paul D

    2017-02-01

    Chemical restraints for use in macromolecular structure refinement are produced by a variety of methods, including a number of programs that use chemical information to generate the required bond, angle, dihedral, chiral and planar restraints. These programs help to automate the process and therefore minimize the errors that could otherwise occur if it were performed manually. Furthermore, restraint-dictionary generation programs can incorporate chemical and other prior knowledge to provide reasonable choices of types and values. However, the use of restraints to define the geometry of a molecule is an approximation introduced with efficiency in mind. The representation of a bond as a parabolic function is a convenience and does not reflect the true variability in even the simplest of molecules. Another complicating factor is the interplay of the molecule with other parts of the macromolecular model. Finally, difficult situations arise from molecules with rare or unusual moieties that may not have their conformational space fully explored. These factors give rise to the need for an interactive editor for WYSIWYG interactions with the restraints and molecule. Restraints Editor, Especially Ligands (REEL) is a graphical user interface for simple and error-free editing along with additional features to provide greater control of the restraint dictionaries in macromolecular refinement.
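    As a concrete illustration of the "parabolic function" approximation mentioned above, bond restraints in macromolecular refinement are conventionally written as a weighted harmonic penalty (a generic textbook form, not a formula specific to REEL):

        E_{bond} = \sum_i w_i \left(b_i - b_i^{ideal}\right)^2, \qquad w_i = 1/\sigma_i^2

    where b_i is the refined bond length, b_i^{ideal} the target value from the restraint dictionary, and σ_i the spread the dictionary assigns to that bond.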

  16. Towards fully automated structure-based NMR resonance assignment of 15N-labeled proteins from automatically picked peaks

    KAUST Repository

    Jang, Richard; Gao, Xin; Li, Ming

    2011-01-01

    In NMR resonance assignment, an indispensable step in NMR protein studies, manually processed peaks from both N-labeled and C-labeled spectra are typically used as inputs. However, the use of homologous structures can allow one to use only N-labeled NMR data and avoid the added expense of using C-labeled data. We propose a novel integer programming framework for structure-based backbone resonance assignment using N-labeled data. The core consists of a pair of integer programming models: one for spin system forming and amino acid typing, and the other for backbone resonance assignment. The goal is to perform the assignment directly from spectra without any manual intervention via automatically picked peaks, which are much noisier than manually picked peaks, so methods must be error-tolerant. In the case of semi-automated/manually processed peak data, we compare our system with the Xiong-Pandurangan-Bailey- Kellogg's contact replacement (CR) method, which is the most error-tolerant method for structure-based resonance assignment. Our system, on average, reduces the error rate of the CR method by five folds on their data set. In addition, by using an iterative algorithm, our system has the added capability of using the NOESY data to correct assignment errors due to errors in predicting the amino acid and secondary structure type of each spin system. On a publicly available data set for human ubiquitin, where the typing accuracy is 83%, we achieve 91% accuracy, compared to the 59% accuracy obtained without correcting for such errors. In the case of automatically picked peaks, using assignment information from yeast ubiquitin, we achieve a fully automatic assignment with 97% accuracy. To our knowledge, this is the first system that can achieve fully automatic structure-based assignment directly from spectra. This has implications in NMR protein mutant studies, where the assignment step is repeated for each mutant. © Copyright 2011, Mary Ann Liebert, Inc.

  17. Towards fully automated structure-based NMR resonance assignment of 15N-labeled proteins from automatically picked peaks

    KAUST Repository

    Jang, Richard

    2011-03-01

    In NMR resonance assignment, an indispensable step in NMR protein studies, manually processed peaks from both N-labeled and C-labeled spectra are typically used as inputs. However, the use of homologous structures can allow one to use only N-labeled NMR data and avoid the added expense of using C-labeled data. We propose a novel integer programming framework for structure-based backbone resonance assignment using N-labeled data. The core consists of a pair of integer programming models: one for spin system forming and amino acid typing, and the other for backbone resonance assignment. The goal is to perform the assignment directly from spectra without any manual intervention via automatically picked peaks, which are much noisier than manually picked peaks, so methods must be error-tolerant. In the case of semi-automated/manually processed peak data, we compare our system with the Xiong-Pandurangan-Bailey-Kellogg's contact replacement (CR) method, which is the most error-tolerant method for structure-based resonance assignment. Our system, on average, reduces the error rate of the CR method by five folds on their data set. In addition, by using an iterative algorithm, our system has the added capability of using the NOESY data to correct assignment errors due to errors in predicting the amino acid and secondary structure type of each spin system. On a publicly available data set for human ubiquitin, where the typing accuracy is 83%, we achieve 91% accuracy, compared to the 59% accuracy obtained without correcting for such errors. In the case of automatically picked peaks, using assignment information from yeast ubiquitin, we achieve a fully automatic assignment with 97% accuracy. To our knowledge, this is the first system that can achieve fully automatic structure-based assignment directly from spectra. This has implications in NMR protein mutant studies, where the assignment step is repeated for each mutant. © Copyright 2011, Mary Ann Liebert, Inc.
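    The abstract does not spell out the integer programming core; as a rough orientation, the resonance-assignment step can be viewed as a weighted assignment of spin systems to residues. The following is a generic sketch of such a formulation under assumed notation, not the authors' actual model:

        maximize   \sum_{i,j} c_{ij}\, x_{ij}
        subject to \sum_{j} x_{ij} \le 1  (each spin system assigned at most once),
                   \sum_{i} x_{ij} \le 1  (each residue receives at most one spin system),
                   x_{ij} \in \{0, 1\},

    where c_{ij} scores how well spin system i matches residue j (for example via amino-acid typing, predicted secondary structure, and contact information from the homologous structure).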

  18. TOLERANCE OF Abelmoschus esculentus (L

    African Journals Online (AJOL)

    Cletus

    Key word: - Tolerance, diesel oil, polluted soil, Abelmoschus esculentus. INTRODUCTION ... errors -of the mean values were calculated for the replicate readings and data .... African Schools and Colleges, 2nd Ed. University Press Limited ...

  19. Macromolecular crystallography research at Trombay

    International Nuclear Information System (INIS)

    Kannan, K.K.; Chidambaram, R.

    1983-01-01

    Neutron diffraction studies of hydrogen positions in small molecules of biological interest at Trombay have provided valuable information that has been used in protein and enzyme structure model-building and in developing hydrogen bond potential functions. The new R-5 reactor is expected to provide higher neutron fluxes and also make possible small-angle neutron scattering studies of large biomolecules and bio-aggregates. In the last few years infrastructure facilities have also been established for macromolecular x-ray crystallography research. Meanwhile, the refinement of carbonic anhydrase and lysozyme structures has been carried out and interesting results obtained on protein dynamics and structure-function relationships. Some interesting presynaptic toxin phospholipases have also been taken up for study. (author)

  20. E-health, phase two: the imperative to integrate process automation with communication automation for large clinical reference laboratories.

    Science.gov (United States)

    White, L; Terner, C

    2001-01-01

    The initial efforts of e-health have fallen far short of expectations. They were buoyed by the hype and excitement of the Internet craze but limited by their lack of understanding of important market and environmental factors. E-health now recognizes that legacy systems and processes are important, that there is a technology adoption process that needs to be followed, and that demonstrable value drives adoption. Initial e-health transaction solutions have targeted mostly low-cost problems. These solutions invariably are difficult to integrate into existing systems, typically requiring manual interfacing to supported processes. This limitation in particular makes them unworkable for large volume providers. To meet the needs of these providers, e-health companies must rethink their approaches, appropriately applying technology to seamlessly integrate all steps into existing business functions. E-automation is a transaction technology that automates steps, integration of steps, and information communication demands, resulting in comprehensive automation of entire business functions. We applied e-automation to create a billing management solution for clinical reference laboratories. Large volume, onerous regulations, small margins, and only indirect access to patients challenge large laboratories' billing departments. Couple these problems with outmoded, largely manual systems and it becomes apparent why most laboratory billing departments are in crisis. Our approach has been to focus on the most significant and costly problems in billing: errors, compliance, and system maintenance and management. The core of the design relies on conditional processing, a "universal" communications interface, and ASP technologies. The result is comprehensive automation of all routine processes, driving out errors and costs. Additionally, compliance management and billing system support and management costs are dramatically reduced. The implications of e-automated processes can extend

  1. Dexamethasone attenuates grain sorghum dust extract-induced increase in macromolecular efflux in vivo.

    Science.gov (United States)

    Akhter, S R; Ikezaki, H; Gao, X P; Rubinstein, I

    1999-05-01

    The purpose of this study was to determine whether dexamethasone attenuates grain sorghum dust extract-induced increase in macromolecular efflux from the in situ hamster cheek pouch and, if so, whether this response is specific. By using intravital microscopy, we found that an aqueous extract of grain sorghum dust elicited significant, concentration-dependent leaky site formation and increase in clearance of FITC-labeled dextran (FITC-dextran; mol mass, 70 kDa) from the in situ hamster cheek pouch (P < 0.05). Dexamethasone attenuated the grain sorghum dust extract- and substance P-induced increases in macromolecular efflux from the in situ hamster cheek pouch in a specific fashion.

  2. Isotope labeling for NMR studies of macromolecular structure and interactions

    International Nuclear Information System (INIS)

    Wright, P.E.

    1994-01-01

    Implementation of biosynthetic methods for uniform or specific isotope labeling of proteins, coupled with the recent development of powerful heteronuclear multidimensional NMR methods, has led to a dramatic increase in the size and complexity of macromolecular systems that are now amenable to NMR structural analysis. In recent years, a new technology has emerged that combines uniform 13 C, 15 N labeling with heteronuclear multidimensional NMR methods to allow NMR structural studies of systems approaching 25 to 30 kDa in molecular weight. In addition, with the introduction of specific 13 C and 15 N labels into ligands, meaningful NMR studies of complexes of even higher molecular weight have become feasible. These advances usher in a new era in which the earlier, rather stringent molecular weight limitations have been greatly surpassed and NMR can begin to address many central biological problems that involve macromolecular structure, dynamics, and interactions

  3. Isotope labeling for NMR studies of macromolecular structure and interactions

    Energy Technology Data Exchange (ETDEWEB)

    Wright, P.E. [Scripps Research Institute, La Jolla, CA (United States)

    1994-12-01

    Implementation of biosynthetic methods for uniform or specific isotope labeling of proteins, coupled with the recent development of powerful heteronuclear multidimensional NMR methods, has led to a dramatic increase in the size and complexity of macromolecular systems that are now amenable to NMR structural analysis. In recent years, a new technology has emerged that combines uniform {sup 13}C, {sup 15}N labeling with heteronuclear multidimensional NMR methods to allow NMR structural studies of systems approaching 25 to 30 kDa in molecular weight. In addition, with the introduction of specific {sup 13}C and {sup 15}N labels into ligands, meaningful NMR studies of complexes of even higher molecular weight have become feasible. These advances usher in a new era in which the earlier, rather stringent molecular weight limitations have been greatly surpassed and NMR can begin to address many central biological problems that involve macromolecular structure, dynamics, and interactions.

  4. Macromolecular shape and interactions in layer-by-layer assemblies within cylindrical nanopores.

    Science.gov (United States)

    Lazzara, Thomas D; Lau, K H Aaron; Knoll, Wolfgang; Janshoff, Andreas; Steinem, Claudia

    2012-01-01

    Layer-by-layer (LbL) deposition of polyelectrolytes and proteins within the cylindrical nanopores of anodic aluminum oxide (AAO) membranes was studied by optical waveguide spectroscopy (OWS). AAO has aligned cylindrical, nonintersecting pores with a defined pore diameter d(0) and functions as a planar optical waveguide so as to monitor, in situ, the LbL process by OWS. The LbL deposition of globular proteins, i.e., avidin and biotinylated bovine serum albumin was compared with that of linear polyelectrolytes (linear-PEs), both species being of similar molecular weight. LbL deposition within the cylindrical AAO geometry for different pore diameters (d(0) = 25-80 nm) for the various macromolecular species, showed that the multilayer film growth was inhibited at different maximum numbers of LbL steps (n(max)). The value of n(max) was greatest for linear-PEs, while proteins had a lower value. The cylindrical pore geometry imposes a physical limit to LbL growth such that n(max) is strongly dependent on the overall internal structure of the LbL film. For all macromolecular species, deposition was inhibited in native AAO, having pores of d(0) = 25-30 nm. Both, OWS and scanning electron microscopy showed that LbL growth in larger AAO pores (d(0) > 25-30 nm) became inhibited when approaching a pore diameter of d(eff,n_max) = 25-35 nm, a similar size to that of native AAO pores, with d(0) = 25-30 nm. For a reasonable estimation of d(eff,n_max), the actual volume occupied by a macromolecular assembly must be taken into consideration. The results clearly show that electrostatic LbL allowed for compact macromolecular layers, whereas proteins formed loosely packed multilayers.

  5. Error-tolerant pedal for a brake-by-wire system; Fehlertolerante Pedaleinheit fuer ein elektromechanisches Bremssystem (Brake-by-Wire)

    Energy Technology Data Exchange (ETDEWEB)

    Stoelzl, S.

    2000-07-01

    The author describes the development of an error-tolerant brake-by-wire system with pedal consolidation, including the development of a monitoring and safety concept. [Translated from German] The growing use of active driver-assistance systems in the automotive sector (e.g. ABS, ESP) to increase driving safety demands a constantly expanding range of functions, so brake systems are becoming ever more complex. In parallel, the requirements on brake-pedal comfort are rising. The electromechanical brake system (EMB), which decouples the driver from the wheel brakes without mechanical feedback (brake-by-wire), promises a way out of this problem: when the brake pedal is pressed, the driver's braking command is captured purely by sensors. Since there is no longer a mechanical fall-back level, faults in the pedal unit must be detected and tolerated. The novel contribution of this work is the development of the fault-tolerant electromechanical pedal unit of the EMB with pedal-sensor consolidation, together with the necessary safety and monitoring concept. (orig.)

  6. A fault tolerant system by using distributed RTOS

    International Nuclear Information System (INIS)

    Ge Yingan; Liu Songqiang; Wang Yanfang

    1999-01-01

    The author describes the design and implementation of a prototype distributed fault-tolerant system, developed under the QNX RTOS by networking two standard PCs. By using a watchdog timer for error detection, the system can tolerate fail-silent and transient faults of a single node.
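    As a toy illustration of watchdog-based error detection between two nodes, the sketch below shows a heartbeat monitor in Python. It is a hypothetical analogue with invented names; the actual system runs on QNX and is not reflected by this code.

        import time

        class Watchdog:
            """Declare a node failed if it has not sent a heartbeat within 'timeout' seconds."""

            def __init__(self, timeout=1.0):
                self.timeout = timeout
                self.last_heartbeat = time.monotonic()

            def heartbeat(self):
                # Called by the monitored node (e.g. via the network) on every cycle.
                self.last_heartbeat = time.monotonic()

            def node_alive(self):
                return (time.monotonic() - self.last_heartbeat) < self.timeout

        # The surviving node polls the watchdog and takes over if the peer falls silent.
        wd = Watchdog(timeout=1.0)
        if not wd.node_alive():
            print("peer node failed silently - switching to standby node")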

  7. Should drivers be operating within an automation-free bandwidth? Evaluating haptic steering support systems with different levels of authority.

    Science.gov (United States)

    Petermeijer, Sebastiaan M; Abbink, David A; de Winter, Joost C F

    2015-02-01

    The aim of this study was to compare continuous versus bandwidth haptic steering guidance in terms of lane-keeping behavior, aftereffects, and satisfaction. An important human factors question is whether operators should be supported continuously or only when tolerance limits are exceeded. We aimed to clarify this issue for haptic steering guidance by investigating costs and benefits of both approaches in a driving simulator. Thirty-two participants drove five trials, each with a different level of haptic support: no guidance (Manual); guidance outside a 0.5-m bandwidth (Band1); a hysteresis version of Band1, which guided back to the lane center once triggered (Band2); continuous guidance (Cont); and Cont with double feedback gain (ContS). Participants performed a reaction time task while driving. Toward the end of each trial, the guidance was unexpectedly disabled to investigate aftereffects. All four guidance systems prevented large lateral errors (>0.7 m). Cont and especially ContS yielded smaller lateral errors and higher time to line crossing than Manual, Band1, and Band2. Cont and ContS yielded short-lasting aftereffects, whereas Band1 and Band2 did not. Cont yielded higher self-reported satisfaction and faster reaction times than Band1. Continuous and bandwidth guidance both prevent large driver errors. Continuous guidance yields improved performance and satisfaction over bandwidth guidance at the cost of aftereffects and variability in driver torque (indicating human-automation conflicts). The presented results are useful for designers of haptic guidance systems and support critical thinking about the costs and benefits of automation support systems.

  8. Variationally optimal selection of slow coordinates and reaction coordinates in macromolecular systems

    Science.gov (United States)

    Noe, Frank

    To efficiently simulate and generate understanding from simulations of complex macromolecular systems, the concept of slow collective coordinates or reaction coordinates is of fundamental importance. Here we introduce variational approaches to approximate the slow coordinates and the reaction coordinates between selected end-states, given MD simulations of the macromolecular system and a (possibly large) basis set of candidate coordinates. We then discuss how to select physically intuitive order parameters that are good surrogates of this variationally optimal result. These results can be used to construct Markov state models or other models of the stationary and kinetic properties, and to parametrize low-dimensional/coarse-grained models of the dynamics. Deutsche Forschungsgemeinschaft, European Research Council.
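    A minimal numerical sketch of the variational idea described above (estimating slow linear combinations of candidate coordinates from trajectory data via a generalized eigenvalue problem) is given below. It is a generic TICA/VAC-style construction under simplifying assumptions, not the authors' specific method.

        import numpy as np
        from scipy.linalg import eigh

        def slow_coordinates(X, lag):
            """Estimate slow collective coordinates from a feature trajectory.

            X   : (T, n) array, trajectory of n candidate (basis) coordinates
            lag : lag time in frames
            Returns eigenvalues (slowest first) and the mixing matrix whose columns
            define the estimated slow coordinates as linear combinations of the inputs.
            """
            X = X - X.mean(axis=0)                        # work with mean-free coordinates
            X0, Xt = X[:-lag], X[lag:]
            n_pairs = len(X0)
            C0 = (X0.T @ X0 + Xt.T @ Xt) / (2 * n_pairs)  # instantaneous covariance
            Ct = (X0.T @ Xt + Xt.T @ X0) / (2 * n_pairs)  # symmetrized time-lagged covariance
            # Generalized symmetric eigenproblem Ct v = lambda C0 v;
            # assumes C0 is non-singular (regularize or reduce the basis otherwise).
            evals, evecs = eigh(Ct, C0)
            order = np.argsort(evals)[::-1]               # largest eigenvalue <-> slowest process
            return evals[order], evecs[:, order]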

  9. Considerations on automation of coating machines

    Science.gov (United States)

    Tilsch, Markus K.; O'Donnell, Michael S.

    2015-04-01

    Most deposition chambers sold into the optical coating market today are outfitted with an automated control system. We surveyed several of the larger equipment providers, and nine of them responded with information about their hardware architecture, data logging, level of automation, error handling, user interface, and interfacing options. In this paper, we present a summary of the results of the survey and describe commonalities and differences together with some considerations of tradeoffs, such as between capability for high customization and simplicity of operation.

  10. Multiple Embedded Processors for Fault-Tolerant Computing

    Science.gov (United States)

    Bolotin, Gary; Watson, Robert; Katanyoutanant, Sunant; Burke, Gary; Wang, Mandy

    2005-01-01

    A fault-tolerant computer architecture has been conceived in an effort to reduce vulnerability to single-event upsets (spurious bit flips caused by impingement of energetic ionizing particles or photons). As in some prior fault-tolerant architectures, the redundancy needed for fault tolerance is obtained by use of multiple processors in one computer. Unlike prior architectures, the multiple processors are embedded in a single field-programmable gate array (FPGA). What makes this new approach practical is the recent commercial availability of FPGAs that are capable of having multiple embedded processors. A working prototype (see figure) consists of two embedded IBM PowerPC 405 processor cores and a comparator built on a Xilinx Virtex-II Pro FPGA. This relatively simple instantiation of the architecture implements an error-detection scheme. A planned future version, incorporating four processors and two comparators, would correct some errors in addition to detecting them.
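    To make the redundancy schemes above concrete, the sketch below contrasts a two-way comparator (which can only detect a single-event upset) with a three-way majority voter (which can also mask one). This is a generic illustrative Python analogue, not the FPGA logic itself, and the planned four-processor/two-comparator design is not reproduced here.

        def compare(a, b):
            """Dual redundancy: detect, but not correct, a disagreement between two processors."""
            if a != b:
                raise RuntimeError("lockstep mismatch - possible single-event upset")
            return a

        def vote(a, b, c):
            """Triple redundancy: return the majority result, masking one corrupted value."""
            if a == b or a == c:
                return a
            if b == c:
                return b
            raise RuntimeError("no majority - more than one result corrupted")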

  11. Superhydrophobic hybrid membranes by grafting arc-like macromolecular bridges on graphene sheets: Synthesis, characterization and properties

    Science.gov (United States)

    Mo, Zhao-Hua; Luo, Zheng; Huang, Qiang; Deng, Jian-Ping; Wu, Yi-Xian

    2018-05-01

    Grafting single end-tethered polymer chains on the surface of graphene is a conventional way to modify the surface properties of graphene oxide. However, grafting arc-like macromolecular bridges on graphene surfaces has been barely reported. Herein, a novel arc-like polydimethylsiloxane (PDMS) macromolecular bridges grafted graphene sheets (GO-g-Arc PDMS) was successfully synthesized via a confined interface reaction at 90 °C. Both the hydrophilic α- and ω-amino groups of linear hydrophobic NH2-PDMS-NH2 macromolecular chains rapidly reacted with epoxy and carboxyl groups on the surfaces of graphene oxide in water suspension to form arc-like PDMS macromolecular bridges on graphene sheets. The grafting density of arc-like PDMS bridges on graphene sheets can reach up to 0.80 mmol g-1 or 1.32 arc-like bridges per nm2 by this confined interface reaction. The water contact angle (WCA) of the hybrid membrane could be increased with increasing both the grafting density and content of covalent arc-like bridges architecture. The superhydrophobic hybrid membrane with a WCA of 153.4° was prepared by grinding of the above arc-like PDMS bridges grafted graphene hybrid, dispersing in ethanol and filtrating by organic filter membrane. This superhydrophobic hybrid membrane shows good self-cleaning and complete oil-water separation properties, which provides potential applications in anticontamination coating and oil-water separation. To the best of our knowledge, this is the first report on the synthesis of functional hybrid membranes by grafting arc-like PDMS macromolecular bridges on graphene sheets via a confined interface reaction.

  12. Small cities face greater impact from automation

    Science.gov (United States)

    Sun, Lijun; Cebrian, Manuel; Rahwan, Iyad

    2018-01-01

    The city has proved to be the most successful form of human agglomeration and provides wide employment opportunities for its dwellers. As advances in robotics and artificial intelligence revive concerns about the impact of automation on jobs, a question looms: how will automation affect employment in cities? Here, we provide a comparative picture of the impact of automation across US urban areas. Small cities will undertake greater adjustments, such as worker displacement and job content substitutions. We demonstrate that large cities exhibit increased occupational and skill specialization due to increased abundance of managerial and technical professions. These occupations are not easily automatable, and, thus, reduce the potential impact of automation in large cities. Our results pass several robustness checks including potential errors in the estimation of occupational automation and subsampling of occupations. Our study provides the first empirical law connecting two societal forces: urban agglomeration and automation's impact on employment. PMID:29436514

  13. Small cities face greater impact from automation.

    Science.gov (United States)

    Frank, Morgan R; Sun, Lijun; Cebrian, Manuel; Youn, Hyejin; Rahwan, Iyad

    2018-02-01

    The city has proved to be the most successful form of human agglomeration and provides wide employment opportunities for its dwellers. As advances in robotics and artificial intelligence revive concerns about the impact of automation on jobs, a question looms: how will automation affect employment in cities? Here, we provide a comparative picture of the impact of automation across US urban areas. Small cities will undertake greater adjustments, such as worker displacement and job content substitutions. We demonstrate that large cities exhibit increased occupational and skill specialization due to increased abundance of managerial and technical professions. These occupations are not easily automatable, and, thus, reduce the potential impact of automation in large cities. Our results pass several robustness checks including potential errors in the estimation of occupational automation and subsampling of occupations. Our study provides the first empirical law connecting two societal forces: urban agglomeration and automation's impact on employment. © 2018 The Authors.

  14. Tuning the properties of an anthracene-based PPE-PPV copolymer by fine variation of its macromolecular parameters

    Czech Academy of Sciences Publication Activity Database

    Tinti, F.; Sabir, F. K.; Gazzano, M.; Righi, S.; Ulbricht, C.; Usluer, Ö.; Pokorná, Veronika; Cimrová, Věra; Yohannes, T.; Egbe, D. A. M.; Camaioni, N.

    2013-01-01

    Roč. 3, č. 19 (2013), s. 6972-6980 ISSN 2046-2069 R&D Projects: GA ČR GAP106/12/0827; GA ČR(CZ) GA13-26542S Institutional support: RVO:61389013 Keywords : anthracene-containing PPE-PPV copolymer * macromolecular parameters * structural and transport properties Subject RIV: CD - Macromolecular Chemistry Impact factor: 3.708, year: 2013

  15. Error response test system and method using test mask variable

    Science.gov (United States)

    Gender, Thomas K. (Inventor)

    2006-01-01

    An error response test system and method with increased functionality and improved performance is provided. The system injects errors into the application under test so that its error response can be exercised in an automated and efficient manner. Errors are injected through a test mask variable that is added to the application under test. During normal operation, the test mask variable is set so that the application operates normally; during testing, the test system changes the test mask variable to introduce an error into the application. The test system then monitors the application under test to determine whether it responds correctly to the error.
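    The sketch below illustrates the idea of a test mask variable with a hypothetical example (all names are invented for illustration; the patent does not disclose this code): each bit of the mask enables one injected error, the mask is zero in normal operation, and the test harness sets a bit to provoke and then observe the application's error response.

        # Bit flags for the errors that can be injected (illustrative only).
        ERR_BAD_CHECKSUM = 0x01
        ERR_NULL_REPLY   = 0x02

        test_mask = 0x00   # normal operation: no error injection

        def read_sensor(raw_value):
            """Return the sensor reading, corrupted according to the active test mask."""
            if test_mask & ERR_BAD_CHECKSUM:
                return raw_value ^ 0xFF      # deliberately corrupt the value
            if test_mask & ERR_NULL_REPLY:
                return None                  # simulate a missing reply
            return raw_value

        # Testing: flip one bit, exercise the application, and check its error handling.
        test_mask = ERR_BAD_CHECKSUM
        assert read_sensor(0x12) == (0x12 ^ 0xFF)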

  16. Bar-code automated waste tracking system

    International Nuclear Information System (INIS)

    Hull, T.E.

    1994-10-01

    The Bar-Code Automated Waste Tracking System was designed to be a site-specific program with a general-purpose application for transportability to other facilities. The system is user-friendly, totally automated, and incorporates the use of a drive-up window that is close to the areas dealing in container preparation, delivery, pickup, and disposal. The system features "stop-and-go" operation rather than long, tedious, error-prone manual entry. The system is designed for automation but allows operators to concentrate on proper handling of waste while maintaining manual entry of data as a backup. A large wall plaque filled with bar-code labels is used to input specific details about any movement of waste.

  17. Large-scale simulations of error-prone quantum computation devices

    Energy Technology Data Exchange (ETDEWEB)

    Trieu, Doan Binh

    2009-07-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), that simulates a generic quantum computer on gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than being corrected. Fault-tolerant methods can overcome this problem, provided that the single qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2{+-}0.2) x 10{sup -6}. For Gaussian distributed operational over-rotations the threshold lies at a standard deviation of 0.0431{+-}0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced

  18. A new efficient algorithmic-based SEU tolerant system architecture

    International Nuclear Information System (INIS)

    Blaquiere, Y.; Gagne, G.; Savaria, Y.; Evequoz, C.

    1995-01-01

    A new ABFT architecture is proposed to tolerate multiple SEUs with low overhead. This architecture memorizes operands on a stack upon error detection and corrects errors by recomputing. This allows an uninterrupted input data stream to be processed without data loss.

  19. Automated 741 document preparation: Oak Ridge National Laboratory's Automated Safeguards Information System (OASIS)

    International Nuclear Information System (INIS)

    Austin, H.C.; Gray, L.M.

    1982-01-01

    OASIS has been providing for Oak Ridge National Laboratory's total safeguards needs since being placed on line in April 1980. The system supports near real-time nuclear materials safeguards and accountability control. The original design of OASIS called for an automated facsimile of a 741 document to be prepared as a functional by-product of updating the inventory. An attempt was made to utilize, intact, DOE-Albuquerque's automated 741 system to generate the facsimile; however, the five-page document produced proved too cumbersome. Albuquerque's programs were modified to print an original 741 document utilizing standard DOE/NRC 741 forms. It is felt that the best features of both the automated and manually generated 741 documents have been incorporated. Automation of the source data for 741 shipping documents produces greater efficiency while reducing possible errors. Through utilization of the standard DOE/NRC form, continuity within the NMMSS system is maintained, thus minimizing the confusion and redundancy associated with facsimiles. OASIS now fulfills the original concept of near real-time accountability by furnishing a viable 741 document as a function of updating the inventory.

  20. Enhanced fault-tolerant quantum computing in d-level systems.

    Science.gov (United States)

    Campbell, Earl T

    2014-12-05

    Error-correcting codes protect quantum information and form the basis of fault-tolerant quantum computing. Leading proposals for fault-tolerant quantum computation require codes with an exceedingly rare property, a transversal non-Clifford gate. Codes with the desired property are presented for d-level qudit systems with prime d. The codes use n=d-1 qudits and can detect up to ∼d/3 errors. We quantify the performance of these codes for one approach to quantum computation known as magic-state distillation. Unlike prior work, we find performance is always enhanced by increasing d.

  1. Automated sample mounting and alignment system for biological crystallography at a synchrotron source

    International Nuclear Information System (INIS)

    Snell, Gyorgy; Cork, Carl; Nordmeyer, Robert; Cornell, Earl; Meigs, George; Yegian, Derek; Jaklevic, Joseph; Jin, Jian; Stevens, Raymond C.; Earnest, Thomas

    2004-01-01

    High-throughput data collection for macromolecular crystallography requires an automated sample mounting system for cryo-protected crystals that functions reliably when integrated into protein-crystallography beamlines at synchrotrons. Rapid mounting and dismounting of the samples increases the efficiency of the crystal screening and data collection processes, where many crystals can be tested for the quality of diffraction. The sample-mounting subsystem has random access to 112 samples, stored under liquid nitrogen. Results of extensive tests regarding the performance and reliability of the system are presented. To further increase throughput, we have also developed a sample transport/storage system based on 'puck-shaped' cassettes, which can hold sixteen samples each. Seven cassettes fit into a standard dry shipping Dewar. The capabilities of a robotic crystal mounting and alignment system, together with instrumentation control software and a relational database, allow automated screening and data collection to be developed.

  2. Impact of MSD and mask manufacture errors on 45nm-node lithography

    Science.gov (United States)

    Han, Chunying; Li, Yanqiu; Liu, Lihui; Guo, Xuejia; Wang, Xuxia; Yang, Jianhong

    2012-10-01

    Critical Dimension Uniformity (CDU) is quite sensitive in 45nm-node lithography and beyond, so more attention should be paid to controlling CDU. Moving Standard Deviation (MSD) and Mask Manufacture Errors (MMEs), including the Mask Critical Dimension Error (MCDE), Mask Transmittance Error (MTE) and Mask Phase Error (MPE), are two important factors influencing CDU, and studying their impact is a helpful way to improve lithographic quality. Previous research has often emphasized the impact of MSD or MMEs individually; however, the two usually act simultaneously, so studies of their combined impact are more significant. In this paper, the impact of and cross-talk between MSD and MMEs on Critical Dimension (CD) and Exposure Latitude versus Depth of Focus (EL-DOF) for different patterns under various illumination conditions have been evaluated by simulation, carried out with PROLITH X3 and the in-house software IntLitho. The tolerance of MSD in the presence of MMEs is then discussed. The simulation results show that the CD error caused by the co-existence of MSD and MMEs is not the simple algebraic sum of the individual CD errors caused by MSD or MMEs; the CD error becomes more pronounced when MSD and MMEs interact with each other. The tolerance studies reveal that the tolerance of MSD decreases in the presence of MMEs and depends mainly on the mask pattern's pitch.

  3. The impact of treatment complexity and computer-control delivery technology on treatment delivery errors

    International Nuclear Information System (INIS)

    Fraass, Benedick A.; Lash, Kathy L.; Matrone, Gwynne M.; Volkman, Susan K.; McShan, Daniel L.; Kessler, Marc L.; Lichter, Allen S.

    1998-01-01

    Purpose: To analyze treatment delivery errors for three-dimensional (3D) conformal therapy performed at various levels of treatment delivery automation and complexity, ranging from manual field setup to virtually complete computer-controlled treatment delivery using a computer-controlled conformal radiotherapy system (CCRS). Methods and Materials: All treatment delivery errors which occurred in our department during a 15-month period were analyzed. Approximately 34,000 treatment sessions (114,000 individual treatment segments [ports]) on four treatment machines were studied. All treatment delivery errors logged by treatment therapists or quality assurance reviews (152 in all) were analyzed. Machines 'M1' and 'M2' were operated in a standard manual setup mode, with no record and verify system (R/V). MLC machines 'M3' and 'M4' treated patients under the control of the CCRS system, which (1) downloads the treatment delivery plan from the planning system; (2) performs some (or all) of the machine set up and treatment delivery for each field; (3) monitors treatment delivery; (4) records all treatment parameters; and (5) notes exceptions to the electronically-prescribed plan. Complete external computer control is not available on M3; therefore, it uses as many CCRS features as possible, while M4 operates completely under CCRS control and performs semi-automated and automated multi-segment intensity modulated treatments. Analysis of treatment complexity was based on numbers of fields, individual segments, nonaxial and noncoplanar plans, multisegment intensity modulation, and pseudoisocentric treatments studied for a 6-month period (505 patients) concurrent with the period in which the delivery errors were obtained. Treatment delivery time was obtained from the computerized scheduling system (for manual treatments) or from CCRS system logs. Treatment therapists rotate among the machines; therefore, this analysis does not depend on fixed therapist staff on particular

  4. Impact of error fields on equilibrium configurations in ITER

    Energy Technology Data Exchange (ETDEWEB)

    Barbato, Lucio [DIEI, Università di Cassino and Lazio Meridionale, Cassino (Italy); Formisano, Alessandro, E-mail: alessandro.formisano@unina2.it [Department of Industrial and Information Engineering, Seconda Univ. di Napoli, Aversa (Italy); Martone, Raffaele [Department of Industrial and Information Engineering, Seconda Univ. di Napoli, Aversa (Italy); Villone, Fabio [DIEI, Università di Cassino and Lazio Meridionale, Cassino (Italy)

    2015-10-15

    Highlights: • Error fields (EF) are discrepancies from nominal magnetic field, and may alter plasma behaviour. • They are due to, e.g., coils manufacturing and assembly errors. • EF impact in ITER equilibria is analyzed using numerical simulations. • A high accuracy 3D field computation module and a Grad-Shafranov solver are used. • Deformations size allow using a linearized model, and performing a sensitivity analysis. - Abstract: Discrepancies between design and actual magnetic field maps in tokamaks are unavoidable, and are associated to a number of causes, e.g. manufacturing and assembly tolerances on magnets, presence of feeders and joints, non-symmetric iron parts. Such error fields may drive plasma to loss of stability, and must be carefully controlled using suitable correction coils. Anyway, even when kept below safety threshold, error fields may alter the behavior of plasma. The present paper, using as example the error fields induced by tolerances in toroidal field coils, quantifies their effect on the plasma boundary shape in equilibrium configurations. In particular, a procedure able to compute the shape perturbations due to given deformations of the coils has been set up and used to carry out a thorough statistical analysis of the error field-shape perturbations relationship.

  5. Bayesian Safety Risk Modeling of Human-Flightdeck Automation Interaction

    Science.gov (United States)

    Ancel, Ersin; Shih, Ann T.

    2015-01-01

    Usage of automatic systems in airliners has increased fuel efficiency, added extra capabilities, enhanced safety and reliability, as well as provide improved passenger comfort since its introduction in the late 80's. However, original automation benefits, including reduced flight crew workload, human errors or training requirements, were not achieved as originally expected. Instead, automation introduced new failure modes, redistributed, and sometimes increased workload, brought in new cognitive and attention demands, and increased training requirements. Modern airliners have numerous flight modes, providing more flexibility (and inherently more complexity) to the flight crew. However, the price to pay for the increased flexibility is the need for increased mode awareness, as well as the need to supervise, understand, and predict automated system behavior. Also, over-reliance on automation is linked to manual flight skill degradation and complacency in commercial pilots. As a result, recent accidents involving human errors are often caused by the interactions between humans and the automated systems (e.g., the breakdown in man-machine coordination), deteriorated manual flying skills, and/or loss of situational awareness due to heavy dependence on automated systems. This paper describes the development of the increased complexity and reliance on automation baseline model, named FLAP for FLightdeck Automation Problems. The model development process starts with a comprehensive literature review followed by the construction of a framework comprised of high-level causal factors leading to an automation-related flight anomaly. The framework was then converted into a Bayesian Belief Network (BBN) using the Hugin Software v7.8. The effects of automation on flight crew are incorporated into the model, including flight skill degradation, increased cognitive demand and training requirements along with their interactions. Besides flight crew deficiencies, automation system

  6. Automated Quantification of Pneumothorax in CT

    Science.gov (United States)

    Do, Synho; Salvaggio, Kristen; Gupta, Supriya; Kalra, Mannudeep; Ali, Nabeel U.; Pien, Homer

    2012-01-01

    An automated, computer-aided diagnosis (CAD) algorithm for the quantification of pneumothoraces from Multidetector Computed Tomography (MDCT) images has been developed. Algorithm performance was evaluated through comparison to manual segmentation by expert radiologists. A combination of two-dimensional and three-dimensional processing techniques was incorporated to reduce required processing time by two-thirds (as compared to similar techniques). Volumetric measurements on relative pneumothorax size were obtained and the overall performance of the automated method shows an average error of just below 1%. PMID:23082091
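    The abstract does not state how the relative pneumothorax size is defined; a common convention is the volume of air outside the lung expressed as a fraction of the affected hemithorax. The snippet below is a sketch under that assumption, taking binary voxel masks as input, and is not the published algorithm.

        import numpy as np

        def relative_pneumothorax_size(pneumo_mask, lung_mask, voxel_volume_mm3=1.0):
            """Pneumothorax volume and its fraction of the hemithorax (lung + pleural air)."""
            pneumo_vol = np.count_nonzero(pneumo_mask) * voxel_volume_mm3
            lung_vol = np.count_nonzero(lung_mask) * voxel_volume_mm3
            return pneumo_vol, pneumo_vol / (pneumo_vol + lung_vol)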

  7. Fully Automated Data Collection Using PAM and the Development of PAM/SPACE Reversible Cassettes

    Science.gov (United States)

    Hiraki, Masahiko; Watanabe, Shokei; Chavas, Leonard M. G.; Yamada, Yusuke; Matsugaki, Naohiro; Igarashi, Noriyuki; Wakatsuki, Soichi; Fujihashi, Masahiro; Miki, Kunio; Baba, Seiki; Ueno, Go; Yamamoto, Masaki; Suzuki, Mamoru; Nakagawa, Atsushi; Watanabe, Nobuhisa; Tanaka, Isao

    2010-06-01

    To remotely control and automatically collect data in high-throughput X-ray data collection experiments, the Structural Biology Research Center at the Photon Factory (PF) developed and installed sample exchange robots PAM (PF Automated Mounting system) at PF macromolecular crystallography beamlines; BL-5A, BL-17A, AR-NW12A and AR-NE3A. We developed and installed software that manages the flow of the automated X-ray experiments; sample exchanges, loop-centering and X-ray diffraction data collection. The fully automated data collection function has been available since February 2009. To identify sample cassettes, PAM employs a two-dimensional bar code reader. New beamlines, BL-1A at the Photon Factory and BL32XU at SPring-8, are currently under construction as part of Targeted Proteins Research Program (TPRP) by the Ministry of Education, Culture, Sports, Science and Technology of Japan. However, different robots, PAM and SPACE (SPring-8 Precise Automatic Cryo-sample Exchanger), will be installed at BL-1A and BL32XU, respectively. For the convenience of the users of both facilities, pins and cassettes for PAM and SPACE are developed as part of the TPRP.

  8. Enzymes as Green Catalysts for Precision Macromolecular Synthesis.

    Science.gov (United States)

    Shoda, Shin-ichiro; Uyama, Hiroshi; Kadokawa, Jun-ichi; Kimura, Shunsaku; Kobayashi, Shiro

    2016-02-24

    The present article comprehensively reviews macromolecular synthesis using enzymes as catalysts. Among the six main classes of enzymes, three classes, oxidoreductases, transferases, and hydrolases, have been employed as catalysts for in vitro macromolecular synthesis and modification reactions. Appropriate design of the reaction, including the monomer and the enzyme catalyst, produces macromolecules with precisely controlled structure, much as in vivo enzymatic reactions do. The reaction controls the product structure with respect to substrate selectivity, chemo-selectivity, regio-selectivity, stereo-selectivity, and choro-selectivity. Oxidoreductases catalyze various oxidation polymerizations of aromatic compounds as well as vinyl polymerizations. Transferases are effective catalysts for producing polysaccharides having a variety of structures, as well as polyesters. Hydrolases, which catalyze the bond cleavage of macromolecules in vivo, catalyze the reverse bond-forming reaction in vitro to give various polysaccharides and functionalized polyesters. The enzymatic polymerizations allowed the first in vitro synthesis of natural polysaccharides having complicated structures, such as cellulose, amylose, xylan, chitin, hyaluronan, and chondroitin. These polymerizations are "green" in several respects: nontoxicity of the enzyme, high catalyst efficiency, selective reactions under mild conditions using green solvents and renewable starting materials, and minimal byproducts. Thus, enzymatic polymerization is desirable for the environment and contributes to "green polymer chemistry" for maintaining a sustainable society.

  9. Atomic force microscopy applied to study macromolecular content of embedded biological material

    Energy Technology Data Exchange (ETDEWEB)

    Matsko, Nadejda B. [Electron Microscopy Centre, Institute of Applied Physics, HPM C 15.1, ETH-Hoenggerberg, CH-8093, Zurich (Switzerland)]. E-mail: matsko@iap.phys.ethz.ch

    2007-02-15

    We demonstrate that atomic force microscopy represents a powerful tool for estimating the structural preservation of biological samples embedded in epoxy resin, in terms of their macromolecular distribution and architecture. The comparison of atomic force microscopy (AFM) and transmission electron microscopy (TEM) images of a biosample (Caenorhabditis elegans) prepared following different freeze-substitution protocols (conventional OsO4 fixation, epoxy fixation) led to the conclusion that high TEM stainability of the sample results from a low macromolecular density of the cellular matrix. We propose a novel procedure aimed at obtaining AFM and TEM images of the same particular organelle, which strongly facilitates AFM image interpretation and reveals new ultrastructural aspects (mainly protein arrangement) of a biosample in addition to TEM data.

  10. Comparative Cost-Effectiveness Analysis of Three Different Automated Medication Systems Implemented in a Danish Hospital Setting.

    Science.gov (United States)

    Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan

    2018-02-01

    Automated medication systems have been found to reduce errors in the medication process, but little is known about the cost-effectiveness of such systems. The objective of this study was to perform a model-based indirect cost-effectiveness comparison of three different, real-world automated medication systems compared with current standard practice. The considered automated medication systems were a patient-specific automated medication system (psAMS), a non-patient-specific automated medication system (npsAMS), and a complex automated medication system (cAMS). The economic evaluation used original effect and cost data from prospective, controlled, before-and-after studies of medication systems implemented at a Danish hematological ward and an acute medical unit. Effectiveness was described as the proportion of clinical and procedural error opportunities that were associated with one or more errors. An error was defined as a deviation from the electronic prescription, from standard hospital policy, or from written procedures. The cost assessment was based on 6-month standardization of observed cost data. The model-based comparative cost-effectiveness analyses were conducted with system-specific assumptions of the effect size and costs in scenarios with consumptions of 15,000, 30,000, and 45,000 doses per 6-month period. With 30,000 doses the cost-effectiveness model showed that the cost-effectiveness ratio expressed as the cost per avoided clinical error was €24 for the psAMS, €26 for the npsAMS, and €386 for the cAMS. Comparison of the cost-effectiveness of the three systems in relation to different valuations of an avoided error showed that the psAMS was the most cost-effective system regardless of error type or valuation. The model-based indirect comparison against the conventional practice showed that psAMS and npsAMS were more cost-effective than the cAMS alternative, and that psAMS was more cost-effective than npsAMS.
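
    The cost-effectiveness ratios quoted above (e.g. EUR 24 per avoided clinical error for the psAMS at 30,000 doses) come from the study itself; the sketch below only shows how a ratio of that kind is computed. The input costs and error rates are hypothetical placeholders, not study data.

        # Sketch of the ratio used in the abstract:
        # cost per avoided clinical error = incremental cost / number of errors avoided.
        # The figures below are hypothetical placeholders, not study data.
        def cost_per_avoided_error(system_cost, baseline_cost,
                                   baseline_error_rate, system_error_rate, n_doses):
            errors_avoided = (baseline_error_rate - system_error_rate) * n_doses
            incremental_cost = system_cost - baseline_cost
            return incremental_cost / errors_avoided

        ratio = cost_per_avoided_error(system_cost=60_000, baseline_cost=20_000,
                                       baseline_error_rate=0.10, system_error_rate=0.04,
                                       n_doses=30_000)
        print(f"Cost per avoided error: EUR {ratio:.2f}")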

  11. Effect of macromolecular crowding on the rate of diffusion-limited ...

    Indian Academy of Sciences (India)

    The enzymatic reaction rate has been shown to be affected by the presence of such macromolecules. A simple numerical model is proposed here based on percolation and diffusion in disordered systems to study the effect of macromolecular crowding on the enzymatic reaction rates. The model qualitatively explains some ...

  12. Fault-tolerant architectures for superconducting qubits

    International Nuclear Information System (INIS)

    DiVincenzo, David P

    2009-01-01

    In this short review, I draw attention to new developments in the theory of fault tolerance in quantum computation that may give concrete direction to future work in the development of superconducting qubit systems. The basics of quantum error-correction codes, which I will briefly review, have not significantly changed since their introduction 15 years ago. But an interesting picture has emerged of an efficient use of these codes that may put fault-tolerant operation within reach. It is now understood that two-dimensional surface codes, close relatives of the original toric code of Kitaev, can be adapted as shown by Raussendorf and Harrington to effectively perform logical gate operations in a very simple planar architecture, with error thresholds for fault-tolerant operation simulated to be 0.75%. This architecture uses topological ideas in its functioning, but it is not 'topological quantum computation'-there are no non-abelian anyons in sight. I offer some speculations on the crucial pieces of superconducting hardware that could be demonstrated in the next couple of years that would be clear stepping stones towards this surface-code architecture.

  13. Fault tolerance with noisy and slow measurements and preparation.

    Science.gov (United States)

    Paz-Silva, Gerardo A; Brennen, Gavin K; Twamley, Jason

    2010-09-03

    It is not so well known that measurement-free quantum error correction protocols can be designed to achieve fault-tolerant quantum computing. Despite their potential advantages in terms of the relaxation of accuracy, speed, and addressing requirements, they have usually been overlooked since they are expected to yield a very bad threshold. We show that this is not the case. We design fault-tolerant circuits for the 9-qubit Bacon-Shor code and find an error threshold for unitary gates and preparation of p_thresh^(p,g) = 3.76 × 10^-5 (30% of the best known result for the same code using measurement) while admitting up to 1/3 error rates for measurements and allocating no constraints on measurement speed. We further show that demanding gate error rates sufficiently below the threshold pushes the preparation threshold up to p_thresh^(p) = 1/3.

  14. A Novel Multiple-Bits Collision Attack Based on Double Detection with Error-Tolerant Mechanism

    Directory of Open Access Journals (Sweden)

    Ye Yuan

    2018-01-01

    Full Text Available Side-channel collision attacks are more powerful than traditional side-channel attacks because they require neither knowledge of the leakage model nor the effort of establishing one. Most previously proposed attack strategies need large quantities of power traces, have high computational complexity and are sensitive to mistakes, which seriously restricts attack efficiency. In this paper, we propose a multiple-bits side-channel collision attack based on double distance voting detection (DDVD), together with an improved version involving an error-tolerant mechanism, which can find all 120 relations among 16 key bytes when applied to the AES (Advanced Encryption Standard) algorithm. In addition, we compare our collision detection method, called DDVD, with the Euclidean distance and correlation-enhanced collision methods under different noise intensities, which indicates that our detection technique performs better in noisy conditions. Furthermore, the 4-bit model of our collision detection method is proven to be optimal both in theory and in practice. Meanwhile, the corresponding practical attack experiments were also performed successfully on a hardware implementation of AES-128 on an FPGA board. Results show that our strategy needs less computation time but more traces than the LDPC method, and the online time for our strategy is about 90% less than CECA and 96% less than BCA with a 90% success rate.

  15. In Vitro and In Vivo Evaluation of Microparticulate Drug Delivery Systems Composed of Macromolecular Prodrugs

    Directory of Open Access Journals (Sweden)

    Yoshiharu Machida

    2008-08-01

    Full Text Available Macromolecular prodrugs are very useful systems for achieving controlled drug release and drug targeting. In particular, various macromolecule-antitumor drug conjugates enhance effectiveness and mitigate toxic side effects. Polymeric micro- and nanoparticles have also been actively examined and their in vivo behaviors elucidated, and it has been realized that their particle characteristics are very useful for controlling drug behavior. Recently, studies based on the combination of the concepts of macromolecular prodrugs and micro- or nanoparticles have been reported, although they remain limited in number. Macromolecular prodrugs enable drugs to be released at a controlled rate determined by the features of the macromolecule-drug linkage. Micro- and nanoparticles can control in vivo behavior through their size, surface charge and surface structure. Both sets of merits are expected in systems produced by combining the two concepts. In this review, several micro- or nanoparticles composed of macromolecule-drug conjugates are described in terms of their preparation, in vitro properties and/or in vivo behavior.

  16. Error-Transparent Quantum Gates for Small Logical Qubit Architectures

    Science.gov (United States)

    Kapit, Eliot

    2018-02-01

    One of the largest obstacles to building a quantum computer is gate error, where the physical evolution of the state of a qubit or group of qubits during a gate operation does not match the intended unitary transformation. Gate error stems from a combination of control errors and random single qubit errors from interaction with the environment. While great strides have been made in mitigating control errors, intrinsic qubit error remains a serious problem that limits gate fidelity in modern qubit architectures. Simultaneously, recent developments of small error-corrected logical qubit devices promise significant increases in logical state lifetime, but translating those improvements into increases in gate fidelity is a complex challenge. In this Letter, we construct protocols for gates on and between small logical qubit devices which inherit the parent device's tolerance to single qubit errors which occur at any time before or during the gate. We consider two such devices, a passive implementation of the three-qubit bit flip code, and the author's own [E. Kapit, Phys. Rev. Lett. 116, 150501 (2016), 10.1103/PhysRevLett.116.150501] very small logical qubit (VSLQ) design, and propose error-tolerant gate sets for both. The effective logical gate error rate in these models displays superlinear error reduction with linear increases in single qubit lifetime, proving that passive error correction is capable of increasing gate fidelity. Using a standard phenomenological noise model for superconducting qubits, we demonstrate a realistic, universal one- and two-qubit gate set for the VSLQ, with error rates an order of magnitude lower than those for same-duration operations on single qubits or pairs of qubits. These developments further suggest that incorporating small logical qubits into a measurement based code could substantially improve code performance.

  17. Homogenization Theory for the Prediction of Obstructed Solute Diffusivity in Macromolecular Solutions.

    Science.gov (United States)

    Donovan, Preston; Chehreghanianzabi, Yasaman; Rathinam, Muruhan; Zustiak, Silviya Petrova

    2016-01-01

    The study of diffusion in macromolecular solutions is important in many biomedical applications such as separations, drug delivery, and cell encapsulation, and key for many biological processes such as protein assembly and interstitial transport. Not surprisingly, multiple models for the a-priori prediction of diffusion in macromolecular environments have been proposed. However, most models include parameters that are not readily measurable, are specific to the polymer-solute-solvent system, or are fitted and do not have a physical meaning. Here, for the first time, we develop a homogenization theory framework for the prediction of effective solute diffusivity in macromolecular environments based on physical parameters that are easily measurable and not specific to the macromolecule-solute-solvent system. Homogenization theory is useful for situations where knowledge of fine-scale parameters is used to predict bulk system behavior. As a first approximation, we focus on a model where the solute is subjected to obstructed diffusion via stationary spherical obstacles. We find that the homogenization theory results agree well with computationally more expensive Monte Carlo simulations. Moreover, the homogenization theory agrees with effective diffusivities of a solute in dilute and semi-dilute polymer solutions measured using fluorescence correlation spectroscopy. Lastly, we provide a mathematical formula for the effective diffusivity in terms of a non-dimensional and easily measurable geometric system parameter.
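
    The closed-form expression derived in the paper is not reproduced in this record. As a point of reference only, the classical Maxwell estimate for obstructed diffusion around impermeable spherical obstacles at volume fraction phi has the same flavour of a single geometric parameter; it is an illustrative benchmark, not the authors' formula.

        % Classical Maxwell estimate for impermeable spherical obstacles
        % (illustrative benchmark only; not the formula derived in the paper).
        \[
          \frac{D_{\mathrm{eff}}}{D_0} \;=\; \frac{2\,(1-\phi)}{2+\phi},
          \qquad 0 \le \phi < 1 .
        \]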

  18. Homogenization Theory for the Prediction of Obstructed Solute Diffusivity in Macromolecular Solutions.

    Directory of Open Access Journals (Sweden)

    Preston Donovan

    Full Text Available The study of diffusion in macromolecular solutions is important in many biomedical applications such as separations, drug delivery, and cell encapsulation, and key for many biological processes such as protein assembly and interstitial transport. Not surprisingly, multiple models for the a-priori prediction of diffusion in macromolecular environments have been proposed. However, most models include parameters that are not readily measurable, are specific to the polymer-solute-solvent system, or are fitted and do not have a physical meaning. Here, for the first time, we develop a homogenization theory framework for the prediction of effective solute diffusivity in macromolecular environments based on physical parameters that are easily measurable and not specific to the macromolecule-solute-solvent system. Homogenization theory is useful for situations where knowledge of fine-scale parameters is used to predict bulk system behavior. As a first approximation, we focus on a model where the solute is subjected to obstructed diffusion via stationary spherical obstacles. We find that the homogenization theory results agree well with computationally more expensive Monte Carlo simulations. Moreover, the homogenization theory agrees with effective diffusivities of a solute in dilute and semi-dilute polymer solutions measured using fluorescence correlation spectroscopy. Lastly, we provide a mathematical formula for the effective diffusivity in terms of a non-dimensional and easily measurable geometric system parameter.

  19. Judson_Mansouri_Automated_Chemical_Curation_QSAREnvRes_Data

    Data.gov (United States)

    U.S. Environmental Protection Agency — Here we describe the development of an automated KNIME workflow to curate and correct errors in the structure and identity of chemicals using the publicly...

  20. Error-finding and error-correcting methods for the start-up of the SLC

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.

    1987-02-01

    During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we will limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors that affect the profile and trajectory of the beam, respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicists' time we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist but in a more systematic fashion. The methods used in these procedures and some of the recent applications will be described in this paper.

  1. Development of a framework of human-centered automation for the nuclear industry

    International Nuclear Information System (INIS)

    Nelson, W.R.; Haney, L.N.

    1993-01-01

    Introduction of automated systems into control rooms for advanced reactor designs is often justified on the basis of increased efficiency and reliability, without a detailed assessment of how the new technologies will influence the role of the operator. Such a 'technology-centered' approach carries with it the risk that entirely new mechanisms for human error will be introduced, resulting in some unpleasant surprises when the plant goes into operation. The aviation industry has experienced some of these surprises since the introduction of automated systems into the cockpits of advanced technology aircraft. Pilot errors have actually been induced by automated systems, especially when the pilot doesn't fully understand what the automated systems are doing during all modes of operation. In order to structure the research program for investigating these problems, the National Aeronautics and Space Administration (NASA) has developed a framework for human-centered automation. This framework is described in the NASA document Human-Centered Aircraft Automation Philosophy by Charles Billings. It is the thesis of this paper that a corresponding framework of human-centered automation should be developed for the nuclear industry. Such a framework would serve to guide the design and regulation of automated systems for advanced reactor designs, and would help prevent some of the problems that have arisen in other applications that have followed a 'technology-centered' approach.

  2. SIFT - Design and analysis of a fault-tolerant computer for aircraft control. [Software Implemented Fault Tolerant systems

    Science.gov (United States)

    Wensley, J. H.; Lamport, L.; Goldberg, J.; Green, M. W.; Levitt, K. N.; Melliar-Smith, P. M.; Shostak, R. E.; Weinstock, C. B.

    1978-01-01

    SIFT (Software Implemented Fault Tolerance) is an ultrareliable computer for critical aircraft control applications that achieves fault tolerance by the replication of tasks among processing units. The main processing units are off-the-shelf minicomputers, with standard microcomputers serving as the interface to the I/O system. Fault isolation is achieved by using a specially designed redundant bus system to interconnect the processing units. Error detection and analysis and system reconfiguration are performed by software. Iterative tasks are redundantly executed, and the results of each iteration are voted upon before being used. Thus, any single failure in a processing unit or bus can be tolerated with triplication of tasks, and subsequent failures can be tolerated after reconfiguration. Independent execution by separate processors means that the processors need only be loosely synchronized, and a novel fault-tolerant synchronization method is described.
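
    SIFT masks a single processor fault by triplicating each iterative task and voting on the results before they are used. The sketch below is an illustrative majority voter in Python, not the SIFT implementation (which ran on replicated minicomputers over a redundant bus).

        # Illustrative 3-way majority voter of the kind SIFT applies to each iteration.
        from collections import Counter

        def vote(replica_results):
            """Return the majority value; raise if no value reaches a majority."""
            value, count = Counter(replica_results).most_common(1)[0]
            if count * 2 <= len(replica_results):
                raise RuntimeError("no majority - reconfiguration required")
            return value

        # One processor returns a corrupted value; the vote masks the fault.
        print(vote([42.0, 42.0, 13.7]))   # -> 42.0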

  3. Automated main-chain model building by template matching and iterative fragment extension.

    Science.gov (United States)

    Terwilliger, Thomas C

    2003-01-01

    An algorithm for the automated macromolecular model building of polypeptide backbones is described. The procedure is hierarchical. In the initial stages, many overlapping polypeptide fragments are built. In subsequent stages, the fragments are extended and then connected. Identification of the locations of helical and beta-strand regions is carried out by FFT-based template matching. Fragment libraries of helices and beta-strands from refined protein structures are then positioned at the potential locations of helices and strands and the longest segments that fit the electron-density map are chosen. The helices and strands are then extended using fragment libraries consisting of sequences three amino acids long derived from refined protein structures. The resulting segments of polypeptide chain are then connected by choosing those which overlap at two or more Cα positions. The fully automated procedure has been implemented in RESOLVE and is capable of model building at resolutions as low as 3.5 Å. The algorithm is useful for building a preliminary main-chain model that can serve as a basis for refinement and side-chain addition.
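
    The key step, FFT-based template matching, amounts to cross-correlating the map with a helix or strand template and keeping the highest peaks. The one-dimensional numpy sketch below illustrates that idea only; the real procedure in RESOLVE works on three-dimensional electron-density maps with rotational searches.

        # 1-D sketch of FFT-based template matching (cross-correlation via FFT).
        import numpy as np

        def fft_match(density, template):
            """Cross-correlate a density trace with a template; return peak position."""
            n = len(density)
            t = np.zeros(n)
            t[:len(template)] = template
            # Correlation theorem: corr = IFFT( FFT(density) * conj(FFT(template)) )
            corr = np.fft.ifft(np.fft.fft(density) * np.conj(np.fft.fft(t))).real
            return int(np.argmax(corr)), corr

        rng = np.random.default_rng(0)
        template = np.array([0.2, 1.0, 0.2])
        density = rng.normal(0, 0.1, 128)
        density[40:43] += template          # bury the motif at position 40
        best, _ = fft_match(density, template)
        print("best match at index", best)  # -> 40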

  4. PRIGo: a new multi-axis goniometer for macromolecular crystallography

    Energy Technology Data Exchange (ETDEWEB)

    Waltersperger, Sandro; Olieric, Vincent, E-mail: vincent.olieric@psi.ch; Pradervand, Claude [Paul Scherrer Institute, Villigen PSI (Switzerland); Glettig, Wayne [Centre Suisse d’Electronique et Microtechnique SA, Neuchâtel 2002 (Switzerland); Salathe, Marco; Fuchs, Martin R.; Curtin, Adrian; Wang, Xiaoqiang; Ebner, Simon; Panepucci, Ezequiel; Weinert, Tobias [Paul Scherrer Institute, Villigen PSI (Switzerland); Schulze-Briese, Clemens [Dectris Ltd, Baden 5400 (Switzerland); Wang, Meitian, E-mail: vincent.olieric@psi.ch [Paul Scherrer Institute, Villigen PSI (Switzerland)

    2015-05-09

    The design and performance of the new multi-axis goniometer PRIGo developed at the Swiss Light Source at Paul Scherrer Institute is described. The Parallel Robotics Inspired Goniometer (PRIGo) is a novel compact and high-precision goniometer providing an alternative to (mini-)kappa, traditional three-circle goniometers and Eulerian cradles used for sample reorientation in macromolecular crystallography. Based on a combination of serial and parallel kinematics, PRIGo emulates an arc. It is mounted on an air-bearing stage for rotation around ω and consists of four linear positioners working synchronously to achieve x, y, z translations and χ rotation (0–90°), followed by a ϕ stage (0–360°) for rotation around the sample holder axis. Owing to the use of piezo linear positioners and active correction, PRIGo features spheres of confusion of <1 µm, <7 µm and <10 µm for ω, χ and ϕ, respectively, and is therefore very well suited for micro-crystallography. PRIGo enables optimal strategies for both native and experimental phasing crystallographic data collection. Herein, PRIGo hardware and software, its calibration, as well as applications in macromolecular crystallography are described.

  5. Dendrimer-based Macromolecular MRI Contrast Agents: Characteristics and Application

    Directory of Open Access Journals (Sweden)

    Hisataka Kobayashi

    2003-01-01

    Full Text Available Numerous macromolecular MRI contrast agents prepared employing relatively simple chemistry may be readily available that can provide sufficient enhancement for multiple applications. These agents operate using a ~100-fold lower concentration of gadolinium ions in comparison to the necessary concentration of iodine employed in CT imaging. Herein, we describe some of the general potential directions of macromolecular MRI contrast agents using our recently reported families of dendrimer-based agents as examples. Changes in molecular size altered the route of excretion. Smaller-sized contrast agents less than 60 kDa molecular weight were excreted through the kidney resulting in these agents being potentially suitable as functional renal contrast agents. Hydrophilic and larger-sized contrast agents were found better suited for use as blood pool contrast agents. Hydrophobic variants formed with polypropylenimine diaminobutane dendrimer cores created liver contrast agents. Larger hydrophilic agents are useful for lymphatic imaging. Finally, contrast agents conjugated with either monoclonal antibodies or with avidin are able to function as tumor-specific contrast agents, which also might be employed as therapeutic drugs for either gadolinium neutron capture therapy or in conjunction with radioimmunotherapy.

  6. Progress in rational methods of cryoprotection in macromolecular crystallography

    International Nuclear Information System (INIS)

    Alcorn, Thomas; Juers, Douglas H.

    2010-01-01

    Measurements of the average thermal contractions (294→72 K) of 26 different cryosolutions are presented and discussed in conjunction with other recent advances in the rational design of protocols for cryogenic cooling in macromolecular crystallography. Cryogenic cooling of macromolecular crystals is commonly used for X-ray data collection both to reduce crystal damage from radiation and to gather functional information by cryogenically trapping intermediates. However, the cooling process can damage the crystals. Limiting cooling-induced crystal damage often requires cryoprotection strategies, which can involve substantial screening of solution conditions and cooling protocols. Here, recent developments directed towards rational methods for cryoprotection are described. Crystal damage is described in the context of the temperature response of the crystal as a thermodynamic system. As such, the internal and external parts of the crystal typically have different cryoprotection requirements. A key physical parameter, the thermal contraction, of 26 different cryoprotective solutions was measured between 294 and 72 K. The range of contractions was 2–13%, with the more polar cryosolutions contracting less. The potential uses of these results in the development of cryocooling conditions, as well as recent developments in determining minimum cryosolution soaking times, are discussed

  7. Automated Error Detection in Physiotherapy Training.

    Science.gov (United States)

    Jovanović, Marko; Seiffarth, Johannes; Kutafina, Ekaterina; Jonas, Stephan M

    2018-01-01

    Manual skills teaching, such as physiotherapy education, requires immediate teacher feedback for the students during the learning process, which to date can only be provided by expert trainers. We propose a machine-learning system, trained only on correct performances, that classifies and scores performed movements, identifies sources of error in the movement and gives feedback to the learner. We acquire IMU and sEMG sensor data from a commercial-grade wearable device and construct an HMM-based model for gesture classification, scoring and feedback. We evaluate the model on publicly available and self-generated data of exemplary movement-pattern executions. The model achieves an overall accuracy of 90.71% on the public dataset and 98.9% on our dataset. An AUC of 0.99 for the ROC of the scoring method could be achieved to discriminate between correct and untrained incorrect executions. The proposed system demonstrated its suitability for scoring and feedback in manual skills training.
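
    Because the model is trained only on correct executions, a new execution can be flagged by its log-likelihood under the trained HMM. The sketch below assumes the hmmlearn package; the feature dimensions, threshold and random data are placeholders, not the study's configuration.

        # Sketch: score a movement against an HMM trained only on correct executions.
        # Dimensions, threshold and data are placeholders (assumes hmmlearn is installed).
        import numpy as np
        from hmmlearn import hmm

        rng = np.random.default_rng(1)
        correct = [rng.normal(0, 1, size=(100, 6)) for _ in range(20)]  # IMU+sEMG features
        X = np.vstack(correct)
        lengths = [len(c) for c in correct]

        model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)

        def looks_correct(execution, threshold=-10.0):
            """Flag an execution as incorrect if its per-frame log-likelihood is too low."""
            return model.score(execution) / len(execution) > threshold

        print(looks_correct(rng.normal(0, 1, size=(100, 6))))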

  8. Automated Detection of Clinically Significant Prostate Cancer in mp-MRI Images Based on an End-to-End Deep Neural Network.

    Science.gov (United States)

    Wang, Zhiwei; Liu, Chaoyue; Cheng, Danpeng; Wang, Liang; Yang, Xin; Cheng, Kwang-Ting

    2018-05-01

    Automated methods for detecting clinically significant (CS) prostate cancer (PCa) in multi-parameter magnetic resonance images (mp-MRI) are of high demand. Existing methods typically employ several separate steps, each of which is optimized individually without considering the error tolerance of other steps. As a result, they could either involve unnecessary computational cost or suffer from errors accumulated over steps. In this paper, we present an automated CS PCa detection system, where all steps are optimized jointly in an end-to-end trainable deep neural network. The proposed neural network consists of concatenated subnets: 1) a novel tissue deformation network (TDN) for automated prostate detection and multimodal registration and 2) a dual-path convolutional neural network (CNN) for CS PCa detection. Three types of loss functions, i.e., classification loss, inconsistency loss, and overlap loss, are employed for optimizing all parameters of the proposed TDN and CNN. In the training phase, the two nets mutually affect each other and effectively guide registration and extraction of representative CS PCa-relevant features to achieve results with sufficient accuracy. The entire network is trained in a weakly supervised manner by providing only image-level annotations (i.e., presence/absence of PCa) without exact priors of lesions' locations. Compared with most existing systems which require supervised labels, e.g., manual delineation of PCa lesions, it is much more convenient for clinical usage. Comprehensive evaluation based on fivefold cross validation using 360 patient data demonstrates that our system achieves a high accuracy for CS PCa detection, i.e., a sensitivity of 0.6374 and 0.8978 at 0.1 and 1 false positives per normal/benign patient.

  9. Alignment and focusing tolerance influences on optical performance

    International Nuclear Information System (INIS)

    Cross, E.W.

    1982-01-01

    Alignment errors among components of an optical system may substantially degrade the image quality. Focus errors also affect system performance. The potential for serious degradation of image quality is substantial and requires that the tolerances for these errors receive significant attention early in system design. The image quality and reconnaissance performance of an all-reflecting Cassegrain is compared to an all-refractive optical system under conditions of zero and anticipated real world misalignments

  10. Collagen macromolecular drug delivery systems

    International Nuclear Information System (INIS)

    Gilbert, D.L.

    1988-01-01

    The objective of this study was to examine collagen for use as a macromolecular drug delivery system by determining the mechanism of release through a matrix. Collagen membranes varying in porosity, crosslinking density, structure and crosslinker were fabricated. Collagen characterized by infrared spectroscopy and solution viscosity was determined to be pure and native. The collagen membranes were determined to possess native vs. non-native quaternary structure and porous vs. dense aggregate membranes by electron microscopy. Collagen monolithic devices containing a model macromolecule (inulin) were fabricated. In vitro release rates were found to be linear with respect to t^1/2 and were affected by crosslinking density, crosslinker and structure. The biodegradation of the collagen matrix was also examined. In vivo biocompatibility, degradation and 14C-inulin release rates were evaluated subcutaneously in rats.
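
    Release that is linear in t^1/2 is the signature of matrix-controlled (Higuchi-type) diffusion, and the release constant can be estimated by a straight-line fit against the square root of time. The data points below are invented purely to illustrate the fit, not measurements from this study.

        # Fit cumulative release Q(t) = k * sqrt(t); data points are illustrative only.
        import numpy as np

        t = np.array([1.0, 4.0, 9.0, 16.0, 25.0])   # hours
        Q = np.array([2.1, 4.0, 6.2, 7.9, 10.1])    # % of inulin released
        k, intercept = np.polyfit(np.sqrt(t), Q, 1)
        print(f"release constant k = {k:.2f} % per sqrt(hour), intercept = {intercept:.2f}")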

  11. Accurate macromolecular crystallographic refinement: incorporation of the linear scaling, semiempirical quantum-mechanics program DivCon into the PHENIX refinement package

    Energy Technology Data Exchange (ETDEWEB)

    Borbulevych, Oleg Y.; Plumley, Joshua A.; Martin, Roger I. [QuantumBio Inc., 2790 West College Avenue, State College, PA 16801 (United States); Merz, Kenneth M. Jr [University of Florida, Gainesville, Florida (United States); Westerhoff, Lance M., E-mail: lance@quantumbioinc.com [QuantumBio Inc., 2790 West College Avenue, State College, PA 16801 (United States)

    2014-05-01

    Semiempirical quantum-chemical X-ray macromolecular refinement using the program DivCon integrated with PHENIX is described. Macromolecular crystallographic refinement relies on sometimes dubious stereochemical restraints and rudimentary energy functionals to ensure the correct geometry of the model of the macromolecule and any covalently bound ligand(s). The ligand stereochemical restraint file (CIF) requires a priori understanding of the ligand geometry within the active site, and creation of the CIF is often an error-prone process owing to the great variety of potential ligand chemistry and structure. Stereochemical restraints have been replaced with more robust functionals through the integration of the linear-scaling, semiempirical quantum-mechanics (SE-QM) program DivCon with the PHENIX X-ray refinement engine. The PHENIX/DivCon package has been thoroughly validated on a population of 50 protein–ligand Protein Data Bank (PDB) structures with a range of resolutions and chemistry. The PDB structures used for the validation were originally refined utilizing various refinement packages and were published within the past five years. PHENIX/DivCon does not utilize CIF(s), link restraints and other parameters for refinement and hence it does not make as many a priori assumptions about the model. Across the entire population, the method results in reasonable ligand geometries and low ligand strains, even when the original refinement exhibited difficulties, indicating that PHENIX/DivCon is applicable to both single-structure and high-throughput crystallography.

  12. Non-binary unitary error bases and quantum codes

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E.

    1996-06-01

    Error operator bases for systems of any dimension are defined and natural generalizations of the bit-flip/sign-change error basis for qubits are given. These bases allow generalizing the construction of quantum codes based on eigenspaces of Abelian groups. As a consequence, quantum codes can be constructed from linear codes over Z_n for any n. The generalization of the punctured code construction leads to many codes which permit transversal (i.e. fault-tolerant) implementations of certain operations compatible with the error basis.
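
    One standard way to write the generalization of the bit-flip/sign-change basis to dimension n is the shift-and-clock (generalized Pauli) construction, sketched below for reference.

        % Generalized Pauli (shift/clock) unitary error basis in dimension n.
        \[
          X\,|j\rangle = |\,j+1 \bmod n\,\rangle, \qquad
          Z\,|j\rangle = \omega^{j}\,|j\rangle, \qquad \omega = e^{2\pi i/n},
        \]
        \[
          \{\, E_{a,b} = X^{a} Z^{b} \;:\; a,b \in \mathbb{Z}_n \,\}
          \quad\text{is a unitary error basis of } n^{2} \text{ operators.}
        \]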

  13. Automated Internal Revenue Processing System: A Panacea For ...

    African Journals Online (AJOL)

    Automated Internal Revenue Processing System: A Panacea For Financial ... for the collection and management of internal revenue which is the financial ... them, computational errors, high level of redundancy and inconsistencies in record, ...

  14. Automation bias and verification complexity: a systematic review.

    Science.gov (United States)

    Lyell, David; Coiera, Enrico

    2017-03-01

    While potentially reducing decision errors, decision support systems can introduce new types of errors. Automation bias (AB) happens when users become overreliant on decision support, which reduces vigilance in information seeking and processing. Most research originates from the human factors literature, where the prevailing view is that AB occurs only in multitasking environments. This review seeks to compare the human factors and health care literature, focusing on the apparent association of AB with multitasking and task complexity. EMBASE, Medline, Compendex, Inspec, IEEE Xplore, Scopus, Web of Science, PsycINFO, and Business Source Premiere from 1983 to 2015. Evaluation studies where task execution was assisted by automation and resulted in errors were included. Participants needed to be able to verify automation correctness and perform the task manually. Tasks were identified and grouped. Task and automation type and presence of multitasking were noted. Each task was rated for its verification complexity. Of 890 papers identified, 40 met the inclusion criteria; 6 were in health care. Contrary to the prevailing human factors view, AB was found in single tasks, typically involving diagnosis rather than monitoring, and with high verification complexity. The literature is fragmented, with large discrepancies in how AB is reported. Few studies reported the statistical significance of AB compared to a control condition. AB appears to be associated with the degree of cognitive load experienced in decision tasks, and appears to not be uniquely associated with multitasking. Strategies to minimize AB might focus on cognitive load reduction. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  15. Thermodynamics of Macromolecular Association in Heterogeneous Crowding Environments: Theoretical and Simulation Studies with a Simplified Model.

    Science.gov (United States)

    Ando, Tadashi; Yu, Isseki; Feig, Michael; Sugita, Yuji

    2016-11-23

    The cytoplasm of a cell is crowded with many different kinds of macromolecules. The macromolecular crowding affects the thermodynamics and kinetics of biological reactions in a living cell, such as protein folding, association, and diffusion. Theoretical and simulation studies using simplified models focus on the essential features of the crowding effects and provide a basis for analyzing experimental data. In most of the previous studies on the crowding effects, a uniform crowder size is assumed, which is in contrast to the inhomogeneous size distribution of macromolecules in a living cell. Here, we evaluate the free energy changes upon macromolecular association in a cell-like inhomogeneous crowding system via a theory of hard-sphere fluids and free energy calculations using Brownian dynamics trajectories. The inhomogeneous crowding model based on 41 different types of macromolecules represented by spheres with different radii mimics the physiological concentrations of macromolecules in the cytoplasm of Mycoplasma genitalium. The free energy changes of macromolecular association evaluated by the theory and simulations were in good agreement with each other. The crowder size distribution affects both specific and nonspecific molecular associations, suggesting that not only the volume fraction but also the size distribution of macromolecules are important factors for evaluating in vivo crowding effects. This study relates in vitro experiments on macromolecular crowding to in vivo crowding effects by using the theory of hard-sphere fluids with crowder-size heterogeneity.

  16. Optimizing human-system interface automation design based on a skill-rule-knowledge framework

    International Nuclear Information System (INIS)

    Lin, Chiuhsiang Joe; Yenn, T.-C.; Yang, C.-W.

    2010-01-01

    This study considers the technological change that has occurred in complex systems within the past 30 years. The role of human operators in controlling and interacting with complex systems following the technological change was also investigated. Modernization of instrumentation and control systems and components leads to a new issue of human-automation interaction, in which human operational performance must be considered in automated systems. The human-automation interaction can differ in its types and levels. A system design issue is usually realized: given these technical capabilities, which system functions should be automated and to what extent? A good automation design can be achieved by making an appropriate human-automation function allocation. To our knowledge, only a few studies have been published on how to achieve appropriate automation design with a systematic procedure. Further, there is a surprising lack of information on examining and validating the influences of levels of automation (LOAs) on instrumentation and control systems in the advanced control room (ACR). The study we present in this paper proposed a systematic framework to help in making an appropriate decision towards types of automation (TOA) and LOAs based on a 'Skill-Rule-Knowledge' (SRK) model. From the evaluating results, it was shown that the use of either automatic mode or semiautomatic mode is insufficient to prevent human errors. For preventing the occurrences of human errors and ensuring the safety in ACR, the proposed framework can be valuable for making decisions in human-automation allocation.

  17. Variable effects of soman on macromolecular secretion by ferret trachea

    International Nuclear Information System (INIS)

    McBride, R.K.; Zwierzynski, D.J.; Stone, K.K.; Culp, D.J.; Marin, M.G.

    1991-01-01

    The purpose of this study was to examine the effect of the anticholinesterase agent, soman, on macromolecular secretion by ferret trachea, in vitro. We mounted pieces of ferret trachea in Ussing-type chambers. Secreted sulfated macromolecules were radiolabeled by adding 500 microCi of 35SO4 to the submucosal medium and incubating for 17 hr. Soman added to the submucosal side produced a concentration-dependent increase in radiolabeled macromolecular release with a maximal secretory response (mean +/- SD) of 202 +/- 125% (n = 8) relative to the basal secretion rate at a concentration of 10^-7 M. The addition of either 10^-6 M pralidoxime (acetylcholinesterase reactivator) or 10^-6 M atropine blocked the response to 10^-7 M soman. At soman concentrations greater than 10^-7 M, secretion rate decreased and was not significantly different from basal secretion. Additional experiments utilizing acetylcholine and the acetylcholinesterase inhibitor, physostigmine, suggest that inhibition of secretion by high concentrations of soman may be due to a secondary antagonistic effect of soman on muscarinic receptors.

  18. Data Management System at the Photon Factory Macromolecular Crystallography Beamline

    International Nuclear Information System (INIS)

    Yamada, Y; Matsugaki, N; Chavas, L M G; Hiraki, M; Igarashi, N; Wakatsuki, S

    2013-01-01

    Macromolecular crystallography is a very powerful tool to investigate three-dimensional structures of macromolecules at the atomic level, and is widely spread among structural biology researchers. Due to recent upgrades of the macromolecular crystallography beamlines at the Photon Factory, beamline throughput has improved, allowing more experiments to be conducted during a user's beam time. Although the number of beamlines has increased, so has the number of beam time applications. Consequently, both the experimental data from users' experiments and data derived from beamline operations have dramatically increased, causing difficulties in organizing these diverse and large amounts of data for the beamline operation staff and users. To overcome this problem, we have developed a data management system by introducing commercial middleware, which consists of a controller, database, and web servers. We have prepared several database projects using this system. Each project is dedicated to a certain aspect such as experimental results, beam time applications, beam time schedule, or beamline operation reports. Then we designed a scheme to link all the database projects.

  19. Prevalence of discordant microscopic changes with automated CBC analysis

    Directory of Open Access Journals (Sweden)

    Fabiano de Jesus Santos

    2014-12-01

    Full Text Available Introduction: The most common cause of diagnostic error is related to errors in laboratory tests as well as errors in the interpretation of results. In order to reduce them, the laboratory currently has modern equipment which provides accurate and reliable results. The development of automation has revolutionized laboratory procedures in Brazil and worldwide. Objective: To determine the prevalence of microscopic changes present in blood slides that are concordant and discordant with results obtained using fully automated procedures. Materials and method: From January to July 2013, 1,000 slides with hematological parameters were analyzed. Automated analysis was performed on latest-generation equipment, whose methodology is based on electrical impedance and which is able to quantify all the figurative elements of the blood across 22 parameters. Microscopy was performed simultaneously by two expert microscopists. Results: The data showed that only 42.70% of results were concordant, compared with 57.30% discordant. The main discordant findings were: changes in red blood cells, 43.70% (n = 250); white blood cells, 38.46% (n = 220); and platelet counts, 17.80% (n = 102). Discussion: The data show that some results are not consistent with the clinical or physiological state of an individual and cannot be explained because they have not been investigated, which may compromise the final diagnosis. Conclusion: Qualitative microscopic analysis should be performed in parallel with automated analysis in order to obtain reliable results, with a positive impact on prevention, diagnosis, prognosis, and therapeutic follow-up.

  20. Big data in cryoEM: automated collection, processing and accessibility of EM data.

    Science.gov (United States)

    Baldwin, Philip R; Tan, Yong Zi; Eng, Edward T; Rice, William J; Noble, Alex J; Negro, Carl J; Cianfrocco, Michael A; Potter, Clinton S; Carragher, Bridget

    2018-06-01

    The scope and complexity of cryogenic electron microscopy (cryoEM) data has greatly increased, and will continue to do so, due to recent and ongoing technical breakthroughs that have led to much improved resolutions for macromolecular structures solved using this method. This big data explosion includes single particle data as well as tomographic tilt series, both generally acquired as direct detector movies of ∼10-100 frames per image or per tilt-series. We provide a brief survey of the developments leading to the current status, and describe existing cryoEM pipelines, with an emphasis on the scope of data acquisition, methods for automation, and use of cloud storage and computing. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. SU-D-201-01: A Multi-Institutional Study Quantifying the Impact of Simulated Linear Accelerator VMAT Errors for Nasopharynx

    International Nuclear Information System (INIS)

    Pogson, E; Hansen, C; Blake, S; Thwaites, D; Arumugam, S; Juresic, J; Ochoa, C; Yakobi, J; Haman, A; Trtovac, A; Holloway, L

    2016-01-01

    Purpose: To quantify the impact of differing magnitudes of simulated linear accelerator errors on the dose to the target volume and organs at risk for nasopharynx VMAT. Methods: Ten nasopharynx cancer patients were retrospectively replanned twice with one full-arc VMAT by two institutions. Treatment uncertainties (gantry angle and collimator in degrees, MLC field size and MLC shifts in mm) were introduced into these plans at increments of 5, 2, 1, −1, −2 and −5. This was completed using an in-house Python script within Pinnacle3 and analysed using 3DVH and MatLab. The mean and maximum dose were calculated for the Planning Target Volume (PTV1), parotids, brainstem, and spinal cord and then compared to the original baseline plan. The D1cc was also calculated for the spinal cord and brainstem. Patient-averaged results were compared across institutions. Results: Introduced gantry angle errors had the smallest effect on dose; no tolerances were exceeded for one institution, and the second institution's VMAT plan tolerances were only exceeded for gantry angles of ±5°, affecting the parotids on different sides by 14–18%. PTV1, brainstem and spinal cord tolerances were exceeded for collimator angles of ±5 degrees and for MLC shifts and MLC field sizes of ±1 and beyond at the first institution. At the second institution, sensitivity to errors was marginally higher for some errors, including the collimator error producing doses exceeding tolerances above ±2 degrees, and marginally lower for others, with tolerances exceeded above MLC shifts of ±2. The largest differences occur with MLC field sizes, with both institutions reporting exceeded tolerances for all introduced errors (±1 and beyond). Conclusion: The robustness of VMAT nasopharynx plans has been demonstrated. Gantry errors have the least impact on patient doses; however, MLC field-size errors exceed tolerances even at relatively low introduced magnitudes and also produce the largest errors. This was consistent across both departments. The authors
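
    The systematic perturbations were introduced with an in-house Python script inside Pinnacle3; that script is not reproduced here. The sketch below only illustrates the general idea of generating perturbed plan variants from a nominal parameter set, and all names and values are hypothetical.

        # Illustrative generation of perturbed plan variants (not the in-house script).
        from copy import deepcopy

        nominal_plan = {"gantry_deg": 181.0, "collimator_deg": 30.0,
                        "mlc_shift_mm": 0.0, "mlc_field_size_mm": 0.0}
        increments = [5, 2, 1, -1, -2, -5]

        def perturbed_variants(plan, parameter, deltas):
            """Return one copy of the plan per delta, with a single parameter offset."""
            variants = []
            for d in deltas:
                p = deepcopy(plan)
                p[parameter] = plan[parameter] + d
                p["label"] = f"{parameter}{d:+}"
                variants.append(p)
            return variants

        for v in perturbed_variants(nominal_plan, "mlc_shift_mm", increments):
            print(v["label"], v["mlc_shift_mm"])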

  2. SU-D-201-01: A Multi-Institutional Study Quantifying the Impact of Simulated Linear Accelerator VMAT Errors for Nasopharynx

    Energy Technology Data Exchange (ETDEWEB)

    Pogson, E [Institute of Medical Physics, The University of Sydney, Sydney, NSW (Australia); Liverpool and Macarthur Cancer Therapy Centres, Liverpool, NSW (Australia); Ingham Institute for Applied Medical Research, Sydney, NSW (Australia); Hansen, C [Laboratory of Radiation Physics, Odense University Hospital, Odense (Denmark); Institute of Clinical Research, University of Southern Denmark, Odense (Denmark); Blake, S; Thwaites, D [Institute of Medical Physics, The University of Sydney, Sydney, NSW (Australia); Arumugam, S; Juresic, J; Ochoa, C; Yakobi, J; Haman, A; Trtovac, A [Liverpool and Macarthur Cancer Therapy Centres, Liverpool, NSW (Australia); Holloway, L [Institute of Medical Physics, The University of Sydney, Sydney, NSW (Australia); Liverpool and Macarthur Cancer Therapy Centres, Liverpool, NSW (Australia); Ingham Institute for Applied Medical Research, Sydney, NSW (Australia); South Western Sydney Clinical School, University of New South Wales, Sydney, NSW (Australia); University of Wollongong, Wollongong, NSW (Australia)

    2016-06-15

    Purpose: To quantify the impact of differing magnitudes of simulated linear accelerator errors on the dose to the target volume and organs at risk for nasopharynx VMAT. Methods: Ten nasopharynx cancer patients were retrospectively replanned twice with one full-arc VMAT by two institutions. Treatment uncertainties (gantry angle and collimator in degrees, MLC field size and MLC shifts in mm) were introduced into these plans at increments of 5, 2, 1, −1, −2 and −5. This was completed using an in-house Python script within Pinnacle3 and analysed using 3DVH and MatLab. The mean and maximum dose were calculated for the Planning Target Volume (PTV1), parotids, brainstem, and spinal cord and then compared to the original baseline plan. The D1cc was also calculated for the spinal cord and brainstem. Patient-averaged results were compared across institutions. Results: Introduced gantry angle errors had the smallest effect on dose; no tolerances were exceeded for one institution, and the second institution's VMAT plan tolerances were only exceeded for gantry angles of ±5°, affecting the parotids on different sides by 14–18%. PTV1, brainstem and spinal cord tolerances were exceeded for collimator angles of ±5 degrees and for MLC shifts and MLC field sizes of ±1 and beyond at the first institution. At the second institution, sensitivity to errors was marginally higher for some errors, including the collimator error producing doses exceeding tolerances above ±2 degrees, and marginally lower for others, with tolerances exceeded above MLC shifts of ±2. The largest differences occur with MLC field sizes, with both institutions reporting exceeded tolerances for all introduced errors (±1 and beyond). Conclusion: The robustness of VMAT nasopharynx plans has been demonstrated. Gantry errors have the least impact on patient doses; however, MLC field-size errors exceed tolerances even at relatively low introduced magnitudes and also produce the largest errors. This was consistent across both departments. The authors

  3. Automation of TL brick dating by ADAM-1

    International Nuclear Information System (INIS)

    Cechak, T.; Gerndt, J.; Hirsl, P.; Jirousek, P.; Kubelik, M.; Musilek, L.; Kanaval, J.

    2000-01-01

    Thermoluminescence has become an established dating method for ceramics and more recently for bricks. Based on the experience of work carried out since the late 1970s at the Rathgen-Forschungslabor in Berlin on the dating of bricks from historic architecture, and after evaluating all commercially available and some individually built automated and semi-automated TL-readers, a specially adapted machine for the fine-grain dating of bricks was constructed in an interdisciplinary research project undertaken by a team recruited from three faculties of the Czech Technical University in Prague. The result is the automated TL-reader ADAM-1 (Automated Dating Apparatus for Monuments) for the dating of historic architecture. Both the specific adaptation of the technique and the necessary optimal automation have influenced the design of this TL-reader. The principal advantage of brick as opposed to ceramic TL-dating emerges from the possibility of obtaining both a large number of samples and an above-average quantity of datable material from each sample. This, together with the specific physical and chemical conditions in a brick wall, allowed a rethinking of the traditional error calculation and thus lower error margins than those obtained when dating ceramic shards. The TL-reader must therefore be able to measure and evaluate numerous samples automatically. The annular sample holder of ADAM-1 has 60 sample positions, which allow the irradiation and evaluation of samples taken from two locations. The thirty samples from one sampling point are divided into subgroups, which are processed in various ways. Three samples serve for a rough estimate of the TL sensitivity of the brick material. Nine samples are used for the measurement of the 'natural TL' of the material. A further nine samples are used for testing the sensitivity of the material to beta radiation. The last nine samples serve for testing the sensitivity to alpha radiation. To determine the

  4. A Review Of Fault Tolerant Scheduling In Multicore Systems

    Directory of Open Access Journals (Sweden)

    Shefali Malhotra

    2015-05-01

    Full Text Available Abstract In this paper we discuss various fault-tolerant task scheduling algorithms for multi-core systems, based on hardware and software. The hardware-based algorithm is a blend of Triple Modular Redundancy and Double Modular Redundancy, in which the Architectural Vulnerability Factor is considered when making scheduling decisions, in addition to the EDF and LLF scheduling algorithms. In most real-time systems the dominant part is shared memory. A low-overhead software-based fault-tolerance approach can be implemented at the user-space level so that it does not require any changes at the application level. Here redundant multi-threaded processes are used. Using those processes we can detect soft errors and recover from them. This method gives a low-overhead, fast error detection and recovery mechanism. The overhead incurred by this method ranges from 0 to 18 for selected benchmarks. The hybrid scheduling method is another scheduling approach for real-time systems. Dynamic fault-tolerant scheduling gives a high feasibility rate, whereas task criticality is used to select the type of fault recovery method in order to tolerate the maximum number of faults.
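
    The review builds on EDF and LLF as the underlying real-time schedulers. As a reminder of how feasibility is checked before redundancy is layered on top, the sketch below applies the standard EDF utilization test for periodic tasks; the task set is hypothetical.

        # Standard EDF utilization test for periodic tasks: feasible iff sum(C_i/T_i) <= 1.
        # The task set below is hypothetical.
        tasks = [
            {"name": "sensor_fusion", "C": 2.0, "T": 10.0},   # C = WCET, T = period
            {"name": "control_loop",  "C": 3.0, "T": 20.0},
            {"name": "health_check",  "C": 1.0, "T": 50.0},
        ]

        utilization = sum(t["C"] / t["T"] for t in tasks)
        print(f"U = {utilization:.2f} ->",
              "schedulable under EDF" if utilization <= 1.0 else "not schedulable under EDF")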

  5. Modeling the multi-scale mechanisms of macromolecular resource allocation

    DEFF Research Database (Denmark)

    Yang, Laurence; Yurkovich, James T; King, Zachary A

    2018-01-01

    As microbes face changing environments, they dynamically allocate macromolecular resources to produce a particular phenotypic state. Broad 'omics' data sets have revealed several interesting phenomena regarding how the proteome is allocated under differing conditions, but the functional consequen...... and detail how mathematical models have aided in our understanding of these processes. Ultimately, such modeling efforts have helped elucidate the principles of proteome allocation and hold promise for further discovery....

  6. Pi sampling: a methodical and flexible approach to initial macromolecular crystallization screening

    International Nuclear Information System (INIS)

    Gorrec, Fabrice; Palmer, Colin M.; Lebon, Guillaume; Warne, Tony

    2011-01-01

    Pi sampling, derived from the incomplete factorial approach, is an effort to maximize the diversity of macromolecular crystallization conditions and to facilitate the preparation of 96-condition initial screens. The Pi sampling method is derived from the incomplete factorial approach to macromolecular crystallization screen design. The resulting ‘Pi screens’ have a modular distribution of a given set of up to 36 stock solutions. Maximally diverse conditions can be produced by taking into account the properties of the chemicals used in the formulation and the concentrations of the corresponding solutions. The Pi sampling method has been implemented in a web-based application that generates screen formulations and recipes. It is particularly adapted to screens consisting of 96 different conditions. The flexibility and efficiency of Pi sampling is demonstrated by the crystallization of soluble proteins and of an integral membrane-protein sample
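    The incomplete-factorial idea can be illustrated in a few lines of Python. This is a generic sketch, not the Pi sampling web application itself: the stock solutions, their grouping and the balancing rule are all invented, and the real method additionally weighs chemical properties and concentrations when maximizing diversity.

    import itertools, random

    # Hypothetical stock solutions grouped by role (names and concentrations are made up).
    stocks = {
        "precipitant": ["PEG 3350 25%", "PEG 6000 20%", "ammonium sulfate 2 M", "MPD 40%"],
        "buffer":      ["HEPES pH 7.5", "Tris pH 8.5", "citrate pH 5.5"],
        "salt":        ["NaCl 0.2 M", "MgCl2 0.2 M", "LiCl 0.2 M"],
    }

    def make_screen(n_conditions=96, seed=1):
        """Shuffle the complete factorial and cycle through it for a balanced subset."""
        rng = random.Random(seed)
        full = list(itertools.product(*stocks.values()))   # complete factorial: 4*3*3 = 36
        rng.shuffle(full)
        # Cycling through the shuffled factorial means every combination is used once
        # before any repeats, keeping the usage of each stock solution close to uniform.
        return [full[i % len(full)] for i in range(n_conditions)]

    for i, condition in enumerate(make_screen()[:5], start=1):
        print(i, condition)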

  7. Understanding and Confronting Our Mistakes: The Epidemiology of Error in Radiology and Strategies for Error Reduction.

    Science.gov (United States)

    Bruno, Michael A; Walker, Eric A; Abujudeh, Hani H

    2015-10-01

    Arriving at a medical diagnosis is a highly complex process that is extremely error prone. Missed or delayed diagnoses often lead to patient harm and missed opportunities for treatment. Since medical imaging is a major contributor to the overall diagnostic process, it is also a major potential source of diagnostic error. Although some diagnoses may be missed because of the technical or physical limitations of the imaging modality, including image resolution, intrinsic or extrinsic contrast, and signal-to-noise ratio, most missed radiologic diagnoses are attributable to image interpretation errors by radiologists. Radiologic interpretation cannot be mechanized or automated; it is a human enterprise based on complex psychophysiologic and cognitive processes and is itself subject to a wide variety of error types, including perceptual errors (those in which an important abnormality is simply not seen on the images) and cognitive errors (those in which the abnormality is visually detected but the meaning or importance of the finding is not correctly understood or appreciated). The overall prevalence of radiologists' errors in practice does not appear to have changed since it was first estimated in the 1960s. The authors review the epidemiology of errors in diagnostic radiology, including a recently proposed taxonomy of radiologists' errors, as well as research findings, in an attempt to elucidate possible underlying causes of these errors. The authors also propose strategies for error reduction in radiology. On the basis of current understanding, specific suggestions are offered as to how radiologists can improve their performance in practice. © RSNA, 2015.

  8. Use of noncrystallographic symmetry for automated model building at medium to low resolution

    International Nuclear Information System (INIS)

    Wiegels, Tim; Lamzin, Victor S.

    2012-01-01

    Noncrystallographic symmetry is automatically detected and used to achieve higher completeness and greater accuracy of automatically built protein structures at resolutions of 2.3 Å or poorer. A novel method is presented for the automatic detection of noncrystallographic symmetry (NCS) in macromolecular crystal structure determination which does not require the derivation of molecular masks or the segmentation of density. It was found that throughout structure determination the NCS-related parts may be differently pronounced in the electron density. This often results in the modelling of molecular fragments of variable length and accuracy, especially during automated model-building procedures. These fragments were used to identify NCS relations in order to aid automated model building and refinement. In a number of test cases higher completeness and greater accuracy of the obtained structures were achieved, specifically at a crystallographic resolution of 2.3 Å or poorer. In the best case, the method allowed the building of up to 15% more residues automatically and a tripling of the average length of the built fragments
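    The geometric core of such an approach, deriving a candidate NCS operator from two similar, independently built fragments, can be sketched with a standard Kabsch superposition. The fragments below are synthetic and the clustering of operators is only hinted at in a comment; this is a generic illustration, not the published algorithm.

    import numpy as np

    def kabsch(P, Q):
        """Rigid transform (R, t) superposing point set P onto Q: R @ p + t ~ q for each atom."""
        Pc, Qc = P - P.mean(0), Q - Q.mean(0)
        V, S, Wt = np.linalg.svd(Pc.T @ Qc)
        d = np.sign(np.linalg.det(V @ Wt))        # guard against an improper rotation
        R = (V @ np.diag([1.0, 1.0, d]) @ Wt).T
        t = Q.mean(0) - R @ P.mean(0)
        return R, t

    # Two hypothetical CA traces of equal length, related by a known rotation and shift.
    rng = np.random.default_rng(0)
    frag_a = rng.random((20, 3)) * 10.0
    R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    frag_b = frag_a @ R_true.T + np.array([5.0, -3.0, 12.0])

    R, t = kabsch(frag_a, frag_b)
    rmsd = np.sqrt(((frag_a @ R.T + t - frag_b) ** 2).sum(axis=1).mean())
    print("candidate NCS operator recovered, rmsd =", round(rmsd, 3))
    # In practice, operators recovered from many fragment pairs would be clustered;
    # consistent clusters are then treated as NCS relations during rebuilding and refinement.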

  9. Interpretation of ensembles created by multiple iterative rebuilding of macromolecular models

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.; Grosse-Kunstleve, Ralf W.; Afonine, Pavel V.; Adams, Paul D.; Moriarty, Nigel W.; Zwart, Peter; Read, Randy J.; Turk, Dusan; Hung, Li-Wei

    2007-01-01

    Heterogeneity in ensembles generated by independent model rebuilding principally reflects the limitations of the data and of the model-building process rather than the diversity of structures in the crystal. Automation of iterative model building, density modification and refinement in macromolecular crystallography has made it feasible to carry out this entire process multiple times. By using different random seeds in the process, a number of different models compatible with experimental data can be created. Sets of models were generated in this way using real data for ten protein structures from the Protein Data Bank and using synthetic data generated at various resolutions. Most of the heterogeneity among models produced in this way is in the side chains and loops on the protein surface. Possible interpretations of the variation among models created by repetitive rebuilding were investigated. Synthetic data were created in which a crystal structure was modelled as the average of a set of ‘perfect’ structures and the range of models obtained by rebuilding a single starting model was examined. The standard deviations of coordinates in models obtained by repetitive rebuilding at high resolution are small, while those obtained for the same synthetic crystal structure at low resolution are large, so that the diversity within a group of models cannot generally be a quantitative reflection of the actual structures in a crystal. Instead, the group of structures obtained by repetitive rebuilding reflects the precision of the models, and the standard deviation of coordinates of these structures is a lower bound estimate of the uncertainty in coordinates of the individual models
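    The central quantity discussed here, the per-atom spread across independently rebuilt models, is easy to compute once the models share a common atom order. A minimal numpy sketch follows; random coordinates stand in for parsed PDB models, and the 0.3 Å scatter is invented.

    import numpy as np

    # ensemble: n_models x n_atoms x 3 array of matched atomic coordinates.
    rng = np.random.default_rng(0)
    mean_structure = rng.random((500, 3)) * 50.0                       # stand-in "true" model
    ensemble = mean_structure + rng.normal(scale=0.3, size=(10, 500, 3))

    # Per-atom spread = rms distance of each atom from the ensemble mean position.
    mean_xyz = ensemble.mean(axis=0)                                   # n_atoms x 3
    per_atom_spread = np.sqrt(((ensemble - mean_xyz) ** 2).sum(axis=2).mean(axis=0))

    print("median per-atom spread: %.2f A" % np.median(per_atom_spread))
    # Interpreted as in the abstract: this spread is a lower-bound estimate of the coordinate
    # uncertainty of any single model, not a map of true structural diversity in the crystal.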

  10. Benzodiazepine Use During Hospitalization: Automated Identification of Potential Medication Errors and Systematic Assessment of Preventable Adverse Events.

    Directory of Open Access Journals (Sweden)

    David Franklin Niedrig

    Benzodiazepines and "Z-drug" GABA-receptor modulators (BDZ) are among the most frequently used drugs in hospitals. Adverse drug events (ADE) associated with BDZ can be the result of preventable medication errors (ME) related to dosing, drug interactions and comorbidities. The present study evaluated inpatient use of BDZ and related ME and ADE. We conducted an observational study within a pharmacoepidemiological database derived from the clinical information system of a tertiary care hospital. We developed algorithms that identified dosing errors and interacting comedication for all administered BDZ. Associated ADE and risk factors were validated in medical records. Among 53,081 patients contributing 495,813 patient-days, BDZ were administered to 25,626 patients (48.3%) on 115,150 patient-days (23.2%). We identified 3,372 patient-days (2.9%) with comedication that inhibits BDZ metabolism, and 1,197 (1.0%) with lorazepam administration in severe renal impairment. After validation we classified 134, 56, 12, and 3 cases involving lorazepam, zolpidem, midazolam and triazolam, respectively, as clinically relevant ME. Among those there were 23 cases with associated adverse drug events, including severe CNS-depression, falls with subsequent injuries and severe dyspnea. Causality for BDZ was formally assessed as 'possible' or 'probable' in 20 of those cases. Four cases with ME and associated severe ADE required administration of the BDZ antagonist flumazenil. BDZ use was remarkably high in the studied setting, frequently involved potential ME related to dosing, co-medication and comorbidities, and rarely cases with associated ADE. We propose the implementation of automated ME screening and validation for the prevention of BDZ-related ADE.
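    The screening logic described (dose checks, metabolism-inhibiting co-medication, renal contraindications) reduces to simple rules over administration records. The sketch below is illustrative only: the drug list, dose limits, inhibitor set and record layout are invented, not the study's validated algorithms, and any hit would still require manual chart review.

    # Illustrative rule-based screen for potential benzodiazepine medication errors.
    MAX_DAILY_DOSE_MG = {"lorazepam": 7.5, "zolpidem": 10, "midazolam": 15}   # made-up limits
    CYP_INHIBITORS = {"ketoconazole", "clarithromycin", "ritonavir"}          # inhibit BDZ metabolism

    def screen_patient_day(day):
        """day: dict with 'drug', 'dose_mg', 'comedication' (a set) and 'egfr' (mL/min)."""
        alerts = []
        limit = MAX_DAILY_DOSE_MG.get(day["drug"])
        if limit is not None and day["dose_mg"] > limit:
            alerts.append("daily dose above reference limit")
        if day["comedication"] & CYP_INHIBITORS:
            alerts.append("co-medication inhibiting benzodiazepine metabolism")
        if day["drug"] == "lorazepam" and day["egfr"] < 30:
            alerts.append("lorazepam given in severe renal impairment")
        return alerts        # a non-empty list marks a candidate ME for manual validation

    print(screen_patient_day({"drug": "lorazepam", "dose_mg": 8,
                              "comedication": {"clarithromycin"}, "egfr": 25}))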

  11. Intelligent Automated Nuclear Fuel Pellet Inspection System

    International Nuclear Information System (INIS)

    Keyvan, S.

    1999-01-01

    At the present time, nuclear pellet inspection is performed manually, with the naked eye used for judgment and decision making on accepting or rejecting pellets. This current practice of pellet inspection is tedious and subject to inconsistencies and error. Furthermore, unnecessary re-fabrication of pellets is costly and the presence of low-quality pellets in a fuel assembly is unacceptable. To improve quality control in nuclear fuel fabrication plants, an automated pellet inspection system based on advanced techniques is needed. Such a system addresses the following concerns of the current manual inspection method: (1) the reliability of inspection, given typical human errors, (2) radiation exposure to the workers, and (3) the speed of inspection and its economic impact. The goal of this research is to develop an automated nuclear fuel pellet inspection system which is based on pellet video (photographic) images and uses artificial intelligence techniques

  12. Automated bar coding of air samples at Hanford (ABCASH)

    International Nuclear Information System (INIS)

    Troyer, G.L.; Brayton, D.D.; McNeece, S.G.

    1992-10-01

    This article describes the basis, main features and benefits of an automated system for tracking and reporting radioactive air particulate samples. The system was developed in response to a recognized need for improving the quality and integrity of air sample data related to personnel and environmental protection. The capture, storage, and retrieval of air sample data are described. The automation of the associated data capture and data input eliminates a large potential for human error. The system utilizes personal computers, handheld computers, a commercial personal computer database package, commercial programming languages, and complete documentation to satisfy the system's automation objective

  13. Coil Tolerance Impact on Plasma Surface Quality for NCSX

    International Nuclear Information System (INIS)

    Brooks, Art; Reiersen, Wayne

    2003-01-01

    The successful operation of the National Compact Stellarator Experiment (NCSX) machine will require producing plasma configurations with good flux surfaces, with a minimum volume of the plasma lost to magnetic islands or stochastic regions. The project goal is to achieve good flux surfaces over 90% of the plasma volume. NCSX is a three-period device designed to be operated with iota ranging from ∼0.4 on axis to ∼0.7 at the edge. The field errors of most concern are those that are resonant with 3/5 and 3/6 modes (for symmetry preserving field errors) and the 1/2 and 2/3 modes (for symmetry breaking field errors). In addition to losses inherent in the physics configuration itself, there will be losses from field errors arising from coil construction and assembly errors. Some of these losses can be recovered through the use of trim coils or correction coils. The impact of coil tolerances on plasma surface quality is evaluated herein for the NCSX design. The methods used in this evaluation are discussed. The ability of the NCSX trim coils to correct for field errors is also examined. The results are used to set coil tolerances for the various coil systems

  14. Measuring Individual Differences in the Perfect Automation Schema.

    Science.gov (United States)

    Merritt, Stephanie M; Unnerstall, Jennifer L; Lee, Deborah; Huber, Kelli

    2015-08-01

    A self-report measure of the perfect automation schema (PAS) is developed and tested. Researchers have hypothesized that the extent to which users possess a PAS is associated with greater decreases in trust after users encounter automation errors. However, no measure of the PAS currently exists. We developed a self-report measure assessing two proposed PAS factors: high expectations and all-or-none thinking about automation performance. In two studies, participants responded to our PAS measure, interacted with imperfect automated aids, and reported trust. Each of the two PAS measure factors demonstrated fit to the hypothesized factor structure and convergent and discriminant validity when compared with propensity to trust machines and trust in a specific aid. However, the high expectations and all-or-none thinking scales showed low intercorrelations and differential relationships with outcomes, suggesting that they might best be considered two separate constructs rather than two subfactors of the PAS. All-or-none thinking had significant associations with decreases in trust following aid errors, whereas high expectations did not. Results therefore suggest that the all-or-none thinking scale may best represent the PAS construct. Our PAS measure (specifically, the all-or-none thinking scale) significantly predicted the severe trust decreases thought to be associated with high PAS. Further, it demonstrated acceptable psychometric properties across two samples. This measure may be used in future work to assess levels of PAS in users of automated systems in either research or applied settings. © 2015, Human Factors and Ergonomics Society.

  15. Establishment of a tolerance budget for the advanced photon source storage ring

    International Nuclear Information System (INIS)

    Bizek, H.; Crosbie, E.; Lessner, E.; Teng, L.

    1993-01-01

    The limitations on the dynamic aperture of the Advanced Photon Source storage ring due to magnet misalignments and fabrication errors are presented. The reduction of the dynamic aperture is analyzed first for each error considered individually, and then for combined error multipole fields in dipole, quadrupole, and sextupole magnets, excluding and including magnet misalignments. Since misalignments of the strong quadrupoles in the ring induce large orbit distortions, the effects on the dynamic aperture are investigated before and after orbit correction. Effects of off-momentum particles and the tune dependence with momentum are also presented. This extensive analysis leads to the establishment of a tolerance budget. With all the errors set at the tolerance level, and with the orbit distortions corrected, the dynamic aperture reduction is no greater than 50% of that of the ideal machine

  16. Establishment of a tolerance budget for the Advanced Photon Source storage ring

    International Nuclear Information System (INIS)

    Bizek, H.; Crosbie, E.; Lessner, E.; Teng, L.

    1993-01-01

    The limitations on the dynamic aperture of the Advanced Photon Source storage ring due to magnet misalignments and fabrication errors are presented. The reduction of the dynamic aperture is analyzed first for each error considered individually, and then for combined error multipole fields in dipole, quadrupole, and sextupole magnets, excluding and including magnet misalignments. Since misalignments of the strong quadrupoles in the ring induce large orbit distortions, the effects on the dynamic aperture are investigated before and after orbit correction. Effects of off-momentum particles and the tune dependence with momentum are also presented. This extensive analysis leads to the establishment of a tolerance budget. With all the errors set at the tolerance level and with the orbit distortions corrected, the dynamic aperture reduction is no greater than 50% of that of the ideal machine

  17. Development of Nuclear Power Plant Safety Evaluation Method for the Automation Algorithm Application

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung Geun; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of)

    2016-10-15

    It is commonly believed that replacing human operators with automated systems would guarantee greater efficiency, lower workloads, and fewer human errors. Conventional machine learning techniques are considered incapable of handling complex situations in an NPP. Due to these kinds of issues, automation is not actively adopted, although the human error probability drastically increases during abnormal situations in an NPP due to information overload, high workload, and the short time available for diagnosis. Recently, new machine learning techniques, known as 'deep learning' techniques, have been actively applied to many fields, and deep learning-based artificial intelligences (AIs) are showing better performance than conventional AIs. In 2015, the deep Q-network (DQN), one of the deep learning techniques, was developed and applied to train an AI that automatically plays various Atari 2600 games, and this AI surpassed human-level play in many of them. Also in 2016, 'AlphaGo', developed by 'Google DeepMind' on the basis of deep learning techniques to play the game of Go (i.e. Baduk), defeated Se-dol Lee, the world Go champion, with a score of 4:1. As part of the effort to reduce human error in NPPs, the ultimate goal of this study is the development of an automation algorithm which can cover various situations in NPPs. As the first part, a quantitative and real-time NPP safety evaluation method is being developed in order to provide the training criteria for the automation algorithm. For that, the EWS concept from the medical field was adopted, and its applicability is investigated in this paper. Practically, the application of full automation (i.e. fully replacing human operators) may require much more time for the validation and investigation of side effects after the development of the automation algorithm, and so adoption in the form of full automation will take a long time.
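    The borrowed EWS idea, scoring each monitored parameter by how far it sits outside its normal band and summing the scores into a single real-time index, can be sketched generically. The parameters, bands and point values below are invented for illustration and are not a validated NPP safety metric.

    # Generic early-warning-score style aggregation (illustrative bands, not plant data).
    # Each entry lists (low, high, points), narrowest band first; higher totals mean less safe.
    BANDS = {
        "pressurizer_pressure_MPa": [(14.0, 16.5, 0), (13.0, 17.0, 1), (0.0, 99.0, 3)],
        "steam_gen_level_pct":      [(40.0, 60.0, 0), (25.0, 75.0, 1), (0.0, 100.0, 3)],
        "core_exit_temp_C":         [(280.0, 330.0, 0), (270.0, 345.0, 2), (0.0, 999.0, 3)],
    }

    def safety_score(readings):
        total = 0
        for name, value in readings.items():
            for lo, hi, points in BANDS[name]:     # first (narrowest) band containing the value
                if lo <= value <= hi:
                    total += points
                    break
        return total

    print(safety_score({"pressurizer_pressure_MPa": 15.5,
                        "steam_gen_level_pct": 28.0,
                        "core_exit_temp_C": 340.0}))   # -> 0 + 1 + 2 = 3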

  18. Development of Nuclear Power Plant Safety Evaluation Method for the Automation Algorithm Application

    International Nuclear Information System (INIS)

    Kim, Seung Geun; Seong, Poong Hyun

    2016-01-01

    It is commonly believed that replacing human operators with automated systems would guarantee greater efficiency, lower workloads, and fewer human errors. Conventional machine learning techniques are considered incapable of handling complex situations in an NPP. Due to these kinds of issues, automation is not actively adopted, although the human error probability drastically increases during abnormal situations in an NPP due to information overload, high workload, and the short time available for diagnosis. Recently, new machine learning techniques, known as 'deep learning' techniques, have been actively applied to many fields, and deep learning-based artificial intelligences (AIs) are showing better performance than conventional AIs. In 2015, the deep Q-network (DQN), one of the deep learning techniques, was developed and applied to train an AI that automatically plays various Atari 2600 games, and this AI surpassed human-level play in many of them. Also in 2016, 'AlphaGo', developed by 'Google DeepMind' on the basis of deep learning techniques to play the game of Go (i.e. Baduk), defeated Se-dol Lee, the world Go champion, with a score of 4:1. As part of the effort to reduce human error in NPPs, the ultimate goal of this study is the development of an automation algorithm which can cover various situations in NPPs. As the first part, a quantitative and real-time NPP safety evaluation method is being developed in order to provide the training criteria for the automation algorithm. For that, the EWS concept from the medical field was adopted, and its applicability is investigated in this paper. Practically, the application of full automation (i.e. fully replacing human operators) may require much more time for the validation and investigation of side effects after the development of the automation algorithm, and so adoption in the form of full automation will take a long time

  19. Automated Testing with Targeted Event Sequence Generation

    DEFF Research Database (Denmark)

    Jensen, Casper Svenning; Prasad, Mukul R.; Møller, Anders

    2013-01-01

    Automated software testing aims to detect errors by producing test inputs that cover as much of the application source code as possible. Applications for mobile devices are typically event-driven, which raises the challenge of automatically producing event sequences that result in high coverage...

  20. Advanced Air Traffic Management Research (Human Factors and Automation): NASA Research Initiatives in Human-Centered Automation Design in Airspace Management

    Science.gov (United States)

    Corker, Kevin M.; Condon, Gregory W. (Technical Monitor)

    1996-01-01

    NASA has initiated a significant thrust of research and development focused on providing the flight crew and air traffic managers with automation aids to increase capacity in en route and terminal area operations through the use of flexible, more fuel-efficient routing, while improving the level of safety in commercial carrier operations. In that system development, definition of cognitive requirements for integrated multi-operator dynamic aiding systems is fundamental. The core processes of control and the distribution of decision making in that control are undergoing extensive analysis. From our perspective, the human operators and the procedures by which they interact are the fundamental determinants of the safe, efficient, and flexible operation of the system. In that perspective, we have begun to explore what our experience has taught will be the most challenging aspects of designing and integrating human-centered automation in the advanced system. We have performed a full mission simulation looking at the role shift to self-separation on board the aircraft with the rules of the air guiding behavior and the provision of a cockpit display of traffic information and an on-board traffic alert system that seamlessly integrates into the TCAS operations. We have performed an initial investigation of the operational impact of "Dynamic Density" metrics on controller relinquishing and reestablishing full separation authority. (We follow the assumption that responsibility at all times resides with the controller.) This presentation will describe those efforts as well as the process by which we will guide the development of error tolerant systems that are sensitive to shifts in operator workload levels and dynamic shifts in the operating point of air traffic management.

  1. Operating procedure automation to enhance safety of nuclear power plants

    International Nuclear Information System (INIS)

    Husseiny, A.A.; Sabri, Z.A.; Adams, S.K.; Rodriguez, R.J.; Packer, D.; Holmes, J.W.

    1989-01-01

    The use of logic statements and computer assistance is explored as a means for automation and improvement of the design of operating procedures, including those employed in abnormal and emergency situations. Operating procedures for downpower and loss of forced circulation are used for demonstration. Human-factors analysis is performed on generic emergency operating procedures for three strategies of control: manual, semi-automatic and automatic, using standard emergency operating procedures. Such preliminary analysis shows that automation of procedures is feasible provided that fault-tolerant software and hardware become available for design of the controllers. Recommendations are provided for tests to substantiate the promise of enhancement of plant safety. Adequate design of operating procedures through automation may alleviate several major operational problems of nuclear power plants. Also, automation of procedures is necessary for partial or overall automatic control of plants. Fully automatic operations are needed for space applications, while supervised automation of land-based and offshore plants may become the thrust of a new generation of nuclear power plants. (orig.)

  2. Photogrammetric approach to automated checking of DTMs

    DEFF Research Database (Denmark)

    Potucková, Marketa

    2005-01-01

    Geometrically accurate digital terrain models (DTMs) are essential for orthoimage production and many other applications. Collecting reference data or visual inspection are reliable but time-consuming and therefore expensive methods for finding errors in DTMs. In this paper, a photogrammetric approach to automated checking and improving of DTMs is evaluated. Corresponding points in two overlapping orthoimages are found by means of area based matching. Provided the image orientation is correct, discovered displacements correspond to DTM errors. Improvements of the method regarding its...
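    Area-based matching of the kind described can be illustrated with normalized cross-correlation over a small search window; the shift of the best match is a candidate DTM error at that location. The arrays below are synthetic stand-ins for orthoimage patches, and the exhaustive search is kept deliberately simple.

    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation of two equally sized patches."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom else 0.0

    def match_patch(template, search):
        """Slide the template over the search window; return best correlation and local shift."""
        th, tw = template.shape
        best = (-2.0, (0, 0))
        for dy in range(search.shape[0] - th + 1):
            for dx in range(search.shape[1] - tw + 1):
                best = max(best, (ncc(template, search[dy:dy + th, dx:dx + tw]), (dy, dx)))
        return best

    # Synthetic example: the second 'orthoimage' is the first shifted by (3, 5) pixels.
    rng = np.random.default_rng(2)
    img1 = rng.random((80, 80))
    img2 = np.roll(img1, (3, 5), axis=(0, 1))

    corr, (dy, dx) = match_patch(img1[20:40, 20:40], img2[10:60, 10:60])
    # Convert the local shift back to an image displacement: the template sat at (20, 20)
    # in img1, and the search window starts at (10, 10) in img2.
    print(corr, (10 + dy - 20, 10 + dx - 20))      # ~ (3, 5): a candidate DTM error vector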

  3. The impact of a closed-loop electronic prescribing and administration system on prescribing errors, administration errors and staff time: a before-and-after study.

    Science.gov (United States)

    Franklin, Bryony Dean; O'Grady, Kara; Donyai, Parastou; Jacklin, Ann; Barber, Nick

    2007-08-01

    To assess the impact of a closed-loop electronic prescribing, automated dispensing, barcode patient identification and electronic medication administration record (EMAR) system on prescribing and administration errors, confirmation of patient identity before administration, and staff time. Before-and-after study in a surgical ward of a teaching hospital, involving patients and staff of that ward. Closed-loop electronic prescribing, automated dispensing, barcode patient identification and EMAR system. Percentage of new medication orders with a prescribing error, percentage of doses with medication administration errors (MAEs) and percentage given without checking patient identity. Time spent prescribing and providing a ward pharmacy service. Nursing time on medication tasks. Prescribing errors were identified in 3.8% of 2450 medication orders pre-intervention and 2.0% of 2353 orders afterwards. Medical staff required 15 s to prescribe a regular inpatient drug pre-intervention and 39 s afterwards (p = 0.03; t test). Time spent providing a ward pharmacy service increased from 68 min to 98 min each weekday (p = 0.001; t test); 22% of drug charts were unavailable pre-intervention. Time per drug administration round decreased from 50 min to 40 min (p = 0.006; t test); nursing time on medication tasks outside of drug rounds increased from 21.1% to 28.7% (p = 0.006; chi-squared test). A closed-loop electronic prescribing, dispensing and barcode patient identification system reduced prescribing errors and MAEs, and increased confirmation of patient identity before administration. Time spent on medication-related tasks increased.

  4. Functionalization of Planet-Satellite Nanostructures Revealed by Nanoscopic Localization of Distinct Macromolecular Species

    KAUST Repository

    Rossner, Christian; Roddatis, Vladimir; Lopatin, Sergei; Vana, Philipp

    2016-01-01

    The development of a straightforward method is reported to form hybrid polymer/gold planet-satellite nanostructures (PlSNs) with functional polymer. Polyacrylate type polymer with benzyl chloride in its backbone as a macromolecular tracer

  5. Comparison of manual versus automated data collection method for an evidence-based nursing practice study.

    Science.gov (United States)

    Byrne, M D; Jordan, T R; Welle, T

    2013-01-01

    The objective of this study was to investigate and improve the use of automated data collection procedures for nursing research and quality assurance. A descriptive, correlational study analyzed 44 orthopedic surgical patients who were part of an evidence-based practice (EBP) project examining post-operative oxygen therapy at a Midwestern hospital. The automation work attempted to replicate a manually-collected data set from the EBP project. Automation was successful in replicating data collection for study data elements that were available in the clinical data repository. The automation procedures identified 32 "false negative" patients who met the inclusion criteria described in the EBP project but were not selected during the manual data collection. Automating data collection for certain data elements, such as oxygen saturation, proved challenging because of workflow and practice variations and the reliance on disparate sources for data abstraction. Automation also revealed instances of human error including computational and transcription errors as well as incomplete selection of eligible patients. Automated data collection for analysis of nursing-specific phenomenon is potentially superior to manual data collection methods. Creation of automated reports and analysis may require initial up-front investment with collaboration between clinicians, researchers and information technology specialists who can manage the ambiguities and challenges of research and quality assurance work in healthcare.
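    The 'false negative' check described reduces to a set comparison between the automated and the manual patient lists once both have been assembled; a tiny sketch with hypothetical identifiers:

    # Hypothetical patient identifiers selected by each collection method.
    manual_cohort    = {"P001", "P002", "P005", "P009"}
    automated_cohort = {"P001", "P002", "P003", "P005", "P007", "P009"}

    # Patients meeting the inclusion criteria but missed during manual collection.
    missed_by_manual = automated_cohort - manual_cohort
    # Patients the automation failed to capture (e.g. data only in free-text notes).
    missed_by_automation = manual_cohort - automated_cohort

    print(sorted(missed_by_manual), sorted(missed_by_automation))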

  6. Automation, consolidation, and integration in autoimmune diagnostics.

    Science.gov (United States)

    Tozzoli, Renato; D'Aurizio, Federica; Villalta, Danilo; Bizzaro, Nicola

    2015-08-01

    Over the past two decades, we have witnessed an extraordinary change in autoimmune diagnostics, characterized by the progressive evolution of analytical technologies, the availability of new tests, and the explosive growth of molecular biology and proteomics. Aside from these huge improvements, organizational changes have also occurred which brought about a more modern vision of the autoimmune laboratory. The introduction of automation (for harmonization of testing, reduction of human error, reduction of handling steps, increase of productivity, decrease of turnaround time, improvement of safety), consolidation (combining different analytical technologies or strategies on one instrument or on one group of connected instruments) and integration (linking analytical instruments or group of instruments with pre- and post-analytical devices) opened a new era in immunodiagnostics. In this article, we review the most important changes that have occurred in autoimmune diagnostics and present some models related to the introduction of automation in the autoimmunology laboratory, such as automated indirect immunofluorescence and changes in the two-step strategy for detection of autoantibodies; automated monoplex immunoassays and reduction of turnaround time; and automated multiplex immunoassays for autoantibody profiling.

  7. Error correction and degeneracy in surface codes suffering loss

    International Nuclear Information System (INIS)

    Stace, Thomas M.; Barrett, Sean D.

    2010-01-01

    Many proposals for quantum information processing are subject to detectable loss errors. In this paper, we give a detailed account of recent results in which we showed that topological quantum memories can simultaneously tolerate both loss errors and computational errors, with a graceful tradeoff between the threshold for each. We further discuss a number of subtleties that arise when implementing error correction on topological memories. We particularly focus on the role played by degeneracy in the matching algorithms and present a systematic study of its effects on thresholds. We also discuss some of the implications of degeneracy for estimating phase transition temperatures in the random bond Ising model.

  8. Automatic Compensation of Workpiece Positioning Tolerances for Precise Laser Welding

    Directory of Open Access Journals (Sweden)

    N. C. Stache

    2008-01-01

    Precise laser welding plays a fundamental role in the production of high-tech goods, particularly in precision engineering. In this working field, precise adjustment and compensation of positioning tolerances of the parts to be welded with respect to the laser beam is of paramount importance. This procedure mostly requires tedious and error-prone manual adjustment, which additionally results in a sharp increase in production costs. We therefore developed a system which automates and thus accelerates this procedure significantly. To this end, the welding machine is equipped with a camera to acquire high resolution images of the parts to be welded. In addition, a software framework is developed which enables precise automatic position detection of these parts and adjusts the position of the welding contour correspondingly. As a result, the machine is rapidly prepared for welding, and it is much more flexible in adapting to unknown parts. This paper describes the entire concept of extending a conventional welding machine with means for image acquisition and position estimation. In addition to this description, the algorithms, the results of an evaluation of position estimation, and a final welding result are presented.
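    Once the part's actual pose has been estimated from the camera image, the correction itself is a plain rigid transform of the programmed welding contour. A small numpy sketch follows; the contour and pose values are invented, and a real system would estimate dx, dy and theta from fiducials or edges detected in the high-resolution image.

    import numpy as np

    def compensate_contour(contour, dx, dy, theta_deg):
        """Apply the measured part offset (dx, dy) and rotation theta to the nominal
        welding contour so the beam follows the part as it is actually clamped."""
        th = np.deg2rad(theta_deg)
        R = np.array([[np.cos(th), -np.sin(th)],
                      [np.sin(th),  np.cos(th)]])
        return contour @ R.T + np.array([dx, dy])

    # Nominal welding contour: a 2 mm x 1 mm rectangle of weld points (units: mm).
    nominal = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0]])

    # Pose reported by the (hypothetical) vision front end.
    corrected = compensate_contour(nominal, dx=0.12, dy=-0.05, theta_deg=1.5)
    print(np.round(corrected, 3))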

  9. ADVANCED MMIS TOWARD SUBSTANTIAL REDUCTION IN HUMAN ERRORS IN NPPS

    Directory of Open Access Journals (Sweden)

    POONG HYUN SEONG

    2013-04-01

    This paper aims to give an overview of methods to inherently prevent human errors and to effectively mitigate the consequences of such errors by securing defense-in-depth during plant management through the advanced man-machine interface system (MMIS). It is needless to stress the significance of human error reduction during an accident in nuclear power plants (NPPs). Unexpected shutdowns caused by human errors not only threaten nuclear safety but also greatly lower public acceptance of nuclear power. We have to recognize that human errors remain possible, since humans are not perfect, particularly under stressful conditions. However, we have the opportunity to improve the situation through advanced information and communication technologies on the basis of lessons learned from our experiences. As important lessons, the authors explain key issues associated with automation, the man-machine interface, operator support systems, and procedures. Based on this investigation, we outline the concept and technical factors needed to develop advanced automation, operation and maintenance support systems, and computer-based procedures using wired/wireless technology. It should be noted that the ultimate responsibility for nuclear safety obviously belongs to humans, not to machines. Therefore, safety culture, including education and training, which is a kind of organizational factor, should be emphasized as well. With regard to safety culture for human error reduction, several issues that we are facing these days are described. We expect the ideas of the advanced MMIS proposed in this paper to guide the future direction of related research and ultimately strengthen the safety of NPPs.

  10. Advanced MMIS Toward Substantial Reduction in Human Errors in NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Seong, Poong Hyun; Kang, Hyun Gook [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Na, Man Gyun [Chosun Univ., Gwangju (Korea, Republic of); Kim, Jong Hyun [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of); Heo, Gyunyoung [Kyung Hee Univ., Yongin (Korea, Republic of); Jung, Yoensub [Korea Hydro and Nuclear Power Co., Ltd., Daejeon (Korea, Republic of)

    2013-04-15

    This paper aims to give an overview of methods to inherently prevent human errors and to effectively mitigate the consequences of such errors by securing defense-in-depth during plant management through the advanced man-machine interface system (MMIS). It is needless to stress the significance of human error reduction during an accident in nuclear power plants (NPPs). Unexpected shutdowns caused by human errors not only threaten nuclear safety but also greatly lower public acceptance of nuclear power. We have to recognize that human errors remain possible, since humans are not perfect, particularly under stressful conditions. However, we have the opportunity to improve the situation through advanced information and communication technologies on the basis of lessons learned from our experiences. As important lessons, the authors explain key issues associated with automation, the man-machine interface, operator support systems, and procedures. Based on this investigation, we outline the concept and technical factors needed to develop advanced automation, operation and maintenance support systems, and computer-based procedures using wired/wireless technology. It should be noted that the ultimate responsibility for nuclear safety obviously belongs to humans, not to machines. Therefore, safety culture, including education and training, which is a kind of organizational factor, should be emphasized as well. With regard to safety culture for human error reduction, several issues that we are facing these days are described. We expect the ideas of the advanced MMIS proposed in this paper to guide the future direction of related research and ultimately strengthen the safety of NPPs.

  11. Advanced MMIS Toward Substantial Reduction in Human Errors in NPPs

    International Nuclear Information System (INIS)

    Seong, Poong Hyun; Kang, Hyun Gook; Na, Man Gyun; Kim, Jong Hyun; Heo, Gyunyoung; Jung, Yoensub

    2013-01-01

    This paper aims to give an overview of methods to inherently prevent human errors and to effectively mitigate the consequences of such errors by securing defense-in-depth during plant management through the advanced man-machine interface system (MMIS). It is needless to stress the significance of human error reduction during an accident in nuclear power plants (NPPs). Unexpected shutdowns caused by human errors not only threaten nuclear safety but also greatly lower public acceptance of nuclear power. We have to recognize that human errors remain possible, since humans are not perfect, particularly under stressful conditions. However, we have the opportunity to improve the situation through advanced information and communication technologies on the basis of lessons learned from our experiences. As important lessons, the authors explain key issues associated with automation, the man-machine interface, operator support systems, and procedures. Based on this investigation, we outline the concept and technical factors needed to develop advanced automation, operation and maintenance support systems, and computer-based procedures using wired/wireless technology. It should be noted that the ultimate responsibility for nuclear safety obviously belongs to humans, not to machines. Therefore, safety culture, including education and training, which is a kind of organizational factor, should be emphasized as well. With regard to safety culture for human error reduction, several issues that we are facing these days are described. We expect the ideas of the advanced MMIS proposed in this paper to guide the future direction of related research and ultimately strengthen the safety of NPPs

  12. Thiomers for oral delivery of hydrophilic macromolecular drugs.

    Science.gov (United States)

    Bernkop-Schnürch, Andreas; Hoffer, Martin H; Kafedjiiski, Krum

    2004-11-01

    In recent years thiolated polymers (thiomers) have appeared as a promising new tool in oral drug delivery. Thiomers are obtained by the immobilisation of thiol-bearing ligands to mucoadhesive polymeric excipients. By the formation of disulfide bonds with mucus glycoproteins, the mucoadhesive properties of thiomers are improved up to 130-fold compared with the corresponding unmodified polymers. Owing to the formation of inter- and intramolecular disulfide bonds within the thiomer itself, matrix tablets and particulate delivery systems show strong cohesive properties, resulting in comparatively higher stability, prolonged disintegration times and a more controlled drug release. The permeation of hydrophilic macromolecular drugs through the gastrointestinal (GI) mucosa can be improved by the use of thiomers. Furthermore, some thiomers exhibit improved inhibitory properties towards GI peptidases. The efficacy of thiomers in oral drug delivery has been demonstrated by various in vivo studies. A pharmacological efficacy of 1%, for example, was achieved in rats by oral administration of calcitonin tablets comprising a thiomer. Furthermore, tablets comprising a thiomer and pegylated insulin resulted in a pharmacological efficacy of 7% after oral application to diabetic mice. Low-molecular-weight heparin embedded in thiolated polycarbophil led to an absolute bioavailability of ≥ 20% after oral administration to rats. In these studies, formulations comprising the corresponding unmodified polymer had only a marginal or no effect. These results indicate that drug carrier systems based on thiomers are a promising tool for oral delivery of hydrophilic macromolecular drugs.

  13. An approach to the verification of a fault-tolerant, computer-based reactor safety system: A case study using automated reasoning: Volume 1: Interim report

    International Nuclear Information System (INIS)

    Chisholm, G.H.; Kljaich, J.; Smith, B.T.; Wojcik, A.S.

    1987-01-01

    The purpose of this project is to explore the feasibility of automating the verification process for computer systems. The intent is to demonstrate that both the software and hardware that comprise the system meet specified availability and reliability criteria, that is, total design analysis. The approach to automation is based upon the use of Automated Reasoning Software developed at Argonne National Laboratory. This approach is herein referred to as formal analysis and is based on previous work on the formal verification of digital hardware designs. Formal analysis represents a rigorous evaluation which is appropriate for system acceptance in critical applications, such as a Reactor Safety System (RSS). This report describes a formal analysis technique in the context of a case study, that is, demonstrates the feasibility of applying formal analysis via application. The case study described is based on the Reactor Safety System (RSS) for the Experimental Breeder Reactor-II (EBR-II). This is a system where high reliability and availability are tantamount to safety. The conceptual design for this case study incorporates a Fault-Tolerant Processor (FTP) for the computer environment. An FTP is a computer which has the ability to produce correct results even in the presence of any single fault. This technology was selected as it provides a computer-based equivalent to the traditional analog based RSSs. This provides a more conservative design constraint than that imposed by the IEEE Standard, Criteria For Protection Systems For Nuclear Power Generating Stations (ANSI N42.7-1972)

  14. A SAFE approach towards early design space exploration of Fault-tolerant multimedia MPSoCs

    NARCIS (Netherlands)

    van Stralen, P.; Pimentel, A.

    2012-01-01

    With the reduction in feature size, transient errors start to play an important role in modern embedded systems. It is therefore important to make fault-tolerance a first-class citizen in embedded system design. Fault-tolerance patterns are techniques to make an application fault-tolerant. Not only

  15. ISPyB: an information management system for synchrotron macromolecular crystallography.

    Science.gov (United States)

    Delagenière, Solange; Brenchereau, Patrice; Launer, Ludovic; Ashton, Alun W; Leal, Ricardo; Veyrier, Stéphanie; Gabadinho, José; Gordon, Elspeth J; Jones, Samuel D; Levik, Karl Erik; McSweeney, Seán M; Monaco, Stéphanie; Nanao, Max; Spruce, Darren; Svensson, Olof; Walsh, Martin A; Leonard, Gordon A

    2011-11-15

    Individual research groups now analyze thousands of samples per year at synchrotron macromolecular crystallography (MX) resources. The efficient management of experimental data is thus essential if the best possible experiments are to be performed and the best possible data used in downstream processes in structure determination pipelines. Information System for Protein crystallography Beamlines (ISPyB), a Laboratory Information Management System (LIMS) with an underlying data model allowing for the integration of analyses down-stream of the data collection experiment was developed to facilitate such data management. ISPyB is now a multisite, generic LIMS for synchrotron-based MX experiments. Its initial functionality has been enhanced to include improved sample tracking and reporting of experimental protocols, the direct ranking of the diffraction characteristics of individual samples and the archiving of raw data and results from ancillary experiments and post-experiment data processing protocols. This latter feature paves the way for ISPyB to play a central role in future macromolecular structure solution pipelines and validates the application of the approach used in ISPyB to other experimental techniques, such as biological solution Small Angle X-ray Scattering and spectroscopy, which have similar sample tracking and data handling requirements.

  16. Macromolecular systems for vaccine delivery.

    Science.gov (United States)

    MuŽíková, G; Laga, R

    2016-10-20

    Vaccines have helped considerably in eliminating some life-threatening infectious diseases over the past two hundred years. Recently, human medicine has focused on vaccination against some of the world's most common infectious diseases (AIDS, malaria, tuberculosis, etc.), and vaccination is also gaining popularity in the treatment of cancer or autoimmune diseases. The major limitation of current vaccines lies in their poor ability to generate a sufficient level of protective antibodies and T cell responses against diseases such as HIV, malaria, tuberculosis and cancers. Promising vaccination systems that could improve the potency of weakly immunogenic vaccines include macromolecular carriers (water-soluble polymers, polymer particles, micelles, gels, etc.) conjugated with antigens and immunostimulatory molecules. The size, architecture, and composition of the high-molecular-weight carrier can significantly improve vaccine efficiency. This review covers the most recently developed (bio)polymer-based vaccines reported in the literature.

  17. Spectroscopic investigation of ionizing-radiation tolerance of a Chlorophyceae green micro-alga

    Energy Technology Data Exchange (ETDEWEB)

    Farhi, E; Compagnon, E; Marzloff, V; Ollivier, J; Boisson, A M; Natali, F; Russo, D [Institut Laue-Langevin, BP 156, 38042 Grenoble cedex 9 (France); Rivasseau, C; Gromova, M; Bligny, R [CEA, Laboratoire de Physiologie Cellulaire Vegetale, 17 rue des Martyrs, 38054 Grenoble cedex 9 (France); Coute, A [Museum National d' Histoire Naturelle, Laboratoire de Cryptogamie, 2 rue Buffon, 75005 Paris (France)

    2008-03-12

    Micro-organisms living in extreme environments are captivating in the peculiar survival processes they have developed. Deinococcus radiodurans is probably the most famous radio-resistant bacteria. Similarly, a specific ecosystem has grown in a research reactor storage pool, and has selected organisms which may sustain radiative stress. An original green micro-alga which was never studied for its high tolerance to radiations has been isolated. It is the only autotrophic eukaryote that develops in this pool, although contamination possibilities coming from outside are not unusual. Studying what could explain this irradiation tolerance is consequently very interesting. An integrative study of the effects of irradiation on the micro-algae physiology, metabolism, internal dynamics, and genomics was initiated. In the work presented here, micro-algae were stressed with irradiation doses up to 20 kGy (2 Mrad), and studied by means of nuclear magnetic resonance, looking for modifications in the metabolism, and on the IN13 neutron backscattering instrument at the ILL, looking for both dynamics and structural macromolecular changes in the cells.

  18. Spectroscopic investigation of ionizing-radiation tolerance of a Chlorophyceae green micro-alga

    International Nuclear Information System (INIS)

    Farhi, E; Compagnon, E; Marzloff, V; Ollivier, J; Boisson, A M; Natali, F; Russo, D; Rivasseau, C; Gromova, M; Bligny, R; Coute, A

    2008-01-01

    Micro-organisms living in extreme environments are captivating in the peculiar survival processes they have developed. Deinococcus radiodurans is probably the most famous radio-resistant bacteria. Similarly, a specific ecosystem has grown in a research reactor storage pool, and has selected organisms which may sustain radiative stress. An original green micro-alga which was never studied for its high tolerance to radiations has been isolated. It is the only autotrophic eukaryote that develops in this pool, although contamination possibilities coming from outside are not unusual. Studying what could explain this irradiation tolerance is consequently very interesting. An integrative study of the effects of irradiation on the micro-algae physiology, metabolism, internal dynamics, and genomics was initiated. In the work presented here, micro-algae were stressed with irradiation doses up to 20 kGy (2 Mrad), and studied by means of nuclear magnetic resonance, looking for modifications in the metabolism, and on the IN13 neutron backscattering instrument at the ILL, looking for both dynamics and structural macromolecular changes in the cells

  19. Development of an Automated Technique for Failure Modes and Effect Analysis

    DEFF Research Database (Denmark)

    Blanke, M.; Borch, Ole; Allasia, G.

    1999-01-01

    Advances in automation have provided integration of monitoring and control functions to enhance the operator's overview and ability to take remedy actions when faults occur. Automation in plant supervision is technically possible with integrated automation systems as platforms, but new design...... methods are needed to cope efficiently with the complexity and to ensure that the functionality of a supervisor is correct and consistent. In particular these methods are expected to significantly improve fault tolerance of the designed systems. The purpose of this work is to develop a software module...... implementing an automated technique for Failure Modes and Effects Analysis (FMEA). This technique is based on the matrix formulation of FMEA for the investigation of failure propagation through a system. As main result, this technique will provide the design engineer with decision tables for fault handling...
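    The matrix formulation mentioned can be sketched compactly: encode direct failure propagation between components as a boolean matrix and take its transitive closure to list every downstream effect of each failure mode, which is the raw material for the decision tables. The component names and couplings below are invented; this sketch is not the software module described in the abstract.

    import numpy as np

    components = ["sensor", "controller", "actuator", "plant"]

    # P[i, j] = True if a failure in component i propagates directly to component j.
    P = np.zeros((4, 4), dtype=bool)
    P[0, 1] = True      # sensor fault corrupts the controller input
    P[1, 2] = True      # controller fault drives the actuator incorrectly
    P[2, 3] = True      # actuator fault disturbs the plant

    def transitive_closure(M):
        """Warshall-style closure: effects of effects are also effects."""
        C = M.copy()
        for k in range(len(C)):
            C |= np.outer(C[:, k], C[k, :])
        return C

    effects = transitive_closure(P)
    for i, name in enumerate(components):
        downstream = [components[j] for j in range(len(components)) if effects[i, j]]
        print(f"failure in {name} ultimately affects: {downstream or 'nothing downstream'}")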

  20. Development of an automated technique for failure modes and effect analysis

    DEFF Research Database (Denmark)

    Blanke, Mogens; Borch, Ole; Bagnoli, F.

    1999-01-01

    Advances in automation have provided integration of monitoring and control functions to enhance the operator's overview and ability to take remedy actions when faults occur. Automation in plant supervision is technically possible with integrated automation systems as platforms, but new design...... methods are needed to cope efficiently with the complexity and to ensure that the functionality of a supervisor is correct and consistent. In particular these methods are expected to significantly improve fault tolerance of the designed systems. The purpose of this work is to develop a software module...... implementing an automated technique for Failure Modes and Effects Analysis (FMEA). This technique is based on the matrix formulation of FMEA for the investigation of failure propagation through a system. As main result, this technique will provide the design engineer with decision tables for fault handling...

  1. Auditory interfaces in automated driving: an international survey

    Directory of Open Access Journals (Sweden)

    Pavlo Bazilinskyy

    2015-08-01

    This study investigated people's opinions on auditory interfaces in contemporary cars and their willingness to be exposed to auditory feedback in automated driving. We used an Internet-based survey to collect 1,205 responses from 91 countries. The respondents stated their attitudes towards two existing auditory driver assistance systems, a parking assistant (PA) and a forward collision warning system (FCWS), as well as towards a futuristic augmented sound system (FS) proposed for fully automated driving. The respondents were positive towards the PA and FCWS, and rated the willingness to have automated versions of these systems as 3.87 and 3.77, respectively (on a scale from 1 = disagree strongly to 5 = agree strongly). The respondents tolerated the FS (the mean willingness to use it was 3.00 on the same scale). The results showed that among the available response options, the female voice was the most preferred feedback type for takeover requests in highly automated driving, regardless of whether the respondents' country was English-speaking or not. The present results could be useful for designers of automated vehicles and other stakeholders.

  2. Transient Tolerant Automated Control System for the LEDA 75kV Injector

    International Nuclear Information System (INIS)

    Thuot, M.E.; Dalesio, L.R.; Harrington, M.; Hodgkins, D.; Kerstiens, D.M.; Stettler, M.W.; Warren, D.S.; Zaugg, T.; Arvin, A.; Bolt, S.; Richards, M.

    1999-01-01

    The Low-Energy Demonstration Accelerator (LEDA) injector is designed to inject 75-keV, 110-mA proton beams into the LEDA RFQ. The injector operation has been automated to provide long-term, high-availability operation using the Experimental Physics and Industrial Control System (EPICS). Automated recovery from spark-downs demands reliable spark detection and sequence execution by the injector controller. Reliable computer control in the high-energy transient environment required transient suppression and isolation of hundreds of analog and binary data lines connecting the EPICS computer controller to the injector and its power supplies and diagnostics. A transient suppression design based on measured and modeled spark transient parameters provides robust injector operation. This paper describes the control system hardware and software design, implementation and operational performance

  3. Design of fault tolerant control system for steam generator using fuzzy logic

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Myung Ki; Seo, Mi Ro [Korea Electric Power Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    A controller and sensor fault tolerant system for a steam generator is designed with fuzzy logic. The structure of the proposed fault-tolerant redundant system is composed of a supervisor and two fuzzy weighting modulators. The supervisor alternately checks the controller-induced and the sensor-induced performance to identify which part, the controller or a sensor, is faulty. To analyze the controller-induced performance, both the error and the change in error of the system output are chosen as fuzzy variables. The fuzzy logic for the sensor-induced performance uses two variables: the deviation between two sensor outputs and its frequency. The fuzzy weighting modulator generates an output signal that compensates for a faulty input signal. Simulations show that the proposed fault-tolerant control scheme for a steam generator regulates the water level well by suppressing the effect of faults in either the controllers or the sensors. Therefore, by duplicating sensors and controllers with the proposed fault-tolerant scheme, the reliability of both the steam generator control and sensor system and of the power plant as a whole increases even more. 2 refs., 9 figs., 1 tab. (Author)
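    The weighting step can be sketched with simple membership functions: the larger the error and the change in error of the controlled output, the more the modulator shifts its output towards the redundant channel. The membership breakpoints, scaling and example signals below are invented for illustration, not the published controller design.

    def ramp(x, a, b):
        """Membership that rises linearly from 0 at a to 1 at b and then saturates."""
        if x <= a:
            return 0.0
        if x >= b:
            return 1.0
        return (x - a) / (b - a)

    def fault_weight(error, d_error):
        """Degree of belief, in [0, 1], that the primary channel is behaving badly."""
        big_error  = ramp(abs(error),   0.2, 1.0)
        big_change = ramp(abs(d_error), 0.1, 0.5)
        return 0.5 * big_error + 0.5 * big_change

    def modulated_output(primary, backup, error, d_error):
        """Blend the primary and backup signals according to the fuzzy fault weight."""
        w = fault_weight(error, d_error)
        return (1.0 - w) * primary + w * backup

    # Example: normalized water-level error of 0.9, growing by 0.4 per control step.
    print(modulated_output(primary=52.0, backup=48.5, error=0.9, d_error=0.4))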

  4. Automation bias: a systematic review of frequency, effect mediators, and mitigators.

    Science.gov (United States)

    Goddard, Kate; Roudsari, Abdul; Wyatt, Jeremy C

    2012-01-01

    Automation bias (AB)--the tendency to over-rely on automation--has been studied in various academic fields. Clinical decision support systems (CDSS) aim to benefit the clinical decision-making process. Although most research shows overall improved performance with use, there is often a failure to recognize the new errors that CDSS can introduce. With a focus on healthcare, a systematic review of the literature from a variety of research fields has been carried out, assessing the frequency and severity of AB, the effect mediators, and interventions potentially mitigating this effect. This is discussed alongside automation-induced complacency, or insufficient monitoring of automation output. A mix of subject specific and freetext terms around the themes of automation, human-automation interaction, and task performance and error were used to search article databases. Of 13 821 retrieved papers, 74 met the inclusion criteria. User factors such as cognitive style, decision support systems (DSS), and task specific experience mediated AB, as did attitudinal driving factors such as trust and confidence. Environmental mediators included workload, task complexity, and time constraint, which pressurized cognitive resources. Mitigators of AB included implementation factors such as training and emphasizing user accountability, and DSS design factors such as the position of advice on the screen, updated confidence levels attached to DSS output, and the provision of information versus recommendation. By uncovering the mechanisms by which AB operates, this review aims to help optimize the clinical decision-making process for CDSS developers and healthcare practitioners.

  5. Automated Inadvertent Intruder Application

    International Nuclear Information System (INIS)

    Koffman, Larry D.; Lee, Patricia L.; Cook, James R.; Wilhite, Elmer L.

    2008-01-01

    The Environmental Analysis and Performance Modeling group of Savannah River National Laboratory (SRNL) conducts performance assessments of the Savannah River Site (SRS) low-level waste facilities to meet the requirements of DOE Order 435.1. These performance assessments, which result in limits on the amounts of radiological substances that can be placed in the waste disposal facilities, consider numerous potential exposure pathways that could occur in the future. One set of exposure scenarios, known as inadvertent intruder analysis, considers the impact on hypothetical individuals who are assumed to inadvertently intrude onto the waste disposal site. Inadvertent intruder analysis considers three distinct scenarios for exposure referred to as the agriculture scenario, the resident scenario, and the post-drilling scenario. Each of these scenarios has specific exposure pathways that contribute to the overall dose for the scenario. For the inadvertent intruder analysis, the calculation of dose for the exposure pathways is a relatively straightforward algebraic calculation that utilizes dose conversion factors. Prior to 2004, these calculations were performed using an Excel spreadsheet. However, design checks of the spreadsheet calculations revealed that errors could be introduced inadvertently when copying spreadsheet formulas cell by cell and finding these errors was tedious and time consuming. This weakness led to the specification of functional requirements to create a software application that would automate the calculations for inadvertent intruder analysis using a controlled source of input parameters. This software application, named the Automated Inadvertent Intruder Application, has undergone rigorous testing of the internal calculations and meets software QA requirements. The Automated Inadvertent Intruder Application was intended to replace the previous spreadsheet analyses with an automated application that was verified to produce the same calculations and

  6. The contrasting effect of macromolecular crowding on amyloid fibril formation.

    Directory of Open Access Journals (Sweden)

    Qian Ma

    Full Text Available Amyloid fibrils associated with neurodegenerative diseases can be considered biologically relevant failures of cellular quality control mechanisms. It is known that in vivo human Tau protein, human prion protein, and human copper, zinc superoxide dismutase (SOD1) have the tendency to form fibril deposits in a variety of tissues and they are associated with different neurodegenerative diseases, while rabbit prion protein and hen egg white lysozyme do not readily form fibrils and are unlikely to cause neurodegenerative diseases. In this study, we have investigated the contrasting effect of macromolecular crowding on fibril formation of different proteins. As revealed by assays based on thioflavin T binding and turbidity, human Tau fragments, when phosphorylated by glycogen synthase kinase-3β, do not form filaments in the absence of a crowding agent but do form fibrils in the presence of a crowding agent, and the presence of a strong crowding agent dramatically promotes amyloid fibril formation of human prion protein and its two pathogenic mutants E196K and D178N. Such an enhancing effect of macromolecular crowding on fibril formation is also observed for a pathological human SOD1 mutant A4V. On the other hand, rabbit prion protein and hen lysozyme do not form amyloid fibrils when a crowding agent at 300 g/l is used but do form fibrils in the absence of a crowding agent. Furthermore, aggregation of these two proteins is remarkably inhibited by Ficoll 70 and dextran 70 at 200 g/l. We suggest that proteins associated with neurodegenerative diseases are more likely to form amyloid fibrils under crowded conditions than in dilute solutions. By contrast, some of the proteins that are not neurodegenerative disease-associated are unlikely to misfold in crowded physiological environments. A possible explanation for the contrasting effect of macromolecular crowding on these two sets of proteins (amyloidogenic proteins and non-amyloidogenic proteins) has been

  7. The State and Trends of Barcode, RFID, Biometric and Pharmacy Automation Technologies in US Hospitals

    Science.gov (United States)

    Uy, Raymonde Charles Y.; Kury, Fabricio P.; Fontelo, Paul A.

    2015-01-01

    The standard of safe medication practice requires strict observance of the five rights of medication administration: the right patient, drug, time, dose, and route. Despite adherence to these guidelines, medication errors remain a public health concern that has generated health policies and hospital processes that leverage automation and computerization to reduce these errors. Bar code, RFID, biometrics and pharmacy automation technologies have been demonstrated in the literature to decrease the incidence of medication errors by minimizing the human factors involved in the process. Despite evidence suggesting the effectiveness of these technologies, adoption rates and trends vary across hospital systems. The objective of this study is to examine the state and adoption trends of automatic identification and data capture (AIDC) methods and pharmacy automation technologies in U.S. hospitals. A retrospective descriptive analysis of survey data from the HIMSS Analytics® Database was done, demonstrating an optimistic growth in the adoption of these patient safety solutions. PMID:26958264

  8. The State and Trends of Barcode, RFID, Biometric and Pharmacy Automation Technologies in US Hospitals.

    Science.gov (United States)

    Uy, Raymonde Charles Y; Kury, Fabricio P; Fontelo, Paul A

    2015-01-01

    The standard of safe medication practice requires strict observance of the five rights of medication administration: the right patient, drug, time, dose, and route. Despite adherence to these guidelines, medication errors remain a public health concern that has generated health policies and hospital processes that leverage automation and computerization to reduce these errors. Bar code, RFID, biometrics and pharmacy automation technologies have been demonstrated in the literature to decrease the incidence of medication errors by minimizing the human factors involved in the process. Despite evidence suggesting the effectiveness of these technologies, adoption rates and trends vary across hospital systems. The objective of this study is to examine the state and adoption trends of automatic identification and data capture (AIDC) methods and pharmacy automation technologies in U.S. hospitals. A retrospective descriptive analysis of survey data from the HIMSS Analytics® Database was done, demonstrating an optimistic growth in the adoption of these patient safety solutions.

  9. The Stanford Automated Mounter: Enabling High-Throughput Protein Crystal Screening at SSRL

    International Nuclear Information System (INIS)

    Smith, C.A.; Cohen, A.E.

    2009-01-01

    The macromolecular crystallography experiment lends itself perfectly to high-throughput technologies. The initial steps including the expression, purification, and crystallization of protein crystals, along with some of the later steps involving data processing and structure determination have all been automated to the point where some of the last remaining bottlenecks in the process have been crystal mounting, crystal screening, and data collection. At the Stanford Synchrotron Radiation Laboratory, a National User Facility that provides extremely brilliant X-ray photon beams for use in materials science, environmental science, and structural biology research, the incorporation of advanced robotics has enabled crystals to be screened in a true high-throughput fashion, thus dramatically accelerating the final steps. Up to 288 frozen crystals can be mounted by the beamline robot (the Stanford Auto-Mounting System) and screened for diffraction quality in a matter of hours without intervention. The best quality crystals can then be remounted for the collection of complete X-ray diffraction data sets. Furthermore, the entire screening and data collection experiment can be controlled from the experimenter's home laboratory by means of advanced software tools that enable network-based control of the highly automated beamlines.

  10. Polydisulfide Manganese(II) Complexes as Non-Gadolinium Biodegradable Macromolecular MRI Contrast Agents

    Science.gov (United States)

    Ye, Zhen; Jeong, Eun-Kee; Wu, Xueming; Tan, Mingqian; Yin, Shouyu; Lu, Zheng-Rong

    2011-01-01

    Purpose To develop safe and effective manganese(II)-based biodegradable macromolecular MRI contrast agents. Materials and Methods In this study, we synthesized and characterized two polydisulfide manganese(II) complexes, Mn-DTPA cystamine copolymers and Mn-EDTA cystamine copolymers, as new biodegradable macromolecular MRI contrast agents. The contrast enhancement of the two manganese-based contrast agents was evaluated in mice bearing MDA-MB-231 human breast carcinoma xenografts, in comparison with MnCl2. Results The T1 and T2 relaxivities were 4.74 and 10.38 mM−1s−1 per manganese at 3T for Mn-DTPA cystamine copolymers (Mn = 30.50 kDa) and 6.41 and 9.72 mM−1s−1 for Mn-EDTA cystamine copolymers (Mn = 61.80 kDa). Both polydisulfide Mn(II) complexes showed significant liver, myocardium and tumor enhancement. Conclusion The manganese-based polydisulfide contrast agents have the potential to be developed as alternative non-gadolinium contrast agents for MR cancer and myocardium imaging. PMID:22031457

  11. MO-FG-303-04: A Smartphone Application for Automated Mechanical Quality Assurance of Medical Accelerators

    International Nuclear Information System (INIS)

    Kim, H; Lee, H; Choi, K; Ye, S

    2015-01-01

    Purpose: The mechanical quality assurance (QA) of medical accelerators consists of a time-consuming series of procedures. Since most of the procedures are done manually (e.g., checking the gantry rotation angle with the naked eye using a level attached to the gantry), the process has a high potential for human error. To remove the possibility of human error and reduce the procedure duration, we developed a smartphone application for automated mechanical QA. Methods: The preparation for the automated process was done by attaching a smartphone to the gantry facing upward. For the assessment of gantry and collimator angle indications, motion sensors (gyroscope, accelerometer, and magnetic field sensor) embedded in the smartphone were used. For the assessment of the jaw position indicator, cross-hair centering, and optical distance indicator (ODI), an optical image-processing module using a picture taken by the high-resolution camera embedded in the smartphone was implemented. The application was developed with the Android software development kit (SDK) and the OpenCV library. Results: The system accuracies in terms of angle detection error and length detection error were < 0.1° and < 1 mm, respectively. The mean absolute errors for the gantry and collimator rotation angles were 0.03° and 0.041°, respectively. The mean absolute error for the measured light field size was 0.067 cm. Conclusion: The automated system we developed can be used for the mechanical QA of medical accelerators with proven accuracy. For more convenient use of this application, a wireless communication module is under development. This system has strong potential for the automation of other QA procedures such as light/radiation field coincidence and couch translations/rotations.
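
    As a rough illustration of the angle-measurement part of this record, the sketch below derives a tilt angle from a smartphone's gravity (accelerometer) reading; the axis convention and zero-angle reference are assumptions, not the published implementation.

```python
# Illustrative sketch of estimating a rotation angle from a smartphone
# accelerometer's gravity vector. The device y-z plane is assumed to rotate
# with the gantry; this is not the authors' code.
import math

def tilt_angle_deg(ay, az):
    """Angle of the gravity vector in the device y-z plane, in degrees."""
    return math.degrees(math.atan2(ay, az))

def gantry_angle_error(indicated_deg, ay, az, zero_offset_deg=0.0):
    """Difference between the machine's indicated angle and the sensed angle."""
    sensed = tilt_angle_deg(ay, az) - zero_offset_deg
    return indicated_deg - sensed

# Example: gravity reading (m/s^2) for a gantry near 90 degrees.
print(round(gantry_angle_error(90.0, ay=9.79, az=0.02), 2))
```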

  12. MO-FG-303-04: A Smartphone Application for Automated Mechanical Quality Assurance of Medical Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Kim, H [Interdisciplinary Program in Radiation applied Life Science, College of Medicine, Seoul National University, Seoul (Korea, Republic of); Lee, H; Choi, K [Program in Biomedical Radiation Sciences, Department of Transdisciplinary Studies, Graduate School of Convergence Science and Technology, Seoul National University, Seoul (Korea, Republic of); Ye, S [Interdisciplinary Program in Radiation applied Life Science, College of Medicine, Seoul National University, Seoul (Korea, Republic of); Program in Biomedical Radiation Sciences, Department of Transdisciplinary Studies, Graduate School of Convergence Science and Technology, Seoul National University, Seoul (Korea, Republic of); Department of Radiation Oncology, Seoul National University Hospital, Seoul (Korea, Republic of)

    2015-06-15

    Purpose: The mechanical quality assurance (QA) of medical accelerators consists of a time-consuming series of procedures. Since most of the procedures are done manually (e.g., checking the gantry rotation angle with the naked eye using a level attached to the gantry), the process has a high potential for human error. To remove the possibility of human error and reduce the procedure duration, we developed a smartphone application for automated mechanical QA. Methods: The preparation for the automated process was done by attaching a smartphone to the gantry facing upward. For the assessment of gantry and collimator angle indications, motion sensors (gyroscope, accelerometer, and magnetic field sensor) embedded in the smartphone were used. For the assessment of the jaw position indicator, cross-hair centering, and optical distance indicator (ODI), an optical image-processing module using a picture taken by the high-resolution camera embedded in the smartphone was implemented. The application was developed with the Android software development kit (SDK) and the OpenCV library. Results: The system accuracies in terms of angle detection error and length detection error were < 0.1° and < 1 mm, respectively. The mean absolute errors for the gantry and collimator rotation angles were 0.03° and 0.041°, respectively. The mean absolute error for the measured light field size was 0.067 cm. Conclusion: The automated system we developed can be used for the mechanical QA of medical accelerators with proven accuracy. For more convenient use of this application, a wireless communication module is under development. This system has strong potential for the automation of other QA procedures such as light/radiation field coincidence and couch translations/rotations.

  13. Using microwave Doppler radar in automated manufacturing applications

    Science.gov (United States)

    Smith, Gregory C.

    Since the beginning of the Industrial Revolution, manufacturers worldwide have used automation to improve productivity, gain market share, and meet growing or changing consumer demand for manufactured products. To stimulate further industrial productivity, manufacturers need more advanced automation technologies: "smart" part handling systems, automated assembly machines, CNC machine tools, and industrial robots that use new sensor technologies, advanced control systems, and intelligent decision-making algorithms to "see," "hear," "feel," and "think" at the levels needed to handle complex manufacturing tasks without human intervention. The investigator's dissertation offers three methods that could help make "smart" CNC machine tools and industrial robots possible: (1) A method for detecting acoustic emission using a microwave Doppler radar detector, (2) A method for detecting tool wear on a CNC lathe using a Doppler radar detector, and (3) An online non-contact method for detecting industrial robot position errors using a microwave Doppler radar motion detector. The dissertation studies indicate that microwave Doppler radar could be quite useful in automated manufacturing applications. In particular, the methods developed may help solve two difficult problems that hinder further progress in automating manufacturing processes: (1) Automating metal-cutting operations on CNC machine tools by providing a reliable non-contact method for detecting tool wear, and (2) Fully automating robotic manufacturing tasks by providing a reliable low-cost non-contact method for detecting on-line position errors. In addition, the studies offer a general non-contact method for detecting acoustic emission that may be useful in many other manufacturing and non-manufacturing areas, as well (e.g., monitoring and nondestructively testing structures, materials, manufacturing processes, and devices). By advancing the state of the art in manufacturing automation, the studies may help

  14. In-vacuum long-wavelength macromolecular crystallography.

    Science.gov (United States)

    Wagner, Armin; Duman, Ramona; Henderson, Keith; Mykhaylyk, Vitaliy

    2016-03-01

    Structure solution based on the weak anomalous signal from native (protein and DNA) crystals is increasingly being attempted as part of synchrotron experiments. Maximizing the measurable anomalous signal by collecting diffraction data at longer wavelengths presents a series of technical challenges caused by the increased absorption of X-rays and larger diffraction angles. A new beamline at Diamond Light Source has been built specifically for collecting data at wavelengths beyond the capability of other synchrotron macromolecular crystallography beamlines. Here, the theoretical considerations in support of the long-wavelength beamline are outlined and the in-vacuum design of the endstation is discussed, as well as other hardware features aimed at enhancing the accuracy of the diffraction data. The first commissioning results, representing the first in-vacuum protein structure solution, demonstrate the promising potential of the beamline.

  15. A framework for software fault tolerance in real-time systems

    Science.gov (United States)

    Anderson, T.; Knight, J. C.

    1983-01-01

    A classification scheme for errors and a technique for the provision of software fault tolerance in cyclic real-time systems are presented. The technique requires that the process structure of a system be represented by a synchronization graph, which is used by an executive as a specification of the relative times at which processes will communicate during execution. Communication between concurrent processes is severely limited and may only take place between processes engaged in an exchange. A history of error occurrences is maintained by an error handler. When an error is detected, the error handler classifies it using the error history information and then initiates appropriate recovery action.

  16. Comparison of known food weights with image-based portion-size automated estimation and adolescents' self-reported portion size.

    Science.gov (United States)

    Lee, Christina D; Chae, Junghoon; Schap, TusaRebecca E; Kerr, Deborah A; Delp, Edward J; Ebert, David S; Boushey, Carol J

    2012-03-01

    Diet is a critical element of diabetes self-management. An emerging area of research is the use of images for dietary records using mobile telephones with embedded cameras. These tools are being designed to reduce user burden and to improve the accuracy of portion-size estimation through automation. The objectives of this study were (1) to assess the error of automatically determined portion weights compared to known portion weights of foods and (2) to compare the error between automation and human estimation. Adolescents (n = 15) captured images of their eating occasions over a 24 h period. All foods and beverages served were weighed. Adolescents self-reported portion sizes for one meal. Image analysis was used to estimate portion weights. Data analysis compared known weights, automated weights, and self-reported portions. For the 19 foods, the mean ratio of automated weight estimate to known weight ranged from 0.89 to 4.61, and 9 foods were within 0.80 to 1.20. The largest error was for lettuce and the most accurate was strawberry jam. The children were fairly accurate with portion estimates for two foods (sausage links, toast) using one type of estimation aid and two foods (sausage links, scrambled eggs) using another aid. The automated method was fairly accurate for two foods (sausage links, jam); however, the 95% confidence intervals for the automated estimates were consistently narrower than those for the human estimates. The ability of humans to estimate portion sizes of foods remains a problem and a perceived burden. Errors in automated portion-size estimation can be systematically addressed while minimizing the burden on people. Future applications that take over the burden of these processes may translate to better diabetes self-management. © 2012 Diabetes Technology Society.

  17. An automated portal verification system for the tangential breast portal field

    International Nuclear Information System (INIS)

    Yin, F.-F.; Lai, W.; Chen, C. W.; Nelson, D. F.

    1995-01-01

    Hough transform was used to detect and quantify the field edge. The anatomical landmarks (skin line and chest wall) were then extracted using both a histogram equalization method and a Canny edge detector in different subregions. The resulting parameters, such as relative shift and rotation from the matching procedure, were related to the patient setup variations and were used as the basis for positioning correction suggestions. Results: The automated portal verification technique was tested using over 100 clinical tangential breast portal images. Both field widths and collimator angles were calculated and compared to the machine setup parameters. The computer-identified anatomical features were evaluated by an expert oncologist by comparing computer-identified edge lines to manual drawings. Results indicated that the computerized algorithm was able to detect the setup field size with an error of less than 1.5 mm and the collimator angle with an error of less than one degree compared to the original field setup. Note that these are the tolerances of the treatment machine. The radiation oncologist rated the computer-extracted features as absolutely acceptable, except for 10% of the chest walls, which were rated acceptable. The subjective evaluation indicated that the computer-identified features were reliable enough for potential clinical applications. The Chamfer matching method matched features in two images with an accuracy of within 2 mm. Conclusions: A fully automated portal verification system has been developed for the radiation therapy of breast cancer. With newly developed hierarchical region feature processing and feature-weighted Chamfer matching techniques, the treatment port for the tangential breast field can be automatically verified. The technique we developed can also be used for the development of automated portal verification systems for other treatment sites. Our preliminary results showed potential for clinical applications.
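
    The field-edge step described above can be sketched with standard OpenCV calls; the file name, thresholds and number of reported lines below are assumptions, not the authors' parameters.

```python
# Hedged sketch of field-edge detection with histogram equalization,
# Canny edges and a Hough line search, in the spirit of the record above.
import cv2
import numpy as np

img = cv2.imread("portal_image.png", cv2.IMREAD_GRAYSCALE)  # path is an assumption
assert img is not None, "portal image not found"

eq = cv2.equalizeHist(img)          # enhance anatomy contrast
edges = cv2.Canny(eq, 50, 150)      # strong edges; thresholds are illustrative

# Straight field edges appear as dominant lines in Hough space.
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=200)
if lines is not None:
    for rho, theta in lines[:4, 0]:
        # theta relative to the image axes approximates the collimator angle.
        print(f"edge at rho={rho:.1f} px, angle={np.degrees(theta):.1f} deg")
```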

  18. Evaluation of damping estimates by automated Operational Modal Analysis for offshore wind turbine tower vibrations

    DEFF Research Database (Denmark)

    Bajrić, Anela; Høgsberg, Jan Becker; Rüdinger, Finn

    2018-01-01

    Reliable predictions of the lifetime of offshore wind turbine structures are influenced by the limited knowledge concerning the inherent level of damping during downtime. Error measures and an automated procedure for covariance-driven Operational Modal Analysis (OMA) techniques have been proposed.... In order to obtain algorithm-independent answers, three identification techniques are compared: the Eigensystem Realization Algorithm (ERA), covariance-driven Stochastic Subspace Identification (COV-SSI) and the Enhanced Frequency Domain Decomposition (EFDD). Discrepancies between automated identification... techniques are discussed and illustrated with respect to signal noise, measurement time, vibration amplitudes and stationarity of the ambient response. The best bias-variance error trade-off of damping estimates is obtained by the COV-SSI. The proposed automated procedure is validated by real vibration...
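
    For orientation only: the snippet below estimates a damping ratio from a simulated free-decay response via the logarithmic decrement, a much simpler approach than the ERA, COV-SSI and EFDD techniques compared in this record; all signal parameters are made up.

```python
# Toy damping estimate by logarithmic decrement on a synthetic free decay.
# This is an illustration of damping identification, not an OMA technique.
import numpy as np

fs, f_n, zeta_true = 100.0, 0.3, 0.01          # sample rate [Hz], frequency [Hz], damping
t = np.arange(0, 600, 1 / fs)
x = np.exp(-zeta_true * 2 * np.pi * f_n * t) * np.cos(2 * np.pi * f_n * t)

# Local maxima = one positive peak per cycle of the damped oscillation.
is_peak = (x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])
peaks = x[1:-1][is_peak]

delta = np.mean(np.log(peaks[:-1] / peaks[1:]))        # logarithmic decrement
zeta_est = delta / np.sqrt(4 * np.pi**2 + delta**2)
print(f"estimated damping ratio: {zeta_est:.4f} (true {zeta_true})")
```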

  19. AUTOMATED INADVERTENT INTRUDER APPLICATION

    International Nuclear Information System (INIS)

    Koffman, L.D.; Lee, P.L.; Cook, J.R.; Wilhite, E.L.

    2007-01-01

    The Environmental Analysis and Performance Modeling group of Savannah River National Laboratory (SRNL) conducts performance assessments of the Savannah River Site (SRS) low-level waste facilities to meet the requirements of DOE Order 435.1. These performance assessments, which result in limits on the amounts of radiological substances that can be placed in the waste disposal facilities, consider numerous potential exposure pathways that could occur in the future. One set of exposure scenarios, known as inadvertent intruder analysis, considers the impact on hypothetical individuals who are assumed to inadvertently intrude onto the waste disposal site. Inadvertent intruder analysis considers three distinct scenarios for exposure referred to as the agriculture scenario, the resident scenario, and the post-drilling scenario. Each of these scenarios has specific exposure pathways that contribute to the overall dose for the scenario. For the inadvertent intruder analysis, the calculation of dose for the exposure pathways is a relatively straightforward algebraic calculation that utilizes dose conversion factors. Prior to 2004, these calculations were performed using an Excel spreadsheet. However, design checks of the spreadsheet calculations revealed that errors could be introduced inadvertently when copying spreadsheet formulas cell by cell and finding these errors was tedious and time consuming. This weakness led to the specification of functional requirements to create a software application that would automate the calculations for inadvertent intruder analysis using a controlled source of input parameters. This software application, named the Automated Inadvertent Intruder Application, has undergone rigorous testing of the internal calculations and meets software QA requirements. The Automated Inadvertent Intruder Application was intended to replace the previous spreadsheet analyses with an automated application that was verified to produce the same calculations and

  20. SU-C-BRD-06: Sensitivity Study of An Automated System to Acquire and Analyze EPID Exit Dose Images

    Energy Technology Data Exchange (ETDEWEB)

    Olch, A; Zhuang, A [University of Southern California, Los Angeles, CA (United States)

    2015-06-15

    Purpose: The dosimetric consequences of errors in patient setup or beam delivery and of anatomical changes are not readily known. A new product, PerFRACTION (Sun Nuclear Corporation), is designed to detect these errors by comparing the EPID exit dose image from each field of each fraction to those from baseline fraction images. This work investigates the sensitivity of PerFRACTION in detecting induced errors in a variety of realistic scenarios. Methods: Eight plans were created mimicking potential delivery or setup errors. The plans consisted of a nominal field and the field with an induced error. These were used to irradiate the EPID, simulating multiple fractions with and without the error. Integrated EPID images were acquired in clinical mode and saved in ARIA. PerFRACTION automatically pulls the images into its database and performs the user-defined comparison. In some cases, images were manually pushed to PerFRACTION. We varied the distance-to-agreement or dose tolerance until PerFRACTION showed failing pixels in the affected region and recorded the values. We induced errors of 1 mm and greater in jaw, MLC, and couch position, a 2 degree collimator rotation (patient yaw), and 0.5% to 1.5% in machine output. Both static and arc fields with the rails in or out were also acquired and compared. Results: PerFRACTION detected position errors of the jaws, MLC, and couch with an accuracy of better than 0.5 mm, and collimator and gantry errors within 0.2 degrees. PerFRACTION detected a machine output error within 0.2% and detected the change in rail position. Conclusion: A new automated system for monitoring daily treatments for machine or patient variations from the first fraction using integrated EPID images was found to be sensitive enough to detect small positional, angular, and dosimetric errors within 0.5 mm, 0.2 degrees, and 0.2%, respectively. Sun Nuclear Corporation has provided a software license for the product described.
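
    A hedged sketch of the underlying comparison idea (flagging pixels whose per-fraction exit image deviates from the baseline by more than a dose tolerance); the array shapes, 2% tolerance and low-dose cutoff are assumptions, not PerFRACTION's actual algorithm.

```python
# Illustrative exit-dose image comparison: flag pixels whose fractional
# image differs from the baseline by more than a relative dose tolerance.
import numpy as np

def failing_pixels(baseline, fraction, dose_tol=0.02, low_dose_cutoff=0.05):
    """Boolean mask of pixels exceeding the relative dose tolerance."""
    peak = baseline.max()
    roi = baseline > low_dose_cutoff * peak          # ignore near-background pixels
    rel_diff = np.abs(fraction - baseline) / peak    # globally normalized difference
    return roi & (rel_diff > dose_tol)

# Toy example: a one-pixel field shift shows up as a band of failing pixels.
baseline = np.zeros((100, 100))
baseline[20:80, 20:80] = 1.0
shifted = np.roll(baseline, 1, axis=1)
print(int(failing_pixels(baseline, shifted).sum()), "failing pixels")
```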

  1. Dependence of fluence errors in dynamic IMRT on leaf-positional errors varying with time and leaf number

    International Nuclear Information System (INIS)

    Zygmanski, Piotr; Kung, Jong H.; Jiang, Steve B.; Chin, Lee

    2003-01-01

    In d-MLC based IMRT, leaves move along a trajectory that lies within a user-defined tolerance (TOL) about the ideal trajectory specified in a d-MLC sequence file. The MLC controller measures leaf positions multiple times per second and corrects them if they deviate from ideal positions by a value greater than TOL. The magnitude of leaf-positional errors resulting from finite mechanical precision depends on the performance of the MLC motors executing leaf motions and is generally larger if leaves are forced to move at higher speeds. The maximum value of leaf-positional errors can be limited by decreasing TOL. However, due to the inherent time delay in the MLC controller, this may not happen at all times. Furthermore, decreasing the leaf tolerance results in a larger number of beam hold-offs, which, in turn, leads to a longer delivery time and, paradoxically, to higher chances of leaf-positional errors (≤TOL). On the other hand, the magnitude of leaf-positional errors depends on the complexity of the fluence map to be delivered. Recently, it has been shown that it is possible to determine the actual distribution of leaf-positional errors either by imaging of moving MLC apertures with a digital imager or by analysis of an MLC log file saved by the MLC controller. This leads to an important question: what is the relation between the distribution of leaf-positional errors and fluence errors? In this work, we introduce an analytical method to determine this relation in dynamic IMRT delivery. We model MLC errors as random leaf-positional (RLP) errors described by a truncated normal distribution defined by two characteristic parameters: a standard deviation σ and a cut-off value Δx₀ (Δx₀ ∼ TOL). We quantify fluence errors for two cases: (i) Δx₀ ≫ σ (unrestricted normal distribution) and (ii) Δx₀ ≪ σ (Δx₀-limited normal distribution). We show that the average fluence error of an IMRT field is proportional to (i) σ/ALPO and (ii) Δx₀/ALPO, respectively, where ALPO is the average leaf-pair opening.
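
    The scaling stated above can be checked numerically with a toy sliding-window model; the segment count, σ, cut-off and ALPO values below are arbitrary, and the delivery model is deliberately simplistic.

```python
# Hedged numerical check of the scaling above: for random leaf-positional
# (RLP) errors from a truncated normal distribution, the mean relative
# fluence error of a constant-gap sliding window scales roughly as sigma/ALPO.
import numpy as np

rng = np.random.default_rng(0)

def mean_fluence_error(sigma, cutoff, alpo, n_segments=20000):
    """Mean relative fluence error for a toy constant-gap sliding window."""
    def rlp(n):
        # Truncated normal leaf-positional errors (cm).
        return np.clip(rng.normal(0.0, sigma, n), -cutoff, cutoff)
    gap_error = rlp(n_segments) - rlp(n_segments)   # error in the A-B leaf gap
    return np.mean(np.abs(gap_error)) / alpo

for alpo in (1.0, 2.0, 4.0):                        # average leaf-pair opening (cm)
    print(alpo, round(mean_fluence_error(sigma=0.05, cutoff=0.2, alpo=alpo), 4))
```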

  2. Workshop on algorithms for macromolecular modeling. Final project report, June 1, 1994--May 31, 1995

    Energy Technology Data Exchange (ETDEWEB)

    Leimkuhler, B.; Hermans, J.; Skeel, R.D.

    1995-07-01

    A workshop was held on algorithms and parallel implementations for macromolecular dynamics, protein folding, and structural refinement. This document contains abstracts and brief reports from that workshop.

  3. MR lymphography with macromolecular Gd-DTPA compounds

    International Nuclear Information System (INIS)

    Hamm, B.; Wagner, S.; Branding, G.; Taupitz, M.; Wolf, K.J.

    1990-01-01

    This paper investigates the suitability of macromolecular Gd-DTPA compounds as signal-enhancing lymphographic agents in MR imaging. Two Gd-DTPA polylysine compounds and Gd-DTPA albumin, with molecular weights of 48,000, 170,000, and 87,000 daltons, respectively, were tested in rabbits at gadolinium doses of 5 and 15 μmol per animal. Three animals were examined at each dose with T1-weighted sequences. The iliac lymph nodes were imaged prior to and during unilateral endolymphatic infusion into a femoral lymph vessel as well as over a period of 2 hours thereafter. All contrast media showed a homogeneous and pronounced signal enhancement in the lymph nodes during infusion at both doses.

  4. Quantifying brain tissue volume in multiple sclerosis with automated lesion segmentation and filling

    Directory of Open Access Journals (Sweden)

    Sergi Valverde

    2015-01-01

    Full Text Available Lesion filling has been successfully applied to reduce the effect of hypo-intense T1-w Multiple Sclerosis (MS) lesions on automatic brain tissue segmentation. However, a study of fully automated pipelines incorporating lesion segmentation and lesion filling on tissue volume analysis has not yet been performed. Here, we analyzed the percentage of error introduced by automating the lesion segmentation and filling processes in the tissue segmentation of 70 clinically isolated syndrome patient images. First, images were processed using the LST and SLS toolkits with different pipeline combinations that differed in either automated or manual lesion segmentation, and lesion filling or masking out lesions. Then, images processed following each of the pipelines were segmented into gray matter (GM) and white matter (WM) using SPM8, and compared with the same images where expert lesion annotations were filled before segmentation. Our results showed that fully automated lesion segmentation and filling pipelines significantly reduced the percentage of error in GM and WM volume on images of MS patients, and performed similarly to the images where expert lesion annotations were masked before segmentation. In all the pipelines, the amount of misclassified lesion voxels was the main cause of the observed error in GM and WM volume. However, the percentage of error was significantly lower when automatically estimated lesions were filled and not masked before segmentation. These results are relevant and suggest that the LST and SLS toolboxes allow accurate brain tissue volume measurements without any kind of manual intervention, which can be convenient not only in terms of time and economic costs, but also to avoid the inherent intra/inter-rater variability between manual annotations.

  5. Time reduction and automation of routine planning activities through the use of macros

    International Nuclear Information System (INIS)

    Alaman, C.; Perez-Alija, J.; Herrero, C.; Real, C. del; Osorio, J. L.; Almansa, J.

    2011-01-01

    The use of macros in the Adac Pinnacle3 scheduler automates much of the routine activity in the planning process, from display options and beam placement to, among other possibilities, the systematic naming of beams and the export of physical and clinical dosimetry. This automation reduces the time associated with the planning process and reduces errors.

  6. Aging changes of macromolecular synthesis in the mitochondria of mouse hepatocytes as revealed by microscopic radioautography

    Energy Technology Data Exchange (ETDEWEB)

    Nagata, Tetsuji [Shinshu University, Matsumoto (Japan). Dept. of Anatomy and Cell Biology

    2007-07-01

    This mini-review reports aging changes of macromolecular synthesis in the mitochondria of mouse hepatocytes. We have observed macromolecular synthesis, i.e. of DNA, RNA and proteins, in the mitochondria of various mammalian cells by means of an electron microscopic radioautography technique developed in our laboratory. The number of mitochondria per cell and the number of mitochondria per cell labeled with 3H-thymidine, 3H-uridine and 3H-leucine (precursors for DNA, RNA and proteins, respectively) were counted, and the labeling indices at various ages, from fetal and early postnatal days through several months to 1 and 2 years in senescence, were calculated; these showed variations due to aging. (author)

  7. WIFIP: a web-based user interface for automated synchrotron beamlines.

    Science.gov (United States)

    Sallaz-Damaz, Yoann; Ferrer, Jean Luc

    2017-09-01

    The beamline control software, through the associated graphical user interface (GUI), is the user's access point to the experiment, interacting with synchrotron beamline components and providing automated routines. FIP, the French beamline for the Investigation of Proteins, is a highly automated macromolecular crystallography (MX) beamline at the European Synchrotron Radiation Facility. On such a beamline, a significant number of users choose to control their experiment remotely. This is often performed with limited bandwidth and from a large variety of computers and operating systems. Furthermore, this has to be possible in a rapidly evolving experimental environment, where new developments have to be easily integrated. To face these challenges, light, platform-independent control software and an associated GUI are required. Here, WIFIP, a web-based user interface developed at FIP, is described. Beyond being the present FIP control interface, WIFIP is also a proof of concept for future MX control software.

  8. On Round-off Error for Adaptive Finite Element Methods

    KAUST Repository

    Alvarez-Aramberri, J.

    2012-06-02

    Round-off error analysis has historically been studied by analyzing the condition number of the associated matrix. By controlling the size of the condition number, it is possible to guarantee a prescribed round-off error tolerance. However, the opposite is not true, since it is possible to have a system of linear equations with an arbitrarily large condition number that still delivers a small round-off error. In this paper, we perform a round-off error analysis in the context of 1D and 2D hp-adaptive Finite Element simulations for the case of the Poisson equation. We conclude that boundary conditions play a fundamental role in the round-off error analysis, especially for the so-called 'radical meshes'. Moreover, we illustrate the importance of the right-hand side when analyzing the round-off error, which is independent of the condition number of the matrix.
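
    A small numerical illustration of the point that a huge condition number does not force a large round-off error; the diagonal test matrix below is a made-up example, not one of the paper's hp-adaptive systems.

```python
# Huge condition number, yet each solution component is computed to full
# accuracy because the system is diagonal: the round-off error stays tiny.
import numpy as np

d = np.array([1.0, 1e-12, 1e12])
A = np.diag(d)
print(f"condition number: {np.linalg.cond(A):.2e}")   # about 1e24

x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true
x = np.linalg.solve(A, b)
rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative error: {rel_err:.2e}")                # near machine precision
```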

  9. On Round-off Error for Adaptive Finite Element Methods

    KAUST Repository

    Alvarez-Aramberri, J.; Pardo, David; Paszynski, Maciej; Collier, Nathan; Dalcin, Lisandro; Calo, Victor M.

    2012-01-01

    Round-off error analysis has historically been studied by analyzing the condition number of the associated matrix. By controlling the size of the condition number, it is possible to guarantee a prescribed round-off error tolerance. However, the opposite is not true, since it is possible to have a system of linear equations with an arbitrarily large condition number that still delivers a small round-off error. In this paper, we perform a round-off error analysis in the context of 1D and 2D hp-adaptive Finite Element simulations for the case of the Poisson equation. We conclude that boundary conditions play a fundamental role in the round-off error analysis, especially for the so-called 'radical meshes'. Moreover, we illustrate the importance of the right-hand side when analyzing the round-off error, which is independent of the condition number of the matrix.

  10. The Design of Fault Tolerant Quantum Dot Cellular Automata Based Logic

    Science.gov (United States)

    Armstrong, C. Duane; Humphreys, William M.; Fijany, Amir

    2002-01-01

    As transistor geometries are reduced, quantum effects begin to dominate device performance. At some point, transistors cease to have the properties that make them useful computational components. New computing elements must be developed in order to keep pace with Moore's Law. Quantum dot cellular automata (QCA) represent an alternative paradigm to transistor-based logic. QCA architectures that are robust to manufacturing tolerances and defects must be developed. We are developing software that allows the exploration of fault tolerant QCA gate architectures by automating the specification, simulation, analysis and documentation processes.

  11. Computerized automated remote inspection system

    International Nuclear Information System (INIS)

    The automated inspection system utilizes a computer to control the location of the ultrasonic transducer, the actual inspection process, the display of the data, and the storage of the data on IBM magnetic tape. This automated inspection equipment provides two major advantages. First, it provides a cost savings, because of the reduced inspection time, made possible by the automation of the data acquisition, processing, and storage equipment. This reduced inspection time is also made possible by a computerized data evaluation aid which speeds data interpretation. In addition, the computer control of the transducer location drive allows the exact duplication of a previously located position or flaw. The second major advantage is that the use of automated inspection equipment also allows a higher-quality inspection, because of the automated data acquisition, processing, and storage. This storage of data, in accurate digital form on IBM magnetic tape, for example, facilitates retrieval for comparison with previous inspection data. The equipment provides a multiplicity of scan data which will provide statistical information on any questionable volume or flaw. An automatic alarm for location of all reportable flaws reduces the probability of operator error. This system has the ability to present data on a cathode ray tube as numerical information, a three-dimensional picture, or ''hard-copy'' sheet. One important advantage of this system is the ability to store large amounts of data in compact magnetic tape reels

  12. Plant automation-application to SBWR project

    International Nuclear Information System (INIS)

    Rodriguez Rodriguez, C.

    1995-01-01

    In accordance with the requirements set out in the URD (Utility Requirements Document) issued by EPRI (the Electric Power Research Institute), the design of new reactors, whether evolutionary or passive, shall take into account the systematic automation of functions relating to normal plant operation. The objectives established are to: simplify operator-performed tasks; reduce the risk of operator error by considering human factors in the allocation of tasks; improve man-machine reliability; and increase the availability of the plant. In previous designs, automation has only been considered from the point of view of compliance with regulatory requirements for safety-related systems or, in isolated cases, as a method of protecting the investment where there is a risk of damage to main equipment. The use of digital technology has kept the systematic pursuit of such objectives in the design of automated systems for processes associated with normal plant operation (startup, load following, normal shutdown, etc.) from being excessively complex and therefore costly to undertake. This paper describes how the automation of the aforementioned normal plant operation activities has been approached in General Electric's SBWR (Simplified Boiling Water Reactor) design. (Author)

  13. The role of automation and artificial intelligence

    Science.gov (United States)

    Schappell, R. T.

    1983-07-01

    Consideration is given to emerging technologies that are not currently in common use, yet will be mature enough for implementation in a space station. Artificial intelligence (AI) will permit more autonomous operation and improve the man-machine interfaces. Technology goals include the development of expert systems, a natural language query system, automated planning systems, and AI image understanding systems. Intelligent robots and teleoperators will be needed, together with improved sensory systems for the robotics, housekeeping, vehicle control, and spacecraft housekeeping systems. Finally, NASA is developing the ROBSIM computer program to evaluate level of automation, perform parametric studies and error analyses, optimize trajectories and control systems, and assess AI technology.

  14. Biomek Cell Workstation: A Variable System for Automated Cell Cultivation.

    Science.gov (United States)

    Lehmann, R; Severitt, J C; Roddelkopf, T; Junginger, S; Thurow, K

    2016-06-01

    Automated cell cultivation is an important tool for simplifying routine laboratory work. Automated methods are independent of the skill level and daily constitution of laboratory staff, and offer constant quality and performance. The Biomek Cell Workstation was configured as a flexible and compatible system. The modified Biomek Cell Workstation enables the cultivation of adherent and suspension cells. Until now, no commercially available system has enabled the automated handling of both types of cells in one system. In particular, the automated cultivation of suspension cells in this form has not been published. The cell counts and viabilities were nonsignificantly decreased for cells cultivated in AutoFlasks in automated handling. Proliferation bioscreening with the WST-1 assay showed a nonsignificantly lower proliferation of automatically disseminated cells compared with manual handling, generally associated with a lower standard error. The disseminated suspension cell lines showed differently pronounced proliferation, in descending order starting with Jurkat cells, followed by SEM and Molt4 cells, with RS4 cells having the lowest proliferation. In this respect, we successfully disseminated and screened suspension cells in an automated way. The automated cultivation and dissemination of a variety of suspension cells can replace the manual method. © 2015 Society for Laboratory Automation and Screening.

  15. The impact of a closed‐loop electronic prescribing and administration system on prescribing errors, administration errors and staff time: a before‐and‐after study

    Science.gov (United States)

    Franklin, Bryony Dean; O'Grady, Kara; Donyai, Parastou; Jacklin, Ann; Barber, Nick

    2007-01-01

    Objectives To assess the impact of a closed-loop electronic prescribing, automated dispensing, barcode patient identification and electronic medication administration record (EMAR) system on prescribing and administration errors, confirmation of patient identity before administration, and staff time. Design, setting and participants Before-and-after study in a surgical ward of a teaching hospital, involving patients and staff of that ward. Intervention Closed-loop electronic prescribing, automated dispensing, barcode patient identification and EMAR system. Main outcome measures Percentage of new medication orders with a prescribing error, percentage of doses with medication administration errors (MAEs) and percentage given without checking patient identity. Time spent prescribing and providing a ward pharmacy service. Nursing time on medication tasks. Results Prescribing errors were identified in 3.8% of 2450 medication orders pre-intervention and 2.0% of 2353 orders afterwards. Medical staff required 15 s to prescribe a regular inpatient drug pre-intervention and 39 s afterwards (p = 0.03; t test). Time spent providing a ward pharmacy service increased from 68 min to 98 min each weekday (p = 0.001; t test); 22% of drug charts were unavailable pre-intervention. Time per drug administration round decreased from 50 min to 40 min (p = 0.006; t test); nursing time on medication tasks outside of drug rounds increased from 21.1% to 28.7% (p = 0.006; χ2 test). Conclusions A closed-loop electronic prescribing, dispensing and barcode patient identification system reduced prescribing errors and MAEs, and increased confirmation of patient identity before administration. Time spent on medication-related tasks increased. PMID:17693676
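
    As a worked check of the headline prescribing-error comparison (3.8% of 2450 orders versus 2.0% of 2353), the snippet below runs a chi-square test on a reconstructed 2x2 table; the counts are rounded from the reported percentages, so this is an illustration rather than the authors' analysis.

```python
# Chi-square test on the approximate prescribing-error counts implied by the
# percentages reported above. Counts are reconstructed and therefore rounded.
import numpy as np
from scipy.stats import chi2_contingency

pre_total, post_total = 2450, 2353
pre_err = round(0.038 * pre_total)      # ~93 orders with a prescribing error
post_err = round(0.020 * post_total)    # ~47 orders

table = np.array([[pre_err, pre_total - pre_err],
                  [post_err, post_total - post_err]])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.4f}")
```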

  16. Impact of Automation on Drivers' Performance in Agricultural Semi-Autonomous Vehicles.

    Science.gov (United States)

    Bashiri, B; Mann, D D

    2015-04-01

    Drivers' inadequate mental workload has been reported as one of the negative effects of driving assistant systems and in-vehicle automation. The increasing trend of automation in agricultural vehicles raises some concerns about drivers' mental workload in such vehicles. Thus, a human factors perspective is needed to identify the consequences of such automated systems. In this simulator study, the effects of vehicle steering task automation (VSTA) and implement control and monitoring task automation (ICMTA) were investigated using a tractor-air seeder system as a case study. Two performance parameters (reaction time and accuracy of actions) were measured to assess drivers' perceived mental workload. Experiments were conducted using the tractor driving simulator (TDS) located in the Agricultural Ergonomics Laboratory at the University of Manitoba. Study participants were university students with tractor driving experience. According to the results, reaction time and number of errors made by drivers both decreased as the automation level increased. Correlations were found among performance parameters and subjective mental workload reported by the drivers.

  17. Automated complex spectra processing of actinide α-radiation

    International Nuclear Information System (INIS)

    Anichenkov, S.V.; Popov, Yu.S.; Tselishchev, I.V.; Mishenev, V.B.; Timofeev, G.A.

    1989-01-01

    Previously described algorithms for the automated processing of complex α-spectra of actinides, implemented on an Ehlektronika D3-28 computer connected to an ICA-070 multichannel amplitude pulse analyzer, were realized. The developed program makes it possible to calculate peak intensities and the relative isotope content, to perform the energy calibration of spectra, to calculate the peak centre of gravity and the energy resolution, and to perform integral counting over a selected part of the spectrum. The error of the automated processing method depends on the degree of spectrum complexity and lies within the limits of 1-12%. 8 refs.; 4 figs.; 2 tabs

  18. A quantum byte with 10⁻⁴ crosstalk for fault-tolerant quantum computing

    Energy Technology Data Exchange (ETDEWEB)

    Piltz, Christian; Sriarunothai, Theeraphot; Varon, Andres; Wunderlich, Christof [Department Physik, Universitaet Siegen, 57068 Siegen (Germany)

    2014-07-01

    A prerequisite for fault-tolerant and thus scalable operation of a quantum computer is the use of quantum error correction protocols. Such protocols come with a maximum tolerable gate error, and there is consensus that an error of order 10⁻⁴ is an important threshold. This threshold was already breached for single-qubit gates with trapped ions using microwave radiation. However, crosstalk - the error that is induced in qubits within a quantum register when one qubit (or a subset of qubits) is coherently manipulated - still prevents the realization of a scalable quantum computer. The application of a quantum gate - even if the gate error itself is low - induces errors in other qubits within the quantum register. We present an experimental study using quantum registers consisting of microwave-driven trapped ¹⁷¹Yb⁺ ions in a static magnetic gradient. We demonstrate a quantum register of three qubits with a next-neighbour crosstalk of 6(1) × 10⁻⁵ that for the first time breaches the error correction threshold. Furthermore, we present a quantum register of eight qubits - a quantum byte - with a next-neighbour crosstalk error better than 2.9(4) × 10⁻⁴. Importantly, our results are obtained with thermally excited ions far above the motional ground state.

  19. A new framework for analysing automated acoustic species-detection data: occupancy estimation and optimization of recordings post-processing

    Science.gov (United States)

    Chambert, Thierry A.; Waddle, J. Hardin; Miller, David A.W.; Walls, Susan; Nichols, James D.

    2018-01-01

    The development and use of automated species-detection technologies, such as acoustic recorders, for monitoring wildlife are rapidly expanding. Automated classification algorithms provide a cost- and time-effective means to process information-rich data, but often at the cost of additional detection errors. Appropriate methods are necessary to analyse such data while dealing with the different types of detection errors. We developed a hierarchical modelling framework for estimating species occupancy from automated species-detection data. We explore design and optimization of data post-processing procedures to account for detection errors and generate accurate estimates. Our proposed method accounts for both imperfect detection and false positive errors and utilizes information about both occurrence and abundance of detections to improve estimation. Using simulations, we show that our method provides much more accurate estimates than models ignoring the abundance of detections. The same findings are reached when we apply the methods to two real datasets on North American frogs surveyed with acoustic recorders. When false positives occur, estimator accuracy can be improved when a subset of detections produced by the classification algorithm is post-validated by a human observer. We use simulations to investigate the relationship between accuracy and effort spent on post-validation, and found that very accurate occupancy estimates can be obtained with as little as 1% of data being validated. Automated monitoring of wildlife provides opportunity and challenges. Our methods for analysing automated species-detection data help to meet key challenges unique to these data and will prove useful for many wildlife monitoring programs.
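
    A toy simulation in the spirit of this framework: detections are generated at occupied and unoccupied sites, and a moment-based correction with assumed-known detection probabilities recovers occupancy; the full hierarchical model in the record also estimates those probabilities. All parameter values below are invented.

```python
# Toy occupancy simulation with false positives: the naive detection rate
# conflates imperfect detection and false positives, while a moment-based
# correction using known p11 and p10 recovers the true occupancy.
import numpy as np

rng = np.random.default_rng(42)
n_sites, psi_true = 5000, 0.4
p11, p10 = 0.8, 0.1          # detection probability at occupied / unoccupied sites

occupied = rng.random(n_sites) < psi_true
detected = np.where(occupied,
                    rng.random(n_sites) < p11,
                    rng.random(n_sites) < p10)

naive = detected.mean()                       # mixes true and false positives
corrected = (naive - p10) / (p11 - p10)       # from P(det) = psi*p11 + (1-psi)*p10
print(f"true {psi_true}, naive {naive:.3f}, corrected {corrected:.3f}")
```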

  20. An investigation into soft error detection efficiency at operating system level.

    Science.gov (United States)

    Asghari, Seyyed Amir; Kaynak, Okyay; Taheri, Hassan

    2014-01-01

    Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation, which gives rise to permanent and transient errors in microelectronic components. The occurrence rate of transient errors is significantly higher than that of permanent errors. Transient errors, or soft errors, take two forms: control flow errors (CFEs) and data errors. Valuable research results have already appeared in the literature at the hardware and software levels for their alleviation. However, these works share the basic assumption that the operating system is reliable, and the focus is on other system levels. In this paper, we investigate the effects of soft errors on operating system components and compare their vulnerability with that of application level components. Results show that soft errors in operating system components affect both operating system and application level components. Therefore, by making operating system components tolerant to soft errors, both operating system and application level components gain tolerance.

  1. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander

    2013-01-01

    The Nordic Holstein yield evaluation model describes all available milk, protein and fat test-day yields from Denmark, Finland and Sweden. In its current form all variance components are estimated from observations recorded under conventional milking systems. Also the model for heterogeneity of variance correction is developed for the same observations. As automated milking systems are becoming more popular the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield the same genetic...

  2. Improving patient safety via automated laboratory-based adverse event grading.

    Science.gov (United States)

    Niland, Joyce C; Stiller, Tracey; Neat, Jennifer; Londrc, Adina; Johnson, Dina; Pannoni, Susan

    2012-01-01

    The identification and grading of adverse events (AEs) during the conduct of clinical trials is a labor-intensive and error-prone process. This paper describes and evaluates a software tool developed by City of Hope to automate complex algorithms to assess laboratory results and identify and grade AEs. We compared AEs identified by the automated system with those previously assessed manually, to evaluate missed/misgraded AEs. We also conducted a prospective paired time assessment of automated versus manual AE assessment. We found a substantial improvement in accuracy/completeness with the automated grading tool, which identified an additional 17% of severe grade 3-4 AEs that had been missed/misgraded manually. The automated system also provided an average time saving of 5.5 min per treatment course. With 400 ongoing treatment trials at City of Hope and an average of 1800 laboratory results requiring assessment per study, the implications of these findings for patient safety are enormous.
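
    A hedged sketch of what laboratory-based grading can look like; the analyte (absolute neutrophil count), thresholds and grade boundaries below are simplified, CTCAE-style illustrations and not City of Hope's actual rules.

```python
# Illustrative laboratory-based adverse-event grading. Thresholds are
# simplified stand-ins (cells/uL for absolute neutrophil count, ANC).
def grade_anc(anc_cells_per_ul):
    """Return an adverse-event grade (0 = within normal limits) for ANC."""
    if anc_cells_per_ul >= 1500:
        return 0
    if anc_cells_per_ul >= 1000:
        return 2          # 1000-1500 band; the grade 1 band is omitted here
    if anc_cells_per_ul >= 500:
        return 3
    return 4

def grade_course(lab_results):
    """Grade every result in a treatment course and keep the worst grade."""
    graded = [(name, value, grade_anc(value)) for name, value in lab_results]
    worst = max(g for _, _, g in graded)
    return graded, worst

results, worst = grade_course([("day 1 ANC", 1800), ("day 8 ANC", 900), ("day 15 ANC", 450)])
print(results, "worst grade:", worst)
```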

  3. Analysis technique for controlling system wavefront error with active/adaptive optics

    Science.gov (United States)

    Genberg, Victor L.; Michels, Gregory J.

    2017-08-01

    The ultimate goal of an active mirror system is to control system level wavefront error (WFE). In the past, the use of this technique was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for controlling system level WFE using a linear optics model is presented. An error estimate is included in the analysis output for both surface error disturbance fitting and actuator influence function fitting. To control adaptive optics, the technique has been extended to write system WFE in state space matrix form. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
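
    The linear-model step described above can be pictured as a least-squares fit of actuator influence functions to a measured wavefront, with the fit residual serving as the error estimate. The sketch below uses synthetic arrays and is not SigFit's implementation; in practice the commands applied to the actuators would be the negative of the fitted coefficients.

        import numpy as np

        # Generic sketch of wavefront control with a linear model: columns of A are
        # actuator influence functions sampled over the pupil, w is the measured
        # wavefront error. All arrays here are synthetic placeholders.
        rng = np.random.default_rng(0)
        A = rng.normal(size=(200, 12))          # 200 pupil samples x 12 actuators
        w = rng.normal(size=200)                # wavefront error to be corrected

        # Least-squares actuator coefficients that best reproduce the wavefront: A @ x ≈ w
        x, residual, *_ = np.linalg.lstsq(A, w, rcond=None)
        fit_error_rms = np.sqrt(np.mean((w - A @ x) ** 2))
        print("residual WFE RMS after correction:", fit_error_rms)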

  4. Detection and cellular localisation of the synthetic soluble macromolecular drug carrier pHPMA

    Czech Academy of Sciences Publication Activity Database

    Kissel, M.; Peschke, P.; Šubr, Vladimír; Ulbrich, Karel; Strunz, A. M.; Kühnlein, R.; Debus, J.; Friedrich, E.

    2002-01-01

    Vol. 29, No. 8 (2002), pp. 1055-1062. ISSN 1619-7070. R&D Projects: GA ČR GV307/96/K226. Institutional research plan: CEZ:AV0Z4050913. Keywords: EPR effect * Radiolabelled macromolecules * Pharmacokinetic. Subject RIV: CD - Macromolecular Chemistry. Impact factor: 3.568, year: 2002

  5. Automated testing of electro-optical systems; Proceedings of the Meeting, Orlando, FL, Apr. 7, 8, 1988

    International Nuclear Information System (INIS)

    Nestler, J.; Richardson, P.I.

    1988-01-01

    Various papers on the automated testing of electrooptical systems are presented. Individual topics addressed include: simultaneous automated testing of Thematic Mapper dynamic spatial performance characteristics, results of objective automatic minimum resolvable temperature testing of thermal imagers using a proposed new figure of merit, test and manufacture of three-mirror laboratory telescope, calculation of apparent delta-T errors for band-limited detectors, and automated laser seeker performance evaluation system

  6. UQlust: combining profile hashing with linear-time ranking for efficient clustering and analysis of big macromolecular data.

    Science.gov (United States)

    Adamczak, Rafal; Meller, Jarek

    2016-12-28

    Advances in computing have enabled current protein and RNA structure prediction and molecular simulation methods to dramatically increase their sampling of conformational spaces. The quickly growing number of experimentally resolved structures, and databases such as the Protein Data Bank, also implies large scale structural similarity analyses to retrieve and classify macromolecular data. Consequently, the computational cost of structure comparison and clustering for large sets of macromolecular structures has become a bottleneck that necessitates further algorithmic improvements and development of efficient software solutions. uQlust is a versatile and easy-to-use tool for ultrafast ranking and clustering of macromolecular structures. uQlust makes use of structural profiles of proteins and nucleic acids, while combining a linear-time algorithm for implicit comparison of all pairs of models with profile hashing to enable efficient clustering of large data sets with a low memory footprint. In addition to ranking and clustering of large sets of models of the same protein or RNA molecule, uQlust can also be used in conjunction with fragment-based profiles in order to cluster structures of arbitrary length. For example, hierarchical clustering of the entire PDB using profile hashing can be performed on a typical laptop, thus opening an avenue for structural explorations previously limited to dedicated resources. The uQlust package is freely available under the GNU General Public License at https://github.com/uQlust . uQlust represents a drastic reduction in the computational complexity and memory requirements with respect to existing clustering and model quality assessment methods for macromolecular structure analysis, while yielding results on par with traditional approaches for both proteins and RNAs.
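
    The profile-hashing idea can be caricatured in a few lines: reduce every model to a discrete profile, hash identical profiles into buckets, and rank the buckets by occupancy, all in time linear in the number of models. The profiles below are invented and the snippet is only a cartoon of the approach, not the uQlust code.

        from collections import defaultdict

        # Toy illustration of profile hashing (not the actual uQlust algorithm):
        # each model is reduced to a discrete per-residue profile; identical profiles
        # hash to the same bucket, so grouping and ranking is linear in the number of
        # models instead of a quadratic all-vs-all comparison.
        models = {
            "m1": "HHHEECC",   # hypothetical secondary-structure-like profiles
            "m2": "HHHEECC",
            "m3": "HHHCCCC",
            "m4": "HHHEECC",
            "m5": "EEEEECC",
        }

        buckets = defaultdict(list)
        for name, profile in models.items():
            buckets[profile].append(name)        # one hash-table insert per model

        # Rank clusters by occupancy: the most populated profile is the consensus-like pick
        ranking = sorted(buckets.items(), key=lambda kv: len(kv[1]), reverse=True)
        for profile, members in ranking:
            print(profile, members)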

  7. Combined methods of tolerance increasing for embedded SRAM

    Science.gov (United States)

    Shchigorev, L. A.; Shagurin, I. I.

    2016-10-01

    The possibilities of the combined use of different methods for increasing the fault tolerance of SRAM, such as error detection and correction codes, parity bits, and redundant elements, are considered. Area penalties due to using combinations of these methods are investigated. Estimates are made for different configurations of a 4K x 128 RAM memory block in a 28 nm manufacturing process. An evaluation of the effectiveness of the proposed combinations is also reported. The results of these investigations can be useful for designing fault-tolerant “systems on chip”.
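
    One ingredient of such combined schemes, a single-error-correcting Hamming(7,4) code of the kind paired with parity bits and redundant rows, can be modelled in a few lines. The bit layout below is the textbook one, not the 28 nm design evaluated in the paper.

        # Minimal model of a SEC Hamming(7,4) code: parity bits at positions 1, 2 and 4.
        def hamming74_encode(d):                      # d = [d1, d2, d3, d4]
            d1, d2, d3, d4 = d
            p1 = d1 ^ d2 ^ d4
            p2 = d1 ^ d3 ^ d4
            p4 = d2 ^ d3 ^ d4
            return [p1, p2, d1, p4, d2, d3, d4]       # codeword positions 1..7

        def hamming74_correct(c):
            """Return (corrected codeword, error position or 0 if none detected)."""
            s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
            s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
            s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
            pos = s1 + 2 * s2 + 4 * s4
            if pos:
                c = c.copy()
                c[pos - 1] ^= 1                        # flip the faulty bit back
            return c, pos

        word = [1, 0, 1, 1]
        code = hamming74_encode(word)
        code[5] ^= 1                                   # inject a single soft error
        fixed, pos = hamming74_correct(code)
        assert fixed == hamming74_encode(word) and pos == 6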

  8. Assessing Exhaustiveness of Stochastic Sampling for Integrative Modeling of Macromolecular Structures.

    Science.gov (United States)

    Viswanath, Shruthi; Chemmama, Ilan E; Cimermancic, Peter; Sali, Andrej

    2017-12-05

    Modeling of macromolecular structures involves structural sampling guided by a scoring function, resulting in an ensemble of good-scoring models. By necessity, the sampling is often stochastic, and must be exhaustive at a precision sufficient for accurate modeling and assessment of model uncertainty. Therefore, the very first step in analyzing the ensemble is an estimation of the highest precision at which the sampling is exhaustive. Here, we present an objective and automated method for this task. As a proxy for sampling exhaustiveness, we evaluate whether two independently and stochastically generated sets of models are sufficiently similar. The protocol includes testing 1) convergence of the model score, 2) whether model scores for the two samples were drawn from the same parent distribution, 3) whether each structural cluster includes models from each sample proportionally to its size, and 4) whether there is sufficient structural similarity between the two model samples in each cluster. The evaluation also provides the sampling precision, defined as the smallest clustering threshold that satisfies the third, most stringent test. We validate the protocol with the aid of enumerated good-scoring models for five illustrative cases of binary protein complexes. Passing the proposed four tests is necessary, but not sufficient for thorough sampling. The protocol is general in nature and can be applied to the stochastic sampling of any set of models, not just structural models. In addition, the tests can be used to stop stochastic sampling as soon as exhaustiveness at desired precision is reached, thereby improving sampling efficiency; they may also help in selecting a model representation that is sufficiently detailed to be informative, yet also sufficiently coarse for sampling to be exhaustive. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
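
    Two of the four checks can be approximated with standard statistical tests: a two-sample Kolmogorov-Smirnov test asking whether the two score sets share a parent distribution, and a chi-square test asking whether each cluster contains models from both samples in proportion to its size. The data below are synthetic and the snippet is not the authors' released implementation.

        import numpy as np
        from scipy import stats

        # Sketch of two of the four exhaustiveness checks; scores and cluster
        # occupancies below are synthetic.
        rng = np.random.default_rng(1)
        scores_a = rng.normal(10.0, 1.0, 500)          # scores of sample A models
        scores_b = rng.normal(10.1, 1.0, 500)          # scores of sample B models

        # Test 2: were the two score sets drawn from the same parent distribution?
        ks_stat, ks_p = stats.ks_2samp(scores_a, scores_b)

        # Test 3: does each structural cluster contain models from A and B in
        # proportion to its size? Rows = clusters, columns = (count from A, count from B).
        contingency = np.array([[120, 130], [260, 240], [120, 130]])
        chi2, chi_p, _, _ = stats.chi2_contingency(contingency)

        print(f"score-distribution test p = {ks_p:.3f}, cluster-proportion test p = {chi_p:.3f}")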

  9. Noise Threshold and Resource Cost of Fault-Tolerant Quantum Computing with Majorana Fermions in Hybrid Systems.

    Science.gov (United States)

    Li, Ying

    2016-09-16

    Fault-tolerant quantum computing in systems composed of both Majorana fermions and topologically unprotected quantum systems, e.g., superconducting circuits or quantum dots, is studied in this Letter. Errors caused by topologically unprotected quantum systems need to be corrected with error-correction schemes, for instance, the surface code. We find that the error-correction performance of such a hybrid topological quantum computer is not superior to a normal quantum computer unless the topological charge of Majorana fermions is insusceptible to noise. If errors changing the topological charge are rare, the fault-tolerance threshold is much higher than the threshold of a normal quantum computer and a surface-code logical qubit could be encoded in only tens of topological qubits instead of about 1,000 normal qubits.

  10. Development of An Optimization Method for Determining Automation Rate in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun; Kim, Jong Hyun

    2014-01-01

    Since automation was introduced in various industrial fields, it has been known that automation provides positive effects, such as greater efficiency and fewer human errors, and a negative effect defined as out-of-the-loop (OOTL). Thus, before introducing automation into the nuclear field, the positive and negative effects of automation on human operators should be estimated. In this paper, by focusing on CPS, an optimization method to find an appropriate proportion of automation is suggested by integrating the proposed cognitive automation rate and the concept of the level of ostracism. The cognitive automation rate estimation method is suggested to express the reduction in human cognitive load, and the level of ostracism is suggested to express the difficulty in obtaining information from the automation system and the increased uncertainty of human operators' diagnoses. The maximum proportion of automation that maintains a high level of attention for monitoring the situation is derived by an experiment, and the automation rate is estimated by the suggested estimation method. The approach is expected to derive an appropriate proportion of automation that avoids the OOTL problem while having maximum efficacy at the same time

  11. Evaluation of an Automated System for Reading and Interpreting Disk Diffusion Antimicrobial Susceptibility Testing of Fastidious Bacteria.

    Science.gov (United States)

    Idelevich, Evgeny A; Becker, Karsten; Schmitz, Janne; Knaack, Dennis; Peters, Georg; Köck, Robin

    2016-01-01

    Results of disk diffusion antimicrobial susceptibility testing depend on individual visual reading of inhibition zone diameters. Therefore, automated reading using camera systems might represent a useful tool for standardization. In this study, the ADAGIO automated system (Bio-Rad) was evaluated for reading disk diffusion tests of fastidious bacteria. In total, 144 clinical isolates (68 β-haemolytic streptococci, 28 Streptococcus pneumoniae, 18 viridans group streptococci, 13 Haemophilus influenzae, 7 Moraxella catarrhalis, and 10 Campylobacter jejuni) were tested on Mueller-Hinton agar supplemented with 5% defibrinated horse blood and 20 mg/L β-NAD (MH-F, Oxoid) according to EUCAST. Plates were read manually with a ruler and automatically using the ADAGIO system. Inhibition zone diameters, indicated by the automated system, were visually controlled and adjusted, if necessary. Among 1548 isolate-antibiotic combinations, comparison of automated vs. manual reading yielded categorical agreement (CA) without visual adjustment of the automatically determined zone diameters in 81.4%. In 20% (309 of 1548) of tests it was deemed necessary to adjust the automatically determined zone diameter after visual control. After adjustment, CA was 94.8%; very major errors (false susceptible interpretation), major errors (false resistant interpretation) and minor errors (false categorization involving an intermediate result), calculated according to the ISO 20776-2 guideline, amounted to 13.7% (13 of 95 resistant results), 3.3% (47 of 1424 susceptible results) and 1.4% (21 of 1548 total results), respectively, compared to manual reading. The ADAGIO system allowed for automated reading of disk diffusion testing in fastidious bacteria and, after visual validation of the automated results, yielded good categorical agreement with manual reading.
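
    The agreement figures quoted above can be recomputed from paired category calls. The sketch below follows the usual ISO 20776-2-style definitions of categorical agreement and very major, major and minor errors; the paired reads are hypothetical, not the study data.

        # Compute ISO 20776-2-style rates from paired manual (reference) and
        # automated (test) category calls: S, I or R.
        def agreement_rates(pairs):
            total = len(pairs)
            ref_r = sum(1 for ref, _ in pairs if ref == "R")
            ref_s = sum(1 for ref, _ in pairs if ref == "S")
            very_major = sum(1 for ref, test in pairs if ref == "R" and test == "S")
            major = sum(1 for ref, test in pairs if ref == "S" and test == "R")
            minor = sum(1 for ref, test in pairs if (ref == "I") != (test == "I"))
            ca = sum(1 for ref, test in pairs if ref == test)
            return {
                "categorical_agreement": ca / total,
                "very_major_error": very_major / ref_r if ref_r else 0.0,
                "major_error": major / ref_s if ref_s else 0.0,
                "minor_error": minor / total,
            }

        # Hypothetical paired reads, not the study data
        pairs = [("S", "S"), ("S", "R"), ("R", "R"), ("R", "S"), ("I", "S"), ("S", "S")]
        print(agreement_rates(pairs))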

  12. Predictive Mechanical Characterization of Macro-Molecular Material Chemistry Structures of Cement Paste at Nano Scale - Two-phase Macro-Molecular Structures of Calcium Silicate Hydrate, Tri-Calcium Silicate, Di-Calcium Silicate and Calcium Hydroxide

    Science.gov (United States)

    Padilla Espinosa, Ingrid Marcela

    Concrete is a hierarchical composite material with a random structure over a wide range of length scales. At submicron length scale the main component of concrete is cement paste, formed by the reaction of Portland cement clinkers and water. Cement paste acts as a binding matrix for the other components and is responsible for the strength of concrete. Cement paste microstructure contains voids, hydrated and unhydrated cement phases. The main crystalline phases of unhydrated cement are tri-calcium silicate (C3S) and di-calcium silicate (C2S), and of hydrated cement are calcium silicate hydrate (CSH) and calcium hydroxide (CH). Although efforts have been made to comprehend the chemical and physical nature of cement paste, studies at molecular level have primarily been focused on individual components. Present research focuses on the development of a method to model, at molecular level, and analysis of the two-phase combination of hydrated and unhydrated phases of cement paste as macromolecular systems. Computational molecular modeling could help in understanding the influence of the phase interactions on the material properties, and mechanical performance of cement paste. Present work also strives to create a framework for molecular level models suitable for potential better comparisons with low length scale experimental methods, in which the sizes of the samples involve the mixture of different hydrated and unhydrated crystalline phases of cement paste. Two approaches based on two-phase cement paste macromolecular structures, one involving admixed molecular phases, and the second involving cluster of two molecular phases are investigated. The mechanical properties of two-phase macromolecular systems of cement paste consisting of key hydrated phase CSH and unhydrated phases C3S or C2S, as well as CSH with the second hydrated phase CH were calculated. It was found that these cement paste two-phase macromolecular systems predicted an isotropic material behavior. Also

  13. Modeling of random geometric errors in superconducting magnets with applications to the CERN Large Hadron Collider

    Directory of Open Access Journals (Sweden)

    P. Ferracin

    2000-12-01

    Full Text Available Estimates of random field-shape errors induced by cable mispositioning in superconducting magnets are presented and specific applications to the Large Hadron Collider (LHC main dipoles and quadrupoles are extensively discussed. Numerical simulations obtained with Monte Carlo methods are compared to analytic estimates and are used to interpret the experimental data for the LHC dipole and quadrupole prototypes. The proposed approach can predict the effect of magnet tolerances on geometric components of random field-shape errors, and it is a useful tool to monitor the obtained tolerances during magnet production.

  14. Analysis on the dynamic error for optoelectronic scanning coordinate measurement network

    Science.gov (United States)

    Shi, Shendong; Yang, Linghui; Lin, Jiarui; Guo, Siyang; Ren, Yongjie

    2018-01-01

    Large-scale dynamic three-dimensional coordinate measurement techniques are eagerly demanded in equipment manufacturing. Noted for its advantages of high accuracy, scale expandability and multitask parallel measurement, the optoelectronic scanning measurement network has attracted close attention. It is widely used in large components jointing, spacecraft rendezvous and docking simulation, digital shipbuilding and automated guided vehicle navigation. At present, most research about optoelectronic scanning measurement networks is focused on static measurement capacity, and research about dynamic accuracy is insufficient. Limited by the measurement principle, the dynamic error is non-negligible and restricts the application. The workshop measurement and positioning system is a representative system which can realize the dynamic measurement function in theory. In this paper we conduct deep research on dynamic error sources and divide them into two parts: phase error and synchronization error. A dynamic error model is constructed. Based on the theory above, a simulation of the dynamic error is carried out. The dynamic error is quantified and a rule of volatility and periodicity has been found. Dynamic error characteristics are shown in detail. The research result lays a foundation for further accuracy improvement.

  15. Tolerance analysis of null lenses using an end-use system performance criterion

    Science.gov (United States)

    Rodgers, J. Michael

    2000-07-01

    An effective method of assigning tolerances to a null lens is to determine the effects of null-lens fabrication and alignment errors on the end-use system itself, not simply the null lens. This paper describes a method to assign null-lens tolerances based on their effect on any performance parameter of the end-use system.

  16. Coevolutionary constraints in the sequence-space of macromolecular complexes reflect their self-assembly pathways.

    Science.gov (United States)

    Mallik, Saurav; Kundu, Sudip

    2017-07-01

    Is the order in which biomolecular subunits self-assemble into functional macromolecular complexes imprinted in their sequence-space? Here, we demonstrate that the temporal order of macromolecular complex self-assembly can be efficiently captured using the landscape of residue-level coevolutionary constraints. This predictive power of coevolutionary constraints is irrespective of the structural, functional, and phylogenetic classification of the complex and of the stoichiometry and quaternary arrangement of the constituent monomers. Combining this result with a number of structural attributes estimated from the crystal structure data, we find indications that stronger coevolutionary constraints at interfaces formed early in the assembly hierarchy probably promote coordinated fixation of mutations that leads to high-affinity binding with higher surface area, increased surface complementarity and an elevated number of molecular contacts, compared to those that form late in the assembly. Proteins 2017; 85:1183-1189. © 2017 Wiley Periodicals, Inc.

  17. Identification of Success Criteria for Automated Function Using Feed and Bleed Operation

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kim, Sang Ho; Kang, Hyun Gook; Yoon, Ho Joon

    2013-01-01

    Since an NPP has many functions and systems, its operating procedures are complicated and the chance of human error in operating the safety systems is high. In the case of a large break loss of coolant accident (LBLOCA) or a station blackout (SBO), the dependency on the operator is very low. However, when many mitigation systems are still available, operators have several choices for mitigating the accident and the chance of human error increases further. To reduce the operator's workload and perform the operation accurately after the accident, an automated function for safe cooldown based on the feed and bleed (F and B) operation was suggested. The automated function can predict whether the plant will be safe after the automated function is initiated, and perform the safety functions automatically. To predict the success of cooldown, success criteria should be identified. To perform the operation accurately after the accident, the automated function for safe cooldown based on the F and B operation is suggested. To predict the success of cooldown, the sequence of RCS conditions when heat removal by the secondary system fails is identified. Based on this sequence, four levels of necessity of the F and B operation are classified. To obtain the boundaries between levels, a TH analysis will be performed

  18. Uncertainty analysis and allocation of joint tolerances in robot manipulators based on interval analysis

    International Nuclear Information System (INIS)

    Wu Weidong; Rao, S.S.

    2007-01-01

    Many uncertain factors influence the accuracy and repeatability of robots. These factors include manufacturing and assembly tolerances and deviations in actuators and controllers. The effects of these uncertain factors must be carefully analyzed to obtain a clear insight into the manipulator performance. In order to ensure the position and orientation accuracy of a robot end effector as well as to reduce the manufacturing cost of the robot, it is necessary to quantify the influence of the uncertain factors and optimally allocate the tolerances. This involves a study of the direct and inverse kinematics of robot end effectors in the presence of uncertain factors. This paper focuses on the optimal allocation of joint tolerances with consideration of the positional and directional errors of the robot end effector and the manufacturing cost. The interval analysis is used for predicting errors in the performance of robot manipulators. The Stanford manipulator is considered for illustration. The unknown joint variables are modeled as interval parameters due to the inherent uncertainty. The cost-tolerance model is assumed to be of an exponential form during optimization. The effects of the upper bounds on the minimum cost and relative deviations of the directional and positional errors of the end effector are also studied
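
    A toy version of interval-based tolerance propagation is sketched below for a planar two-link arm rather than the Stanford manipulator: the joint angles are treated as interval parameters, the interval of the end-effector x coordinate is bounded numerically, and an exponential cost-tolerance form with illustrative coefficients links the chosen tolerance to cost.

        import numpy as np

        # Toy two-link planar arm (not the Stanford manipulator): joint angles are
        # interval parameters [nominal - tol, nominal + tol]; the interval of the
        # end-effector x coordinate is bounded here by dense sampling of the box.
        L1, L2 = 0.5, 0.4                               # link lengths, metres
        q1_nom, q2_nom = 0.6, 0.9                       # nominal joint angles, rad
        tol = 0.002                                     # joint tolerance, rad

        q1 = np.linspace(q1_nom - tol, q1_nom + tol, 201)
        q2 = np.linspace(q2_nom - tol, q2_nom + tol, 201)
        Q1, Q2 = np.meshgrid(q1, q2)
        X = L1 * np.cos(Q1) + L2 * np.cos(Q1 + Q2)
        print("x-position interval width:", X.max() - X.min())

        # Exponential cost-tolerance form of the kind used in such optimisations
        # (coefficients a, b are illustrative only): tighter tolerance, higher cost.
        a, b = 2.0, 1500.0
        cost = a * np.exp(-b * tol)
        print("manufacturing cost for this tolerance:", cost)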

  19. Figure tolerance of a Wolter type I mirror for a soft-x-ray microscope

    International Nuclear Information System (INIS)

    Chon, Kwon Su; Namba, Yoshiharu; Yoon, Kwon-Ha

    2007-01-01

    The demand for an x-ray microscope has received much attention because of the desire to study living cells at high resolution and in a hydrated environment. A Wolter type I mirror used for soft-x-ray microscope optics has many advantages. From the mirror fabrication point of view, it is necessary to perform tolerance analysis, particularly with respect to figure errors that considerably degrade the image quality. The figure tolerance of a Wolter type I mirror for a biological application is discussed in terms of the image quality and the state-of-the-art fabrication technology. The figure errors rapidly destroyed the image quality, and the required slope error depended on the detector used in the soft-x-ray microscope

  20. Radiation damage to nucleoprotein complexes in macromolecular crystallography

    International Nuclear Information System (INIS)

    Bury, Charles; Garman, Elspeth F.; Ginn, Helen Mary; Ravelli, Raimond B. G.; Carmichael, Ian; Kneale, Geoff; McGeehan, John E.

    2015-01-01

    Quantitative X-ray induced radiation damage studies employing a model protein–DNA complex revealed a striking partition of damage sites. The DNA component was observed to be far more resistant to specific damage compared with the protein. Significant progress has been made in macromolecular crystallography over recent years in both the understanding and mitigation of X-ray induced radiation damage when collecting diffraction data from crystalline proteins. In contrast, despite the large field that is productively engaged in the study of radiation chemistry of nucleic acids, particularly of DNA, there are currently very few X-ray crystallographic studies on radiation damage mechanisms in nucleic acids. Quantitative comparison of damage to protein and DNA crystals separately is challenging, but many of the issues are circumvented by studying pre-formed biological nucleoprotein complexes where direct comparison of each component can be made under the same controlled conditions. Here a model protein–DNA complex C.Esp1396I is employed to investigate specific damage mechanisms for protein and DNA in a biologically relevant complex over a large dose range (2.07–44.63 MGy). In order to allow a quantitative analysis of radiation damage sites from a complex series of macromolecular diffraction data, a computational method has been developed that is generally applicable to the field. Typical specific damage was observed for both the protein on particular amino acids and for the DNA on, for example, the cleavage of base-sugar N1—C and sugar-phosphate C—O bonds. Strikingly the DNA component was determined to be far more resistant to specific damage than the protein for the investigated dose range. At low doses the protein was observed to be susceptible to radiation damage while the DNA was far more resistant, damage only being observed at significantly higher doses

  1. Tolerance analyses of a quadrupole magnet for advanced photon source upgrade

    Energy Technology Data Exchange (ETDEWEB)

    Liu, J., E-mail: Jieliu@aps.anl.gov; Jaski, M., E-mail: jaski@aps.anl.gov; Borland, M., E-mail: borland@aps.anl.gov [Advanced Photon Source, Argonne National Laboratory, 9700 S. Cass Avenue, Lemont, IL 60439 (United States); Jain, A., E-mail: jain@bnl.gov [Superconducting Magnet Division, Brookhaven National Laboratory, P.O. Box 5000, Upton, NY 11973-5000 (United States)

    2016-07-27

    Given physics requirements, the mechanical fabrication and assembly tolerances for storage ring magnets can be calculated using analytical methods [1, 2]. However, this method is not easy for complicated magnet designs [1]. In this paper, a novel method is proposed to determine fabrication and assembly tolerances consistent with physics requirements, through a combination of magnetic and mechanical tolerance analyses. In this study, finite element analysis using OPERA is conducted to estimate the effect of fabrication and assembly errors on the magnetic field of a quadrupole magnet and to determine the allowable tolerances to achieve the specified magnetic performances. Based on the study, allowable fabrication and assembly tolerances for the quadrupole assembly are specified for the mechanical design of the quadrupole magnet. Next, to achieve the required assembly level tolerances, mechanical tolerance stackup analyses using a 3D tolerance analysis package are carried out to determine the part and subassembly level fabrication tolerances. This method can be used to determine the tolerances for design of other individual magnets and of magnet strings.

  2. Tolerance analyses of a quadrupole magnet for advanced photon source upgrade

    International Nuclear Information System (INIS)

    Liu, J.; Jaski, M.; Borland, M.; Jain, A.

    2016-01-01

    Given physics requirements, the mechanical fabrication and assembly tolerances for storage ring magnets can be calculated using analytical methods [1, 2]. However, this method is not easy for complicated magnet designs [1]. In this paper, a novel method is proposed to determine fabrication and assembly tolerances consistent with physics requirements, through a combination of magnetic and mechanical tolerance analyses. In this study, finite element analysis using OPERA is conducted to estimate the effect of fabrication and assembly errors on the magnetic field of a quadrupole magnet and to determine the allowable tolerances to achieve the specified magnetic performances. Based on the study, allowable fabrication and assembly tolerances for the quadrupole assembly are specified for the mechanical design of the quadrupole magnet. Next, to achieve the required assembly level tolerances, mechanical tolerance stackup analyses using a 3D tolerance analysis package are carried out to determine the part and subassembly level fabrication tolerances. This method can be used to determine the tolerances for design of other individual magnets and of magnet strings.

  3. Interplay between the bacterial nucleoid protein H-NS and macromolecular crowding in compacting DNA

    NARCIS (Netherlands)

    Wintraecken, C.H.J.M.

    2012-01-01

    In this dissertation we discuss H-NS and its connection to nucleoid compaction and organization. Nucleoid formation involves a dramatic reduction in coil volume of the genomic DNA. Four factors are thought to influence coil volume: supercoiling, DNA charge neutralization, macromolecular

  4. Bringing macromolecular machinery to life using 3D animation.

    Science.gov (United States)

    Iwasa, Janet H

    2015-04-01

    Over the past decade, there has been a rapid rise in the use of three-dimensional (3D) animation to depict molecular and cellular processes. Much of the growth in molecular animation has been in the educational arena, but increasingly, 3D animation software is finding its way into research laboratories. In this review, I will discuss a number of ways in which 3D animation software can play a valuable role in visualizing and communicating macromolecular structures and dynamics. I will also consider the challenges of using animation tools within the research sphere. Copyright © 2015. Published by Elsevier Ltd.

  5. Inclusions in bone material as a source of error in radiocarbon dating

    International Nuclear Information System (INIS)

    Hassan, A.A.; Ortner, D.J.

    1977-01-01

    Electron probe microanalysis, X-ray diffraction and microscopic examination were conducted on bone material from several archaeological sites in order to identify post-burial inclusions which, if present, may affect radiocarbon dating of bone. Two types of inclusions were identified: (1) precipitates from ground water solutions, and (2) solid intrusions. The first type consists of calcite, pyrite, humates and an unknown material. The second type includes quartz grains, hyphae, rootlets, wood and charcoal. Precipitation of calcite at a macromolecular level in bone may lead to erroneous dating of bone apatite if such calcite was not removed completely. A special technique, therefore, must be employed to remove calcite completely. Hyphae and rootlets also are likely to induce errors in radiocarbon dating of bone collagen. These very fine inclusions require more than hand picking. (author)

  6. Sensitive Data Protection Based on Intrusion Tolerance in Cloud Computing

    OpenAIRE

    Jingyu Wang; xuefeng Zheng; Dengliang Luo

    2011-01-01

    Service integration and on-demand supply arising from cloud computing can significantly improve the utilization of computing resources and reduce the power consumption per service, and effectively avoid errors in computing resources. However, cloud computing still faces the problem of intrusion tolerance of the cloud computing platform and of the sensitive data of the new enterprise data center. In order to address the problem of intrusion tolerance of the cloud computing platform and sensitive data in...

  7. A study on quantification of unavailability of DPPS with fault tolerant techniques considering fault tolerant techniques' characteristics

    International Nuclear Information System (INIS)

    Kim, B. G.; Kang, H. G.; Kim, H. E.; Seung, P. H.; Kang, H. G.; Lee, S. J.

    2012-01-01

    With the improvement of digital technologies, digital I and C systems have included a greater variety of fault tolerant techniques than conventional analog I and C systems, in order to increase fault detection and to help the system safely perform the required functions in spite of the presence of faults. Therefore, in the reliability evaluation of digital systems, the fault tolerant techniques (FTTs) and their fault coverage must be considered. To consider the effects of FTTs in a digital system, several studies on the reliability of digital models have been performed. Building on these, this research, based on a literature survey, attempts to develop a model to evaluate the plant reliability of the digital plant protection system (DPPS) with fault tolerant techniques, considering detection and processing characteristics and human errors. Sensitivity analysis is performed to ascertain the important variables of fault management coverage and unavailability based on the proposed model

  8. Uncertainty and Motivation to Seek Information from Pharmacy Automated Communications.

    Science.gov (United States)

    Bones, Michelle; Nunlee, Martin

    2018-05-28

    Pharmacy personnel often answer telephones to respond to pharmacy customers (subjects) who received messages from automated systems. This research examines the communication process in terms of how users interact and engage with pharmacies after receiving automated messages. No study has directly addressed automated telephone calls and subjects' interactions. The purpose of this study is to test the interpersonal communication (IC) process of uncertainty in subjects in receipt of automated telephone calls (ATCs) from pharmacies. Subjects completed a survey of validated scales for Satisfaction (S); Relevance (R); Quality (Q); Need for Cognitive Closure (NFC). Relationships between S, R, Q, NFC, and subject preference for ATCs were analyzed to determine whether subjects contacting pharmacies display information-seeking behavior. Results demonstrated that seeking information occurs if subjects: are dissatisfied with the content of the ATC; perceive that the Q of the ATC is high and like receiving the ATC; or have a high NFC and do not like receiving ATCs. Other interactions presented complexities amongst uncertainty and tolerance of NFC within the IC process.

  9. Macromolecular and dendrimer-based magnetic resonance contrast agents

    Energy Technology Data Exchange (ETDEWEB)

    Bumb, Ambika; Brechbiel, Martin W. (Radiation Oncology Branch, National Cancer Inst., National Inst. of Health, Bethesda, MD (United States)), e-mail: pchoyke@mail.nih.gov; Choyke, Peter (Molecular Imaging Program, National Cancer Inst., National Inst. of Health, Bethesda, MD (United States))

    2010-09-15

    Magnetic resonance imaging (MRI) is a powerful imaging modality that can provide an assessment of function or molecular expression in tandem with anatomic detail. Over the last 20-25 years, a number of gadolinium-based MR contrast agents have been developed to enhance signal by altering proton relaxation properties. This review explores a range of these agents from small molecule chelates, such as Gd-DTPA and Gd-DOTA, to macromolecular structures composed of albumin, polylysine, polysaccharides (dextran, inulin, starch), poly(ethylene glycol), copolymers of cystamine and cystine with Gd-DTPA, and various dendritic structures based on polyamidoamine and polylysine (Gadomers). The synthesis, structure, biodistribution, and targeting of dendrimer-based MR contrast agents are also discussed.

  10. Antioxidant enzymes as bio-markers for copper tolerance in safflower

    African Journals Online (AJOL)

    USER

    2010-08-16

    Aug 16, 2010 ... tolerance in safflower (Carthamus tinctorius L.). Ali Ahmed, Ammarah ... [figure caption: values are the mean of three replicates; bars indicate ± standard errors] ... [acknowledgement: support from the Higher Education Commission, Islamabad, Pakistan] ...

  11. Estimation of Separation Buffers for Wind-Prediction Error in an Airborne Separation Assistance System

    Science.gov (United States)

    Consiglio, Maria C.; Hoadley, Sherwood T.; Allen, B. Danette

    2009-01-01

    Wind prediction errors are known to affect the performance of automated air traffic management tools that rely on aircraft trajectory predictions. In particular, automated separation assurance tools, planned as part of the NextGen concept of operations, must be designed to account for and compensate for the impact of wind prediction errors and other system uncertainties. In this paper we describe a high fidelity batch simulation study designed to estimate the separation distance required to compensate for the effects of wind-prediction errors under increasing traffic density on an airborne separation assistance system. These experimental runs are part of the Safety Performance of Airborne Separation experiment suite that examines the safety implications of prediction errors and system uncertainties on airborne separation assurance systems. In this experiment, wind-prediction errors were varied between zero and forty knots while traffic density was increased to several times current traffic levels. In order to accurately measure the full unmitigated impact of wind-prediction errors, no uncertainty buffers were added to the separation minima. The goal of the study was to measure the impact of wind-prediction errors in order to estimate the additional separation buffers necessary to preserve separation and to provide a baseline for future analyses. Buffer estimations from this study will be used and verified in upcoming safety evaluation experiments under similar simulation conditions. Results suggest that the strategic airborne separation functions exercised in this experiment can sustain wind prediction errors up to 40 kts at current-day air traffic density with no additional separation distance buffer and at eight times current-day density with no more than a 60% increase in separation distance buffer.

  12. Detecting errors and anomalies in computerized materials control and accountability databases

    International Nuclear Information System (INIS)

    Whiteson, R.; Hench, K.; Yarbro, T.; Baumgart, C.

    1998-01-01

    The Automated MC and A Database Assessment project is aimed at improving anomaly and error detection in materials control and accountability (MC and A) databases and increasing confidence in the data that they contain. Anomalous data resulting in poor categorization of nuclear material inventories greatly reduces the value of the database information to users. Therefore it is essential that MC and A data be assessed periodically for anomalies or errors. Anomaly detection can identify errors in databases and thus provide assurance of the integrity of data. An expert system has been developed at Los Alamos National Laboratory that examines these large databases for anomalous or erroneous data. For several years, MC and A subject matter experts at Los Alamos have been using this automated system to examine the large amounts of accountability data that the Los Alamos Plutonium Facility generates. These data are collected and managed by the Material Accountability and Safeguards System, a near-real-time computerized nuclear material accountability and safeguards system. This year they have expanded the user base, customizing the anomaly detector for the varying requirements of different groups of users. This paper describes the progress in customizing the expert systems to the needs of the users of the data and reports on their results

  13. An Investigation into Soft Error Detection Efficiency at Operating System Level

    Directory of Open Access Journals (Sweden)

    Seyyed Amir Asghari

    2014-01-01

    Full Text Available Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation that gives rise to permanent and transient errors on microelectronic components. The occurrence rate of transient errors is significantly more than permanent errors. The transient errors, or soft errors, emerge in two formats: control flow errors (CFEs and data errors. Valuable research results have already appeared in literature at hardware and software levels for their alleviation. However, there is the basic assumption behind these works that the operating system is reliable and the focus is on other system levels. In this paper, we investigate the effects of soft errors on the operating system components and compare their vulnerability with that of application level components. Results show that soft errors in operating system components affect both operating system and application level components. Therefore, by providing endurance to operating system level components against soft errors, both operating system and application level components gain tolerance.

  14. Macromolecularly crowded in vitro microenvironments accelerate the production of extracellular matrix-rich supramolecular assemblies.

    Science.gov (United States)

    Kumar, Pramod; Satyam, Abhigyan; Fan, Xingliang; Collin, Estelle; Rochev, Yury; Rodriguez, Brian J; Gorelov, Alexander; Dillon, Simon; Joshi, Lokesh; Raghunath, Michael; Pandit, Abhay; Zeugolis, Dimitrios I

    2015-03-04

    Therapeutic strategies based on the principles of tissue engineering by self-assembly put forward the notion that functional regeneration can be achieved by utilising the inherent capacity of cells to create highly sophisticated supramolecular assemblies. However, in dilute ex vivo microenvironments, prolonged culture time is required to develop an extracellular matrix-rich implantable device. Herein, we assessed the influence of macromolecular crowding, a biophysical phenomenon that regulates intra- and extra-cellular activities in multicellular organisms, in human corneal fibroblast culture. In the presence of macromolecules, abundant extracellular matrix deposition was evidenced as fast as 48 h in culture, even at low serum concentration. Temperature responsive copolymers allowed the detachment of dense and cohesive supramolecularly assembled living substitutes within 6 days in culture. Morphological, histological, gene and protein analysis assays demonstrated maintenance of tissue-specific function. Macromolecular crowding opens new avenues for a more rational design in engineering of clinically relevant tissue modules in vitro.

  15. Long-wavelength macromolecular crystallography - First successful native SAD experiment close to the sulfur edge

    Science.gov (United States)

    Aurelius, O.; Duman, R.; El Omari, K.; Mykhaylyk, V.; Wagner, A.

    2017-11-01

    Phasing of novel macromolecular crystal structures has been challenging since the start of structural biology. Making use of anomalous diffraction of natively present elements, such as sulfur and phosphorus, for phasing has been possible for some systems, but hindered by the necessity to access longer X-ray wavelengths in order to make most use of the anomalous scattering contributions of these elements. Presented here are the results from a first successful experimental phasing study of a macromolecular crystal structure at a wavelength close to the sulfur K edge. This has been made possible by the in-vacuum setup and the long-wavelength optimised experimental setup at the I23 beamline at Diamond Light Source. In these early commissioning experiments only standard data collection and processing procedures have been applied, in particular no dedicated absorption correction has been used. Nevertheless the success of the experiment demonstrates that the capability to extract phase information can be even further improved once data collection protocols and data processing have been optimised.

  16. Error Evaluation in a Stereovision-Based 3D Reconstruction System

    Directory of Open Access Journals (Sweden)

    Kohler Sophie

    2010-01-01

    Full Text Available The work presented in this paper deals with the performance analysis of the whole 3D reconstruction process of imaged objects, specifically of the set of geometric primitives describing their outline and extracted from a pair of images knowing their associated camera models. The proposed analysis focuses on error estimation for the edge detection process, the starting step for the whole reconstruction procedure. The fitting parameters describing the geometric features composing the workpiece to be evaluated are used as quality measures to determine error bounds and finally to estimate the edge detection errors. These error estimates are then propagated up to the final 3D reconstruction step. The suggested error analysis procedure for stereovision-based reconstruction tasks further allows evaluating the quality of the 3D reconstruction. The resulting final error estimates finally make it possible to state whether the reconstruction results fulfill a priori defined criteria, for example dimensional constraints including tolerance information, in vision-based quality control applications.
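
    For a rectified stereo pair, the propagation of an edge-detection (disparity) error bound to depth follows the standard first-order relation dZ ≈ Z²·Δd/(f·B). The numbers below are illustrative only and do not reproduce the paper's setup or its full error-propagation chain.

        # Standard propagation of a disparity (edge-location) error to depth for a
        # rectified stereo pair; illustrative numbers, not the paper's setup.
        f_px = 1200.0          # focal length in pixels
        baseline = 0.10        # camera baseline in metres
        disparity = 40.0       # measured disparity in pixels
        disp_err = 0.5         # edge-detection error bound in pixels

        depth = f_px * baseline / disparity
        depth_err = depth ** 2 / (f_px * baseline) * disp_err   # first-order bound
        print(f"depth = {depth:.3f} m, depth error bound = ±{depth_err:.3f} m")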

  17. FACET Tolerances for Static and Dynamic Misalignments

    Energy Technology Data Exchange (ETDEWEB)

    Federico, Joel

    2012-07-13

    The Facility for Advanced Accelerator Experimental Tests (FACET) at the SLAC National Accelerator Laboratory is designed to deliver a beam with a transverse spot size on the order of 10 µm x 10 µm in a new beamline constructed at the two kilometer point of the SLAC linac. Commissioning the beamline requires mitigating alignment errors and their effects, which can be significant and result in spot sizes orders of magnitude larger. Sextupole and quadrupole alignment errors in particular can introduce errors in focusing, steering, and dispersion which can result in spot size growth, beta mismatch, and waist movement. Alignment errors due to static misalignments, mechanical jitter, energy jitter, and other physical processes can be analyzed to determine the level of accuracy and precision that the beamline requires. It is important to recognize these effects and their tolerances in order to deliver a beam as designed.

  18. Sport Tournament Automated Scheduling System

    OpenAIRE

    Raof R. A. A; Sudin S.; Mahrom N.; Rosli A. N. C

    2018-01-01

    The organizers of sport events often face problems such as wrong calculations of marks and scores, as well as difficulty in creating a good and reliable schedule. Most of the time, issues about the level of integrity of committee members and about errors made by humans come into the picture. Therefore, the development of a sport tournament automated scheduling system is proposed. The system will be able to automatically generate the tournament schedule as well as automatically calc...

  19. The In-Situ One-Step Synthesis of a PDC Macromolecular Pro-Drug and the Fabrication of a Novel Core-Shell Micell.

    Science.gov (United States)

    Yu, Cui-Yun; Yang, Sa; Li, Zhi-Ping; Huang, Can; Ning, Qian; Huang, Wen; Yang, Wen-Tong; He, Dongxiu; Sun, Lichun

    2016-01-01

    The development of slow release nano-sized carriers for efficient antineoplastic drug delivery with a biocompatible and biodegradable pectin-based macromolecular pro-drug for tumor therapy is reported in this study. Pectin-doxorubicin conjugates (PDC), a macromolecular pro-drug, were prepared via an amide condensation reaction, and a novel amphiphilic core-shell micelle based on the PDC macromolecular pro-drug (PDC-M) was self-assembled in situ, with pectin as the hydrophilic shell and doxorubicin (DOX) as the hydrophobic core. The chemical structure of the PDC macromolecular pro-drug was then identified by both Fourier transform infrared spectroscopy (FTIR) and nuclear magnetic resonance spectroscopy ((1)H-NMR), confirming that doxorubicin was conjugated to the pectin to form the macromolecular pro-drug. The PDC-M were observed to have an irregularly spherical shape and to be uniform in size by scanning electron microscopy (SEM). The average particle size of the PDC-M, further measured by a Zetasizer nanoparticle analyzer (Nano ZS, Malvern Instruments), was about 140 nm. The encapsulation efficiency and drug loading were 57.82% ± 3.7% (n = 3) and 23.852% ± 2.3% (n = 3), respectively. The in vitro drug release behaviors of the resulting PDC-M were studied in a simulated tumor environment (pH 5.0), blood (pH 7.4) and a lysosome medium (pH 6.8), and showed a prolonged slow release profile. Assays for antiproliferative effects and flow cytometry of the resulting PDC-M in HepG2 cell lines demonstrated greater delayed and slow release properties as compared to free DOX. A cell viability study against endothelial cells further revealed that the resulting PDC-M possesses excellent cell compatibility and low cytotoxicity in comparison with free DOX. Hemolysis activity was investigated in rabbits, and the results also demonstrated that the PDC-M has greater compatibility in comparison with free DOX. This shows that the resulting PDC-M can ameliorate the

  20. Fault Tolerance in ZigBee Wireless Sensor Networks

    Science.gov (United States)

    Alena, Richard; Gilstrap, Ray; Baldwin, Jarren; Stone, Thom; Wilson, Pete

    2011-01-01

    Wireless sensor networks (WSN) based on the IEEE 802.15.4 Personal Area Network standard are finding increasing use in the home automation and emerging smart energy markets. The network and application layers, based on the ZigBee 2007 PRO Standard, provide a convenient framework for component-based software that supports customer solutions from multiple vendors. This technology is supported by System-on-a-Chip solutions, resulting in extremely small and low-power nodes. The Wireless Connections in Space Project addresses the aerospace flight domain for both flight-critical and non-critical avionics. WSNs provide the inherent fault tolerance required for aerospace applications utilizing such technology. The team from Ames Research Center has developed techniques for assessing the fault tolerance of ZigBee WSNs challenged by radio frequency (RF) interference or WSN node failure.

  1. Fault Tolerant Position-mooring Control for Offshore Vessels

    DEFF Research Database (Denmark)

    Blanke, Mogens; Nguyen, Trong Dong

    2018-01-01

    Fault-tolerance is crucial to maintain safety in offshore operations. The objective of this paper is to show how systematic analysis and design of fault-tolerance is conducted for a complex automation system, exemplified by thruster-assisted position-mooring. Using redundancy as required ... Functional faults that are only detectable are rendered isolable through an active isolation approach. Once functional faults are isolated, they are handled by fault accommodation techniques to meet overall control objectives specified by class requirements. The paper illustrates the generic methodology by a system to handle faults in mooring lines, sensors or thrusters. Simulations and model basin experiments are carried out to validate the concept for scenarios with single or multiple faults. The results demonstrate that enhanced availability and safety are obtainable with this design approach. While ...

  2. Religious Fundamentalism Modulates Neural Responses to Error-Related Words: The Role of Motivation Toward Closure

    Directory of Open Access Journals (Sweden)

    Małgorzata Kossowska

    2018-03-01

    Full Text Available Examining the relationship between brain activity and religious fundamentalism, this study explores whether fundamentalist religious beliefs increase responses to error-related words among participants intolerant of uncertainty (i.e., those high in the need for closure) in comparison to those who have a high degree of toleration for uncertainty (i.e., those who are low in the need for closure). We examine a negative-going event-related brain potential occurring 400 ms after stimulus onset (the N400) due to its well-understood association with reactions to emotional conflict. Religious fundamentalism and tolerance of uncertainty were measured on self-report measures, and electroencephalographic neural reactivity was recorded as participants were performing an emotional Stroop task. In this task, participants read neutral words and words related to uncertainty, errors, and pondering, while being asked to name the color of the ink with which the word is written. The results confirm that among people who are intolerant of uncertainty (i.e., those high in the need for closure), religious fundamentalism is associated with an increased N400 on error-related words compared with people who tolerate uncertainty well (i.e., those low in the need for closure).

  3. Religious Fundamentalism Modulates Neural Responses to Error-Related Words: The Role of Motivation Toward Closure.

    Science.gov (United States)

    Kossowska, Małgorzata; Szwed, Paulina; Wyczesany, Miroslaw; Czarnek, Gabriela; Wronka, Eligiusz

    2018-01-01

    Examining the relationship between brain activity and religious fundamentalism, this study explores whether fundamentalist religious beliefs increase responses to error-related words among participants intolerant of uncertainty (i.e., high in the need for closure) in comparison to those who have a high degree of toleration for uncertainty (i.e., those who are low in the need for closure). We examine a negative-going event-related brain potential occurring 400 ms after stimulus onset (the N400) due to its well-understood association with reactions to emotional conflict. Religious fundamentalism and tolerance of uncertainty were measured on self-report measures, and electroencephalographic neural reactivity was recorded as participants were performing an emotional Stroop task. In this task, participants read neutral words and words related to uncertainty, errors, and pondering, while being asked to name the color of the ink with which the word is written. The results confirm that among people who are intolerant of uncertainty (i.e., those high in the need for closure), religious fundamentalism is associated with an increased N400 on error-related words compared with people who tolerate uncertainty well (i.e., those low in the need for closure).

  4. Religious Fundamentalism Modulates Neural Responses to Error-Related Words: The Role of Motivation Toward Closure

    Science.gov (United States)

    Kossowska, Małgorzata; Szwed, Paulina; Wyczesany, Miroslaw; Czarnek, Gabriela; Wronka, Eligiusz

    2018-01-01

    Examining the relationship between brain activity and religious fundamentalism, this study explores whether fundamentalist religious beliefs increase responses to error-related words among participants intolerant of uncertainty (i.e., high in the need for closure) in comparison to those who have a high degree of toleration for uncertainty (i.e., those who are low in the need for closure). We examine a negative-going event-related brain potential occurring 400 ms after stimulus onset (the N400) due to its well-understood association with reactions to emotional conflict. Religious fundamentalism and tolerance of uncertainty were measured on self-report measures, and electroencephalographic neural reactivity was recorded as participants were performing an emotional Stroop task. In this task, participants read neutral words and words related to uncertainty, errors, and pondering, while being asked to name the color of the ink with which the word is written. The results confirm that among people who are intolerant of uncertainty (i.e., those high in the need for closure), religious fundamentalism is associated with an increased N400 on error-related words compared with people who tolerate uncertainty well (i.e., those low in the need for closure). PMID:29636709

  5. MX1: a bending-magnet crystallography beamline serving both chemical and macromolecular crystallography communities at the Australian Synchrotron

    International Nuclear Information System (INIS)

    Cowieson, Nathan Philip; Aragao, David; Clift, Mark; Ericsson, Daniel J.; Gee, Christine; Harrop, Stephen J.; Mudie, Nathan; Panjikar, Santosh; Price, Jason R.; Riboldi-Tunnicliffe, Alan; Williamson, Rachel; Caradoc-Davies, Tom

    2015-01-01

    The macromolecular crystallography beamline MX1 at the Australian Synchrotron is described. MX1 is a bending-magnet crystallography beamline at the 3 GeV Australian Synchrotron. The beamline delivers hard X-rays in the energy range from 8 to 18 keV to a focal spot at the sample position of 120 µm FWHM. The beamline endstation and ancillary equipment facilitate local and remote access for both chemical and biological macromolecular crystallography. Here, the design of the beamline and endstation are discussed. The beamline has enjoyed a full user program for the last seven years and scientific highlights from the user program are also presented

  6. Automation inflicted differences on operator performance in nuclear power plant control rooms

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Jonas; Osvalder, A.L. [Chalmers Univ. of Technology, Dept. of Product and Production Development (Sweden)]

    2007-03-15

    Today it is possible to automate almost any function in a human-machine system. Therefore it is important to find a balance between automation level and the prerequisites for the operator to maintain safe operation. Different human factors evaluation methods can be used to find differences between automatic and manual operations that have an effect on operator performance; e.g. Predictive Human Error Analysis (PHEA), NASA Task Load Index (NASA-TLX), Halden Questionnaire, and Human Error Assessment and Reduction Technique (HEART). Results from an empirical study concerning automation levels, made at Ringhals power plant, showed that factors such as time pressure and criticality of the work situation influenced the operator's performance and mental workload more than differences in level of automation. The results indicate that the operator's attention strategies differ between the manual and automatic sequences. Independently of the level of automation, it is essential that the operator retains control and situational understanding. When performing a manual task, the operator is 'closer' to the process and in control with sufficient situational understanding. When the level of automation increases, the demands on information presentation increase to ensure safe plant operation. The need for control can be met by introducing 'control gates' where the operator has to accept that the automatic procedures are continuing as expected. Situational understanding can be established by clear information about process status and by continuous feedback. A conclusion of the study was that a collaborative control room environment is important. Rather than allocating functions to either the operator or the system, a complementary strategy should be used. Key parameters to consider when planning the work in the control room are time constraints and task criticality and how they affect the performance of the joint cognitive system. However, the examined working

  7. Automation inflicted differences on operator performance in nuclear power plant control rooms

    International Nuclear Information System (INIS)

    Andersson, Jonas; Osvalder, A.L.

    2007-03-01

    Today it is possible to automate almost any function in a human-machine system. It is therefore important to find a balance between the level of automation and the prerequisites for the operator to maintain safe operation. Different human factors evaluation methods can be used to find differences between automatic and manual operations that have an effect on operator performance; e.g. Predictive Human Error Analysis (PHEA), NASA Task Load Index (NASA-TLX), Halden Questionnaire, and Human Error Assessment and Reduction Technique (HEART). Results from an empirical study concerning automation levels, carried out at the Ringhals power plant, showed that factors such as time pressure and criticality of the work situation influenced the operator's performance and mental workload more than differences in level of automation. The results indicate that the operator's attention strategies differ between the manual and automatic sequences. Independently of the level of automation, it is essential that the operator retains control and situational understanding. When performing a manual task, the operator is 'closer' to the process and in control with sufficient situational understanding. When the level of automation increases, the demands on information presentation increase to ensure safe plant operation. The need for control can be met by introducing 'control gates' where the operator has to accept that the automatic procedures are continuing as expected. Situational understanding can be established by clear information about process status and by continuous feedback. A conclusion of the study was that a collaborative control room environment is important. Rather than allocating functions to either the operator or the system, a complementary strategy should be used. Key parameters to consider when planning the work in the control room are time constraints and task criticality and how they affect the performance of the joint cognitive system. However, the examined working situations were too different

  8. Periodic Application of Concurrent Error Detection in Processor Array Architectures. Ph.D. Thesis

    Science.gov (United States)

    Chen, Paul Peichuan

    1993-01-01

    Processor arrays can provide an attractive architecture for some applications. Featuring modularity, regular interconnection and high parallelism, such arrays are well-suited for VLSI/WSI implementations, and applications with high computational requirements, such as real-time signal processing. Preserving the integrity of results can be of paramount importance for certain applications. In these cases, fault tolerance should be used to ensure reliable delivery of a system's service. One aspect of fault tolerance is the detection of errors caused by faults. Concurrent error detection (CED) techniques offer the advantage that transient and intermittent faults may be detected with greater probability than with off-line diagnostic tests. Applying time-redundant CED techniques can reduce hardware redundancy costs. However, most time-redundant CED techniques degrade a system's performance.

  9. Synthesis of branched polymers under continuous-flow microprocess: an improvement of the control of macromolecular architectures.

    Science.gov (United States)

    Bally, Florence; Serra, Christophe A; Brochon, Cyril; Hadziioannou, Georges

    2011-11-15

    Polymerization reactions can benefit from continuous-flow microprocesses in terms of kinetics control, reactant mixing, or simply efficiency when high-throughput screening experiments are carried out. In this work, we perform for the first time the synthesis of a branched macromolecular architecture through a controlled/'living' polymerization technique in a tubular microreactor. Just by tuning process parameters, such as the flow rates of the reactants, we generate a library of polymers with various macromolecular characteristics. Compared to a conventional batch process, polymerization kinetics shows a faster initiation step and, more interestingly, an improved branching efficiency. Owing to the reduced diffusion pathway characteristic of microsystems, it is possible to obtain branched polymers exhibiting a denser architecture, and potentially a higher functionality for later applications. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Structural analysis of nanoparticulate carriers for encapsulation of macromolecular drugs

    Czech Academy of Sciences Publication Activity Database

    Angelov, Borislav; Garamus, V.M.; Drechsler, M.; Angelova, A.

    2017-01-01

    Vol. 235, Jun (2017), pp. 83-89. ISSN 0167-7322. R&D Projects: GA MŠk EF15_003/0000447; GA MŠk EF15_008/0000162. Grant - others: OP VVV - ELIBIO(XE) CZ.02.1.01/0.0/0.0/15_003/0000447; ELI Beamlines(XE) CZ.02.1.01/0.0/0.0/15_008/0000162. Institutional support: RVO:68378271. Keywords: self-assembled nanocarriers * liquid crystalline phase transitions * cationic lipids * macromolecular drugs. Subject RIV: BO - Biophysics. OBOR OECD: Biophysics. Impact factor: 3.648, year: 2016

  11. Protein crystal growth studies at the Center for Macromolecular Crystallography

    International Nuclear Information System (INIS)

    DeLucas, Lawrence J.; Long, Marianna M.; Moore, Karen M.; Harrington, Michael; McDonald, William T.; Smith, Craig D.; Bray, Terry; Lewis, Johanna; Crysel, William B.; Weise, Lance D.

    2000-01-01

    The Center for Macromolecular Crystallography (CMC) has been involved in fundamental studies of protein crystal growth (PCG) in microgravity and in our earth-based laboratories. A large group of co-investigators from academia and industry participated in these experiments by providing protein samples and by performing the x-ray crystallographic analysis. These studies have clearly demonstrated the usefulness of a microgravity environment for enhancing the quality and size of protein crystals. A review of the vapor diffusion apparatus (VDA) PCG results from nineteen space shuttle missions is given in this paper.

  12. One-Class Classification-Based Real-Time Activity Error Detection in Smart Homes.

    Science.gov (United States)

    Das, Barnan; Cook, Diane J; Krishnan, Narayanan C; Schmitter-Edgecombe, Maureen

    2016-08-01

    Caring for individuals with dementia is frequently associated with extreme physical and emotional stress, which often leads to depression. Smart home technology and advances in machine learning techniques can provide innovative solutions to reduce caregiver burden. One key service that caregivers provide is prompting individuals with memory limitations to initiate and complete daily activities. We hypothesize that sensor technologies combined with machine learning techniques can automate the process of providing reminder-based interventions. The first step towards automated interventions is to detect when an individual faces difficulty with activities. We propose machine learning approaches based on one-class classification that learn normal activity patterns. When we apply these classifiers to activity patterns that were not seen before, the classifiers are able to detect activity errors, which represent potential prompt situations. We validate our approaches on smart home sensor data obtained from older adult participants, some of whom faced difficulties performing routine activities and thus committed errors.
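
    As a rough illustration of the one-class approach summarized above (not the authors' implementation), a classifier such as scikit-learn's OneClassSVM can be trained on feature vectors derived only from normal activity executions and then used to flag unseen patterns as potential activity errors; the feature construction below is hypothetical.

```python
# Illustrative sketch: one-class classification of activity patterns
# (not the authors' code; features and data are placeholders).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

# Each row summarizes one activity instance (e.g. duration, sensor-event counts).
normal_activities = np.random.rand(200, 6)   # training data: normal executions only
new_activities = np.random.rand(10, 6)       # unseen executions to screen

model = make_pipeline(StandardScaler(), OneClassSVM(nu=0.05, kernel="rbf", gamma="scale"))
model.fit(normal_activities)

# +1 = consistent with learned normal patterns, -1 = potential activity error / prompt situation
labels = model.predict(new_activities)
print("Activities flagged as possible errors:", np.where(labels == -1)[0])
```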

  13. Tolerance limits of X-ray image intensity

    International Nuclear Information System (INIS)

    Stargardt, A.; Juran, R.; Brandt, G.A.

    1985-01-01

    Evaluation of the tolerance limits of X-ray image density accepted by radiologists shows that, for different kinds of examinations, deviations of more than 50% from the optimal density lead to images that cannot be used diagnostically. Within this range, diagnostic accuracy shows a distinct maximum and decreases towards the limits by about 20%. These figures are related to differences in the intensifying factor of screens, the sensitivity of films, the sensitometric parameters of film processing, and the doses delivered by automatic exposure control devices, as measured under clinical conditions. The maximum permissible tolerance limits of the whole imaging system and of its constituents are discussed using the Gaussian law of error addition. (author)
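
    The "Gaussian law of error addition" invoked here is, in its standard form, a quadrature sum of independent component tolerances; the generic statement below is included for orientation and is not quoted from the paper.

```latex
% Generic quadrature form of the Gaussian law of error addition (not from the paper):
% the tolerance of the whole imaging chain follows from its independent components.
\sigma_{\text{system}} = \sqrt{\sigma_{\text{screen}}^2 + \sigma_{\text{film}}^2
                               + \sigma_{\text{processing}}^2 + \sigma_{\text{AEC}}^2}
```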

  14. Efficient preparation of large-block-code ancilla states for fault-tolerant quantum computation

    Science.gov (United States)

    Zheng, Yi-Cong; Lai, Ching-Yi; Brun, Todd A.

    2018-03-01

    Fault-tolerant quantum computation (FTQC) schemes that use multiqubit large block codes can potentially reduce the resource overhead to a great extent. A major obstacle is the requirement for a large number of clean ancilla states of different types without correlated errors inside each block. These ancilla states are usually logical stabilizer states of the data-code blocks, which are generally difficult to prepare if the code size is large. Previously, we have proposed an ancilla distillation protocol for Calderbank-Shor-Steane (CSS) codes by classical error-correcting codes. It was assumed that the quantum gates in the distillation circuit were perfect; however, in reality, noisy quantum gates may introduce correlated errors that are not treatable by the protocol. In this paper, we show that additional postselection by another classical error-detecting code can be applied to remove almost all correlated errors. Consequently, the revised protocol is fully fault tolerant and capable of preparing a large set of stabilizer states sufficient for FTQC using large block codes. At the same time, the yield rate can be boosted from O(t^-2) to O(1) in practice for an [[n, k, d = 2t+1]]

  15. Automated extraction of radiation dose information from CT dose report images.

    Science.gov (United States)

    Li, Xinhua; Zhang, Da; Liu, Bob

    2011-06-01

    The purpose of this article is to describe the development of an automated tool for retrieving texts from CT dose report images. Optical character recognition was adopted to perform text recognitions of CT dose report images. The developed tool is able to automate the process of analyzing multiple CT examinations, including text recognition, parsing, error correction, and exporting data to spreadsheets. The results were precise for total dose-length product (DLP) and were about 95% accurate for CT dose index and DLP of scanned series.
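
    A minimal sketch of the kind of workflow the abstract describes (OCR, parsing, export to spreadsheets), using generic open-source tools rather than the authors' software; the report layout, regular expression, and file names are assumptions.

```python
# Minimal OCR-based dose-report extraction sketch (illustrative only; not the authors'
# tool, and the report layout/regex are assumptions).
import csv
import re
from PIL import Image
import pytesseract

def extract_dose_values(image_path):
    """OCR a CT dose report image and pull out DLP values (mGy*cm)."""
    text = pytesseract.image_to_string(Image.open(image_path))
    # Hypothetical pattern matching lines such as "Total DLP 523.4"
    dlp_values = [float(m) for m in re.findall(r"DLP\D*([\d.]+)", text)]
    return text, dlp_values

def export_to_csv(rows, out_path="dose_summary.csv"):
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["exam", "total_DLP_mGycm"])
        writer.writerows(rows)

# Example usage (file names are placeholders):
# text, dlps = extract_dose_values("dose_report_001.png")
# export_to_csv([("exam_001", sum(dlps))])
```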

  16. Simple heuristics: A bridge between manual core design and automated optimization methods

    International Nuclear Information System (INIS)

    White, J.R.; Delmolino, P.M.

    1993-01-01

    The primary function of RESCUE is to serve as an aid in the analysis and identification of feasible loading patterns for LWR reload cores. The unique feature of RESCUE is that its physics model is based on some recent advances in generalized perturbation theory (GPT) methods. The high order GPT techniques offer the accuracy, computational efficiency, and flexibility needed for the implementation of a full range of capabilities within a set of compatible interactive (manual and semi-automated) and automated design tools. The basic design philosophy and current features within RESCUE are reviewed, and the new semi-automated capability is highlighted. The online advisor facility appears quite promising and it provides a natural bridge between the traditional trial-and-error manual process and the recent progress towards fully automated optimization sequences. (orig.)

  17. Prospective, observational study comparing automated and visual point-of-care urinalysis in general practice.

    Science.gov (United States)

    van Delft, Sanne; Goedhart, Annelijn; Spigt, Mark; van Pinxteren, Bart; de Wit, Niek; Hopstaken, Rogier

    2016-08-08

    Point-of-care testing (POCT) urinalysis might reduce errors in (subjective) reading, registration and communication of test results, and might also improve diagnostic outcome and optimise patient management. Evidence is lacking. In the present study, we studied the analytical performance of automated urinalysis and visual urinalysis compared with a reference standard in routine general practice. The study was performed in six general practitioner (GP) group practices in the Netherlands. Automated urinalysis was compared with visual urinalysis in these practices. Reference testing was performed in a primary care laboratory (Saltro, Utrecht, The Netherlands). Analytical performance of automated and visual urinalysis compared with the reference laboratory method was the primary outcome measure, analysed by calculating sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) and Cohen's κ coefficient for agreement. The secondary outcome measure was the user-friendliness of the POCT analyser. Automated urinalysis by experienced and routinely trained practice assistants in general practice performs as well as visual urinalysis for nitrite, leucocytes and erythrocytes. Agreement for nitrite is high for automated and visual urinalysis; κ's are 0.824 and 0.803 (ranked as very good and good, respectively). Agreement with the central laboratory reference standard for automated and visual urinalysis for leucocytes is rather poor (0.256 for POCT and 0.197 for visual, respectively, ranked as fair and poor). κ's for erythrocytes are higher: 0.517 (automated) and 0.416 (visual), both ranked as moderate. The Urisys 1100 analyser was easy to use and was not considered prone to flaws. Automated urinalysis performed as well as traditional visual urinalysis on reading of nitrite, leucocytes and erythrocytes in routine general practice. Implementation of automated urinalysis in general practice is justified as automation is expected to reduce

  18. Crew/Automation Interaction in Space Transportation Systems: Lessons Learned from the Glass Cockpit

    Science.gov (United States)

    Rudisill, Marianne

    2000-01-01

    The progressive integration of automation technologies in commercial transport aircraft flight decks - the 'glass cockpit' - has had a major, and generally positive, impact on flight crew operations. Flight deck automation has provided significant benefits, such as economic efficiency, increased precision and safety, and enhanced functionality within the crew interface. These enhancements, however, may have been accrued at a price, such as complexity added to crew/automation interaction that has been implicated in a number of aircraft incidents and accidents. This report briefly describes 'glass cockpit' evolution. Some relevant aircraft accidents and incidents are described, followed by a more detailed description of human/automation issues and problems (e.g., crew error, monitoring, modes, command authority, crew coordination, workload, and training). This paper concludes with example principles and guidelines for considering 'glass cockpit' human/automation integration within space transportation systems.

  19. Automation bias in electronic prescribing.

    Science.gov (United States)

    Lyell, David; Magrabi, Farah; Raban, Magdalena Z; Pont, L G; Baysari, Melissa T; Day, Richard O; Coiera, Enrico

    2017-03-16

    Clinical decision support (CDS) in e-prescribing can improve safety by alerting users to potential errors, but introduces new sources of risk. Automation bias (AB) occurs when users over-rely on CDS, reducing vigilance in information seeking and processing. Evidence of AB has been found in other clinical tasks, but has not yet been tested with e-prescribing. This study tests for the presence of AB in e-prescribing and the impact of task complexity and interruptions on AB. One hundred and twenty students in the final two years of a medical degree prescribed medicines for nine clinical scenarios using a simulated e-prescribing system. Quality of CDS (correct, incorrect and no CDS) and task complexity (low, low + interruption and high) were varied between conditions. Omission errors (failure to detect prescribing errors) and commission errors (acceptance of false positive alerts) were measured. Compared to scenarios with no CDS, correct CDS reduced omission errors by 38.3% (p < .0001, n = 120), 46.6% (p < .0001, n = 70), and 39.2% (p < .0001, n = 120) for low, low + interrupt and high complexity scenarios respectively. Incorrect CDS increased omission errors by 33.3% (p < .0001, n = 120), 24.5% (p < .009, n = 82), and 26.7% (p < .0001, n = 120). Participants made commission errors at rates of 65.8% (p < .0001, n = 120), 53.5% (p < .0001, n = 82), and 51.7% (p < .0001, n = 120) across the three complexity conditions. Task complexity and interruptions had no impact on AB. This study found evidence of AB-related omission and commission errors in e-prescribing. Verification of CDS alerts is key to avoiding AB errors. However, interventions focused on this have had limited success to date. Clinicians should remain vigilant to the risks of CDS failures and verify CDS.

  20. Error management for musicians: an interdisciplinary conceptual framework.

    Science.gov (United States)

    Kruse-Weber, Silke; Parncutt, Richard

    2014-01-01

    Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians' generally negative attitude toward errors and the tendency to aim for flawless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly - or not at all. A more constructive, creative, and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of such an approach. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey-relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further music education and

  1. Error management for musicians: an interdisciplinary conceptual framework

    Directory of Open Access Journals (Sweden)

    Silke Kruse-Weber

    2014-07-01

    Full Text Available Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians’ generally negative attitude toward errors and the tendency to aim for errorless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error and error management (during and after the error are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly - or not at all. A more constructive, creative and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of these abilities. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further

  2. Restructuring of workflows to minimise errors via stochastic model checking: An automated evolutionary approach

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Hansen, Zaza Nadja Lee

    2016-01-01

    This article presents a framework for the automated restructuring of stochastic workflows to reduce the impact of faults. The framework allows for the modelling of workflows by means of a formalised subset of the BPMN workflow language. We extend this modelling formalism to describe faults...

  3. Extraction of cobalt ion from textile using a complexing macromolecular surfactant in supercritical carbon dioxide

    International Nuclear Information System (INIS)

    Chirat, Mathieu; Ribaut, Tiphaine; Clerc, Sebastien; Lacroix-Desmazes, Patrick; Charton, Frederic; Fournel, Bruno

    2013-01-01

    Cobalt ion, in the form of cobalt nitrate, is removed from a textile lab coat using supercritical carbon dioxide extraction. The process involves a macromolecular additive of well-defined architecture, acting both as a surfactant and as a complexing agent. The extraction efficiency of cobalt reaches 66% when using a poly(1,1,2,2-tetrahydroperfluoro-decyl-acrylate-co-vinyl-benzylphosphonic diacid) gradient copolymer in the presence of water at 160 bar and 40 °C. The synergy of the two additives, namely the copolymer and water, which are useless if used separately, is pointed out. The potential of the supercritical carbon dioxide process using a complexing macromolecular surfactant lies in the ability to modulate the complexing unit as a function of the metal, as well as the architecture of the surface-active agent, for applications ranging, for instance, from nuclear decontamination to the recovery of strategic metals. (authors)

  4. Cross tolerance of osmotically and ionically adapted cell lines of rice ...

    African Journals Online (AJOL)

    saad

    2012-01-10

    Jan 10, 2012 ... The phenomenon of cross tolerance in osmotically and ionically adapted rice .... the mean values of 5 replicates ± standard error. variance showed .... Education Commission of Pakistan and Pakistan Science. Foundation.

  5. E-MSD: the European Bioinformatics Institute Macromolecular Structure Database.

    Science.gov (United States)

    Boutselakis, H; Dimitropoulos, D; Fillon, J; Golovin, A; Henrick, K; Hussain, A; Ionides, J; John, M; Keller, P A; Krissinel, E; McNeil, P; Naim, A; Newman, R; Oldfield, T; Pineda, J; Rachedi, A; Copeland, J; Sitnov, A; Sobhany, S; Suarez-Uruena, A; Swaminathan, J; Tagari, M; Tate, J; Tromm, S; Velankar, S; Vranken, W

    2003-01-01

    The E-MSD macromolecular structure relational database (http://www.ebi.ac.uk/msd) is designed to be a single access point for protein and nucleic acid structures and related information. The database is derived from Protein Data Bank (PDB) entries. Relational database technologies are used in a comprehensive cleaning procedure to ensure data uniformity across the whole archive. The search database contains an extensive set of derived properties, goodness-of-fit indicators, and links to other EBI databases including InterPro, GO, and SWISS-PROT, together with links to SCOP, CATH, PFAM and PROSITE. A generic search interface is available, coupled with a fast secondary structure domain search tool.

  6. Integrated polarization beam splitter with relaxed fabrication tolerances.

    Science.gov (United States)

    Pérez-Galacho, D; Halir, R; Ortega-Moñux, A; Alonso-Ramos, C; Zhang, R; Runge, P; Janiak, K; Bach, H-G; Steffan, A G; Molina-Fernández, Í

    2013-06-17

    Polarization handling is a key requirement for the next generation of photonic integrated circuits (PICs). Integrated polarization beam splitters (PBS) are central elements for polarization management, but their use in PICs is hindered by poor fabrication tolerances. In this work we present a fully passive, highly fabrication tolerant polarization beam splitter, based on an asymmetrical Mach-Zehnder interferometer (MZI) with a Si/SiO2 Periodic Layer Structure (PLS) on top of one of its arms. By engineering the birefringence of the PLS we are able to design the MZI arms so that sensitivities to the most critical fabrication errors are greatly reduced. Our PBS design tolerates waveguide width variations of 400 nm maintaining a polarization extinction ratio better than 13 dB in the complete C-Band.

  7. Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity.

    Energy Technology Data Exchange (ETDEWEB)

    Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.

    2006-10-01

    This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

  8. Ciliates learn to diagnose and correct classical error syndromes in mating strategies.

    Science.gov (United States)

    Clark, Kevin B

    2013-01-01

    Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by "rivals" and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via "power" or "refrigeration" cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social

  9. Ciliates learn to diagnose and correct classical error syndromes in mating strategies

    Directory of Open Access Journals (Sweden)

    Kevin Bradley Clark

    2013-08-01

    Full Text Available Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by rivals and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via power or refrigeration cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and nonmodal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in

  10. Using support vector machines to improve elemental ion identification in macromolecular crystal structures

    Energy Technology Data Exchange (ETDEWEB)

    Morshed, Nader [University of California, Berkeley, CA 94720 (United States); Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Echols, Nathaniel, E-mail: nechols@lbl.gov [Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Adams, Paul D., E-mail: nechols@lbl.gov [Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); University of California, Berkeley, CA 94720 (United States)

    2015-05-01

    A method to automatically identify possible elemental ions in X-ray crystal structures has been extended to use support vector machine (SVM) classifiers trained on selected structures in the PDB, with significantly improved sensitivity over manually encoded heuristics. In the process of macromolecular model building, crystallographers must examine electron density for isolated atoms and differentiate sites containing structured solvent molecules from those containing elemental ions. This task requires specific knowledge of metal-binding chemistry and scattering properties and is prone to error. A method has previously been described to identify ions based on manually chosen criteria for a number of elements. Here, the use of support vector machines (SVMs) to automatically classify isolated atoms as either solvent or one of various ions is described. Two data sets of protein crystal structures, one containing manually curated structures deposited with anomalous diffraction data and another with automatically filtered, high-resolution structures, were constructed. On the manually curated data set, an SVM classifier was able to distinguish calcium from manganese, zinc, iron and nickel, as well as all five of these ions from water molecules, with a high degree of accuracy. Additionally, SVMs trained on the automatically curated set of high-resolution structures were able to successfully classify most common elemental ions in an independent validation test set. This method is readily extensible to other elemental ions and can also be used in conjunction with previous methods based on a priori expectations of the chemical environment and X-ray scattering.
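
    To make the classification step concrete, the sketch below shows how a generic multi-class SVM could be trained to label isolated density peaks as water or a particular ion. This is an illustrative stand-in, not the implementation described in the abstract, and the feature names are hypothetical.

```python
# Illustrative SVM-based ion-vs-water classification (not the implementation
# described in the abstract; features and labels are placeholders).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-site features: map peak height, anomalous signal, coordination
# number, mean distance to coordinating atoms, B-factor ratio, occupancy.
X = np.random.rand(500, 6)
y = np.random.choice(["HOH", "CA", "ZN", "MN", "FE", "NI"], size=500)  # placeholder labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

clf.fit(X, y)
print(clf.predict(X[:3]))   # predicted identity of the first three sites
```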

  11. Design and tolerance analysis of a transmission sphere by interferometer model

    Science.gov (United States)

    Peng, Wei-Jei; Ho, Cheng-Fong; Lin, Wen-Lung; Yu, Zong-Ru; Huang, Chien-Yao; Hsu, Wei-Yao

    2015-09-01

    The design of a 6-inch, f/2.2 transmission sphere for Fizeau interferometry is presented in this paper. To predict the actual performance during the design phase, we build an interferometer model in Zemax combined with tolerance analysis. Evaluating focus imaging alone is not sufficient for a double-pass optical system, so we study an interferometer model that includes the system error and the wavefronts reflected from the reference surface and the tested surface. Firstly, we generate a deformation map of the tested surface. Using multiple configurations in Zemax, we obtain the test wavefront and the reference wavefront reflected from the tested surface and the reference surface of the transmission sphere, respectively. Following the theory of interferometry, we subtract the two wavefronts to recover the phase of the tested surface. Zernike polynomials are applied to convert the map from phase to sag and to remove piston, tilt and power. The restored map matches the original map because no system error is present. Secondly, perturbed tolerances including lens fabrication and assembly are considered. A system error then arises because the test and reference beams are no longer perfectly common-path. The restored map becomes inaccurate when this system error is added. Although the system error can be subtracted by calibration, it should still be controlled within a small range to avoid calibration error. Generally, the reference wavefront error, including the system error and the irregularity of the reference surface of a 6-inch transmission sphere, must be measured within peak-to-valley (PV) 0.1 λ (λ = 0.6328 µm), which is not easy to achieve. Consequently, it is necessary to predict the value of the system error before manufacture. Finally, a prototype is developed and tested against a reference surface with PV 0.1 λ irregularity.

  12. Automated replication of cone beam CT-guided treatments in the Pinnacle(3) treatment planning system for adaptive radiotherapy.

    Science.gov (United States)

    Hargrave, Catriona; Mason, Nicole; Guidi, Robyn; Miller, Julie-Anne; Becker, Jillian; Moores, Matthew; Mengersen, Kerrie; Poulsen, Michael; Harden, Fiona

    2016-03-01

    Time-consuming manual methods have been required to register cone-beam computed tomography (CBCT) images with plans in the Pinnacle(3) treatment planning system in order to replicate delivered treatments for adaptive radiotherapy. These methods rely on fiducial marker (FM) placement during CBCT acquisition or the image mid-point to localise the image isocentre. A quality assurance study was conducted to validate an automated CBCT-plan registration method utilising the Digital Imaging and Communications in Medicine (DICOM) Structure Set (RS) and Spatial Registration (RE) files created during online image-guided radiotherapy (IGRT). CBCTs of a phantom were acquired with FMs and predetermined setup errors using various online IGRT workflows. The CBCTs, DICOM RS and RE files were imported into Pinnacle(3) plans of the phantom and the resulting automated CBCT-plan registrations were compared to existing manual methods. A clinical protocol for the automated method was subsequently developed and tested retrospectively using CBCTs and plans for six bladder patients. The automated CBCT-plan registration method was successfully applied to thirty-four phantom CBCT images acquired with an online 0 mm action level workflow. Ten CBCTs acquired with other IGRT workflows required manual workarounds. This was addressed during the development and testing of the clinical protocol using twenty-eight patient CBCTs. The automated CBCT-plan registrations were instantaneous, replicating delivered treatments in Pinnacle(3) with errors of ±0.5 mm. These errors were comparable to mid-point-dependent manual registrations but superior to FM-dependent manual registrations. The automated CBCT-plan registration method quickly and reliably replicates delivered treatments in Pinnacle(3) for adaptive radiotherapy.

  13. Sensor-driven, fault-tolerant control of a maintenance robot

    International Nuclear Information System (INIS)

    Moy, M.M.; Davidson, W.M.

    1987-01-01

    A robot system has been designed to do routine maintenance tasks on the Sandia Pulsed Reactor (SPR). The use of this Remote Maintenance Robot (RMR) is expected to significantly reduce the occupational radiation exposure of the reactor operators. Reactor safety was a key issue in the design of the robot maintenance system. Using sensors to detect error conditions and intelligent control to recover from the errors, the RMR is capable of responding to error conditions without creating a hazard. This paper describes the design and implementation of a sensor-driven, fault-tolerant control for the RMR. Recovery from errors is not automatic; it does rely on operator assistance. However, a key feature of the error recovery procedure is that the operator is allowed to reenter the programmed operation after the error has been corrected. The recovery procedure guarantees that the moving components of the system will not collide with the reactor during recovery

  14. Automated testing of reactor protection instrumentation made easy

    International Nuclear Information System (INIS)

    Iborra, A.; De Marcos, F.; Pastor, J.A.; Alvarez, B.; Jimenez, A.; Mesa, E.; Alsonso, L.; Regidor, J.J.

    1997-01-01

    Maintenance and testing of reactor protection systems is an important cause of unplanned reactor trips. Automated testing is the answer because it minimises test times and reduces human error. The GAMA I system, developed and implemented at Vandellos II in Spain, has the added advantage that it uses visual programming, which means that changing the software does not need specialist programming skills. (author)

  15. Results of the NLO error-propagation exercise

    International Nuclear Information System (INIS)

    Gessiness, B.; Lower, C.W.; Porter, G.K.

    1984-01-01

    The successful conclusion of the Error Propagation Exercise, started 2 years ago at NLO, Inc.'s Feed Materials Production Center, Fernald, Ohio, was reached when a statistically based LEID was determined in a controlled balance area, processing low enriched uranium materials. The three-month test demonstrated that it is possible even in a high-throughput bulk processing facility to collect and process all data necessary for computation of a rigorously determined LEID without interference with production and without significant cost increases. The exercise further demonstrated that much of the data necessary are already collected for other routine uses (e.g., production control, measurement quality control, etc.) so that only a modest increase in data collection is necessary. The automated data collection system developed showed that the additional data can be collected quickly, accurately, and relatively cheaply using readily-available commercial hardware. The benefits of error propagation in terms of increased confidence in nuclear materials safeguards are clear; plans have been developed to extend error propagation to all the enriched uranium processing areas of the Feed Materials Production Center. 6 references, 3 figures

  16. Automated forms processing - An alternative to time-consuming manual double entry of data in arthroplasty registries?

    DEFF Research Database (Denmark)

    Paulsen, Aksel

    2012-01-01

    Background: The clinical and scientific usage of patient-reported outcome measures is increasing in the health services. Often paper forms are used. Manual double entry of data is defined as the definitive gold standard for transferring data to an electronic format, but the process is laborious. Automated forms processing may be an alternative, but further validation is warranted. Materials and Methods: 200 patients were randomly selected from a cohort of 5777 patients who had previously answered two different questionnaires. The questionnaires were scanned using an automated forms processing ... Results: ... there was no statistical difference compared to single-key data entry (error proportion = 0.007 (95% CI: 0.001-0.024), (p = 0.656)), as well as double-key data entry (error proportion = 0.003 (95% CI: 0.000-0.019), (p = 0.319)). Discussion and conclusion: Automated forms processing is a valid alternative to double manual data ...

  17. Fault-tolerant search algorithms reliable computation with unreliable information

    CERN Document Server

    Cicalese, Ferdinando

    2013-01-01

    Why a book on fault-tolerant search algorithms? Searching is one of the fundamental problems in computer science. Time and again algorithmic and combinatorial issues originally studied in the context of search find application in the most diverse areas of computer science and discrete mathematics. On the other hand, fault-tolerance is a necessary ingredient of computing. Due to their inherent complexity, information systems are naturally prone to errors, which may appear at any level - as imprecisions in the data, bugs in the software, or transient or permanent hardware failures. This book pr

  18. Macromolecular Engineering: New Routes Towards the Synthesis of Well-??Defined Polyethers/Polyesters Co/Terpolymers with Different Architectures

    KAUST Repository

    Alamri, Haleema

    2016-01-01

    Macromolecular engineering (as discussed in the first chapter) of homo/copolymers refers to the specific tailoring of these materials for achieving an easy and reproducible synthesis that results in precise molecular

  19. Fault Tolerant Autonomous Lateral Control for Heavy Vehicles

    OpenAIRE

    Talbot, Craig Matthew; Papadimitriou, Iakovos; Tomizuka, Masayoshi

    2004-01-01

    This report summarizes the research results of TO4233, "Fault Tolerant Autonomous Lateral Control for Heavy Vehicles". This project represents a continuing effort of PATH's research on Automated Highway Systems (AHS) and more specifically in the area of heavy vehicles. Research on the lateral control of heavy vehicles for AHS has been going on at PATH since 1993. MOU129, "Steering and Braking Control of Heavy Duty Vehicles" was the first project and it was followed by MOU242, "Lateral Control...

  20. Use of laboratory robots in the automation of a urine plutonium bioassay

    International Nuclear Information System (INIS)

    Gonzales, E.R.; Moss, W.D.; Rodriguez, R.; Martinez, G.M.

    1986-01-01

    Determination of plutonium in urine is a routine procedure performed at Los Alamos. Samples are taken from the many workers who handle plutonium in their day to day activities and from those individuals whose jobs may bring them into contact with this metal. The analytical procedure used is based on an alkaline earth phosphate precipitation that coprecipitates the plutonium. This procedure gives excellent results but it involves many manipulative steps and the chances for human error are ever present. In order to eliminate potential human error and decrease analysis time, this procedure was automated using a Zymark Corporation robotic workcell. The developmental work for the automation process was divided into two parts: robot programmatic needs - software and hardware, and chemical modifications of existing methods for utilization with the robotic system. The optimum integration of these developments is discussed in this paper

  1. Saturn facility oil transfer automation system

    Energy Technology Data Exchange (ETDEWEB)

    Joseph, Nathan R.; Thomas, Rayburn Dean; Lewis, Barbara Ann; Malagon, Hector Ricardo.

    2014-02-01

    The Saturn accelerator, owned by Sandia National Laboratories, has been in operation since the early 1980s and still has many of the original systems. A critical legacy system is the oil transfer system, which transfers 250,000 gallons of transformer oil from outside storage tanks to the Saturn facility. The oil transfer system was identified for upgrade to current technology standards. Using the existing valves, pumps, and relay controls, the system was automated using the National Instruments cRIO FPGA platform. Engineered safety practices, including a failure mode effects analysis, were used to develop error handling requirements. The uniqueness of the Saturn Oil Automated Transfer System (SOATS) is in the graphical user interface. The SOATS uses an HTML interface to communicate with the cRIO, creating a platform-independent control system. The SOATS was commissioned in April 2013.

  2. Automation of Cassini Support Imaging Uplink Command Development

    Science.gov (United States)

    Ly-Hollins, Lisa; Breneman, Herbert H.; Brooks, Robert

    2010-01-01

    "Support imaging" is imagery requested by other Cassini science teams to aid in the interpretation of their data. The generation of the spacecraft command sequences for these images is performed by the Cassini Instrument Operations Team. The process initially established for doing this was very labor-intensive, tedious and prone to human error. Team management recognized this process as one that could easily benefit from automation. Team members were tasked to document the existing manual process, develop a plan and strategy to automate the process, implement the plan and strategy, test and validate the new automated process, and deliver the new software tools and documentation to Flight Operations for use during the Cassini extended mission. In addition to the goals of higher efficiency and lower risk in the processing of support imaging requests, an effort was made to maximize adaptability of the process to accommodate uplink procedure changes and the potential addition of new capabilities outside the scope of the initial effort.

  3. Automated Processing of Plasma Samples for Lipoprotein Separation by Rate-Zonal Ultracentrifugation.

    Science.gov (United States)

    Peters, Carl N; Evans, Iain E J

    2016-12-01

    Plasma lipoproteins are the primary means of lipid transport among tissues. Defining alterations in lipid metabolism is critical to our understanding of disease processes. However, lipoprotein measurement is limited to specialized centers. Preparation for ultracentrifugation involves the formation of complex density gradients that is both laborious and subject to handling errors. We created a fully automated device capable of forming the required gradient. The design has been made freely available for download by the authors. It is inexpensive relative to commercial density gradient formers, which generally create linear gradients unsuitable for rate-zonal ultracentrifugation. The design can easily be modified to suit user requirements and any potential future improvements. Evaluation of the device showed reliable peristaltic pump accuracy and precision for fluid delivery. We also demonstrate accurate fluid layering with reduced mixing at the gradient layers when compared to usual practice by experienced laboratory personnel. Reduction in layer mixing is of critical importance, as it is crucial for reliable lipoprotein separation. The automated device significantly reduces laboratory staff input and reduces the likelihood of error. Overall, this device creates a simple and effective solution to formation of complex density gradients. © 2015 Society for Laboratory Automation and Screening.

  4. Audit of an automated checklist for quality control of radiotherapy treatment plans

    International Nuclear Information System (INIS)

    Breen, Stephen L.; Zhang Beibei

    2010-01-01

    Purpose: To assess the effect of adding an automated checklist to the treatment planning process for head and neck intensity-modulated radiotherapy. Methods: Plans produced within our treatment planning system were evaluated at the planners' discretion with an automated checklist of more than twenty planning parameters. Plans were rated as accepted or rejected for treatment, during regular review by radiation oncologists and physicists as part of our quality control program. The rates of errors and their types were characterised prior to the implementation of the checklist and with the checklist. Results: Without the checklist, 5.9% of plans were rejected; the use of the checklist reduced the rejection rate to 3.1%. The checklist was used for 64.7% of plans. Pareto analysis of the causes of rejection showed that the checklist reduced the number of causes of rejections from twelve to seven. Conclusions: The use of an automated checklist has reduced the need for reworking of treatment plans. With the use of the checklist, most rejections were due to errors in prescription or inadequate dose distributions. Use of the checklist by planners must be increased to maximise improvements in planning efficiency.
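
    A toy sketch of how such a rule-based plan checklist might be expressed is shown below; the parameter names and limits are invented for illustration and are not the checks used in the study.

```python
# Toy rule-based plan checklist (parameter names and limits are invented for
# illustration; they are not the checks evaluated in the study).
def check_plan(plan):
    checks = [
        ("prescription dose matches protocol", plan["prescribed_dose_Gy"] == plan["protocol_dose_Gy"]),
        ("dose grid resolution <= 3 mm",       plan["dose_grid_mm"] <= 3.0),
        ("spinal cord max dose < 45 Gy",       plan["cord_max_Gy"] < 45.0),
        ("PTV V95 >= 98%",                     plan["ptv_v95_pct"] >= 98.0),
    ]
    return [name for name, ok in checks if not ok]   # list of failed checks

plan = {"prescribed_dose_Gy": 70, "protocol_dose_Gy": 70,
        "dose_grid_mm": 2.5, "cord_max_Gy": 42.1, "ptv_v95_pct": 98.6}
print(check_plan(plan) or "All checks passed")
```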

  5. Time-efficient, high-resolution, whole brain three-dimensional macromolecular proton fraction mapping.

    Science.gov (United States)

    Yarnykh, Vasily L

    2016-05-01

    Macromolecular proton fraction (MPF) mapping is a quantitative MRI method that reconstructs parametric maps of the relative amount of macromolecular protons causing the magnetization transfer (MT) effect and provides a biomarker of myelination in neural tissues. This study aimed to develop a high-resolution whole brain MPF mapping technique using a minimal number of source images for scan time reduction. The described technique was based on replacement of an actually acquired reference image without MT saturation by a synthetic one reconstructed from R1 and proton density maps, thus requiring only three source images. This approach enabled whole brain three-dimensional MPF mapping with isotropic 1.25 × 1.25 × 1.25 mm³ voxel size and a scan time of 20 min. The synthetic reference method was validated against standard MPF mapping with acquired reference images based on data from eight healthy subjects. Mean MPF values in segmented white and gray matter appeared in close agreement, with no significant bias and small within-subject coefficients of variation. MPF maps demonstrated sharp white-gray matter contrast and clear visualization of anatomical details, including gray matter structures with high iron content. The proposed synthetic reference method improves resolution of MPF mapping and combines accurate MPF measurements with unique neuroanatomical contrast features. © 2015 Wiley Periodicals, Inc.
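
    One plausible way to synthesize the reference image from R1 and proton-density (PD) maps is the standard spoiled gradient-echo signal model shown below; this is an assumption for orientation, not necessarily the exact expression used in the paper.

```latex
% Spoiled gradient-echo signal model, a plausible form for synthesizing a reference
% image from R1 and PD maps (assumption, not quoted from the paper).
% \alpha is the flip angle and TR the repetition time.
S_{\text{ref}} = \mathrm{PD}\,\sin\alpha\,
                 \frac{1 - e^{-TR \cdot R_1}}{1 - \cos\alpha\, e^{-TR \cdot R_1}}
```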

  6. Experimental magic state distillation for fault-tolerant quantum computing.

    Science.gov (United States)

    Souza, Alexandre M; Zhang, Jingfu; Ryan, Colm A; Laflamme, Raymond

    2011-01-25

    Any physical quantum device for quantum information processing (QIP) is subject to errors in implementation. In order to be reliable and efficient, quantum computers will need error-correcting or error-avoiding methods. Fault-tolerance achieved through quantum error correction will be an integral part of quantum computers. Of the many methods that have been discovered to implement it, a highly successful approach has been to use transversal gates and specific initial states. A critical element for its implementation is the availability of high-fidelity initial states, such as |0〉 and the 'magic state'. Here, we report an experiment, performed in a nuclear magnetic resonance (NMR) quantum processor, showing sufficient quantum control to improve the fidelity of imperfect initial magic states by distilling five of them into one with higher fidelity.

  7. Fully automated processing of fMRI data in SPM: from MRI scanner to PACS.

    Science.gov (United States)

    Maldjian, Joseph A; Baer, Aaron H; Kraft, Robert A; Laurienti, Paul J; Burdette, Jonathan H

    2009-01-01

    Here we describe the Wake Forest University Pipeline, a fully automated method for the processing of fMRI data using SPM. The method includes fully automated data transfer and archiving from the point of acquisition, real-time batch script generation, distributed grid processing, interface to SPM in MATLAB, error recovery and data provenance, DICOM conversion and PACS insertion. It has been used for automated processing of fMRI experiments, as well as for the clinical implementation of fMRI and spin-tag perfusion imaging. The pipeline requires no manual intervention, and can be extended to any studies requiring offline processing.

  8. NLO error propagation exercise data collection system

    International Nuclear Information System (INIS)

    Keisch, B.; Bieber, A.M. Jr.

    1983-01-01

    A combined automated and manual system for data collection is described. The system is suitable for collecting, storing, and retrieving data related to nuclear material control at a bulk processing facility. The system, which was applied to the NLO operated Feed Materials Production Center, was successfully demonstrated for a selected portion of the facility. The instrumentation consisted of off-the-shelf commercial equipment and provided timeliness, convenience, and efficiency in providing information for generating a material balance and performing error propagation on a sound statistical basis

  9. Implementation and evaluation of an automated dispensing system.

    Science.gov (United States)

    Schwarz, H O; Brodowy, B A

    1995-04-15

    An institution's experience in replacing a traditional unit dose cassette-exchange system with an automated dispensing system is described. A 24-hour unit dose cassette-exchange system was replaced with an automated dispensing system (Pyxis's Medstation Rx) on a 36-bed cardiovascular surgery unit and an 8-bed cardiovascular intensive care unit. Significantly fewer missing doses were reported after Medstation Rx was implemented. No conclusions could be made about the impact of the system on the reporting of medication errors. The time savings for pharmacy associated with the filling, checking, and delivery of new medication orders equated to about 0.5 full-time equivalent (FTE). Medstation Rx also saved substantial nursing time for acquisition of controlled substances and for controlled-substance inventory taking at shift changes. A financial analysis showed that Medstation Rx could save the institution about $1 million over five years if all personnel time savings could be translated into FTE reductions. The automated system was given high marks by the nurses in a survey; 80% wanted to keep the system on their unit. Pilot implementation of an automated dispensing system improved the efficiency of drug distribution over that of the traditional unit dose cassette-exchange system.

  10. Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning

    Science.gov (United States)

    Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael

    2009-01-01

    Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
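
    A compact sketch of the checksum idea behind ABFT matrix multiplication (the classic Huang-Abraham scheme) is given below; it is illustrative only and is not the flight software described in the abstract.

```python
# Algorithm-based fault tolerance (ABFT) for matrix multiplication: append checksum
# rows/columns, multiply, and verify (classic Huang-Abraham idea; illustrative only).
import numpy as np

def abft_matmul(A, B, tol=1e-8):
    Ac = np.vstack([A, A.sum(axis=0)])                   # column-checksum matrix
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])    # row-checksum matrix
    C = Ac @ Br                                          # full checksum product

    data = C[:-1, :-1]                                   # the actual product A @ B
    row_ok = np.allclose(C[-1, :-1], data.sum(axis=0), atol=tol)
    col_ok = np.allclose(C[:-1, -1], data.sum(axis=1), atol=tol)
    return data, (row_ok and col_ok)

A, B = np.random.rand(4, 3), np.random.rand(3, 5)
C, ok = abft_matmul(A, B)
print("checksums consistent:", ok)

# A simulated bit-flip in one element of the result would break the checksum
# relations above, so recomputing them over the corrupted matrix flags the error.
```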

  11. About problematic peculiarities of Fault Tolerance digital regulation organization

    Science.gov (United States)

    Rakov, V. I.; Zakharova, O. V.

    2018-05-01

    Three directions are proposed for assessing the serviceability of regulation loops and for preventing situations in which that serviceability is violated. The first direction is to develop methods of representing the regulation loop as a combination of diffuse components and to build algorithmic tooling for constructing serviceability predicates, both for the individual components and for the regulation loop as a whole. The second direction is to develop methods of fault-tolerant redundancy for the combined assessment of the current values of the control actions, closure errors and regulated parameters. The third direction is to develop methods for comparing how the control actions, closure errors and regulated parameters evolve against their reference models or neighbourhoods of those models; this makes it possible to build methods and algorithmic tools aimed at preventing the loss of serviceability and effectiveness not only of an individual digital regulator but of the whole fault-tolerant regulation complex.
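
    The abstract gives no formulas; as one possible reading of the third direction, the toy predicate below declares a loop serviceable while its closure errors stay within a tolerance of a reference-model trace. All names and values are illustrative assumptions.

```python
def serviceable(closure_errors, reference, tolerance):
    """Toy serviceability predicate for a regulation loop: the loop is
    considered serviceable while every observed closure error stays within
    `tolerance` of the corresponding reference-model value."""
    return all(abs(e - r) <= tolerance for e, r in zip(closure_errors, reference))

# Hypothetical closure-error trace compared against its reference model.
observed  = [0.02, 0.01, -0.03, 0.15]
reference = [0.00, 0.00,  0.00, 0.00]
print(serviceable(observed, reference, tolerance=0.1))  # False: last sample violates
```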

  12. Fault-tolerant clock synchronization validation methodology. [in computer systems

    Science.gov (United States)

    Butler, Ricky W.; Palumbo, Daniel L.; Johnson, Sally C.

    1987-01-01

    A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight-crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating the clock synchronization system of the Software Implemented Fault Tolerance computer. The design proof of the algorithm includes a theorem that defines the maximum skew between any two nonfaulty clocks in the system in terms of specific system parameters. Most of these parameters are deterministic. One crucial parameter is the upper bound on the clock read error, which is stochastic. The probability that this upper bound is exceeded is calculated from data obtained by the measurement of system parameters. This probability is then included in a detailed reliability analysis of the system.
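
    As a hedged sketch of the stochastic step described (estimating, from measured data, the probability that the clock read error exceeds its assumed upper bound), the snippet below forms a simple empirical tail frequency; the samples and bound are hypothetical and the paper's actual analysis is more detailed.

```python
def prob_exceed(read_errors, bound):
    """Empirical estimate of the probability that the clock read error
    exceeds the assumed upper bound, from measured samples (a minimal
    stand-in for the validation step described in the abstract)."""
    exceed = sum(1 for e in read_errors if abs(e) > bound)
    return exceed / len(read_errors)

# Hypothetical measured read errors (microseconds) and an assumed bound.
samples = [0.4, 0.7, 0.2, 1.3, 0.5, 0.9, 0.1, 0.6]
print(prob_exceed(samples, bound=1.0))  # 0.125
```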

  13. Generalized Born Models of Macromolecular Solvation Effects

    Science.gov (United States)

    Bashford, Donald; Case, David A.

    2000-10-01

    It would often be useful in computer simulations to use a simple description of solvation effects, instead of explicitly representing the individual solvent molecules. Continuum dielectric models often work well in describing the thermodynamic aspects of aqueous solvation, and approximations to such models that avoid the need to solve the Poisson equation are attractive because of their computational efficiency. Here we give an overview of one such approximation, the generalized Born model, which is simple and fast enough to be used for molecular dynamics simulations of proteins and nucleic acids. We discuss its strengths and weaknesses, both for its fidelity to the underlying continuum model and for its ability to replace explicit consideration of solvent molecules in macromolecular simulations. We focus particularly on versions of the generalized Born model that have a pair-wise analytical form, and therefore fit most naturally into conventional molecular mechanics calculations.
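
    For readers unfamiliar with the pairwise analytical form mentioned, the sketch below evaluates a Still-type generalized Born polar solvation energy, ΔG = -1/2 (1/ε_in - 1/ε_out) Σ_ij q_i q_j / f_GB(r_ij, R_i, R_j). It omits the electrostatic constant and the calculation of the effective Born radii, and the example atoms are hypothetical; it is a sketch of the functional form, not any particular GB implementation.

```python
import numpy as np

def gb_energy(coords, charges, born_radii, eps_in=1.0, eps_out=78.5):
    """Pairwise generalized Born polar solvation energy (Still-type form):
        dG = -1/2 (1/eps_in - 1/eps_out) * sum_ij q_i q_j / f_GB(r_ij)
    with f_GB = sqrt(r_ij^2 + R_i R_j exp(-r_ij^2 / (4 R_i R_j))).
    Units are schematic (consistent charge/length/energy units assumed)."""
    pref = -0.5 * (1.0 / eps_in - 1.0 / eps_out)
    n = len(charges)
    energy = 0.0
    for i in range(n):
        for j in range(n):          # i == j terms give the Born self-energies
            r2 = np.sum((coords[i] - coords[j]) ** 2)
            rirj = born_radii[i] * born_radii[j]
            f_gb = np.sqrt(r2 + rirj * np.exp(-r2 / (4.0 * rirj)))
            energy += charges[i] * charges[j] / f_gb
    return pref * energy

# Two hypothetical partial charges 3 length-units apart.
coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 3.0]])
charges = np.array([+0.5, -0.5])
born_radii = np.array([1.5, 1.7])
print(gb_energy(coords, charges, born_radii))
```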

  14. Probing the hydration water diffusion of macromolecular surfaces and interfaces

    International Nuclear Information System (INIS)

    Ortony, Julia H; Cheng, Chi-Yuan; Franck, John M; Pavlova, Anna; Hunt, Jasmine; Han, Songi; Kausik, Ravinath

    2011-01-01

    We probe the translational dynamics of the hydration water surrounding the macromolecular surfaces of selected polyelectrolytes, lipid vesicles and intrinsically disordered proteins with site specificity in aqueous solutions. These measurements are made possible by the recent development of a new instrumental and methodological approach based on Overhauser dynamic nuclear polarization (DNP)-enhanced nuclear magnetic resonance (NMR) spectroscopy. This technique selectively amplifies ¹H NMR signals of hydration water around a spin label that is attached to a molecular site of interest. The selective ¹H NMR amplification within molecular length scales of a spin label is achieved by utilizing short-range (∼r⁻³) magnetic dipolar interactions between the ¹H spin of water and the electron spin of a nitroxide radical-based label. Key features include the fact that only minute quantities (<10 μl) and dilute (≥100 μM) sample concentrations are needed. There is no size limit on the macromolecule or molecular assembly to be analyzed. Hydration water with translational correlation times between 10 and 800 ps is measured within a ∼10 Å distance of the spin label, encompassing the typical thickness of a hydration layer with three water molecules across. The hydration water moving within this time scale has significant implications, as this is what is modulated whenever macromolecules or molecular assemblies undergo interactions, binding or conformational changes. We demonstrate, with the examples of polymer complexation, protein aggregation and lipid-polymer interaction, that the measurements of interfacial hydration dynamics can sensitively and site specifically probe macromolecular interactions.

  15. Injecting Errors for Testing Built-In Test Software

    Science.gov (United States)

    Gender, Thomas K.; Chow, James

    2010-01-01

    Two algorithms have been conceived to enable automated, thorough testing of built-in test (BIT) software. The first algorithm applies to BIT routines that define pass/fail criteria based on values of data read from such hardware devices as memories, input ports, or registers. This algorithm simulates the effects of errors in a device under test by (1) intercepting data from the device and (2) performing AND operations between the data and the data mask specific to the device. This operation yields values not expected by the BIT routine. This algorithm entails very small, permanent instrumentation of the software under test (SUT) for performing the AND operations. The second algorithm applies to BIT programs that provide services to users' application programs via commands or callable interfaces and requires a capability for test-driver software to read and write the memory used in execution of the SUT. This algorithm identifies all SUT code execution addresses where errors are to be injected, then temporarily replaces the code at those addresses with small test code sequences to inject latent severe errors, then determines whether, as desired, the SUT detects the errors and recovers.
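
    A loose sketch of the first algorithm follows: a device read is intercepted and ANDed with a device-specific mask so that the BIT routine sees an unexpected value. The device, mask and status bits are hypothetical, and the real instrumentation lives inside the SUT itself rather than in a Python wrapper.

```python
def read_with_injection(read_device, error_mask=None):
    """Wrap a device read so that, when an error mask is armed, the raw
    value is ANDed with the mask to simulate a failed device and exercise
    the BIT routine's fail path (first algorithm, sketched loosely)."""
    value = read_device()
    if error_mask is not None:
        value &= error_mask          # force bits low to violate pass criteria
    return value

# Hypothetical device that normally returns a status word with all ready
# bits set; the mask clears the one bit the BIT check expects to be high.
healthy_read = lambda: 0xFF
print(hex(read_with_injection(healthy_read)))                    # 0xff -> BIT passes
print(hex(read_with_injection(healthy_read, error_mask=0xFE)))   # 0xfe -> BIT fails
```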

  16. Specification and R and D Program on Magnet Alignment Tolerances for NSLS-II

    International Nuclear Information System (INIS)

    Kramer, S.L.; Jain, A.K.

    2009-01-01

    The NSLS-II light source is a proposed 3 GeV storage ring, with the potential for ultra-low emittance. Despite the reduced emittance goal for the bare lattice, the closed orbit amplification factors are on average >55 in both planes, for random quadrupole alignment errors. The high chromaticity will also require strong sextupoles, and the low 3 GeV energy will require large dynamic and momentum aperture to ensure adequate lifetime. This will require tight alignment tolerances (∼30 μm) on the multipole magnets during installation. By specifying tight alignment tolerances of the magnets on the support girders, the random alignment tolerances of the girders in the tunnel can be significantly relaxed. Using beam-based alignment to find the golden orbit through the quadrupole centers, the closed orbit offsets in the multipole magnets will then be reduced to essentially the alignment errors of the magnets, restoring much of the dynamic aperture and lifetime of the bare lattice. Our R and D program to achieve these tight alignment tolerances of the magnets on the girders using a vibrating wire technique will be discussed and initial results presented.
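
    A back-of-envelope reading of the quoted numbers (an illustration, not a calculation from the paper): the rms closed-orbit distortion scales roughly as the amplification factor times the rms random quadrupole misalignment, so uncorrected excursions would be on the order of millimetres, which is why beam-based alignment and tight girder-level tolerances matter.

```python
def rms_closed_orbit(amplification_factor, rms_quad_offset_um):
    """Rule-of-thumb estimate: rms closed-orbit distortion is roughly the
    orbit amplification factor times the rms random quadrupole misalignment."""
    return amplification_factor * rms_quad_offset_um

# Average amplification factor > 55 with a 30 um alignment tolerance.
print(rms_closed_orbit(55, 30), "um")  # ~1650 um of uncorrected orbit
```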

  17. The Accutension Stetho, an automated auscultatory device to validate automated sphygmomanometer readings in individual patients.

    Science.gov (United States)

    Alpert, Bruce S

    2018-04-06

    The aim of this report is to describe a new device that can validate, by automated auscultation, individual blood pressure (BP) readings taken by automated sphygmomanometers. The Accutension Stetho utilizes a smartphone application in conjunction with a specially designed stethoscope that interfaces directly into the smartphone via the earphone jack. The Korotkoff sounds are recorded by the application and are analyzed by the operator on the screen of the smartphone simultaneously with the images from the sphygmomanometer screen during BP estimation. Current auscultatory validation standards require at least 85 subjects and strict statistical criteria for passage. A device that passes can make no guarantee of accuracy on individual patients. The Accutension Stetho is an inexpensive smartphone/stethoscope kit combination that estimates precise BP values by auscultation to confirm the accuracy of an automated sphygmomanometer's readings on individual patients. This should be of great value for both professional and, in certain circumstances, self-measurement of BP. Patients will avoid both unnecessary treatment and errors of underestimation of BP in which therapy is actually required. The Stetho's software has been validated in an independent ANSI/AAMI/ISO standard study. The Stetho has been shown to perform without difficulty with multiple deflation-based devices from many manufacturers.

  18. Automated Material Accounting Statistics System at Rockwell Hanford Operations

    International Nuclear Information System (INIS)

    Eggers, R.F.; Giese, E.W.; Kodman, G.P.

    1986-01-01

    The Automated Material Accounting Statistics System (AMASS) was developed under the sponsorship of the U.S. Nuclear Regulatory Commission. The AMASS was developed when it was realized that classical methods of error propagation, based only on measured quantities, did not properly control the false alarm rate and that errors other than measurement errors affect inventory differences. The classical assumptions that (1) the mean value of the inventory difference (ID) for a particular nuclear material processing facility is zero, and (2) the variance of the inventory difference is due only to errors in measured quantities are overly simplistic. The AMASS provides a valuable statistical tool for estimating the true mean value and variance of the ID data produced by a particular material balance area. In addition it provides statistical methods of testing both individual and cumulative sums of IDs, taking into account the estimated mean value and total observed variance of the ID.
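
    The abstract does not spell out AMASS's statistics; as a hedged sketch of testing individual IDs and their cumulative sum against an estimated (non-zero) mean and total observed variance, rather than a zero mean and measurement-only variance, consider the following. The thresholds and data are hypothetical.

```python
import math

def id_test(ids, mean_hat, var_hat, alarm_sigma=3.0):
    """Sketch of testing individual inventory differences (IDs) and their
    cumulative sum against an estimated mean and total observed variance."""
    sigma = math.sqrt(var_hat)
    individual_alarms = [abs(x - mean_hat) > alarm_sigma * sigma for x in ids]
    n = len(ids)
    cusum = sum(ids)
    cusum_alarm = abs(cusum - n * mean_hat) > alarm_sigma * math.sqrt(n * var_hat)
    return individual_alarms, cusum_alarm

# Hypothetical monthly IDs (kg) with an estimated process mean and variance.
print(id_test([0.4, -0.2, 0.9, 1.5], mean_hat=0.1, var_hat=0.25))
```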

  19. A real-time automated quality control of rain gauge data based on multiple sensors

    Science.gov (United States)

    Qi, Y.; Zhang, J.

    2013-12-01

    Precipitation is one of the most important meteorological and hydrological variables. Automated rain gauge networks provide direct measurements of precipitation and have been used for numerous applications such as generating regional and national precipitation maps, calibrating remote sensing data, and validating hydrological and meteorological model predictions. Automated gauge observations are prone to a variety of error sources (instrument malfunction, transmission errors, format changes), and require careful quality control (QC). Many previous gauge QC techniques were based on neighborhood checks within the gauge network itself, and their effectiveness depends on gauge densities and precipitation regimes. The current study takes advantage of the multi-sensor data sources in the National Mosaic and Multi-Sensor QPE (NMQ/Q2) system and develops an automated gauge QC scheme based on the consistency of radar hourly QPEs and gauge observations. Error characteristics of radar and gauge as a function of the radar sampling geometry, precipitation regimes, and the freezing level height are considered. The new scheme was evaluated by comparing an NMQ national gauge-based precipitation product with independent manual gauge observations. Twelve heavy rainfall events from different seasons and areas of the United States were selected for the evaluation, and the results show that the new NMQ product with QC'ed gauges has a more physically realistic spatial distribution than the old product, and that it agrees much better statistically with the independent gauges.
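
    As a toy version of the radar-gauge consistency idea (not the NMQ/Q2 scheme itself, which also conditions on sampling geometry, precipitation regime and freezing level height), the sketch below flags gauge-hours that disagree with the collocated radar QPE beyond absolute and relative thresholds. The gauge IDs and thresholds are invented for illustration.

```python
def qc_gauges(gauge_mm, radar_mm, abs_tol=2.0, rel_tol=0.5):
    """Toy radar-gauge consistency check: a gauge-hour is flagged when it
    differs from the collocated radar QPE by more than both an absolute
    and a relative threshold."""
    flags = {}
    for gauge_id, g in gauge_mm.items():
        r = radar_mm.get(gauge_id, 0.0)
        diff = abs(g - r)
        flags[gauge_id] = diff > abs_tol and diff > rel_tol * max(g, r, 0.1)
    return flags

gauges = {"GAUGE_017": 12.0, "GAUGE_042": 0.0}
radar  = {"GAUGE_017": 11.2, "GAUGE_042": 9.5}   # stuck gauge reports zero rain
print(qc_gauges(gauges, radar))  # second gauge flagged
```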

  20. Defect tolerance in resistor-logic demultiplexers for nanoelectronics.

    Science.gov (United States)

    Kuekes, Philip J; Robinett, Warren; Williams, R Stanley

    2006-05-28

    Since defect rates are expected to be high in nanocircuitry, we analyse the performance of resistor-based demultiplexers in the presence of defects. The defects observed to occur in fabricated nanoscale crossbars are stuck-open, stuck-closed, stuck-short, broken-wire, and adjacent-wire-short defects. We analyse the distribution of voltages on the nanowire output lines of a resistor-logic demultiplexer, based on an arbitrary constant-weight code, when defects occur. These analyses show that resistor-logic demultiplexers can tolerate small numbers of stuck-closed, stuck-open, and broken-wire defects on individual nanowires, at the cost of some degradation in the circuit's worst-case voltage margin. For stuck-short and adjacent-wire-short defects, and for nanowires with too many defects of the other types, the demultiplexer can still achieve error-free performance, but with a smaller set of output lines. This design thus has two layers of defect tolerance: the coding layer improves the yield of usable output lines, and an avoidance layer guarantees that error-free performance is achieved.
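
    To make the constant-weight-code addressing concrete (illustrative only, not the authors' analysis of voltage margins), the sketch below enumerates a constant-weight code and reports its minimum Hamming distance, which limits how many address-wire defects can be absorbed before output lines must be abandoned.

```python
from itertools import combinations

def constant_weight_code(n, w):
    """Enumerate all length-n binary codewords of Hamming weight w, the kind
    of constant-weight code used to address nanowires in a resistor-logic
    demultiplexer (illustrative only)."""
    codewords = []
    for ones in combinations(range(n), w):
        word = [0] * n
        for i in ones:
            word[i] = 1
        codewords.append(tuple(word))
    return codewords

def min_distance(code):
    """Minimum pairwise Hamming distance; a larger distance gives more room
    to tolerate stuck-open/stuck-closed defects on individual address wires."""
    return min(sum(a != b for a, b in zip(u, v))
               for u, v in combinations(code, 2))

code = constant_weight_code(n=6, w=3)      # 20 output lines from 6 address wires
print(len(code), "codewords, d_min =", min_distance(code))
```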